Cross-functional
CEO, CTO & Engineers
A Beginning Full of Unanswered Questions

When I joined Quantly AI, the company was little more than an idea wrapped in a fragile prototype. There was no real product yet, just a chat box, a handful of Python scripts generated by an LLM, and a very limited amount of financial data. It could answer some questions about public companies, but not reliably, and certainly not enough to earn the trust of serious equity analysts. I remember the first time an analyst sat in front of it. They typed a long, complex question about a company’s five-year performance and targets. The system paused, generated a shaky Python snippet, failed silently, and then offered a partial answer missing half the metrics. The analyst looked at me and said: “It’s interesting… but what am I supposed to do with this?” That moment set the tone for the rest of my work at Quantly.
Quantly AI was an early-stage company building software for equity analysts. When I joined, the company was pre-seed and the “product” consisted of a prototype: a chat-first interface connected to a limited set of public market data sources.
The team at the time consisted of four people. I joined as the fifth hire and the only designer, working directly with the CEO and CTO, and day-to-day with the engineering team.
I owned product design end-to-end: product direction, research, interaction design, and decision-making from zero to one. There was no existing design function to inherit from or optimise. My role was to define what the product should become, and to ensure that the direction we took could scale competitively in a crowded and fast-moving market.
The initial premise was compelling. Analysts could ask natural-language questions about publicly traded companies and receive answers, charts, and calculated insights generated dynamically via Python code written by large language models.
In controlled demos, this worked well. In real analyst workflows, it did not.
As soon as we put the prototype in front of users, a set of consistent failure modes emerged.
Analysts experienced choice paralysis when faced with a blank chat interface. They asked legitimate questions that the system could not answer due to data coverage gaps. Visualisations and insights were slow, unreliable, or failed entirely. And despite explicit messaging about what the system could and could not do, users repeatedly ignored those constraints.
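To make these failure modes concrete, here is a minimal sketch of the “LLM writes Python, we execute it” pattern and why it fails silently. The function and data names are hypothetical and the numbers invented; the real model call is stood in by a hard-coded snippet.

# All names and data here are hypothetical, for illustration only.
SAMPLE_DATA = {
    "ACME": {"revenue_2022": 1100, "revenue_2023": 1200},  # only two years of coverage
}

def generate_analysis_code(question: str) -> str:
    # Stand-in for the LLM: it returns code that assumes five years of data exist.
    return (
        "years = [2019, 2020, 2021, 2022, 2023]\n"
        "series = [SAMPLE_DATA['ACME'][f'revenue_{y}'] for y in years]\n"
        "result = {'five_year_revenue': series}\n"
    )

def answer(question: str) -> dict:
    scope = {"SAMPLE_DATA": SAMPLE_DATA, "result": {}}
    try:
        exec(generate_analysis_code(question), scope)  # run whatever the model wrote
    except Exception:
        pass  # the swallowed error is the "silent failure" analysts experienced
    return scope["result"]

print(answer("How has ACME's revenue grown over the last five years?"))
# Prints {} : the KeyError for the missing years is swallowed, and the analyst
# sees an empty or partial answer with no explanation.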
The core issue was not prompting quality, model capability, or user education.
The issue was that the product assumed analysts would adapt their workflow to the interface. In reality, analysts work through structured, repeatable stages. The prototype had no concept of stages, context, or progression.
A blank chat box did not communicate boundaries. It communicated unlimited capability.
Rather than asking “how do we make the chat better?”, we reframed the question entirely:
How do equity analysts actually work, and where does AI genuinely add leverage?
I led market and competitor research across existing equity research platforms, focusing on why analysts trusted certain tools despite their complexity, and where newer tools failed despite more advanced technology.
Across user sessions and industry research, a clear pattern emerged. Analysts do not begin their work with open-ended questions. They begin with a defined universe of companies and move through a predictable sequence of phases.
Chat, as designed, cut across this workflow instead of supporting it.
The key strategic decision followed:
Chat should not be the product.
Chat should support the product.
As the sole designer, I owned design decisions fully and worked as a direct partner to the CEO and CTO on product direction. This inevitably created moments of tension, particularly around how much information the product should surface at once.
The CTO, coming from a different technology culture and with deep technical expertise, strongly believed in a more dashboard-heavy approach: dense screens, many graphs, multiple buttons, and high information availability. In isolation, this was a reasonable instinct: analysts deal with complex data.
My concern was different. Through testing, we consistently saw that our target users were extremely time-poor. Most would give a new product no more than 20–30 minutes. If something broke, felt overwhelming, or required learning a new mental model, they would abandon it and never return.
This created a fundamental question: should the product optimise for information density, or for how quickly a time-poor analyst could reach value?
Rather than debating preferences, I pushed for evidence.
We designed and tested multiple interface directions, ranging from denser, dashboard-style layouts to simpler, more structured experiences.
We ran A/B-style comparisons during demos and testing sessions, measuring task success, error tolerance, and whether users chose to continue exploring the product.
The outcome was unambiguous. Users gravitated toward the simpler, more structured experience. They completed tasks faster, tolerated minor issues more readily, and were far more likely to continue using the platform.
This moment resolved a key internal debate and set a long-term design principle: reduce cognitive load first, then add power progressively.
With alignment on direction, I led the shift from a demo-driven interface to an analyst-grade workflow platform.
Given the company stage, a large redesign was neither feasible nor desirable. Instead, we adopted an incremental delivery strategy: ship small, test with users, measure impact, and iterate continuously.
My focus was on ensuring every design decision moved the product toward something analysts could learn quickly, trust, and return to — not something that merely looked impressive in a pitch.
We redesigned the product around real analyst objects and stages of work.
Rather than starting with chat, the experience now anchored users in the objects and stages of work they were already familiar with.
Chat was reintroduced inside this structure, not on top of it.
It became a contextual tool: summarising filings, comparing companies, explaining trends, and generating insights within clearly defined boundaries. When data was unavailable, the product guided users toward alternative paths instead of failing silently or returning unreliable results.
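As a rough sketch of the contrast with the earlier silent failures, the example below checks data coverage before attempting an answer and returns an explicit, guided response when coverage is missing. The names, structure, and data are hypothetical, not the production implementation.

# Hypothetical illustration: check coverage first, then answer or guide.
SAMPLE_DATA = {
    "ACME": {"revenue": {2022: 1100, 2023: 1200}},
}

def answer(company: str, metric: str, years: list) -> dict:
    series = SAMPLE_DATA.get(company, {}).get(metric, {})
    covered = [y for y in years if y in series]
    missing = [y for y in years if y not in series]

    if missing:
        # Instead of running code that will fail, tell the analyst what is
        # available and suggest a path the product can actually support.
        return {
            "status": "partial_coverage",
            "available_years": covered,
            "missing_years": missing,
            "suggestion": (f"Show {metric} for {covered}" if covered
                           else f"No {metric} data for {company} yet"),
        }

    return {"status": "ok", "values": {y: series[y] for y in years}}

print(answer("ACME", "revenue", [2019, 2020, 2021, 2022, 2023]))
# Returns a guided "partial_coverage" response rather than a broken chart.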
To support rapid iteration without fragmentation, I established the core product foundations.
This allowed the team to move quickly without sacrificing coherence or trust.
Because changes were shipped incrementally, we were able to observe impact in real usage rather than relying on assumptions.
Over the course of the rollout, we saw a consistent set of positive signals in how analysts used the product.
These signals showed that users were no longer testing the limits of the system; they were integrating it into their workflow.
Quantly evolved from a fragile chat prototype into a credible equity research platform. The product no longer asked analysts to guess what to do or tolerate failure. Instead, it met them where they already worked and used AI to accelerate understanding without increasing cognitive load.
This project reinforced two principles that now underpin my approach to AI-driven products:
AI increases the cost of complexity.
Trust is earned through clarity, not capability.
By designing around real workflows, resolving internal tension through evidence, and treating usability as a strategic advantage, we built a product analysts could depend on — not just experiment with.

