I know you’ve heard it:
“Why can’t ChatGPT do this?”
It’s the 2024 equivalent of “Why won’t Google do this?” – an absurd query that has long been the shallowest VC litmus test for early-stage ideas.1 But this updated question is asked more often, and more seriously, because ChatGPT has become the default benchmark for what’s possible in AI.
Part of the phenomenon is familiar, if rare: the near-total conflation of a new technology with a single product implementation. A handful of precedents come to mind: Google, Photoshop, the iPad, the Walkman, Velcro. But there’s something very different about ChatGPT: it is the first case I can think of where the underlying technology is evolving faster than the applications built on top of it.
In the AI space, it’s the core models doing the disrupting, not the startups. Each new release leapfrogs forward, threatening to obsolete entire application layers. AI startups must not only keep pace with competitors but also adapt to an environment where foundational breakthroughs constantly redefine product strategies.
ChatGPT’s potency lies in its dual nature. It is simultaneously:
- a showcase for the state-of-the-art frontier of LLM capabilities
- a very narrow UX for single-threaded chat
That’s an extremely potent combination, and as a result, ChatGPT has become the de facto standard for what an “LLM interface” should be. And that’s a problem, because chat is a truly terrible interface for most AI applications. Real-world software applications have requirements that don’t fit well into a chat interface, even one delivered as an API. They need efficiency, precision, automation, integration, scalability, observability, and reproducibility. I don’t want to chat with my {docs, code, toaster, etc.} — I want to do things with them.
But the trouble with this ruthlessly effective combination of technology and interface is that it’s created an unusually rigid definition of what “AI” is, and it’s hurting innovation. Introducing an effective AI-powered product that isn’t chat-based means solving two problems: proving the AI works, and justifying the unfamiliar interface.
We need to shift our perspective. LLMs are fundamentally a technology for transforming tokens, not a product in themselves. Instead of inviting users to chat, we should focus on how core LLM operations2 can deliver value, then build features around those capabilities. To compete with the ChatGPT standard, prioritize the user experience (or developer experience), not the raw LLM capabilities.
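One way to picture this shift: treat the LLM as a plain function from tokens to tokens, and build product features as thin wrappers around that single primitive. A minimal Python sketch of the idea, with `complete` stubbed in place of a real model API call (all names here are illustrative, not from any particular library):

```python
def complete(prompt: str) -> str:
    """Stand-in for any LLM completion call (OpenAI, Anthropic, a local model).
    Stubbed for illustration: echoes the last line of the prompt."""
    return "STUB: " + prompt.splitlines()[-1]


def summarize(document: str, max_words: int = 50) -> str:
    """A product feature: one focused LLM operation, no chat session."""
    prompt = (
        f"Summarize the following document in at most {max_words} words.\n"
        f"---\n{document}"
    )
    return complete(prompt)


def extract_action_items(document: str) -> list[str]:
    """Another feature built on the same primitive."""
    prompt = "List the action items, one per line.\n---\n" + document
    return [line for line in complete(prompt).splitlines() if line.strip()]
```

The user never sees a chat box; they see a “summarize” button or an “action items” panel. The model is plumbing, and the feature set, not the conversation, is the interface.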
Arguably, the most impactful consequence of ChatGPT’s success is that LLMs have become a commodity, and the real battleground is the experience of using them.
The path forward lies in treating AI like other powerful technologies – as tools to be integrated, not products to be imitated. We don’t trumpet that we chose DuckDB (for example); we simply use it to create better software. Similarly, AI should enhance our applications without being their focal point.
To truly innovate in this space, we must look beyond ChatGPT and see the forest for the trees. By treating AI as the transformative technology it is, rather than a product to be copied, we can unlock its full potential and create applications that genuinely push boundaries.
The next time you hear “Why can’t ChatGPT do this?” reframe it:
“I see how ChatGPT might demo this. How are you going to deliver it to users?”