It might seem strange that, as the author of FastMCP, I haven’t prioritized an MCP server for Prefect. But honestly, the user story for “chatting with your orchestrator” has always felt weak. Developers don’t want to have a conversation about their workflows; they want to build, run, and observe them with speed and precision.
It turns out I was thinking about the wrong user.
My thinking on this shifted after talking with some of our most sophisticated customers about how they debug modern applications. An alert from one system is rarely the end of the story; it’s the first domino. And Prefect is the universal pane of glass that provides the first clue in that complex, cross-system investigation. As one customer put it, “A lot of problems we learn about from Prefect are not, in fact, due to Prefect at all.” More frequently, a failed process in the orchestrator points to a flow run ID, which helps find an evicted pod in Kubernetes, which leads to a memory spike in a monitoring tool, which finally uncovers an error in an application log.
A human can perform this needle-in-a-haystack work. But increasingly, an agent does. The value wasn’t in creating a new interface for a human, but in unblocking one for a machine.
In an AI-native stack, the protagonist is often no longer a person, but an autonomous agent negotiating APIs, ingesting signals, and chaining tools to deliver value. This requires a fundamental shift from writing “user stories” to defining “agent stories.” The template for this new artifact is simple but powerful:
As an agent, given {context}, I use {tools} to achieve {outcome} with minimal human latency.
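To make that less abstract, here is one hypothetical way to capture an agent story as a reviewable artifact rather than a slogan. The dataclass, its field names, and the tool names below are illustrative assumptions, not part of FastMCP or Prefect:

```python
from dataclasses import dataclass


@dataclass
class AgentStory:
    """A hypothetical, reviewable form of the agent-story template."""

    context: str              # the signals the agent starts from (alerts, IDs, logs)
    tools: list[str]          # the tools the agent is expected to compose
    outcome: str              # the verifiable end state the agent must reach
    human_touchpoints: int = 0  # "minimal human latency," made explicit


story = AgentStory(
    context="a failed flow run and its flow run ID",
    tools=["prefect.summarize_flow_run_failure", "k8s.describe_pod", "logs.search"],
    outcome="a root-cause summary linking the failure to an evicted pod",
)
```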
This reframing forces us to design for a different set of needs. Agents don’t care about intuitive UIs or clever microcopy. They care about clear contracts, machine-parsable errors, composability, and minimizing the latency between their actions.
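A machine-parsable error, for instance, looks less like a sentence and more like a payload the agent can branch on. The shape below is purely illustrative, not a standard or an existing API:

```python
# Illustrative only: an error contract an agent can act on without parsing prose.
error = {
    "code": "FLOW_RUN_NOT_FOUND",         # stable identifier to branch on
    "message": "No flow run with that ID exists in this workspace.",
    "retryable": False,                    # tells the agent not to waste a retry
    "suggested_next_tools": ["search_flow_runs"],  # hypothetical tool name
}
```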
This shift helps explain the rapid evolution of AI-native APIs:
Phase 1: The Wrapper. The first wave of MCP servers simply regurgitated existing APIs. This “chat with your API” approach was rightly met with skepticism from savvy teams who understood that a great user experience is more than a conversational veneer over a clunky backend.
Phase 2: The Curator. We are in this phase now. The best teams realize they must consciously design for the LLM. This is an act of curation—thoughtfully reducing scope, renaming cryptic arguments, and adding instructions that guide the agent toward the desired outcome. It’s about tailoring the tool to the new user.
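As a rough sketch of what that curation can look like with FastMCP: one narrowly scoped tool, a descriptive name, typed arguments, and a docstring that tells the model when to reach for it. The tool name is hypothetical and the body is a stub standing in for a real orchestrator call:

```python
from fastmcp import FastMCP

mcp = FastMCP("Prefect Debugging (curated)")


@mcp.tool
def summarize_flow_run_failure(flow_run_id: str) -> dict:
    """Summarize why a flow run failed: its final state, the infrastructure it
    ran on, and the last error message. Use this first when an alert or ticket
    references a flow run ID."""
    # A real implementation would query the orchestrator's API here; this stub
    # only shows the compact, structured shape the agent gets back.
    return {
        "flow_run_id": flow_run_id,
        "state": "CRASHED",
        "infrastructure": {"kind": "kubernetes-pod", "name": "<pod-name>"},
        "last_error": "<last error message>",
    }


if __name__ == "__main__":
    mcp.run()
```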
Phase 3: The Ecosystem. This is the frontier. Agent workflows are inherently multi-system. The goal is no longer to build a single, monolithic tool, but to offer a composable node in a larger, automated graph. Your product’s success is measured by how well it interoperates and enables agents to chain actions across a diverse ecosystem.
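A minimal sketch of that composability, assuming two hypothetical MCP servers and tool names. In a real agent loop the model would choose the next tool and extract arguments from each result; here the hand-off is just a placeholder:

```python
import asyncio

from fastmcp import Client


async def investigate(flow_run_id: str) -> None:
    # Hypothetical server URLs; each Client speaks MCP to a different system.
    async with Client("https://prefect.example.com/mcp") as prefect:
        async with Client("https://k8s.example.com/mcp") as kubernetes:
            failure = await prefect.call_tool(
                "summarize_flow_run_failure", {"flow_run_id": flow_run_id}
            )
            # An agent would read `failure` and pick the pod to inspect next;
            # the placeholder below stands in for that extraction step.
            pod_events = await kubernetes.call_tool(
                "describe_pod", {"pod_name": "<pod-from-failure-summary>"}
            )
            print(failure, pod_events)


if __name__ == "__main__":
    asyncio.run(investigate("<flow-run-id>"))
```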
As we design the next generation of software, we must build for two primary personas: our human users and the autonomous agents they deploy. In a growing number of cases, the agent is the more important one. The critical question in product design is shifting from “What does the user want to do?” to “What does the agent need to achieve?”—and that requires a new kind of answer: an agent story.
Want to read more? The next post on agentic product design is “Curation is the New Discovery.”