I’m thrilled to announce the release of FastMCP 2.0! 🎉
Give it a star or check out the docs at gofastmcp.com.
🚀 FastMCP 2.0
The Model Context Protocol (MCP) aims to be the “USB-C port for AI,” providing a standard way for large language models to interact with data and tools. FastMCP’s mission has always been to make implementing this protocol as fast, simple, and Pythonic as possible.
FastMCP 1.0 was incredibly successful – so much so that its core SDK is now included in the official MCP Python SDK! You can `from mcp.server.fastmcp import FastMCP` and be up and running in minutes.
However, when I wrote the first version of FastMCP, the MCP itself was only a week old. I introduced FastMCP with the tagline “because life’s too short for boilerplate,” focusing on making it easy to create MCP servers without getting bogged down in protocol details.
A few months later, the MCP ecosystem has matured. If FastMCP 1.0 was about easily creating servers, then FastMCP 2.0 is about easily working with them. This required a significant rewrite, which is why we’re back in a standalone project, but v2 is backwards-compatible with v1 while introducing powerful new features for composition, integration, and interaction.
Here’s what’s new:
🧩 Compose Servers with Ease
You can now build modular applications by combining multiple FastMCP servers together, optionally using prefixes to avoid naming collisions.
You can either `mount` a local or remote server to live-link it to your server, exposing its components while forwarding all requests, or use `import_server` to statically copy another server’s resources and tools into your own. See the composition docs for more.
```python
from fastmcp import FastMCP

# Define subservers (e.g., weather_server, calc_server)
weather_server = FastMCP(name="Weather")

@weather_server.tool()
def get_forecast(city: str):
    return f"Sunny in {city}"

calc_server = FastMCP(name="Calculator")

@calc_server.tool()
def add(a: int, b: int):
    return a + b

main_app = FastMCP(name="MainApp")

# Mount the subservers
main_app.mount("weather", weather_server)
main_app.mount("calc", calc_server)

# main_app now dynamically exposes `weather_get_forecast` and `calc_add`
if __name__ == "__main__":
    main_app.run()
```
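If you want a static, one-time copy instead of a live link, `import_server` works in much the same way. Here’s a minimal sketch of that approach; the exact signature (and whether it must be awaited) may differ between FastMCP releases, so treat the composition docs as authoritative:

```python
import asyncio

from fastmcp import FastMCP

weather_server = FastMCP(name="Weather")

@weather_server.tool()
def get_forecast(city: str):
    return f"Sunny in {city}"

main_app = FastMCP(name="MainApp")

async def setup():
    # Statically copy the subserver's tools/resources under the "weather" prefix;
    # unlike mount, later changes to weather_server are not reflected in main_app.
    await main_app.import_server("weather", weather_server)

if __name__ == "__main__":
    asyncio.run(setup())
    main_app.run()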
🔄 Proxy Any MCP Server
Composition is great for combining servers you control, but what about interacting with third-party servers, remote servers, or those not built with FastMCP?
FastMCP can now proxy any MCP server, turning it into a FastMCP server that’s compatible with all other features, including composition.
The killer feature? You’re no longer locked into the backend server’s transport. The proxy can run using `stdio`, `sse`, or any other FastMCP-supported transport, regardless of how the backend is hosted.
For more information, see the proxying docs.
```python
from fastmcp import FastMCP, Client

# Point a client at *any* backend MCP server (local FastMCP instance, remote SSE, local script...)
backend_client = Client("http://api.example.com/mcp/sse")  # e.g., a remote SSE server

proxy_server = FastMCP.from_client(backend_client, name="MyProxy")

# Run the proxy locally via stdio (useful for Claude Desktop, etc.)
if __name__ == "__main__":
    proxy_server.run()  # Defaults to stdio
```
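Because the proxy is itself a full FastMCP server, switching transports is just a matter of how you run it. A minimal sketch, assuming the `transport="sse"` option described in the FastMCP docs:

```python
# Serve the same proxy over SSE instead of stdio, so HTTP-based clients can reach it
if __name__ == "__main__":
    proxy_server.run(transport="sse")
```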
🪄 Auto-Generate Servers from OpenAPI & FastAPI
Many developers want to make their existing REST APIs accessible to LLMs without reinventing the wheel. FastMCP 2.0 makes this trivial by automatically generating MCP servers from OpenAPI specs or FastAPI apps.
Explore the OpenAPI and FastAPI guides for more.
```python
from fastapi import FastAPI
from fastmcp import FastMCP

# Your existing FastAPI app
fastapi_app = FastAPI()

@fastapi_app.get("/items/{item_id}")
def get_item(item_id: int):
    return {"id": item_id, "name": f"Item {item_id}"}

# Generate an MCP server
mcp_server = FastMCP.from_fastapi(fastapi_app)

# Run the MCP server (exposes FastAPI endpoints as MCP tools/resources)
if __name__ == "__main__":
    mcp_server.run()
```
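The OpenAPI path is similar: hand FastMCP a spec plus an HTTP client for the live API. A minimal sketch, assuming `FastMCP.from_openapi` accepts a spec dict and an `httpx.AsyncClient`, and using a hypothetical `api.example.com` base URL:

```python
import httpx

from fastmcp import FastMCP

# Load the OpenAPI spec (a dict) however you like; here it's fetched from the API itself
openapi_spec = httpx.get("https://api.example.com/openapi.json").json()

# An async client that the generated server will use to forward calls to the real API
api_client = httpx.AsyncClient(base_url="https://api.example.com")

mcp_server = FastMCP.from_openapi(openapi_spec=openapi_spec, client=api_client, name="API Server")

if __name__ == "__main__":
    mcp_server.run()
```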
🧠 Client Infrastructure & LLM Sampling
FastMCP 2.0 introduces a completely new client infrastructure designed for robust interaction with any MCP server, supporting all major transports and even in-memory transport when working with local FastMCP servers.
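For example, the client can talk to a local server object directly, with no subprocess or network hop in between. A quick sketch, assuming the in-memory transport that is selected when you pass a FastMCP instance to `Client`:

```python
import asyncio

from fastmcp import Client, FastMCP

mcp = FastMCP(name="Demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    return a + b

async def main():
    # Passing the server object itself selects the in-memory transport
    async with Client(mcp) as client:
        result = await client.call_tool("add", {"a": 2, "b": 3})
        print(result)

if __name__ == "__main__":
    asyncio.run(main())
```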
The new client infrastructure also makes it easy to expose advanced MCP features like client-side LLM sampling. Tools running on the server can now ask the client’s LLM to perform tasks using `ctx.sample()`. Imagine a server tool that fetches complex data and then asks the LLM connected to the client (like Claude or ChatGPT) to summarize it before returning the result.
```python
from fastmcp import FastMCP, Context

mcp = FastMCP(name="SamplingDemo")

@mcp.tool()
async def analyze_data_with_llm(data_uri: str, ctx: Context) -> str:
    """Fetches data and uses the client's LLM for analysis."""
    # Log to the client's console
    await ctx.info(f"Fetching data from {data_uri}...")
    data_content = await ctx.read_resource(data_uri)  # Simplified

    await ctx.info("Requesting LLM analysis...")
    # Ask the connected client's LLM to analyze the data
    analysis_response = await ctx.sample(
        f"Analyze the key trends in this data:\n\n{data_content[:1000]}"
    )
    return analysis_response  # Return the LLM's analysis
```
This unlocks sophisticated workflows where server-side logic collaborates with the client-side LLM’s intelligence. For more information, see the updated Client and Context guides.
Building the MCP Ecosystem
FastMCP 2.0 is a major step towards a more connected, flexible, and developer-friendly AI ecosystem built on MCP. By simplifying proxying, composition, and integration, we hope to empower you to build and combine MCP services in powerful new ways.
Give FastMCP 2.0 a try!
- Explore the documentation
- Check out the code and examples on GitHub
- Add it to your project: `uv add fastmcp` or `pip install fastmcp`
I’m excited to see what you build. Your feedback, issues, and contributions are always welcome!
Happy Engineering!