FastMCP 3.0 is our largest release ever. This post covers every major feature in some detail, including an overview of the architecture and links to relevant documentation. For beta 2 features (CLI toolkit, MCP Apps, CIMD, and more), see the Beta 2 announcement.
The Architecture
FastMCP 2 had features. Lots of them. Mounting servers, proxying remotes, filtering by tags, transforming tool schemas. Each feature was its own subsystem with its own code, its own mental model, its own edge cases. When you wanted to add something new, you had to figure out how it interacted with everything else—and the answer was usually “write more glue code.”
FastMCP 3 asks a different question: what if all of these features are just different combinations of the same primitives?
The architecture comes down to three concepts:
Components are the atoms of MCP. A tool, a resource, a prompt. They’re what clients actually interact with. Components have names, schemas, metadata, and behavior. They’re the thing you’re ultimately trying to expose.
Providers answer the question: where do components come from? A provider is anything that can list components and retrieve them by name. Your decorated functions are a provider. A directory of files is a provider. A remote MCP server is a provider. An OpenAPI spec is a provider. Critically, a FastMCP server is itself a provider—which means you can nest servers inside servers, infinitely.
Transforms are middleware for the component pipeline. They intercept the flow of components from providers to clients and can modify what passes through. Rename a tool, add a namespace prefix, filter by version, hide components by tag—these are all transforms. Transforms compose: you stack them, and each one processes the output of the previous.
Why Composability Matters
Here’s where it gets interesting. In FastMCP 2, “mounting” a sub-server was a massive specialized feature. Hundreds of lines of code to handle the namespacing, the middleware chains, the lifecycle management. Same story for proxying remote servers. Same story for visibility filtering.
In FastMCP 3, mounting is just two primitives combined:
- A Provider that sources components from another server
- A Transform that adds a namespace prefix
That’s it. There’s no special mounting code. The mounting behavior emerges from the composition of primitives that each do one thing well.
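The emergent behavior is easy to model outside FastMCP. Here's a toy sketch of mounting-as-composition; all class names are illustrative, not FastMCP's real API:

```python
# Toy model: a provider that sources another server's tools, plus a
# transform that prefixes their names. Composing the two "is" mounting.
class ServerProvider:
    def __init__(self, server):
        self.server = server

    def list_tools(self):
        return list(self.server.tools)

class NamespaceTransform:
    def __init__(self, prefix):
        self.prefix = prefix

    def list_tools(self, tools):
        return [f"{self.prefix}_{name}" for name in tools]

class SubServer:
    tools = ["greet", "farewell"]

provider = ServerProvider(SubServer())
transform = NamespaceTransform("sub")
mounted = transform.list_tools(provider.list_tools())
# mounted == ["sub_greet", "sub_farewell"]
```

Neither piece knows about "mounting"; the behavior falls out of running the transform over the provider's output.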
Proxying a remote server? That’s a Provider backed by an MCP client. The Provider wraps the client, translates list/get calls into MCP protocol calls, and returns the results. No special proxy subsystem—just a provider that happens to talk to a remote server.
Per-session visibility, where different users see different tools? That’s a Transform applied to an individual session instead of the server. The visibility transform doesn’t know or care whether it’s running globally or per-session. It just filters components based on rules. The per-session behavior comes from where you apply it.
This composability has a practical consequence: FastMCP 3 ships more features with less code, and you can combine features in ways we didn’t anticipate. Want to proxy a remote server, filter its tools by tag, rename them, and expose them only to authenticated users? That’s a Provider, three Transforms, and some auth middleware. Each piece is independent. Each piece is testable. And when we add new transforms or providers, they automatically work with everything else.
How It Actually Works
When a client asks for the list of tools, here’s what happens:
- The server collects components from all its Providers
- Each Provider runs its own transform chain (provider-level transforms)
- The server runs its transform chain on the aggregated result (server-level transforms)
- The final list goes to the client
This two-level transform system is powerful. Provider-level transforms affect only that provider’s components—useful for namespacing a mounted server. Server-level transforms affect everything—useful for global visibility rules or auth filtering.
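The four-step flow above can be sketched in a few lines of plain Python. This is a conceptual model, not FastMCP's implementation; plain functions stand in for providers and transforms:

```python
# Conceptual sketch of list_tools: collect from each provider, run that
# provider's transforms, then run server-level transforms on the aggregate.
def list_tools(providers, server_transforms):
    aggregated = []
    for list_fn, provider_transforms in providers:
        tools = list_fn()
        for transform in provider_transforms:   # only this provider's tools
            tools = transform(tools)
        aggregated.extend(tools)
    for transform in server_transforms:         # everything, post-aggregation
        aggregated = transform(aggregated)
    return aggregated

namespace = lambda prefix: (lambda tools: [f"{prefix}_{t}" for t in tools])
hide = lambda names: (lambda tools: [t for t in tools if t not in names])

visible = list_tools(
    providers=[
        (lambda: ["greet"], [namespace("sub")]),   # mounted server, namespaced
        (lambda: ["search"], []),                  # local tools, untouched
    ],
    server_transforms=[hide({"sub_greet"})],       # global visibility rule
)
# visible == ["search"]
```

Note how the namespace only touched the first provider's tools, while the visibility rule saw everything.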
The same flow happens for get_tool, call_tool, read_resource, and every other operation. Transforms can intercept any of these, which means you can inject behavior at any point in the pipeline.
You might be wondering: what about middleware? FastMCP still has middleware, and it operates on requests—intercepting tool calls, resource reads, and other operations as they execute. In FastMCP 2, some users tried to use middleware to dynamically modify tools or inject new components. It sort of worked, but it was unpredictable, hard to compose with other systems like auth and visibility, and operated at the server level which made it difficult to address subsets of components. Transforms are the clean answer: they’re designed for component-level modification, they compose naturally, and they integrate with the provider system. Middleware is still there for what it’s good at—authentication, logging, rate limiting, and other cross-cutting concerns at the request level. There’s some gray area, but the guideline is: transforms for shaping what components exist, middleware for handling how requests execute.
What follows is a tour of the providers and transforms that ship with FastMCP 3. Think of them less as “features” and more as building blocks—the primitives you combine to build whatever your application needs.
Providers
Providers answer the question: where do your components come from?
Custom Providers
You can write your own provider by subclassing Provider:
```python
from collections.abc import Sequence

from fastmcp import FastMCP
from fastmcp.server.providers import Provider
from fastmcp.tools import Tool

class DatabaseProvider(Provider):
    async def list_tools(self) -> Sequence[Tool]:
        # Query database for available tools
        rows = await db.fetch("SELECT * FROM tools")
        return [Tool(name=row['name'], description=row['description']) for row in rows]

    async def get_tool(self, name: str) -> Tool | None:
        row = await db.fetchrow("SELECT * FROM tools WHERE name = ?", name)
        if row:
            return Tool(name=row['name'], description=row['description'])
        return None

# Attach to server
mcp = FastMCP("Database Server", providers=[DatabaseProvider()])
```

This pattern is powerful: need tools from a REST API? Write an APIProvider. Need tools from a Kubernetes cluster? Write a KubeProvider. The provider pattern is your extension point.
Built-In Providers
FastMCP ships with providers for the most common patterns.
LocalProvider
This is the classic FastMCP experience. You define a function, decorate it, and it becomes a component. What’s new in v3 is that LocalProvider is now explicit and reusable—you can attach the same provider to multiple servers.
```python
from fastmcp.server.providers import LocalProvider

provider = LocalProvider()

@provider.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

# Attach to multiple servers
server1 = FastMCP("Server1", providers=[provider])
server2 = FastMCP("Server2", providers=[provider])
```

FileSystemProvider
This is a fundamentally different way to organize MCP servers. Instead of importing a server instance and decorating functions, you write self-contained tool files:
```python
from fastmcp.tools import tool

@tool
def greet(name: str) -> str:
    """Greet someone by name."""
    return f"Hello, {name}!"
```

Then point the provider at the directory:

```python
from fastmcp import FastMCP
from fastmcp.server.providers import FileSystemProvider

mcp = FastMCP("server", providers=[FileSystemProvider("mcp/")])
```

The problem it solves: traditional servers require coordination between files—either tool files import the server (creating coupling) or the server imports all tool modules (creating a registry bottleneck). FileSystemProvider removes this coupling entirely.
With reload=True, the provider re-scans on every request—changes take effect immediately without restarting the server. This is transformative for development.
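The rescan-per-request model is easy to picture in plain Python. Here's a stdlib-only sketch of the idea, not FastMCP's implementation:

```python
import importlib.util
import pathlib
import tempfile

def scan_tools(directory):
    # Re-scan the directory on every call, so newly added or edited
    # files are picked up without restarting anything.
    tools = {}
    for path in sorted(pathlib.Path(directory).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, obj in vars(module).items():
            if callable(obj) and not name.startswith("_"):
                tools[name] = obj
    return tools

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "greet.py").write_text(
        "def greet(name):\n    return f'Hello, {name}!'\n"
    )
    tools = scan_tools(d)
    result = tools["greet"]("World")
```

A second `scan_tools` call after editing a file would see the change immediately, which is the property `reload=True` gives you.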
SkillsProvider
Skills are the instruction files that Claude Code, Cursor, and Copilot use to learn new capabilities. SkillsProvider exposes these as MCP resources, which means any MCP client can discover and download skills from your server.
```python
from pathlib import Path

from fastmcp import FastMCP
from fastmcp.server.providers.skills import SkillsDirectoryProvider

mcp = FastMCP("Skills Server")
mcp.add_provider(SkillsDirectoryProvider(roots=Path.home() / ".claude" / "skills"))
```

Each subdirectory with a SKILL.md file becomes a discoverable skill. Clients see:

- `skill://{name}/SKILL.md` - Main instruction file
- `skill://{name}/_manifest` - JSON listing of all files with sizes and hashes
- `skill://{name}/{path}` - Supporting files
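To make the manifest idea concrete, here's roughly what such a listing contains: one entry per file with its path, size, and content hash. The field names are illustrative, not FastMCP's exact schema:

```python
import hashlib
import json
import pathlib
import tempfile

def build_manifest(skill_dir):
    # One entry per file: relative path, size in bytes, sha256 of contents.
    skill_dir = pathlib.Path(skill_dir)
    entries = []
    for path in sorted(skill_dir.rglob("*")):
        if path.is_file():
            data = path.read_bytes()
            entries.append({
                "path": path.relative_to(skill_dir).as_posix(),
                "size": len(data),
                "sha256": hashlib.sha256(data).hexdigest(),
            })
    return json.dumps(entries, indent=2)

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "SKILL.md").write_text("# My skill\n")
    manifest = json.loads(build_manifest(d))
```

Hashes let a client skip re-downloading files it already has, which matters when syncing skills across many machines.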
We also provide vendor-specific providers with locked default paths: ClaudeSkillsProvider, CursorSkillsProvider, VSCodeSkillsProvider, CodexSkillsProvider, and more.
The FastMCP client can automatically sync skills from servers to your local filesystem, making it easy to distribute skills across your organization.
OpenAPIProvider
OpenAPI-to-MCP conversion was one of FastMCP 2’s most popular features. In v3, we’ve restructured it as a provider, which means it now composes with everything else in the system.
```python
import httpx

from fastmcp.server.providers.openapi import OpenAPIProvider

client = httpx.AsyncClient(base_url="https://api.example.com")
provider = OpenAPIProvider(openapi_spec=spec, client=client)

mcp = FastMCP("API Server", providers=[provider])
```

All endpoints become tools by default. When paired with ToolTransform (covered below), you can rename auto-generated tools, improve descriptions, and curate the output for your agent—finally making OpenAPI conversion a tool for building good context rather than blindly accumulating more of it.
ProxyProvider
ProxyProvider sources components from a remote MCP server. This is what powers create_proxy(): you connect to any MCP server and expose its components as if they were local.
```python
from fastmcp.server import create_proxy

# Create proxy to remote server
server = create_proxy("http://remote-server/mcp")
```

FastMCPProvider
FastMCPProvider sources components from another FastMCP server instance. This is what powers mount(): compose servers together while keeping their middleware chains intact.
```python
from fastmcp import FastMCP

main = FastMCP("Main")
sub = FastMCP("Sub")

@sub.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

# Mount with namespace - greet becomes "sub_greet"
main.mount(sub, prefix="sub")
```

Under the hood, this creates a FastMCPProvider with a Namespace transform—the same primitives, with a cleaner API.
Transforms
Transforms modify components as they flow from providers to clients. They operate on two types of methods: list operations (like list_tools) receive the full sequence of components and return a transformed sequence; get operations (like get_tool) use a middleware pattern with call_next to chain lookups. Transforms can be stacked, and each one processes the output of the previous.
Transforms apply at two levels:
- Provider-level: `provider.add_transform()` - affects only that provider's components
- Server-level: `server.add_transform()` - affects all components from all providers
Built-In Transforms
Namespace
Namespace adds prefixes to component names (tool → api_tool) and path segments to URIs (data://x → data://api/x). Essential for avoiding collisions when composing servers.
```python
from fastmcp.server.transforms import Namespace

provider.add_transform(Namespace("api"))
```

ToolTransform
ToolTransform lets you reshape tools entirely: rename them, rewrite descriptions, modify argument names and schemas, add tags. This is especially powerful when you don’t control the tools you’re serving—if you’re using OpenAPIProvider or proxying a third-party server, ToolTransform lets you optimize those auto-generated tools for your agent.
```python
from fastmcp.server.transforms import ToolTransform
from fastmcp.tools.tool_transform import ToolTransformConfig

provider.add_transform(ToolTransform({
    "verbose_auto_generated_name": ToolTransformConfig(
        name="short_name",
        description="A better description for the agent",
        tags={"category"},
    ),
}))
```

VersionFilter
VersionFilter exposes only components within a version range, letting you run v1 and v2 servers from the same codebase. See Component Versioning for how to define versions on your components.
```python
from fastmcp.server.transforms import VersionFilter

# Create servers that share the provider with different filters
api_v1 = FastMCP("API v1", providers=[components])
api_v1.add_transform(VersionFilter(version_lt="2.0"))

api_v2 = FastMCP("API v2", providers=[components])
api_v2.add_transform(VersionFilter(version_gte="2.0"))
```

Visibility
The Visibility transform controls which components are exposed by tag, name, or version. This is what powers the enable() and disable() methods on servers and providers.
```python
mcp.disable(tags={"admin"})             # Hide admin tools
mcp.disable(names={"dangerous_tool"})   # Hide by name
mcp.enable(tags={"public"}, only=True)  # Allowlist mode
```

ResourcesAsTools and PromptsAsTools
These transforms expose resources and prompts as tools for clients that only support the tools protocol. Some MCP hosts—particularly early adopters and simpler implementations—only expose tools to agents. These transforms let your server stay rich while still working with limited clients.
```python
from fastmcp.server.transforms import ResourcesAsTools, PromptsAsTools

mcp.add_transform(ResourcesAsTools(mcp))
mcp.add_transform(PromptsAsTools(mcp))
```

ResourcesAsTools generates list_resources and read_resource tools that wrap the underlying resource operations. PromptsAsTools generates list_prompts and get_prompt tools. The transforms automatically handle argument mapping and response formatting—your resources and prompts work exactly as expected, just through the tools interface.
Custom Transforms
You can write your own transforms by subclassing Transform:
```python
from collections.abc import Sequence

from fastmcp.server.transforms import Transform, GetToolNext
from fastmcp.tools import Tool

class TagFilter(Transform):
    def __init__(self, required_tags: set[str]):
        self.required_tags = required_tags

    async def list_tools(self, tools: Sequence[Tool]) -> Sequence[Tool]:
        # list operations receive the sequence directly
        return [t for t in tools if t.tags & self.required_tags]

    async def get_tool(self, name: str, call_next: GetToolNext) -> Tool | None:
        # get operations use the call_next middleware pattern
        tool = await call_next(name)
        return tool if tool and tool.tags & self.required_tags else None
```

Authorization
FastMCP 3 introduces per-component authorization for tools, resources, and prompts—the missing piece after OAuth support in 2.12.
Component-Level Auth
The auth parameter accepts a callable (or list of callables) that receives the request context and decides whether to allow it:
```python
from fastmcp import FastMCP
from fastmcp.server.auth import require_auth, require_scopes

mcp = FastMCP()

@mcp.tool(auth=require_auth)
def protected_tool(): ...

@mcp.resource("data://secret", auth=require_scopes("read"))
def secret_data(): ...

@mcp.prompt(auth=require_scopes("admin"))
def admin_prompt(): ...
```

Built-in checks:

- `require_auth`: Requires any valid token
- `require_scopes(*scopes)`: Requires specific OAuth scopes
- `restrict_tag(tag, scopes)`: Requires scopes only for tagged components
Server-Wide Auth
Apply authorization to all components via AuthMiddleware:
```python
from fastmcp.server.middleware import AuthMiddleware
from fastmcp.server.auth import require_auth, restrict_tag

# Require auth for all components
mcp = FastMCP(middleware=[AuthMiddleware(auth=require_auth)])

# Tag-based restrictions
mcp = FastMCP(middleware=[
    AuthMiddleware(auth=restrict_tag("admin", scopes=["admin"]))
])
```

Custom Auth Checks
Custom checks receive AuthContext with token and component:
```python
def custom_check(ctx: AuthContext) -> bool:
    return ctx.token is not None and "admin" in ctx.token.scopes
```

Note: STDIO transport bypasses all auth checks (no OAuth concept).
CIMD
CIMD (Client ID Metadata Document) is the successor to Dynamic Client Registration. Instead of clients registering via a POST endpoint, they provide an HTTPS URL pointing to their metadata document. The server fetches and validates it, which is more secure and enables better client verification. Shipped in beta 2.
Component Versioning
You can now register multiple versions of the same component. FastMCP automatically exposes the highest version to clients while preserving older versions for compatibility.
Declaring Versions
```python
@mcp.tool(version="1.0")
def add(x: int, y: int) -> int:
    return x + y

@mcp.tool(version="2.0")
def add(x: int, y: int, z: int = 0) -> int:
    return x + y + z

# Only v2.0 is exposed via list_tools()
# Calling "add" invokes the v2.0 implementation
```

Version comparison uses PEP 440 ordering (1.10 > 1.9 > 1.2). The v prefix is normalized (v1.0 equals 1.0).
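The ordering is numeric, not lexical. A minimal sketch of the comparison (full PEP 440 handling lives in the `packaging` library, this only covers plain numeric releases):

```python
def release_tuple(version: str) -> tuple[int, ...]:
    # Strip an optional "v" prefix and compare numerically, so that
    # "1.10" sorts after "1.9" (string comparison gets this wrong).
    return tuple(int(part) for part in version.lstrip("vV").split("."))

assert release_tuple("1.10") > release_tuple("1.9") > release_tuple("1.2")
assert release_tuple("v1.0") == release_tuple("1.0")
assert "1.10" < "1.9"  # lexical comparison is wrong, hence the parsing
```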
Version Metadata
When listing components, FastMCP exposes all available versions in the meta field:
```python
tools = await client.list_tools()

# Each tool's meta includes:
# - meta["fastmcp"]["version"]: the version of this component ("2.0")
# - meta["fastmcp"]["versions"]: all available versions ["2.0", "1.0"]
```

Calling Specific Versions
The FastMCP client supports direct version selection:
```python
from fastmcp import Client

async with Client(server) as client:
    # Call the latest version (default)
    result = await client.call_tool("add", {"x": 1, "y": 2})

    # Call a specific version
    result = await client.call_tool("add", {"x": 1, "y": 2}, version="1.0")
```

For generic MCP clients that don't support the version parameter, pass the version via _meta in the arguments:

```json
{
  "x": 1,
  "y": 2,
  "_meta": {
    "fastmcp": {
      "version": "1.0"
    }
  }
}
```

Session-Scoped State
State now persists across tool calls within a session, not just within a single request.
```python
@mcp.tool
async def increment_counter(ctx: Context) -> int:
    count = await ctx.get_state("counter") or 0
    await ctx.set_state("counter", count + 1)
    return count + 1
```

State is automatically keyed by session ID, ensuring isolation between different clients.
Key changes from v2:
- Methods are now async: `await ctx.get_state()`, `await ctx.set_state()`, `await ctx.delete_state()`
- State expires after 1 day (TTL) to prevent unbounded growth
Distributed backends:
The implementation uses pykeyvalue (maintained by FastMCP maintainer Bill Easton) for pluggable storage:
```python
from key_value.aio.stores.redis import RedisStore

# Use Redis for distributed deployments
mcp = FastMCP("server", session_state_store=RedisStore(...))
```

Stateless HTTP:
For stateless HTTP deployments where there’s no persistent connection, FastMCP respects the mcp-session-id header that most clients send. If you’ve configured a storage backend, we’ll create a virtual session for you.
Visibility System
Components can be enabled or disabled using the visibility system. Each enable() or disable() call adds a Visibility transform that marks components.
```python
mcp = FastMCP("Server")

# Disable by name
mcp.disable(names={"dangerous_tool"}, components=["tool"])

# Disable by tag
mcp.disable(tags={"admin"})

# Allowlist mode - only show components with these tags
mcp.enable(tags={"public"}, only=True)

# Enable overrides earlier disable (later transform wins)
mcp.disable(tags={"internal"})
mcp.enable(names={"safe_tool"})  # safe_tool is visible despite internal tag
```

Blocklist vs Allowlist:
- Blocklist mode (default): All components visible except explicitly disabled
- Allowlist mode (`only=True`): Only explicitly enabled components visible
Per-Session Visibility
Server-level visibility changes affect all connected clients. For per-session control, use Context methods:
```python
@mcp.tool(tags={"premium"})
def premium_analysis(data: str) -> str:
    return f"Premium analysis of: {data}"

@mcp.tool
async def unlock_premium(ctx: Context) -> str:
    """Unlock premium features for this session only."""
    await ctx.enable_components(tags={"premium"})
    return "Premium features unlocked"

@mcp.tool
async def reset_features(ctx: Context) -> str:
    """Reset to default feature set."""
    await ctx.reset_visibility()
    return "Features reset to defaults"

# Globally disabled - sessions unlock individually
mcp.disable(tags={"premium"})
```

Session visibility methods:

- `await ctx.enable_components(...)`: Enable components for this session
- `await ctx.disable_components(...)`: Disable components for this session
- `await ctx.reset_visibility()`: Clear session rules, return to global defaults
FastMCP automatically sends ToolListChangedNotification (and resource/prompt equivalents) to affected sessions when visibility changes.
Production Features
OpenTelemetry Tracing
FastMCP 3 has native OpenTelemetry instrumentation. Drop in your OTEL configuration, and every tool call, resource read, and prompt render is traced with standardized attributes.
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

# Use fastmcp normally - spans export to your configured backend
```

Server spans include: component key, provider type, session ID, auth context. Client spans wrap outgoing calls with W3C trace context propagation.
Background Tasks (SEP-1686)
MCP has a spec extension (SEP-1686) for long-running background tasks. FastMCP implements this via Docket integration—you get persistent task queues backed by SQLite or Postgres, with the ability to scale workers horizontally.
```python
from fastmcp.server.tasks import TaskConfig

@mcp.tool(task=TaskConfig(mode="required"))
async def long_running_task():
    # Must be executed as a background task
    ...

@mcp.tool(task=TaskConfig(mode="optional"))
async def flexible_task():
    # Supports both sync and task execution
    ...

@mcp.tool(task=True)  # Shorthand for mode="optional"
async def simple_task(): ...
```

Task modes:

- `"forbidden"`: Does not support task execution (default)
- `"optional"`: Supports both synchronous and task execution
- `"required"`: Must be executed as a background task
Install with fastmcp[tasks] for Docket integration.
Tool Timeouts
Tools can limit foreground execution time:
@mcp.tool(timeout=30.0)async def fetch_data(url: str) -> dict: """Fetch with 30-second timeout.""" ...When exceeded, clients receive MCP error code -32000. Both sync and async tools are supported. Note: timeouts don’t apply to background tasks—those run in Docket’s task queue with their own lifecycle management.
Pagination
For servers with many components, enable pagination:
```python
server = FastMCP("ComponentRegistry", list_page_size=50)
```

When list_page_size is set, list operations paginate responses with nextCursor for subsequent pages. The FastMCP Client fetches all pages automatically—list_tools() returns the complete list. For manual pagination:

```python
async with Client(server) as client:
    result = await client.list_tools_mcp()
    while result.nextCursor:
        result = await client.list_tools_mcp(cursor=result.nextCursor)
```

PingMiddleware
Keep long-lived connections alive with periodic pings:
```python
from fastmcp.server.middleware import PingMiddleware

mcp = FastMCP("server")
mcp.add_middleware(PingMiddleware(interval_ms=5000))
```

Developer Experience
Decorators Return Functions
By popular demand (and by “popular demand” I mean “relentless GitHub issues”), your decorated functions now stay callable, like they do in Flask, FastAPI, and Typer:
```python
@mcp.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

# greet is still your function - call it directly
greet("World")  # "Hello, World!"
```

This makes testing straightforward: just call the function. For v2 compatibility, set FASTMCP_DECORATOR_MODE=object.
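The register-then-return pattern is easy to verify with a toy decorator. This is illustrative, not FastMCP's internals:

```python
# A toy decorator mirroring the v3 behavior: register the function as a
# tool, then hand back the original function unchanged (Flask-style).
registry: dict[str, object] = {}

def tool(fn):
    registry[fn.__name__] = fn
    return fn

@tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

result = greet("World")  # still a plain, directly callable function
```

Your test suite can call `greet` like any other function while the server still knows about it.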
Hot Reload
fastmcp run --reload watches your files and reloads automatically:
```bash
# Watch for changes and restart
fastmcp run server.py --reload

# Watch specific directories
fastmcp run server.py --reload --reload-dir ./src --reload-dir ./lib
```

The fastmcp dev command is a shorthand that includes --reload by default.
Automatic Threadpool
Synchronous tools, resources, and prompts now automatically run in a threadpool:
```python
import time

@mcp.tool
def slow_tool():
    time.sleep(10)  # No longer blocks other requests
    return "done"
```

Three concurrent calls now execute in parallel (~10s) rather than sequentially (~30s).
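The effect is the same as offloading each sync call with `asyncio.to_thread`. Here's a sketch of the behavior (not FastMCP's internals) with short sleeps so it runs quickly:

```python
import asyncio
import time

def slow_tool() -> str:
    time.sleep(0.2)  # simulates a blocking call
    return "done"

async def main() -> float:
    start = time.monotonic()
    # Three blocking calls run in worker threads and overlap,
    # instead of serializing on the event loop.
    results = await asyncio.gather(
        *(asyncio.to_thread(slow_tool) for _ in range(3))
    )
    assert results == ["done", "done", "done"]
    return time.monotonic() - start

elapsed = asyncio.run(main())
# elapsed is ~0.2s rather than ~0.6s
```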
Composable Lifespans
Lifespans can be combined with the | operator for modular setup/teardown:
```python
from fastmcp import FastMCP
from fastmcp.server.lifespan import lifespan

@lifespan
async def db_lifespan(server):
    db = await connect_db()
    try:
        yield {"db": db}
    finally:
        await db.close()

@lifespan
async def cache_lifespan(server):
    cache = await connect_cache()
    try:
        yield {"cache": cache}
    finally:
        await cache.close()

mcp = FastMCP("server", lifespan=db_lifespan | cache_lifespan)
```

Both enter in order and exit in reverse (LIFO). Context dicts are merged.
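The enter-in-order, exit-in-reverse behavior can be sketched with the stdlib's `AsyncExitStack`. This is a conceptual model of what `|` composes, not FastMCP's implementation:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events = []

@asynccontextmanager
async def db_lifespan():
    events.append("db up")
    yield {"db": "conn"}
    events.append("db down")

@asynccontextmanager
async def cache_lifespan():
    events.append("cache up")
    yield {"cache": "client"}
    events.append("cache down")

async def run_with(*lifespans):
    # Enter each lifespan in order and merge the yielded dicts;
    # AsyncExitStack unwinds them in reverse (LIFO) on exit.
    async with AsyncExitStack() as stack:
        state = {}
        for lifespan in lifespans:
            state.update(await stack.enter_async_context(lifespan()))
        return dict(state)

state = asyncio.run(run_with(db_lifespan, cache_lifespan))
# events == ["db up", "cache up", "cache down", "db down"]
```

LIFO teardown matters when one resource depends on another: the cache that was opened last closes first, while the database it might depend on is still alive.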
Rich Result Classes
New result classes provide explicit control over component responses:
ToolResult:
```python
from fastmcp.tools import ToolResult

@mcp.tool
def process(data: str) -> ToolResult:
    return ToolResult(
        content=[TextContent(type="text", text="Done")],
        structured_content={"status": "success", "count": 42},
        meta={"processing_time_ms": 150},
    )
```

ResourceResult:
```python
from fastmcp.resources import ResourceResult, ResourceContent

@mcp.resource("data://items")
def get_items() -> ResourceResult:
    return ResourceResult(
        contents=[
            ResourceContent({"key": "value"}),
            ResourceContent(b"binary data"),
        ],
        meta={"count": 2},
    )
```

PromptResult:
```python
from fastmcp.prompts import PromptResult, Message

@mcp.prompt
def conversation() -> PromptResult:
    return PromptResult(
        messages=[
            Message("What's the weather?"),
            Message("It's sunny today.", role="assistant"),
        ],
        meta={"generated_at": "2024-01-01"},
    )
```

Context.transport Property
Tools can detect which transport is active:
```python
@mcp.tool
def my_tool(ctx: Context) -> str:
    if ctx.transport == "stdio":
        return "short response"
    return "detailed response with more context"
```

Returns "stdio", "sse", or "streamable-http".
Upgrading
The vast majority of users can upgrade with no modifications. The breaking changes are documented in the upgrade guide, but the main ones are:
- Decorators return functions (set `FASTMCP_DECORATOR_MODE=object` for v2 behavior)
- State methods are async (`await ctx.get_state()` instead of `ctx.get_state()`)
- Auth providers require explicit configuration (no more auto-loading from env vars)
- `enabled` parameter removed from components (use the visibility system instead: `mcp.enable()`/`mcp.disable()`)
- Upgrade: `pip install fastmcp==3.0.0b2`
- Docs: Read the new documentation
- GitHub: Star the repo
Happy (context) engineering!
Comments
Has MCP already solved the context overflow problem? Like letting the agent decide when to know more about a tool?
It seems like that was the biggest limitation and why devs are switching over to skills instead.
Personally I think devs switched to skills because it's just more convenient to manage local context via files than through MCP. At least, that's why I did! But I'm not sure they're mutually exclusive, just appropriate for different jobs.
To your larger question, I think the protocol has many of the hooks this requires but there could be more (e.g. short and long descriptions), and ultimately many MCP client implementations are just frankly not great. So where we could dream up ways to progressively disclose details, it comes down to whether clients adopt it.
In 3.0 we're experimenting with skills-over-MCP which has been pretty fun.
I was really excited about the SkillsProvider. Serving skills over MCP to a large organization with many repos is very appealing.
Does it actually work though or is it just a proof-of-concept? I tried it with GitHub Copilot in VSCode but it didn’t seem to pick up the skill I exposed in my MCP.
IMO it's not mcp or skills. It's MCP and skills.
What you describe is an agent context management problem, not something inherent in the protocol. Skills are meant to solve this, but they can do so while also using MCP tools. The tools they refer to can be used to filter an MCP server's entire toolset.
Also, skills + CLI tools are mostly feasible for coding agents that have something like bash access, whereas skills + MCP is available for more locked-down agents.
I don't think I agree.
I think in the short term, it will be: 1) Local: Skills, 2) Remote: MCP. But long term, skills might replace it altogether.
The pattern that will replace MCP is: Skill Prompt + CLI tool
That already works for existing CLI tools. For example, Vercel's agent-browser with their own skill fully replaces the Playwright MCP. It's leaps better. Why? Mostly because the agent has more flexibility.
It can learn about the CLI tools by simply doing --help and is not bound to the tool/param selection the user provides.
Based on the blog post, it's possible to enable more tools during the session to optimize context. As far as I know, MCP integrations generally work by adding the tool descriptions into the system prompt. But how would the LLM discover those newly enabled tools during an ongoing chat, say inside a chat conversation in a JetBrains IDE?
How would MCP consumers detect those changes in the tool list and implement the pass through to llms without breaking caching?
MCP-compliant clients support a tool change notification that the server issues to indicate the need for a refresh.
That's a non-technical answer to a pretty technical question. So you don't know either? I'm not aware of a "notification" concept that LLM APIs understand. They have system prompts and mostly an array of messages. That's it.
Thank you for your work. SuperMCP depends on FastMCP. Here's the link: https://github.com/dhanababum/supermcp. Does FastMCP 3.0 support dynamic MCP servers now?
I think so -- providers can return new components on every call, so you can build some really dynamic stuff. For example the new filesystem provider can reload files to automatically pick up changes. If there's something you need that isn't easy let us know!
Very excited for this. I saw the docs earlier on the redis session store and I almost risked it all to install the beta in prod. Thanks for all the work you do!
This is a smart direction. Reducing everything to Providers and Transforms makes the framework easier to reason about and extend, and it shows in the feature list. Per-component auth and session state are especially welcome. The minimal breaking changes make upgrades practical too.
Whelp I know what I'm doing this weekend. My MCP for coding needs a refresh and this gives a good reason to get on it finally lol.
Factoring everything into providers and transforms is the move. 2.x had good ideas but the subsystems felt bolted together... mounting and proxying and filtering each doing their own thing. This cleanup should help devs who want to combine, say, an OpenAPI spec with custom auth transforms without hitting weird edge cases.
Session-scoped state is the one that'll matter most for agentic workflows. Multi-turn tool calls sharing context without external state stores simplifies a lot of orchestration code.
Hot reload was overdue. Debugging MCP servers without it meant restart loops that killed iteration speed.
Session state is the one I keep coming back to. The FAME paper from last month showed 13x latency reduction just by automating agent memory persistence instead of round-tripping to external stores. FastMCP baking this in at the framework level means fewer teams will roll their own half-broken session hacks.
Hot reload feels obvious in hindsight... the restart loop tax on debugging was real. Curious if the FileSystemProvider hot-reload plays nice with breakpoints or if you still need to detach/reattach.
You’re absolutely correct! It’s been a game changer!
Are but the start of the phrases I will use in my new MCP server called, human-to-ai-email-response-mcp.
Next time you’re sending a manual email using your own language embedded with your personality you can call upon my MCP to ensure it sounds like the rest of your team members.
why only python though? what did go and js ever do to you? personally i skip anything with py, thats why im still sticking with metamcp
Love fastmcp, thanks for your work.
🙏