
Commit 135e208: Add SSE polling documentation
1 parent: 284dfc3

File tree

1 file changed: +72 -0

docs/deployment/http.mdx

Lines changed: 72 additions & 0 deletions
@@ -198,6 +198,78 @@ Without `expose_headers=["mcp-session-id"]`, browsers will receive the session I
**Production Security**: Never use `allow_origins=["*"]` in production. Specify the exact origins of your browser-based clients. Using wildcards exposes your server to unauthorized access from any website.

</Warning>

### SSE Polling for Long-Running Operations

<VersionBadge version="2.14.0" />

<Note>
This feature only applies to the **StreamableHTTP transport** (the default for `http_app()`). It does not apply to the legacy SSE transport (`transport="sse"`).
</Note>
When running tools that take a long time to complete, you may encounter issues with load balancers or proxies terminating connections that stay idle too long. [SEP-1699](https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1699) introduces SSE polling to solve this by allowing the server to gracefully close connections and have clients automatically reconnect.
To enable SSE polling, configure an `EventStore` when creating your HTTP application:

```python
from fastmcp import FastMCP, Context, EventStore

mcp = FastMCP("My Server")

@mcp.tool
async def long_running_task(ctx: Context) -> str:
    """A task that takes several minutes to complete."""
    for i in range(100):
        await ctx.report_progress(i, 100)

        # Periodically close the connection to avoid load balancer timeouts.
        # The client will automatically reconnect and resume receiving progress.
        if i % 30 == 0 and i > 0:
            await ctx.close_sse_stream()

        await do_expensive_work()

    return "Done!"

# Configure with EventStore for resumability
event_store = EventStore()
app = mcp.http_app(
    event_store=event_store,
    retry_interval=2000,  # Client reconnects after 2 seconds
)
```
**How it works:**

1. When `event_store` is configured, the server stores all events (progress updates, results) with unique IDs.
2. Calling `ctx.close_sse_stream()` gracefully closes the HTTP connection.
3. The client automatically reconnects with a `Last-Event-ID` header.
4. The server replays any events the client missed during the disconnection.
The `retry_interval` parameter (in milliseconds) controls how long clients wait before reconnecting. Choose a value that balances responsiveness with server load.
<Note>
`close_sse_stream()` is a no-op if called without an `EventStore` configured, so you can safely include it in tools that may run in different deployment configurations.
</Note>
#### Custom Storage Backends
By default, `EventStore` uses in-memory storage. For production deployments with multiple server instances, you can provide a custom storage backend using the `key_value` package:

```python
from fastmcp import EventStore
from key_value.aio.stores.redis import RedisStore

# Use Redis for distributed deployments
redis_store = RedisStore(url="redis://localhost:6379")
event_store = EventStore(
    storage=redis_store,
    max_events_per_stream=100,  # Keep last 100 events per stream
    ttl=3600,                   # Events expire after 1 hour
)

app = mcp.http_app(event_store=event_store)
```
## Integration with Web Frameworks

If you already have a web application running, you can add MCP capabilities by mounting a FastMCP server as a sub-application. This allows you to expose MCP tools alongside your existing API endpoints, sharing the same domain and infrastructure. The MCP server becomes just another route in your application, making it easy to manage and deploy.
