Palfrey is a clean-room, high-performance Python ASGI server with source-traceable parity mapping.
Documentation: https://palfrey.dymmond.com 📚
Source Code: https://github.com/dymmond/palfrey
The officially supported version is always the latest release.
Palfrey is a clean-room ASGI server focused on three things:
- behavior you can reason about
- deployment controls you can operate safely
- performance you can reproduce and verify
Protocol runtime modes include HTTP/1.1 backends plus opt-in HTTP/2 (`--http h2`) and HTTP/3 (`--http h3`) paths.
Palfrey was built with deep respect for Uvicorn and the ASGI ecosystem it helped mature. This is not a "winner vs loser" comparison. Uvicorn is an excellent, battle-tested server, and Palfrey intentionally keeps a compatible API/CLI experience so teams coming from Uvicorn feel at home. Our goal is to offer another strong option when teams want different internal architecture and extended runtime capabilities.
Benchmark snapshot (your run):
- Command:

```shell
python -m benchmarks.run --http-requests 100000
```
| Scenario | Palfrey Ops/s | Uvicorn Ops/s | Relative Speed |
|---|---|---|---|
| HTTP | 36859.67 | 36357.47 | 1.014x |
| WebSocket | 38884.53 | 15317.18 | 2.539x |
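The Relative Speed column is simply the ratio of Palfrey's ops/s to Uvicorn's ops/s. You can reproduce it from the table values:

```python
# Relative speed = Palfrey ops/s divided by Uvicorn ops/s,
# using the numbers from the benchmark table above.
rows = {
    "HTTP": (36859.67, 36357.47),
    "WebSocket": (38884.53, 15317.18),
}

for scenario, (palfrey_ops, uvicorn_ops) in rows.items():
    print(f"{scenario}: {palfrey_ops / uvicorn_ops:.3f}x")
# HTTP: 1.014x
# WebSocket: 2.539x
```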
These numbers are environment-dependent. Always benchmark with your own app, traffic profile, and infrastructure before making production decisions.
This documentation is written for both technical and non-technical readers.
- Engineers can use the protocol details, option tables, and runbooks.
- Product, support, and operations teams can use the plain-language summaries and checklists.
At runtime, Palfrey sits between clients and your ASGI application.
- accepts TCP or UNIX socket connections
- parses protocol bytes into ASGI events
- calls your app with `scope`, `receive`, and `send`
- writes responses back to clients
- manages process behavior (reload, workers, graceful shutdown)
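The request/response flow above can be sketched as a toy in-process driver. This is an illustration of the ASGI calling convention, not Palfrey's actual internals: the server builds a `scope` from parsed protocol bytes, then hands the app a `receive` callable for incoming events and a `send` callable for outgoing ones.

```python
import asyncio


async def demo_app(scope, receive, send):
    # A minimal ASGI app: echo the request path back as the body.
    assert scope["type"] == "http"
    body = scope["path"].encode()
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-length", str(len(body)).encode("ascii"))],
    })
    await send({"type": "http.response.body", "body": body})


async def drive(app, path="/hello"):
    # The scope a server would build after parsing the request line and headers.
    scope = {"type": "http", "method": "GET", "path": path, "headers": []}
    inbox = [{"type": "http.request", "body": b"", "more_body": False}]
    sent = []

    async def receive():
        return inbox.pop(0)

    async def send(event):
        sent.append(event)

    await app(scope, receive, send)
    return sent


events = asyncio.run(drive(demo_app))
print(events[0]["status"], events[1]["body"])  # 200 b'/hello'
```

A real server does the same dance per connection, with `receive` fed by the protocol parser and `send` draining into the socket writer.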
Create `main.py`:

```python
async def app(scope, receive, send):
    """Return a plain-text greeting for HTTP requests."""
    if scope["type"] != "http":
        return
    body = b"Hello from Palfrey"
    await send(
        {
            "type": "http.response.start",
            "status": 200,
            "headers": [
                (b"content-type", b"text/plain; charset=utf-8"),
                (b"content-length", str(len(body)).encode("ascii")),
            ],
        }
    )
    await send({"type": "http.response.body", "body": body})
```

Run Palfrey:
```shell
palfrey main:app --host 127.0.0.1 --port 8000
```

Check it:
```shell
curl http://127.0.0.1:8000
```

Gunicorn + Palfrey worker:
```shell
gunicorn main:app -k palfrey.workers.PalfreyWorker -w 4 -b 0.0.0.0:8000
```

- install, verify, and run your first app
- move from a minimal app to real startup patterns
- what ASGI is, and how Palfrey applies it
- how HTTP, WebSocket, and lifespan flows behave
- how server internals affect user-visible outcomes
- full CLI and config surface
- protocol and logging behavior
- env var model and common errors
- migration, security hardening, production rollout
- practical troubleshooting and FAQ
- deployment shapes, workers, reload model
- capacity planning, observability, benchmark method
- platform-specific notes and release process
If your application is the business logic, Palfrey is the runtime control layer around it. A good runtime control layer gives teams:
- predictable startup and shutdown
- fewer surprises under traffic spikes
- clearer incident response paths
- safer, repeatable deployments
