This example demonstrates Dynamo's Prefill/Decode Disaggregated Serving architecture, where the prefill and decode phases of LLM inference are separated into specialized workers for enhanced performance, improved resource utilization, and better scalability.
Traditional LLM inference combines two distinct phases with different computational characteristics:
- Prefill Phase: Processes the entire input prompt to generate the KV cache (compute-bound)
- Decode Phase: Generates output tokens one by one using the KV cache (memory-bound)
Dynamo's disaggregated architecture separates these phases into specialized workers:
- Prefill Workers: Optimized for high-throughput parallel processing of input tokens
- Decode Workers: Optimized for low-latency sequential token generation
This separation allows for:
- Better Hardware Utilization: Use different parallelism configurations optimized for each phase
- Improved Scalability: Scale prefill and decode workers independently based on workload
- Enhanced Performance: Eliminate head-of-line blocking where long prefills delay ongoing decodes
> **Note:** This example requires at least 2 GPUs: one for prefill and one for decode.
Before running this example, ensure you have the following services running:
- etcd: A distributed key-value store used for service discovery and metadata storage
- NATS: A high-performance message broker for inter-component communication
You can start these services using Docker Compose:

```bash
docker compose -f deploy/metrics/docker-compose.yml up -d
```

The deployment consists of three components:

- Frontend - HTTP API endpoint that receives requests and forwards them to the decode worker
- vLLM Prefill Worker - Specialized worker for prefill phase execution
- vLLM Decode Worker - Specialized worker that handles requests and decides between local/remote prefill
```mermaid
---
title: Disaggregated Request Flow
---
flowchart TD
    Client["Users/Clients<br/>(HTTP)"] --> Frontend["Frontend<br/>HTTP API endpoint<br/>(OpenAI Style)"]
    Frontend --> Decode["Decode Worker"]
    Decode --> Availability{"Prefill Workers<br/>Available?"}
    Availability -->|Yes| Prefill["Prefill Worker<br/>(Remote execution)"]
    Availability -->|No| Decode
    Prefill --> NIXL["NIXL KV Transfer<br/>(GPU-to-GPU)"]
    NIXL --> Decode
    Decode --> Frontend
    Frontend --> Client
```
There are four steps to deploy and use disaggregated serving with Dynamo.
Open a new terminal and start the decode worker:
```bash
export DYN_LOG=debug  # Increase log verbosity to see disaggregation
CUDA_VISIBLE_DEVICES=0 python -m dynamo.vllm --model Qwen/Qwen3-0.6B
```

This starts a decode worker that can receive requests and decide whether to:
- Handle short prefills locally (fast path)
- Send long prefills to remote prefill workers (disaggregated path)
Leave this terminal running - it will show Decode Worker logs.
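The local-vs-remote decision described above can be pictured as a simple length-plus-availability check. This is a hypothetical Python sketch, not Dynamo's actual API; the threshold value and function name are illustrative:

```python
# Hypothetical sketch of the decode worker's prefill routing decision.
# The threshold and names are illustrative; the real logic may also weigh
# KV-cache reuse and current load.

PREFILL_LENGTH_THRESHOLD = 512  # tokens; assumed cutoff for remote prefill

def choose_prefill_path(prompt_tokens: int, prefill_workers_available: bool) -> str:
    """Return where the prefill phase should run for this request."""
    if not prefill_workers_available:
        return "local"   # fallback: the decode worker does everything itself
    if prompt_tokens < PREFILL_LENGTH_THRESHOLD:
        return "local"   # short prompts: remote transfer overhead isn't worth it
    return "remote"      # long prompts: offload to a dedicated prefill worker

print(choose_prefill_path(2048, True))   # long prompt, workers up -> remote
print(choose_prefill_path(64, True))     # short prompt -> local fast path
print(choose_prefill_path(2048, False))  # no prefill workers -> local fallback
```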
Open another terminal and start the prefill worker:
```bash
export DYN_LOG=debug  # Increase log verbosity to see disaggregation
DYN_VLLM_KV_EVENT_PORT=20081 \
VLLM_NIXL_SIDE_CHANNEL_PORT=20097 \
CUDA_VISIBLE_DEVICES=1 python -m dynamo.vllm --model Qwen/Qwen3-0.6B --is-prefill-worker
```

This starts a specialized prefill worker that:
- Pulls prefill requests from the NATS queue
- Executes prefill computation efficiently
- Transfers computed KV cache to decode workers via NIXL
Leave this terminal running - it will show Prefill Worker logs.
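The prefill worker's lifecycle boils down to a pull-execute-transfer loop. The sketch below is illustrative only: the in-process queue stands in for NATS, and appending to a list stands in for the NIXL KV transfer:

```python
from queue import Queue, Empty

def run_prefill_worker(request_queue: Queue, transfers: list) -> None:
    """Illustrative pull-execute-transfer loop for a prefill worker.
    Stand-ins: Queue for the NATS work queue, list append for NIXL transfer."""
    while True:
        try:
            request = request_queue.get(timeout=0.1)  # pull next prefill request
        except Empty:
            break  # a real worker keeps waiting; we exit for the demo
        kv_cache = f"kv({request['prompt']})"  # stand-in for the prefill forward pass
        transfers.append((request["id"], kv_cache))  # stand-in for GPU-to-GPU transfer
        # Control returns immediately after prefill; decoding happens elsewhere.

q: Queue = Queue()
q.put({"id": 1, "prompt": "hello"})
q.put({"id": 2, "prompt": "world"})
done: list = []
run_prefill_worker(q, done)
print(done)  # both requests prefilled and "transferred"
```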
Open a third terminal and start the frontend:
```bash
python -m dynamo.frontend --http-port 8000
```

The frontend will automatically discover the prefill and decode workers through the etcd service registry.
Send requests to test the disaggregated serving setup:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Qwen/Qwen3-0.6B",
    "messages": [
      { "role": "user", "content": "Tell me a story about a cowardly cat" }
    ],
    "stream": false,
    "max_tokens": 1028
  }'
```

When you're done with the disaggregated serving example:
In each terminal, press Ctrl+C to stop:
- Frontend (terminal from step 3)
- Prefill Worker (terminal from step 2)
- Decode Worker (terminal from step 1)
Stop the etcd and NATS services:
```bash
docker compose -f deploy/metrics/docker-compose.yml down
```

Dynamo's disaggregated serving architecture separates prefill and decode operations for optimal performance:
The system employs two types of specialized workers:
- Decode Workers: Handle incoming requests and manage token generation
  - Receive all incoming requests
  - Make routing decisions based on system state
  - Execute the decode phase to generate output tokens
- Prefill Workers: Focus exclusively on prefill computation
  - Process input prompts to generate KV cache
  - Transfer computed KV cache to decode workers
  - Return control immediately after prefill completion
The system uses a simple yet effective routing strategy:
- Availability-Based Routing: Decode workers monitor prefill worker availability
- Automatic Fallback: When no prefill workers are available, decode workers handle everything locally
- Transparent Operation: Clients are unaware of whether requests are processed locally or disaggregated
This approach ensures the system remains operational regardless of configuration changes, automatically adapting to the available resources.
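The register/deregister behavior behind this fallback can be sketched as follows. The class and method names are hypothetical stand-ins for the etcd-backed service registry, not Dynamo's real interfaces:

```python
class WorkerRegistry:
    """Hypothetical stand-in for the etcd-backed service registry."""

    def __init__(self) -> None:
        self._prefill_workers: set[str] = set()

    def register(self, worker_id: str) -> None:
        self._prefill_workers.add(worker_id)

    def deregister(self, worker_id: str) -> None:
        self._prefill_workers.discard(worker_id)

    def route(self) -> str:
        """Availability-based routing with transparent local fallback."""
        if self._prefill_workers:
            # Pick any registered prefill worker (real systems may load-balance).
            return f"remote:{sorted(self._prefill_workers)[0]}"
        # No prefill workers registered: the decode worker prefills locally.
        return "local"

registry = WorkerRegistry()
print(registry.route())          # no workers yet -> local
registry.register("prefill-0")
print(registry.route())          # worker registered -> remote:prefill-0
registry.deregister("prefill-0")
print(registry.route())          # worker gone -> local again, no client impact
```

Because `route()` always returns a usable answer, clients see identical behavior whether or not any prefill worker is alive, which is exactly the transparency property described above.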
The architecture relies on NVIDIA's NIXL (NVIDIA Inference Transfer Library) for efficient KV cache movement:
- Direct GPU-to-GPU Transfer: KV cache data moves directly between GPU memory without CPU involvement
- Zero-Copy Operations: Eliminates redundant memory copies for maximum efficiency
- Automatic Transport Selection: NIXL chooses the optimal transport (NVLink, InfiniBand, etc.) based on hardware topology
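The transport-selection idea can be pictured roughly as a preference order over available links. This is a hypothetical sketch modeled on the description above, not NIXL's actual API or probing logic:

```python
def select_transport(same_node: bool, has_nvlink: bool, has_infiniband: bool) -> str:
    """Pick the fastest available transport for a KV-cache transfer.
    Hypothetical preference order: NVLink > InfiniBand > TCP fallback."""
    if same_node and has_nvlink:
        return "nvlink"       # direct GPU-to-GPU within a node
    if has_infiniband:
        return "infiniband"   # RDMA across nodes, still bypassing the CPU
    return "tcp"              # generic fallback when no fast path exists

print(select_transport(True, True, True))     # same node with NVLink -> nvlink
print(select_transport(False, True, True))    # cross-node -> infiniband
print(select_transport(False, False, False))  # nothing fast available -> tcp
```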
```mermaid
sequenceDiagram
    participant Client
    participant Decode as Decode Worker
    participant Prefill as Prefill Worker
    Client->>Decode: Send request
    Decode->>Decode: Check prefill availability
    alt Prefill workers available
        Decode->>Prefill: Forward for prefill
        Prefill->>Prefill: Compute KV cache
        Note over Prefill,Decode: NIXL transfers KV cache
        Prefill-->>Decode: Return control
        Decode->>Decode: Generate tokens
    else No prefill workers
        Decode->>Decode: Prefill + Decode locally
    end
    Decode-->>Client: Stream response tokens
```
This disaggregated architecture provides several advantages:
- Resource Optimization: Each worker type can be optimized for its specific workload
- Independent Scaling: Add prefill or decode workers based on workload characteristics
- Improved Latency: Ongoing decode operations aren't blocked by new prefill requests
- Seamless Degradation: System continues operating even without prefill workers
The architecture supports various deployment patterns:
- Single Node: Prefill and decode workers on different GPUs of the same machine
- Multi-Node: Workers distributed across multiple machines for larger scale
- Dynamic Scaling: Add or remove workers without disrupting ongoing operations
By separating concerns and using efficient communication mechanisms, Dynamo achieves the performance benefits of disaggregation without the complexity typically associated with distributed systems.