Master: The main ComfyUI instance that coordinates and distributes work. This is where you load workflows, manage the queue, and view results.
Worker: A ComfyUI instance that receives and processes tasks from the master. Workers handle just the GPU computation and send results back to the master. You can have multiple workers connected to a single master, each utilizing their own GPU.
The master can either contribute GPU work or stay in orchestrator-only mode:
- Participating: Master renders alongside workers, useful when you want every available GPU.
- Orchestrator-only: Master sends jobs to selected workers but skips local rendering. Enable this by opening the Distributed panel and unchecking the master toggle. The master card will display “Master disabled: running as orchestrator only.”
- Fallback: If orchestrator-only is enabled but no workers remain selected, the master automatically re-enables execution to guarantee the workflow still runs. The UI shows a green “Master fallback execution active” badge so you know work is executing locally again.
- Local workers: Additional GPUs on the same machine as the master
- Remote workers: GPUs on different computers within your network
- Cloud workers: GPUs hosted on cloud services like Runpod
These are added automatically on first launch, but you can add them manually if you need to.
- Open the Distributed GPU panel.
- Click "Add Worker" in the UI.
- Configure your local worker:
- Name: A descriptive name for the worker (e.g., "Studio PC 1")
- Port: A unique port number for this worker (e.g., 8189, 8190...).
- CUDA Device: The GPU index from `nvidia-smi` (e.g., 0, 1).
- Extra Args: Optional ComfyUI arguments for this specific worker.
- Save and launch the local worker.
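Conceptually, launching a local worker is like starting a second ComfyUI instance pinned to its own GPU and port. A minimal sketch of the manual equivalent, assuming a standard ComfyUI checkout and a second GPU at index 1 (the port and device index are just the example values from above):

```shell
# Rough manual equivalent of a local worker: pin the process to GPU index 1
# (as reported by nvidia-smi) and give it its own unique port.
CUDA_VISIBLE_DEVICES=1 python main.py --port 8189
```

The panel handles this for you; the sketch is only to show why each worker needs a unique port and CUDA device.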
ComfyUI instances running on completely different computers on your network. These allow you to harness GPU power from other machines. Remote workers must be manually started on their respective computers and are connected via IP address.
On the Remote Worker Machine:
- Launch ComfyUI with the `--listen --enable-cors-header` arguments. ⚠️ Required!
  - This ComfyUI instance will serve as a worker for your main master.
- Optionally add additional local workers on this machine if it has multiple GPUs:
- Access the Distributed GPU panel in this ComfyUI instance
- Add workers for any additional GPUs (if they haven't been added automatically)
- Make sure they have `--listen` set in Extra Args
- Launch them
- Open the ComfyUI port (e.g., 8188) and any additional worker ports (e.g., 8189, 8190) in the firewall.
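Taken together, setting up the remote worker machine might look like the following on a Linux host. This is a sketch under assumptions: `ufw` as the firewall and ports 8188–8190 as the example ports from above; adjust for your firewall and port choices.

```shell
# Launch ComfyUI with the required flags so the master can reach it
python main.py --listen --enable-cors-header

# In another terminal: open the ComfyUI port plus any additional worker ports
sudo ufw allow 8188/tcp
sudo ufw allow 8189/tcp
sudo ufw allow 8190/tcp
```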
On the Main Machine:
- Launch ComfyUI with the `--enable-cors-header` launch argument.
- Open the Distributed GPU panel (sidebar on the left).
- Click "Add Worker."
- Choose "Remote".
- Configure your remote worker:
- Name: A descriptive name for the worker (e.g., "Server Rack GPU 0")
- Host: The remote worker's IP address.
- Port: The port number used when launching ComfyUI on the remote machine (e.g., 8188).
- Save the remote worker configuration.
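Before queueing a workflow, you can check that the master machine can actually reach the remote worker. ComfyUI exposes a `/system_stats` endpoint that returns JSON when the instance is up; the IP address below is a placeholder for your remote worker's address:

```shell
# Replace 192.168.1.50 with your remote worker's IP address.
# A JSON response means the worker is reachable; a timeout usually
# means a firewall or missing --listen flag.
curl http://192.168.1.50:8188/system_stats
```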
ComfyUI instances running on a cloud service like Runpod.
On Runpod:
If using your own template, make sure you launch ComfyUI with the `--enable-cors-header` argument and `git clone` ComfyUI-Distributed into custom_nodes. ⚠️ Required!
- Register a Runpod account.
- On Runpod, go to Storage > New Network Volume and create a volume that will store the models you need. Start with 40 GB; you can always add more later. Learn more about Network Volumes.
- Use the ComfyUI Distributed Pod template.
- Make sure your Network Volume is mounted and choose a suitable GPU.
⚠️ To use the ComfyUI Distributed Pod template, you will need to filter instances by CUDA 12.8 (add filter in Additional Filters).
- Press Edit Template to configure the pod's Environment Variables:
- CIVITAI_API_TOKEN: your CivitAI API token
- HF_API_TOKEN: your Hugging Face access token
- SAGE_ATTENTION: optional optimization (set to true/false)
- Deploy your pod.
- Connect to your pod using JupyterLab. This gives you access to the pod's file system.
- Download models into /workspace/ComfyUI/models/ (these will remain on your network drive even after you terminate the pod). Example commands below:
# Download from CivitAI
comfy model download --url https://civitai.com/api/download/models/1759168 --relative-path /workspace/ComfyUI/models/checkpoints --set-civitai-api-token $CIVITAI_API_TOKEN
# Download model from Hugging Face
comfy model download --url https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors --relative-path /workspace/ComfyUI/models/unet --set-hf-api-token $HF_API_TOKEN
ℹ️ Use this guide to make this process easy. It will generate a shell script that automatically downloads the models for a given workflow.
- Access ComfyUI through the Runpod URL.
- Download any additional custom nodes you need using the ComfyUI Manager.
On the Main Machine:
- Launch a Cloudflare tunnel.
- Download from here: https://github.com/cloudflare/cloudflared/releases
- Then run, for example:
cloudflared-windows-amd64.exe tunnel --url http://localhost:8188
ℹ️ Cloudflare tunnels create secure connections without exposing ports directly to the internet and are required for Cloud Workers.
- Copy the Cloudflare address
- Launch ComfyUI with the `--enable-cors-header` launch argument.
- Open the Distributed GPU panel (sidebar on the left).
- Edit the Master's settings to change the host address to the Cloudflare address.
- Click "Add Worker."
- Choose "Cloud".
- Configure your cloud worker:
- Host: The ComfyUI Runpod address. For example: wcegfo9tbbml9l-8188.proxy.runpod.net
- Port: 443
- Save the cloud worker configuration.
On the Cloud Worker machine:
- Your cloud worker container needs to have the same models and custom nodes as the workflow you want to run on your local machine.
- If your cloud platform doesn't provide a secure connection, use Cloudflare to create a tunnel for the worker. Each GPU needs its own tunnel for its respective port.
- For example:
./cloudflared tunnel --url http://localhost:8188
- Launch ComfyUI with the `--listen --enable-cors-header` arguments. ⚠️ Required!
- Add workers in the UI panel if the cloud machine has more than one GPU.
- Make sure that they also have `--listen` set in Extra Args.
- Then launch them.
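On a two-GPU cloud machine, the steps above might be sketched as follows. The ports, background tunnels, and device indices are assumptions for illustration; the second GPU's worker would normally be added through the panel with `--listen` in Extra Args rather than launched by hand:

```shell
# One tunnel per worker port (each GPU needs its own tunnel)
./cloudflared tunnel --url http://localhost:8188 &
./cloudflared tunnel --url http://localhost:8189 &

# Main instance on GPU 0, with the required flags
CUDA_VISIBLE_DEVICES=0 python main.py --listen --enable-cors-header --port 8188
```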
On the Main Machine:
- Launch a Cloudflare tunnel on your local machine.
- Download from here: https://github.com/cloudflare/cloudflared/releases
- Then run, for example:
cloudflared-windows-amd64.exe tunnel --url http://localhost:8188
- Copy the Cloudflare address
- Launch ComfyUI with the `--enable-cors-header` launch argument.
- Open the Distributed GPU panel (sidebar on the left).
- Edit the Master's host address and replace it with the Cloudflare address.
- Click "Add Worker."
- Choose "Cloud".
- Configure your cloud worker:
- Host: The cloud worker's tunnel address or domain
- Port: 443
- Save the cloud worker configuration.