
Commit 44ad748

Bihan Rana and peterschmidt85 authored
[Examples] Ray+RAGEN (#2665)
* Add Distributed Agent Fine Tuning Example

* Update examples/clusters/agent-fine-tuning/README.md

Co-authored-by: Andrey Cheptsov <[email protected]>

* Update examples/clusters/agent-fine-tuning/README.md

Co-authored-by: Andrey Cheptsov <[email protected]>

* [Examples] Ray+RAGEN

---------

Co-authored-by: Bihan Rana <[email protected]>
Co-authored-by: Andrey Cheptsov <[email protected]>
Co-authored-by: peterschmidt85 <[email protected]>
1 parent 192f8b5 commit 44ad748

File tree

10 files changed: +201 −8 lines changed

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -24,3 +24,4 @@ build/
 .vscode
 .aider*
 uv.lock
+.local/

docs/docs/guides/clusters.md

Lines changed: 1 addition & 1 deletion
@@ -76,5 +76,5 @@ Refer to [instance volumes](../concepts/volumes.md#instance) for an example.
 
 !!! info "What's next?"
     1. Read about [distributed tasks](../concepts/tasks.md#distributed-tasks), [fleets](../concepts/fleets.md), and [volumes](../concepts/volumes.md)
-    2. Browse the [Clusters](../../examples.md#clusters) examples
+    2. Browse the [Clusters](../../examples.md#clusters) and [Distributed training](../../examples.md#distributed-training) examples

docs/examples.md

Lines changed: 17 additions & 1 deletion
@@ -83,6 +83,22 @@ hide:
   </a>
 </div>
 
+## Distributed training
+
+<div class="tx-landing__highlights_grid">
+  <a href="/examples/distributed-training/ray-ragen"
+     class="feature-cell sky">
+    <h3>
+      Ray+RAGEN
+    </h3>
+
+    <p>
+      Fine-tune an agent on multiple nodes
+      with RAGEN, verl, and Ray.
+    </p>
+  </a>
+</div>
+
 ## Inference
 
 <div class="tx-landing__highlights_grid">

@@ -128,7 +144,7 @@ hide:
       TensorRT-LLM
     </h3>
     <p>
-      Deploy DeepSeek R1 and its distilled version with TensorRT-LLM
+      Deploy DeepSeek models with TensorRT-LLM
     </p>
   </a>
 </div>

docs/examples/distributed-training/ray-ragen/index.md

Whitespace-only changes.

docs/overrides/main.html

Lines changed: 1 addition & 0 deletions
@@ -119,6 +119,7 @@
     <div class="tx-footer__section-title">Examples</div>
     <a href="/examples#fine-tuning" class="tx-footer__section-link">Fine-tuning</a>
     <a href="/examples#clusters" class="tx-footer__section-link">Clusters</a>
+    <a href="/examples#distributed-training" class="tx-footer__section-link">Distributed training</a>
     <a href="/examples#inference" class="tx-footer__section-link">Inference</a>
     <a href="/examples#accelerators" class="tx-footer__section-link">Accelerators</a>
     <a href="/examples#llms" class="tx-footer__section-link">LLMs</a>

examples/.dstack.yml

Lines changed: 6 additions & 5 deletions
@@ -2,14 +2,15 @@ type: dev-environment
 # The name is optional, if not specified, generated randomly
 name: vscode
 
-python: "3.11"
-# Uncomment to use a custom Docker image
-#image: dstackai/base:py3.13-0.7-cuda-12.1
+#python: "3.11"
+
+image: un1def/dstack-base:py3.12-dev-cuda-12.1
 
 ide: vscode
 
 # Use either spot or on-demand instances
-spot_policy: auto
+#spot_policy: auto
 
 resources:
-  gpu: 1
+  cpu: x86:8..32
+  gpu: 24GB..:1
examples/distributed-training/ray-ragen/.dstack.yml

Lines changed: 39 additions & 0 deletions (new file)

type: task
name: ray-ragen-cluster

nodes: 2

env:
  - WANDB_API_KEY
image: whatcanyousee/verl:ngc-cu124-vllm0.8.5-sglang0.4.6-mcore0.12.0-te2.2
commands:
  - wget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  - bash miniconda.sh -b -p /workflow/miniconda
  - eval "$(/workflow/miniconda/bin/conda shell.bash hook)"
  - git clone https://github.com/RAGEN-AI/RAGEN.git
  - cd RAGEN
  - bash scripts/setup_ragen.sh
  - conda activate ragen
  - cd verl
  - pip install --no-deps -e .
  - pip install hf_transfer hf_xet
  - pip uninstall -y ray
  - pip install -U "ray[default]"
  - |
    if [ $DSTACK_NODE_RANK = 0 ]; then
      ray start --head --port=6379;
    else
      ray start --address=$DSTACK_MASTER_NODE_IP:6379
    fi

# Expose Ray dashboard port
ports:
  - 8265

resources:
  gpu: 80GB:8
  shm_size: 128GB

# Save checkpoints on the instance
volumes:
  - /checkpoints:/checkpoints
Lines changed: 133 additions & 0 deletions (new file)

# Ray + RAGEN

This example shows how to use `dstack` and [RAGEN :material-arrow-top-right-thin:{ .external }](https://github.com/RAGEN-AI/RAGEN){:target="_blank"}
to fine-tune an agent on multiple nodes.

Under the hood, `RAGEN` uses [verl :material-arrow-top-right-thin:{ .external }](https://github.com/volcengine/verl){:target="_blank"} for reinforcement learning and [Ray :material-arrow-top-right-thin:{ .external }](https://docs.ray.io/en/latest/){:target="_blank"} for distributed training.

## Create fleet

Before submitting distributed training runs, make sure to create a fleet with `placement` set to `cluster`.

> For more details on how to use clusters with `dstack`, check the [Clusters](https://dstack.ai/docs/guides/clusters) guide.
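As a rough sketch, such a fleet configuration could look like the following; the fleet name, node count, and GPU spec below are illustrative placeholders, not part of this example:

```yaml
type: fleet
# Illustrative name; pick your own
name: ray-ragen-fleet

# Interconnect the nodes so they can form a Ray cluster
placement: cluster

nodes: 2

resources:
  gpu: 80GB:8
```

Once defined, the fleet is created with `dstack apply`, the same way as any other configuration.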
## Run a Ray cluster

If you want to use Ray with `dstack`, you first have to run a Ray cluster.

The task below runs a Ray cluster on an existing fleet:

<div editor-title="examples/distributed-training/ray-ragen/.dstack.yml">

```yaml
type: task
name: ray-ragen-cluster

nodes: 2

env:
  - WANDB_API_KEY
image: whatcanyousee/verl:ngc-cu124-vllm0.8.5-sglang0.4.6-mcore0.12.0-te2.2
commands:
  - wget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  - bash miniconda.sh -b -p /workflow/miniconda
  - eval "$(/workflow/miniconda/bin/conda shell.bash hook)"
  - git clone https://github.com/RAGEN-AI/RAGEN.git
  - cd RAGEN
  - bash scripts/setup_ragen.sh
  - conda activate ragen
  - cd verl
  - pip install --no-deps -e .
  - pip install hf_transfer hf_xet
  - pip uninstall -y ray
  - pip install -U "ray[default]"
  - |
    if [ $DSTACK_NODE_RANK = 0 ]; then
      ray start --head --port=6379;
    else
      ray start --address=$DSTACK_MASTER_NODE_IP:6379
    fi

# Expose Ray dashboard port
ports:
  - 8265

resources:
  gpu: 80GB:8
  shm_size: 128GB

# Save checkpoints on the instance
volumes:
  - /checkpoints:/checkpoints
```

</div>

We are using verl's Docker image for vLLM with FSDP. See [Installation :material-arrow-top-right-thin:{ .external }](https://verl.readthedocs.io/en/latest/start/install.html){:target="_blank"} for more details.

The `RAGEN` setup script `scripts/setup_ragen.sh` isolates dependencies within a Conda environment.

Note that the Ray setup in the RAGEN environment is missing the dashboard, so we reinstall it using `ray[default]`.

Now, if you run this task via `dstack apply`, it will automatically forward Ray's dashboard port to `localhost:8265`.

<div class="termy">

```shell
$ dstack apply -f examples/distributed-training/ray-ragen/.dstack.yml
```

</div>

As long as `dstack apply` is attached, you can use `localhost:8265` to submit Ray jobs for execution.
If `dstack apply` is detached, you can use `dstack attach` to re-attach.
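For example, to re-attach to this run later (a sketch, assuming the run name `ray-ragen-cluster` from the task configuration above):

<div class="termy">

```shell
$ dstack attach ray-ragen-cluster
```

</div>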
## Submit Ray jobs

Before you can submit Ray jobs, make sure to install `ray` locally:

<div class="termy">

```shell
$ pip install ray
```

</div>

Now you can submit the training job to the Ray cluster, which is available at `localhost:8265`:

<div class="termy">

```shell
$ export RAY_ADDRESS=http://localhost:8265
$ ray job submit \
  -- bash -c "\
    export PYTHONPATH=/workflow/RAGEN; \
    cd /workflow/RAGEN; \
    /workflow/miniconda/envs/ragen/bin/python train.py \
      --config-name base \
      system.CUDA_VISIBLE_DEVICES=[0,1,2,3,4,5,6,7] \
      model_path=Qwen/Qwen2.5-7B-Instruct \
      trainer.experiment_name=agent-fine-tuning-Qwen2.5-7B \
      trainer.n_gpus_per_node=8 \
      trainer.nnodes=2 \
      micro_batch_size_per_gpu=2 \
      trainer.default_local_dir=/checkpoints \
      trainer.save_freq=50 \
      actor_rollout_ref.rollout.tp_size_check=False \
      actor_rollout_ref.rollout.tensor_model_parallel_size=4"
```

</div>
!!! info "Training parameters"
    1. `actor_rollout_ref.rollout.tensor_model_parallel_size=4`, because `Qwen/Qwen2.5-7B-Instruct` has 28 attention heads and the number of attention heads must be divisible by `tensor_model_parallel_size`
    2. `actor_rollout_ref.rollout.tp_size_check=False`, because if set to `True`, `tensor_model_parallel_size` must be equal to `trainer.n_gpus_per_node`
    3. `micro_batch_size_per_gpu=2`, to keep the RAGEN paper's `rollout_filter_ratio` and `es_manager` settings as they are for a world size of `16`
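As a side note, once the job is submitted you can also track it with Ray's job CLI from the same terminal; a small sketch (the submission ID below is a placeholder printed by `ray job submit`):

<div class="termy">

```shell
$ ray job list
$ ray job logs raysubmit_XXXXXXXX --follow  # placeholder submission ID
```

</div>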
Using Ray via `dstack` is a powerful way to get access to the rich Ray ecosystem while benefiting from `dstack`'s provisioning capabilities.

!!! info "What's next"
    1. Check the [Clusters](https://dstack.ai/docs/guides/clusters) guide
    2. Read about [distributed tasks](https://dstack.ai/docs/concepts/tasks#distributed-tasks) and [fleets](https://dstack.ai/docs/concepts/fleets)
    3. Browse Ray's [docs :material-arrow-top-right-thin:{ .external }](https://docs.ray.io/en/latest/train/examples.html){:target="_blank"} for other examples.

examples/misc/ray/README.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ name: ray-cluster
 nodes: 4
 commands:
   - pip install -U "ray[default]"
-  - >
+  - |
     if [ $DSTACK_NODE_RANK = 0 ]; then
       ray start --head --port=6379;
     else

mkdocs.yml

Lines changed: 2 additions & 0 deletions
@@ -264,6 +264,8 @@ nav:
       - RCCL tests: examples/clusters/rccl-tests/index.md
       - A3 Mega: examples/clusters/a3mega/index.md
       - A3 High: examples/clusters/a3high/index.md
+  - Distributed training:
+      - Ray+RAGEN: examples/distributed-training/ray-ragen/index.md
   - Deployment:
       - SGLang: examples/inference/sglang/index.md
       - vLLM: examples/inference/vllm/index.md
