# Automated Context Length Testing for Large Language Models

This script automates the process of determining the **maximum context length** a large language model (LLM) can handle under various distributed training configurations, including different GPU counts and sequence parallelism settings. It iteratively increases the context length during training until an **Out-of-Memory (OOM)** error occurs, logging results along the way and supporting advanced features such as RoPE scaling, FSDP strategies, and parameter offloading.

---

## 🧰 Requirements

Ensure Trinity-RFT is properly installed ([Installation Guide](https://modelscope.github.io/Trinity-RFT/en/main/tutorial/trinity_installation.html)). No additional dependencies are required.

---
## 🛠️ Configuration Files

The script relies on two external files:

1. **`context_length.yaml`**
   - Located in the same directory as this script.
   - Defines the base training configuration used by `trinity`.

2. **`workflow/` plugin directory**
   - Contains the `CustomWorkflow` expected by `trinity`, which provides a synthetic training data generator (see the sketch below).

Ensure both files exist at runtime. You can modify them to customize the training process.
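
For intuition, such a generator only needs to produce samples of a controllable token length. The sketch below is illustrative only and does not reproduce the actual `CustomWorkflow` plugin interface; `make_synthetic_sample` is a hypothetical helper:

```python
# Illustrative stand-in for a synthetic length-test sample generator.
# It repeats a filler string and truncates the token ids to the exact
# target context length. Not the real CustomWorkflow API.
from transformers import AutoTokenizer

def make_synthetic_sample(tokenizer, target_len: int, filler: str = "hello ") -> list[int]:
    # Each repetition of the filler yields at least one token, so repeating
    # it target_len times guarantees enough tokens to truncate from.
    ids = tokenizer(filler * target_len, add_special_tokens=False)["input_ids"]
    return ids[:target_len]

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
assert len(make_synthetic_sample(tok, 4096)) == 4096
```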
---
## 🚀 Usage

### Run the Script

```bash
python search_context_length_capacity.py \
    --model_path /path/to/your/model \
    --start_length 4096 \
    --log_dir ./logs \
    --test_gpu_num 1 2 4 \
    --test_sp_num 1 2 \
    --trainer_strategy fsdp \
    --save_hf_checkpoint last \
    --timeout 2400
```
### Required Arguments

| Argument | Description |
|----------|-------------|
| `--model_path` | Path to the pretrained Hugging Face model directory. |
### Optional Arguments

| Argument | Default | Description |
|----------|---------|-------------|
| `--start_length` | `4096` | Initial context length to begin testing. |
| `--log_dir` | `./logs` | Directory to save logs and results. |
| `--test_gpu_num` | `1 2 4 6` | List of GPU counts to test for scalability. |
| `--test_sp_num` | `1` | Sequence parallel group sizes to evaluate. Each value must divide both the GPU count and the model's number of attention heads (see the sketch after this table). |
| `--save_hf_checkpoint` | `last` | When to save HF-format checkpoints (`always`, `never`, `last`). |
| `--entropy_saving` | `False` | Enable memory-saving techniques (if supported). |
| `--offload` | `False` | Offload parameters to CPU to reduce GPU memory usage. |
| `--trainer_strategy` | `fsdp` | Distributed training strategy (`fsdp` or `fsdp2`). |
| `--timeout` | `2400` (40 min) | Maximum time per job before forced termination. |
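
The divisibility constraint on `--test_sp_num` can be checked up front. A minimal sketch, assuming the attention-head count is read from the model's Hugging Face config (`valid_sp_sizes` is an illustrative helper, not part of the script's CLI):

```python
# Keep only SP sizes that evenly divide both the GPU count and the
# model's number of attention heads.
from transformers import AutoConfig

def valid_sp_sizes(model_path: str, gpu_num: int, sp_candidates: list[int]) -> list[int]:
    num_heads = AutoConfig.from_pretrained(model_path).num_attention_heads
    return [sp for sp in sp_candidates if gpu_num % sp == 0 and num_heads % sp == 0]

# e.g. with 4 GPUs and a 16-head model, all of 1, 2, and 4 qualify.
print(valid_sp_sizes("Qwen/Qwen3-0.6B", 4, [1, 2, 4]))
```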
---
## 📂 Output Structure

Logs are saved in a structured hierarchy under `--log_dir`:

```
logs/
└── <model_name>/
    └── gpu-<N>/
        └── sp-<S>/
            └── model_len-<L>.log
```

Each log file corresponds to a specific `(GPU count, SP size, context length)` combination.

Final results are printed to stdout:

```
model_name = Qwen3-0.6B, trainer_gpu_num = 4, sp_num = 2, max_model_len = 40960
```
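
For post-processing, the layout above is easy to walk programmatically. A small hypothetical helper (not shipped with the script) that lists every attempted `(model, GPU count, SP size, length)` combination:

```python
# Enumerate (model, gpu, sp, length) tuples from the log hierarchy above.
from pathlib import Path

def list_attempts(log_dir: str = "./logs"):
    for path in Path(log_dir).glob("*/gpu-*/sp-*/model_len-*.log"):
        model = path.parts[-4]
        gpu = int(path.parts[-3].removeprefix("gpu-"))
        sp = int(path.parts[-2].removeprefix("sp-"))
        length = int(path.stem.removeprefix("model_len-"))
        yield model, gpu, sp, length

for attempt in sorted(list_attempts()):
    print(attempt)
```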
---
## ⚠️ Notes & Best Practices

- **Model Compatibility**: Ensure the model supports dynamic context extension (e.g., via RoPE scaling).
- **SP Validity**: Only valid SP values (divisors of both the GPU count and the attention-head count) are tested.
- **Checkpoint Root**: Controlled by the `TRINITY_CHECKPOINT_ROOT_DIR` env var (default: `./checkpoints/length-test`). This directory is cleared before each trial.
- **Early Termination**: If a run fails due to OOM, the search stops and returns the last successful length.
- **Large Steps After Base Limit**: The base step size is 4096; once the context exceeds `max_position_embeddings`, the step size becomes a quarter of that original limit (see the sketch after this list).
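
Taken together, the last two bullets describe a simple linear search. A minimal sketch of the loop, assuming a `run_trial(length)` callable that launches one training job and returns `False` on OOM (the helper name is illustrative, not the script's actual interface):

```python
# Minimal sketch of the context-length search described above.
def search_max_length(start_length: int, max_position_embeddings: int, run_trial) -> int:
    length, last_ok = start_length, 0
    while run_trial(length):
        last_ok = length
        # Base step is 4096; past the model's native limit, grow by a
        # quarter of max_position_embeddings per trial instead.
        if length < max_position_embeddings:
            length += 4096
        else:
            length += max_position_embeddings // 4
    return last_ok  # longest length that trained without OOM
```

With Qwen3's 40960-token native limit, this yields 4096-token steps up to 40960 and 10240-token steps beyond it, which matches the starred entries in the tables below.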
---
## 🧪 Example: Test Qwen3-0.6B Context Length

```bash
python search_context_length_capacity.py \
    --model_path Qwen/Qwen3-0.6B \
    --test_gpu_num 1 2 4 6 \
    --test_sp_num 1 2 4 \
    --start_length 8192 \
    --log_dir ./results/qwen3-length-scan \
    --trainer_strategy fsdp2 \
    --timeout 3600
```

This command tests the maximum context length of the Qwen3-0.6B model with 1, 2, 4, and 6 GPUs and SP sizes of 1, 2, and 4, using the FSDP2 strategy, and saves logs to `./results/qwen3-length-scan`.

---
## 📚 Test Results

Below are empirical results from running this script on various Qwen3 models across different hardware and optimization configurations. These benchmarks help guide configuration choices for maximizing context length within memory constraints.

### Legend
- `*` indicates RoPE scaling (YARN) was applied; the context length exceeds the model's native `max_position_embeddings`.
- `-` indicates OOM occurred even at a 4096 context length.
- All tests use `start_length=4096` and increase dynamically.
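
For reference, the YARN factor implied by a starred entry is simply the tested length divided by the model's native window (40960 tokens for the Qwen3 models here). The script applies RoPE scaling automatically; the snippet below is only a hand-computed illustration, assuming the Hugging Face `rope_scaling` config convention:

```python
# Build a YARN rope_scaling dict large enough to cover target_len.
def yarn_config(target_len: int, native_len: int = 40960) -> dict:
    factor = max(1.0, target_len / native_len)  # e.g. 92160 / 40960 = 2.25
    return {
        "rope_type": "yarn",
        "factor": factor,
        "original_max_position_embeddings": native_len,
    }

print(yarn_config(92160))
```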
### A100 80GB

#### Vanilla Settings (Baseline)

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 20480 | 16384 | - | - | - |
| 2 | 1 | 24576 | 20480 | 12288 | - | - |
| 2 | 2 | 40960 | 40960 | 24576 | - | - |
| 4 | 1 | 24576 | 20480 | 20480 | 8192 | - |
| 4 | 2 | 40960 | 40960 | 36864 | 20480 | - |
| 4 | 4 | 92160* | 81920* | 71680* | 40960 | - |
| 6 | 1 | 24576 | 20480 | 20480 | 12288 | 8192 |
| 6 | 2 | 40960 | 40960 | 40960 | 28672 | 16384 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`

> ⚠️ Must be set **before** launching any processes (including Ray clusters), e.g. `export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` in the shell that starts them.

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 24576 | 16384 | - | - | - |
| 2 | 1 | 28672 | 24576 | 16384 | 4096 | - |
| 2 | 2 | 51200* | 40960 | 32768 | - | - |
| 4 | 1 | 28672 | 24576 | 20480 | 12288 | 4096 |
| 4 | 2 | 51200* | 51200* | 40960 | 28672 | 8192 |
| 4 | 4 | 112640* | 102400* | 81920* | 51200* | 20480 |
| 6 | 1 | 28672 | 28672 | 24576 | 16384 | 8192 |
| 6 | 2 | 61440* | 51200* | 40960 | 32768 | 20480 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, FSDP2 Offload and `save_hf_checkpoint=never`

> Uses: `--offload --trainer_strategy fsdp2 --save_hf_checkpoint never`

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 2 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 2 | 2 | 61440* | 51200* | 51200* | 51200* | 40960 |
| 4 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 4 | 2 | 61440* | 51200* | 51200* | 51200* | 40960 |
| 4 | 4 | 122880* | 112640* | 102400* | 102400* | 92160* |
| 6 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 6 | 2 | 61440* | 51200* | 51200* | 51200* | 40960 |
### H20 96GB (Higher VRAM, Slower Bandwidth)

#### Vanilla Settings

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 28672 | 20480 | 8192 | - | - |
| 2 | 1 | 28672 | 24576 | 16384 | 8192 | - |
| 2 | 2 | 51200* | 51200* | 36864 | 16384 | - |
| 4 | 1 | 28672 | 28672 | 24576 | 16384 | 8192 |
| 4 | 2 | 61440* | 51200* | 40960 | 28672 | 16384 |
| 4 | 4 | 112640* | 102400* | 92160* | 51200* | 32768 |
| 6 | 1 | 28672 | 28672 | 24576 | 20480 | 12288 |
| 6 | 2 | 61440* | 51200* | 51200* | 36864 | 24576 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 32768 | 24576 | 8192 | - | - |
| 2 | 1 | 36864 | 28672 | 20480 | 8192 | - |
| 2 | 2 | 71680* | 61440* | 40960 | 16384 | - |
| 4 | 1 | 36864 | 32768 | 28672 | 20480 | 8192 |
| 4 | 2 | 71680* | 61440* | 51200* | 36864 | 20480 |
| 4 | 4 | 143360* | 122880* | 102400* | 71680* | 36864 |
| 6 | 1 | 36864 | 32768 | 28672 | 20480 | 16384 |
| 6 | 2 | 71680* | 61440* | 51200* | 40960 | 32768 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` and FSDP2 Offload

> Uses: `--offload --trainer_strategy fsdp2`

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 2 | 71680* | 61440* | 61440* | 61440* | 51200* |
| 4 | 1 | 36864 |  | 32768 | 28672 | 28672 |
| 4 | 2 | 71680* | 71680* | 61440* | 61440* |  |
| 4 | 4 | 143360* | 133120* | 133120* | 122880* | 112640* |
| 6 | 1 | 36864 |  | 32768 | 28672 | 28672 |
| 6 | 2 | 71680* | 71680* | 61440* | 61440* | 51200* |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, FSDP2 Offload and `save_hf_checkpoint=never`

> Uses: `--offload --trainer_strategy fsdp2 --save_hf_checkpoint never`

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 2 | 71680* | 61440* | 61440* | 61440* |  |
| 4 | 1 | 36864 |  | 32768 | 28672 | 28672 |
| 4 | 2 | 71680* | 71680* | 61440* | 61440* | 51200* |
| 4 | 4 | 143360* | 133120* | 133120* | 122880* | 112640* |
| 6 | 1 | 36864 |  | 32768 | 28672 | 28672 |
| 6 | 2 | 71680* | 71680* | 61440* | 61440* | 51200* |
