Add scripts to search context length capacity on given settings. #423
Merged: pan-x-c merged 14 commits into agentscope-ai:main from chenyushuo:add/context_length_search on Dec 10, 2025
Changes from 7 commits
14 commits:
2df31f0 Add scripts to search context length capacity on given settings. (chenyushuo)
6432b19 add readme and refactor on scripts (chenyushuo)
5c7eccc add `trinity_trainer_configs.md` (chenyushuo)
2c9e31c add doc for trainer settings (chenyushuo)
ea0fca2 add multi node support (chenyushuo)
d5f837e apply reviews (chenyushuo)
1130bbe add explanatory docs for `max_token_len_per_gpu` (chenyushuo)
5e860e9 apply reviews (chenyushuo)
ab68e76 add incremental timeout (chenyushuo)
dadec5b apply reviews (chenyushuo)
1c930e2 rename `trinity_trainer_configs` to `trinity_gpu_configs` (chenyushuo)
6cb399e add GPU Resource and Training Configuration Guide to readme (chenyushuo)
5d72ed9 add GPU Resource and Training Configuration Guide to readme (chenyushuo)
207c97e rename develop_selector title (chenyushuo)
docs/sphinx_doc/source/tutorial/trinity_trainer_configs.md (277 additions, 0 deletions)
Large diffs are not rendered by default.
docs/sphinx_doc/source_zh/tutorial/trinity_trainer_configs.md (277 additions, 0 deletions)
Large diffs are not rendered by default.
@@ -0,0 +1,241 @@

# Automated Context Length Testing for Large Language Models

This script automates the process of determining the **maximum context length** a large language model (LLM) can handle under various distributed training configurations, including different GPU counts and sequence parallelism settings. It iteratively increases the context length during training until an **Out-of-Memory (OOM)** error occurs, logging results and supporting advanced features such as RoPE scaling, FSDP strategies, and offloading.

---

## 🧰 Requirements

Ensure Trinity-RFT is properly installed ([Installation Guide](https://modelscope.github.io/Trinity-RFT/en/main/tutorial/trinity_installation.html)). No extra dependencies are required.
---

## 🛠️ Configuration Files

The script relies on two external files:

1. **`context_length.yaml`**
   - Located in the same directory as this script.
   - Defines the base training configuration used by `trinity`.

2. **`workflow/` plugin directory**
   - Contains the `CustomWorkflow` expected by `trinity`, which provides a synthetic training data generator.

Ensure both exist at runtime. You can modify these files to customize the training process.
---

## 🚀 Usage

### Run the Script

```bash
python search_context_length_capacity.py \
    --model_path /path/to/your/model \
    --start_length 4096 \
    --log_dir ./logs \
    --test_gpu_num 1 2 4 \
    --test_sp_num 1 2 \
    --trainer_strategy fsdp \
    --save_hf_checkpoint last \
    --timeout 2400
```

### Required Arguments

| Argument | Description |
|----------|-------------|
| `--model_path` | Path to the pretrained Hugging Face model directory. |

### Optional Arguments

| Argument | Default | Description |
|----------|---------|-------------|
| `--start_length` | `4096` | Initial context length to begin testing. |
| `--log_dir` | `./logs` | Directory to save logs and results. |
| `--checkpoint_path` | `os.environ.get("TRINITY_CHECKPOINT_ROOT_DIR", "./checkpoints/length-test")` | Checkpoint path for testing. Note that this directory is deleted during the test, so specify a path that is not used by other processes. |
| `--test_gpu_num` | `1 2 4 6` | List of GPU counts to test scalability. |
| `--test_sp_num` | `1` | Sequence parallel group sizes to evaluate. Must divide the tested GPU counts and the number of attention heads. |
| `--save_hf_checkpoint` | `last` | When to save HF-format checkpoints (`always`, `never`, `last`). |
| `--entropy_saving` | `False` | Enable memory-saving entropy computation (if supported). |
| `--offload` | `False` | Offload parameters to CPU to reduce GPU memory usage. |
| `--trainer_strategy` | `fsdp` | Distributed training strategy (`fsdp` or `fsdp2`). |
| `--timeout` | `2400` (40 min) | Maximum time per job before forced termination. |
| `--dlc` | `False` | Specify when running in Aliyun PAI DLC. |
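Several of the optional flags above target memory pressure and can be combined. The following is an illustrative invocation (not taken from the PR) that uses only the flags documented in the tables above; it assumes `--offload` and `--entropy_saving` are plain on/off switches:

```bash
# Illustrative: sweep an 8B model on 4 GPUs with CPU offloading and
# chunked entropy computation, skipping HF checkpoint export entirely.
python search_context_length_capacity.py \
    --model_path Qwen/Qwen3-8B \
    --test_gpu_num 4 \
    --test_sp_num 1 2 4 \
    --offload \
    --entropy_saving \
    --trainer_strategy fsdp2 \
    --save_hf_checkpoint never
```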
---

## 📂 Output Structure

Logs are saved in a structured hierarchy under `--log_dir`:

```
logs/
└── <model_name>/
    └── gpu-<N>/
        └── sp-<S>/
            └── model_len-<L>.log
```

Each log file corresponds to a specific `(GPU count, SP size, context length)` combination.

Final results are printed to stdout:

```
model_name = Qwen3-0.6B, trainer_gpu_num = 4, sp_num = 2, max_model_len = 40960
```
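While a sweep is running, an individual trial can be followed live from its log file; the path below is illustrative and simply follows the layout above:

```bash
tail -f logs/Qwen3-0.6B/gpu-4/sp-2/model_len-40960.log
```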
---

## ⚠️ Notes & Best Practices

- **Model Compatibility**: Ensure the model supports dynamic context extension (e.g., via RoPE scaling).
- **SP Validity**: Only valid SP values (divisors of both the GPU count and the number of attention heads) are tested.
- **Checkpoint Root**: Controlled by the `TRINITY_CHECKPOINT_ROOT_DIR` env var (default: `./checkpoints/length-test`). Cleared before each trial.
- **Early Termination**: If any run fails due to OOM, the search stops and returns the last successful length.
- **Large Steps After Base Limit**: The basic step size is 4096; once the context exceeds `max_position_embeddings`, the step size becomes a quarter of that original limit (see the sketch below).
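For intuition, the search performed for each `(GPU count, SP size)` pair can be sketched in shell. This is a simplified sketch only: the actual script is Python and additionally handles timeouts, RoPE scaling, and checkpoint cleanup. It assumes the `trinity run` entry point and the environment variables consumed by `context_length.yaml` (shown later in this diff); the native limit of 40960 is model-specific and illustrative.

```bash
#!/bin/bash
# Sketch of the incremental context-length search (assumptions noted above).
LEN=4096              # --start_length
STEP=4096             # basic step size
NATIVE_LIMIT=40960    # the model's max_position_embeddings (model-specific)
LAST_OK=0
while true; do
  if [ "$LEN" -ge "$NATIVE_LIMIT" ]; then
    STEP=$((NATIVE_LIMIT / 4))   # quarter-of-native steps beyond the base limit
  fi
  MAX_MODEL_LEN=$LEN GPU_NUM=4 SP_NUM=2 \
    trinity run --config context_length.yaml \
    > "logs/Qwen3-0.6B/gpu-4/sp-2/model_len-${LEN}.log" 2>&1 || break  # OOM ends the search
  LAST_OK=$LEN
  LEN=$((LEN + STEP))
done
echo "max_model_len = ${LAST_OK}"
```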
---

## 🧪 Example: Test Qwen3-0.6B Context Length

```bash
python search_context_length_capacity.py \
    --model_path Qwen/Qwen3-0.6B \
    --test_gpu_num 1 2 4 6 \
    --test_sp_num 1 2 4 \
    --start_length 8192 \
    --log_dir ./results/qwen3-length-scan \
    --trainer_strategy fsdp2 \
    --timeout 3600
```

This command tests the maximum context length of the Qwen3-0.6B model with 1, 2, 4, and 6 GPUs and sequence parallel sizes of 1, 2, and 4, using the FSDP2 strategy, and saves logs to `./results/qwen3-length-scan`.
---

## 📚 Test Results

Below are empirical results from running this script on various Qwen3 models across different hardware and optimization configurations. These benchmarks help guide configuration choices for maximizing context length within memory constraints.

### Legend

- `*` indicates RoPE scaling (YARN) was applied; the context length exceeds the model's native `max_position_embeddings`.
- `-` indicates OOM occurred even at a context length of 4096.
- All tests use `start_length=4096` and increase dynamically.
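Entries marked `*` were obtained by injecting a YARN `rope_scaling` dict through the `ROPE_SCALING` environment variable, which the config decodes as JSON via `${oc.decode:${oc.env:ROPE_SCALING,null}}` (see the config later in this diff). The exact values the script passes are not shown here; a hand-written override might look like this (illustrative values only):

```bash
export ROPE_SCALING='{"rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 40960}'
```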
### A100 80GB

#### Vanilla Settings (Baseline)

| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 20480 | 16384 | - | - | - |
| 2 | 1 | 24576 | 20480 | 12288 | - | - |
| 2 | 2 | 40960 | 40960 | 24576 | - | - |
| 4 | 1 | 24576 | 20480 | 20480 | 8192 | - |
| 4 | 2 | 40960 | 40960 | 36864 | 20480 | - |
| 4 | 4 | 92160* | 81920* | 71680* | 40960 | - |
| 6 | 1 | 24576 | 20480 | 20480 | 12288 | 8192 |
| 6 | 2 | 40960 | 40960 | 40960 | 28672 | 16384 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`

> ⚠️ Must be set **before** launching any processes (including Ray clusters).
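In practice this means exporting the variable in the shell before Ray (or any training process) starts, for example:

```bash
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
ray stop            # ensure no stale workers keep the old allocator setting
ray start --head
python search_context_length_capacity.py --model_path Qwen/Qwen3-0.6B
```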
| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 24576 | 16384 | - | - | - |
| 2 | 1 | 28672 | 24576 | 16384 | 4096 | - |
| 2 | 2 | 51200* | 40960 | 32768 | - | - |
| 4 | 1 | 28672 | 24576 | 20480 | 12288 | 4096 |
| 4 | 2 | 51200* | 51200* | 40960 | 28672 | 8192 |
| 4 | 4 | 112640* | 102400* | 81920* | 51200* | 20480 |
| 6 | 1 | 28672 | 28672 | 24576 | 16384 | 8192 |
| 6 | 2 | 61440* | 51200* | 40960 | 32768 | 20480 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, FSDP2 Offload and `save_hf_checkpoint=never`

> Uses: `--offload --trainer_strategy fsdp2 --save_hf_checkpoint never`
| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 2 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 2 | 2 | 61440* | 51200* | 51200* | 51200* | 40960 |
| 4 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 4 | 2 | 61440* | 51200* | 51200* | 51200* | 40960 |
| 4 | 4 | 122880* | 112640* | 102400* | 102400* | 92160* |
| 6 | 1 | 28672 | 28672 | 28672 | 24576 | 24576 |
| 6 | 2 | 61440* | 51200* | 51200* | 51200* | 40960 |
### H20 96GB (Higher VRAM, Slower Bandwidth)

#### Vanilla Settings
| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 28672 | 20480 | 8192 | - | - |
| 2 | 1 | 28672 | 24576 | 16384 | 8192 | - |
| 2 | 2 | 51200* | 51200* | 36864 | 16384 | - |
| 4 | 1 | 28672 | 28672 | 24576 | 16384 | 8192 |
| 4 | 2 | 61440* | 51200* | 40960 | 28672 | 16384 |
| 4 | 4 | 112640* | 102400* | 92160* | 51200* | 32768 |
| 6 | 1 | 28672 | 28672 | 24576 | 20480 | 12288 |
| 6 | 2 | 61440* | 51200* | 51200* | 36864 | 24576 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`
| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 32768 | 24576 | 8192 | - | - |
| 2 | 1 | 36864 | 28672 | 20480 | 8192 | - |
| 2 | 2 | 71680* | 61440* | 40960 | 16384 | - |
| 4 | 1 | 36864 | 32768 | 28672 | 20480 | 8192 |
| 4 | 2 | 71680* | 61440* | 51200* | 36864 | 20480 |
| 4 | 4 | 143360* | 122880* | 102400* | 71680* | 36864 |
| 6 | 1 | 36864 | 32768 | 28672 | 20480 | 16384 |
| 6 | 2 | 71680* | 61440* | 51200* | 40960 | 32768 |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` and FSDP2 Offload

> Uses: `--offload --trainer_strategy fsdp2`
| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 2 | 71680* | 61440* | 61440* | 61440 | 51200* |
| 4 | 1 | 36864 | | 32768 | 28672 | 28672 |
| 4 | 2 | 71680* | 71680* | 61440* | 61440* | |
| 4 | 4 | 143360* | 133120* | 133120* | 122880* | 112640* |
| 6 | 1 | 36864 | | 32768 | 28672 | 28672 |
| 6 | 2 | 71680* | 71680* | 61440* | 61440* | 51200* |
#### Enable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, FSDP2 Offload and `save_hf_checkpoint=never`

> Uses: `--offload --trainer_strategy fsdp2 --save_hf_checkpoint never`
| #GPU | SP | Qwen3-0.6B | Qwen3-1.7B | Qwen3-4B | Qwen3-8B | Qwen3-14B |
| ---- | -- | ---------- | ---------- | -------- | -------- | --------- |
| 1 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 1 | 36864 | 36864 | 32768 | 28672 | 28672 |
| 2 | 2 | 71680* | 61440* | 61440* | 61440* | |
| 4 | 1 | 36864 | | 32768 | 28672 | 28672 |
| 4 | 2 | 71680* | 71680* | 61440* | 61440* | 51200* |
| 4 | 4 | 143360* | 133120* | 133120* | 122880* | 112640* |
| 6 | 1 | 36864 | | 32768 | 28672 | 28672 |
| 6 | 2 | 71680* | 71680* | 61440* | 61440* | 51200* |
@@ -0,0 +1,112 @@

```yaml
mode: both
project: Trinity-RFT-context-length-exp
group: length-test
name: length-test
checkpoint_root_dir: ${oc.env:TRINITY_CHECKPOINT_ROOT_DIR,./checkpoints/length-test}
continue_from_checkpoint: false
algorithm:
  algorithm_type: grpo
  repeat_times: ${oc.env:REPEAT_TIMES,8}
  advantage_fn: grpo
  sample_strategy: default
  policy_loss_fn: ppo
  kl_penalty_fn: none
  kl_loss_fn: k2
  entropy_loss_fn: default
  optimizer:
    lr: 1.0e-05
    lr_warmup_steps_ratio: 0.0
    warmup_style: constant
data_processor: {}
model:
  model_path: ${oc.env:MODEL_PATH,Qwen/Qwen3-0.6B}
  max_prompt_tokens: ${oc.env:PROMPT_LEN,2048}
  max_model_len: ${oc.env:MAX_MODEL_LEN,4096}
  rope_scaling: ${oc.decode:${oc.env:ROPE_SCALING,null}}
cluster:
  node_num: 1
  gpu_per_node: ${oc.env:GPU_NUM,8}
buffer:
  batch_size: 1
  total_steps: 2
  explorer_input:
    taskset:
      name: taskset
      storage_type: file
      path: openai/gsm8k
      split: train
      subset_name: main
      format:
        prompt_key: question
        response_key: answer
      rollout_args:
        temperature: 1.0
        logprobs: 0
      workflow_args:
        prompt_len: ${model.max_prompt_tokens}
        max_model_len: ${model.max_model_len}
    eval_tasksets: []
    default_workflow_type: dummy_exp_workflow
    default_reward_fn_type: math_reward
  trainer_input:
    experience_buffer:
      name: experience_buffer
      storage_type: queue
      replay_buffer:
        enable: false
        priority_fn: linear_decay
        reuse_cooldown_time: null
        priority_fn_args:
          decay: 2.0
explorer:
  runner_per_model: 8
  rollout_model:
    engine_num: ${oc.env:ENGINE_NUM,1}
    tensor_parallel_size: 1
    enforce_eager: true
    enable_prefix_caching: false
    enable_chunked_prefill: false
    gpu_memory_utilization: 0.9
    dtype: bfloat16
    seed: 42
    enable_thinking: false
    enable_history: false
    enable_openai_api: false
    enable_auto_tool_choice: false
    tool_call_parser: null
    reasoning_parser: null
  auxiliary_models: []
  eval_interval: 1000
trainer:
  trainer_type: verl
  trainer_strategy: ${oc.env:TRAINER_STRATEGY,fsdp}
  save_interval: 100
  enable_preview: true
  grad_clip: 1.0
  ulysses_sequence_parallel_size: ${oc.env:SP_NUM,1}
  save_hf_checkpoint: ${oc.env:SAVE_HF_CHECKPOINT,last}
  trainer_config:
    actor_rollout_ref:
      actor:
        entropy_from_logits_with_chunking: ${oc.env:ENTROPY_SAVING,false}
        entropy_checkpointing: ${oc.env:ENTROPY_SAVING,false}
        fsdp_config:
          param_offload: ${oc.env:OFFLOAD,false}
          optimizer_offload: ${oc.env:OFFLOAD,false}
          offload_policy: ${oc.env:OFFLOAD,false}
      ref:
        entropy_from_logits_with_chunking: ${oc.env:ENTROPY_SAVING,false}
        entropy_checkpointing: ${oc.env:ENTROPY_SAVING,false}
        fsdp_config:
          param_offload: ${oc.env:OFFLOAD,false}
          optimizer_offload: ${oc.env:OFFLOAD,false}
          offload_policy: ${oc.env:OFFLOAD,false}

monitor:
  monitor_type: tensorboard
synchronizer:
  sync_method: nccl
  sync_style: fixed
  sync_interval: 1
  sync_timeout: 1200
log:
  level: INFO
```
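Nearly every knob in this file resolves from an environment variable via OmegaConf's `${oc.env:...}` resolver; this is how the search script varies settings between trials without editing the file. A single manual trial could therefore be launched like this (a sketch, assuming the standard `trinity run` entry point):

```bash
MODEL_PATH=Qwen/Qwen3-0.6B \
MAX_MODEL_LEN=16384 \
GPU_NUM=4 SP_NUM=2 \
TRAINER_STRATEGY=fsdp2 OFFLOAD=true SAVE_HF_CHECKPOINT=never \
trinity run --config context_length.yaml
```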