Commit cd16bed

chenopis and cursoragent committed
docs: fix broken links in 0.1.0 documentation
Fix 21 broken links across 7 doc files, identified via a broken-link audit of the published docs at docs.nvidia.com/nemo/automodel/0.1.0/.

Fixes:

- fp8_training.md: fix GitHub org (NVIDIA/NeMo-Automodel → NVIDIA-NeMo/Automodel) and Llama subdir (llama/ → llama3_1/)
- peft.md, sft.md: update recipe link (finetune.py → train_ft.py) and example script path (examples/llm/ → examples/llm_finetune/)
- gemma3-3n.md: insert missing components/ in dataset URLs
- pipelining.md: fix 7 example script paths and 1 YAML config path
- installation.md: fix example script path
- README.md: fix llama subdir (llama/ → llama3_2/)

Co-authored-by: Cursor <cursoragent@cursor.com>
Signed-off-by: Andrew Chen <andrewch@nvidia.com>
1 parent 22efc28 commit cd16bed
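The commit message mentions a broken-link audit of the published docs. A minimal sketch of how such an audit could be scripted (the function names and the liveness-check strategy are illustrative assumptions, not the actual tooling used for this commit):

```python
import re

# Matches inline Markdown links of the form [text](https://url)
LINK_RE = re.compile(r"\[([^\]]*)\]\((https?://[^)\s]+)\)")

def extract_links(markdown_text):
    """Return all absolute http(s) URLs linked in a Markdown string."""
    return [url for _text, url in LINK_RE.findall(markdown_text)]

def audit(markdown_text, is_alive):
    """Return the linked URLs for which is_alive(url) is False.

    is_alive is injected so the audit can use a real HTTP check
    (e.g. a HEAD request, treating 404 as broken) in production
    and a cheap stub in tests.
    """
    return [url for url in extract_links(markdown_text) if not is_alive(url)]
```

In practice `is_alive` would issue a request with something like `urllib.request` against each URL across the doc tree; the 21 fixes below correspond to the URLs such a pass would flag.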

File tree

7 files changed: +21 −21 lines changed

README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -75,7 +75,7 @@ To get started quickly, NeMo AutoModel provides a collection of ready-to-use rec
 **And more**: Check out more [LLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune) and [VLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/vlm_finetune) examples! Any causal LM on Hugging Face Hub can be used with the base recipe template!
 
 ### Run a Recipe
-To run a NeMo AutoModel recipe, you need a recipe script (e.g., [LLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/finetune.py), [VLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/vlm_finetune/finetune.py)) and a YAML config file (e.g., [LLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/llama/llama3_2_1b_squad.yaml), [VLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2_peft.yaml)):
+To run a NeMo AutoModel recipe, you need a recipe script (e.g., [LLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/finetune.py), [VLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/vlm_finetune/finetune.py)) and a YAML config file (e.g., [LLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/llama3_2/llama3_2_1b_squad.yaml), [VLM](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2_peft.yaml)):
 
 ```
 # Command invocation format:
 uv run <recipe_script_path> --config <yaml_config_path>
````

docs/guides/fp8_training.md

Lines changed: 5 additions & 5 deletions

````diff
@@ -100,12 +100,12 @@ FP8 quantization provides measurable performance improvements while maintaining
 ## Ready-to-Use Recipes
 We provide FP8 training configs for popular models:
 
-- **Llama**: [Llama 3.1 8B](https://github.com/NVIDIA/NeMo-Automodel/blob/main/examples/llm_finetune/llama/llama3_1_8b_hellaswag_fp8.yaml)
-- **Mistral**: [Mistral 7B](https://github.com/NVIDIA/NeMo-Automodel/blob/main/examples/llm_finetune/mistral/mistral_7b_hellaswag_fp8.yaml), [Mistral Nemo 2407](https://github.com/NVIDIA/NeMo-Automodel/blob/main/examples/llm_finetune/mistral/mistral_nemo_2407_hellaswag_fp8.yaml)
-- **Qwen**: [Qwen 2.5 7B](https://github.com/NVIDIA/NeMo-Automodel/blob/main/examples/llm_finetune/qwen/qwen2_5_7b_hellaswag_fp8.yaml)
-- **Phi**: [Phi 4](https://github.com/NVIDIA/NeMo-Automodel/blob/main/examples/llm_finetune/phi/phi_4_hellaswag_fp8.yaml)
+- **Llama**: [Llama 3.1 8B](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/llama3_1/llama3_1_8b_hellaswag_fp8.yaml)
+- **Mistral**: [Mistral 7B](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/mistral/mistral_7b_hellaswag_fp8.yaml), [Mistral Nemo 2407](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/mistral/mistral_nemo_2407_hellaswag_fp8.yaml)
+- **Qwen**: [Qwen 2.5 7B](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/qwen/qwen2_5_7b_hellaswag_fp8.yaml)
+- **Phi**: [Phi 4](https://github.com/NVIDIA-NeMo/Automodel/blob/main/examples/llm_finetune/phi/phi_4_hellaswag_fp8.yaml)
 
-Check out our [examples directory](https://github.com/NVIDIA/NeMo-Automodel/tree/main/examples/llm_finetune/) for more recipes and configurations.
+Check out our [examples directory](https://github.com/NVIDIA-NeMo/Automodel/tree/main/examples/llm_finetune/) for more recipes and configurations.
 
 To run any of these FP8 training recipes, use the following command:
 
````
docs/guides/installation.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -107,7 +107,7 @@ docker run --gpus all -it --rm \
 nvcr.io/nvidia/nemo:25.07 /bin/bash -c "\
 cd /workspace/Automodel && \ # Enter the mounted repo
 pip install -e . && \ # Install Automodel in editable mode
-python3 examples/llm/finetune.py" # Run a usage example
+python3 examples/llm_finetune/finetune.py" # Run a usage example
 ```
 :::{note}
 The above `docker` command uses the volume `-v` option to mount the local `Automodel` directory
````

docs/guides/llm/peft.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -301,10 +301,10 @@ automodel finetune llm -c peft_guide.yaml
 
 ### Invoke the Recipe Script Directly
 
-Alternatively, you can run the recipe [script](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/recipes/llm/finetune.py) directly using [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html), as shown below.
+Alternatively, you can run the recipe [script](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/recipes/llm/train_ft.py) directly using [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html), as shown below.
 
 ``` bash
-torchrun --nproc-per-node=8 examples/llm/finetune.py --config peft_guide.yaml
+torchrun --nproc-per-node=8 examples/llm_finetune/finetune.py --config peft_guide.yaml
 ```
 
 ### Sample Output
````

docs/guides/llm/sft.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -29,7 +29,7 @@ vLLM.
 
 <!-- In addition to this user guide, you can also explore our Quickstart,
 which features a [standalone python3
-recipe](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/recipes/llm/finetune.py),
+recipe](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/recipes/llm/train_ft.py),
 offering hands-on demonstrations for quickly getting started with NeMo Automodel. -->
 
 ## Run SFT with NeMo Automodel
@@ -258,10 +258,10 @@ where `finetune` is name the name of the recipe file (excluding the `.py` extens
 
 ### Invoke the Recipe Script Directly
 
-Alternatively, you can run the recipe [script](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/recipes/llm/finetune.py) directly using [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html), as shown below.
+Alternatively, you can run the recipe [script](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/recipes/llm/train_ft.py) directly using [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html), as shown below.
 
 ``` bash
-torchrun --nproc-per-node=8 examples/llm/finetune.py --config sft_guide.yaml
+torchrun --nproc-per-node=8 examples/llm_finetune/finetune.py --config sft_guide.yaml
 ```
 
 ### Sample Output
````

docs/guides/omni/gemma3-3n.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -92,7 +92,7 @@ The default collate function:
 ### Preprocess Custom Datasets
 
 When using a custom dataset with a model whose Hugging Face `AutoProcessor` supports the `apply_chat_template` method, you'll need to convert your data into the Hugging Face message list format expected by the `apply_chat_template`.
-We provide [examples](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/datasets/vlm/datasets.py) demonstrating how to perform this conversion.
+We provide [examples](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/components/datasets/vlm/datasets.py) demonstrating how to perform this conversion.
 
 Some models, such as [Qwen2.5 VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct), have specific preprocessing requirements and require custom collate functions. For instance, Qwen2.5-VL uses the `qwen_vl_utils.process_vision_info` function to process images:
 
@@ -119,7 +119,7 @@ dataloader:
 _target_: nemo_automodel.datasets.vlm.collate_fns.qwen2_5_collate_fn
 ```
 
-We provide [example custom collate functions](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/datasets/vlm/collate_fns.py) that you can use as references for your implementation.
+We provide [example custom collate functions](https://github.com/NVIDIA-NeMo/Automodel/blob/main/nemo_automodel/components/datasets/vlm/collate_fns.py) that you can use as references for your implementation.
 
 ## Run the Fine-Tune Script
 
````
docs/guides/pipelining.md

Lines changed: 7 additions & 7 deletions

````diff
@@ -102,7 +102,7 @@ For a complete training example:
 
 ```bash
 # Run fine-tuning with 2-way pipeline parallelism using Llama 3.1 8B
-uv run torchrun --nproc_per_node=2 examples/llm/finetune.py \
+uv run torchrun --nproc_per_node=2 examples/llm_finetune/finetune.py \
     --config examples/llm_finetune/llama3_1/llama3_1_8b_hellaswag_pp.yaml
 ```
 
@@ -579,8 +579,8 @@ You can easily add pipeline parallelism to any existing training configuration t
 Add pipeline parallelism to an existing config using command-line arguments:
 
 ```bash
-uv run torchrun --nproc_per_node=2 examples/llm/finetune.py \
-    --config examples/llm/llama_3_2_1b_squad.yaml \
+uv run torchrun --nproc_per_node=2 examples/llm_finetune/finetune.py \
+    --config examples/llm_finetune/llama3_2/llama3_2_1b_squad.yaml \
     --distributed._target_ nemo_automodel.components.distributed.fsdp2.FSDP2Manager \
     --distributed.pp_size 2 \
     --autopipeline._target_ nemo_automodel.components.distributed.pipelining.AutoPipeline \
@@ -625,7 +625,7 @@ autopipeline:
 
 #### Pipeline + Data Parallelism (4 GPUs total)
 ```bash
-uv run torchrun --nproc_per_node=4 examples/llm/finetune.py \
+uv run torchrun --nproc_per_node=4 examples/llm_finetune/finetune.py \
     --config your_config.yaml \
     --distributed.pp_size 2 \
     --distributed.dp_size 2 \
@@ -634,7 +634,7 @@ uv run torchrun --nproc_per_node=4 examples/llm/finetune.py \
 
 #### Pipeline + Tensor Parallelism (4 GPUs total)
 ```bash
-uv run torchrun --nproc_per_node=4 examples/llm/finetune.py \
+uv run torchrun --nproc_per_node=4 examples/llm_finetune/finetune.py \
     --config your_config.yaml \
     --distributed.pp_size 2 \
     --distributed.tp_size 2 \
@@ -643,7 +643,7 @@ uv run torchrun --nproc_per_node=4 examples/llm/finetune.py \
 
 #### Full Hybrid: PP + DP + TP (8 GPUs total)
 ```bash
-uv run torchrun --nproc_per_node=8 examples/llm/finetune.py \
+uv run torchrun --nproc_per_node=8 examples/llm_finetune/finetune.py \
     --config your_config.yaml \
     --distributed.pp_size 2 \
     --distributed.dp_size 2 \
@@ -694,7 +694,7 @@ dataloader:
 Run training with:
 ```bash
 # Run with 2 GPUs for 2-way pipeline parallelism
-uv run torchrun --nproc_per_node=2 examples/llm/finetune.py --config config.yaml
+uv run torchrun --nproc_per_node=2 examples/llm_finetune/finetune.py --config config.yaml
 ```
 
 ## Troubleshooting
````
