
Commit 85053f4

Fix(doc): add delinearize instruction (axolotl-ai-cloud#2545)
* fix: mention to install pytorch before axolotl
* feat(doc): include instruction to delinearize
* fix: update instruction for delinearize with adapter
1 parent a4d5112 commit 85053f4


3 files changed: +25 -0 lines changed


docs/cli.qmd

Lines changed: 11 additions & 0 deletions
@@ -199,6 +199,17 @@ output_dir: # Directory to save evaluation results
See [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) for more details.
### delinearize-llama4
Delinearizes a linearized Llama 4 model back into a regular Hugging Face Llama 4 model. This works only with the non-quantized linearized model.
```bash
axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir
```
Delinearizing is necessary if you want to use the model with other frameworks. If you have an adapter, merge it into the non-quantized linearized model before delinearizing; a sketch of that workflow follows.
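
As a rough, hedged sketch of that adapter workflow (the config and directory paths are placeholders, and the exact `merge-lora` flags and output location can differ between Axolotl versions, so check `axolotl merge-lora --help`):

```bash
# Merge the LoRA adapter into the non-quantized linearized base model.
# config.yml is assumed to point at the linearized base model the adapter was trained on.
axolotl merge-lora config.yml --lora-model-dir path/to/adapter_dir

# Then delinearize the merged checkpoint (adjust the path to wherever the
# merge step wrote the merged weights, often a merged/ subdirectory).
axolotl delinearize-llama4 --model path/to/merged_model_dir --output path/to/output_dir
```
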
## Legacy CLI Usage

While the new Click-based CLI is preferred, Axolotl still supports the legacy module-based CLI:

docs/installation.qmd

Lines changed: 6 additions & 0 deletions
@@ -19,6 +19,12 @@ This guide covers all the ways you can install and set up Axolotl for your environment

## Installation Methods {#sec-installation-methods}

::: {.callout-important}
Please make sure PyTorch is installed before installing Axolotl in your local environment.
Follow the instructions at: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
:::
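
A minimal, hedged sketch of that order of operations (the plain `pip install torch` line is a placeholder for whatever command the pytorch.org selector gives you for your CUDA/CPU setup, and the extras-free `pip install axolotl` is an assumption; see the PyPI section below for the full command):

```bash
# Install a PyTorch build matching your hardware first; take the exact
# command from the selector at https://pytorch.org/get-started/locally/
pip install torch

# Then install Axolotl on top of the existing PyTorch install.
pip install axolotl
```
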
### PyPI Installation (Recommended) {#sec-pypi}


examples/llama-4/README.md

Lines changed: 8 additions & 0 deletions
@@ -26,3 +26,11 @@ Multi-GPU (4xH100) for Llama 4 Scout uses 62.8GB VRAM/GPU @ 4k context length @
### Llama 4 Maverick 17Bx128Experts (400B)

Coming Soon
## Delinearized Llama 4 Models
We provide a script to delinearize linearized Llama 4 models into regular Hugging Face Llama 4 models.
```bash
axolotl delinearize-llama4 --model path/to/model_dir --output path/to/output_dir
```
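
As a small, hedged follow-up (the output path is a placeholder), the delinearized directory should load as an ordinary Hugging Face checkpoint, which you can sanity-check without any Axolotl-specific code:

```bash
# Quick sanity check: a regular transformers loader should read the
# delinearized checkpoint's config directly from the output directory.
python -c "from transformers import AutoConfig; print(AutoConfig.from_pretrained('path/to/output_dir'))"
```
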
