### Changes
Implements fast evaluation in NLS and improves the output.
### Reason for changes
Accelerates NLS evaluations.
### Related tickets
https://jira.devtools.intel.com/browse/CVS-167422
### Tests
Added fast evaluation to NLS test.
---------
Signed-off-by: J. Pablo Muñoz <pablo.munoz@intel.com>
Co-authored-by: Yuan0320 <jinjie.yuan@intel.com>
File changed: examples/llm_compression/torch/downstream_qat_with_nls/README.md (+31 −14 lines)
For detailed information about the methodology and format, please refer to this

<img src="/examples/llm_compression/torch/downstream_qat_with_nls/pics/lora_vs_nls.png" alt="LoRA vs NLS" width="400"/>
</p>

## Install requirements

To use this example:

- Create a separate Python* environment and activate it: `python3 -m venv nncf_env && source nncf_env/bin/activate`
- Install dependencies:

```bash
pip install -U pip
pip install -r requirements.txt
pip install -e ../../../../
```

## Run Example
[main.py](main.py) supports fine-tuning and evaluating a language model with quantization-aware training and **Neural Low-Rank Adapter Search (NLS)** proposed by [Shears](https://arxiv.org/abs/2404.10934) and [SQFT](https://arxiv.org/abs/2410.03750) on various downstream tasks. For example, to run the script for the task [openbookqa](https://huggingface.co/datasets/allenai/openbookqa), you can use the following command:
- `--eval_only`: Whether to perform evaluation only. If specified, the model will be loaded from the checkpoint for evaluation.
- `--resume`: Whether to resume training from a checkpoint. If specified, the script will load the trained checkpoint and continue training or evaluation.
- `--custom_rank_config`: Specifies the LoRA rank of adapters per layer.
- `--num_min_loss_configs`: Number of configurations to evaluate for the min-loss heuristic.
Regarding evaluation, the script will automatically use a heuristic to obtain a good configuration for evaluation. This default strategy takes advantage of some information from the training phase and requires the evaluation of only 7 suggested configurations (median + frequent + 5 min loss). This is automatically done in the example script, and only the best configuration from these candidates is returned to the user. More powerful elastic LoRA NLS configurations can be optionally obtained through more advanced search algorithms. We also support testing a custom configuration for evaluation after training. The following command will load the trained checkpoint and test the specified LoRA rank configuration:
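The seven-candidate heuristic (median + most frequent + 5 min-loss configurations) can be sketched as below. This is an illustrative reconstruction from the description above, not the example's actual API; the function name, data layout, and sample data are all hypothetical:

```python
# Sketch of the 7-candidate evaluation heuristic: collect the per-layer
# median config, the most frequently sampled config, and the
# `num_min_loss_configs` configs with the lowest training loss.
from collections import Counter
from statistics import median

def candidate_configs(sampled_configs, losses, num_min_loss_configs=5):
    """sampled_configs: per-layer rank tuples activated during training;
    losses: the training loss recorded for each sampled configuration."""
    # 1. Median rank per layer across all sampled configurations.
    per_layer = list(zip(*sampled_configs))
    median_cfg = tuple(int(median(ranks)) for ranks in per_layer)
    # 2. Most frequently sampled configuration.
    frequent_cfg = Counter(sampled_configs).most_common(1)[0][0]
    # 3. Configurations with the lowest recorded training loss.
    by_loss = sorted(zip(losses, sampled_configs))
    min_loss_cfgs = [cfg for _, cfg in by_loss[:num_min_loss_configs]]
    # Deduplicate while preserving order; the example would evaluate each
    # remaining candidate and return only the best one to the user.
    seen, candidates = set(), []
    for cfg in [median_cfg, frequent_cfg, *min_loss_cfgs]:
        if cfg not in seen:
            seen.add(cfg)
            candidates.append(cfg)
    return candidates

# Toy data: 5 sampled two-layer rank configurations and their losses.
configs = [(32, 16), (16, 16), (32, 32), (16, 32), (32, 16)]
losses = [0.9, 0.7, 1.1, 0.8, 0.95]
print(candidate_configs(configs, losses))
```

With this toy data the median and most-frequent candidates coincide, so fewer than 7 distinct configurations remain after deduplication; on a real training run the candidate set would typically be larger.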
This script also supports running the vanilla LoRA method. We only need to pass a single number for `--lora_rank_space`, such as `--lora_rank_space 32`. In addition, the training time of LoRA and NLS is very similar, and there is almost no overhead in activating different sub-adapters during training. For instance, fine-tuning the compressed Llama-3.2-3B-Instruct model for 3 epochs on [arc-challenge](https://huggingface.co/datasets/allenai/ai2_arc) takes 161.83 seconds with LoRA and 164.89 seconds with NLS.
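The reduction of NLS to vanilla LoRA, and why the training-time overhead is negligible, can be sketched as follows. This is a hypothetical illustration, not the example's code: the only extra work NLS does per step is drawing a sub-adapter rank from the rank space, and a single-element rank space makes that draw a constant:

```python
# NLS activates a randomly chosen sub-adapter rank at each training step;
# with a one-element rank space (e.g. --lora_rank_space 32) this
# degenerates to a fixed rank, i.e. vanilla LoRA.
import random

def sample_rank(lora_rank_space, rng):
    # Cheap per-step operation: pick which sub-adapter rank is active.
    return rng.choice(lora_rank_space)

rng = random.Random(0)
nls_ranks = {sample_rank([32, 24, 16], rng) for _ in range(100)}   # varies
lora_ranks = {sample_rank([32], rng) for _ in range(100)}          # constant
print(sorted(nls_ranks), sorted(lora_ranks))
```

Because this sampling is the only difference per training step, the LoRA and NLS wall-clock times quoted above (161.83 s vs 164.89 s) stay close.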
INT4 (LoRA + PTWC) results are derived from the best BF16 (LoRA) model using the