Unlike the original LLaMA, we use the [RedPajama](https://www.together.xyz/blog/redpajama) dataset, a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. The full dataset is ~5TB unzipped on disk and ~3TB to download compressed.
A smaller, more consumable random sample can be downloaded through [Hugging Face](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). If you just want to try out the pretraining script, you can use a 1B-token sample subset of RedPajama, which is available at [Hugging Face](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
RedPajama-Data-1T consists of seven data slices:
|               | RedPajama (tokens) | LLaMA (tokens) |
|---------------|--------------------|----------------|
| CommonCrawl   | 878 billion        | 852 billion    |
| C4            | 175 billion        | 190 billion    |
| Github        | 59 billion         | 100 billion    |
| Books         | 26 billion         | 25 billion     |
| ArXiv         | 28 billion         | 33 billion     |
| Wikipedia     | 24 billion         | 25 billion     |
| StackExchange | 20 billion         | 27 billion     |
| Total         | 1.2 trillion       | 1.25 trillion  |
## Training
We follow the hyperparameter settings of the original LLaMA paper. We use AdamW with $\beta_1=0.9$ and $\beta_2=0.95$, and a cosine learning rate schedule such that the final learning rate equals 10% of the maximal learning rate. We use a weight decay of 0.1, gradient clipping of 1.0, and 2,000 warmup steps.
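Concretely, writing $\eta_{\max}$ for the peak learning rate, $t_w = 2000$ for the warmup steps, and $T$ for the total number of steps, the schedule described above corresponds to the following (assuming linear warmup, as in the original LLaMA setup):

$$
\eta(t) =
\begin{cases}
\dfrac{t}{t_w}\,\eta_{\max}, & t < t_w,\\[6pt]
\eta_{\max}\left[0.1 + 0.45\left(1 + \cos\left(\pi\,\dfrac{t - t_w}{T - t_w}\right)\right)\right], & t \ge t_w,
\end{cases}
$$

so the learning rate decays from $\eta_{\max}$ down to $0.1\,\eta_{\max}$ by the end of training.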
| params | learning rate | batch size (tokens) |
|--------|---------------|---------------------|
| 6.7B   | 3.0e-4        | 4M                  |
| 13.0B  | 3.0e-4        | 4M                  |
| 32.5B  | 1.5e-4        | 4M                  |
| 65.2B  | 1.5e-4        | 4M                  |
## Usage
> ⚠ This example only provides a benchmarking script. For training/finetuning, please refer to [applications/Colossal-LLaMA](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Colossal-LLaMA).
### 1. Installation
Please install the latest ColossalAI from source.
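For reference, a source install usually amounts to something like the sketch below; the exact steps may differ, and the main repository README is the authoritative reference.

```bash
# A sketch of a source install; see the main ColossalAI README for the
# authoritative, up-to-date instructions.
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
pip install .
```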
Then install other dependencies.

```bash
pip install -r requirements.txt
```
Additionally, we recommend using torch 1.13.1. We have tested our code on this version and found it compatible with flash attention.
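For example (assuming the default PyPI wheel matches your CUDA setup; otherwise pick the build from the official PyTorch index):

```bash
# Pin torch to the tested version.
pip install torch==1.13.1
```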
### 2. Download the dataset
The dataset can be automatically downloaded by using `huggingface/datasets`. You can specify the dataset path by `-d` or `--dataset`. The default dataset is `togethercomputer/RedPajama-Data-1T-Sample`.
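If you want to pre-fetch the default sample into your local Hugging Face cache before launching (optional; the script downloads it on first run anyway), something like this works, assuming `datasets` is installed:

```bash
# Download and cache the 1B-token RedPajama sample ahead of time.
python -c "from datasets import load_dataset; load_dataset('togethercomputer/RedPajama-Data-1T-Sample')"
```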
### 3. Command line arguments
You can use `colossalai run` to launch multi-node training:
```bash
colossalai run --nproc_per_node YOUR_GPU_PER_NODE --hostfile YOUR_HOST_FILE \
    pretrain.py --OTHER_CONFIGURATIONS
```
Here is a sample hostfile:
```text
hostname1
hostname2
hostname3
hostname4
```
Make sure the master node can reach every node (including itself) via passwordless SSH.
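If passwordless SSH is not set up yet, a typical way to do it is sketched below (hostnames are the placeholders from the sample hostfile; adapt users and paths to your cluster):

```bash
# Generate a key on the master node (skip if one already exists), then copy it
# to every node in the hostfile, including the master itself.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
for h in hostname1 hostname2 hostname3 hostname4; do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub "$h"
done
```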
Here are the details of the CLI arguments; a sample invocation follows the list:
- Model configuration: `-c`, `--config`. `7b`, `13b`, `30b` and `65b` are supported for LLaMA-1; `7b`, `13b`, and `70b` are supported for LLaMA-2.
- Booster plugin: `-p`, `--plugin`. `gemini`, `gemini_auto`, `zero2`, `hybrid_parallel` and `zero2_cpu` are supported. For more details, please refer to [Booster plugins](https://colossalai.org/docs/basics/booster_plugins).
- Dataset path: `-d`, `--dataset`. The default dataset is `togethercomputer/RedPajama-Data-1T-Sample`. It supports any dataset from `datasets` with the same data format as RedPajama.
- Number of epochs: `-e`, `--num_epochs`. The default value is 1.
- Local batch size: `-b`, `--batch_size`. Batch size per GPU. The default value is 2.
- Learning rate: `--lr`. The default value is 3e-4.
- Weight decay: `-w`, `--weight_decay`. The default value is 0.1.
- Warmup steps: `-s`, `--warmup_steps`. The default value is 2000.
- Gradient checkpointing: `-g`, `--gradient_checkpoint`. The default value is `False`. This saves memory at the cost of speed. We recommend enabling it when training with a large batch size.
- Max length: `-l`, `--max_length`. The default value is 4096.
- Mixed precision: `-x`, `--mixed_precision`. The default value is "fp16". "fp16" and "bf16" are supported.
- Save interval: `-i`, `--save_interval`. The interval (in steps) between checkpoint saves. The default value is 1000.
- Checkpoint directory: `-o`, `--save_dir`. The directory path for saving checkpoints. The default value is `checkpoint`.
- Checkpoint to load: `-f`, `--load`. The checkpoint path to load. The default value is `None`.
- Gradient clipping: `--gradient_clipping`. The default value is 1.0.
- Tensorboard log directory: `-t`, `--tensorboard_dir`. The directory path for saving tensorboard logs. The default value is `tb_logs`.
- Flash attention: `-a`, `--flash_attention`. If you want to use flash attention, you must install `flash-attn`. The default value is `False`. It accelerates training while saving memory, so we recommend always enabling it.
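Putting several of these flags together, a single-node launch might look like the sketch below (illustrative values only; it assumes `-g` and `-a` are plain on/off switches and that 8 GPUs are available):

```bash
colossalai run --nproc_per_node 8 pretrain.py \
    -c 7b \
    -p gemini \
    -d togethercomputer/RedPajama-Data-1T-Sample \
    -b 2 \
    -l 4096 \
    -x bf16 \
    -g \
    -a \
    -o checkpoint \
    -t tb_logs
```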
### 4. Shell Script Examples
For your convenience, we provide some shell scripts to run benchmarks with various configurations.
# Fine-tune Llama2
We also provide an example of fine-tuning Llama2 in `finetune.py`.
Make sure the master node can reach every node (including itself) via passwordless SSH.
Here are the details of the CLI arguments; a sample invocation follows the list:
- Pretrained checkpoint path: `--model_path`. The path to your model checkpoint; it can be a local directory or a Hugging Face tag.
- Booster plugin: `-p`, `--plugin`. `gemini`, `gemini_auto`, `zero2`, `hybrid_parallel` and `zero2_cpu` are supported. For more details, please refer to [Booster plugins](https://colossalai.org/docs/basics/booster_plugins).
- Dataset path: `-d`, `--dataset`. The default dataset is `yizhongw/self_instruct`. It supports any dataset from `datasets` with the same data format as `yizhongw/self_instruct`.
- Task name: `--task_name`. The task to fine-tune on; it also determines which subset of the dataset is loaded. The default value is `super_natural_instructions`.
- Number of epochs: `-e`, `--num_epochs`. The default value is 1.
- Local batch size: `-b`, `--batch_size`. Batch size per GPU. The default value is 2.
- Learning rate: `--lr`. The default value is 3e-4.
- Weight decay: `-w`, `--weight_decay`. The default value is 0.1.
- Gradient checkpointing: `-g`, `--gradient_checkpoint`. The default value is `False`. This saves memory at the cost of speed. We recommend enabling it when training with a large batch size.
- Max length: `-l`, `--max_length`. The default value is 4096.
- Mixed precision: `-x`, `--mixed_precision`. The default value is "fp16". "fp16" and "bf16" are supported.
- Save interval: `-i`, `--save_interval`. The interval (in steps) between checkpoint saves. The default value is 1000.
- Checkpoint directory: `-o`, `--save_dir`. The directory path for saving checkpoints. The default value is `checkpoint`.
- Checkpoint to load: `-f`, `--load`. The checkpoint path to load. The default value is `None`.
- Gradient clipping: `--gradient_clipping`. The default value is 1.0.
- Tensorboard log directory: `-t`, `--tensorboard_dir`. The directory path for saving tensorboard logs. The default value is `tb_logs`.
- Flash attention: `-a`, `--flash_attention`. If you want to use flash attention, you must install `flash-attn`. The default value is `False`. It accelerates training while saving memory, so we recommend always enabling it.
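As with pretraining, a multi-GPU fine-tuning run might be launched roughly as follows (a sketch with illustrative values; the Hugging Face model tag is only an example, and `-g`/`-a` are assumed to be plain on/off switches):

```bash
colossalai run --nproc_per_node 8 finetune.py \
    --model_path meta-llama/Llama-2-7b-hf \
    -p zero2 \
    -d yizhongw/self_instruct \
    --task_name super_natural_instructions \
    -b 2 \
    -l 4096 \
    -x bf16 \
    -g \
    -a \
    -o checkpoint
```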