Formatting will happen on the fly while tuning. The keys in the template should match fields in the dataset file. The `response_template` corresponding to the above template will need to be supplied; in this case, `response_template` = `\n## Label:`.
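For illustration, a matching template / response template pair might look like the following (a sketch only; the `Tweet` and `text_label` field names are assumptions here, so substitute the fields of your own dataset):

```
--data_formatter_template "### Text: {{Tweet}}\n## Label: {{text_label}}" \
--response_template "\n## Label:"
```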
The `fms_acceleration.cli` can do more, such as searching for all available configs, plugins, and arguments; [see the advanced flow](https://github.com/foundation-model-stack/fms-acceleration#advanced-flow).
## Extended Pre-Training
We also support extended pre-training, where users may want to pretrain a model on a large number of samples. Please refer to our separate doc on [EPT Use Cases](./docs/ept.md).
## Inference
Currently, we do *not* offer inference support as part of the library, but we provide a standalone script for running inference on tuned models for testing purposes. For a full list of options run `python scripts/run_inference.py --help`. Note that no data formatting / templating is applied at inference time.
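For example, an invocation might look like the following (a sketch only; the flag names shown are assumptions, so consult the `--help` output for the authoritative list):

```
python scripts/run_inference.py \
  --model <path-to-tuned-checkpoint> \
  --text "This is a sample input" \
  --max_new_tokens 50 \
  --out_file predictions.json
```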
From `docs/advanced-data-preprocessing.md`:
Excerpt from the data config schema (`definitions`):

```
definitions:
  ...
      type: float
    builder:
      type: string
    rename_columns:
      type: object
    retain_columns:
      type: object
    data_paths:
      type: array
      items:
        ...
```
Users can create a data config file in any of YAML or JSON format they choose. Each dataset definition supports the following fields:

- `name` (optional, str): A unique identifier for the dataset.
- `data_paths` (optional, list): A `list` of file paths or directories containing the dataset.
- `builder` (optional, str): Specifies a [Hugging Face dataset builder](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/loading_methods#datasets.load_dataset.path), if applicable.
- `rename_columns` (optional, dict[str: str]): Specifies a dictionary of columns to rename, e.g. `{"old_name": "new_name"}`, applied at dataset load time. *Applied before `retain_columns` if both are specified*.
- `retain_columns` (optional, list[str]): Specifies a list of columns to retain, e.g. `["input_ids", "labels"]`; every other column will be dropped at dataset load time. *Applied strictly after `rename_columns` if both are specified*.
- `sampling` (optional, float): The sampling ratio (0.0 to 1.0) with which to sample a dataset in case of interleaving.
- `data_handlers` (optional, list): A list of data handler configurations which preprocess the dataset.
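Putting these together, a minimal sketch of a dataset definition (the name and path are placeholders, and `builder: json` assumes a JSON/JSONL file):

```
datasets:
  - name: my_dataset
    data_paths:
      - "<path-to-dataset-file-or-dir>"
    builder: json
    sampling: 0.5
```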
Not Supported:
Currently there is no support for sampling across multiple data paths defined inside a single dataset definition.
All data paths specified inside one dataset will be [concatenated](https://huggingface.co/docs/datasets/v3.2.0/en/process#concatenate) after loading, while across datasets users can specify [mixing via sampling datasets](#data-mixing).
Additionally, while loading the dataset, users can specify which columns to rename via the `rename_columns` argument and which to retain via the `retain_columns` argument described above.
The order of application is *strictly rename followed by retain*, so note that a column's old name is no longer available once it has been renamed and hence cannot be used in `retain_columns`; be careful when combining these operations. The code will throw a `ValueError` if a column requested to be renamed via the rename argument is also specified in the retain argument.
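For example (column names are illustrative):

```
rename_columns:
  "output": "labels"   # applied first
retain_columns:
  - "input"
  - "labels"           # refer to the new name; listing "output" here would raise a ValueError
```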
### How can users specify data handlers?
Users can also pass any number of `kwargs` arguments required for each data handler.
#### Preexisting data handlers
This library currently supports the following [preexisting data handlers](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tuning/data/data_handlers.py#L156):
- `add_tokenizer_eos_token`:
    Appends the tokenizer's EOS token to a specified dataset field.
- `apply_custom_data_formatting_template`:
    Applies a custom template (e.g., Alpaca style) to format dataset elements.
    By default this handler adds the `EOS_TOKEN`, which can be disabled by a handler argument, [see](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tests/artifacts/predefined_data_configs/apply_custom_template.yaml).
- `tokenize_and_apply_input_masking`:
    Tokenizes input text and applies masking to the labels for causal language modeling tasks, good for input/output datasets.
    By default this handler adds the `EOS_TOKEN`, which can be disabled by a handler argument, [see](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tests/artifacts/predefined_data_configs/tokenize_and_apply_input_masking.yaml).
- `apply_custom_jinja_template`:
    Applies a custom Jinja template (e.g., Alpaca style) to format dataset elements.
    By default this handler adds the `EOS_TOKEN`, which can be disabled by a handler argument, [see](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tests/artifacts/predefined_data_configs/apply_custom_jinja_template.yaml).
- `apply_tokenizer_chat_template`:
    Uses a tokenizer's chat template to preprocess dataset elements, good for single/multi-turn chat templates.
- `duplicate_columns`:
    Duplicates one column of the dataset to another column.
These handlers can be requested by name, and users can look up the function arguments [here](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tuning/data/data_handlers.py).
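For example, a handler is requested by name from a dataset's `data_handlers` list; this sketch mirrors the data configs shown later in this document (the `dataset_text_field` value is a placeholder):

```
data_handlers:
  - name: add_tokenizer_eos_token
    arguments:
      remove_columns: all
      batched: false
      fn_kwargs:
        dataset_text_field: "output"
```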
From `docs/ept.md`:

Our library also supports Extended Pre-Training (EPT), which is generally useful when users want to train a pretrained model on a large number of samples. The training behaviour of EPT is similar to that of pretraining: users typically want to make sure the model runs through the entire corpus of available data and is trained on the whole set of tokens without any specific masking.
See [below](#additional-information) for information on when this document was last updated and the release which supports this feature.
## Packing support
We support packing dataset samples during training by specifying `--packing=True` in the command line parameters. Users can choose to specify `--max_seq_len=<value such as 4k/8k>` to provide the maximum sequence length of each chunk post packing.
Below we provide details on how to use different styles of datasets with the library.
## Non-Tokenized Dataset
### Single Non-Tokenized Dataset
Users can pass a single dataset to the library by using a [data_config](./advanced-data-preprocessing.md#data-config).
Let's say you have a `JSONL` data file on which you want to perform EPT, where each line contains text to be trained on. You can create a `data_config` for the dataset in this manner:
Example dataset:
```
{"Tweet":"@HMRCcustomers No this is my first job","ID":0,"Label":2,"text_label":"no complaint","output":"### Text: @HMRCcustomers No this is my first job\n\n### Label: no complaint"}
{"Tweet":"@KristaMariePark Thank you for your interest! If you decide to cancel, you can call Customer Care at 1-800-NYTIMES.","ID":1,"Label":2,"text_label":"no complaint","output":"### Text: @KristaMariePark Thank you for your interest! If you decide to cancel, you can call Customer Care at 1-800-NYTIMES.\n\n### Label: no complaint"}
...
```
Sample data config for the above use case:
```
dataprocessor:
  type: default
datasets:
  - name: non_tokenized_text_dataset
    data_paths:
      - "<path-to-the-jsonl-dataset>"
    data_handlers:
      - name: add_tokenizer_eos_token
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
```
And the command line passed to the library should include the following:
```
--data_config <path to the data config> --packing=True --max_seq_len 8192
```
48
+
49
+
Please note that for non-tokenized datasets our code adds the `EOS_TOKEN` to the text (e.g. the `Tweet` column) before passing it on as a dataset.
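For instance, if the tokenizer's EOS token is `</s>` (this varies by tokenizer), a line of the example dataset above would effectively become:

```
@HMRCcustomers No this is my first job</s>
```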
### Multiple Non-Tokenized Datasets
If a user wants to utilize multiple datasets and [`sample`](./advanced-data-preprocessing.md#how-the-user-can-write-data-configs) among them, this can be achieved by specifying multiple datasets in the data config with different sampling ratios.
Sample data config for sampling among multiple datasets:
```
dataprocessor:
  type: default
  sampling_stopping_strategy: first_exhausted
  seed: 66
datasets:
  - name: non_tokenized_text_dataset_1
    sampling: 0.3
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_data_formatting_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
  - name: non_tokenized_text_dataset_2
    sampling: 0.4
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_data_formatting_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
  - name: non_tokenized_text_dataset_3
    sampling: 0.3
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_data_formatting_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
```
NOTE: More in-depth documentation of `sampling_stopping_strategy` and how to specify data mixing parameters in the `data_config` is covered in the [data mixing](./advanced-data-preprocessing.md#data-mixing) section of the advanced data preprocessing documentation.
Here, too, the command line arguments would be:
```
--data_config <path to the data config> --packing=True --max_seq_len 8192
```
The code would again add the `EOS_TOKEN` to the non-tokenized data before using it. Also note that the `dataset_text_field` is assumed to be the same across all datasets for now.
### Additional Information
This feature is supported in versions of this library after [v2.3.1](https://github.com/foundation-model-stack/fms-hf-tuning/releases/tag/v2.3.1).