
Commit 4a5f21e

Merge tag 'v2.6.0-rc.1' into v2.6.0-rc1
Signed-off-by: Abhishek <[email protected]>
2 parents 6f9bab2 + fb3ace8 commit 4a5f21e

23 files changed: +806 −53 lines changed

README.md

Lines changed: 6 additions & 3 deletions
@@ -13,6 +13,7 @@
 - [Prompt Tuning](#prompt-tuning)
 - [Fine Tuning](#fine-tuning)
 - [FMS Acceleration](#fms-acceleration)
+- [Extended Pre-Training](#extended-pre-training)
 - [Inference](#inference)
 - [Running a single example](#running-a-single-example)
 - [Running multiple examples](#running-multiple-examples)
@@ -133,7 +134,7 @@ Example: Train.json
 },
 ...
 ]`
-data_formatter_template: `### Input: {{input}} \n\n##Label: {{output}}`
+data_formatter_template: `### Input: {{input}} \n\n## Label: {{output}}`

 Formatting will happen on the fly during tuning. The keys in the template should match fields in the dataset file. The `response template` corresponding to the above template must also be supplied; in this case, `response template` = `\n## Label:`.

@@ -299,7 +300,7 @@ python tuning/sft_trainer.py \
 --gradient_accumulation_steps 4 \
 --learning_rate 1e-5 \
 --response_template "\n## Label:" \
---data_formatter_template: "### Input: {{input}} \n\n##Label: {{output}}"
+--data_formatter_template: "### Input: {{input}} \n\n## Label: {{output}}"

 ```

@@ -322,7 +323,6 @@ Below example runs multi-GPU fine tuning on 8 GPUs with FSDP:
 # OUTPUT_PATH=out # Path to the output folder where the checkpoints are saved

 accelerate launch \
---main_process_port $MASTER_PORT \
 --config_file fixtures/accelerate_fsdp_defaults.yaml \
 --num_processes=8 \
 --main_process_port=$MASTER_PORT \
@@ -829,6 +829,9 @@ Number of trainable parameters = 13,631,488
 The `fms_acceleration.cli` can do more to search for all available configs, plugins and arguments, [see the advanced flow](https://github.com/foundation-model-stack/fms-acceleration#advanced-flow).


+## Extended Pre-Training
+
+We also support extended pre-training, where users may want to pretrain a model on a large number of samples. Please refer to our separate doc on [EPT Use Cases](./docs/ept.md).

 ## Inference
 Currently, we do *not* offer inference support as part of the library, but we provide a standalone script for running inference on tuned models for testing purposes. For a full list of options run `python scripts/run_inference.py --help`. Note that no data formatting / templating is applied at inference time.
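The substitution that a `data_formatter_template` like the one above performs can be sketched in a few lines of Python. This is a minimal illustration only; `apply_formatter_template` is a hypothetical helper, not the library's actual API:

```python
import re

def apply_formatter_template(template: str, example: dict) -> str:
    """Replace each {{field}} placeholder with the matching dataset value."""
    def substitute(match):
        field = match.group(1)
        if field not in example:
            raise KeyError(f"Template field '{field}' not found in dataset example")
        return str(example[field])
    return re.sub(r"\{\{([a-zA-Z_]\w*)\}\}", substitute, template)

template = "### Input: {{input}} \n\n## Label: {{output}}"
example = {"input": "@HMRCcustomers No this is my first job", "output": "no complaint"}
formatted = apply_formatter_template(template, example)
# formatted -> "### Input: @HMRCcustomers No this is my first job \n\n## Label: no complaint"
```

The `response template` (`\n## Label:`) is then used during training to locate where the completion begins inside each formatted string.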

docs/advanced-data-preprocessing.md

Lines changed: 21 additions & 4 deletions
@@ -60,6 +60,10 @@ definitions:
     type: float
   builder:
     type: string
+  rename_columns:
+    type: object
+  retain_columns:
+    type: object
   data_paths:
     type: array
     items:
@@ -118,6 +122,8 @@ Users can create a data config file in any of YAML or JSON format they choose (w
 - `name` (optional, str): A unique identifier for the dataset.
 - `data_paths` (optional, list): A `list` of file paths or directories containing the dataset.
 - `builder` (optional, str): Specifies a [Hugging Face dataset builder](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/loading_methods#datasets.load_dataset.path), if applicable.
+- `rename_columns` (optional, dict[str, str]): Specifies a dictionary of columns to rename, like `{"old_name": "new_name"}`, at dataset load time. *Applied before `retain_columns` if both are specified*.
+- `retain_columns` (optional, list[str]): Specifies a list of columns to retain, e.g. `["input_ids", "labels"]`; every other column will be dropped at dataset load time. *Applied strictly after `rename_columns` if both are specified*.
 - `sampling` (optional, float): The sampling ratio (0.0 to 1.0) with which to sample a dataset in case of interleaving.
 - `data_handlers` (optional, list): A list of data handler configurations which preprocess the dataset.

@@ -149,6 +155,10 @@ Not Supported:
 Currently there's no support for sampling under multiple data paths which are defined inside a dataset definition.
 All dataset paths that will be specified inside one dataset will be [concatenated](https://huggingface.co/docs/datasets/v3.2.0/en/process#concatenate) after loading them, while across datasets users can specify [mixing via sampling datasets](#data-mixing)

+Additionally, while loading the dataset, users can specify which columns to rename via the `rename_columns` argument and which to retain via the `retain_columns` argument described above.
+The order of application is *strictly rename followed by retain*, so an old column name that has been renamed will no longer be available to retain; users should keep this in mind when combining the two operations. The code throws a `ValueError` if a column requested to be renamed via the rename argument is also listed in the retain argument.

 ### How can users specify data handlers.

@@ -204,14 +214,21 @@ Users can also pass any number of `kwargs` arguments required for each data hand

 #### Preexisting data handlers
 This library currently supports the following [preexisting data handlers](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tuning/data/data_handlers.py#L156):
-- `tokenize_and_apply_input_masking`:
-  Tokenizes input text and applies masking to the labels for causal language modeling tasks, good for input/output datasets.
-- `apply_dataset_formatting`:
-  Formats a dataset by appending an EOS token to a specified field.
+- `add_tokenizer_eos_token`:
+  Appends the tokenizer's EOS token to a specified dataset field.
 - `apply_custom_data_formatting_template`:
   Applies a custom template (e.g., Alpaca style) to format dataset elements.
+  By default this handler adds the `EOS_TOKEN`, which can be disabled via a handler argument, [see](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tests/artifacts/predefined_data_configs/apply_custom_template.yaml)
+- `tokenize_and_apply_input_masking`:
+  Tokenizes input text and applies masking to the labels for causal language modeling tasks, good for input/output datasets.
+  By default this handler adds the `EOS_TOKEN`, which can be disabled via a handler argument, [see](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tests/artifacts/predefined_data_configs/tokenize_and_apply_input_masking.yaml)
+- `apply_custom_jinja_template`:
+  Applies a custom Jinja template (e.g., Alpaca style) to format dataset elements.
+  By default this handler adds the `EOS_TOKEN`, which can be disabled via a handler argument, [see](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tests/artifacts/predefined_data_configs/apply_custom_jinja_template.yaml)
 - `apply_tokenizer_chat_template`:
   Uses a tokenizer's chat template to preprocess dataset elements, good for single/multi turn chat templates.
+- `duplicate_columns`:
+  Duplicates one column of the dataset to another column.

 These handlers can be requested by name, and users can look up their function arguments [here](https://github.com/foundation-model-stack/fms-hf-tuning/blob/main/tuning/data/data_handlers.py)
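The rename-then-retain ordering documented above can be sketched as follows. `rename_and_retain` is a hypothetical helper operating on plain column dicts, not the library's implementation:

```python
def rename_and_retain(columns: dict, rename_columns: dict, retain_columns: list) -> dict:
    """Apply rename first, then retain; error if a renamed-away old name is retained."""
    # An old name that gets renamed no longer exists after the rename step,
    # so retaining it is a user error (the docs say this raises ValueError).
    stale = [c for c in retain_columns if c in rename_columns]
    if stale:
        raise ValueError(f"Columns {stale} are renamed away and cannot be retained")
    renamed = {rename_columns.get(name, name): values for name, values in columns.items()}
    # Retain step: every column not listed is dropped.
    return {name: values for name, values in renamed.items() if name in retain_columns}

data = {"input": [1], "output": [2], "extra": [3]}
result = rename_and_retain(
    data,
    rename_columns={"input": "instruction", "output": "response"},
    retain_columns=["instruction", "response"],
)
# result -> {"instruction": [1], "response": [2]}; "extra" is dropped
```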

docs/ept.md

Lines changed: 112 additions & 0 deletions
@@ -0,0 +1,112 @@
# Extended Pre-Training Support
Our library also supports Extended Pre-Training (EPT), which is generally useful when users want to train a pretrained model on a large number of samples. EPT behaves much like pretraining: the model is run through the entire available corpus and trained on the whole set of tokens without any specific masking.

See [below](#additional-information) for information on when this document was last updated and the release which supports this feature.

## Packing support

We support training via `packing` dataset samples by specifying `--packing=True` in the command line parameters. Users can also specify `--max_seq_len=<value like 4k/8k>` to provide the maximum sequence length of each chunk post packing.
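The chunking that packing performs can be sketched as a toy illustration over token-id lists; `pack` is a hypothetical helper, and the real implementation may handle sample boundaries and EOS tokens differently:

```python
def pack(samples, max_seq_len):
    """Concatenate tokenized samples and split into max_seq_len-sized chunks."""
    stream = [tok for sample in samples for tok in sample]  # flatten the corpus
    # Keep only full chunks; a trailing fragment shorter than max_seq_len is dropped here.
    return [stream[i:i + max_seq_len]
            for i in range(0, len(stream) - max_seq_len + 1, max_seq_len)]

chunks = pack([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_seq_len=4)
# chunks -> [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Note how a single chunk can span multiple original samples, which is why packing suits EPT-style training on a continuous corpus.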
We provide details below on how to use different styles of datasets with the library.

## Non-Tokenized Dataset

### Single Non-Tokenized Dataset
Users can pass a single dataset to the library by using a [data_config](./advanced-data-preprocessing.md#data-config).
Say you have a `JSONL` data file, with the text to be trained on in each line, that you want to perform EPT on; you can create a `data_config` for the dataset in this manner.

Example dataset:

```
{"Tweet":"@HMRCcustomers No this is my first job","ID":0,"Label":2,"text_label":"no complaint","output":"### Text: @HMRCcustomers No this is my first job\n\n### Label: no complaint"}
{"Tweet":"@KristaMariePark Thank you for your interest! If you decide to cancel, you can call Customer Care at 1-800-NYTIMES.","ID":1,"Label":2,"text_label":"no complaint","output":"### Text: @KristaMariePark Thank you for your interest! If you decide to cancel, you can call Customer Care at 1-800-NYTIMES.\n\n### Label: no complaint"}
...
```

Sample data config for the above use case:
```
dataprocessor:
  type: default
datasets:
  - name: non_tokenized_text_dataset
    data_paths:
      - "<path-to-the-jsonl-dataset>"
    data_handlers:
      - name: add_tokenizer_eos_token
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
```

And the command line passed to the library should include the following:

```
--data_config <path to the data config> --packing=True --max_seq_len 8192
```

Please note that for a non-tokenized dataset our code adds the `EOS_TOKEN` to each line (e.g. to the `Tweet` column) before passing it on as a dataset.
### Multiple Non-Tokenized Datasets

If a user wants to utilize multiple datasets and [`sample`](./advanced-data-preprocessing.md#how-the-user-can-write-data-configs) them, this can be achieved by specifying multiple datasets in the data config with different sampling ratios.

Sample data config for sampling among multiple datasets:
```
dataprocessor:
  type: default
  sampling_stopping_strategy: first_exhausted
  seed: 66
datasets:
  - name: non_tokenized_text_dataset_1
    sampling: 0.3
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_data_formatting_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
  - name: non_tokenized_text_dataset_2
    sampling: 0.4
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_data_formatting_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
  - name: non_tokenized_text_dataset_3
    sampling: 0.3
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_data_formatting_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
```

NOTE: More in-depth documentation of `sampling_stopping_strategy`, and of how to specify data mixing parameters in the `data_config`, is covered in the [data mixing](./advanced-data-preprocessing.md#data-mixing) section of the advanced data preprocessing documentation.

Here too the command line arguments would be:

```
--data_config <path to the data config> --packing=True --max_seq_len 8192
```

Again, the code adds the `EOS_TOKEN` to the non-tokenized data before using it; also note that `dataset_text_field` is assumed to be the same across all datasets for now.

### Additional Information
This feature is supported post [v2.3.1](https://github.com/foundation-model-stack/fms-hf-tuning/releases/tag/v2.3.1) of this library.
Last updated on: 12-02-2025
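The `sampling` ratios and `first_exhausted` stopping strategy configured above can be simulated with a small sketch; `interleave_first_exhausted` is a hypothetical helper that mimics, rather than reuses, the library's interleaving:

```python
import random

def interleave_first_exhausted(datasets, ratios, seed=66):
    """Draw examples from each dataset with probability given by its ratio,
    stopping as soon as a chosen dataset has no examples left."""
    rng = random.Random(seed)  # fixed seed, mirroring the `seed: 66` config
    iterators = [iter(d) for d in datasets]
    mixed = []
    while True:
        idx = rng.choices(range(len(datasets)), weights=ratios)[0]
        try:
            mixed.append(next(iterators[idx]))
        except StopIteration:
            # 'first_exhausted': the first dataset to run dry ends the mix
            return mixed

mix = interleave_first_exhausted([["a1", "a2"], ["b1", "b2", "b3"]], [0.4, 0.6])
```

With a `first_exhausted` strategy the mixed dataset's size is bounded by the smallest (ratio-weighted) source, which is why the ratios in the config above sum to 1.0 but do not guarantee exact proportions in the output.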

pyproject.toml

Lines changed: 2 additions & 2 deletions
@@ -29,12 +29,12 @@ classifiers=[
 dependencies = [
     "numpy>=1.26.4,<2.0",
     "accelerate>=0.20.3,!=0.34,<1.1",
-    "transformers>=4.45,<4.46",
+    "transformers>=4.46,<4.48.2",
     "torch>=2.2.0,<2.5",
     "sentencepiece>=0.1.99,<0.3",
     "tokenizers>=0.13.3,<1.0",
     "tqdm>=4.66.2,<5.0",
-    "trl>=0.9.3,<0.12",
+    "trl>=0.13,<0.15",
     "peft>=0.8.0,<0.14",
     "protobuf>=5.28.0,<6.0.0",
     "datasets>=2.15.0,<3.0",

tests/artifacts/predefined_data_configs/__init__.py

Lines changed: 9 additions & 0 deletions
@@ -22,6 +22,9 @@
 DATA_CONFIG_APPLY_CUSTOM_TEMPLATE_YAML = os.path.join(
     PREDEFINED_DATA_CONFIGS, "apply_custom_template.yaml"
 )
+DATA_CONFIG_APPLY_CUSTOM_JINJA_TEMPLATE_YAML = os.path.join(
+    PREDEFINED_DATA_CONFIGS, "apply_custom_jinja_template.yaml"
+)
 DATA_CONFIG_PRETOKENIZE_JSON_DATA_YAML = os.path.join(
     PREDEFINED_DATA_CONFIGS, "pretokenized_json_data.yaml"
 )
@@ -31,3 +34,9 @@
 DATA_CONFIG_MULTIPLE_DATASETS_SAMPLING_YAML = os.path.join(
     PREDEFINED_DATA_CONFIGS, "multiple_datasets_with_sampling.yaml"
 )
+DATA_CONFIG_DUPLICATE_COLUMNS = os.path.join(
+    PREDEFINED_DATA_CONFIGS, "duplicate_columns.yaml"
+)
+DATA_CONFIG_RENAME_RETAIN_COLUMNS = os.path.join(
+    PREDEFINED_DATA_CONFIGS, "rename_retain_columns.yaml"
+)
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
dataprocessor:
  type: default
datasets:
  - name: apply_custom_data_jinja_template
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: apply_custom_jinja_template
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            dataset_text_field: "dataset_text_field"
            template: "dataset_template"
            add_eos_token: true

tests/artifacts/predefined_data_configs/apply_custom_template.yaml

Lines changed: 2 additions & 1 deletion
@@ -11,4 +11,5 @@ datasets:
       batched: false
       fn_kwargs:
         dataset_text_field: "dataset_text_field"
-        template: "dataset_template"
+        template: "dataset_template"
+        add_eos_token: true
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
dataprocessor:
  type: default
datasets:
  - name: pre_tokenized_with_only_input_ids
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: duplicate_columns
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            old_column: "input_ids"
            new_column: "labels"
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
dataprocessor:
  type: default
datasets:
  - name: text_dataset_input_output_masking
    rename_columns:
      "input": "instruction"
      "output": "response"
    retain_columns:
      - "instruction"
      - "response"
    data_paths:
      - "FILE_PATH"
    data_handlers:
      - name: tokenize_and_apply_input_masking
        arguments:
          remove_columns: all
          batched: false
          fn_kwargs:
            input_field_name: instruction
            output_field_name: response

tests/artifacts/predefined_data_configs/tokenize_and_apply_input_masking.yaml

Lines changed: 2 additions & 1 deletion
@@ -11,4 +11,5 @@ datasets:
       batched: false
       fn_kwargs:
         input_field_name: input
-        output_field_name: output
+        output_field_name: output
+        add_eos_token: true
