`docs/sphinx_doc/source/tutorial/example_data_functionalities.md`
## Example: Data Processor for Task Pipeline

In this example, you will learn how to apply the data processor of Trinity-RFT to prepare and prioritize the dataset before task exploring and training. This example uses the GSM-8K dataset to show:
1. how to prepare the data processor
2. how to configure the data processor
3. what the data processor can do

Before getting started, you need to prepare the main environment of Trinity-RFT according to the [installation section of the README file](../main.md).
### Data Preparation

#### Prepare the Data Processor

As the overall framework of Trinity-RFT shows, the data processor is one of the high-level functions. Trinity-RFT encapsulates the data processor as an independent service to avoid dependency conflict issues. Thus you need to prepare a split environment for this module and start the server.
```shell
# prepare split environments, including the one for the data processor
python scripts/install.py

# start all split servers
python scripts/start_servers.py
```
### Configure the Data Processor

Trinity-RFT uses a unified config file to manage all config items. For the data processor, you need to focus on the `data_processor` section in the config file.

In this example, assume that you need to rank all math questions and corresponding answers by difficulty. Here you can set the basic buffers for the GSM-8K dataset input and output, plus some other items about downstream dataset loading for exploring and training (a config sketch follows this list):

+ `data_processor_url`: the URL of the data processor service, which is started in the previous step.
+ `task_pipeline`: the configs for the task pipeline. The task pipeline is used to process the raw dataset. It consists of several inner configs:
  + `input_buffers`: the input buffers for the task pipeline. We usually load from raw dataset files in this pipeline, thus we need to set the dataset `path`, set the `storage_type` to "file", and set `raw` to True. Multiple input buffers are allowed, and each buffer can be named via its `name` field.
  + `output_buffer`: the output buffer for the task pipeline. We usually store the processed dataset in files as well, thus we need to set the `storage_type` to "file".
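Putting these items together, here is a minimal sketch of the `data_processor` section. It only illustrates the shape described above; the URL, paths, and buffer names are assumptions, not values from the original example:

```yaml
data_processor:
  # assumed value: the curl example later in this tutorial targets
  # http://127.0.0.1:5005/data_processor/...
  data_processor_url: 'http://127.0.0.1:5005/data_processor'
  task_pipeline:
    input_buffers:
      - name: 'gsm8k_raw'                  # illustrative buffer name
        path: 'path/to/gsm8k/train.jsonl'  # illustrative dataset path
        storage_type: 'file'               # load from raw dataset files
        raw: true
    output_buffer:
      name: 'gsm8k_processed'              # illustrative buffer name
      path: 'path/to/gsm8k_processed'      # illustrative output path
      storage_type: 'file'                 # store results in files as well
```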
If you are not familiar with Data-Juicer, the data processor provides a natural-language way to describe your processing demand: write it in the `dj_process_desc` argument, and our agent will help you organize the Data-Juicer config.
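As a rough sketch of the two alternatives (only `dj_config_path`, `dj_process_desc`, and the example config path are names from this tutorial; the description string is invented for illustration):

```yaml
data_processor:
  task_pipeline:
    # Option 1: point to a ready-made Data-Juicer config file
    dj_config_path: 'tests/test_configs/active_iterator_test_dj_cfg.yaml'
    # Option 2: describe the demand in natural language instead and let
    # the agent organize the Data-Juicer config for you
    # dj_process_desc: 'Compute a difficulty score for each math question and rank the dataset by it.'
```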
All config items in the `data` section can be found [here](trinity_configs.md).

```{note}
Only when one of the `xxx_pipeline` configs is provided, and one of `dj_process_desc` and `dj_config_path` in that pipeline config is provided, will the data processor and the data active iterator be activated. Otherwise, this part is skipped and the exploring stage starts directly.
```

### Exploring & Training
```shell
ray start --address=<master_address>

trinity run --config <Trinity-RFT_config_path>
```
If you follow the steps above, Trinity-RFT will send a request to the data processor server, and the data active iterator will be activated to compute a difficulty score for each sample in the raw dataset and rank the dataset according to these scores. After that, the data processor server stores the resulting dataset in the output buffer; when exploring begins, the prepared dataset is loaded and the downstream steps continue.

## Example: Human in the Loop

Sometimes, you might need to involve human feedback for some raw data. In this example, you will learn how to annotate raw data to get a better dataset before training. This example takes an example Q&A dataset and tries to select the chosen and rejected responses for the DPO method.
Before getting started, you need to prepare the main environment of Trinity-RFT according to the [installation section of the README file](../main.md).

### Data Preparation

#### Prepare the Data Processor

As the overall framework of Trinity-RFT shows, the data processor is one of the high-level functions. Trinity-RFT encapsulates the data processor as an independent service to avoid dependency conflict issues. Thus you need to prepare a split environment for this module and start the server.
```shell
# prepare split environments, including the one for the data processor
python scripts/install.py

# start all split servers
python scripts/start_servers.py
```
### Configure the Data Processor

Trinity-RFT uses a unified config file to manage all config items. For the data processor, you need to focus on the `data_processor` section in the config file.

In this example, assume that you need to select the chosen and rejected responses for the DPO method. So you can set these config items like the following example:
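The full example config is not reproduced here; as a hedged sketch of its likely shape (the OP name `human_preference_annotation_mapper` comes from this tutorial, while the pipeline layout, paths, and buffer names are assumptions carried over from the GSM-8K example above):

```yaml
data_processor:
  task_pipeline:
    input_buffers:
      - name: 'qa_raw'                    # illustrative buffer name
        path: 'path/to/qa_dataset.jsonl'  # illustrative dataset path
        storage_type: 'file'
        raw: true
    output_buffer:
      name: 'qa_annotated'                # illustrative buffer name
      path: 'path/to/qa_annotated.jsonl'  # illustrative output path
      storage_type: 'file'
    # a Data-Juicer config whose process list includes the
    # human_preference_annotation_mapper OP
    dj_config_path: 'path/to/your_dj_config.yaml'  # illustrative path
```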
You can set more config items for this OP (e.g. notification when annotation is finished).

### Start Running

When you start running with the RFT config, the data processor will start the OP `human_preference_annotation_mapper`, and then you can find a new project on the "Projects" page of the label-studio server.
- Then you need to prepare the `data_processor` section in the config file (e.g. [test_cfg.yaml](tests/test_configs/active_iterator_test_cfg.yaml)).
- For the `dj_config_path` argument in it, you can either specify a Data-Juicer config file path (e.g. [test_dj_cfg.yaml](tests/test_configs/active_iterator_test_dj_cfg.yaml)), or write your demand in the `dj_process_desc` argument in natural language, and our agent will help you organize the Data-Juicer config.
- Finally you can send requests to the data server to start an active iterator to process datasets in many ways (a Python alternative follows this list):
  - Request with `curl`: `curl "http://127.0.0.1:5005/data_processor/task_pipeline?configPath=tests%2Ftest_configs%2Factive_iterator_test_cfg.yaml"`
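Besides `curl`, any HTTP client can send the request. A minimal sketch with Python's `requests` library, assuming only the endpoint and query parameter shown in the `curl` command above:

```python
import requests

# endpoint and query parameter taken from the curl example above
url = "http://127.0.0.1:5005/data_processor/task_pipeline"
params = {"configPath": "tests/test_configs/active_iterator_test_cfg.yaml"}

# requests percent-encodes the config path, matching the curl example
response = requests.get(url, params=params)
response.raise_for_status()  # raise if the data server returns an error status
print(response.text)
```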