Similar to LLM evaluation, it is possible to specify the prompt prefix and suffix.

| `data_loading.batch_size` | Load and evaluate data in batches. Defaults to `batch_size=10`. |
| `dataset_dir` | The directory containing the JSONL files produced in Step 1. Only used in LLM evaluation. |
| `dataset.parquet_path` | The path to the Parquet files produced in Step 1. Only used in LCM evaluation. |
| `dataset.source_column` | The column in the data that holds the input embeddings. Not applicable when evaluating LLMs. |
| `dataset.source_text_column` | The column in the data that holds the input text. |
| `dataset.target_column` | The column in the data that holds the ground-truth embeddings. Not applicable when evaluating LLMs. |
| `dataset.target_text_column` | The column in the data that holds the ground-truth text. |
| `dataset.source_text_prefix` | The text that will be prepended to each input text to build the prompt for the model. |
| `dataset.source_text_suffix` | The text that will be appended to each input text to build the prompt for the model. |
| `task_args` | A JSON-formatted string representing the task arguments. See the [task param list](#task_param_list) below. |
| `dump_dir` | The output directory of the eval run. On success it contains a file `metrics.eval.jsonl` with the metric results, a `results` directory capturing the verbose command line used along with the detailed output scores, and a `raw_results` directory with the model output for each individual sample together with the per-sample metric results. |
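Since the per-run metrics land in `metrics.eval.jsonl`, they can be inspected with a short script. This is a minimal sketch for reading any JSONL file; the output path and record field names shown in the comment are hypothetical, not taken from the eval harness itself:

```python
import json
from pathlib import Path


def load_jsonl(path):
    """Read a JSONL file (one JSON object per line) into a list of dicts."""
    records = []
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records


# Hypothetical usage: point this at the dump_dir you passed to the eval run.
# metrics = load_jsonl(Path("outputs/my_eval") / "metrics.eval.jsonl")
```

The same `json.loads` call also applies to `task_args`, which is passed as a JSON-formatted string.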