docs/sphinx_doc/source/tutorial/trinity_configs.md (+1 −1)
@@ -160,7 +160,7 @@ model:
 - `model_path`: Path to the model being trained.
 - `critic_model_path`: Optional path to a separate critic model. If empty, defaults to `model_path`.
-- `max_model_len`: Maximum number of tokens in a sequence. It is recommended to set this value manually. If not set, it will be inferred from the model configuration.
+- `max_model_len`: Maximum number of tokens in a sequence. It is recommended to set this value manually. If not set, it will default to `max_prompt_tokens` + `max_response_tokens`; however, if either of those is not set, an error will be raised.
 - `max_response_tokens`: Maximum number of tokens allowed in generated responses. Only for `chat` and `generate` methods in `InferenceModel`.
 - `max_prompt_tokens`: Maximum number of tokens allowed in prompts. Only for `chat` and `generate` methods in `InferenceModel`.
 - `min_response_tokens`: Minimum number of tokens allowed in generated responses. Only for `chat` and `generate` methods in `InferenceModel`. Default is `1`. It must be less than `max_response_tokens`.