config/locales/client.en.yml (4 additions, 2 deletions)
@@ -399,7 +399,8 @@ en:
         name: "Model id"
         provider: "Provider"
         tokenizer: "Tokenizer"
-        max_prompt_tokens: "Number of tokens for the prompt"
+        max_prompt_tokens: "Context window"
+        max_output_tokens: "Max output tokens"
         url: "URL of the service hosting the model"
         api_key: "API Key of the service hosting the model"
         enabled_chat_bot: "Allow AI bot selector"
@@ -486,7 +487,8 @@
         failure: "Trying to contact the model returned this error: %{error}"

         hints:
-          max_prompt_tokens: "Max numbers of tokens for the prompt. As a rule of thumb, this should be 50% of the model's context window."
+          max_prompt_tokens: "The maximum number of tokens the model can process in a single request"
+          max_output_tokens: "The maximum number of tokens the model can generate in a single request"
           display_name: "The name used to reference this model across your site's interface."
           name: "We include this in the API call to specify which model we'll use"
           vision_enabled: "If enabled, the AI will attempt to understand images. It depends on the model being used supporting vision. Supported by latest models from Anthropic, Google, and OpenAI."
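For context, the `failure` string above uses Rails-style `%{...}` interpolation, which Discourse's client-side `I18n.t` also supports: the placeholder is filled from the options object at lookup time. A minimal sketch of how such a string is resolved; the key path below is assumed for illustration, since this diff does not show the file's full YAML nesting:

    // Hypothetical key path; the surrounding nesting is not visible in this hunk.
    const message = I18n.t("discourse_ai.llms.failure", {
      error: "connection timed out",
    });
    // => "Trying to contact the model returned this error: connection timed out"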