1 parent f3b2caa commit b1ed423
docs/reference/inference/elastic-infer-service.asciidoc
@@ -79,11 +79,19 @@ include::inference-shared.asciidoc[tag=service-settings]
 These settings are specific to the `elser` service.
 --

-
 `model_id`:::
 (Required, string)
 The name of the model to use for the {infer} task.
+`rate_limit`:::
+(Optional, object)
+By default, the `elastic` service sets the number of requests allowed per minute to `1000`.
+This helps to minimize the number of rate limit errors returned.
+To modify this, set the `requests_per_minute` setting of this object in your service settings:
++
+--
+include::inference-shared.asciidoc[tag=request-per-minute-example]

 `task_settings`::
 (Optional, object)
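The `rate_limit` object added in this diff is passed inside `service_settings` when creating an inference endpoint. A minimal sketch of such a request, assuming the usual create-endpoint shape of the Elasticsearch {infer} API (the endpoint name `my-elastic-endpoint`, the task type, and the `model_id` value here are illustrative placeholders, not taken from this commit):

```
PUT _inference/sparse_embedding/my-elastic-endpoint
{
  "service": "elastic",
  "service_settings": {
    "model_id": "my-model-id",
    "rate_limit": {
      "requests_per_minute": 500
    }
  }
}
```

Per the added docs text, omitting `rate_limit` leaves the service at its default of `1000` requests per minute; lowering `requests_per_minute` trades throughput for fewer rate limit errors.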