docs/pages/product/apis-integrations/ai-api.mdx (8 additions, 3 deletions)
@@ -139,7 +139,12 @@ One way of handling this is to pass the error message back into the AI API; it m
When using `"runQuery": true`, you might sometimes receive a query result containing `{ "error": "Continue wait" }`. If this happens, you should use `/load` ([described above](#2-load)) instead of `runQuery` to run the query, and handle retries as described in the [REST API documentation](/product/apis-integrations/rest-api#continue-wait).
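The retry pattern described above can be sketched as follows. This is a minimal illustration, not Cube's implementation: `runLoad` is a hypothetical helper standing in for an HTTP call to the REST API's `/load` endpoint, and the retry budget and delay are arbitrary choices.

```typescript
// Hypothetical result shape: the API signals an in-progress query with
// { "error": "Continue wait" }; anything else is a final result.
type LoadResult = { error?: string; data?: unknown };

// Poll a /load-style call until it stops returning "Continue wait",
// up to `maxRetries` attempts with `delayMs` between them.
async function loadWithRetry(
  runLoad: () => Promise<LoadResult>,
  maxRetries = 10,
  delayMs = 1000,
): Promise<LoadResult> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const result = await runLoad();
    if (result.error !== "Continue wait") {
      // Either data or a genuine error: hand it back to the caller.
      return result;
    }
    // Query is still being processed; wait and ask again.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Query did not complete within the retry budget");
}
```

In practice `runLoad` would wrap a `fetch` against `/load` with the same query payload on every attempt.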
-## Advanced Features
+## Advanced Usage
+
+<InfoBox>
+The advanced features discussed here are available on Cube version 1.1.7 and above.
+</InfoBox>
+
### Custom prompts
You can prompt the AI API with custom instructions. For example, you may want it to always
@@ -176,10 +181,10 @@ to give the AI context on possible values in a categorical dimension:
### Other LLM providers
-<WarningBox>
+<InfoBox>
These environment variables also apply to the [AI Assistant](/product/workspace/ai-assistant),
if it is enabled on your deployment.
-</WarningBox>
+</InfoBox>
If desired, you may "bring your own" LLM model by providing a model and API credentials
for a supported model provider. Do this by setting environment variables in your Cube
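As a sketch of what such configuration might look like: the variable names below are hypothetical placeholders for illustration only, not Cube's actual configuration keys — consult the environment-variables reference for the real names.

```shell
# Hypothetical placeholder names, NOT Cube's actual configuration keys.
# Set in the deployment's environment to point the AI API at your own model.
LLM_PROVIDER=openai    # which supported model provider to use
LLM_MODEL=gpt-4o       # model identifier within that provider
LLM_API_KEY=sk-...     # API credentials for that provider
```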