Below are common issues that you may encounter when using Generative APIs, their causes, and solutions.
- The model goes into an infinite loop while processing the input (which is a known structural issue with several AI models)
### Solution
For queries that are too long to process:
- Set a stricter **maximum token limit** to prevent overly long responses.
- Reduce the size of the input tokens, or split the input into multiple API requests.
- Use [Managed Inference](/managed-inference/), where no query timeout is enforced.
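The first two recommendations can be sketched as follows. This is a minimal illustration, not a complete client: the 4-characters-per-token heuristic and the `max_chars` value are assumptions to tune against your model's actual context window and timeout.

```python
import textwrap

def split_input(text: str, max_chars: int = 8000) -> list[str]:
    """Split an oversized input into chunks small enough to stay under the query timeout.

    max_chars is a rough proxy for a token budget (roughly 4 characters per
    token for English text); tune it to your model and timeout.
    """
    return textwrap.wrap(
        text,
        width=max_chars,
        break_long_words=False,
        break_on_hyphens=False,
    )

# Each chunk then goes into its own chat completion request, with a strict
# completion cap set in the request body, e.g. "max_tokens": 512.
chunks = split_input("word " * 5000, max_chars=4000)
```

You can then send one request per chunk and merge the responses, or summarize each chunk and run a final request over the partial summaries.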
For queries where the model enters an infinite loop (more frequent when using **structured output**):
- Set `temperature` to the default value recommended for the model. These values can be found in the [Generative APIs Playground](https://console.scaleway.com/generative-api/models/fr-par/playground) when selecting the model. Avoid using temperature `0`, as this can lock the model into outputting only the next (and same) most probable token repeatedly.
- Ensure the `top_p` parameter is not set too low (we recommend the default value of `1`).
- Add a `presence_penalty` value to your request (`0.5` is a good starting value). This option helps the model choose tokens other than the one it is looping on, although it might reduce accuracy for tasks that require repeating similar outputs.
- Use more recent models, which are usually more optimized to avoid loops, especially when using structured output.
- Optimize the system prompt to provide clearer and simpler tasks. Currently, JSON output accuracy still relies on heuristics to constrain models to output only valid JSON tokens, and thus depends on the prompts given. As a counter-example, providing contradictory requirements to a model - such as `Never output JSON` in the system prompt and `response_format` set to `json_schema` in the query - may lead to the model never outputting closing JSON brackets `}`.
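Put together, a chat completion request body applying these sampling recommendations might look like the following sketch. The model identifier and the `0.3` temperature are placeholders, not the actual defaults of any specific model; check the Playground for your model's recommended values.

```python
import json

# Sampling settings that help avoid infinite generation loops.
payload = {
    "model": "llama-3.1-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize this ticket as JSON."},
    ],
    "temperature": 0.3,       # use the model's default; avoid 0 with structured output
    "top_p": 1,               # keep at the default; very low values encourage loops
    "presence_penalty": 0.5,  # nudges the model away from the token it loops on
    "max_tokens": 512,        # hard cap so a loop cannot run until the timeout
}
body = json.dumps(payload)  # JSON body to POST to the chat completions endpoint
```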
## Structured output (e.g., JSON) is not working correctly
- Structured output response is valid JSON but content is less relevant
- Structured output response never ends (loop over characters such as `"`, `\t` or `\n`). For this issue, see the advice on infinite loops in [504 Gateway Timeout](#504-gateway-timeout).
### Causes
- Incorrect field naming in the request, such as using `"format"` instead of the correct `"response_format"` field.
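For reference, a minimal request body using the correct field might look like this sketch. The model identifier and schema are illustrative, and the nested `response_format` shape follows the OpenAI-compatible convention; adapt both to your own use case.

```python
# Correct structured-output request: the field is "response_format", not "format".
request = {
    "model": "llama-3.1-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Extract the city mentioned below as JSON."},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_extraction",  # illustrative schema name
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
}
assert "format" not in request  # the top-level key must be "response_format"
```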