pages/generative-apis/troubleshooting/fixing-common-issues.mdx
@@ -111,15 +111,25 @@ Below are common issues that you may encounter when using Generative APIs, their
- The model goes into an infinite loop while processing the input (which is a known structural issue with several AI models)

### Solution

For queries that are too long to process:

- Set a stricter **maximum token limit** to prevent overly long responses (see the sketch after this list).
- Reduce the size of the input tokens, or split the input into multiple API requests.
- Use [Managed Inference](/managed-inference/), where no query timeout is enforced.
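
The first two fixes map directly onto request parameters. Below is a minimal sketch, assuming the OpenAI-compatible `openai` Python client and Scaleway's Generative APIs endpoint; the model name and key placeholder are illustrative, not prescriptive:

```python
# A sketch of capping response length, assuming the `openai` Python package and
# Scaleway's OpenAI-compatible endpoint; model name and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.scaleway.ai/v1",  # assumed Generative APIs endpoint
    api_key="<SCW_SECRET_KEY>",             # replace with your secret key
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the following text: ..."}],
    max_tokens=512,  # stricter limit so generation stops well before the query timeout
)
print(response.choices[0].message.content)
```

For inputs that are too large, the same pattern applies in reverse: chunk the input, send one `chat.completions.create` call per chunk, and merge the partial results.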
For queries where the model enters an infinite loop (more frequent when using **structured output**; see the parameter sketch after this list):

- Set `temperature` to the default value recommended for the model. These values can be found in the [Generative APIs Playground](https://console.scaleway.com/generative-api/models/fr-par/playground) when selecting a model. Avoid a temperature of `0`, as this can lock the model into outputting the same most probable token over and over.
- Ensure the `top_p` parameter is not set too low (the recommended value is the default, `1`).
- Add a `presence_penalty` value to your request (`0.5` is a good starting value). This option helps the model choose tokens other than the one it is looping on, although it may reduce accuracy on tasks that require repeating several similar outputs.
- Use more recent models, which are usually better optimized to avoid loops, especially when using structured output.
- Optimize your system prompt to give the model clearer and simpler tasks. Currently, JSON output accuracy still relies on heuristics that constrain models to output only valid JSON tokens, and thus depends on the prompt given. As a counter-example, providing contradictory requirements, such as `Never output JSON` in the system prompt and `response_format` set to `json_schema` in the query, may lead the model to never output the closing JSON bracket `}`.
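
These sampling parameters can all be set on the request itself. Below is a minimal sketch reusing the client from the previous example; the model name and exact values are illustrative starting points rather than fixed recommendations:

```python
# A sketch of loop-avoidance parameters for structured output, reusing the
# client defined above; model name and parameter values are illustrative.
response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # illustrative model name
    messages=[
        {"role": "system", "content": "Extract the requested fields as JSON."},
        {"role": "user", "content": "Invoice: ACME Corp, total 42.50 EUR, due 2025-01-31."},
    ],
    response_format={"type": "json_object"},  # structured output mode
    temperature=0.7,       # example value; use your model's recommended default, not 0
    top_p=1,               # keep the default; avoid setting it too low
    presence_penalty=0.5,  # discourages re-picking the token being looped on
)
print(response.choices[0].message.content)
```

Note that the system prompt here asks for JSON rather than forbidding it, so it does not contradict the `response_format` setting described in the last bullet.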
## Structured output (e.g., JSON) is not working correctly