API completions frequently hallucinate? #198
joeyfromspace started this conversation in General
Frequently, we receive garbled random text or long sequences of nonsense characters in our completions. At first I thought this was exclusive to `sonar-deep-research`, but we are now seeing it in our completions with `sonar-pro` as well. We haven't changed any of our API call parameters, and `sonar-pro` had been working great for us for some weeks now.

I've tried tweaking various things, like not sending a system prompt and adjusting settings such as `temperature` and `frequency_penalty`, but nothing I've tried seems to have any effect on the outcome.

What can I do to improve the quality of our results? Or is this an issue with Perplexity itself?
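For reference, our calls look roughly like the sketch below (it targets the OpenAI-compatible chat completions endpoint; the prompt, parameter values, and environment variable name are illustrative, not our exact payload):

```python
import os

import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # OpenAI-compatible endpoint

payload = {
    "model": "sonar-pro",  # we see the same garbling with "sonar-deep-research"
    "messages": [
        # We have tried both with and without a system prompt.
        {"role": "system", "content": "You are a helpful research assistant."},
        {"role": "user", "content": "Summarize recent developments in battery recycling."},
    ],
    # Settings we have tried adjusting, with no effect on the garbled output:
    "temperature": 0.2,
    "frequency_penalty": 1.0,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```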
For example, here is one request for deep research:
And this is the text response I received (after parsing out the CoT and the text from the JSON schema):
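Roughly, the parsing step splits off the leading `<think>` block and runs `json.loads` on the remainder. This is a sketch only; the `<think>` delimiter and the function shape are assumptions, not our exact code:

```python
import json
import re

def parse_completion(content: str) -> tuple[str, dict]:
    """Split raw completion text into (chain_of_thought, parsed_answer)."""
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", content, flags=re.DOTALL)
    if match:
        cot, answer_text = match.group(1), match.group(2)
    else:
        cot, answer_text = "", content
    # json.loads raises ValueError if the model returned nonsense characters
    # instead of output matching our schema.
    return cot, json.loads(answer_text)
```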
Replies: 1 comment

Hey @joeyfromspace! Thanks for bringing this to our attention. A bug has been filed and I'm personally looking into this. Will let you know as soon as we have some updates. Also, in the future, if you come across issues like this, please feel free to file a bug. This will help streamline the process!