This repository was archived by the owner on Jul 22, 2025. It is now read-only.
Strange instances of hallucinations where "Initialized" and "assistant" were inserted into Perplexity API responses #65
robert-foley started this conversation in General
I've recently seen 3 instances where stray words seemed to leak into the output of the Perplexity API: 2 instances of `Initialized` and 1 instance of `assistant`. In each case the prompt was a yes/no question, and the word was injected immediately after `no`/`No`, before the space, and was then followed by an explanation (even though our system prompt specifies to give no explanation, which is another issue). Each case happened using the model `llama-3.1-sonar-large-128k-online` with a temperature of `0.0` and `500` max tokens.

Here are the cases, including the user message and assistant output:
Each of these instances occurred using a prompt constructed with this code:
I really have no idea what is going on here, whether it's a pure model hallucination or something else strange is going on. Has anyone else seen this?
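For anyone trying to reproduce this, here is a minimal sketch of the kind of request described above, assuming Perplexity's OpenAI-compatible `/chat/completions` endpoint. The system prompt text and the `ask` helper are invented stand-ins for illustration, not the original code from this report:

```python
import json
import os
import urllib.request

def build_request(question: str) -> dict:
    # Parameters match those described in the report: same model,
    # temperature 0.0, 500 max tokens. The system prompt below is a
    # hypothetical stand-in for the original "give no explanation" prompt.
    return {
        "model": "llama-3.1-sonar-large-128k-online",
        "temperature": 0.0,
        "max_tokens": 500,
        "messages": [
            {
                "role": "system",
                "content": "Answer with a single word, Yes or No. "
                           "Give no explanation.",
            },
            {"role": "user", "content": question},
        ],
    }

def ask(question: str) -> str:
    # Sends the request to Perplexity's chat completions endpoint.
    # Requires PERPLEXITY_API_KEY to be set in the environment.
    payload = build_request(question)
    req = urllib.request.Request(
        "https://api.perplexity.ai/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With a setup like this, the reported failure mode would be a response body such as `NoInitialized ...` or `Noassistant ...` instead of a bare `No`.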