This repository was archived by the owner on Jul 22, 2025. It is now read-only.
Frustration with Limited Offline Model Options #228
Kate-Actuary-Viola started this conversation in General
Replies: 0 comments
Hey Perplexity community,
I'm writing this post to express my frustration with the current state of offline models available to us. As of now, we're essentially limited to just one option: r1-1776. While I appreciate having an offline model that doesn't rely on web searches, the implementation leaves a lot to be desired.
The Good
It's an offline model, which is great for privacy and working without an internet connection
128k context length is impressive
The Bad
Only one offline model to choose from
Prints the entire thinking process, which is often unnecessary and clutters the output
The Ugly
The thinking process output makes it difficult to use for clean, presentable results
It feels like we're stuck with a "debug mode" version of the model
I understand that Perplexity is constantly evolving, but having just one offline model that spits out its entire thought process is far from ideal. It's like having a conversation with someone who insists on verbalizing every single thought that crosses their mind – interesting for a minute, annoying after that.
Can we please get some more offline model options? Or at least a version of r1-1776 that gives us clean, final outputs without all the behind-the-scenes chatter?
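In the meantime, a post-processing filter is one possible workaround. R1-family models typically wrap their reasoning in `<think>...</think>` tags before the final answer; assuming r1-1776 follows that convention (I haven't confirmed this against its exact output format), a minimal sketch to strip the chatter might look like:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from a model response,
    keeping only the final answer. Assumes the R1-style tag convention."""
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return cleaned.strip()

# Hypothetical raw output illustrating the tag convention:
raw = "<think>Let me work through this step by step...</think>The capital of France is Paris."
print(strip_thinking(raw))  # → The capital of France is Paris.
```

This doesn't fix the underlying issue – the model still burns tokens on the reasoning – but it at least keeps the chatter out of presentable results.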
What do you all think? Are you facing similar frustrations? Let's make some noise and hopefully get the attention of the Perplexity team to address this limitation.
#PerplexityAI #OfflineModels #AIFrustrations #R1-1776Problems