This repository was archived by the owner on Jul 22, 2025. It is now read-only.
Perplexity Has Lost All Shame: Using DeepSeek's Open-Source R1 Model Without Proper Credit #263
TheLittleJimmy started this discussion in General · Replies: 0 comments
I need to call out something extremely disappointing that Perplexity is doing. They're taking DeepSeek's open-source R1 model, fine-tuning it on politically questionable training data to create a supposedly "unbiased" model, and then completely failing to give DeepSeek proper credit.
What's particularly outrageous is their double standard: when Perplexity uses other closed-source models, they clearly mark them with their respective company names and proper attribution. But when it comes to DeepSeek's R1 model, suddenly it becomes "Perplexity's model" with zero mention of DeepSeek's contribution.
Isn't this embarrassingly unprofessional? Taking someone's open-source work, repurposing it, and then presenting it as your own creation? This kind of behavior shows a complete lack of ethics and disrespects the original creators in the AI community.
How can a company claim to be advancing AI when they can't even properly acknowledge the foundations they're building upon?