DeepSeek Unsloth 1.58-bit GGUF support? #226
Unanswered
maximeozenne asked this question in Q&A
Replies: 0 comments
Hello,
First of all, thank you for your impressive work!
I'm actively following Unsloth's work on the dynamic quantization they applied to DeepSeek-R1:
https://unsloth.ai/blog/deepseekr1-dynamic
But even with KTransformers, my current hardware does not allow me to run a Q2-sized model locally, although I do have enough RAM for the Q1s.
I was wondering if you plan to support Unsloth's Q1 versions of DeepSeek (1.58-bit, 1.73-bit)?
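For context, here is a rough back-of-envelope estimate of the weight footprint at these bit-widths, assuming DeepSeek-R1's ~671B parameters. The bits-per-weight values are the nominal labels from the Unsloth quants; real GGUF files differ somewhat because some layers are kept at higher precision:

```python
# Rough GGUF weight-size estimate for DeepSeek-R1 (~671B parameters).
# This is a simple uniform-bit-width approximation, not the exact file sizes.
PARAMS = 671e9

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate weight size in GB at a given average bits per weight."""
    return PARAMS * bits_per_weight / 8 / 1e9

for label, bits in [("1.58-bit", 1.58), ("1.73-bit", 1.73), ("~2-bit (Q2-class)", 2.0)]:
    print(f"{label}: ~{approx_size_gb(bits):.0f} GB")
```

The gap of a few tens of GB between the Q1 and Q2 quants is exactly what can make the difference between fitting and not fitting in system RAM.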
Your work is a massive enhancement in the LLM world, thank you again for it!