Question for everyone: https://llm.extractum.io/list/ lists a lot of models, but Jan can't use them directly at the moment. Until Jan supports safetensors, how can I convert multiple files with the .safetensors extension into a single GGUF file that Jan can use? (I don't know how to program, but I would like to use more models.)
Answered by louis-jan, Nov 11, 2025
I'm not sure if this is still relevant, but you can follow these instructions to quantize SafeTensors models.
https://qwen.readthedocs.io/en/latest/quantization/llama.cpp.html
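For reference, the linked guide uses llama.cpp's conversion script. A minimal sketch of the workflow might look like the following; the exact script name and flags depend on your llama.cpp checkout (older checkouts use `convert-hf-to-gguf.py`), and the model path is a placeholder you would replace with your own downloaded model directory:

```shell
# Get llama.cpp, which ships the SafeTensors -> GGUF converter
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert a Hugging Face model directory (multiple .safetensors shards)
# into a single GGUF file. "/path/to/model-dir" is a placeholder.
python convert_hf_to_gguf.py /path/to/model-dir \
    --outfile model-f16.gguf \
    --outtype f16

# Optionally quantize the result to a smaller size (requires building
# llama.cpp first, e.g. with cmake) before loading it in Jan.
./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

The converter reads all the `.safetensors` shards in the directory and writes one GGUF file, which Jan can then import as a local model.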