How do I convert models from q4_2? #1399
Unanswered
skidd-level-100 asked this question in Q&A
-
You could get the originals at https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced/ and convert them yourself (that is all Pi3141 did in this case), or just wait for people to roll out the new files.
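For context, "convert them yourself" usually means the standard two-step llama.cpp flow: re-convert the original Hugging Face weights to an f16 ggml file, then quantize. This is only a sketch; the paths, output filenames, and quantization type below are illustrative assumptions, and the exact flags may change once the new format from #1305 lands.

```shell
# Sketch only: assumes a llama.cpp checkout with convert.py and a built
# ./quantize binary on PATH-relative paths; filenames are illustrative.

# 1. Re-convert the original Hugging Face weights to an f16 ggml file.
python3 convert.py models/alpaca-7b-nativeEnhanced \
    --outtype f16 --outfile models/alpaca-7b-ne-f16.bin

# 2. Quantize the f16 file to the desired quantization type.
./quantize models/alpaca-7b-ne-f16.bin models/alpaca-7b-ne-q4_0.bin q4_0
```

Re-converting from the originals like this sidesteps the format break entirely, since the new llama.cpp code writes the new format directly.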
-
In light of the current readme stating:

> warning TEMPORARY NOTICE ABOUT UPCOMING BREAKING CHANGE warning
> The quantization formats will soon be updated: #1305
> All ggml model files using the old format will not work with the latest llama.cpp code after that change is merged

I would like a few command examples, for when this change hits, to convert my alpaca-enhanced model (https://huggingface.co/Pi3141/alpaca-7b-native-enhanced) to whatever the new format will be. Any other conversion commands would also be great, as I have a couple more models I like (Wizard-7B-uncensored, and more).
#1288 is empty ATM.