Can I finetune llama3.3 using axolotl? #2283
Unanswered · hahmad2008 asked this question in Q&A
-
Hey, you could use the existing llama3 configs and point them at the new model: https://github.com/axolotl-ai-cloud/axolotl/blob/main/examples/llama-3/lora-8b.yml
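For example (a minimal sketch, assuming you start from that lora-8b.yml and keep the rest of the file unchanged), swapping in the new model would just mean changing the model reference:

```yaml
# Reuse examples/llama-3/lora-8b.yml and only change the base model.
# Llama 3.3 is a 70B model, so the memory/batch-size settings from the
# 8B example will likely need adjusting as well.
base_model: meta-llama/Llama-3.3-70B-Instruct
```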
-
Thanks @NanoCode012. Can we finetune a quantized model, e.g. an AWQ version? What config options need to be set?
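(For context, a bare sketch of the quantized-finetune options axolotl's examples use for a 4-bit QLoRA run via bitsandbytes; this is not the same thing as loading an AWQ checkpoint and is shown only as an illustration.)

```yaml
# QLoRA-style 4-bit loading (bitsandbytes), not an AWQ checkpoint;
# illustrative only.
base_model: meta-llama/Llama-3.3-70B-Instruct
load_in_4bit: true
adapter: qlora
lora_target_linear: true
```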
-
Could you please provide an example config file for finetuning llama3.3 on an instruction dataset?
model: meta-llama/Llama-3.3-70B-Instruct
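(A rough sketch adapted from the llama-3 examples in the repo; the dataset path and type are placeholders and the hyperparameters are illustrative, not tuned or tested on the 70B model.)

```yaml
# Sketch of an instruction-finetune config for Llama 3.3 70B,
# adapted from examples/llama-3; replace the placeholders.
base_model: meta-llama/Llama-3.3-70B-Instruct
load_in_4bit: true
adapter: qlora

datasets:
  - path: your/instruction-dataset   # placeholder dataset
    type: alpaca                     # set to match your dataset's format
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/llama-3.3-70b-qlora

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

bf16: auto
gradient_checkpointing: true
flash_attention: true
warmup_steps: 10
```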