Conversation


@zoq zoq commented Oct 15, 2025

This adds a Metal backend that supports LoRA finetuning. On macOS, building llama.cpp automatically selects the Metal backend. Command to test finetuning against the Metal backend:

./build/bin/llama-finetune-lora -m Qwen3_0.6B.Q8_0.gguf -f trump.txt -ngl 999 -c 256 -b 256 -ub 256 --flash-attn off
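For context, a minimal sketch of the build steps that would precede the command above, assuming the standard llama.cpp CMake workflow (on macOS the Metal backend is enabled by default, so no extra flag is needed):

```shell
# From a llama.cpp checkout on macOS; Metal is picked up automatically.
cmake -B build
cmake --build build --config Release -j

# The finetuning binary then lives under build/bin/,
# e.g. build/bin/llama-finetune-lora as used above.
```

The model file (Qwen3_0.6B.Q8_0.gguf) and training text (trump.txt) are supplied by the user and are not part of the repository.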


@olyasir olyasir left a comment

looks good!

@zoq zoq changed the base branch from temp-latest to temp-latest-finetuning October 16, 2025 20:03

