Add initial LoRA finetuning support; vulkan OUT_PROD; vulkan cross-entropy-backward #5
base: temp-finetuning
Conversation
Signed-off-by: vineet <[email protected]>
Steps to test llama.cpp inference on Android:

Make sure to check out the

For testing I'll reference the updated README: https://github.com/tetherto/qvac-ext-lib-llama.cpp/blob/bc7dd9f9288222394da37eac3d7adf71d409ad83/examples/training/README.md#using-trained-adapters

Command we used for testing:
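As a rough sketch of what a test run might look like (the binary names and flags below follow upstream llama.cpp's training example and common CLI options; they are assumptions, not the exact commands used in this PR):

```shell
# Build with the Vulkan backend enabled (assumption: standard CMake flow)
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Fine-tune a LoRA adapter against a base model (llama-finetune is the
# training example binary upstream; exact flags may differ in this fork)
./build/bin/llama-finetune -m base-model.gguf -f train-data.txt

# Run inference with the trained adapter applied on top of the base model
./build/bin/llama-cli -m base-model.gguf --lora lora-adapter.gguf -p "Hello"
```

The `--lora` flag applies a trained adapter at load time without merging it into the base model weights; see the linked README section for the adapter workflow this PR targets.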
Signed-off-by: vineet <[email protected]>
The PR adds: