forked from kaust-generative-ai/local-deployment
Status: Open
Labels: enhancement (New feature or request)
Description
llama.cpp supports applying different LoRA adapters to the same underlying pre-trained model. The following are the relevant `llama-cli` flags:
- `--lora FNAME`: Apply a LoRA (Low-Rank Adaptation) adapter to the model (implies --no-mmap). This allows you to adapt the pretrained model to specific tasks or domains.
- `--lora-base FNAME`: Optional model to use as a base for the layers modified by the LoRA adapter. This flag is used in conjunction with the `--lora` flag and specifies the base model for the adaptation.
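As a sketch of how these flags fit together, the invocations below apply an adapter at inference time. All file names here are placeholders, not files shipped with this repository:

```shell
# Apply a LoRA adapter to a pre-trained model (file names are placeholders).
# Note: --lora implies --no-mmap, so the model is fully loaded into memory.
llama-cli -m base-model.gguf \
    --lora my-task-adapter.gguf \
    -p "Summarize the following text:" \
    -n 128

# If the adapter was trained against a different base (for example a
# higher-precision version of a quantized model), point --lora-base at it:
llama-cli -m base-model-q4.gguf \
    --lora my-task-adapter.gguf \
    --lora-base base-model-f16.gguf \
    -p "Summarize the following text:" \
    -n 128
```

Swapping the `--lora` argument between runs lets the same base model serve several task-specific adapters without storing a full fine-tuned copy of the weights for each task.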