We provide diverse example scripts for fine-tuning large language models.

Make sure to execute the following commands in the `LLaMA-Factory` directory.

Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose the computing devices. By default, LLaMA-Factory uses all visible computing devices.
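For example, to restrict a run to particular devices (the device indices below are illustrative):

```bash
# Use only the first two GPUs.
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml

# On Ascend NPUs, select devices the same way.
ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```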
Basic usage:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```

Advanced usage:
```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
    learning_rate=1e-5 \
    logging_steps=1
```

Alternatively, use the provided shell script:

```bash
bash examples/train_lora/llama3_lora_sft.sh
```

LoRA pre-training:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```

LoRA supervised fine-tuning:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```

Multimodal LoRA supervised fine-tuning:

```bash
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_sft.yaml
```

DPO training:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
```

Multimodal DPO training:

```bash
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_dpo.yaml
```

Reward model training:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_reward.yaml
```

PPO training:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_ppo.yaml
```

KTO training:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_kto.yaml
```

Preprocessing a dataset (helpful for big datasets; set `tokenized_path` in the config to load the preprocessed dataset):

```bash
llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
```
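Once preprocessing has run, later training jobs can load the cached tokens instead of re-tokenizing the raw data. A minimal sketch, assuming the preprocessing config wrote the cache to a `tokenized_path` directory (the path below is illustrative and must match your config):

```bash
# Train directly from the tokenized cache; saves/llama3-tokenized is a hypothetical path.
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
    tokenized_path=saves/llama3-tokenized
```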
Evaluation:

```bash
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```

Multi-node supervised fine-tuning (run one command per node):

```bash
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```

Elastic and fault-tolerant multi-node fine-tuning: to launch a multi-node fine-tuning job with elastic and fault-tolerant nodes, execute the following command on every node. The number of elastic nodes ranges over `MIN_NNODES:MAX_NNODES`, and each node is allowed to restart after failures at most `MAX_RESTARTS` times. `RDZV_ID` should be set to a unique job ID shared by all nodes participating in the job. For more information, refer to the official documentation of torchrun.

```bash
FORCE_TORCHRUN=1 MIN_NNODES=1 MAX_NNODES=3 MAX_RESTARTS=3 RDZV_ID=llamafactory MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
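For reference, these environment variables correspond to torchrun's elastic launch flags. A rough, hypothetical equivalent invocation, assuming 8 devices per node and the `src/train.py` entry point (the env-var form above is the supported interface):

```bash
# Illustrative mapping onto raw torchrun elastic flags.
torchrun --nnodes 1:3 --max_restarts 3 --rdzv_id llamafactory \
    --rdzv_backend c10d --rdzv_endpoint 192.168.0.1:29500 \
    --nproc_per_node 8 src/train.py examples/train_full/llama3_full_sft.yaml
```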
Supervised fine-tuning with DeepSpeed ZeRO-3:

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```

Supervised fine-tuning with Ray:

```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```

QLoRA fine-tuning with on-the-fly quantization:

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
```

QLoRA fine-tuning with bitsandbytes quantization on NPU:

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_bnb_npu.yaml
```

QLoRA fine-tuning with a GPTQ-quantized model:

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
```

QLoRA fine-tuning with an AWQ-quantized model:

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
```

QLoRA fine-tuning with an AQLM-quantized model:

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
```

Full-parameter supervised fine-tuning on a single node:

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```

Full-parameter supervised fine-tuning on multiple nodes:

```bash
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```

Multimodal full-parameter supervised fine-tuning:

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```

Merging LoRA adapters (note: do not use a quantized model or the `quantization_bit` argument when merging LoRA adapters):

```bash
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
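This note matters especially after QLoRA training: merge the adapter into the original full-precision base model, not a 4-bit checkpoint. A minimal sketch, assuming the usual key=value overrides also apply to `export` (the model ID is illustrative):

```bash
# Merge against the non-quantized base model; do NOT pass quantization_bit here.
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml \
    model_name_or_path=meta-llama/Meta-Llama-3-8B-Instruct
```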
Quantizing a model with AutoGPTQ:

```bash
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```

Exporting a full-parameter fine-tuned model:

```bash
llamafactory-cli export examples/merge_lora/llama3_full_sft.yaml
```

Batch inference with vLLM, followed by BLEU and ROUGE scoring:

```bash
python scripts/vllm_infer.py --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct --template llama3 --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```
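To score a LoRA fine-tuned model rather than the base model, the inference script can presumably also attach an adapter; a sketch assuming an `--adapter_name_or_path` flag and an illustrative adapter path (verify both against `scripts/vllm_infer.py`):

```bash
# Hypothetical: batch inference with a LoRA adapter, then BLEU/ROUGE scoring.
python scripts/vllm_infer.py \
    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
    --adapter_name_or_path saves/llama3-8b/lora/sft \
    --template llama3 --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```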
Chatting via the command line:

```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```

Chatting via the web UI:

```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
```

Launching an OpenAI-style API:

```bash
llamafactory-cli api examples/inference/llama3_lora_sft.yaml
```

Full-parameter fine-tuning with GaLore:

```bash
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```

Full-parameter fine-tuning with APOLLO:

```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```

Full-parameter fine-tuning with BAdam:

```bash
llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
```

Full-parameter fine-tuning with Adam-mini:

```bash
llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml
```

Full-parameter fine-tuning with Muon:

```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```

LoRA+ fine-tuning:

```bash
llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
```

PiSSA fine-tuning:

```bash
llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml
```

Mixture-of-Depths fine-tuning:

```bash
llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
```

LLaMA-Pro fine-tuning (expand the model first, then fine-tune it; see the sketch after this example):

```bash
bash examples/extras/llama_pro/expand.sh
llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
```
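For context, LLaMA-Pro expands the base model with additional blocks and then freeze-tunes only the expanded blocks. A hypothetical peek at what the expansion step might invoke (script arguments and paths are assumptions; check `examples/extras/llama_pro/expand.sh` for the actual call):

```bash
# Illustrative sketch: insert extra decoder blocks and save the expanded checkpoint.
python scripts/llama_pro.py \
    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
    --output_dir models/llama3-8b-pro \
    --num_expand 8
```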
FSDP+QLoRA fine-tuning:

```bash
bash examples/extras/fsdp_qlora/train.sh
```

OFT fine-tuning:

```bash
llamafactory-cli train examples/extras/oft/llama3_oft_sft.yaml
```

QOFT fine-tuning with bitsandbytes quantization on NPU:

```bash
llamafactory-cli train examples/extras/qoft/llama3_oft_sft_bnb_npu.yaml
```