The dialogues can be multi-turn and can contain a system prompt. For more details…
We use bf16 weights for finetuning. If you downloaded fp8 DeepSeek V3/R1 weights, you can use this [script](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/fp8_cast_bf16.py) to convert them to bf16 on GPU. For Ascend NPU, use this [script](https://gitee.com/ascend/ModelZoo-PyTorch/blob/master/MindIE/LLM/DeepSeek/DeepSeek-V2/NPU_inference/fp8_cast_bf16.py) instead.
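
For reference, the GPU conversion script is typically invoked as sketched below (based on the DeepSeek-V3 repository's documented usage; the paths are placeholders, and the exact flags may differ between versions, so check the script's `--help`):

```bash
# Run from the inference/ directory of the DeepSeek-V3 repository
python fp8_cast_bf16.py \
    --input-fp8-hf-path /path/to/DeepSeek-V3-fp8 \
    --output-bf16-hf-path /path/to/DeepSeek-V3-bf16
```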
We have also added an example of how to load a LoRA adapter and run inference with it. The snippet below is a minimal sketch using the standard `transformers` and `peft` loading APIs (the model and adapter paths are placeholders for your own checkpoints):
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)
from peft import PeftModel
import torch

# Set model paths
model_name = "Qwen/Qwen2.5-3B"
lora_adapter = "Qwen2.5-3B_lora"  # Path to your LoRA adapter

# Load the tokenizer and the bf16 base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the trained LoRA adapter to the base model
model = PeftModel.from_pretrained(model, lora_adapter)

# Run a short generation to verify the adapter loads correctly
inputs = tokenizer("What is ColossalAI?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
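
If you no longer need the adapter to stay separate, you can optionally fold it into the base weights with `model = model.merge_and_unload()` (a standard `peft` method); this removes the adapter indirection and slightly speeds up inference.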