Commit 32eb6f5: Update README.md
1 parent fe7ec53

1 file changed: llama_adapter_v2_multimodal/README.md (7 additions, 3 deletions)

@@ -1,7 +1,7 @@
 # LLaMA-Adapter-V2 Multi-modal

 ## News
-
+* [July 5, 2023] Release pre-training and fine-tuning code.
 * [May 26, 2023] Initial release.


@@ -23,7 +23,7 @@
 └── tokenizer.model
 ```

-## Usage
+## Inference

 Here is a simple inference script for LLaMA-Adapter V2. The pre-trained model will be downloaded directly from [Github Release](https://github.com/ZrrSkywalker/LLaMA-Adapter/releases/tag/v.2.0.0).

@@ -38,6 +38,7 @@ device = "cuda" if torch.cuda.is_available() else "cpu"
 llama_dir = "/path/to/LLaMA/"

 model, preprocess = llama.load("BIAS-7B", llama_dir, device)
+model.eval()

 prompt = llama.format_prompt("Please introduce this painting.")
 img = Image.fromarray(cv2.imread("../docs/logo_v1.png"))
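
For context, here is a minimal sketch of the full inference flow this hunk belongs to, assembled from the fragments visible in the diff. The `preprocess(...)` usage and the `model.generate(...)` call are assumptions about the package API, not shown in this commit:

```python
# Minimal inference sketch for LLaMA-Adapter V2 (assumptions noted inline).
import cv2
import torch
from PIL import Image

import llama  # the llama_adapter_v2_multimodal package from this repo

device = "cuda" if torch.cuda.is_available() else "cpu"
llama_dir = "/path/to/LLaMA/"  # directory holding the original LLaMA weights

# Downloads the BIAS-7B adapter weights and returns the image preprocessor.
model, preprocess = llama.load("BIAS-7B", llama_dir, device)
model.eval()  # the line this commit adds: switch to inference mode

prompt = llama.format_prompt("Please introduce this painting.")
img = Image.fromarray(cv2.imread("../docs/logo_v1.png"))
# Assumption: preprocess is a torchvision-style transform returning a tensor.
img = preprocess(img).unsqueeze(0).to(device)

# Assumption: the package exposes a generate() helper taking (image, prompts).
result = model.generate(img, [prompt])[0]
print(result)
```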
@@ -71,4 +72,7 @@ import llama
 print(llama.available_models())
 ```

-Now we provide `BIAS-7B`, which fine-tunes the `bias` and `norm` parameters of LLaMA. We will include more pretrained models in the future, such as the LoRA fine-tuning model `LoRA-7B` and partial-tuning model `PARTIAL-7B`.
+Now we provide `BIAS-7B`, which fine-tunes the `bias` and `norm` parameters of LLaMA. We will include more pretrained models in the future, such as the LoRA fine-tuning model `LoRA-7B` and partial-tuning model `PARTIAL-7B`.
+
+## Pre-training & Fine-tuning
+See [train.md](docs/train.md)
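
As a usage note on the hunk above, a small sketch of how the listed model names might feed into `llama.load`. Only `BIAS-7B` is released at this commit; treating every listed name as loadable is an assumption:

```python
import llama

# At this commit only "BIAS-7B" is released; LoRA-7B and PARTIAL-7B are
# announced as future additions, so the list is assumed to be ["BIAS-7B"].
names = llama.available_models()
print(names)

# Assumption: any name returned here is accepted by llama.load().
model, preprocess = llama.load(names[0], "/path/to/LLaMA/", "cuda")
```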
