README.md: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Official implementation of ['LLaMA-Adapter: Efficient Fine-tuning of Language Mo
This repo proposes **LLaMA-Adapter (V2)**, a lightweight adaptation method for fine-tuning **Instruction-following** and **Multi-modal** [LLaMA](https://github.com/facebookresearch/llama) models 🔥.

-Try out the web demo 🤗 of LLaMA-Adapter: [](https://huggingface.co/spaces/csuhan/LLaMA-Adapter), [LLaMA-Adapter V2](http://llama-adapter.opengvlab.com/) and [ImageBind-LLM](http://imagebind-llm.opengvlab.com/).
+Try out the web demo 🤗 of LLaMA-Adapter: [](https://huggingface.co/spaces/csuhan/LLaMA-Adapter)[](https://openxlab.org.cn/apps/detail/RenRuiZhang/LLaMA_Adapter), [LLaMA-Adapter V2](http://llama-adapter.opengvlab.com/) and [ImageBind-LLM](http://imagebind-llm.opengvlab.com/).

## News

- **[2023.08.28]** We release the quantized LLM with [OmniQuant](https://github.com/OpenGVLab/OmniQuant), an efficient, accurate, and omnibearing (even extremely low-bit) quantization algorithm. The multimodal version is coming soon. 🔥🔥🔥
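The News entry above announces low-bit quantized LLM weights produced with OmniQuant. For readers unfamiliar with weight-only quantization, here is a minimal sketch of a plain round-to-nearest, per-output-channel quantizer; this is only an illustration of what low-bit weight quantization does, not OmniQuant's algorithm (which learns its quantization parameters), and the function names, bit-width, and shapes are illustrative assumptions.

```python
# A minimal sketch of weight-only low-bit quantization (round-to-nearest,
# symmetric, per output channel). NOT the OmniQuant algorithm -- just an
# illustration of what "quantized LLM weights" means.
import numpy as np

def quantize_weight(w: np.ndarray, bits: int = 4):
    """Quantize a 2-D weight matrix to signed `bits`-bit integers per row."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)        # guard against all-zero rows
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_weight(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover a floating-point approximation of the original weights."""
    return q.astype(np.float32) * scale

# Example: quantize a random 8x16 weight matrix and check the reconstruction error.
w = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_weight(w, bits=4)
w_hat = dequantize_weight(q, s)
print("max abs error:", np.abs(w - w_hat).max())
```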