
Commit 32f5118

Update README.md
1 parent b82e90b

File tree

1 file changed: +2 −0


README.md

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@ This repo proposes **LLaMA-Adapter (V2)**, a lightweight adaption method for fin
 
 Try out the web demo 🤗 of LLaMA-Adapter: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/csuhan/LLaMA-Adapter), [LLaMA-Adapter V2](http://llama-adapter.opengvlab.com/) and [ImageBind-LLM](http://imagebind-llm.opengvlab.com/).
 
+Join us at [Wechat](https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/docs/wechat.md)!
+
 ## News
 - **[2023.11.11]** We release [SPHINX](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX), a new multi-modal LLM, which is a huge leap from LLaMA-Adapter-V2. 🔥🔥🔥
 - **[2023.10.11]** We release **LLaMA-Adapter V2.1**, an improved version of LLaMA-Adapter V2 with stronger multi-modal reasoning performance. Check [llama_adapter_v2_multimodal7b](llama_adapter_v2_multimodal7b) for details.
