
Commit 333d6fc

Merge pull request #279 from LLaVA-VL/yhzhang/llava_video_dev

Update LLaVA-Video paper link

2 parents: a4c9bce + 44bb013

File tree

1 file changed: +2 -2 lines changed


README.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@
 </p>

 # LLaVA-NeXT: Open Large Multimodal Models
-[![Static Badge](https://img.shields.io/badge/llava_video-paper-green)](http://arxiv.org/abs/2410.0271)
+[![Static Badge](https://img.shields.io/badge/llava_video-paper-green)](http://arxiv.org/abs/2410.02713)
 [![Static Badge](https://img.shields.io/badge/llava_onevision-paper-green)](https://arxiv.org/abs/2408.03326)
 [![llava_next-blog](https://img.shields.io/badge/llava_next-blog-green)](https://llava-vl.github.io/blog/)

@@ -30,7 +30,7 @@
 📄 **Explore more**:
 - [LLaVA-Video-178K Dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K): Download the dataset.
 - [LLaVA-Video Models](https://huggingface.co/collections/lmms-lab/llava-video-661e86f5e8dabc3ff793c944): Access model checkpoints.
-- [Paper](http://arxiv.org/abs/2410.0271): Detailed information about LLaVA-Video.
+- [Paper](http://arxiv.org/abs/2410.02713): Detailed information about LLaVA-Video.
 - [LLaVA-Video Documentation](https://github.com/LLaVA-VL/LLaVA-NeXT/blob/main/docs/LLaVA_Video_1003.md): Guidance on training, inference and evaluation.

 - [2024/09/13] 🔥 **🚀 [LLaVA-OneVision-Chat](docs/LLaVA_OneVision_Chat.md)**. The new LLaVA-OV-Chat (7B/72B) significantly improves the chat experience of LLaVA-OV. 📄
