README.md (1 addition, 1 deletion)
@@ -4,5 +4,5 @@
- This repository contains the implementation for our ICCV 2025 submission on evaluating and training multi-modal large language models for action recognition.
- Our code is built on [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT), and files in the directory `llava/action` are related to this work. We thank the authors of LLaVA-NeXT for making their code publicly available.
- A diff can be generated against the upstream LLaVA-NeXT codebase to see our modifications.
- - The code will be open sourced when published. For review, the license is [no license](https://choosealicense.com/no-permission/).
+ - The code will be made publicly available when published. For review, the provided code and model license is [no license](https://choosealicense.com/no-permission/).
- Please see the `example` directory for a demo notebook and installation instructions.