
Commit d51663d

Minimal Changes to Colab Notebook (#4)

* Created using Colab
* Created using Colab
* Delete llavaction_video_demo.ipynb
* Update llavaction_video_demo.ipynb - linked to main repo
* Update README.md - added colab button
1 parent dd12332 commit d51663d

File tree

2 files changed: +1007, -87 lines


README.md

Lines changed: 8 additions & 9 deletions
````diff
@@ -18,24 +18,23 @@ Understanding human behavior requires measuring behavioral actions. Due to its c
 - This repository contains the implementation for our preprint on evaluating and training multi-modal large language models for action recognition.
 - Our code is built on [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT), and files in the directory `llavaction/action` are related to our work. We thank the authors of LLaVA-NeXT for making their code publicly available.
 - The files in the `/eval`, `/model`, `/serve` and `/train` are directly from [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT), unless modified and noted below.
-- Modified files are:
-- /model/llava_arch.py
-- /model/language_model/llava_qwen.py
-- /train/train.py
-- /train/llava_trainer.py
-- /utils.py
-- A diff can be generated against the commit (79ef45a6d8b89b92d7a8525f077c3a3a9894a87d) of LLaVA-NeXT to see our modifications.
+- `/model/llava_arch.py`
+- `/model/language_model/llava_qwen.py`
+- `/train/train.py`
+- `/train/llava_trainer.py`
+- `/utils.py`
 
 ## Demo
-- Currently, we provide code to run video inference in a Jupyter Notebook (which can be run on Google Colaboratory).
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AdaptiveMotorControlLab/LLaVAction/blob/main/example/llavaction_video_demo.ipynb)
+We provide code to run video inference in a Jupyter Notebook (which can be run on Google Colaboratory).
 
 
 ### Installation guide for video inference:
 ```bash
 conda create -n llavaction python=3.10 -y
 conda activate llavaction
 pip install --upgrade pip # Enable PEP 660 support.
-pip install -e .
+pip install --pre llavaction
 ```
 
 - Please see the `/example` directory for a demo notebook.
````
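The README change above switches the install step from an editable source checkout (`pip install -e .`) to the published pre-release package. As a hedged sketch, assuming `llavaction` is the package name on PyPI as the updated README implies, the full setup plus a quick import check might look like:

```shell
# Sketch of the updated install path; assumes the `llavaction`
# pre-release wheel referenced in the README diff is published on PyPI.
conda create -n llavaction python=3.10 -y
conda activate llavaction
pip install --upgrade pip        # Enable PEP 660 support.
pip install --pre llavaction     # --pre opts in to pre-release versions
# Hypothetical smoke test: confirm the package imports before opening the notebook.
python -c "import llavaction"
```

The `--pre` flag matters here because pip skips pre-release versions by default; once a stable release exists, a plain `pip install llavaction` should suffice.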

0 commit comments
