
Commit a7ae1c4

corrected broken links and broken references (#46)

* corrected broken links and broken references
* Update README.md - putting /conda only works for linux users, so please do not add that @yeshaokai
* Update README.md - always add full links; in external docs or on PyPI these are also broken links @yeshaokai
* Update amadesuGPT.yml - keep amadeusgpt as a dependency ...

Co-authored-by: Mackenzie Mathis <[email protected]>

1 parent e4333df commit a7ae1c4

File tree

5 files changed: +17 / -17 lines


README.md

Lines changed: 6 additions & 6 deletions

```diff
@@ -72,12 +72,12 @@ You can git clone (or download) this repo to grab a copy and go. We provide exam
 
 ### Here are a few demos that could fuel your own work, so please check them out!
 
-1) [Draw a region of interest (ROI) and ask, "when is the animal in the ROI?"](notebook/EPM_demo.ipynb)
-2) [Use a DeepLabCut SuperAnimal pose model to do video inference](notebook/custom_mouse_demo.ipynb) (make sure you use a GPU if you don't have corresponding DeepLabCut keypoint files already!)
-3) [Write your own integration modules and use them](notebook/Horse_demo.ipynb). Bonus: [source code](amadeusgpt/integration_modules). Make sure you delete the cached modules_embedding.pickle if you add new modules!
-4) [Multi-Animal social interactions](notebook/MABe_demo.ipynb)
-5) [Reuse the task program generated by the LLM and run it on different videos](notebook/MABe_demo.ipynb)
-6) You can ask one query across multiple videos. Put your keypoint files and video files (pairs) in the same folder and specify the `data_folder` as shown in this [Demo](notebook/custom_mouse_video.ipynb). Make sure your video file and keypoint file follow the normal DeepLabCut convention, i.e., `prefix.mp4` `prefix*.h5`.
+1) [Draw a region of interest (ROI) and ask, "when is the animal in the ROI?"](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/EPM_demo.ipynb)
+2) [Use a DeepLabCut SuperAnimal pose model to do video inference](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/custom_mouse_demo.ipynb) (make sure you use a GPU if you don't have corresponding DeepLabCut keypoint files already!)
+3) [Write your own integration modules and use them](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/Horse_demo.ipynb). Bonus: [source code](amadeusgpt/integration_modules). Make sure you delete the cached modules_embedding.pickle if you add new modules!
+4) [Multi-Animal social interactions](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/MABe_demo.ipynb)
+5) [Reuse the task program generated by the LLM and run it on different videos](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/MABe_demo.ipynb)
+6) You can ask one query across multiple videos. Put your keypoint files and video files (pairs) in the same folder and specify the `data_folder` as shown in this [Demo](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/custom_mouse_video.ipynb). Make sure your video file and keypoint file follow the normal DeepLabCut convention, i.e., `prefix.mp4` `prefix*.h5`.
 
 ### Minimal example
 
```
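The multi-video convention in item 6 of the new list is easiest to see as a folder layout. A minimal sketch: only the `prefix.mp4` / `prefix*.h5` pairing comes from the README; the file names below are hypothetical.

```bash
# Build an illustrative data_folder for a cross-video query: each video and its
# keypoint file share a prefix, per the DeepLabCut convention quoted above.
# All file names here are made up for the example.
mkdir -p data_folder
touch data_folder/mouse1.mp4 data_folder/mouse1DLC_resnet50.h5
touch data_folder/mouse2.mp4 data_folder/mouse2DLC_resnet50.h5
ls data_folder
# mouse1.mp4  mouse1DLC_resnet50.h5  mouse2.mp4  mouse2DLC_resnet50.h5
```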

conda/amadesuGPT.yml

Lines changed: 2 additions & 2 deletions

```diff
@@ -6,6 +6,6 @@ dependencies:
   - python==3.10
   - pytables==3.8.0
   - hdf5
-  - pip
   - jupyter
-  - amadeusGPT
+  - pip:
+    - amadeusgpt
```
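For context, here is a sketch of how the whole file should read after this hunk, written as a heredoc. The `name:` field is an assumption (the diff only shows the dependencies block), inferred from the `conda activate amadeusgpt` lines in the install scripts below.

```bash
# Sketch of conda/amadesuGPT.yml after this commit; the name: field is assumed,
# everything under dependencies: comes from the diff above.
cat > conda/amadesuGPT.yml <<'EOF'
name: amadeusgpt
dependencies:
  - python==3.10
  - pytables==3.8.0
  - hdf5
  - jupyter
  - pip:
    - amadeusgpt
EOF
```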

conda/install_cpu.sh

Lines changed: 3 additions & 3 deletions

```diff
@@ -1,10 +1,10 @@
 #!/bin/bash
 source /Users/shaokaiye/miniforge3/bin/activate
-conda env create -f conda/amadesuGPT-cpu.yml
-conda activate amadeusgpt-cpu
+conda env create -f conda/amadesuGPT.yml
+conda activate amadeusgpt
 conda install pytorch torchvision cpuonly -c pytorch
 pip install "git+https://github.com/DeepLabCut/DeepLabCut.git@pytorch_dlc#egg=deeplabcut"
 pip install pycocotools
 pip install -e .[streamlit]
 # install the python kernel
-python -m ipykernel install --user --name amadeusgpt-cpu --display-name "amadeusgpt-cpu"
+python -m ipykernel install --user --name amadeusgpt --display-name "amadeusgpt"
```
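One way to run this script end to end; the only assumption is that you first point the hard-coded `source` line at your own miniconda/miniforge installation:

```bash
# From the repo root, after editing the `source` path in conda/install_cpu.sh:
bash conda/install_cpu.sh
# The last line registers an "amadeusgpt" Jupyter kernel under --user, so any
# Jupyter you launch should see it:
jupyter notebook   # then pick the "amadeusgpt" kernel for the demo notebooks
```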

conda/install_gpu.sh

Lines changed: 3 additions & 3 deletions

```diff
@@ -1,11 +1,11 @@
 #!/bin/bash
 source /mnt/md0/shaokai/miniconda3/bin/activate
-conda env create -f conda/amadesuGPT-gpu.yml
-conda activate amadeusgpt-gpu
+conda env create -f conda/amadesuGPT.yml
+conda activate amadeusgpt
 # adjust this line according to your cuda version
 conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
 pip install "git+https://github.com/DeepLabCut/DeepLabCut.git@pytorch_dlc#egg=deeplabcut"
 pip install pycocotools
 pip install -e .[streamlit]
 # install the python kernel
-python -m ipykernel install --user --name amadeusgpt-gpu --display-name "amadeusgpt-gpu"
+python -m ipykernel install --user --name amadeusgpt --display-name "amadeusgpt"
```
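The `pytorch-cuda=12.1` pin is the one line the script itself tells you to adjust. A quick way to check what your driver supports before running it:

```bash
# nvidia-smi prints the maximum CUDA version the driver supports in its header.
nvidia-smi
# If it reports e.g. CUDA 11.8, change the pin in conda/install_gpu.sh to:
#   conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
```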

conda/install_minimal.sh

Lines changed: 3 additions & 3 deletions

```diff
@@ -1,8 +1,8 @@
 #!/bin/bash
 # change this to your own miniconda / miniforge path
 source /Users/shaokaiye/miniforge3/bin/activate
-conda env create -f conda/amadesuGPT-minimal.yml
-conda activate amadeusgpt-minimal
+conda env create -f conda/amadesuGPT.yml
+conda activate amadeusgpt
 pip install pycocotools
 pip install -e .[streamlit]
-python -m ipykernel install --user --name amadeusgpt-minimal --display-name "amadeusgpt-minimal"
+python -m ipykernel install --user --name amadeusgpt --display-name "amadeusgpt"
```
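After the minimal script finishes, a couple of optional sanity checks, assuming the env was created under the name `amadeusgpt` as the script expects:

```bash
# The editable install (`pip install -e .`) should make the package importable:
conda run -n amadeusgpt python -c "import amadeusgpt"
# And the ipykernel step should have registered the Jupyter kernel:
conda run -n amadeusgpt jupyter kernelspec list | grep amadeusgpt
```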
