@@ -17,7 +17,7 @@ This library is based on three research projects for monocular/stereo 3D human l
[T. Mordan](https://people.epfl.ch/taylor.mordan/?lang=en), [A. Alahi](https://scholar.google.com/citations?user=UIhXQ64AAAAJ&hl=en)_, ICRA 2021 <br />
__[Article](https://arxiv.org/abs/2008.10913)__&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__[Citation](#Citation)__&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__[Video](https://www.youtube.com/watch?v=pGssROjckHU)__

- <img src="docs/out_test_000840_multi.jpg" width="700" />
+ <img src="docs/out_000840_multi.jpg" width="700" />

---

@@ -125,24 +125,24 @@ If you provide a ground-truth json file to compare the predictions of the networ
For an example image, run the following command:

```sh
- python3 -m monoloco.run predict docs/test_002282.png \
+ python3 -m monoloco.run predict docs/002282.png \
--path_gt names-kitti-200615-1022.json \
-o <output directory> \
--long-edge <rescale the image by providing dimension of long side> \
--n_dropout <50 to include epistemic uncertainty, 0 otherwise>
```

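As background on the `--n_dropout` option above: epistemic uncertainty is commonly estimated with Monte Carlo dropout, i.e. the forward pass is repeated with dropout left active and the spread of the predictions is read off as the uncertainty. A minimal sketch of that idea only — the function and variable names below are illustrative, not MonoLoco's API:

```python
# Sketch of Monte Carlo dropout, the idea behind --n_dropout: run the
# network n times with dropout still active and take the spread of the
# predicted depths as epistemic uncertainty. All names are hypothetical.
import random
import statistics

def forward_with_dropout(depth=10.0, p_drop=0.2):
    """Stand-in for one stochastic forward pass of a depth regressor."""
    keep = random.random() >= p_drop      # dropout stays on at test time
    return depth * (1.25 if keep else 0.0)

random.seed(0)
preds = [forward_with_dropout() for _ in range(50)]   # n_dropout = 50 passes
mean_depth = statistics.mean(preds)                   # point estimate
epistemic_std = statistics.stdev(preds)               # epistemic uncertainty
```

With `--n_dropout 0` only a single deterministic pass is made, so no such spread is available.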
- ![predict](docs/out_test_002282.png.multi.jpg)
+ ![predict](docs/out_002282.png.multi.jpg)

To show all the instances estimated by MonoLoco, add the argument `--show_all` to the above command.

- ![predict_all](docs/out_test_002282.png.multi_all.jpg)
+ ![predict_all](docs/out_002282.png.multi_all.jpg)

It is also possible to run [openpifpaf](https://github.com/vita-epfl/openpifpaf) directly
by using `--mode keypoints`. All the other pifpaf arguments are also supported
and can be checked with `python3 -m monoloco.run predict --help`.

- ![predict](docs/out_test_002282_pifpaf.jpg)
+ ![predict](docs/out_002282_pifpaf.jpg)


**Stereo Examples** <br />
@@ -156,12 +156,12 @@ You can load one or more image pairs using glob expressions. For example:

```sh
python3 -m monoloco.run predict --mode stereo \
- --glob docs/test_000840*.png \
+ --glob docs/000840*.png \
  --path_gt <to match results with ground-truths> \
  -o data/output --long-edge 2500
```

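A note on the glob: the two images of a stereo pair share a filename prefix, so a single pattern selects both. Purely as an illustration of how the shell expands such a pattern — the file names below are made up, not the dataset's actual layout:

```shell
# Illustration only: create hypothetical files and expand the pair's glob.
mkdir -p /tmp/glob_demo && cd /tmp/glob_demo
touch 000840.png 000840_r.png 005523.png
echo 000840*.png
# → 000840.png 000840_r.png  (005523.png is not matched)
```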
- ![Crowded scene](docs/out_test_000840_multi.jpg)
+ ![Crowded scene](docs/out_000840_multi.jpg)

```sh
python3 -m monoloco.run predict --glob docs/005523*.png --output_types multi \
@@ -183,7 +183,7 @@ For more info, run:
**Examples** <br>
An example from the Collective Activity Dataset is provided below.

- <img src="docs/test_frame0032.jpg" width="500" />
+ <img src="docs/frame0032.jpg" width="500" />

To visualize social distancing, run the command below:

@@ -192,11 +192,11 @@ pip3 install scipy
```

```sh
- python3 -m monoloco.run predict docs/test_frame0032.jpg \
+ python3 -m monoloco.run predict docs/frame0032.jpg \
--activities social_distance --output_types front bird
```

- <img src="docs/out_test_frame0032_front_bird.jpg" width="700" />
+ <img src="docs/out_frame0032_front_bird.jpg" width="700" />

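At its core, checking social distance amounts to thresholding pairwise distances between the estimated bird's-eye-view positions. A minimal sketch of that idea using the scipy dependency installed above, with hypothetical (x, z) coordinates in metres — MonoLoco's actual criterion is richer than a plain distance threshold, and none of the names below are its API:

```python
# Minimal sketch: flag pairs of people whose bird's-eye-view positions
# (x, z), in metres, lie closer than a threshold. Inputs are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def close_pairs(positions, threshold=2.0):
    """Return index pairs (i, j) with Euclidean distance below threshold."""
    dist = squareform(pdist(np.asarray(positions, dtype=float)))
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i, j] < threshold]

pairs = close_pairs([(0.0, 5.0), (1.0, 5.0), (10.0, 5.0)])
# → [(0, 1)]: only the first two people are within 2 m of each other
```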
## C) Hand-raising detection
To detect a raised hand, you can add the argument `--activities raise_hand` to the prediction command.