The network also estimates orientation and box dimensions. Results are saved in a JSON file when using the command
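For illustration, the saved file can be inspected with standard JSON tooling. A minimal sketch, assuming a hypothetical record layout (`xyz_pred` and `boxes` here are assumptions, not the documented monoloco output schema):

```python
import json
import os
import tempfile

# Hypothetical example of a saved prediction record; the real monoloco
# output schema may differ (field names here are assumptions).
record = {"xyz_pred": [[0.5, 1.2, 7.8]], "boxes": [[100.0, 120.0, 180.0, 360.0]]}

path = os.path.join(tempfile.mkdtemp(), "predictions.json")
with open(path, "w") as f:
    json.dump(record, f)

with open(path) as f:
    loaded = json.load(f)

print(loaded["xyz_pred"][0][2])  # z-coordinate (depth) of the first detection → 7.8
```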
## E) Webcam
You can use a webcam as input with the `--webcam` argument. By default, `--z_max` is set to 10 and `--long-edge` to 144 when using the webcam. If multiple webcams are plugged in, you can choose between them with `--camera`; for instance, to use the second camera, add `--camera 1`.
You also need to install `opencv-python` to use this feature:
```sh
pip3 install opencv-python
```
Example command:
```sh
python3 -m monoloco.run predict --webcam \
--activities raise_hand social_distance
```
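Conceptually, `--z_max` acts as a depth cutoff: detections estimated to be farther than `z_max` metres are ignored. A minimal sketch of that idea (the function below is illustrative, not monoloco's internal code):

```python
def filter_by_z_max(depths, z_max=10.0):
    """Keep only detections whose estimated depth is within z_max metres."""
    return [z for z in depths if z <= z_max]

# With the webcam default of z_max = 10, farther detections are dropped.
print(filter_by_z_max([2.5, 8.0, 12.3, 25.0]))  # → [2.5, 8.0]
```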
We train on the KITTI dataset (MonoLoco/MonoLoco++/MonStereo) or the nuScenes dataset.
Results for [MonoLoco++](#tables) are obtained with:
If you are interested in the original results of the MonoLoco ICCV article (now superseded by MonoLoco++), please refer to tag v0.4.9 of this repository.
Finally, for a more extensive list of available parameters, run:
`python3 -m monstereo.run train --help`
<br />
Download KITTI images (from the left and right cameras) and ground-truth files (labels).
The network takes 2D keypoint annotations as input. To create them, run PifPaf over the saved images:
```sh
python3 -m openpifpaf.predict \
--glob "data/kitti/images/*.png" \
--json-output <directory to contain predictions> \
  --checkpoint=shufflenetv2k30
```
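The `--glob` pattern expands like an ordinary filename glob, selecting every `.png` under the images folder. A small self-contained illustration, using a temporary directory as a stand-in for `data/kitti/images`:

```python
import glob
import os
import tempfile

# Temporary stand-in for data/kitti/images.
root = tempfile.mkdtemp()
for name in ("000001.png", "000002.png", "notes.txt"):
    open(os.path.join(root, name), "w").close()

# Same kind of pattern as --glob "data/kitti/images/*.png".
matches = sorted(glob.glob(os.path.join(root, "*.png")))
print([os.path.basename(m) for m in matches])  # → ['000001.png', '000002.png']
```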
Once this step is complete, the commands below transform all the annotations into the input format for training.
For MonoLoco++:
```sh
python3 -m monoloco.run prep --dir_ann <directory that contains annotations>
```
For MonStereo:
```sh
python3 -m monoloco.run prep --mode stereo --dir_ann <directory that contains left annotations>
```
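Both prep commands read a directory of PifPaf JSON files, one per image. As a rough sketch of what such a directory contains (the exact record layout is an assumption here; PifPaf stores each pose as a flat list of x, y, confidence triplets for the 17 COCO keypoints):

```python
import json
import os
import tempfile

ann_dir = tempfile.mkdtemp()

# One fake PifPaf-style annotation file: a single detection whose 17 COCO
# keypoints are flattened into (x, y, confidence) triplets. Schema assumed.
fake_annotation = [{"keypoints": [0.0] * (17 * 3), "score": 0.9}]
with open(os.path.join(ann_dir, "000001.png.predictions.json"), "w") as f:
    json.dump(fake_annotation, f)

json_files = [f for f in os.listdir(ann_dir) if f.endswith(".json")]
print(len(json_files), len(fake_annotation[0]["keypoints"]))  # → 1 51
```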
## Collective Activity Dataset
PifPaf annotations should also be saved in a single folder and can be created with:
To also include the geometric baselines and MonoLoco, download a MonoLoco model and save it locally.
The evaluation script runs the model over all the annotations and compares the results with the KITTI ground truth and the downloaded baselines. To do this, run: