This is the demo application with smartlab object detection and smartlab action recognition algorithms.
This demo takes multi-view video inputs to identify objects and actions, then evaluates scores for the teacher's reference.
The UI is organized as follows:

**The left picture** and **right picture** show the top view and the side view of the test bench, respectively. For the object detection part,
**blue bounding boxes** are drawn around the detected objects. Below these pictures, a **progress bar** is shown for the action types, and the colors of the actions correspond to
the **action names** above it. The scoring part is at the bottom of the UI and contains 8 score points. `[1]` means the student
gets 1 point, while `[0]` means the student loses the point. `[-]` means the point is still under evaluation.
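
As an illustration, per-view video files can be read frame by frame with OpenCV; the sketch below uses placeholder file names, not the demo's actual command-line arguments.

```python
# Minimal sketch of reading the two views frame by frame with OpenCV.
# "top_view.mp4" and "side_view.mp4" are placeholder file names.
import cv2

cap_top = cv2.VideoCapture("top_view.mp4")
cap_side = cv2.VideoCapture("side_view.mp4")

while True:
    ok_top, frame_top = cap_top.read()
    ok_side, frame_side = cap_side.read()
    if not (ok_top and ok_side):
        break
    # Each synchronized pair of frames is what the detection and
    # action-recognition models operate on.

cap_top.release()
cap_side.release()
```
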
## Algorithms

The architecture of the smart science lab contains object detection, action recognition, and a scoring evaluator.
The action recognition architecture uses two encoders, for the front view and the top view respectively, and a single decoder.
Object detection uses two models for each view to detect large and small objects, respectively.

The following pre-trained models are delivered with the product:

* `smartlab-object-detection-0001` + `smartlab-object-detection-0002` + `smartlab-object-detection-0003` + `smartlab-object-detection-0004`, which are models for detecting the smartlab objects.
* `i3d-rgb-tf` + `smartlab-sequence-modelling-0001`, which are models for identifying the 2 actions of smartlab (adjust_rider, put_take).
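
For illustration, any of these models can be loaded with the OpenVINO Runtime Python API once its IR files are available locally. The sketch below is not part of the demo; the IR path and target device are placeholders.

```python
# Minimal sketch: load and compile one of the listed models with OpenVINO Runtime.
# The IR path and the target device are placeholders.
from openvino.runtime import Core

core = Core()
model = core.read_model("intel/smartlab-object-detection-0001/FP32/smartlab-object-detection-0001.xml")
compiled_model = core.compile_model(model, "CPU")
print(compiled_model.inputs, compiled_model.outputs)
```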

The demo pipeline consists of the following modules:

* `Segmentor` segments and classifies video frames based on the action type of the frame
* `Evaluator` calculates the scores of the current state
* `Display` displays the whole UI: the detected objects, the recognized action, and the calculated scores on the current frame
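
A rough per-frame flow through these modules is sketched below. The method names are hypothetical and only illustrate the division of work; they are not the demo's actual internal API.

```python
# Illustrative sketch of the per-frame flow; the method names are hypothetical,
# not the demo's actual internal API.
def process_frame(detector, segmentor, evaluator, display, frame_top, frame_side, frame_index):
    # Detect smartlab objects in both views.
    detections = detector.inference(frame_top, frame_side)
    # Classify the action for the current pair of frames.
    action = segmentor.inference(frame_top, frame_side, frame_index)
    # Update the scores of the current state.
    scores = evaluator.update(detections, action)
    # Draw the boxes, the action progress bar and the scores on the UI.
    display.draw(frame_top, frame_side, detections, action, scores)
```
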
> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).
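
For example, if a model was trained on RGB input and was not reconverted with `--reverse_input_channels`, the frames read by OpenCV (which are BGR) can be reordered before inference. A minimal sketch; the file name is a placeholder.

```python
import cv2

frame_bgr = cv2.imread("frame.png")                      # OpenCV loads images in BGR order
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # reorder to RGB for an RGB-trained model
```
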
## Preparing to Run

For the demo input, you need to provide smartlab videos. An example input video: https://storage.openvinotoolkit.org/data/test_data/videos/smartlab/v3.

The list of models supported by the demo is in the `<omz_dir>/demos/smartlab_demo/python/models.lst` file.
This file can be used as a parameter for the [Model Downloader](../../../tools/model_tools/README.md) to download the models.
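
As an example, the downloader can be invoked on this list; the sketch below wraps the command in Python for consistency with the other sketches and assumes `omz_downloader` (shipped with the `openvino-dev` package) is on the PATH. The output directory is a placeholder.

```python
# Minimal sketch: download the models listed in models.lst with the Open Model Zoo downloader.
# Assumes omz_downloader is installed; "models" is a placeholder output directory.
import subprocess

subprocess.run(
    ["omz_downloader", "--list", "models.lst", "--output_dir", "models"],
    check=True,
)
```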

> **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and
> [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.

## Running
Running the demo with `-h` shows this help message: