
Commit a63e794

Merge branch 'openvinotoolkit:master' into master
2 parents ba74851 + 21ad0f0 commit a63e794

File tree: 601 files changed, +5012 / -1722 lines


CONTRIBUTING.md

Lines changed: 16 additions & 5 deletions
@@ -113,6 +113,16 @@ For replacement operation:
 - `replacement` — Replacement string
 - `count` (*optional*) — Exact number of replacements (if the number of `pattern` occurrences is less than this number, downloading will be aborted)
 
+**`input_info`**
+
+List of inputs containing the information about input name, shape and layout. For example:
+```
+input_info:
+  - name: Placeholder
+    shape: [1, 224, 224, 3]
+    layout: NHWC
+```
+
 **`conversion_to_onnx_args`** (*only for PyTorch\* models*)
 
 List of ONNX\* conversion parameters, see `model_optimizer_args` for details.
@@ -121,7 +131,6 @@ List of ONNX\* conversion parameters, see `model_optimizer_args` for details.
 
 Conversion parameters (learn more in the [Model conversion](#model-conversion) section). For example:
 ```
-  - --input=data
   - --mean_values=data[127.5]
   - --scale_values=data[127.5]
   - --reverse_input_channels
@@ -163,10 +172,12 @@ postprocessing:
   - $type: unpack_archive
     format: gztar
     file: tf-densenet121.tar.gz
+input_info:
+  - name: Placeholder
+    shape: [1, 224, 224, 3]
+    layout: NHWC
 model_optimizer_args:
   - --reverse_input_channels
-  - --input_shape=[1,224,224,3]
-  - --input=Placeholder
   - --mean_values=Placeholder[123.68,116.78,103.94]
   - --scale_values=Placeholder[58.8235294117647]
   - --output=densenet121/predictions/Reshape_1
@@ -177,9 +188,9 @@ license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICE
 
 ## Model Conversion
 
-Deep Learning Inference Engine (IE) supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After a successful conversion you get a model in the IR format, with the `*.xml` file representing the net graph and the `*.bin` file containing the net parameters.
+OpenVINO™ Runtime supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide). After a successful conversion, you get a model in the IR format, with the `*.xml` file representing the net graph and the `*.bin` file containing the net parameters.
 
-> **NOTE 1**: Image preprocessing parameters (mean and scale) must be built into a converted model to simplify model usage.
+> **NOTE**: Image preprocessing parameters (mean and scale) must be built into a converted model to simplify model usage.
 
 > **NOTE 2**: If a model input is a color image, color channel order should be `BGR`.
 
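For context, a `model.yml` with `input_info` and `model_optimizer_args` like the one above is consumed by the Open Model Zoo tools. A minimal sketch, assuming the `omz_downloader`/`omz_converter` entry points from the `openvino-dev` package; the `densenet-121-tf` name is taken from the example above:

```sh
# Download the original framework files described by the model.yml,
# then convert them to IR (*.xml + *.bin). The converter assembles the
# Model Optimizer command from input_info and model_optimizer_args.
omz_downloader --name densenet-121-tf
omz_converter --name densenet-121-tf
```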
README.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Intel is committed to the respect of human rights and avoiding complicity in hum
 * [Intel Pre-Trained Models](models/intel/index.md)
 * [Public Pre-Trained Models](models/public/index.md)
 * [Model Downloader](tools/model_tools/README.md) and other automation tools
-* [Demos](demos/README.md) that demonstrate models usage with Deep Learning Deployment Toolkit
+* [Demos](demos/README.md) that demonstrate models usage with OpenVINO™ Toolkit
 * [Accuracy Checker](tools/accuracy_checker/README.md) tool for models accuracy validation
 
 ## License

ci/dependencies.yml

Lines changed: 6 additions & 6 deletions
@@ -1,6 +1,6 @@
-opencv_linux: '20220215_0649-4.5.5_053'
-opencv_windows: '20220215_0649-4.5.5_053'
-openvino_linux: '2022.1.0.592'
-openvino_windows: '2022.1.0.592'
-wheel_linux: '2022.1.0.dev20220215-6682'
-wheel_windows: '2022.1.0.dev20220215-6682'
+opencv_linux: '20220223_0602-4.5.5_073'
+opencv_windows: '20220223_0602-4.5.5_073'
+openvino_linux: '2022.1.0.606'
+openvino_windows: '2022.1.0.606'
+wheel_linux: '2022.1.0.dev20220222-6839'
+wheel_windows: '2022.1.0.dev20220222-6839'

data/dataset_definitions.yml

Lines changed: 32 additions & 0 deletions
@@ -1461,3 +1461,35 @@ datasets:
     data_dir: annotation
     input_suffix: in
     reference_suffix: out
+
+  - name: smartlab_detection_10cl_top
+    data_source: object_detection/streams_1/top/images
+    annotation_conversion:
+      converter: mscoco_detection
+      annotation_file: object_detection/streams_1/top/annotations/instances_glb1cls10.json
+    annotation: mscoco_detection_top_10cls.pickle
+    dataset_meta: mscoco_detection_top_10cls.json
+
+  - name: smartlab_detection_3cl_top
+    data_source: object_detection/streams_1/top/images
+    annotation_conversion:
+      converter: mscoco_detection
+      annotation_file: object_detection/streams_1/top/annotations/instances_glb2bcls3.json
+    annotation: mscoco_detection_top_3cls.pickle
+    dataset_meta: mscoco_detection_top_3cls.json
+
+  - name: smartlab_detection_10cl_high
+    data_source: object_detection/streams_1/high/images
+    annotation_conversion:
+      converter: mscoco_detection
+      annotation_file: object_detection/streams_1/high/annotations/instances_glb1cls10.json
+    annotation: mscoco_detection_high_10cls.pickle
+    dataset_meta: mscoco_detection_high_10cls.json
+
+  - name: smartlab_detection_3cl_high
+    data_source: object_detection/streams_1/high/images
+    annotation_conversion:
+      converter: mscoco_detection
+      annotation_file: object_detection/streams_1/high/annotations/instances_glb2bcls3.json
+    annotation: mscoco_detection_high_3cls.pickle
+    dataset_meta: mscoco_detection_high_3cls.json
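These definitions are referenced by name from per-model accuracy configs. A minimal usage sketch, assuming the `accuracy_check` entry point and a local dataset root laid out as in the `data_source` paths above:

```sh
# -c: a model's accuracy config (naming one of the smartlab datasets),
# -d: this definitions file,
# -s: the root directory containing object_detection/streams_1/...
accuracy_check -c <model_dir>/accuracy-check.yml \
               -d data/dataset_definitions.yml \
               -s <dataset_root>
```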

demos/3d_segmentation_demo/python/README.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ python3 -mpip install --user -r <omz_dir>/demos/3d_segmentation_demo/python/requ
 
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
 The list of models supported by the demo is in `<omz_dir>/demos/3d_segmentation_demo/python/models.lst` file.
-This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO Inference Engine format (\*.xml + \*.bin).
+This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
 
 An example of using the Model Downloader:
 
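An illustrative form of that Model Downloader example, assuming the `omz_downloader` entry point from the `openvino-dev` package (hypothetical invocation; the example itself lies outside this hunk):

```sh
# Download every model listed in the demo's models.lst file.
omz_downloader --list <omz_dir>/demos/3d_segmentation_demo/python/models.lst
```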
demos/README.md

Lines changed: 98 additions & 19 deletions
@@ -1,9 +1,76 @@
 # Open Model Zoo Demos
 
+@sphinxdirective
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   omz_demos_human_pose_estimation_3d_demo_python
+   omz_demos_3d_segmentation_demo_python
+   omz_demos_action_recognition_demo_python
+   omz_demos_bert_named_entity_recognition_demo_python
+   omz_demos_bert_question_answering_embedding_demo_python
+   omz_demos_bert_question_answering_demo_python
+   omz_demos_classification_demo_cpp
+   omz_demos_colorization_demo_python
+   omz_demos_crossroad_camera_demo_cpp
+   omz_demos_face_detection_mtcnn_demo_python
+   omz_demos_face_recognition_demo_python
+   omz_demos_formula_recognition_demo_python
+   omz_demos_gaze_estimation_demo_cpp_gapi
+   omz_demos_interactive_face_detection_demo_cpp_gapi
+   omz_demos_gaze_estimation_demo_cpp
+   omz_demos_gesture_recognition_demo_python
+   omz_demos_handwritten_text_recognition_demo_python
+   omz_demos_human_pose_estimation_demo_cpp
+   omz_demos_human_pose_estimation_demo_python
+   omz_demos_deblurring_demo_python
+   omz_demos_image_inpainting_demo_python
+   omz_demos_image_processing_demo_cpp
+   omz_demos_image_retrieval_demo_python
+   omz_demos_segmentation_demo_cpp
+   omz_demos_segmentation_demo_python
+   omz_demos_image_translation_demo_python
+   omz_demos_instance_segmentation_demo_python
+   omz_demos_interactive_face_detection_demo_cpp
+   omz_demos_machine_translation_demo_python
+   omz_demos_monodepth_demo_python
+   omz_demos_multi_camera_multi_target_tracking_demo_python
+   omz_demos_multi_channel_face_detection_demo_cpp
+   omz_demos_multi_channel_human_pose_estimation_demo_cpp
+   omz_demos_multi_channel_object_detection_demo_yolov3_cpp
+   omz_demos_noise_suppression_demo_python
+   omz_demos_object_detection_demo_cpp
+   omz_demos_object_detection_demo_python
+   omz_demos_pedestrian_tracker_demo_cpp
+   omz_demos_place_recognition_demo_python
+   omz_demos_security_barrier_camera_demo_cpp
+   omz_demos_single_human_pose_estimation_demo_python
+   omz_demos_smart_classroom_demo_cpp
+   omz_demos_social_distance_demo_cpp
+   omz_demos_sound_classification_demo_python
+   omz_demos_speech_recognition_deepspeech_demo_python
+   omz_demos_speech_recognition_quartznet_demo_python
+   omz_demos_mask_rcnn_demo_cpp
+   omz_demos_text_detection_demo_cpp
+   omz_demos_text_spotting_demo_python
+   omz_demos_text_to_speech_demo_python
+   omz_demos_time_series_forecasting_demo_python
+   omz_demos_whiteboard_inpainting_demo_python
+
+@endsphinxdirective
+
 The Open Model Zoo demo applications are console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
 
-For the Intel® Distribution of OpenVINO™ toolkit, the demos are available after installation in the following directory: `<INSTALL_DIR>/deployment_tools/open_model_zoo/demos`.
-The demos can also be obtained from the Open Model Zoo [GitHub repository](https://github.com/openvinotoolkit/open_model_zoo/).
+Source code of the demos can be obtained from the Open Model Zoo [GitHub repository](https://github.com/openvinotoolkit/open_model_zoo/).
+
+```sh
+git clone https://github.com/openvinotoolkit/open_model_zoo.git
+cd open_model_zoo
+git submodule update --init --recursive
+```
+
 C++, C++ G-API and Python\* versions are located in the `cpp`, `cpp_gapi` and `python` subdirectories respectively.
 
 The Open Model Zoo includes the following demos:
@@ -64,6 +131,7 @@ The Open Model Zoo includes the following demos:
 - [Single Human Pose Estimation Python\* Demo](./single_human_pose_estimation_demo/python/README.md) - 2D human pose estimation demo.
 - [Smart Classroom C++ Demo](./smart_classroom_demo/cpp/README.md) - Face recognition and action detection demo for classroom environment.
 - [Smart Classroom C++ G-API Demo](./smart_classroom_demo/cpp_gapi/README.md) - Face recognition and action detection demo for classroom environment. G-API version.
+- [Smartlab Python\* Demo](./smartlab_demo/python/README.md) - Action recognition and object detection for smartlab.
 - [Social Distance C++ Demo](./social_distance_demo/cpp/README.md) - This demo showcases a retail social distance application that detects people and measures the distance between them.
 - [Sound Classification Python\* Demo](./sound_classification_demo/python/README.md) - Demo application for sound classification algorithm.
 - [Text Detection C++ Demo](./text_detection_demo/cpp/README.md) - Text Detection demo. It detects and recognizes multi-oriented scene text on an input image and puts a bounding box around detected area.
@@ -78,22 +146,33 @@ To run the demo applications, you can use images and videos from the media files
 
 ## Demos that Support Pre-Trained Models
 
-> **NOTE:** Inference Engine HDDL plugin is available in [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution only.
+> **NOTE:** OpenVINO™ Runtime HDDL plugin is available in [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution only.
 
 You can download the [Intel pre-trained models](../models/intel/index.md) or [public pre-trained models](../models/public/index.md) using the OpenVINO [Model Downloader](../tools/model_tools/README.md).
 
 ## Build the Demo Applications
 
-To be able to build demos you need to source Inference Engine and OpenCV environment from a binary package which is available as [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution.
-Please run the following command before the demos build (assuming that the binary package was installed to `<INSTALL_DIR>`):
+To build the demos, you need to source the OpenVINO™ and OpenCV environment. You can install the OpenVINO™ toolkit using the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
+For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download OpenCV and set environment variables before building the demos:
 
 ```sh
-source <INSTALL_DIR>/deployment_tools/bin/setupvars.sh
+<INSTALL_DIR>/extras/scripts/download_opencv.sh
+source <INSTALL_DIR>/setupvars.sh
 ```
 
-You can also build demos manually using Inference Engine built from the [openvino](https://github.com/openvinotoolkit/openvino) repo. In this case please set `InferenceEngine_DIR` environment variable to a folder containing `InferenceEngineConfig.cmake` and `ngraph_DIR` to a folder containing `ngraphConfig.cmake` in a build folder. Please also set the `OpenCV_DIR` to point to the OpenCV package to use. The same OpenCV version should be used both for Inference Engine and demos build. Alternatively these values can be provided via command line while running `cmake`. See [CMake's search procedure](https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure).
-Please refer to the Inference Engine [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode)
-for details. Please also add path to built Inference Engine libraries to `LD_LIBRARY_PATH` (Linux*) or `PATH` (Windows*) variable before building the demos.
+> **NOTE:** If you plan to use Python\* demos only, you can install the OpenVINO Python\* package.
+> ```sh
+> pip install openvino
+> ```
+
+For the open-source version of OpenVINO, set the following variables:
+* `InferenceEngine_DIR` pointing to a folder containing `InferenceEngineConfig.cmake`
+* `OpenVINO_DIR` pointing to a folder containing `OpenVINOConfig.cmake`
+* `ngraph_DIR` pointing to a folder containing `ngraphConfig.cmake`
+* `OpenCV_DIR` pointing to OpenCV. The same OpenCV version should be used both for the OpenVINO and demos build.
+
+Alternatively, these values can be provided via the command line while running `cmake`. See [CMake search procedure](https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure).
+Also add paths to the built OpenVINO™ Runtime libraries to the `LD_LIBRARY_PATH` (Linux) or `PATH` (Windows) variable before building the demos.
 
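A minimal sketch of the `cmake` command-line alternative mentioned above (directory names are placeholders, not from this commit):

```sh
# Hypothetical out-of-source build of the demos against locally built packages.
mkdir build && cd build
cmake -DOpenVINO_DIR=<openvino_build_dir> \
      -DOpenCV_DIR=<opencv_build_dir> \
      <omz_dir>/demos
cmake --build . --parallel
```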
### <a name="build_demos_linux"></a>Build the Demo Applications on Linux*
@@ -270,15 +349,15 @@ build_demos_msvc.bat --target="classification_demo segmentation_demo"
 
 ### Get Ready for Running the Demo Applications on Linux*
 
-Before running compiled binary files, make sure your application can find the Inference Engine and OpenCV libraries.
+Before running compiled binary files, make sure your application can find the OpenVINO™ and OpenCV libraries.
 If you use a [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution to build demos,
 run the `setupvars` script to set all necessary environment variables:
 
 ```sh
-source <INSTALL_DIR>/bin/setupvars.sh
+source <INSTALL_DIR>/setupvars.sh
 ```
 
-If you use your own Inference Engine and OpenCV binaries to build the demos please make sure you have added them
+If you use your own OpenVINO™ and OpenCV binaries to build the demos please make sure you have added them
 to the `LD_LIBRARY_PATH` environment variable.
 
 **(Optional)**: The OpenVINO environment variables are removed when you close the
@@ -293,7 +372,7 @@ vi <user_home_directory>/.bashrc
 2. Add this line to the end of the file:
 
 ```sh
-source <INSTALL_DIR>/bin/setupvars.sh
+source <INSTALL_DIR>/setupvars.sh
 ```
 
 3. Save and close the file: press the **Esc** key, type `:wq` and press the **Enter** key.
@@ -313,16 +392,16 @@ list above.
 
 ### Get Ready for Running the Demo Applications on Windows*
 
-Before running compiled binary files, make sure your application can find the Inference Engine and OpenCV libraries.
-Optionally download OpenCV community FFmpeg plugin. There is a downloader script in the OpenVINO package: `<INSTALL_DIR>\opencv\ffmpeg-download.ps1`.
-If you use a [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution to build demos,
+Before running compiled binary files, make sure your application can find the OpenVINO™ and OpenCV libraries.
+Optionally, download the OpenCV community FFmpeg plugin using the downloader script in the OpenVINO package: `<INSTALL_DIR>\extras\opencv\ffmpeg-download.ps1`.
+If you use the [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) distribution to build demos,
 run the `setupvars` script to set all necessary environment variables:
 
 ```bat
-<INSTALL_DIR>\bin\setupvars.bat
+<INSTALL_DIR>\setupvars.bat
 ```
 
-If you use your own Inference Engine and OpenCV binaries to build the demos please make sure you have added
+If you use your own OpenVINO™ and OpenCV binaries to build the demos please make sure you have added them
 to the `PATH` environment variable.
 
 To run Python demo applications that require native Python extension modules, you must additionally
@@ -336,7 +415,7 @@ set PYTHONPATH=%PYTHONPATH%;<bin_dir>
 To debug or run the demos on Windows in Microsoft Visual Studio, make sure you
 have properly configured **Debugging** environment settings for the **Debug**
 and **Release** configurations. Set correct paths to the OpenCV libraries, and
-debug and release versions of the Inference Engine libraries.
+debug and release versions of the OpenVINO™ libraries.
 For example, for the **Debug** configuration, go to the project's
 **Configuration Properties** to the **Debugging** category and set the `PATH`
 variable in the **Environment** field to the following:

demos/action_recognition_demo/python/README.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ You can change the value of `num_requests` in `action_recognition_demo.py` to fi
 
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
 The list of models supported by the demo is in `<omz_dir>/demos/action_recognition_demo/python/models.lst` file.
-This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO Inference Engine format (\*.xml + \*.bin).
+This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
 
 An example of using the Model Downloader:
 
demos/background_subtraction_demo/cpp_gapi/README.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ The demo workflow is the following:
 
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
 The list of models supported by the demo is in `<omz_dir>/demos/background_subtraction_demo/cpp_gapi/models.lst` file.
-This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO Inference Engine format (\*.xml + \*.bin).
+This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
 
 An example of using the Model Downloader:
 
