
Commit 7f44468

Merge branch 'openvinotoolkit:master' into master
2 parents a63e794 + 23c329e commit 7f44468

401 files changed: +4664 -95460 lines changed


ci/dependencies.yml

Lines changed: 6 additions & 6 deletions
@@ -1,6 +1,6 @@
-opencv_linux: '20220223_0602-4.5.5_073'
-opencv_windows: '20220223_0602-4.5.5_073'
-openvino_linux: '2022.1.0.606'
-openvino_windows: '2022.1.0.606'
-wheel_linux: '2022.1.0.dev20220222-6839'
-wheel_windows: '2022.1.0.dev20220222-6839'
+opencv_linux: '20220228_0602-4.5.5_079'
+opencv_windows: '20220228_0602-4.5.5_079'
+openvino_linux: '2022.1.0.612'
+openvino_windows: '2022.1.0.612'
+wheel_linux: '2022.1.0.dev20220228-6910'
+wheel_windows: '2022.1.0.dev20220228-6910'

ci/prepare-documentation.py

Lines changed: 5 additions & 0 deletions
@@ -382,6 +382,11 @@ def main():
                                  title='OMZ Model API OVMS adapter')
     ovms_adapter_element.attrib[XML_ID_ATTRIBUTE] = 'omz_model_api_ovms_adapter'
 
+    model_api_element = add_page(output_root, navindex_element, id='omz_python_model_api',
+                                 path='demos/common/python/openvino/model_zoo/model_api/README.md',
+                                 title='OMZ Python Model API')
+    model_api_element.attrib[XML_ID_ATTRIBUTE] = 'omz_python_model_api'
+
     for md_path in all_md_paths:
         if md_path not in documentation_md_paths:
             raise RuntimeError(f'{all_md_paths[md_path]}: '

data/dataset_definitions.yml

Lines changed: 10 additions & 10 deletions
@@ -1129,8 +1129,8 @@ datasets:
   - name: WMT_en_ru
     annotation_conversion:
       converter: wmt
-      input_file: wmt19-ru-en.en.spbpe
-      reference_file: wmt19-ru-en.ru.spbpe
+      input_file: WMT/wmt19-ru-en.en.spbpe
+      reference_file: WMT/wmt19-ru-en.ru.spbpe
 
     reader:
       type: annotation_features_extractor
@@ -1149,8 +1149,8 @@ datasets:
   - name: WMT_ru_en
     annotation_conversion:
       converter: wmt
-      input_file: wmt19-ru-en.ru.spbpe
-      reference_file: wmt19-ru-en.en.spbpe
+      input_file: WMT/wmt19-ru-en.ru.spbpe
+      reference_file: WMT/wmt19-ru-en.en.spbpe
 
     reader:
       type: annotation_features_extractor
@@ -1229,8 +1229,8 @@ datasets:
     data_source: gnhk
     annotation_conversion:
       converter: unicode_character_recognition
-      decoding_char_file: gnhk_char_list.txt
-      annotation_file: test_img_id_gt.txt
+      decoding_char_file: gnhk/gnhk.txt
+      annotation_file: gnhk/test_img_id_gt.txt
     annotation: gnhk_recognition.pickle
     dataset_meta: gnhk_recognition.json
 
@@ -1424,15 +1424,15 @@ datasets:
       image_folder: Raw-data/Single-channel/Val
       reconstructed_folder: accuracy_checker_preprocess/Single-channel/Reconstructed
       sampled_folder: accuracy_checker_preprocess/Single-channel/Sampled
-      mask_file: sampling_mask_20perc.npy
-      stats_file: stats_fs_unet_norm_20.npy
+      mask_file: hybrid-cs-model-mri/sampling_mask_20perc.npy
+      stats_file: hybrid-cs-model-mri/stats_fs_unet_norm_20.npy
 
   - name: WikiText_2_raw_gpt2
     annotation: wikitext_2_raw.pickle
     annotation_conversion:
       converter: wikitext2raw
-      vocab_file: gpt2/vocab.json
-      merges_file: gpt2/merges.txt
+      vocab_file: gpt2/gpt2-vocab.json
+      merges_file: gpt2/gpt2-merges.txt
     testing_file: wikitext-2-raw/wiki.test.raw
     max_seq_length: 1024

demos/3d_segmentation_demo/python/3d_segmentation_demo.py

Lines changed: 1 addition & 1 deletion
@@ -239,7 +239,7 @@ def read_image(test_data_path, data_name, sizes=(128, 128, 128), is_series=True,
 def main():
     args = parse_arguments()
 
-    log.info('OpenVINO Inference Engine')
+    log.info('OpenVINO Runtime')
     log.info('\tbuild: {}'.format(get_version()))
     core = Core()

demos/3d_segmentation_demo/python/README.md

Lines changed: 4 additions & 17 deletions
@@ -4,18 +4,12 @@ This topic demonstrates how to run the 3D Segmentation Demo, which segments 3D i
 
 ## How It Works
 
-On startup, the demo reads command-line parameters and loads a network and images to the Inference Engine plugin.
+On startup, the demo reads command-line parameters and loads a model and images to OpenVINO™ Runtime plugin.
 
 > **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
 
 ## Preparing to Run
 
-The demo dependencies should be installed before run. That can be achieved with the following command:
-
-```sh
-python3 -mpip install --user -r <omz_dir>/demos/3d_segmentation_demo/python/requirements.txt
-```
-
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
 The list of models supported by the demo is in `<omz_dir>/demos/3d_segmentation_demo/python/models.lst` file.
 This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).
@@ -45,11 +39,10 @@ Run the application with the `-h` or `--help` option to see the usage message:
 
 ```
 usage: 3d_segmentation_demo.py [-h] -i PATH_TO_INPUT_DATA -m PATH_TO_MODEL -o
-                               PATH_TO_OUTPUT [-d TARGET_DEVICE]
-                               [-l PATH_TO_EXTENSION] [-nii]
+                               PATH_TO_OUTPUT [-d TARGET_DEVICE] [-nii]
                                [-nthreads NUMBER_THREADS]
-                               [-s [SHAPE [SHAPE ...]]]
-                               [-c PATH_TO_CLDNN_CONFIG]
+                               [-s [SHAPE [SHAPE ...]]] [-ms N1,N2,N3,N4]
+                               [--full_intensities_range]
 
 Options:
   -h, --help            Show this help message and exit.
@@ -65,18 +58,12 @@ Options:
                         Optional. Specify a target device to infer on: CPU, GPU.
                         Use "-d HETERO:<comma separated devices list>" format
                         to specify HETERO plugin.
-  -l PATH_TO_EXTENSION, --path_to_extension PATH_TO_EXTENSION
-                        Required for CPU custom layers. Absolute path to a
-                        shared library with the kernels implementations.
   -nii, --output_nifti  Show output inference results as raw values
   -nthreads NUMBER_THREADS, --number_threads NUMBER_THREADS
                         Optional. Number of threads to use for inference on
                         CPU (including HETERO cases).
  -s [SHAPE [SHAPE ...]], --shape [SHAPE [SHAPE ...]]
                         Optional. Specify shape for a network
-  -c PATH_TO_CLDNN_CONFIG, --path_to_cldnn_config PATH_TO_CLDNN_CONFIG
-                        Required for GPU custom kernels. Absolute path to an
-                        .xml file with the kernels description.
   -ms N1,N2,N3,N4, --mri_sequence N1,N2,N3,N4
                         Optional. Transfer MRI-sequence from dataset order to the network order.
   --full_intensities_range
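The channel-order note in this README can also be handled directly in application code instead of reconverting the model. A minimal sketch follows; the `frame` array here is only a stand-in for a decoded BGR image and is not part of the demo:

```python
import numpy as np

frame = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a decoded BGR frame
rgb_frame = frame[:, :, ::-1]                    # BGR -> RGB by reversing the channel axis
```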

demos/CMakeLists.txt

Lines changed: 2 additions & 10 deletions
@@ -34,11 +34,6 @@ foreach(artifact IN ITEMS ARCHIVE COMPILE_PDB LIBRARY PDB RUNTIME)
     set("CMAKE_${artifact}_OUTPUT_DIRECTORY" "${CMAKE_CURRENT_BINARY_DIR}/${BIN_FOLDER}/$<CONFIG>")
 endforeach()
 
-if(UNIX)
-    string(APPEND CMAKE_LIBRARY_OUTPUT_DIRECTORY "/lib")
-    string(APPEND CMAKE_ARCHIVE_OUTPUT_DIRECTORY "/lib")
-endif()
-
 if(WIN32)
     if(NOT "${CMAKE_SIZEOF_VOID_P}" EQUAL "8")
         message(FATAL_ERROR "Only 64-bit supported on Windows")
@@ -134,8 +129,8 @@ macro(add_demo)
         target_include_directories(${OMZ_DEMO_NAME} PRIVATE ${OMZ_DEMO_INCLUDE_DIRECTORIES})
     endif()
 
-    target_link_libraries(${OMZ_DEMO_NAME} PRIVATE ${OpenCV_LIBRARIES} openvino::runtime ${InferenceEngine_LIBRARIES}
-        ${OMZ_DEMO_DEPENDENCIES} ngraph::ngraph utils gflags)
+    target_link_libraries(${OMZ_DEMO_NAME} PRIVATE ${OpenCV_LIBRARIES} openvino::runtime
+        ${OMZ_DEMO_DEPENDENCIES} utils gflags)
 
     if(UNIX)
         target_link_libraries(${OMZ_DEMO_NAME} PRIVATE pthread)
@@ -144,9 +139,6 @@ endmacro()
 
 find_package(OpenCV REQUIRED COMPONENTS core highgui videoio imgproc imgcodecs gapi)
 find_package(OpenVINO REQUIRED COMPONENTS Runtime)
-# TODO: remove InferenceEngine and ngraph after 2022.1
-find_package(InferenceEngine REQUIRED)
-find_package(ngraph REQUIRED)
 
 add_subdirectory(thirdparty/gflags)
 add_subdirectory(common/cpp)

demos/README.md

Lines changed: 23 additions & 29 deletions
@@ -1,5 +1,6 @@
 # Open Model Zoo Demos
 
+<!--
 @sphinxdirective
 
 .. toctree::
@@ -9,19 +10,25 @@
    omz_demos_human_pose_estimation_3d_demo_python
    omz_demos_3d_segmentation_demo_python
    omz_demos_action_recognition_demo_python
+   omz_demos_background_subtraction_demo_cpp_gapi
+   omz_demos_background_subtraction_demo_python
    omz_demos_bert_named_entity_recognition_demo_python
    omz_demos_bert_question_answering_embedding_demo_python
    omz_demos_bert_question_answering_demo_python
-   omz_demos_classification_demo_cpp
+   omz_demos_classification_benchmark_demo_cpp
+   omz_demos_classification_demo_python
    omz_demos_colorization_demo_python
    omz_demos_crossroad_camera_demo_cpp
+   omz_demos_face_detection_mtcnn_demo_cpp_gapi
    omz_demos_face_detection_mtcnn_demo_python
    omz_demos_face_recognition_demo_python
    omz_demos_formula_recognition_demo_python
    omz_demos_gaze_estimation_demo_cpp_gapi
    omz_demos_interactive_face_detection_demo_cpp_gapi
    omz_demos_gaze_estimation_demo_cpp
+   omz_demos_gesture_recognition_demo_cpp_gapi
    omz_demos_gesture_recognition_demo_python
+   omz_demos_gpt2_text_prediction_demo_python
    omz_demos_handwritten_text_recognition_demo_python
    omz_demos_human_pose_estimation_demo_cpp
    omz_demos_human_pose_estimation_demo_python
@@ -36,22 +43,28 @@
    omz_demos_interactive_face_detection_demo_cpp
    omz_demos_machine_translation_demo_python
    omz_demos_monodepth_demo_python
+   omz_demos_mri_reconstruction_demo_cpp
+   omz_demos_mri_reconstruction_demo_python
    omz_demos_multi_camera_multi_target_tracking_demo_python
    omz_demos_multi_channel_face_detection_demo_cpp
    omz_demos_multi_channel_human_pose_estimation_demo_cpp
    omz_demos_multi_channel_object_detection_demo_yolov3_cpp
+   omz_demos_noise_suppression_demo_cpp
    omz_demos_noise_suppression_demo_python
    omz_demos_object_detection_demo_cpp
    omz_demos_object_detection_demo_python
    omz_demos_pedestrian_tracker_demo_cpp
    omz_demos_place_recognition_demo_python
    omz_demos_security_barrier_camera_demo_cpp
    omz_demos_single_human_pose_estimation_demo_python
+   omz_demos_smartlab_demo_python
    omz_demos_smart_classroom_demo_cpp
+   omz_demos_smart_classroom_demo_cpp_gapi
    omz_demos_social_distance_demo_cpp
    omz_demos_sound_classification_demo_python
    omz_demos_speech_recognition_deepspeech_demo_python
    omz_demos_speech_recognition_quartznet_demo_python
+   omz_demos_speech_recognition_wav2vec_demo_python
    omz_demos_mask_rcnn_demo_cpp
    omz_demos_text_detection_demo_cpp
    omz_demos_text_spotting_demo_python
@@ -60,6 +73,7 @@
    omz_demos_whiteboard_inpainting_demo_python
 
 @endsphinxdirective
+-->
 
 The Open Model Zoo demo applications are console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state
 
@@ -137,7 +151,7 @@ The Open Model Zoo includes the following demos:
 - [Text Detection C++ Demo](./text_detection_demo/cpp/README.md) - Text Detection demo. It detects and recognizes multi-oriented scene text on an input image and puts a bounding box around detected area.
 - [Text Spotting Python\* Demo](./text_spotting_demo/python/README.md) - The demo demonstrates how to run Text Spotting models.
 - [Text-to-speech Python\* Demo](./text_to_speech_demo/python/README.md) - Shows an example of using Forward Tacotron and WaveRNN neural networks for text to speech task.
-- [Time Series Forecasting Python\* Demo](./time_series_forecasting_demo/python/README.md) - The demo shows how to use the OpenVINO™ toolkit to time series forecastig.
+- [Time Series Forecasting Python\* Demo](./time_series_forecasting_demo/python/README.md) - The demo shows how to use the OpenVINO™ toolkit to time series forecasting.
 - [Whiteboard Inpainting Python\* Demo](./whiteboard_inpainting_demo/python/README.md) - The demo shows how to use the OpenVINO™ toolkit to detect and hide a person on a video so that all text on a whiteboard is visible.
 
 ## Media Files Available for Demos
@@ -272,37 +286,17 @@ cmake -A x64 <open_model_zoo>/demos
 cmake --build . --config Debug
 ```
 
-### <a name="model_api_installation"></a>Python\* model API installation
+### <a name="python_requirements"></a>Dependencies for Python* Demos
 
-Python Model API with model wrappers and pipelines can be installed as a part of OpenVINO&trade; toolkit or from source.
-Installation from source is as follows:
-
-1. Install Python (version 3.6 or higher), [setuptools](https://pypi.org/project/setuptools/):
-
-2. Build the wheel with the following command:
+The dependencies for Python demos must be installed before running. It can be achieved with the following command:
 
 ```sh
-python <omz_dir>/demos/common/python/setup.py bdist_wheel
+python -mpip install --user -r <omz_dir>/demos/requirements.txt
 ```
-The built wheel should appear in the dist folder;
-Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`
 
-3. Install the package in the clean environment with `--force-reinstall` key:
-```sh
-python -m pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
-```
-Alternatively, instead of building the wheel you can use the following command inside `<omz_dir>/demos/common/python/` directory to build and install the package:
-```sh
-python -m pip install .
-```
-
-When the model API package is installed, you can import it as follows:
-```sh
-python -c "from openvino.model_zoo import model_api"
-```
+### <a name="python_model_api"></a>Python\* model API package
 
-> **NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`. You may also need to [install pip](https://pip.pypa.io/en/stable/installation/).
-> For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-pip`.
+To run Python demo applications, you need to install the Python* Model API package. Refer to [Python* Model API documentation](common/python/openvino/model_zoo/model_api/README.md#installing-python*-model-api-package) to learn about its installation.
 
 ### <a name="build_python_extensions"></a>Build the Native Python\* Extension Modules
 
@@ -383,7 +377,7 @@ set up the `PYTHONPATH` environment variable as follows, where `<bin_dir>` is th
 the built demo applications:
 
 ```sh
-export PYTHONPATH="$PYTHONPATH:<bin_dir>/lib"
+export PYTHONPATH="<bin_dir>:$PYTHONPATH"
 ```
 
 You are ready to run the demo applications. To learn about how to run a particular
@@ -409,7 +403,7 @@ set up the `PYTHONPATH` environment variable as follows, where `<bin_dir>` is th
 the built demo applications:
 
 ```bat
-set PYTHONPATH=%PYTHONPATH%;<bin_dir>
+set PYTHONPATH=<bin_dir>;%PYTHONPATH%
```
 
 To debug or run the demos on Windows in Microsoft Visual Studio, make sure you
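As a quick sanity check for the Python Model API section above, the import path that the earlier revision of this README used for its installation check can be exercised from a short script. This is only an illustrative sketch, not part of the commit:

```python
# Sketch: verify the Python* Model API package is importable after installing the
# demo dependencies; the import path is taken from this README's previous revision.
try:
    from openvino.model_zoo import model_api  # noqa: F401
    print("Model API package is available")
except ImportError:
    print("Model API package is missing; install it as described in "
          "<omz_dir>/demos/common/python/openvino/model_zoo/model_api/README.md")
```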

demos/action_recognition_demo/python/README.md

Lines changed: 4 additions & 4 deletions
@@ -17,17 +17,17 @@ Every step implements `PipelineStep` interface by creating a class derived from
 
 * `DataStep` reads frames from the input video.
 * Model step depends on architecture type:
-  - For encder-decoder models there are two steps:
+  - For encoder-decoder models there are two steps:
     - `EncoderStep` preprocesses a frame and feeds it to the encoder model to produce a frame embedding. Simple averaging of encoder's outputs over a time window is applied.
     - `DecoderStep` feeds embeddings produced by the `EncoderStep` to the decoder model and produces predictions. For models that use `DummyDecoder`, simple averaging of encoder's outputs over a time window is applied.
   - For the specific implemented single models, the corresponding `<ModelNameStep>` does preprocessing and produces predictions.
 * `RenderStep` renders prediction results.
 
-Pipeline steps are composed in `AsyncPipeline`. Every step can be run in separate thread by adding it to the pipeline with `parallel=True` option.
+Pipeline steps are composed in `AsyncPipeline`. Every step can be run in a separate thread by adding it to the pipeline with `parallel=True` option.
 When two consequent steps occur in separate threads, they communicate via message queue (for example, deliver step result or stop signal).
 
-To ensure maximum performance, Inference Engine models are wrapped in `AsyncWrapper`
-that uses Inference Engine async API by scheduling infer requests in cyclical order
+To ensure maximum performance, models are wrapped in `AsyncWrapper`
+that uses Asynchronous Inference Request API by scheduling infer requests in cyclical order
 (inference on every new input is started asynchronously, result of the longest working infer request is returned).
 You can change the value of `num_requests` in `action_recognition_demo.py` to find an optimal number of parallel working infer requests for your inference accelerators
 (Intel(R) Neural Compute Stick devices and GPUs benefit from higher number of infer requests).
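The cyclical scheduling described in that README can be illustrated with a rough sketch: N infer requests form a ring, a new inference is always started on the next slot, and once the ring is full the result of the longest-working request is collected first. This is not the demo's actual `AsyncWrapper` code; the model path, request count, and the `frame` input are placeholders:

```python
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("encoder.xml"), "CPU")  # placeholder model path
num_requests = 4
requests = [compiled.create_infer_request() for _ in range(num_requests)]
in_flight = 0   # number of requests that currently have work scheduled
next_slot = 0   # index of the request that receives the next frame

def infer_cyclic(frame):
    """Start inference for a new frame; return the oldest in-flight result, if any."""
    global in_flight, next_slot
    result = None
    if in_flight == num_requests:
        oldest = requests[next_slot]                  # longest-working request in the ring
        oldest.wait()
        result = oldest.get_output_tensor(0).data.copy()
        in_flight -= 1
    requests[next_slot].start_async({0: frame})       # feed the first model input by index
    next_slot = (next_slot + 1) % num_requests
    in_flight += 1
    return result
```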

demos/action_recognition_demo/python/action_recognition_demo.py

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ def main():
     else:
         labels = None
 
-    log.info('OpenVINO Inference Engine')
+    log.info('OpenVINO Runtime')
     log.info('\tbuild: {}'.format(get_version()))
     core = Core()

demos/background_subtraction_demo/cpp_gapi/README.md

Lines changed: 2 additions & 3 deletions
@@ -59,9 +59,8 @@ omz_converter --list models.lst
 
 Run the application with the `-h` option to see the following usage message:
 
 ```
-[ INFO ] OpenVINO Inference Engine
-[ INFO ] version: <version>
-[ INFO ] build: <number>
+[ INFO ] OpenVINO Runtime version ......... <version>
+[ INFO ] Build ........... <build>
 
 background_subtraction_demo_gapi [OPTION]
 Options:
