
Commit 780c30e

Merge branch 'master' into merge-releases/2022/SCv1.1-into-master
2 parents: 40b189a + 5b68283


474 files changed: 13255 additions and 6543 deletions


README.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # [OpenVINO™ Toolkit](https://docs.openvino.ai/latest/index.html) - Open Model Zoo repository
-[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/open_model_zoo/releases/tag/2022.1)
+[![Stable release](https://img.shields.io/badge/version-2022.2.0-green.svg)](https://github.com/openvinotoolkit/open_model_zoo/releases/tag/2022.2.0)
 [![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/open_model_zoo/community)
 [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)

ci/dependencies.yml

Lines changed: 6 additions & 6 deletions
@@ -1,6 +1,6 @@
-opencv_linux: '20220228_0602-4.5.5_079'
-opencv_windows: '20220228_0602-4.5.5_079'
-openvino_linux: '2022.1.0.612'
-openvino_windows: '2022.1.0.612'
-wheel_linux: '2022.1.0.dev20220228-6910'
-wheel_windows: '2022.1.0.dev20220228-6910'
+opencv_linux: '20220311_0602-4.5.5_090'
+opencv_windows: '20220311_0602-4.5.5_090'
+openvino_linux: '2022.1.0.643'
+openvino_windows: '2022.1.0.643'
+wheel_linux: '2022.1.0-7019'
+wheel_windows: '2022.1.0-7019'

ci/requirements-openvino-dev.in

Lines changed: 3 additions & 3 deletions
@@ -30,9 +30,9 @@ numpy (<1.20,>=1.16.6) ; extra == 'kaldi'
 numpy (<1.20,>=1.16.6) ; extra == 'mxnet'
 numpy (<1.20,>=1.16.6) ; extra == 'onnx'
 numpy (<1.20,>=1.16.6) ; extra == 'tensorflow2'
-onnx (>=1.8.1) ; extra == 'caffe2'
-onnx (>=1.8.1) ; extra == 'onnx'
-onnx (>=1.8.1) ; extra == 'pytorch'
+onnx (<=1.12,>=1.8.1) ; extra == 'caffe2'
+onnx (<=1.12,>=1.8.1) ; extra == 'onnx'
+onnx (<=1.12,>=1.8.1) ; extra == 'pytorch'
 opencv-python (==4.5.*)
 openvino (==2021.4.2)
 pandas (~=1.1.5)

data/dataset_definitions.yml

Lines changed: 14 additions & 3 deletions
@@ -1254,7 +1254,7 @@ datasets:
     dataset_meta: antispoofing.json

   - name: sound_classification
-    data_source: audio_dataset
+    data_source: audio_dataset/data
     annotation_conversion:
       converter: sound_classification
       annotation_file: audio_dataset/validation.csv
@@ -1424,8 +1424,8 @@ datasets:
     image_folder: Raw-data/Single-channel/Val
     reconstructed_folder: accuracy_checker_preprocess/Single-channel/Reconstructed
     sampled_folder: accuracy_checker_preprocess/Single-channel/Sampled
-    mask_file: hybrid-cs-model-mri/sampling_mask_20perc.npy
-    stats_file: hybrid-cs-model-mri/stats_fs_unet_norm_20.npy
+    mask_file: sampling_mask_20perc.npy
+    stats_file: stats_fs_unet_norm_20.npy

   - name: WikiText_2_raw_gpt2
     annotation: wikitext_2_raw.pickle
@@ -1493,3 +1493,14 @@ datasets:
       annotation_file: object_detection/streams_1/high/annotations/instances_glb2bcls3.json
     annotation: mscoco_detection_high_3cls.pickle
     dataset_meta: mscoco_detection_high_3cls.json
+
+  - name: HumanMattingDataset
+    data_source: human_matting_dataset/clip_img/1803151818/clip_00000000
+    additional_data_source: human_matting_dataset/matting/1803151818/matting_00000000
+    annotation_conversion:
+      converter: background_matting
+      images_dir: human_matting_dataset/clip_img/1803151818/clip_00000000
+      masks_dir: human_matting_dataset/matting/1803151818/matting_00000000
+      image_postfix: '.jpg'
+    annotation: human_matting.pickle
+    dataset_meta: human_matting.json

data/datasets.md

Lines changed: 99 additions & 98 deletions
Large diffs are not rendered by default.

demos/CMakeLists.txt

Lines changed: 6 additions & 2 deletions
@@ -137,12 +137,16 @@ macro(add_demo)
     endif()
 endmacro()

-find_package(OpenCV REQUIRED COMPONENTS core highgui videoio imgproc imgcodecs gapi)
+find_package(OpenCV REQUIRED COMPONENTS core highgui videoio imgproc imgcodecs)
 find_package(OpenVINO REQUIRED COMPONENTS Runtime)

 add_subdirectory(thirdparty/gflags)
 add_subdirectory(common/cpp)
-add_subdirectory(common/cpp_gapi)
+# TODO: remove wrapping if after OpenCV3 is dropped
+if(OpenCV_VERSION VERSION_GREATER_EQUAL 4.5.3)
+    find_package(OpenCV REQUIRED COMPONENTS gapi)
+    add_subdirectory(common/cpp_gapi)
+endif()
 add_subdirectory(multi_channel_common/cpp)

 # collect all samples subdirectories

demos/README.md

Lines changed: 6 additions & 9 deletions
@@ -75,14 +75,12 @@
 @endsphinxdirective
 -->

-The Open Model Zoo demo applications are console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state
+Open Model Zoo demos are console applications that provide templates to help implement specific deep learning inference scenarios. These applications show how to preprocess and postprocess data for model inference and organize processing pipelines. Some pipelines collect analysis data from several models being inferred simultaneously. For example, [detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state](./interactive_face_detection_demo/cpp/README.md).

 Source code of the demos can be obtained from the Open Model Zoo [GitHub repository](https://github.com/openvinotoolkit/open_model_zoo/).

 ```sh
-git clone https://github.com/openvinotoolkit/open_model_zoo.git
-cd open_model_zoo
-git submodule update --init --recursive
+git clone --recurse-submodules https://github.com/openvinotoolkit/open_model_zoo.git
 ```

 C++, C++ G-API and Python\* versions are located in the `cpp`, `cpp_gapi` and `python` subdirectories respectively.
@@ -156,7 +154,7 @@ The Open Model Zoo includes the following demos:

 ## Media Files Available for Demos

-To run the demo applications, you can use images and videos from the media files collection available at https://github.com/intel-iot-devkit/sample-videos.
+To run the demo applications, you can use videos from https://storage.openvinotoolkit.org/data/test_data/videos.

 ## Demos that Support Pre-Trained Models

@@ -166,11 +164,10 @@ You can download the [Intel pre-trained models](../models/intel/index.md) or [pu

 ## Build the Demo Applications

-To build the demos, you need to source OpenVINO™ and OpenCV environment. You can install the OpenVINO™ toolkit using the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
+To build the demos, you need to source the OpenVINO™ environment and [get OpenCV](https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO). You can install the OpenVINO™ toolkit using the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
 For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download prebuilt OpenCV and set environment variables before building the demos:

 ```sh
-<INSTALL_DIR>/extras/scripts/download_opencv.sh
 source <INSTALL_DIR>/setupvars.sh
 ```

@@ -253,7 +250,7 @@ build_demos_msvc.bat VS2019
 ```

 By default, the demo applications binaries are build into the `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release` directory.
-The default build folder can be changed with `-b` option. For example, following command will buid Open Model Zoo demos into `c:\temp\omz-demos-build` folder:
+The default build folder can be changed with the `-b` option. For example, the following command will build Open Model Zoo demos into the `c:\temp\omz-demos-build` folder:

 ```bat
 build_demos_msvc.bat -b c:\temp\omz-demos-build
@@ -298,7 +295,7 @@ python -mpip install --user -r <omz_dir>/demos/requirements.txt

 ### <a name="python_model_api"></a>Python\* model API package

-To run Python demo applications, you need to install the Python* Model API package. Refer to [Python* Model API documentation](common/python/openvino/model_zoo/model_api/README.md#installing-python*-model-api-package) to learn about its installation.
+Python* Model API is factored out as a separate package. Refer to the [Python Model API documentation](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/common/python/openvino/model_zoo/model_api/README.md#installing-python-model-api-package) to learn about its installation. At the same time, demos can find this package on their own, so installing Model API is not required to run the demos.

 ### <a name="build_python_extensions"></a>Build the Native Python\* Extension Modules
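For readers following along: the new wording says demos can locate the bundled Model API package on their own. Below is a rough sketch of how a script might make the in-repo package importable without installing it; the checkout location and the import path are assumptions based on the linked README, not something this commit adds.

```python
import sys
from pathlib import Path

# Assumed location of a local open_model_zoo checkout -- adjust to your clone.
omz_dir = Path.home() / "open_model_zoo"

# The Model API sources live under demos/common/python in the repository, so
# putting that directory on sys.path makes the package importable without a
# separate `pip install` step.
sys.path.append(str(omz_dir / "demos" / "common" / "python"))

from openvino.model_zoo.model_api.models import Model  # noqa: E402
```

Installing the package with pip, as described in the linked README, remains the more robust option for standalone scripts.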

demos/action_recognition_demo/python/action_recognition_demo/models.py

Lines changed: 18 additions & 4 deletions
@@ -22,6 +22,8 @@
 import cv2
 import numpy as np

+from openvino.runtime import AsyncInferQueue
+

 def center_crop(frame, crop_size):
     img_h, img_w, _ = frame.shape
@@ -93,22 +95,31 @@ def __init__(self, model_path, core, target_device, num_requests, model_type):
             log.error("Demo supports only models with 1 output")
             sys.exit(1)

+        self.outputs = {}
         compiled_model = core.compile_model(self.model, target_device)
         self.output_tensor = compiled_model.outputs[0]
         self.input_name = self.model.inputs[0].get_any_name()
         self.input_shape = self.model.inputs[0].shape

         self.num_requests = num_requests
-        self.infer_requests = [compiled_model.create_infer_request() for _ in range(self.num_requests)]
+        self.infer_queue = AsyncInferQueue(compiled_model, num_requests)
+        self.infer_queue.set_callback(self.completion_callback)
         log.info('The {} model {} is loaded to {}'.format(model_type, model_path, target_device))

+    def completion_callback(self, infer_request, id):
+        self.outputs[id] = infer_request.results[self.output_tensor]
+
     def async_infer(self, frame, req_id):
         input_data = {self.input_name: frame}
-        self.infer_requests[req_id].start_async(inputs=input_data)
+        self.infer_queue.start_async(input_data, req_id)

     def wait_request(self, req_id):
-        self.infer_requests[req_id].wait()
-        return self.infer_requests[req_id].results[self.output_tensor]
+        self.infer_queue[req_id].wait()
+        return self.outputs.pop(req_id, None)
+
+    def cancel(self):
+        for ireq in self.infer_queue:
+            ireq.cancel()


 class DummyDecoder:
@@ -126,3 +137,6 @@ def async_infer(self, model_input, req_id):
     def wait_request(self, req_id):
         assert req_id in self.requests
         return self.requests.pop(req_id)
+
+    def cancel(self):
+        pass
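For context on the change above: the encoder/decoder wrappers now delegate request management to `openvino.runtime.AsyncInferQueue`. A minimal standalone sketch of the same pattern follows; the model path, device, request count, and zero-filled inputs are placeholder assumptions, not part of this commit.

```python
import numpy as np
from openvino.runtime import Core, AsyncInferQueue

core = Core()
model = core.read_model("model.xml")          # placeholder model path
compiled = core.compile_model(model, "CPU")   # placeholder device
input_name = compiled.inputs[0].get_any_name()
output_tensor = compiled.outputs[0]

results = {}

def completion_callback(request, frame_id):
    # Keep the single output tensor, keyed by the user-supplied id.
    results[frame_id] = request.results[output_tensor]

queue = AsyncInferQueue(compiled, 4)          # pool of 4 infer requests
queue.set_callback(completion_callback)

# Zero-filled dummy inputs shaped like the model's first input (static shape assumed).
input_shape = tuple(compiled.inputs[0].shape)
frames = [np.zeros(input_shape, dtype=np.float32) for _ in range(8)]

for frame_id, frame in enumerate(frames):
    # start_async() blocks only when every request in the pool is busy.
    queue.start_async({input_name: frame}, frame_id)

queue.wait_all()                              # wait for all in-flight requests
print(f"collected {len(results)} results")
```

Compared with managing a list of infer requests by hand, the queue handles scheduling and exposes the requests for iteration, which is what the new `cancel()` method relies on to stop outstanding work.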

demos/action_recognition_demo/python/action_recognition_demo/steps.py

Lines changed: 6 additions & 0 deletions
@@ -101,6 +101,9 @@ def __init__(self, encoder):
         self.encoder = encoder
         self.async_model = AsyncWrapper(self.encoder, self.encoder.num_requests)

+    def __del__(self):
+        self.encoder.cancel()
+
     def process(self, frame):
         preprocessed = preprocess_frame(frame)
         preprocessed = preprocessed[np.newaxis, ...]  # add batch dimension
@@ -121,6 +124,9 @@ def __init__(self, decoder, sequence_size=16):
         self.async_model = AsyncWrapper(self.decoder, self.decoder.num_requests)
         self._embeddings = deque(maxlen=self.sequence_size)

+    def __del__(self):
+        self.decoder.cancel()
+
     def process(self, item):
         if item is None:
             return None

demos/background_subtraction_demo/cpp_gapi/README.md

Lines changed: 32 additions & 1 deletion
@@ -5,7 +5,7 @@ This demo shows how to perform background subtraction using G-API.
 > **NOTE**: Only batch size of 1 is supported.

 ## How It Works
-The demo application expects an instance-segmentation-security-???? or trimap free background matting based on pixel-level segmentation approach model in the Intermediate Representation (IR) format.
+The demo application expects an instance-segmentation-security-???? model or a trimap-free background matting model based on a pixel-level segmentation approach, in the Intermediate Representation (IR) format. Please note that there are no background matting models in the `OpenModelZoo` collection.

 1. for instance segmentation models based on `Mask RCNN` approach:
    * One input: `image` for input image.
@@ -54,6 +54,34 @@ omz_converter --list models.lst

 > **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.

+
+### OneVPL Support
+
+The demo provides the ability to use [OneVPL](https://github.com/oneapi-src/oneVPL#-video-processing-library) video decoding.
+Example:
+```sh
+./background_subtraction_demo_gapi/ -m <path_to_model> -i <path_to_video_file> -use_onevpl
+```
+
+In order to provide additional configuration parameters, use `-onevpl_params`:
+```sh
+./background_subtraction_demo_gapi/ -m <path_to_model> -i <path_to_raw_file> -use_onevpl -onevpl_params="mfxImplDescription.mfxDecoderDescription.decoder.CodecID:MFX_CODEC_HEVC"
+```
+>**NOTE**: Only raw formats such as `h264`, `h265`, etc. are supported on Linux.
+When working with raw formats, the user must always specify the `codec` type via `-onevpl_params`, as in the example above.
+
+To build OpenCV G-API with `oneVPL` support, follow the instructions:
+[Building G-API with oneVPL Toolkit support](https://github.com/opencv/opencv/wiki/Graph-API#building-with-onevpl-toolkit-support)
+
+#### Troubleshooting
+During execution, `oneVPL` might report warnings telling the user that the source could be configured more precisely.
+
+For example:
+```
+cv::gapi::wip::onevpl::VPLLegacyDecodeEngine::process_error [000001CED3851C70] error: cv::gapi::wip::onevpl::CachedPool::find_free - cannot get free surface from pool, size: 5
+```
+This might be fixed by increasing the pool size using the `-onevpl_pool_size` parameter.
+
 ## Running

 Run the application with the `-h` option to see the following usage message:
@@ -82,6 +110,9 @@ Options:
     -blur_bgr           Optional. Blur background.
     -target_bgr         Optional. Background onto which to composite the output (by default to green field).
     -u                  Optional. List of monitors to show initially.
+    -use_onevpl         Optional. Use oneVPL video decoding.
+    -onevpl_params      Optional. Parameters for oneVPL video decoding. The oneVPL source can be fine-grained by providing configuration parameters. Format: <prop name>:<value>,<prop name>:<value>. Several important configuration parameters: 'mfxImplDescription.mfxDecoderDescription.decoder.CodecID' values: https://spec.oneapi.io/onevpl/2.7.0/API_ref/VPL_enums.html?highlight=mfx_codec_hevc#codecformatfourcc and 'mfxImplDescription.AccelerationMode' values: https://spec.oneapi.io/onevpl/2.7.0/API_ref/VPL_disp_api_enum.html?highlight=d3d11#mfxaccelerationmode (see `MFXSetConfigFilterProperty` at https://spec.oneapi.io/versions/latest/elements/oneVPL/source/index.html)
+    -onevpl_pool_size   The oneVPL source applies this parameter as the preallocated frames pool size. 0 leaves the pool size at the system default. This parameter doesn't have a good default value; it must be adjusted for the specific execution (video, model, system, ...).

 Available target devices: <targets>
 ```
