
Commit 3cf4b3f

Merge branch 'openvinotoolkit:master' into master
2 parents: 7f44468 + e9b5cfd

File tree: 227 files changed, +4486 −1485 lines


README.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # [OpenVINO™ Toolkit](https://docs.openvino.ai/latest/index.html) - Open Model Zoo repository
-[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/open_model_zoo/releases/tag/2022.1)
+[![Stable release](https://img.shields.io/badge/version-2022.2.0-green.svg)](https://github.com/openvinotoolkit/open_model_zoo/releases/tag/2022.2.0)
 [![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/open_model_zoo/community)
 [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)

ci/dependencies.yml

Lines changed: 6 additions & 6 deletions
@@ -1,6 +1,6 @@
-opencv_linux: '20220228_0602-4.5.5_079'
-opencv_windows: '20220228_0602-4.5.5_079'
-openvino_linux: '2022.1.0.612'
-openvino_windows: '2022.1.0.612'
-wheel_linux: '2022.1.0.dev20220228-6910'
-wheel_windows: '2022.1.0.dev20220228-6910'
+opencv_linux: '20220311_0602-4.5.5_090'
+opencv_windows: '20220311_0602-4.5.5_090'
+openvino_linux: '2022.1.0.643'
+openvino_windows: '2022.1.0.643'
+wheel_linux: '2022.1.0-7019'
+wheel_windows: '2022.1.0-7019'

data/dataset_definitions.yml

Lines changed: 11 additions & 0 deletions
@@ -1493,3 +1493,14 @@ datasets:
       annotation_file: object_detection/streams_1/high/annotations/instances_glb2bcls3.json
     annotation: mscoco_detection_high_3cls.pickle
     dataset_meta: mscoco_detection_high_3cls.json
+
+  - name: HumanMattingDataset
+    data_source: human_matting_dataset/clip_img/1803151818/clip_00000000
+    additional_data_source: human_matting_dataset/matting/1803151818/matting_00000000
+    annotation_conversion:
+      converter: background_matting
+      images_dir: human_matting_dataset/clip_img/1803151818/clip_00000000
+      masks_dir: human_matting_dataset/matting/1803151818/matting_00000000
+      image_postfix: '.jpg'
+    annotation: human_matting.pickle
+    dataset_meta: human_matting.json
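
The new `HumanMattingDataset` entry points the `background_matting` converter at paired image and mask folders. As a rough illustration of what that pairing implies — a hypothetical sketch, not the Accuracy Checker converter itself; the mask extension and the one-to-one file naming are assumptions — the two directories could be matched like this:

```python
from pathlib import Path


def pair_images_and_masks(images_dir, masks_dir, image_postfix='.jpg', mask_postfix='.png'):
    """Yield (image, mask) pairs, assuming mask files mirror the clip image file names."""
    for image_path in sorted(Path(images_dir).glob('*' + image_postfix)):
        mask_path = Path(masks_dir) / (image_path.stem + mask_postfix)  # assumed naming scheme
        if mask_path.exists():
            yield image_path, mask_path


if __name__ == '__main__':
    pairs = pair_images_and_masks(
        'human_matting_dataset/clip_img/1803151818/clip_00000000',
        'human_matting_dataset/matting/1803151818/matting_00000000')
    for image_path, mask_path in pairs:
        print(image_path.name, '->', mask_path.name)
```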

demos/3d_segmentation_demo/python/README.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ This topic demonstrates how to run the 3D Segmentation Demo, which segments 3D i

 On startup, the demo reads command-line parameters and loads a model and images to OpenVINO™ Runtime plugin.

-> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
+> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).

 ## Preparing to Run

@@ -110,5 +110,5 @@ The demo reports
 ## See Also

 * [Open Model Zoo Demos](../../README.md)
-* [Model Optimizer](https://docs.openvino.ai/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
+* [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
 * [Model Downloader](../../../tools/model_tools/README.md)

demos/README.md

Lines changed: 15 additions & 13 deletions
@@ -167,7 +167,7 @@ You can download the [Intel pre-trained models](../models/intel/index.md) or [pu
 ## Build the Demo Applications

 To build the demos, you need to source OpenVINO™ and OpenCV environment. You can install the OpenVINO™ toolkit using the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
-For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download OpenCV and set environment variables before building the demos:
+For the Intel® Distribution of OpenVINO™ toolkit installed to the `<INSTALL_DIR>` directory on your machine, run the following commands to download prebuilt OpenCV and set environment variables before building the demos:

 ```sh
 <INSTALL_DIR>/extras/scripts/download_opencv.sh
@@ -180,9 +180,7 @@ source <INSTALL_DIR>/setupvars.sh
 > ```

 For the open-source version of OpenVINO, set the following variables:
-* `InferenceEngine_DIR` pointing to a folder containing `InferenceEngineConfig.cmake`
 * `OpenVINO_DIR` pointing to a folder containing `OpenVINOConfig.cmake`
-* `ngraph_DIR` pointing to a folder containing `ngraphConfig.cmake`.
 * `OpenCV_DIR` pointing to OpenCV. The same OpenCV version should be used both for OpenVINO and demos build.

 Alternatively, these values can be provided via command line while running `cmake`. See [CMake search procedure](https://cmake.org/cmake/help/latest/command/find_package.html#search-procedure).
@@ -192,8 +190,8 @@ Also add paths to the built OpenVINO™ Runtime libraries to the `LD_LIBRARY_PAT

 The officially supported Linux* build environment is the following:

-- Ubuntu* 18.04 LTS 64-bit or CentOS* 7.6 64-bit
-- GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
+- Ubuntu* 18.04 LTS 64-bit or Ubuntu* 20.04 LTS 64-bit
+- GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 9.3.0 (for Ubuntu* 20.04)
 - CMake* version 3.10 or higher.

 To build the demo applications for Linux, go to the directory with the `build_demos.sh` script and
@@ -236,10 +234,8 @@ for the debug configuration — in `<path_to_build_directory>/intel64/Debug/`.
 The recommended Windows* build environment is the following:

 - Microsoft Windows* 10
-- Microsoft Visual Studio* 2017, or 2019
-- CMake* version 3.10 or higher
-
-> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
+- Microsoft Visual Studio* 2019
+- CMake* version 3.14 or higher

 To build the demo applications for Windows, go to the directory with the `build_demos_msvc.bat`
 batch file and run it:
@@ -250,13 +246,19 @@ build_demos_msvc.bat

 By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build
 a solution for a demo code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported
-versions are: `VS2017`, `VS2019`. For example, to build the demos using the Microsoft Visual Studio 2017, use the following command:
+version is: `VS2019`. For example, to build the demos using the Microsoft Visual Studio 2019, use the following command:
+
+```bat
+build_demos_msvc.bat VS2019
+```
+
+By default, the demo applications binaries are built into the `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release` directory.
+The default build folder can be changed with the `-b` option. For example, the following command will build Open Model Zoo demos into the `c:\temp\omz-demos-build` folder:

 ```bat
-build_demos_msvc.bat VS2017
+build_demos_msvc.bat -b c:\temp\omz-demos-build
 ```

-The demo applications binaries are in the `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release` directory.

 You can also build a generated solution by yourself, for example, if you want to
 build binaries in Debug configuration. Run the appropriate version of the
@@ -415,7 +417,7 @@ For example, for the **Debug** configuration, go to the project's
 variable in the **Environment** field to the following:

 ```
-PATH=<INSTALL_DIR>\deployment_tools\inference_engine\bin\intel64\Debug;<INSTALL_DIR>\opencv\bin;%PATH%
+PATH=<INSTALL_DIR>\runtime\bin\intel64\Debug;<INSTALL_DIR>\extras\opencv\bin;%PATH%
 ```

 where `<INSTALL_DIR>` is the directory in which the OpenVINO toolkit is installed.

demos/action_recognition_demo/python/README.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ that uses Asynchronous Inference Request API by scheduling infer requests in cyc
 You can change the value of `num_requests` in `action_recognition_demo.py` to find an optimal number of parallel working infer requests for your inference accelerators
 (Intel(R) Neural Compute Stick devices and GPUs benefit from higher number of infer requests).

-> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
+> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).

 ## Preparing to Run

@@ -147,5 +147,5 @@ The application uses OpenCV to display the real-time action recognition results
 ## See Also

 * [Open Model Zoo Demos](../../README.md)
-* [Model Optimizer](https://docs.openvino.ai/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
+* [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
 * [Model Downloader](../../../tools/model_tools/README.md)

demos/background_subtraction_demo/cpp_gapi/README.md

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ The demo workflow is the following:
 * If you specify `--target_bgr`, background will be replaced by a chosen image or video. By default background replaced by green field.
 * If you specify `--blur_bgr`, background will be blurred according to a set value. By default equal to zero and is not applied.

-> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
+> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).

 ## Preparing to Run

@@ -114,5 +114,5 @@ The demo reports
 ## See Also

 * [Open Model Zoo Demos](../../README.md)
-* [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
+* [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
 * [Model Downloader](../../../tools/model_tools/README.md)

demos/background_subtraction_demo/cpp_gapi/include/custom_kernels.hpp

Lines changed: 2 additions & 0 deletions
@@ -33,6 +33,8 @@ G_API_OP(GCalculateMaskRCNNBGMask,

 class NNBGReplacer {
 public:
+    NNBGReplacer() = default;
+    virtual ~NNBGReplacer() = default;
     NNBGReplacer(const std::string& model_path);
     virtual cv::GMat replace(cv::GMat, const cv::Size&, cv::GMat) = 0;
     const std::string& getName() { return m_tag; }

demos/background_subtraction_demo/python/README.md

Lines changed: 16 additions & 5 deletions
@@ -29,7 +29,10 @@ The demo application expects an instance segmentation or background matting mode
    * At least two outputs including:
      * `fgr` with normalized in [0, 1] range foreground
      * `pha` with normalized in [0, 1] range alpha
-4. for video background matting models based on RNN architecture:
+4. for image background matting models without trimap (background segmentation):
+   * Single input for input image.
+   * Single output with normalized in [0, 1] range alpha
+5. for video background matting models based on RNN architecture:
    * Five inputs:
      * `src` for input image
      * recurrent inputs: `r1`, `r2`, `r3`, `r4`
@@ -53,7 +56,13 @@ The demo workflow is the following:
 * If you specify `--blur_bgr`, background will be blurred according to a set value. By default equal to zero and is not applied.
 * If you specify `--show_with_original_frame`, the result image will be merged with an input one.

-> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
+> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](@ref openvino_docs_MO_DG_Additional_Optimization_Use_Cases).
+
+## Model API
+
+The demo utilizes model wrappers, adapters and pipelines from [Python* Model API](../../common/python/openvino/model_zoo/model_api/README.md).
+
+The generalized interface of wrappers with its unified results representation provides support for multiple different background subtraction model topologies in one demo.

 ## Preparing to Run

@@ -75,10 +84,12 @@ omz_converter --list models.lst

 ### Supported Models

-* instance-segmentation-person-????
-* yolact-resnet50-fpn-pytorch
 * background-matting-mobilenetv2
+* instance-segmentation-person-????
+* modnet-photographic-portrait-matting
+* modnet-webcam-portrait-matting
 * robust-video-matting-mobilenetv3
+* yolact-resnet50-fpn-pytorch

 > **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.

@@ -227,5 +238,5 @@ You can use these metrics to measure application-level performance.
 ## See Also

 * [Open Model Zoo Demos](../../README.md)
-* [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
+* [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
 * [Model Downloader](../../../tools/model_tools/README.md)
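
The new "Model API" subsection above refers to the wrappers, adapters and pipelines shipped with the demos. Below is a minimal sketch of wiring a single-input/single-output portrait matting model through that API; the wrapper, `get_user_config` and `AsyncPipeline` names come from the demo's own imports, while the adapter module path, the `OpenvinoAdapter`/`AsyncPipeline` call signatures, and the model and image file names are assumptions based on other Open Model Zoo demos of this release.

```python
import cv2

# Adapter module path and constructor signature are assumed; the wrapper and
# pipeline imports mirror the ones used by background_subtraction_demo.py.
from openvino.model_zoo.model_api.adapters import create_core, OpenvinoAdapter
from openvino.model_zoo.model_api.models import PortraitBackgroundMatting
from openvino.model_zoo.model_api.pipelines import AsyncPipeline, get_user_config

# Device, streams, threads — assumed argument order for get_user_config.
plugin_config = get_user_config('CPU', '', None)
adapter = OpenvinoAdapter(create_core(), 'modnet-webcam-portrait-matting.xml',  # placeholder IR path
                          device='CPU', plugin_config=plugin_config)
model = PortraitBackgroundMatting(adapter, {})   # empty configuration: rely on wrapper defaults
pipeline = AsyncPipeline(model)                  # assumed single-argument constructor

frame = cv2.imread('input.jpg')                  # placeholder input image
pipeline.submit_data(frame, 0, {'frame': frame})
pipeline.await_all()
result = pipeline.get_result(0)
if result is not None:
    output, frame_meta = result                  # unified result layout is wrapper-specific
```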

demos/background_subtraction_demo/python/background_subtraction_demo.py

Lines changed: 7 additions & 1 deletion
@@ -26,7 +26,10 @@

 sys.path.append(str(Path(__file__).resolve().parents[2] / 'common/python'))

-from openvino.model_zoo.model_api.models import MaskRCNNModel, OutputTransform, RESIZE_TYPES, YolactModel, ImageMattingWithBackground, VideoBackgroundMatting
+from openvino.model_zoo.model_api.models import (
+    MaskRCNNModel, OutputTransform, RESIZE_TYPES, YolactModel,
+    ImageMattingWithBackground, VideoBackgroundMatting, PortraitBackgroundMatting
+)
 from openvino.model_zoo.model_api.models.utils import load_labels
 from openvino.model_zoo.model_api.performance_metrics import PerformanceMetrics
 from openvino.model_zoo.model_api.pipelines import get_user_config, AsyncPipeline
@@ -123,6 +126,9 @@ def get_model(model_adapter, configuration, args):
         model = ImageMattingWithBackground(model_adapter, configuration)
         need_bgr_input = True
         is_matting_model = True
+    elif len(inputs) == 1 and len(outputs) == 1:
+        model = PortraitBackgroundMatting(model_adapter, configuration)
+        is_matting_model = True
     else:
         model = MaskRCNNModel(model_adapter, configuration)
     if not need_bgr_input and args.background is not None:
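
For context on what the single `[0, 1]` alpha output selected for `PortraitBackgroundMatting` is used for, here is a minimal NumPy sketch of alpha-compositing a frame over a replacement background. It illustrates the math only — it is not the demo's rendering code — and it assumes `alpha` has already been resized to the frame resolution.

```python
import cv2
import numpy as np


def replace_background(frame, alpha, target_bgr):
    """Composite `frame` over `target_bgr` using a float alpha matte in [0, 1]."""
    # Resize the replacement background to the frame size (both assumed BGR uint8).
    target_bgr = cv2.resize(target_bgr, (frame.shape[1], frame.shape[0]))
    alpha = alpha[..., np.newaxis]  # HxW -> HxWx1 so it broadcasts over the color channels
    composed = alpha * frame.astype(np.float32) + (1.0 - alpha) * target_bgr.astype(np.float32)
    return composed.astype(np.uint8)
```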
