Commit 1b93e58

Merge pull request #3338 from openvinotoolkit/master
merge master to release
2 parents 8645f95 + 04ab99c commit 1b93e58

File tree: 24 files changed, +398 -354 lines

ci/prepare-documentation.py

Lines changed: 5 additions & 0 deletions

@@ -382,6 +382,11 @@ def main():
                              title='OMZ Model API OVMS adapter')
     ovms_adapter_element.attrib[XML_ID_ATTRIBUTE] = 'omz_model_api_ovms_adapter'
 
+    model_api_element = add_page(output_root, navindex_element, id='omz_python_model_api',
+                                 path='demos/common/python/openvino/model_zoo/model_api/README.md',
+                                 title='OMZ Python Model API')
+    model_api_element.attrib[XML_ID_ATTRIBUTE] = 'omz_python_model_api'
+
     for md_path in all_md_paths:
         if md_path not in documentation_md_paths:
             raise RuntimeError(f'{all_md_paths[md_path]}: '

demos/3d_segmentation_demo/python/README.md

Lines changed: 0 additions & 6 deletions

@@ -10,12 +10,6 @@ On startup, the demo reads command-line parameters and loads a model and images
 
 ## Preparing to Run
 
-The demo dependencies should be installed before run. That can be achieved with the following command:
-
-```sh
-python3 -mpip install --user -r <omz_dir>/demos/3d_segmentation_demo/python/requirements.txt
-```
-
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
 The list of models supported by the demo is in `<omz_dir>/demos/3d_segmentation_demo/python/models.lst` file.
 This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).

demos/README.md

Lines changed: 5 additions & 25 deletions

@@ -286,37 +286,17 @@ cmake -A x64 <open_model_zoo>/demos
 cmake --build . --config Debug
 ```
 
-### <a name="model_api_installation"></a>Python\* model API installation
+### <a name="python_requirements"></a>Dependencies for Python* Demos
 
-Python Model API with model wrappers and pipelines can be installed as a part of OpenVINO&trade; toolkit or from source.
-Installation from source is as follows:
-
-1. Install Python (version 3.6 or higher), [setuptools](https://pypi.org/project/setuptools/):
-
-2. Build the wheel with the following command:
+The dependencies for Python demos must be installed before running. It can be achieved with the following command:
 
 ```sh
-python <omz_dir>/demos/common/python/setup.py bdist_wheel
+python -mpip install --user -r <omz_dir>/demos/requirements.txt
 ```
-The built wheel should appear in the dist folder;
-Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`
 
-3. Install the package in the clean environment with `--force-reinstall` key:
-```sh
-python -m pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
-```
-Alternatively, instead of building the wheel you can use the following command inside `<omz_dir>/demos/common/python/` directory to build and install the package:
-```sh
-python -m pip install .
-```
-
-When the model API package is installed, you can import it as follows:
-```sh
-python -c "from openvino.model_zoo import model_api"
-```
+### <a name="python_model_api"></a>Python\* model API package
 
-> **NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`. You may also need to [install pip](https://pip.pypa.io/en/stable/installation/).
-> For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-pip`.
+To run Python demo applications, you need to install the Python* Model API package. Refer to [Python* Model API documentation](common/python/openvino/model_zoo/model_api/README.md#installing-python*-model-api-package) to learn about its installation.
 
 ### <a name="build_python_extensions"></a>Build the Native Python\* Extension Modules

demos/common/cpp/models/src/classification_model.cpp

Lines changed: 5 additions & 1 deletion

@@ -42,7 +42,11 @@ std::unique_ptr<ResultBase> ClassificationModel::postprocess(InferenceResult& in
 
     result->topLabels.reserve(scoresTensor.get_size());
     for (size_t i = 0; i < scoresTensor.get_size(); ++i) {
-        result->topLabels.emplace_back(indicesPtr[i], labels[indicesPtr[i]], scoresPtr[i]);
+        int ind = indicesPtr[i];
+        if (ind < 0 || ind >= (int)labels.size()) {
+            throw std::runtime_error("Invalid index for the class label is found during postprocessing");
+        }
+        result->topLabels.emplace_back(ind, labels[ind], scoresPtr[i]);
     }
 
     return retVal;
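The guard above stops an out-of-range index produced by the model from reading past the end of the label list. The same defensive pattern can be sketched in Python (hypothetical labels and indices for illustration, not the demo's actual data):

```python
def top_labels(indices, scores, labels):
    """Pair each predicted class index with its label, rejecting invalid indices."""
    result = []
    for ind, score in zip(indices, scores):
        # mirror the C++ check: the index must address an existing label
        if ind < 0 or ind >= len(labels):
            raise RuntimeError("Invalid index for the class label is found during postprocessing")
        result.append((ind, labels[ind], score))
    return result
```

For example, `top_labels([1, 0], [0.9, 0.1], ["cat", "dog"])` returns `[(1, "dog", 0.9), (0, "cat", 0.1)]`, while an index of 2 with only two labels raises the error instead of reading invalid memory, as the unguarded C++ code could.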
demos/common/python/openvino/model_zoo/model_api/README.md

Lines changed: 136 additions & 0 deletions

@@ -0,0 +1,136 @@
+# Python* Model API package
+
+The Model API package is a set of wrapper classes for particular tasks and model architectures. It simplifies data preprocessing and postprocessing as well as routine procedures (model loading, asynchronous execution, etc.).
+An application feeds the model class with input data, and the model returns postprocessed output data in a user-friendly format.
+
+## Package structure
+
+The Model API consists of 3 libraries:
+* _adapters_ implements a common interface that allows Model API wrappers to be used with different executors. See the [Model API Adapters](#model-api-adapters) section.
+* _models_ implements wrappers for Open Model Zoo models. See the [Model API Wrappers](#model-api-wrappers) section.
+* _pipelines_ implements pipelines for model inference and manages synchronous/asynchronous execution. See the [Model API Pipelines](#model-api-pipelines) section.
+
+### Prerequisites
+
+The package requires:
+- one of the Python versions supported by OpenVINO (see the OpenVINO documentation for details)
+- OpenVINO™ toolkit
+
+If you build the Model API package from source, you should install the OpenVINO™ toolkit first. See the options:
+
+Use the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
+
+Alternatively, you can install the OpenVINO Python\* package via the command:
+```sh
+pip install openvino
+```
+
+## Installing Python* Model API package
+
+Use the following command to install Model API from source:
+```sh
+pip install <omz_dir>/demos/common/python
+```
+
+Alternatively, you can build and install the package as a wheel. Follow the steps below:
+1. Build the wheel.
+
+```sh
+python <omz_dir>/demos/common/python/setup.py bdist_wheel
+```
+The wheel should appear in the `dist` folder.
+Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`
+
+2. Install the package in a clean environment with the `--force-reinstall` key.
+```sh
+pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
+```
+
+To verify that the package is installed, you can use the following command:
+```sh
+python -c "from openvino.model_zoo import model_api"
+```
+
+## Model API Wrappers
+
+The Model API package provides model wrappers, which implement standardized preprocessing/postprocessing functions per "task type" and encapsulate model-specific logic, so that different models can be used in a unified manner inside the application.
+
+The following tasks can be solved with the wrappers:
+
+| Task type | Model API wrappers |
+|----------------------------|--------------------|
+| Background Matting | <ul><li>`VideoBackgroundMatting`</li><li>`ImageMattingWithBackground`</li></ul> |
+| Classification | <ul><li>`Classification`</li></ul> |
+| Deblurring | <ul><li>`Deblurring`</li></ul> |
+| Human Pose Estimation | <ul><li>`HpeAssociativeEmbedding`</li><li>`OpenPose`</li></ul> |
+| Instance Segmentation | <ul><li>`MaskRCNNModel`</li><li>`YolactModel`</li></ul> |
+| Monocular Depth Estimation | <ul><li>`MonoDepthModel`</li></ul> |
+| Named Entity Recognition | <ul><li>`BertNamedEntityRecognition`</li></ul> |
+| Object Detection | <ul><li>`CenterNet`</li><li>`DETR`</li><li>`CTPN`</li><li>`FaceBoxes`</li><li>`RetinaFace`</li><li>`RetinaFacePyTorch`</li><li>`SSD`</li><li>`UltraLightweightFaceDetection`</li><li>`YOLO`</li><li>`YoloV3ONNX`</li><li>`YoloV4`</li><li>`YOLOF`</li><li>`YOLOX`</li></ul> |
+| Question Answering | <ul><li>`BertQuestionAnswering`</li></ul> |
+| Salient Object Detection | <ul><li>`SalientObjectDetectionModel`</li></ul> |
+| Semantic Segmentation | <ul><li>`SegmentationModel`</li></ul> |
+
+## Model API Adapters
+
+Model API wrappers are executor-agnostic: they do not implement model loading or inference themselves. Instead, they can be used with different executors, each of which provides an adapter class implementing the common interface methods.
+
+Currently, `OpenvinoAdapter` and `OVMSAdapter` are supported.
+
+### OpenVINO Adapter
+
+`OpenvinoAdapter` hides the OpenVINO™ toolkit API, which allows launching Model API wrappers with models represented in Intermediate Representation (IR) format.
+It accepts a path to either an `xml` model file or an `onnx` model file.
+
+### OpenVINO Model Server Adapter
+
+`OVMSAdapter` hides the OpenVINO Model Server Python client API, which allows launching Model API wrappers with models served by OVMS.
+
+Refer to __[`OVMSAdapter`](adapters/ovms_adapter.md)__ to learn about running demos with OVMS.
+
+To use the OpenVINO Model Server Adapter, install the package with the extra module:
+```sh
+pip install <omz_dir>/demos/common/python[ovms]
+```
+
+## Model API Pipelines
+
+Model API Pipelines are high-level wrappers that manage submitting input data and accessing model results.
+They perform data submission for model inference, check whether the inference result is ready, and provide access to the results.
+
+The `AsyncPipeline` is available, which handles the asynchronous execution of a single model.
+
+## Ready-to-use Model API solutions
+
+To apply Model API wrappers in custom applications, study the provided example of a common Model API usage scenario.
+
+In the example, the SSD architecture is used to predict bounding boxes on the input image `"sample.png"`. The model is executed through `OpenvinoAdapter`, therefore we submit the path to the model's `xml` file.
+
+Once the SSD model wrapper instance is created, we get the model predictions in one line: `ssd_model(input_data)` runs the preprocess method, synchronous inference on the OpenVINO™ toolkit side, and the postprocess method.
+
+```python
+import cv2
+# import model wrapper class
+from openvino.model_zoo.model_api.models import SSD
+# import inference adapter and helper for runtime setup
+from openvino.model_zoo.model_api.adapters import OpenvinoAdapter, create_core
+
+
+# read input image using opencv
+input_data = cv2.imread("sample.png")
+
+# define the path to mobilenet-ssd model in IR format
+model_path = "public/mobilenet-ssd/FP32/mobilenet-ssd.xml"
+
+# create adapter for OpenVINO™ runtime, pass the model path
+model_adapter = OpenvinoAdapter(create_core(), model_path, device="CPU")
+
+# create model API wrapper for SSD architecture
+# preload=True loads the model on CPU inside the adapter
+ssd_model = SSD(model_adapter, preload=True)
+
+# apply input preprocessing, sync inference, model output postprocessing
+results = ssd_model(input_data)
+```
+
+To study more complex scenarios, refer to Open Model Zoo Python* demos, where asynchronous inference is applied.
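The pipeline idea described in the README (submit data, check inference status, fetch results) can be illustrated with a small stand-alone sketch. This is a simplified toy class showing the flow an asynchronous pipeline manages, not the actual `AsyncPipeline` API:

```python
from collections import deque

class ToyAsyncPipeline:
    """Toy illustration of the submit/check/fetch flow of an async pipeline."""

    def __init__(self, model, max_in_flight=2):
        self.model = model                  # callable: input -> result
        self.queue = deque()                # submitted requests awaiting retrieval
        self.max_in_flight = max_in_flight  # cap on simultaneously pending requests

    def is_ready(self):
        # a real pipeline checks for a free infer request here
        return len(self.queue) < self.max_in_flight

    def submit_data(self, inputs, frame_id):
        # a real pipeline starts an asynchronous infer request here;
        # the toy version computes the result immediately
        self.queue.append((frame_id, self.model(inputs)))

    def get_result(self):
        # returns the oldest finished result, or None if nothing is pending
        return self.queue.popleft() if self.queue else None
```

The caller loop then alternates `is_ready()`/`submit_data()` with `get_result()`, which is the same shape the Open Model Zoo Python demos use with the real `AsyncPipeline`.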

demos/common/python/setup.py

Lines changed: 4 additions & 0 deletions

@@ -30,6 +30,9 @@
 with open(SETUP_DIR / 'requirements.txt') as f:
     required = f.read().splitlines()
 
+with open(SETUP_DIR / 'requirements_ovms.txt') as f:
+    ovms_required = f.read().splitlines()
+
 packages = find_packages(str(SETUP_DIR))
 package_dir = {'openvino': str(SETUP_DIR / 'openvino')}

@@ -47,4 +50,5 @@
     packages=packages,
     package_dir=package_dir,
     install_requires=required,
+    extras_require={'ovms': ovms_required}
 )
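The added lines reuse the pattern the file already applies to `requirements.txt`: read a per-extra requirements file and hand the resulting list to `extras_require`, so `pip install package[ovms]` pulls the extra dependencies. A stand-alone sketch of that pattern (toy file contents, not the real requirement lists):

```python
import tempfile
from pathlib import Path

# stand-in for SETUP_DIR with hypothetical requirement files
setup_dir = Path(tempfile.mkdtemp())
(setup_dir / 'requirements.txt').write_text('numpy\nopencv-python\n')
(setup_dir / 'requirements_ovms.txt').write_text('ovmsclient\n')

# same reading pattern as in setup.py
with open(setup_dir / 'requirements.txt') as f:
    required = f.read().splitlines()
with open(setup_dir / 'requirements_ovms.txt') as f:
    ovms_required = f.read().splitlines()

# these lists would feed
# setup(install_requires=required, extras_require={'ovms': ovms_required})
print(required, ovms_required)
```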

demos/face_recognition_demo/python/README.md

Lines changed: 0 additions & 14 deletions

@@ -31,20 +31,6 @@ visualized and displayed on the screen or written to the output file.
 
 ## Preparing to Run
 
-### Installation and dependencies
-
-The demo depends on:
-
-* OpenVINO library (2021.4 or newer)
-* Python (any, which is supported by OpenVINO)
-* OpenCV (>=4.2.5)
-
-To install all the required Python modules you can use:
-
-``` sh
-pip install -r requirements.txt
-```
-
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
 The list of models supported by the demo is in `<omz_dir>/demos/face_recognition_demo/python/models.lst` file.
 This file can be used as a parameter for [Model Downloader](../../../tools/model_tools/README.md) and Converter to download and, if necessary, convert models to OpenVINO IR format (\*.xml + \*.bin).

demos/formula_recognition_demo/python/README.md

Lines changed: 0 additions & 2 deletions

@@ -56,8 +56,6 @@ Regardless of what mode is selected (interactive or non-interactive) the process
 
 ##### Requirements for rendering
 
-Sympy python package is used for rendering. To install it, please, run:
-`pip install -r requirements.txt`
 Sympy package needs LaTeX system installed in the operating system.
 For Windows you can use [MiKTeX\*](https://miktex.org/) (just download and install it), for Ubuntu/MacOS you can use TeX Live\*:
 Ubuntu:

demos/gaze_estimation_demo/cpp/models.lst

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # This file can be used with the --list option of the model downloader.
-#facial-landmarks-98-detection-????
 facial-landmarks-35-adas-????
+facial-landmarks-98-detection-????
 face-detection-adas-????
 face-detection-retail-????
 gaze-estimation-adas-????

demos/gpt2_text_prediction_demo/python/gpt2_text_prediction_demo.py

Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@ def main():
     log.debug("Loaded vocab file from {}, get {} tokens".format(args.vocab, len(vocab)))
 
     # create tokenizer
-    tokenizer = Tokenizer(BPE(str(args.vocab), str(args.merges)))
+    tokenizer = Tokenizer(BPE.from_file(str(args.vocab), str(args.merges)))
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
    tokenizer.decoder = decoders.ByteLevel()
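This one-line fix tracks an API change in the `tokenizers` library: the `BPE` model constructor no longer accepts file paths directly, and `BPE.from_file(vocab, merges)` is the supported way to load them. As a reminder of what those two files contain, here is a stdlib-only sketch with a hypothetical toy vocabulary (not GPT-2's real files):

```python
import json
import os
import tempfile

# vocab.json maps token strings to integer ids;
# merges.txt lists BPE merge rules, one pair per line after a version header
vocab = {"h": 0, "i": 1, "hi": 2}
merges = ["#version: 0.2", "h i"]

tmp = tempfile.mkdtemp()
vocab_path = os.path.join(tmp, "vocab.json")
merges_path = os.path.join(tmp, "merges.txt")
with open(vocab_path, "w") as f:
    json.dump(vocab, f)
with open(merges_path, "w") as f:
    f.write("\n".join(merges) + "\n")

# the demo would now pass these paths as BPE.from_file(vocab_path, merges_path)
with open(vocab_path) as f:
    loaded = json.load(f)
print(loaded["hi"])  # -> 2, the id of the merged token
```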
