Commit e315f1d
Python Model API package: add main documentation (#3268)
* add documentation to Model API
* add readme.md
* fix spelling
* add Model API section to object_detection_demo readme
* remove extra whitespace
* add bullet points
* Apply suggestions
* modify the usage example
* Modify documentation
* add extra module
* don't check relative links for Model API package
* update check-documentation.py
* prepare-documentation for Python Model API
* suggestions
* move the list of supported demos to demos/README.md
* remove list of demos, remove statement in documentation
* OMZ models instead of architectures, OV supported Python instead certain versions
* remove python in documentation, update package structure section

Co-authored-by: Vladimir Dudnik <[email protected]>
1 parent fb2dc8d commit e315f1d

File tree

5 files changed: +153, -30 lines

ci/prepare-documentation.py

Lines changed: 5 additions & 0 deletions
```diff
@@ -382,6 +382,11 @@ def main():
                                          title='OMZ Model API OVMS adapter')
     ovms_adapter_element.attrib[XML_ID_ATTRIBUTE] = 'omz_model_api_ovms_adapter'
 
+    model_api_element = add_page(output_root, navindex_element, id='omz_python_model_api',
+                                 path='demos/common/python/openvino/model_zoo/model_api/README.md',
+                                 title='OMZ Python Model API')
+    model_api_element.attrib[XML_ID_ATTRIBUTE] = 'omz_python_model_api'
+
     for md_path in all_md_paths:
         if md_path not in documentation_md_paths:
             raise RuntimeError(f'{all_md_paths[md_path]}: '
```

demos/README.md

Lines changed: 2 additions & 30 deletions
````diff
@@ -286,37 +286,9 @@ cmake -A x64 <open_model_zoo>/demos
 cmake --build . --config Debug
 ```
 
-### <a name="model_api_installation"></a>Python\* model API installation
+### <a name="python_model_api"></a>Python\* model API package
 
-Python Model API with model wrappers and pipelines can be installed as a part of OpenVINO&trade; toolkit or from source.
-Installation from source is as follows:
-
-1. Install Python (version 3.6 or higher), [setuptools](https://pypi.org/project/setuptools/):
-
-2. Build the wheel with the following command:
-
-```sh
-python <omz_dir>/demos/common/python/setup.py bdist_wheel
-```
-The built wheel should appear in the dist folder;
-Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`
-
-3. Install the package in the clean environment with `--force-reinstall` key:
-```sh
-python -m pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
-```
-Alternatively, instead of building the wheel you can use the following command inside `<omz_dir>/demos/common/python/` directory to build and install the package:
-```sh
-python -m pip install .
-```
-
-When the model API package is installed, you can import it as follows:
-```sh
-python -c "from openvino.model_zoo import model_api"
-```
-
-> **NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`. You may also need to [install pip](https://pip.pypa.io/en/stable/installation/).
-> For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-pip`.
+To run Python demo applications, you need to install the Python* Model API package. Refer to [Python* Model API documentation](common/python/openvino/model_zoo/model_api/README.md#installing-python*-model-api-package) to learn about its installation.
 
 ### <a name="build_python_extensions"></a>Build the Native Python\* Extension Modules
````
demos/common/python/openvino/model_zoo/model_api/README.md

Lines changed: 136 additions & 0 deletions

# Python* Model API package

The Model API package is a set of wrapper classes for particular tasks and model architectures that simplify data preprocessing and postprocessing as well as routine procedures (model loading, asynchronous execution, and so on).
An application feeds the model class with input data; the model then returns postprocessed output data in a user-friendly format.
## Package structure

The Model API consists of three libraries:
* _adapters_ implements a common interface that allows Model API wrappers to be used with different executors. See the [Model API Adapters](#model-api-adapters) section.
* _models_ implements wrappers for Open Model Zoo models. See the [Model API Wrappers](#model-api-wrappers) section.
* _pipelines_ implements pipelines for model inference and manages synchronous/asynchronous execution. See the [Model API Pipelines](#model-api-pipelines) section.

### Prerequisites

The package requires:
- a Python version supported by OpenVINO (see the OpenVINO documentation for details)
- the OpenVINO™ toolkit

To build the Model API package from source, you need the OpenVINO™ toolkit installed. Either use the installation package for [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).

Alternatively, you can install the OpenVINO Python\* package via the command:
```sh
pip install openvino
```

## Installing Python* Model API package

Use the following command to install the Model API from source:
```sh
pip install <omz_dir>/demos/common/python
```

Alternatively, you can build and install the package as a wheel. Follow the steps below:
1. Build the wheel.

   ```sh
   python <omz_dir>/demos/common/python/setup.py bdist_wheel
   ```

   The wheel should appear in the `dist` folder.
   Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`

2. Install the package in a clean environment with the `--force-reinstall` key.

   ```sh
   pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
   ```

To verify that the package is installed, you can use the following command:
```sh
python -c "from openvino.model_zoo import model_api"
```

## Model API Wrappers

The Model API package provides model wrappers, which implement standardized preprocessing/postprocessing functions per task type and encapsulate model-specific logic, so that different models can be used in a unified manner inside the application.
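The wrapper idea described above can be illustrated with a minimal sketch. This toy class is hypothetical and not part of the actual Model API package; it only shows the general pattern of hiding the preprocess, infer, and postprocess steps behind a single call.

```python
# Illustrative sketch only: ToyClassifier mimics the wrapper pattern
# (preprocess -> infer -> postprocess behind one call); it is a
# hypothetical class, not part of the real Model API package.

class ToyClassifier:
    """Minimal stand-in for a task-specific model wrapper."""

    def __init__(self, labels):
        self.labels = labels

    def preprocess(self, raw_scores):
        # Real wrappers resize/normalize images; here we only validate shape.
        if len(raw_scores) != len(self.labels):
            raise ValueError("score vector does not match number of labels")
        return raw_scores

    def infer(self, inputs):
        # Real wrappers delegate to an inference adapter; we pass data through.
        return inputs

    def postprocess(self, outputs):
        # Convert raw outputs into a user-friendly (label, score) result.
        best = max(range(len(outputs)), key=outputs.__getitem__)
        return self.labels[best], outputs[best]

    def __call__(self, input_data):
        # A single call hides the whole preprocess -> infer -> postprocess chain.
        return self.postprocess(self.infer(self.preprocess(input_data)))

model = ToyClassifier(["cat", "dog"])
label, score = model([0.2, 0.8])  # -> ("dog", 0.8)
```

The real wrappers follow the same shape: an application constructs a wrapper once and then treats it as a callable that maps raw inputs to task-level results.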

The following tasks can be solved with the provided wrappers:

| Task type | Model API wrappers |
|----------------------------|--------------------|
| Background Matting | <ul><li>`VideoBackgroundMatting`</li><li>`ImageMattingWithBackground`</li></ul> |
| Classification | <ul><li>`Classification`</li></ul> |
| Deblurring | <ul><li>`Deblurring`</li></ul> |
| Human Pose Estimation | <ul><li>`HpeAssociativeEmbedding`</li><li>`OpenPose`</li></ul> |
| Instance Segmentation | <ul><li>`MaskRCNNModel`</li><li>`YolactModel`</li></ul> |
| Monocular Depth Estimation | <ul><li>`MonoDepthModel`</li></ul> |
| Named Entity Recognition | <ul><li>`BertNamedEntityRecognition`</li></ul> |
| Object Detection | <ul><li>`CenterNet`</li><li>`DETR`</li><li>`CTPN`</li><li>`FaceBoxes`</li><li>`RetinaFace`</li><li>`RetinaFacePyTorch`</li><li>`SSD`</li><li>`UltraLightweightFaceDetection`</li><li>`YOLO`</li><li>`YoloV3ONNX`</li><li>`YoloV4`</li><li>`YOLOF`</li><li>`YOLOX`</li></ul> |
| Question Answering | <ul><li>`BertQuestionAnswering`</li></ul> |
| Salient Object Detection | <ul><li>`SalientObjectDetectionModel`</li></ul> |
| Semantic Segmentation | <ul><li>`SegmentationModel`</li></ul> |

## Model API Adapters

Model API wrappers are executor-agnostic: they do not implement model loading or inference themselves. Instead, they can be used with different executors, each of which provides an adapter class implementing the common interface methods.

Currently, `OpenvinoAdapter` and `OVMSAdapter` are supported.
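The executor-agnostic design can be sketched with a toy example. The adapter classes below are hypothetical stand-ins, not the real `OpenvinoAdapter` or `OVMSAdapter`; they only show how a shared adapter interface lets a wrapper run unchanged on different executors.

```python
# Illustrative sketch only: DoubleAdapter and EchoAdapter are hypothetical;
# the real package provides OpenvinoAdapter and OVMSAdapter implementing a
# shared inference-adapter interface.
import abc

class InferenceAdapter(abc.ABC):
    """Common interface a wrapper relies on, regardless of the executor."""

    @abc.abstractmethod
    def infer_sync(self, inputs):
        ...

class DoubleAdapter(InferenceAdapter):
    # Pretends to be a local runtime: "inference" doubles each value.
    def infer_sync(self, inputs):
        return [2 * x for x in inputs]

class EchoAdapter(InferenceAdapter):
    # Pretends to be a remote model server: "inference" echoes inputs back.
    def infer_sync(self, inputs):
        return list(inputs)

def run_model(adapter: InferenceAdapter, inputs):
    # A wrapper calls only the interface, so any adapter is interchangeable.
    return adapter.infer_sync(inputs)

print(run_model(DoubleAdapter(), [1, 2]))  # [2, 4]
print(run_model(EchoAdapter(), [1, 2]))    # [1, 2]
```

Swapping the adapter changes where inference happens (local runtime versus model server) without touching the wrapper code, which is the point of the adapter layer.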

### OpenVINO Adapter

`OpenvinoAdapter` hides the OpenVINO™ toolkit API and allows launching Model API wrappers with models represented in the Intermediate Representation (IR) format.
It accepts a path to either an `xml` model file or an `onnx` model file.

### OpenVINO Model Server Adapter

`OVMSAdapter` hides the OpenVINO Model Server Python client API and allows launching Model API wrappers with models served by OVMS.

Refer to __[`OVMSAdapter`](adapters/ovms_adapter.md)__ to learn about running demos with OVMS.

To use the OpenVINO Model Server Adapter, install the package with the `ovms` extra module:
```sh
pip install <omz_dir>/demos/common/python[ovms]
```
## Model API Pipelines

Model API Pipelines are high-level wrappers that manage submitting input data and accessing model results.
They perform data submission for model inference, check the inference status (whether the result is ready or not), and provide access to the results.

The `AsyncPipeline` is available, which handles the asynchronous execution of a single model.
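The submit-then-poll flow such a pipeline manages can be sketched with plain Python. `ToyAsyncPipeline` below is hypothetical, not the actual `AsyncPipeline` class; it only demonstrates the pattern of scheduling inference without blocking and collecting results by request id.

```python
# Illustrative sketch only: ToyAsyncPipeline is a hypothetical class showing
# the submit/poll pattern of an asynchronous pipeline; it is not the real
# Model API AsyncPipeline.
from concurrent.futures import ThreadPoolExecutor

class ToyAsyncPipeline:
    def __init__(self, model_fn, max_requests=2):
        self._model_fn = model_fn
        self._executor = ThreadPoolExecutor(max_workers=max_requests)
        self._requests = {}

    def submit_data(self, inputs, request_id):
        # Schedule inference without blocking the caller.
        self._requests[request_id] = self._executor.submit(self._model_fn, inputs)

    def get_result(self, request_id):
        # Return the result for a request once it is ready, else None.
        future = self._requests.get(request_id)
        if future is not None and future.done():
            return self._requests.pop(request_id).result()
        return None

    def await_all(self):
        # Block until every submitted request has finished.
        self._executor.shutdown(wait=True)

pipeline = ToyAsyncPipeline(lambda x: x * x)
pipeline.submit_data(3, request_id=0)
pipeline.await_all()
print(pipeline.get_result(0))  # 9
```

The real pipeline applies the same idea to inference requests: the application keeps feeding frames while earlier requests are still in flight, which hides inference latency behind data preparation.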

## Ready-to-use Model API solutions

To apply Model API wrappers in custom applications, study the provided example of a common Model API usage scenario.

In the example, the SSD architecture is used to predict bounding boxes on the input image `"sample.png"`. The model is executed through `OpenvinoAdapter`, therefore we submit the path to the model's `xml` file.

Once the SSD model wrapper instance is created, we get predictions from the model in one line: `ssd_model(input_data)`. The wrapper performs preprocessing, synchronous inference on the OpenVINO™ toolkit side, and postprocessing.

```python
import cv2

# import model wrapper class
from openvino.model_zoo.model_api.models import SSD
# import inference adapter and helper for runtime setup
from openvino.model_zoo.model_api.adapters import OpenvinoAdapter, create_core

# read input image using opencv
input_data = cv2.imread("sample.png")

# define the path to mobilenet-ssd model in IR format
model_path = "public/mobilenet-ssd/FP32/mobilenet-ssd.xml"

# create adapter for OpenVINO™ runtime, pass the model path
model_adapter = OpenvinoAdapter(create_core(), model_path, device="CPU")

# create model API wrapper for SSD architecture
# preload=True loads the model on CPU inside the adapter
ssd_model = SSD(model_adapter, preload=True)

# apply input preprocessing, sync inference, model output postprocessing
results = ssd_model(input_data)
```

To study more complex scenarios, refer to the Open Model Zoo Python* demos, where asynchronous inference is applied.

demos/common/python/setup.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -30,6 +30,9 @@
 with open(SETUP_DIR / 'requirements.txt') as f:
     required = f.read().splitlines()
 
+with open(SETUP_DIR / 'requirements_ovms.txt') as f:
+    ovms_required = f.read().splitlines()
+
 packages = find_packages(str(SETUP_DIR))
 package_dir = {'openvino': str(SETUP_DIR / 'openvino')}
 
@@ -47,4 +50,5 @@
     packages=packages,
     package_dir=package_dir,
     install_requires=required,
+    extras_require={'ovms': ovms_required}
 )
```

demos/object_detection_demo/python/README.md

Lines changed: 6 additions & 0 deletions
```diff
@@ -38,6 +38,12 @@ Async API operates with a notion of the "Infer Request" that encapsulates the in
 
 > **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#general-conversion-parameters).
 
+## Model API
+
+The demo utilizes model wrappers, adapters and pipelines from [Python* Model API](../../common/python/openvino/model_zoo/model_api/README.md).
+
+The generalized interface of the wrappers, with its unified results representation, provides support for multiple different object detection model topologies in one demo.
+
 ## Preparing to Run
 
 For demo input image or video files, refer to the section **Media Files Available for Demos** in the [Open Model Zoo Demos Overview](../../README.md).
```
