Commit 10e4f1e

Merge branch 'openvinotoolkit:master' into master
2 parents: 440da01 + 1697e60

825 files changed: 2,695 additions, 9,803 deletions


CONTRIBUTING.md

Lines changed: 3 additions & 4 deletions
@@ -4,7 +4,6 @@ We appreciate your intention to contribute model to the OpenVINO™ Open Mod
 
 Frameworks supported by the Open Model Zoo:
 * Caffe\*
-* Caffe2\* (via conversion to ONNX\*)
 * TensorFlow\*
 * PyTorch\* (via conversion to ONNX\*)
 * MXNet\*
@@ -113,7 +112,7 @@ For replacement operation:
 - `replacement` — Replacement string
 - `count` (*optional*) — Exact number of replacements (if number of `pattern` occurrences less then this number, downloading will be aborted)
 
-**`conversion_to_onnx_args`** (*only for Caffe2\*, PyTorch\* models*)
+**`conversion_to_onnx_args`** (*only for PyTorch\* models*)
 
 List of ONNX\* conversion parameters, see `model_optimizer_args` for details.
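The hunk above narrows `conversion_to_onnx_args` to PyTorch\* models, since Caffe2\* support is dropped by this commit. As a hedged illustration of what the PyTorch-to-ONNX step amounts to (not the repository's converter, which is driven by the `conversion_to_onnx_args` list in each model's config), a plain export might look like this; the model, input shape, and file names below are hypothetical:

```python
# Illustrative only: export a stand-in PyTorch model to ONNX so that
# Model Optimizer can consume it. Names and shapes are assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)  # hypothetical stand-in model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # assumed NCHW input shape
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",        # ONNX file later converted to IR
    input_names=["data"],
    output_names=["prob"],
    opset_version=11,
)
```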

@@ -177,15 +176,15 @@ license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICE
 
 ## Model Conversion
 
-Deep Learning Inference Engine (IE) supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After a successful conversion you get a model in the IR format, with the `*.xml` file representing the net graph and the `*.bin` file containing the net parameters.
+Deep Learning Inference Engine (IE) supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After a successful conversion you get a model in the IR format, with the `*.xml` file representing the net graph and the `*.bin` file containing the net parameters.
 
 > **NOTE 1**: Image preprocessing parameters (mean and scale) must be built into a converted model to simplify model usage.
 
 > **NOTE 2**: If a model input is a color image, color channel order should be `BGR`.
 
 ## Demo
 
-A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](demos/README.md) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python).
+A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](demos/README.md) or [samples](https://docs.openvino.ai/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python).
 
 The demo's name should end with `_demo` suffix to follow the convention of the project.
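The Model Conversion and Demo sections touched above describe running an IR (`*.xml` graph plus `*.bin` weights) through the Inference Engine. A minimal sketch with the 2021.x Python API, assuming hypothetical file names and a 224×224 `BGR` input (matching NOTE 2 in the diff):

```python
# Sketch only: load an IR pair and run one inference with the 2021.x
# Inference Engine Python API. File names and input size are assumptions.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

image = cv2.imread("input.jpg")              # OpenCV reads images as BGR
image = cv2.resize(image, (224, 224))        # assumed network input size
blob = image.transpose(2, 0, 1)[np.newaxis]  # HWC -> NCHW

result = exec_net.infer({input_name: blob})
print(result[output_name].shape)
```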

README.md

Lines changed: 5 additions & 5 deletions
@@ -1,4 +1,4 @@
-# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Open Model Zoo repository
+# [OpenVINO™ Toolkit](https://docs.openvino.ai/latest/index.html) - Open Model Zoo repository
 [![Stable release](https://img.shields.io/badge/version-2021.4-green.svg)](https://github.com/openvinotoolkit/open_model_zoo/releases/tag/2022.1)
 [![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/open_model_zoo/community)
 [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
@@ -19,12 +19,12 @@ Open Model Zoo is licensed under [Apache License Version 2.0](LICENSE).
 
 ## Online Documentation
 * [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
-* [Pre-Trained Models](https://docs.openvinotoolkit.org/latest/trained_models.html)
-* [Demos and Samples](https://docs.openvinotoolkit.org/latest/omz_demos.html)
+* [Pre-Trained Models](https://docs.openvino.ai/latest/model_zoo.html)
+* [Demos and Samples](https://docs.openvino.ai/latest/omz_demos.html)
 
 ## Other Usage Examples
-* [Open Visual Cloud](https://01.org/openvisualcloud)
-* [Tutorial: Build and Run the AD Insertion Sample on public cloud or local machine](https://01.org/openvisualcloud/documents/tutorial-build-and-run-ad-insertion-sample-public-cloud-or-local-machine)
+* [Open Visual Cloud](https://www.intel.com/content/www/us/en/developer/articles/technical/open-visual-cloud.html)
+* [Tutorial: Running AD Insertion on Public Cloud](https://github.com/OpenVisualCloud/Ad-Insertion-Sample/wiki/Tutorial:-Running-AD-Insertion-on-Public-Cloud)
 * [GitHub Repo for Ad Insertion Sample](https://github.com/OpenVisualCloud/Ad-Insertion-Sample)
 * [OpenVINO for Smart City](https://github.com/incluit/OpenVino-For-SmartCity)
 * [OpenVINO Driver Behavior](https://github.com/incluit/OpenVino-Driver-Behaviour)

ci/requirements-ac-test.txt

Lines changed: 3 additions & 2 deletions
@@ -20,8 +20,6 @@ decorator==4.4.2
     # via networkx
 defusedxml==0.7.1
     # via -r tools/accuracy_checker/requirements-core.in
-editdistance==0.5.3
-    # via -r tools/accuracy_checker/requirements.in
 fast-ctc-decode==0.3.0
     # via -r tools/accuracy_checker/requirements.in
 filelock==3.0.12
@@ -78,6 +76,8 @@ numpy==1.19.5
     # scipy
     # tifffile
     # transformers
+openvino-telemetry==2022.1.0
+    # via -r tools/accuracy_checker/requirements-core.in
 packaging==21.0
     # via
     # huggingface-hub
@@ -136,6 +136,7 @@ regex==2021.8.28
 requests==2.26.0
     # via
     # huggingface-hub
+    # openvino-telemetry
     # transformers
 sacremoses==0.0.45
     # via transformers

ci/requirements-ac.txt

Lines changed: 3 additions & 2 deletions
@@ -16,8 +16,6 @@ decorator==4.4.2
     # via networkx
 defusedxml==0.7.1
     # via -r tools/accuracy_checker/requirements-core.in
-editdistance==0.5.3
-    # via -r tools/accuracy_checker/requirements.in
 fast-ctc-decode==0.3.0
     # via -r tools/accuracy_checker/requirements.in
 filelock==3.0.12
@@ -70,6 +68,8 @@ numpy==1.19.5
     # scipy
     # tifffile
     # transformers
+openvino-telemetry==2022.1.0
+    # via -r tools/accuracy_checker/requirements-core.in
 packaging==21.0
     # via
     # huggingface-hub
@@ -117,6 +117,7 @@ regex==2021.8.28
 requests==2.26.0
     # via
     # huggingface-hub
+    # openvino-telemetry
     # transformers
 sacremoses==0.0.45
     # via transformers

ci/requirements-conversion.txt

Lines changed: 0 additions & 5 deletions
@@ -26,8 +26,6 @@ defusedxml==0.7.1
     # -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_tf2.txt
 flatbuffers==1.12
     # via tensorflow
-future==0.18.2
-    # via -r tools/model_tools/requirements-caffe2.in
 gast==0.3.3
     # via tensorflow
 google-auth==1.35.0
@@ -84,7 +82,6 @@ oauthlib==3.1.1
 onnx==1.10.1
     # via
     # -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_onnx.txt
-    # -r tools/model_tools/requirements-caffe2.in
     # -r tools/model_tools/requirements-pytorch.in
 opt-einsum==3.3.0
     # via tensorflow
@@ -126,7 +123,6 @@ six==1.15.0
     # google-auth
     # google-pasta
     # grpcio
-    # h5py
     # keras-preprocessing
     # onnx
     # protobuf
@@ -147,7 +143,6 @@ termcolor==1.1.0
     # via tensorflow
 torch==1.8.1
     # via
-    # -r tools/model_tools/requirements-caffe2.in
     # -r tools/model_tools/requirements-pytorch.in
     # torchvision
 torchvision==0.9.1

ci/requirements-downloader.txt

Lines changed: 5 additions & 1 deletion
@@ -6,11 +6,15 @@ charset-normalizer==2.0.4
     # via requests
 idna==3.2
     # via requests
+openvino-telemetry==2022.1.0
+    # via -r tools/model_tools/requirements.in
 pyrx==0.3.0
     # via -r tools/model_tools/requirements.in
 pyyaml==5.4.1
     # via -r tools/model_tools/requirements.in
 requests==2.26.0
-    # via -r tools/model_tools/requirements.in
+    # via
+    # -r tools/model_tools/requirements.in
+    # openvino-telemetry
 urllib3==1.26.6
     # via requests

ci/requirements-quantization.txt

Lines changed: 3 additions & 2 deletions
@@ -20,8 +20,6 @@ defusedxml==0.7.1
     # via
     # -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_kaldi.txt
     # -r tools/accuracy_checker/requirements-core.in
-editdistance==0.5.3
-    # via -r tools/accuracy_checker/requirements.in
 fast-ctc-decode==0.3.0
     # via -r tools/accuracy_checker/requirements.in
 filelock==3.0.12
@@ -81,6 +79,8 @@ numpy==1.19.5
     # scipy
     # tifffile
     # transformers
+openvino-telemetry==2022.1.0
+    # via -r tools/accuracy_checker/requirements-core.in
 packaging==21.0
     # via
     # huggingface-hub
@@ -131,6 +131,7 @@ requests==2.26.0
     # via
     # -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_kaldi.txt
     # huggingface-hub
+    # openvino-telemetry
     # transformers
 sacremoses==0.0.45
     # via transformers

ci/update-requirements.py

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ def pc(target, *sources):
 pc('ci/requirements-check-basics.txt',
     'ci/requirements-check-basics.in', 'ci/requirements-documentation.in')
 pc('ci/requirements-conversion.txt',
-    *(f'tools/model_tools/requirements-{suffix}.in' for suffix in ['caffe2', 'pytorch', 'tensorflow']),
+    *(f'tools/model_tools/requirements-{suffix}.in' for suffix in ['pytorch', 'tensorflow']),
     *(openvino_dir / f'deployment_tools/model_optimizer/requirements_{suffix}.txt'
         for suffix in ['caffe', 'mxnet', 'onnx', 'tf2']))
 pc('ci/requirements-demos.txt',
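In `update-requirements.py`, each `pc(target, *sources)` call regenerates a pinned `ci/requirements-*.txt` file from the listed `.in` inputs; this commit drops the `caffe2` input from the conversion set. A rough sketch of what such a helper could do with pip-tools — an assumption for illustration, not the script's actual body:

```python
# Hypothetical sketch of a pc(target, *sources) helper: pin the given
# requirement inputs into one output file using pip-tools' pip-compile.
import subprocess

def pc(target, *sources):
    subprocess.run(
        ["pip-compile", "--quiet", "--output-file", target, *map(str, sources)],
        check=True,
    )

# Example: regenerate the conversion pins without the removed caffe2 input.
pc("ci/requirements-conversion.txt",
   "tools/model_tools/requirements-pytorch.in",
   "tools/model_tools/requirements-tensorflow.in")
```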

data/datasets.md

Lines changed: 3 additions & 3 deletions
@@ -40,7 +40,7 @@ To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the follo
 
 ### Datasets in dataset_definitions.yml
 * `imagenet_1000_classes` used for evaluation models trained on ILSVRC 2012 dataset with 1000 classes. (model examples: [`alexnet`](../models/public/alexnet/README.md), [`vgg16`](../models/public/vgg16/README.md))
-* `imagenet_1000_classes_2015` used for evaluation models trained on ILSVRC 2015 dataset with 1000 classes. (model examples: [`se-resnet-152`](../models/public/se-resnet-152/README.md), [`se-resnext-50`](../models/public/se-resnext-50/README.md))
+* `imagenet_1000_classes_2015` used for evaluation models trained on ILSVRC 2015 dataset with 1000 classes. (model examples: [`se-resnet-50`](../models/public/se-resnet-50/README.md), [`se-resnext-50`](../models/public/se-resnext-50/README.md))
 * `imagenet_1001_classes` used for evaluation models trained on ILSVRC 2012 dataset with 1001 classes (background label + original labels). (model examples: [`googlenet-v2-tf`](../models/public/googlenet-v2-tf/README.md), [`resnet-50-tf`](../models/public/resnet-50-tf/README.md))
 
 ## [Common Objects in Context (COCO)](https://cocodataset.org/#home)
@@ -62,9 +62,9 @@ To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the follo
 
 ### Datasets in dataset_definitions.yml
 * `ms_coco_mask_rcnn` used for evaluation models trained on COCO dataset for object detection and instance segmentation tasks. Background label + label map with 80 public available object categories are used. Annotations are saved in order of ascending image ID.
-* `ms_coco_detection_91_classes` used for evaluation models trained on COCO dataset for object detection tasks. Background label + label map with 80 public available object categories are used (original indexing to 91 categories is preserved. You can find more information about object categories labels [here](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/)). Annotations are saved in order of ascending image ID. (model examples: [`faster_rcnn_resnet50_coco`](../models/public/faster_rcnn_resnet50_coco/README.md), [`ssd_resnet50_v1_fpn_coco`](../models/public/ssd_resnet50_v1_fpn_coco/README.md))
+* `ms_coco_detection_91_classes` used for evaluation models trained on COCO dataset for object detection tasks. Background label + label map with 80 public available object categories are used (original indexing to 91 categories is preserved. You can find more information about object categories labels [here](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/)). Annotations are saved in order of ascending image ID. (model examples: [`faster_rcnn_resnet50_coco`](../models/public/faster_rcnn_resnet50_coco/README.md), [`ssd_mobilenet_v1_coco`](../models/public/ssd_mobilenet_v1_coco/README.md))
 * `ms_coco_detection_80_class_with_background` used for evaluation models trained on COCO dataset for object detection tasks. Background label + label map with 80 public available object categories are used. Annotations are saved in order of ascending image ID. (model examples: [`faster-rcnn-resnet101-coco-sparse-60-0001`](../models/intel/faster-rcnn-resnet101-coco-sparse-60-0001/README.md), [`ssd-resnet34-1200-onnx`](../models/public/ssd-resnet34-1200-onnx/README.md))
-* `ms_coco_detection_80_class_without_background` used for evaluation models trained on COCO dataset for object detection tasks. Label map with 80 public available object categories is used. Annotations are saved in order of ascending image ID. (model examples: [`ctdet_coco_dlav0_384`](../models/public/ctdet_coco_dlav0_384/README.md), [`yolo-v3-tf`](../models/public/yolo-v3-tf/README.md))
+* `ms_coco_detection_80_class_without_background` used for evaluation models trained on COCO dataset for object detection tasks. Label map with 80 public available object categories is used. Annotations are saved in order of ascending image ID. (model examples: [`ctdet_coco_dlav0_512`](../models/public/ctdet_coco_dlav0_512/README.md), [`yolo-v3-tf`](../models/public/yolo-v3-tf/README.md))
 * `ms_coco_keypoints` used for evaluation models trained on COCO dataset for human pose estimation tasks. Each annotation stores multiple keypoints for one image. (model examples: [`human-pose-estimation-0001`](../models/intel/human-pose-estimation-0001/README.md))
 * `ms_coco_single_keypoints` used for evaluation models trained on COCO dataset for human pose estimation tasks. Each annotation stores single keypoints for image, so several annotation can be associated to one image. (model examples: [`single-human-pose-estimation-0001`](../models/public/single-human-pose-estimation-0001/README.md))
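The entries updated above (`imagenet_*`, `ms_coco_*`) name dataset definitions consumed by the OMZ evaluation tools. A small sketch, assuming `data/dataset_definitions.yml` keeps its entries in a top-level `datasets` list with a `name` field, that prints which definitions are available:

```python
# Hedged sketch: list dataset names from dataset_definitions.yml.
# The top-level "datasets" key and the repo-relative path are assumptions.
import yaml

with open("data/dataset_definitions.yml") as f:
    definitions = yaml.safe_load(f)

for dataset in definitions.get("datasets", []):
    print(dataset["name"])
```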

demos/3d_segmentation_demo/python/README.md

Lines changed: 1 addition & 1 deletion
@@ -123,5 +123,5 @@ The demo reports
 ## See Also
 
 * [Open Model Zoo Demos](../../README.md)
-* [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
+* [Model Optimizer](https://docs.openvino.ai/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
 * [Model Downloader](../../../tools/model_tools/README.md)
