
Commit 573fb5d

Songki Choi, kprokofi, eunwoosh, cih9088, and nikita-savelyevv authored
[RELEASE] Merge back v1.0.1 (#1899)
* [RELEASE][DOC] Fix wrong info in documentation (#1849) * updated dataset formats info. Fix for multilabel classification * revert file * revert file. minor * added warning to instance segmentation * revert changes * [Enhance] Separate installation for each tasks on release task (#1869) * separte_import * align with pre commit * update unit test code * add separate task env pre merge test & aplly it to github action * add multiprocess to requirement * [FIX][REL1.0] Fix Geti integration issues (#1885) * Fix ote_config -> otx_config * [FIX] hang issue when tracing a stack in certain scenario (#1868) fix: use primitive library * [FIX][POT] Set stat_requests_number parameter to 1 (#1870) Set POT stat_requests_number parameter to 1 in order to lower RAM footprint * [FIX] Training error when batch size is 1 (#1872) fix: drop last batch * Recover detection num_workers=2 * Remove nbmake from base requirements * Add py.typed in package * [FIX] Arrange scale between bbox preds and bbox targets in ATSS (#1880) Arrange scale between bbox preds and bbox targets * [FIX][RELEASE1.0] Remove cfg dump in ckpt (#1895) * Remove cfg dump in ckpt * Fix pre-commit * Release v1.0.1 * [FIX] Prevent torch 2.0.0 installation (#1896) * Add torchvision & torchtext in requirements/anomaly.txt with fixed version * Update requirements/anomaly.txt * Fix _model_cfg -> _recipe_cfg due to cfg merge --------- Signed-off-by: Songki Choi <[email protected]> Co-authored-by: Prokofiev Kirill <[email protected]> Co-authored-by: Eunwoo Shin <[email protected]> Co-authored-by: Inhyuk Cho <[email protected]> Co-authored-by: Nikita Savelyev <[email protected]> Co-authored-by: Jaeguk Hyun <[email protected]> Co-authored-by: Jihwan Eom <[email protected]>
1 parent 8f2c882 commit 573fb5d

9 files changed: +40 -42 lines changed


CHANGELOG.md

Lines changed: 17 additions & 0 deletions
@@ -2,6 +2,23 @@
 
 All notable changes to this project will be documented in this file.
 
+## \[v1.0.1\]
+
+### Enhancements
+
+- Refine documents by proof review
+- Separate installation for each tasks
+- Improve POT efficiency by setting stat_requests_number parameter to 1
+
+### Bug fixes
+
+- Fix missing classes in cls checkpoint
+- Fix action task sample codes
+- Fix label_scheme mismatch in classification
+- Fix training error when batch size is 1
+- Fix hang issue when tracing a stack in certain scenario
+- Fix pickling error by Removing mmcv cfg dump in ckpt
+
 ## \[v1.0.0\]
 
 > _**NOTES**_

README.md

Lines changed: 3 additions & 1 deletion
@@ -31,7 +31,9 @@ OpenVINO™ Training Extensions is a low-code transfer learning framework for Co
 The CLI commands of the framework allows users to train, infer, optimize and deploy models easily and quickly even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on [PyTorch](https://pytorch.org) and [OpenVINO™
 toolkit](https://software.intel.com/en-us/openvino-toolkit).
 
-OpenVINO™ Training Extensions provides a "model template" for every supported task type, which consolidates necessary information to build a model. Model templates are validated on various datasets and serve one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on [torchvision](https://pytorch.org/vision/latest/index.html), [pytorchcv](https://github.com/osmr/imgclsmob), [mmcv](https://github.com/open-mmlab/mmcv) and [OpenVINO Model Zoo (OMZ)](https://github.com/openvinotoolkit/open_model_zoo).
+OpenVINO™ Training Extensions provides a "model template" for every supported task type, which consolidates necessary information to build a model.
+Model templates are validated on various datasets and serve one-stop shop for obtaining the best models in general.
+If you are an experienced user, you can configure your own model based on [torchvision](https://pytorch.org/vision/latest/index.html), [pytorchcv](https://github.com/osmr/imgclsmob), [mmcv](https://github.com/open-mmlab/mmcv) and [OpenVINO Model Zoo (OMZ)](https://github.com/openvinotoolkit/open_model_zoo).
 
 Furthermore, OpenVINO™ Training Extensions provides automatic configuration of task types and hyperparameters.
 The framework will identify the most suitable model template based on your dataset, and choose the best hyperparameter configuration. The development team is continuously extending functionalities to make training as simple as possible so that single CLI command can obtain accurate, efficient and robust models ready to be integrated into your project.

otx/algorithms/common/tasks/nncf_base.py

Lines changed: 9 additions & 23 deletions
@@ -18,7 +18,6 @@
 import io
 import json
 import os
-from collections.abc import Mapping
 from copy import deepcopy
 from typing import Dict, List, Optional
 
@@ -336,29 +335,16 @@ def save_model(self, output_model: ModelEntity):
         hyperparams_str = ids_to_strings(cfg_helper.convert(self._hyperparams, dict, enum_to_str=True))
         labels = {label.name: label.color.rgb_tuple for label in self._labels}
 
-        config = deepcopy(self._recipe_cfg)
-
-        def update(d, u):  # pylint: disable=invalid-name
-            for k, v in u.items():  # pylint: disable=invalid-name
-                if isinstance(v, Mapping):
-                    d[k] = update(d.get(k, {}), v)
-                else:
-                    d[k] = v
-            return d
-
-        modelinfo = torch.load(self._model_ckpt, map_location=torch.device("cpu"))
-        modelinfo = update(
-            dict(model=modelinfo),
-            {
-                "meta": {
-                    "nncf_enable_compression": True,
-                    "config": config,
-                },
-                "config": hyperparams_str,
-                "labels": labels,
-                "VERSION": 1,
+        model_ckpt = torch.load(self._model_ckpt, map_location=torch.device("cpu"))
+        modelinfo = {
+            "model": model_ckpt,
+            "config": hyperparams_str,
+            "labels": labels,
+            "VERSION": 1,
+            "meta": {
+                "nncf_enable_compression": True,
             },
-        )
+        }
         self._save_model_post_hook(modelinfo)
 
         torch.save(modelinfo, buffer)
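The refactor above stops embedding the full mmcv recipe config inside the checkpoint, which is what triggered the pickling error noted in the changelog. Below is a minimal, hypothetical sketch of the resulting checkpoint layout and a round-trip through torch.save/torch.load; the placeholder values are illustrative, not OTX data.

```python
import io

import torch

# Slimmed-down checkpoint layout produced by save_model() after this change;
# every value is a plain dict/str/tuple, so pickling is straightforward.
# All values below are placeholders for illustration only.
modelinfo = {
    "model": {"state_dict": {"conv.weight": torch.zeros(1)}},  # raw torch checkpoint
    "config": {"learning_rate": "0.01"},                       # hyperparams as strings
    "labels": {"person": (255, 0, 0)},                         # label name -> RGB tuple
    "VERSION": 1,
    "meta": {"nncf_enable_compression": True},
}

buffer = io.BytesIO()
torch.save(modelinfo, buffer)  # an embedded mmcv Config here could fail to pickle

restored = torch.load(io.BytesIO(buffer.getvalue()), map_location="cpu")
assert restored["meta"]["nncf_enable_compression"] is True
```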

otx/algorithms/detection/configs/detection/configuration.yaml

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ learning_parameters:
     warning: null
   num_workers:
     affects_outcome_of: NONE
-    default_value: 0
+    default_value: 2
     description:
       Increasing this value might improve training speed however it might
       cause out of memory errors. If the number of workers is set to zero, data loading
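The default_value bump from 0 to 2 matches the "Recover detection num_workers=2" item in the commit message. A small self-contained PyTorch sketch of what the parameter controls; the dataset is a stand-in, not an OTX class.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


def main() -> None:
    # Stand-in dataset: 32 random "images" with binary targets.
    dataset = TensorDataset(torch.randn(32, 3, 64, 64), torch.randint(0, 2, (32,)))

    # num_workers=0 loads batches in the main process; num_workers=2 (the new
    # default) spawns two loader processes, which is usually faster but uses
    # more RAM -- the trade-off the parameter description warns about.
    loader = DataLoader(dataset, batch_size=8, num_workers=2)

    for images, targets in loader:
        pass  # a training step would consume the batch here


if __name__ == "__main__":
    main()  # the guard matters when loader worker processes are spawned
```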

otx/algorithms/detection/tasks/nncf.py

Lines changed: 6 additions & 12 deletions
@@ -22,6 +22,9 @@
 from otx.algorithms.common.adapters.mmcv.utils import remove_from_config
 from otx.algorithms.common.tasks.nncf_base import NNCFBaseTask
 from otx.algorithms.detection.adapters.mmdet.nncf import build_nncf_detector
+from otx.algorithms.detection.adapters.mmdet.utils.config_utils import (
+    should_cluster_anchors,
+)
 from otx.api.entities.datasets import DatasetEntity
 from otx.api.entities.inference_parameters import InferenceParameters
 from otx.api.entities.model import ModelEntity
@@ -110,17 +113,8 @@ def _optimize_post_hook(
         output_model.performance = performance
 
     def _save_model_post_hook(self, modelinfo):
-        config = modelinfo["meta"]["config"]
-        if hasattr(config.model, "bbox_head") and hasattr(config.model.bbox_head, "anchor_generator"):
-            if getattr(
-                config.model.bbox_head.anchor_generator,
-                "reclustering_anchors",
-                False,
-            ):
-                generator = config.model.bbox_head.anchor_generator
-                modelinfo["anchors"] = {
-                    "heights": generator.heights,
-                    "widths": generator.widths,
-                }
+        if self._recipe_cfg is not None and should_cluster_anchors(self._recipe_cfg):
+            modelinfo["anchors"] = {}
+            self._update_anchors(modelinfo["anchors"], self._recipe_cfg.model.bbox_head.anchor_generator)
 
         modelinfo["confidence_threshold"] = self.confidence_threshold

otx/api/configuration/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@
 #
 
 
-import otx.api.configuration.helper as ote_config_helper  # for 'ote' backward compatibility
+import otx.api.configuration.helper as otx_config_helper  # for backward compatibility
 import otx.api.configuration.helper as cfg_helper  # pylint: disable=reimported
 from otx.api.configuration.elements import metadata_keys
 from otx.api.configuration.elements.configurable_enum import ConfigurableEnum
@@ -27,7 +27,7 @@
 __all__ = [
     "metadata_keys",
     "cfg_helper",
-    "ote_config_helper",
+    "otx_config_helper",
     "ConfigurableEnum",
     "ModelLifecycle",
     "Action",

requirements/base.txt

Lines changed: 0 additions & 1 deletion
@@ -1,7 +1,6 @@
 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
 # Base Algo Requirements. #
 natsort>=6.0.0
-nbmake
 prettytable
 protobuf>=3.20.0
 pyyaml

setup.py

Lines changed: 1 addition & 1 deletion
@@ -174,7 +174,7 @@ def find_yaml_recipes():
     return results
 
 
-package_data = {"": ["requirements.txt", "README.md", "LICENSE"]}  # Needed for exportable code
+package_data = {"": ["requirements.txt", "README.md", "LICENSE", "py.typed"]}
 package_data.update(find_yaml_recipes())
 
 setup(
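Listing py.typed in package_data follows the PEP 561 pattern: the empty marker file ships inside the wheel so type checkers read the package's inline annotations. A generic sketch of that pattern; the names are illustrative and this is not the actual OTX setup.py.

```python
from setuptools import find_packages, setup

setup(
    name="example-typed-package",  # illustrative project, not OTX
    version="0.0.1",
    packages=find_packages(),
    # Ship the empty py.typed marker (placed next to each __init__.py) so
    # type checkers such as mypy treat the installed package as typed.
    package_data={"": ["py.typed"]},
)
```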

tox.ini

Lines changed: 1 addition & 1 deletion
@@ -218,7 +218,7 @@ commands =
 
 [testenv:bandit-scan]
 skip_install = true
-deps = 
+deps =
    bandit
 allowlist_externals =
    bandit
