docs/source/guide/get_started/introduction.rst (5 additions, 5 deletions)
@@ -9,11 +9,11 @@ Introduction
**OpenVINO™ Training Extensions** is a low-code transfer learning framework for Computer Vision.
-Using the simple CLI commands of the framework, users can train, infer, optimize and deploy models simply and fast even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch <https://pytorch.org/>`_ and `OpenVINO™ toolkit <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_.
+The CLI commands of the framework allow users to train, infer, optimize and deploy models easily and quickly, even with limited expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch <https://pytorch.org/>`_ and `OpenVINO™ toolkit <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_.
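For illustration, a minimal sketch of how these verbs map onto CLI commands; the template name, dataset paths, and the use of ``otx eval`` for inference are placeholders and assumptions, not values taken from this documentation:

.. code-block::

    # hypothetical template and paths, shown only to illustrate the workflow
    (otx) ...$ otx train <template> --train-data-roots <path/to/train> --val-data-roots <path/to/val>
    (otx) ...$ otx eval <template> --load-weights outputs/weights.pth --test-data-roots <path/to/test>
    (otx) ...$ otx optimize <template> --load-weights outputs/weights.pth
    (otx) ...$ otx deploy <template> --load-weights outputs/openvino/openvino.xml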
OpenVINO™ Training Extensions provides a **“model template”** for every supported task type, which consolidates the necessary information to build a model. Model templates are validated on various datasets and serve as a one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on `torchvision <https://pytorch.org/vision/stable/index.html>`_, `pytorchcv <https://github.com/osmr/imgclsmob>`_, `mmcv <https://github.com/open-mmlab/mmcv>`_ and `OpenVINO Model Zoo (OMZ) <https://github.com/openvinotoolkit/open_model_zoo>`_ frameworks.
-Moreover, OpenVINO™ Training Extensions provides :doc:`automatic configuration <../explanation/additional_features/auto_configuration>` of task types and hyperparameters. The framework will identify the most suitable model template based on your dataset, and choose the best hyperparameter configuration. The development team is continuously extending functionalities to make training as simple as possible so that single CLI command can obtain accurate, efficient and robust models ready to be integrated into your project.
+Furthermore, OpenVINO™ Training Extensions provides :doc:`automatic configuration <../explanation/additional_features/auto_configuration>` of task types and hyperparameters. The framework will identify the most suitable model template based on your dataset and choose the best hyperparameter configuration. The development team is continuously extending functionality to make training as simple as possible, so that a single CLI command can produce accurate, efficient and robust models ready to be integrated into your project.
************
Key Features
@@ -30,7 +30,7 @@ OpenVINO™ Training Extensions supports the following computer vision tasks:
OpenVINO™ Training Extensions supports the :doc:`following learning methods <../explanation/algorithms/index>`:
-- **Supervised**, incremental training including class incremental scenario and contrastive learning for classification and semantic segmentation tasks
+- **Supervised**, incremental training, which includes the class-incremental scenario and contrastive learning for classification and semantic segmentation tasks
- **Semi-supervised learning**
- **Self-supervised learning**
@@ -39,5 +39,5 @@ OpenVINO™ Training Extensions will provide the :doc:`following features <../ex
- **Distributed training** to accelerate the training process when you have multiple GPUs
- **Half-precision training** to save GPU memory and use larger batch sizes
- Integrated, efficient :doc:`hyper-parameter optimization module <../explanation/additional_features/hpo>` (**HPO**). Through a dataset proxy and a built-in hyper-parameter optimizer, you can get much faster hyper-parameter optimization compared to other off-the-shelf tools. The hyper-parameter optimization is dynamically scheduled based on your resource budget.
-- OpenVINO™ Training Extensions uses `Datumaro <https://openvinotoolkit.github.io/datumaro/docs/>`_ as the backend to handle datasets. Thanks to that, OpenVINO™ Training Extensions supports the most common academic field dataset formats for each task. We constantly working to extend supported formats to give more freedom of datasets format choice.
-- Improved :doc:`auto-configuration functionality <../explanation/additional_features/auto_configuration>`. OpenVINO™ Training Extensions analyzes provided dataset and chooses the proper task and model template to have the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if there is no validation set provided.
+- OpenVINO™ Training Extensions uses `Datumaro <https://openvinotoolkit.github.io/datumaro/docs/>`_ as the backend to handle datasets. Because of that, OpenVINO™ Training Extensions supports the most common academic dataset formats for each task. More formats will be supported in the future to give greater freedom in the choice of dataset format.
+- Improved :doc:`auto-configuration functionality <../explanation/additional_features/auto_configuration>`. OpenVINO™ Training Extensions analyzes the provided dataset and selects the proper task and model template to provide the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if no validation set is provided.
docs/source/guide/get_started/quick_start_guide/cli_commands.rst (5 additions, 5 deletions)
@@ -1,11 +1,11 @@
OpenVINO™ Training Extensions CLI commands
=================
-Below, all possible OpenVINO™ Training Extensions CLI commands are presented with some general examples of how to run specific functionality. We also have:doc:`dedicated tutorials <../../tutorials/base/how_to_train/index>` in our documentation with life-practical examples on specific datasets for each task.
+All possible OpenVINO™ Training Extensions CLI commands are presented below, along with general examples of how to run specific functionality. There are :doc:`dedicated tutorials <../../tutorials/base/how_to_train/index>` in our documentation with practical, real-life examples on specific datasets for each task.
.. note::
-To run CLI commands we need to prepare a dataset. Each task requires specific data formats. To know more about which formats are supported by each task, refer to :doc:`explanation section <../../explanation/index>` in the documentation.
+To run CLI commands, you need to prepare a dataset. Each task requires specific data formats. To learn more about which formats are supported by each task, refer to the :doc:`explanation section <../../explanation/index>` in the documentation.
*****
Find
@@ -209,19 +209,19 @@ Example of the command line to start object detection training:
.. note::
-We also can visualize the training using ``Tensorboard`` as these logs are located in ``<work_dir>/tf_logs``.
+You can also visualize the training using ``Tensorboard``, as the logs are located in ``<work_dir>/tf_logs``.
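Assuming TensorBoard is installed in the same environment, a minimal sketch of pointing it at those logs:

.. code-block::

    (otx) ...$ tensorboard --logdir <work_dir>/tf_logs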
It is also possible to start training by omitting the template and just passing the paths to dataset roots; then :doc:`auto-configuration <../../explanation/additional_features/auto_configuration>` will be enabled, as in the sketch below. Based on the dataset, OpenVINO™ Training Extensions will choose the task type and template with the best accuracy/speed trade-off.
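A sketch of such a template-free invocation, with placeholder dataset paths; auto-configuration then picks the task type and template:

.. code-block::

    (otx) ...$ otx train --train-data-roots <path/to/train_dataset> --val-data-roots <path/to/val_dataset>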
-We also can modify model template-specific parameters through the command line. To print all the available parameters the following command can be executed:
+You can also modify model template-specific parameters through the command line. To print all the available parameters, the following command can be executed:
.. code-block::
(otx) ...$ otx train TEMPLATE params --help
-For example, that is how we can change the learning rate and the batch size for the SSD model:
+For example, this is how you can change the learning rate and the batch size for the SSD model:
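A sketch of such an override, assuming the parameter names reported by ``params --help``; the template name, paths, and values are placeholders:

.. code-block::

    (otx) ...$ otx train <ssd_template> --train-data-roots <path/to/train> --val-data-roots <path/to/val> \
                params --learning_parameters.batch_size 4 --learning_parameters.learning_rate 0.001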
docs/source/guide/get_started/quick_start_guide/installation.rst (5 additions, 6 deletions)
@@ -5,7 +5,7 @@ Installation
Prerequisites
**************
-The current version of OpenVINO™ Training Extensions was tested under the following environment:
+The current version of OpenVINO™ Training Extensions was tested in the following environment:
- Ubuntu 20.04
- Python 3.8.x
@@ -46,7 +46,7 @@ Refer to the `official installation guide <https://pytorch.org/get-started/previ
.. note::
-Currently, only torch==1.13.1 was fully validated, torch==2.x will be supported soon. (Earlier versions are not supported due to security issues)
+Currently, only torch==1.13.1 has been fully validated; torch==2.x will be supported in the future (previous versions are not supported due to security issues).
.. code-block::
@@ -56,16 +56,15 @@ Refer to the `official installation guide <https://pytorch.org/get-started/previ
# or install command for torch==1.13.1 for CUDA 11.1:
docs/source/guide/tutorials/base/demo.rst

-In this tutorial we will show how to run :doc:`trained <how_to_train/index>` model inside OTX repository in demonstration mode.
-It allows us to apply our model on the custom data or the online footage from a web camera and see how it will work in the real-life scenario.
+This tutorial shows how to run a :doc:`trained <how_to_train/index>` model inside the OTX repository in demonstration mode.
+It allows you to apply the model to custom data or online footage from a web camera and see how it works in a real-life scenario.
.. note::
This tutorial uses an object detection model as an example; however, for other tasks the functionality remains the same - you just need to replace the input dataset with your own.
-For visualization we use images from WGISD dataset from the :doc: `object detection tutorial <how_to_train/detection>`.
+For visualization, you can use images from the WGISD dataset from the :doc:`object detection tutorial <how_to_train/detection>`.
1. Activate the virtual environment
created in the previous step.
@@ -20,7 +20,7 @@ created in the previous step.
2. As an ``input`` we can use a single image,
a folder of images, a video file, or a web camera ID. We can run the demo on a PyTorch (.pth) model or an IR (.xml) model.
-The following line will run the demo on your input source, using PyTorch ``outputs/weights.pth``.
+The following command will run the demo on your input source, using the PyTorch model ``outputs/weights.pth``.
.. code-block::
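The body of this code block does not survive in the diff view; a plausible sketch, treating the template name and input path as placeholders:

.. code-block::

    (otx) ...$ otx demo <template> --input <path/to/image_or_video> --load-weights outputs/weights.pth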
@@ -64,8 +64,8 @@ we can run the following line:
.. :alt: this image shows the inference results with inference time on the WGISD dataset
.. image to be generated and added
-6. To run a demo on a web camera, we need to know its ID.
-We can check a list of camera devices by running this command line on Linux system:
+6. To run a demo on a web camera, you need to know its ID.
+You can check a list of camera devices by running the command below on a Linux system:
.. code-block::
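The listing command itself is not visible here; judging by the output shown in the next hunk, it is likely ``v4l2-ctl`` from the ``v4l-utils`` package. A sketch, assuming an Ubuntu system:

.. code-block::

    # assumed command; v4l2-ctl ships with the v4l-utils package
    (otx) ...$ sudo apt-get install v4l-utils
    (otx) ...$ v4l2-ctl --list-devices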
@@ -79,13 +79,13 @@ The output will look like this:
Integrated Camera (usb-0000:00:1a.0-1.6):
/dev/video0
-After that, we can use this ``/dev/video0`` as a camera ID for ``--input``.
+After that, you can use this ``/dev/video0`` as a camera ID for ``--input``.
-Congratulations! Now you have learned how to use base OpenVINO™ Training Extensions functionality. For the advanced features, please refer to the next section called :doc:`../advanced/index`.
+Congratulations! Now you have learned how to use the base OpenVINO™ Training Extensions functionality. For advanced features, refer to the next section: :doc:`../advanced/index`.
***************
Troubleshooting
***************
-If you use Anaconda environment, you should consider that OpenVINO has limited `Conda support <https://docs.openvino.ai/2021.4/openvino_docs_install_guides_installing_openvino_conda.html>`_ for Python 3.6 and 3.7 versions only. But the demo package requires python 3.8.
-So please use other tools to create the environment (like ``venv`` or ``virtualenv``) and use ``pip`` as a package manager.
+If you use an Anaconda environment, keep in mind that OpenVINO has limited `Conda support <https://docs.openvino.ai/2021.4/openvino_docs_install_guides_installing_openvino_conda.html>`_, covering Python 3.6 and 3.7 only, while the demo package requires Python 3.8.
+Therefore, use other tools to create the environment (such as ``venv`` or ``virtualenv``) and use ``pip`` as a package manager.
docs/source/guide/tutorials/base/deploy.rst

-This guide shows, how to deploy a model trained in the :doc:`previous stage <how_to_train/index>` and visualize it outside of this repository.
-As a result of this step, we'll get the exported model together with the self-contained python package and a demo application to visualize results in other environment without long installation process.
+This guide explains how to deploy a model trained in the :doc:`previous stage <how_to_train/index>` and visualize it outside of this repository.
+As a result of this step, you'll get the exported model together with a self-contained Python package and a demo application to visualize results in another environment without a long installation process.
.. NOTE::
To learn how to use demonstration mode inside this repository utilizing the OTX CLI, refer to :doc:`demo`.
-To be specific, this tutorial uses as an example the object detection ATSS model trained and exported in the previuos step and located in ``outputs/openvino``.
-But it can be runned for any task in the same manner.
+To be specific, this tutorial uses the object detection ATSS model trained and exported in the previous step, which is located in ``outputs/openvino``.
+Nevertheless, it can be run for any task in the same manner.
**********
Deployment
@@ -38,7 +38,7 @@ archive with the following files:
- ``requirements.txt`` - minimal packages required to run the demo
-3. We can deploy the model exported to IR,
+3. You can deploy the model exported to IR,
using the command below:
.. code-block::
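The command body is not visible in this view; a plausible sketch, treating the template name and output directory as assumptions:

.. code-block::

    (otx) ...$ otx deploy <template> --load-weights outputs/openvino/openvino.xml --save-model-to outputs/deploy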
@@ -52,9 +52,9 @@ using the command below:
2023-01-20 09:30:41,737 | INFO : Deploying the model
2023-01-20 09:30:41,753 | INFO : Deploying completed
-We also can deploy the quantized model, that was optimized with NNCF or POT, passing the path to this model in IR format to ``--load-weights`` parameter.
+You can also deploy the quantized model that was optimized with NNCF or POT by passing the path to this model in IR format to the ``--load-weights`` parameter.
-After that, we can use the resulting ``openvino.zip`` archive in other application.
+After that, you can use the resulting ``openvino.zip`` archive in other applications.