Commit 418850a

[Doc] Apply text feedbacks from OV doc team (#1838)
* Update text correction
1 parent 3f9987a commit 418850a

File tree: 12 files changed (+74, -75 lines)

docs/source/guide/get_started/introduction.rst

Lines changed: 5 additions & 5 deletions

@@ -9,11 +9,11 @@ Introduction

  **OpenVINO™ Training Extensions** is a low-code transfer learning framework for Computer Vision.

- Using the simple CLI commands of the framework, users can train, infer, optimize and deploy models simply and fast even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch <https://pytorch.org/>`_ and `OpenVINO™ toolkit <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_.
+ The CLI commands of the framework allow users to train, infer, optimize and deploy models easily and quickly, even with little expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch <https://pytorch.org/>`_ and `OpenVINO™ toolkit <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_.

  OpenVINO™ Training Extensions provides a **“model template”** for every supported task type, which consolidates the necessary information to build a model. Model templates are validated on various datasets and serve as a one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on `torchvision <https://pytorch.org/vision/stable/index.html>`_, `pytorchcv <https://github.com/osmr/imgclsmob>`_, `mmcv <https://github.com/open-mmlab/mmcv>`_ and `OpenVINO Model Zoo (OMZ) <https://github.com/openvinotoolkit/open_model_zoo>`_ frameworks.

- Moreover, OpenVINO™ Training Extensions provides :doc:`automatic configuration <../explanation/additional_features/auto_configuration>` of task types and hyperparameters. The framework will identify the most suitable model template based on your dataset, and choose the best hyperparameter configuration. The development team is continuously extending functionalities to make training as simple as possible so that single CLI command can obtain accurate, efficient and robust models ready to be integrated into your project.
+ Furthermore, OpenVINO™ Training Extensions provides :doc:`automatic configuration <../explanation/additional_features/auto_configuration>` of task types and hyperparameters. The framework will identify the most suitable model template based on your dataset and choose the best hyperparameter configuration. The development team is continuously extending functionalities to make training as simple as possible, so that a single CLI command can obtain accurate, efficient and robust models ready to be integrated into your project.

  ************
  Key Features

@@ -30,7 +30,7 @@ OpenVINO™ Training Extensions supports the following computer vision tasks:

  OpenVINO™ Training Extensions supports the :doc:`following learning methods <../explanation/algorithms/index>`:

- - **Supervised**, incremental training including class incremental scenario and contrastive learning for classification and semantic segmentation tasks
+ - **Supervised**, incremental training, which includes the class-incremental scenario and contrastive learning for classification and semantic segmentation tasks
  - **Semi-supervised learning**
  - **Self-supervised learning**

@@ -39,5 +39,5 @@ OpenVINO™ Training Extensions will provide the :doc:`following features <../ex

  - **Distributed training** to accelerate the training process when you have multiple GPUs
  - **Half-precision training** to save GPU memory and use larger batch sizes
  - Integrated, efficient :doc:`hyper-parameter optimization module <../explanation/additional_features/hpo>` (**HPO**). Through a dataset proxy and the built-in hyper-parameter optimizer, you can get much faster hyper-parameter optimization compared to other off-the-shelf tools. The hyperparameter optimization is dynamically scheduled based on your resource budget.
- - OpenVINO™ Training Extensions uses `Datumaro <https://openvinotoolkit.github.io/datumaro/docs/>`_ as the backend to handle datasets. Thanks to that, OpenVINO™ Training Extensions supports the most common academic field dataset formats for each task. We constantly working to extend supported formats to give more freedom of datasets format choice.
+ - OpenVINO™ Training Extensions uses `Datumaro <https://openvinotoolkit.github.io/datumaro/docs/>`_ as the backend to handle datasets. Thanks to this, OpenVINO™ Training Extensions supports the most common academic dataset formats for each task. More formats will be supported in the future to give you more freedom in choosing a dataset format.
- - Improved :doc:`auto-configuration functionality <../explanation/additional_features/auto_configuration>`. OpenVINO™ Training Extensions analyzes provided dataset and chooses the proper task and model template to have the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if there is no validation set provided.
+ - Improved :doc:`auto-configuration functionality <../explanation/additional_features/auto_configuration>`. OpenVINO™ Training Extensions analyzes the provided dataset and selects the proper task and model template to provide the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if no validation set is provided.

docs/source/guide/get_started/quick_start_guide/cli_commands.rst

Lines changed: 5 additions & 5 deletions

@@ -1,11 +1,11 @@

  OpenVINO™ Training Extensions CLI commands
  ==========================================

- Below, all possible OpenVINO™ Training Extensions CLI commands are presented with some general examples of how to run specific functionality. We also have :doc:`dedicated tutorials <../../tutorials/base/how_to_train/index>` in our documentation with life-practical examples on specific datasets for each task.
+ All possible OpenVINO™ Training Extensions CLI commands are presented below, along with general examples of how to run specific functionality. There are :doc:`dedicated tutorials <../../tutorials/base/how_to_train/index>` in the documentation with practical examples on specific datasets for each task.

  .. note::

- To run CLI commands we need to prepare a dataset. Each task requires specific data formats. To know more about which formats are supported by each task, refer to :doc:`explanation section <../../explanation/index>` in the documentation.
+ To run CLI commands, you need to prepare a dataset. Each task requires specific data formats. To learn which formats are supported by each task, refer to the :doc:`explanation section <../../explanation/index>` of the documentation.

  *****
  Find

@@ -209,19 +209,19 @@ Example of the command line to start object detection training:

  .. note::

- We also can visualize the training using ``Tensorboard`` as these logs are located in ``<work_dir>/tf_logs``.
+ You can also visualize the training using ``Tensorboard``, as the logs are located in ``<work_dir>/tf_logs``.

  It is also possible to start training by omitting the template and just passing the paths to dataset roots; then the :doc:`auto-configuration <../../explanation/additional_features/auto_configuration>` will be enabled. Based on the dataset, OpenVINO™ Training Extensions will choose the task type and template with the best accuracy/speed trade-off.

- We also can modify model template-specific parameters through the command line. To print all the available parameters the following command can be executed:
+ You can also modify model template-specific parameters through the command line. To print all the available parameters, execute the following command:

  .. code-block::

     (otx) ...$ otx train TEMPLATE params --help

- For example, that is how we can change the learning rate and the batch size for the SSD model:
+ For example, this is how you can change the learning rate and the batch size for the SSD model:

  .. code-block::

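As a sketch of the parameter override described above: the ``params`` subcommand accepts dotted parameter keys on the command line. The parameter names below are assumptions for illustration and are not verified against this version; list the real ones with ``otx train TEMPLATE params --help`` first.

```shell
# Hypothetical override of learning rate and batch size for an SSD template.
# TEMPLATE is a placeholder for a real template path; the parameter names
# (learning_parameters.*) are assumptions based on the --help listing.
otx train TEMPLATE params \
    --learning_parameters.learning_rate 0.001 \
    --learning_parameters.batch_size 8
```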
docs/source/guide/get_started/quick_start_guide/installation.rst

Lines changed: 5 additions & 6 deletions

@@ -5,7 +5,7 @@ Installation

  Prerequisites
  **************

- The current version of OpenVINO™ Training Extensions was tested under the following environment:
+ The current version of OpenVINO™ Training Extensions was tested in the following environment:

  - Ubuntu 20.04
  - Python 3.8.x

@@ -46,7 +46,7 @@ Refer to the `official installation guide <https://pytorch.org/get-started/previ

  .. note::

- Currently, only torch==1.13.1 was fully validated, torch==2.x will be supported soon. (Earlier versions are not supported due to security issues)
+ Currently, only torch==1.13.1 has been fully validated; torch==2.x will be supported in the future (previous versions are not supported due to security issues).

  .. code-block::

@@ -56,16 +56,15 @@ Refer to the `official installation guide <https://pytorch.org/get-started/previ

     # or install command for torch==1.13.1 for CUDA 11.1:
     pip install torch==1.13.1 torchvision==0.14.1 --extra-index-url https://download.pytorch.org/whl/cu111

- 4. Then, install
- OpenVINO™ Training Extensions package.
+ 4. Install the OpenVINO™ Training Extensions package from either:

- Install from a local source in development mode:
+ * A local source in development mode:

  .. code-block::

     pip install -e .[full]

- Or, you can install from PyPI:
+ * PyPI:

  .. code-block::

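Taken together, the installation steps shown in this diff can be sketched as a single shell session. This is a minimal sketch, assuming a Linux shell, a CUDA 11.1 machine, and that the editable install runs inside a clone of the training_extensions repository; the environment name ``.otx`` is arbitrary.

```shell
# Create and activate an isolated environment (venv rather than Conda;
# see the troubleshooting note in the demo tutorial).
python3 -m venv .otx
source .otx/bin/activate

# Install the validated PyTorch version for CUDA 11.1 (from the step above).
pip install torch==1.13.1 torchvision==0.14.1 --extra-index-url https://download.pytorch.org/whl/cu111

# Install OpenVINO™ Training Extensions from a local source in development mode.
pip install -e .[full]
```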
docs/source/guide/tutorials/base/demo.rst

Lines changed: 10 additions & 10 deletions

@@ -1,14 +1,14 @@

  How to run the demonstration mode with OpenVINO™ Training Extensions CLI
  ========================================================================

- In this tutorial we will show how to run :doc:`trained <how_to_train/index>` model inside OTX repository in demonstration mode.
- It allows us to apply our model on the custom data or the online footage from a web camera and see how it will work in the real-life scenario.
+ This tutorial shows how to run a :doc:`trained <how_to_train/index>` model inside the OTX repository in demonstration mode.
+ It allows you to apply the model to custom data or live footage from a web camera and see how it will work in a real-life scenario.

  .. note::

  This tutorial uses an object detection model as an example; however, for other tasks the functionality remains the same - you just need to replace the input dataset with your own.

- For visualization we use images from WGISD dataset from the :doc: `object detection tutorial <how_to_train/detection>`.
+ For visualization, use images from the WGISD dataset from the :doc:`object detection tutorial <how_to_train/detection>`.

  1. Activate the virtual environment
  created in the previous step.

@@ -20,7 +20,7 @@ created in the previous step.

  2. As an ``input`` we can use a single image,
  a folder of images, a video file, or a web camera id. We can run the demo on a PyTorch (.pth) model and an IR (.xml) model.

- The following line will run the demo on your input source, using PyTorch ``outputs/weights.pth``.
+ The following command will run the demo on your input source, using the PyTorch model ``outputs/weights.pth``.

  .. code-block::

@@ -64,8 +64,8 @@ we can run the following line:

  .. :alt: this image shows the inference results with inference time on the WGISD dataset
  .. image to be generated and added

- 6. To run a demo on a web camera, we need to know its ID.
- We can check a list of camera devices by running this command line on Linux system:
+ 6. To run a demo on a web camera, you need to know its ID.
+ You can check the list of camera devices by running the command below on a Linux system:

  .. code-block::

@@ -79,13 +79,13 @@ The output will look like this:

     Integrated Camera (usb-0000:00:1a.0-1.6):
         /dev/video0

- After that, we can use this ``/dev/video0`` as a camera ID for ``--input``.
+ After that, you can use ``/dev/video0`` as the camera ID for ``--input``.

- Congratulations! Now you have learned how to use base OpenVINO™ Training Extensions functionality. For the advanced features, please refer to the next section called :doc:`../advanced/index`.
+ Congratulations! Now you have learned how to use the base OpenVINO™ Training Extensions functionality. For the advanced features, refer to the next section, :doc:`../advanced/index`.

  ***************
  Troubleshooting
  ***************

- If you use Anaconda environment, you should consider that OpenVINO has limited `Conda support <https://docs.openvino.ai/2021.4/openvino_docs_install_guides_installing_openvino_conda.html>`_ for Python 3.6 and 3.7 versions only. But the demo package requires python 3.8.
- So please use other tools to create the environment (like ``venv`` or ``virtualenv``) and use ``pip`` as a package manager.
+ If you use an Anaconda environment, keep in mind that OpenVINO has limited `Conda support <https://docs.openvino.ai/2021.4/openvino_docs_install_guides_installing_openvino_conda.html>`_ for Python 3.6 and 3.7 only, while the demo package requires Python 3.8.
+ Therefore, use other tools to create the environment (such as ``venv`` or ``virtualenv``) and use ``pip`` as the package manager.

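As a small supplement to step 6: if ``v4l2-ctl`` is not installed, the candidate camera IDs can also be listed directly from ``/dev``. This is a minimal sketch, assuming a Linux system where cameras are exposed as ``/dev/video*`` device nodes.

```shell
# Each /dev/videoN node is a candidate camera ID for --input.
# Prints a fallback message on machines with no camera attached.
ls /dev/video* 2>/dev/null || echo "no video devices found"
```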
docs/source/guide/tutorials/base/deploy.rst

Lines changed: 7 additions & 7 deletions

@@ -1,14 +1,14 @@

  How to deploy the model and use demo in exportable code
  =======================================================

- This guide shows, how to deploy a model trained in the :doc:`previous stage <how_to_train/index>` and visualize it outside of this repository.
- As a result of this step, we'll get the exported model together with the self-contained python package and a demo application to visualize results in other environment without long installation process.
+ This guide explains how to deploy a model trained in the :doc:`previous stage <how_to_train/index>` and visualize it outside of this repository.
+ As a result of this step, you'll get the exported model together with a self-contained Python package and a demo application to visualize results in another environment without a long installation process.

  .. NOTE::
     To learn how to use demonstration mode inside this repository utilizing the OTX CLI, refer to :doc:`demo`.

- To be specific, this tutorial uses as an example the object detection ATSS model trained and exported in the previuos step and located in ``outputs/openvino``.
- But it can be runned for any task in the same manner.
+ To be specific, this tutorial uses the object detection ATSS model trained and exported in the previous step, which is located in ``outputs/openvino``.
+ Nevertheless, it can be run for any task in the same manner.

  **********
  Deployment

@@ -38,7 +38,7 @@ archive with the following files:

  - ``requirements.txt`` - minimal packages required to run the demo

- 3. We can deploy the model exported to IR,
+ 3. You can deploy the model exported to IR,
  using the command below:

  .. code-block::

@@ -52,9 +52,9 @@ using the command below:

     2023-01-20 09:30:41,737 | INFO : Deploying the model
     2023-01-20 09:30:41,753 | INFO : Deploying completed

- We also can deploy the quantized model, that was optimized with NNCF or POT, passing the path to this model in IR format to ``--load-weights`` parameter.
+ You can also deploy a quantized model that was optimized with NNCF or POT by passing the path to this model in IR format to the ``--load-weights`` parameter.

- After that, we can use the resulting ``openvino.zip`` archive in other application.
+ After that, you can use the resulting ``openvino.zip`` archive in another application.

  *************
  Demonstration

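The deployment call from step 3 can be sketched as follows. Only ``--load-weights`` is named in the text above; the template placeholder and the IR weights path are assumptions for illustration, not verified flags of this version.

```shell
# Sketch: deploy the exported IR model, producing the openvino.zip archive
# referred to above. TEMPLATE and the weights path are placeholders.
otx deploy TEMPLATE --load-weights outputs/openvino/openvino.xml
```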
docs/source/guide/tutorials/base/explain.rst

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@

  How to explain the model behavior
  =================================

- This guide shows how to explain the model behavior, which is trained through :doc:`previous stage <how_to_train/index>`.
- It allows us to show the saliency maps, which provides the locality where the model gave an attention to predict the specific category.
+ This guide explains the behavior of a model trained in the :doc:`previous stage <how_to_train/index>`.
+ It allows displaying saliency maps, which highlight the regions the model attended to when predicting a specific category.

  To be specific, this tutorial uses as an example the ATSS model trained through ``otx train`` and saved as ``outputs/weights.pth``.
