
Commit 515f3ce

Updates:
- Updated How_to_deploy READMEs for Instance Seg, Object Detect and Pose Estimation

Signed-off-by: Guillaume Hortes <[email protected]>
1 parent 571546d commit 515f3ce

3 files changed: +67 -3 lines changed


instance_segmentation/deployment/doc/tuto/How_to_deploy_yolov8_instance_segmentation.md

Lines changed: 20 additions & 0 deletions
@@ -22,6 +22,26 @@ The STMicroelectronics Ultralytics fork: [https://github.com/stm32-hotspot/ultra
These models are ready to be deployed and you can go directly to the deployment section.
The other sections below explain how to start from a model trained with Ultralytics scripts and not yet quantized.

If you just want to deploy pre-trained and quantized instance segmentation models, you can get them from the STMicroelectronics Ultralytics fork.
If you want to train, you can use the Ultralytics repository directly at [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics).

## Prerequisites

By default, the Ultralytics requirements do not install the packages required to export to ONNX or TensorFlow Lite.
When exporting for the first time, Ultralytics either uses the pre-installed packages or auto-updates to the latest versions, which can then cause compatibility issues.
To ensure compatibility, you need to install (or downgrade) TensorFlow, ONNX and ONNX Runtime according to the requirements below:
- Use a Python 3.9 environment (for the tflite_support package dependency)
- TensorFlow version between 2.8.3 and 2.15.1
- ONNX version between 1.12.0 and 1.15.0
- ONNX Runtime version between 1.13 and 1.18.1
```
pip install tensorflow==2.15.1
pip install tf_keras==2.15.1
pip install onnx==1.15.0
pip install onnxruntime==1.18.1
```
Other packages can be installed through the auto-update procedure.
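
As an optional sanity check, you can verify the installed versions from Python; the expected ranges are the ones listed above (a minimal sketch, not part of the provided scripts):

```python
# Optional sanity check of the installed versions against the ranges above.
import onnx
import onnxruntime
import tensorflow as tf

print("tensorflow :", tf.__version__)           # expected between 2.8.3 and 2.15.1
print("onnx       :", onnx.__version__)         # expected between 1.12.0 and 1.15.0
print("onnxruntime:", onnxruntime.__version__)  # expected between 1.13 and 1.18.1
```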

## Training a model with Ultralytics scripts

Train the `Yolov8n-seg` model as usual using Ultralytics scripts, or start from the pre-trained Yolov8n-seg PyTorch model.
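
For illustration only, a minimal sketch of this training step with the Ultralytics Python API; the dataset file (`coco128-seg.yaml`), epoch count and image size below are placeholder values, not settings prescribed by this tutorial:

```python
# Hypothetical example: train Yolov8n-seg with the Ultralytics Python API.
# Dataset YAML, epochs and image size are placeholder values.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # start from the pre-trained checkpoint
model.train(data="coco128-seg.yaml", epochs=100, imgsz=640)
```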

object_detection/deployment/doc/tuto/How_to_deploy_yolov8_yolov5_object_detection.md

Lines changed: 27 additions & 3 deletions
@@ -22,6 +22,26 @@ The STMicroelectronics Ultralytics fork: [https://github.com/stm32-hotspot/ultra
These models are ready to be deployed and you can go directly to the deployment section.
The other sections below explain how to start from a model trained with Ultralytics scripts and not yet quantized.

If you just want to deploy pre-trained and quantized object detection models, you can get them from the STMicroelectronics Ultralytics fork.
If you want to train, you can use the Ultralytics repository directly at [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics).

## Prerequisites

By default, the Ultralytics requirements do not install the packages required to export to ONNX or TensorFlow Lite.
When exporting for the first time, Ultralytics either uses the pre-installed packages or auto-updates to the latest versions, which can then cause compatibility issues.
To ensure compatibility, you need to install (or downgrade) TensorFlow, ONNX and ONNX Runtime according to the requirements below:
- Use a Python 3.9 environment (for the tflite_support package dependency)
- TensorFlow version between 2.8.3 and 2.15.1
- ONNX version between 1.12.0 and 1.15.0
- ONNX Runtime version between 1.13 and 1.18.1
```
pip install tensorflow==2.15.1
pip install tf_keras==2.15.1
pip install onnx==1.15.0
pip install onnxruntime==1.18.1
```
Other packages can be installed through the auto-update procedure.

## Training a model with Ultralytics scripts

Train the `Yolov8n` model as usual using Ultralytics scripts, or start from the pre-trained Yolov8n PyTorch model.
@@ -48,14 +68,14 @@ By default the exported models are:
3. A per-tensor quantized model with input/output in integer int8 format: yolov8n_saved_model/yolov8n_integer_quant.tflite.
4. A per-tensor quantized model with input/output in float format: yolov8n_saved_model/yolov8n_full_integer_quant.tflite.
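
For illustration, the export step that produces these files could look like the sketch below, using the Ultralytics Python API; only `int8=True` is taken from the export command described further down, while the `format="tflite"` argument and the model path are assumptions and other arguments are left at their defaults:

```python
# Hypothetical sketch of the export step; format and int8 flag assumed from the
# surrounding text, other arguments left at their defaults.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="tflite", int8=True)  # also writes the intermediate yolov8n_saved_model/ folder
```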

> [!TIP] It is recommended to use per-channel quantization to better maintain accuracy, so we recommend using the TensorFlow Lite converter directly to do the quantization.
Start from the generated saved model (1 above) as input for the TensorFlow Lite converter. Be sure to use the saved model generated through the export command with int8=True.
A script is provided to quantize the model; the YAML file provides the quantization information (see details below).

For deployment, the model shall be quantized with input as uint8 and output as float or int8.
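
As an illustration of this flow, here is a minimal per-channel quantization sketch with the TensorFlow Lite converter; the saved-model path, input resolution, calibration loop and output file name are placeholders, and the provided script with its YAML file remains the reference:

```python
# Minimal sketch: per-channel quantization of the exported saved model with the
# TensorFlow Lite converter. Paths, resolution and calibration data are placeholders.
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "yolov8n_saved_model"  # saved model produced by the export step (assumed path)
IMG_SIZE = 256                           # assumed input resolution

def representative_dataset():
    # Replace with real calibration images from the target dataset.
    for _ in range(100):
        yield [np.random.rand(1, IMG_SIZE, IMG_SIZE, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # weights quantized per channel by default
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8                   # uint8 input as required for deployment
converter.inference_output_type = tf.float32                # float output (tf.int8 also possible)

with open("yolov8n_quant_pc_uf.tflite", "wb") as f:
    f.write(converter.convert())
```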

> [!NOTE] Yolov5
> The initial version of yolov5n uses a different output shape. For deployment, it therefore requires adding transpose layers compared to yolov8n.
> Ultralytics introduced the yolov5nu version, which is aligned with the yolov8 output shape.
@@ -205,6 +225,10 @@ Configuration of the post-processing parameters is done through the configuratio

For models with int8 output, the application automatically detects the zero point and scale to apply for the post-processing.
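
For reference, the sketch below shows how the zero point and scale of a quantized output can be read and applied with the TFLite Python interpreter; this is not the application code, and the model path and dummy input frame are placeholders:

```python
# Sketch: read the output zero point / scale from a quantized TFLite model and
# dequantize the raw int8 output. Model path and input frame are placeholders.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov8n_quant_pc_ui.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder uint8 input frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

scale, zero_point = out["quantization"]             # quantization parameters of the output tensor
raw = interpreter.get_tensor(out["index"])          # raw int8 values
dequantized = (raw.astype(np.float32) - zero_point) * scale
```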

> [!NOTE] Yolov5
> Given that the model used is the yolov5nu for a given resolution, use the same post-processing parameters as for yolov8, as they are identical.
> In the application code, the code enabled by selecting POSTPROCESS_OD_YOLO_V5_UU is deprecated; it corresponds to the older version of yolov5n, not to the yolov5nu.
> It would require a model with uint8 input and uint8 output.
> Using the `yolo_v5u` model_type will enable POSTPROCESS_OD_YOLO_V8_UF or POSTPROCESS_OD_YOLO_V8_UI depending on the input/output format.

pose_estimation/deployment/doc/tuto/How_to_deploy_yolov8_pose_estimation.md

Lines changed: 20 additions & 0 deletions
@@ -22,6 +22,26 @@ The STMicroelectronics Ultralytics fork: [https://github.com/stm32-hotspot/ultra
These models are ready to be deployed and you can go directly to the deployment section.
The other sections below explain how to start from a model trained with Ultralytics scripts and not yet quantized.

If you just want to deploy pre-trained and quantized pose estimation models, you can get them from the STMicroelectronics Ultralytics fork.
New: if you want to train, you can now use the Ultralytics repository directly at [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics).
The latest release of Ultralytics fixed the quantization issue on pose estimation and is now equivalent to the ST fork.

## Prerequisites

By default, the Ultralytics requirements do not install the packages required to export to ONNX or TensorFlow Lite.
When exporting for the first time, Ultralytics either uses the pre-installed packages or auto-updates to the latest versions, which can then cause compatibility issues.
To ensure compatibility, you need to install (or downgrade) TensorFlow, ONNX and ONNX Runtime according to the requirements below:
- Use a Python 3.9 environment (for the tflite_support package dependency)
- TensorFlow version between 2.8.3 and 2.15.1
- ONNX version between 1.12.0 and 1.15.0
- ONNX Runtime version between 1.13 and 1.18.1
```
pip install tensorflow==2.15.0
pip install onnx==1.15.0
pip install onnxruntime==1.18.1
```
Other packages can be installed through the auto-update procedure.

## Training a model with Ultralytics scripts

Train the `Yolov8n-pose` model as usual using Ultralytics scripts, or start from the pre-trained Yolov8n-pose PyTorch model.
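
For illustration only, a minimal sketch of this training step with the Ultralytics Python API; the dataset file (`coco8-pose.yaml`), epoch count and image size are placeholder values, not settings prescribed by this tutorial:

```python
# Hypothetical example: train Yolov8n-pose with the Ultralytics Python API.
# Dataset YAML, epochs and image size are placeholder values.
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")  # start from the pre-trained checkpoint
model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```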
