Commit 6369aa4

Author: nullptr

chore: fix bugs and update docs

Squashed commits:

- 1aca485 (Wed Dec 18 09:32:46 2024 +0000): docs: update contents
- e8fa958 (Wed Dec 18 08:51:59 2024 +0000): docs: update contents
- 1904196 (Wed Dec 18 08:41:42 2024 +0000): fix: missing hook
- 2ea6677 (Wed Dec 18 08:15:38 2024 +0000): docs: update fomo
- cd21634 (Wed Dec 18 08:15:16 2024 +0000): fix: runner work dir
- 005009e (Wed Dec 18 08:13:45 2024 +0000): fix: fomo qat

1 parent: b86f659

File tree

21 files changed (+367, -516 lines)


configs/fomo/fomo_mobnetv2_0.35_x8_coco.py

Lines changed: 7 additions & 1 deletion
````diff
@@ -30,7 +30,7 @@
 
 from torch.optim import Adam, SGD
 from sscma.evaluation import FomoMetric
-from sscma.quantizer.models import FomoQuantizer
+from sscma.quantizer import FomoQuantizer
 
 # ========================Suggested optional parameters========================
 # MODEL
@@ -104,6 +104,12 @@
     skip_preprocessor=True,
 )
 
+quantizer_config = dict(
+    type=FomoQuantizer,
+    head=model['head'],
+    data_preprocessor=model['data_preprocessor'],
+)
+
 deploy = dict(
     type=FomoInfer,
     data_preprocessor=dict(
````
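The new `quantizer_config` block wires the FOMO head and data preprocessor into `FomoQuantizer` for quantization-aware training. A rough sketch of how the updated config might be exercised, following the QAT command from the FOMO tutorial touched later in this commit (paths and option values are examples, not part of the diff):

```sh
# Illustrative QAT run against the updated config; dataset-related
# --cfg-options are abbreviated, see the FOMO tutorial for the full set.
python3 tools/quantization.py \
    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
    work_dirs/fomo_mobnetv2_0.35_x8_coco/epoch_50.pth \
    --cfg-options \
    data_root=$(pwd)/datasets/coco_mask/mask/ \
    num_classes=2 \
    epochs=5 \
    height=192 \
    width=192
```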
Lines changed: 67 additions & 0 deletions (new file)

# Deploying SSCMA Models on Grove Vision AI V2

This example is a deployment tutorial for models included in [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) on the Grove Vision AI V2 module.

To deploy SSCMA models on the Grove Vision AI V2, you first need to convert the models into quantized TensorFlow Lite format and then convert them with the Vela converter; the converted models can then be deployed to the Grove Vision AI V2 module. SSCMA has added this feature to the export tool: add the parameter `--format vela` during export to produce Vela models, for example as shown below. Once successfully exported, the remaining steps are consistent with [SSCMA - Model Deployment](overview).
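For example, a minimal export sketch (the config, checkpoint path, and any extra `--cfg-options` are illustrative; see the training tutorials for the full commands):

```bash
# Illustrative: export a trained FOMO checkpoint as a Vela-converted TFLite
# model for Grove Vision AI V2 using SSCMA's export tool.
python3 tools/export.py \
    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
    work_dirs/fomo_mobnetv2_0.35_x8_coco/epoch_50.pth \
    --format vela
```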
Additionally, you can build the firmware manually to accommodate the model.

## Building the Firmware in a Linux Environment

The following steps have been tested on an Ubuntu 20.04 PC.

### Install Dependencies

```bash
sudo apt install make
```

### Download the Arm GNU Toolchain

```bash
cd ~
wget https://developer.arm.com/-/media/Files/downloads/gnu/13.2.rel1/binrel/arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz
```

### Extract the File

```bash
tar -xvf arm-gnu-toolchain-13.2.rel1-x86_64-arm-none-eabi.tar.xz
```

### Add to PATH

```bash
export PATH="$HOME/arm-gnu-toolchain-13.2.Rel1-x86_64-arm-none-eabi/bin/:$PATH"
```

### Clone the Repository and Enter the Seeed_Grove_Vision_AI_Module_V2 Folder

```bash
git clone --recursive https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2.git
cd Seeed_Grove_Vision_AI_Module_V2
```

### Compile the Firmware

```bash
cd EPII_CM55M_APP_S
make clean
make
```

The output ELF file is located at `obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf` under `EPII_CM55M_APP_S`.
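Before generating the image, you can optionally confirm the build output (an illustrative check, run from `EPII_CM55M_APP_S`):

```bash
# Confirm the ELF exists and print its section sizes with the Arm GNU toolchain.
ls -lh obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf
arm-none-eabi-size obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf
```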
### Generate the Firmware Image File

```bash
cd ../we2_image_gen_local/
cp ../EPII_CM55M_APP_S/obj_epii_evb_icv30_bdv10/gnu_epii_evb_WLCSP65/EPII_CM55M_gnu_epii_evb_WLCSP65_s.elf input_case1_secboot/
./we2_local_image_gen project_case1_blp_wlcsp.json
```

The output firmware image is located at `./output_case1_sec_wlcsp/output.img`.

### Flashing the Firmware

You can flash the firmware with the SSCMA Web Toolkit or the Grove Vision AI V2's USB-to-serial tool, or directly over the Xmodem protocol.
Lines changed: 150 additions & 0 deletions (new file)

# Deploying Models on Espressif Chips

This example is a tutorial for deploying models included in [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) on Espressif chips, with the deployment based on [ESP-IDF](https://github.com/espressif/esp-idf) and [TensorFlow Lite Micro](https://github.com/tensorflow/tflite-micro).
## Prerequisites

### Hardware

- A Linux or macOS computer

- An ESP32-S3 development board with a camera (e.g., [Seeed Studio XIAO](https://www.seeedstudio.com/XIAO-ESP32S3-Sense-p-5639.html))

- A USB data cable

### Installing ESP-IDF

Deploying models included in [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) on the ESP32 requires ESP-IDF `5.1.x`. Please refer to the [ESP-IDF Get Started Guide](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html) to install and configure the toolchain and ESP-IDF.

After successfully installing ESP-IDF, confirm that the [IDF environment variables are set up](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html#step-4-set-up-the-environment-variables); a quick check is sketched after this list:

- The `IDF_PATH` environment variable is set.

- Tools such as `idf.py` and the Xtensa-ESP32 toolchain (e.g., `xtensa-esp32-elf-gcc`) are included in `$PATH`.
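A quick way to check both points, assuming ESP-IDF was installed to `~/esp/esp-idf` (the path is an assumption; adjust it to your installation):

```sh
# Load the ESP-IDF environment, then verify the variables and tools it provides.
. "$HOME/esp/esp-idf/export.sh"   # install path is an assumption
echo "$IDF_PATH"                  # should print the ESP-IDF directory
command -v idf.py xtensa-esp32-elf-gcc
idf.py --version                  # should report an ESP-IDF v5.1.x release
```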
:::tip

We do not recommend configuring ESP-IDF inside a virtual environment. You can use the following command to exit a virtual environment (repeat it to exit nested virtual environments):

```sh
conda deactivate
```

Additionally, if your ESP-IDF is not configured inside a virtual environment, any ESP-IDF related operations, such as calls to `idf.py`, should be performed outside of the virtual environment.

:::

### Obtaining Examples and Submodules

**Navigate to the root directory of the [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) project** and run the following commands to obtain the examples and their submodules.

```sh
git clone https://github.com/Seeed-Studio/sscma-example-esp32 -b 1.0.0 examples/esp32 && \
pushd examples/esp32 && \
git submodule init && \
git submodule update && \
popd
```

:::warning

You need to complete the installation and configuration of [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) in advance. If you have not installed [SSCMA](https://github.com/Seeed-Studio/ModelAssistant), please refer to the [SSCMA Installation Guide](../../introduction/installation).

:::

## Preparing the Model

Before starting to compile and deploy, you need to prepare the model you want to deploy according to your actual application scenario. This may involve steps such as selecting a model or neural network, customizing a dataset, and exporting or converting the model.

To help you understand the process more systematically, we have written complete documents for the different application scenarios: [SSCMA - Model Training and Export](../training/overview.md).
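As a concrete sketch only (the command mirrors the FOMO tutorial; the model, checkpoint path, and options are examples and depend on your scenario), exporting a trained checkpoint to a quantized TFLite model looks roughly like this:

```sh
# Illustrative export of a trained FOMO checkpoint to an int8 TFLite model.
python3 tools/export.py \
    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
    work_dirs/fomo_mobnetv2_0.35_x8_coco/epoch_50.pth \
    --cfg-options \
    data_root=$(pwd)/datasets/coco_mask/mask/ \
    num_classes=2
```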
:::warning

Before [compiling and deploying](#compiling-and-deploying), you need to prepare the corresponding model in advance.

:::

## Compiling and Deploying

### Compiling Routines

1. Navigate to the root directory of the [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) project and run the following command to enter the directory of the example you want to build under `examples` (replace `<examples>` with the example name):

   ```sh
   cd examples/<examples>
   ```

2. Set `IDF_TARGET` to `esp32s3`:

   ```sh
   idf.py set-target esp32s3
   ```

3. Compile the routine:

   ```sh
   idf.py build
   ```
### Deploying Routines

1. Connect the ESP32 MCU to the computer and determine the serial port path of the ESP32. On Linux, you can use the following command to check the currently available serial ports (for newly connected ESP32 devices on Linux, the serial port path is generally `/dev/ttyUSB0`):

   ```sh
   lsusb -t && \
   ls /dev | grep tty
   ```

2. Flash the firmware (replace `<TARGET_SERIAL_PORT>` with the serial port path of the ESP32):

   ```sh
   idf.py --port <TARGET_SERIAL_PORT> flash
   ```

3. Monitor the serial output and wait for the MCU to restart (replace `<TARGET_SERIAL_PORT>` with the serial port path of the ESP32):

   ```sh
   idf.py --port <TARGET_SERIAL_PORT> monitor
   ```

:::tip

The two commands for flashing the firmware and monitoring the serial output can be combined:

```sh
idf.py --port <TARGET_SERIAL_PORT> flash monitor
```

Use `Ctrl+]` to exit the serial output monitoring interface.

:::
### Performance Overview

Measured on different chips, the performance of SSCMA related models is summarized in the table below.

| Target | Model | Dataset | Input Resolution | Peak RAM | Inference Time | F1 Score | Link |
|--|--|--|--|--|--|--|--|
| ESP32-S3 | Meter | [Custom Meter](https://files.seeedstudio.com/sscma/datasets/meter.zip) | 112x112 (RGB) | 320 KB | 380 ms | 97% | [pfld_meter_int8.tflite](https://github.com/Seeed-Studio/ModelAssistant/releases/tag/model_zoo) |
| ESP32-S3 | FOMO | [COCO MASK](https://files.seeedstudio.com/sscma/datasets/coco_mask.zip) | 96x96 (GRAY) | 244 KB | 150 ms | 99.5% | [fomo_mask_int8.tflite](https://github.com/Seeed-Studio/ModelAssistant/releases/tag/model_zoo) |

:::tip

For more models, please visit the [SSCMA Model Zoo](https://github.com/Seeed-Studio/sscma-model-zoo).

:::
## Contributing

- If you find any issues in these examples or wish to submit an enhancement request, please use [GitHub Issues](https://github.com/Seeed-Studio/ModelAssistant).

- For ESP-IDF related issues, please refer to [ESP-IDF](https://github.com/espressif/esp-idf).

- For TensorFlow Lite Micro related information, please refer to [TFLite-Micro](https://github.com/tensorflow/tflite-micro).

- For [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) related information, please refer to [SSCMA](https://github.com/Seeed-Studio/ModelAssistant).

## License

These examples use ESP-IDF, which is released under the [Apache 2.0 License](https://github.com/espressif/esp-idf/blob/master/LICENSE).

TensorFlow library code and third-party code include their own licenses, which are explained in [TFLite-Micro](https://github.com/tensorflow/tflite-micro).

docs/tutorials/training/fomo.md

Lines changed: 23 additions & 18 deletions
````diff
@@ -11,15 +11,16 @@ Before training the FOMO model, we need to prepare the dataset. Here, we take th
 SSCMA offers various FOMO model configurations, and you can choose the appropriate model for training based on your needs.
 
 ```sh
-fomo_mobnetv2_0.35_abl_coco.py
+fomo_mobnetv2_0.1_x8_coco.py
+fomo_mobnetv2_0.35_x8_coco.py
 fomo_mobnetv2_1_x16_coco.py
 ```
 
-Here, we take `fomo_mobnetv2_0.35_abl_coco.py` as an example to show how to use SSCMA for FOMO model training.
+Here, we take `fomo_mobnetv2_0.35_x8_coco.py` as an example to show how to use SSCMA for FOMO model training.
 
 ```sh
 python3 tools/train.py \
-    configs/fomo/fomo_mobnetv2_0.35_abl_coco.py \
+    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
     --cfg-options \
     data_root=$(pwd)/datasets/coco_mask/mask/ \
     num_classes=2 \
@@ -32,7 +33,7 @@ python3 tools/train.py \
     width=192
 ```
 
-- `configs/fomo/fomo_mobnetv2_0.35_abl_coco.py`: Specifies the configuration file, defining the model and training settings.
+- `configs/fomo/fomo_mobnetv2_0.35_x8_coco.py`: Specifies the configuration file, defining the model and training settings.
 - `--cfg-options`: Used to specify additional configuration options.
 - `data_root`: Sets the root directory of the dataset.
 - `num_classes`: Specifies the number of categories the model needs to recognize.
@@ -42,12 +43,12 @@ python3 tools/train.py \
 - `val_data`: Specifies the prefix path for validation images.
 - `epochs`: Sets the maximum number of training epochs.
 
-After the training is complete, you can find the trained model in the `work_dirs/fomo_mobnetv2_0.35_abl_coco` directory. Before looking for the model, we suggest focusing on the training results first. Below is an analysis of the results and some suggestions for improvement.
+After the training is complete, you can find the trained model in the `work_dirs/fomo_mobnetv2_0.35_x8_coco` directory. Before looking for the model, we suggest focusing on the training results first. Below is an analysis of the results and some suggestions for improvement.
 
 :::details
 
 ```sh
-12/16 04:32:12 - mmengine - INFO - Epoch(val) [100][6/6] P: 0.0000 R: 0.0000 F1: 0.0000 data_time: 0.0664 time: 0.0796
+12/18 01:47:05 - mmengine - INFO - Epoch(val) [50][6/6] P: 0.2545 R: 0.4610 F1: 0.3279 data_time: 0.0644 time: 0.0798
 ```
 
 The F1 score combines the precision and recall metrics, aiming to provide a single number to measure the overall performance of the model. The F1 score ranges from 0 to 1, with higher values indicating higher precision and recall, and better performance. The F1 score reaches its maximum value when the precision and recall of the model are equal.
@@ -64,8 +65,8 @@ Here, we take exporting the TFLite model as an example. You can use the followin
 
 ```sh
 python3 tools/export.py \
-    configs/fomo/fomo_mobnetv2_0.35_abl_coco.py \
-    work_dirs/epoch_50.pth \
+    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
+    work_dirs/fomo_mobnetv2_0.35_x8_coco/epoch_50.pth \
     --cfg-options \
     data_root=$(pwd)/datasets/coco_mask/mask/ \
     num_classes=2 \
@@ -115,15 +116,17 @@ After exporting the model, you can use the following command to verify its perfo
 
 ```sh
 python3 tools/test.py \
-    configs/fomo/fomo_mobnetv2_0.35_abl_coco.py \
-    work_dirs/epoch_50_int8.tflite \
+    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
+    work_dirs/fomo_mobnetv2_0.35_x8_coco/epoch_50_int8.tflite \
     --cfg-options \
     data_root=$(pwd)/datasets/coco_mask/mask/ \
     num_classes=2 \
     train_ann=train/_annotations.coco.json \
     val_ann=valid/_annotations.coco.json \
     train_data=train/ \
-    val_data=valid/
+    val_data=valid/ \
+    height=192 \
+    width=192
 ```
 
 ### QAT
@@ -132,31 +135,33 @@ QAT (Quantization-Aware Training) is a method that simulates quantization operat
 
 ```sh
 python3 tools/quantization.py \
-    configs/fomo/fomo_mobnetv2_0.35_abl_coco.py \
-    work_dirs/epoch_50.pth \
+    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
+    work_dirs/fomo_mobnetv2_0.35_x8_coco/epoch_50.pth \
     --cfg-options \
     data_root=$(pwd)/datasets/coco_mask/mask/ \
     num_classes=2 \
     train_ann=train/_annotations.coco.json \
     val_ann=valid/_annotations.coco.json \
     train_data=train/ \
     val_data=valid/ \
-    epochs=50 \
+    epochs=5 \
    height=192 \
    width=192
 ```
 
-After QAT training, the quantized model will be automatically exported, and its storage path will be `out/qat_model_test.tflite`. You can use the following command to verify it:
+After QAT training, the quantized model will be automatically exported. You can use the following command to verify it:
 
 ```sh
 python3 tools/test.py \
-    configs/fomo/fomo_mobnetv2_0.35_abl_coco.py \
-    out/qat_model_test.tflite \
+    configs/fomo/fomo_mobnetv2_0.35_x8_coco.py \
+    work_dirs/fomo_mobnetv2_0.35_x8_coco/qat/qat_model_int8.tflite \
     --cfg-options \
     data_root=$(pwd)/datasets/coco_mask/mask/ \
     num_classes=2 \
     train_ann=train/_annotations.coco.json \
     val_ann=valid/_annotations.coco.json \
     train_data=train/ \
-    val_data=valid/
+    val_data=valid/ \
+    height=192 \
+    width=192
 ```
````
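As a side note on the updated validation log in the hunk above (an illustration added here, not part of the commit): F1 is the harmonic mean of precision and recall, and the reported values are consistent with that definition:

$$
F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.2545 \times 0.4610}{0.2545 + 0.4610} \approx 0.3279
$$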

docs/tutorials/training/pfld.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -66,7 +66,7 @@ Here, we take exporting the TFLite model as an example. You can use the followin
 ```sh
 python3 tools/export.py \
     configs/pfld/pfld_mbv2_1000e.py \
-    work_dirs/epoch_100.pth \
+    work_dirs/pfld_mbv2_1000e/epoch_100.pth \
     --cfg-options \
     data_root=$(pwd)/datasets/meter/ \
     val_workers=2 \
@@ -112,7 +112,7 @@ After exporting the model, you can use the following command to verify its perfo
 ```sh
 python3 tools/test.py \
     configs/pfld/pfld_mbv2_1000e.py \
-    work_dirs/epoch_100_int8.tflite \
+    work_dirs/pfld_mbv2_1000e/epoch_100_int8.tflite \
     --cfg-options \
     data_root=$(pwd)/datasets/meter/ \
     val_workers=2
@@ -125,19 +125,19 @@ QAT (Quantization-Aware Training) is a method that simulates quantization operat
 ```sh
 python3 tools/quantization.py \
     configs/pfld/pfld_mbv2_1000e.py \
-    work_dirs/epoch_100.pth \
+    work_dirs/pfld_mbv2_1000e/epoch_100.pth \
     --cfg-options \
     data_root=$(pwd)/datasets/meter/ \
-    epochs=100 \
+    epochs=5 \
     val_workers=2
 ```
 
-After QAT training, the quantized model will be automatically exported, and its storage path will be `out/qat_model_test.tflite`. You can use the following command to verify it:
+After QAT training, the quantized model will be automatically exported. You can use the following command to verify it:
 
 ```sh
 python3 tools/test.py \
     configs/pfld/pfld_mbv2_1000e.py \
-    out/qat_model_test.tflite \
+    work_dirs/rtmdet_nano_8xb32_300e_coco/qat/qat_model_int8.tflite \
     --cfg-options \
     data_root=$(pwd)/datasets/meter/ \
     val_workers=2
````
