# How to evaluate my model on STM32N6 target?

Evaluating a model consists of running several inferences on a representative test set in order to obtain a quality metric of the model, such as the accuracy, the mAP, the OKS or any other metric depending on the use case. This evaluation can be done on:
 - "host": using TensorFlow or ONNX Runtime, executed on the host machine (see the sketch below).
 - "stedgeai_host": using a DLL containing an emulated implementation of the STM32 kernels, executed on the host machine.
 - "stedgeai_n6": using a generic test application, executed on the STM32N6 target.
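For reference, a "host" evaluation conceptually boils down to the loop sketched below: run the model with ONNX Runtime (or TensorFlow) on every sample of the test set and accumulate a metric. This is only an illustrative sketch for a classification model; the model path and the `load_test_set()` helper are hypothetical, and the model zoo evaluation scripts handle all of this (including the pre-processing) for you.

```python
# Illustrative "host" evaluation loop for a classification model (hypothetical path/helper).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("path/to/model_qdq_int8.onnx")  # hypothetical model path
input_name = session.get_inputs()[0].name

samples = load_test_set()  # hypothetical helper returning [(features, label), ...]
correct = 0
for features, label in samples:
    # The dtype and layout of `features` must match the model input (e.g. float32, NCHW).
    outputs = session.run(None, {input_name: features[np.newaxis, ...]})
    correct += int(np.argmax(outputs[0]) == label)

print(f"accuracy: {correct / len(samples):.3f}")
```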
## Environment setup:
The evaluation on the target requires the installation and configuration of ST Edge AI Core, which you can find here:
- [ST Edge AI Core](https://www.st.com/en/development-tools/stedgeai-core.html)
- [STM32CubeIDE](https://www.st.com/en/development-tools/stm32cubeide.html)

A few configurations are required; please find below an example following a standard installation of STEdgeAI_Core v2.0.
- The 'C:/ST/STEdgeAI_Core/2.0/scripts/N6_scripts/config_n6.json' file should be updated to configure the N6 loader.
```json
{
    // The two lines below are only used if you call n6_loader.py ALONE (memdump is optional and will be the parent dir of network.c by default)
    "network.c": "C:/ST/STEdgeAI_Core/2.0/scripts/N6_scripts/st_ai_output/network.c",
    //"memdump_path": "C:/Users/foobar/CODE/stm.ai/stm32ai_output",
    // Location of the "validation" project + build config name to be built (if applicable)
    // If using the provided project, valid build_conf names are "N6-DK" (CR5 boards), "N6-DK-legacy" (older-than-CR5 boards); "N6-Nucleo" can also be used for the IAR project.
    "project_path": "C:/ST/STEdgeAI_Core/2.0/Projects/STM32N6570-DK/Applications/NPU_Validation",
    "project_build_conf": "N6-DK",
    // Skip programming the weights to save time (but lose accuracy) -- useful for performance tests
    "skip_external_flash_programming": false,
    "skip_ram_data_programming": false
}
```
- The 'C:/ST/STEdgeAI_Core/2.0/scripts/N6_scripts/config.json' file should be updated to indicate the paths to find the external tools.
```json
{
    // Set Compiler_type to either gcc or iar
    "compiler_type": "iar",
    // Set Compiler_binary_path to your bin/ directory where IAR or GCC can be found
    // If "Compiler_type" == gcc, then gdb_server_path shall point to where ST-LINK_gdbserver.exe can be found
    "gdb_server_path": "C:/ST/STM32CubeIDE_1.17.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.externaltools.stlink-gdb-server.win32_2.2.0.202409170845/tools/bin/",
    "gcc_binary_path": "C:/ST/STM32CubeIDE_1.17.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.externaltools.gnu-tools-for-stm32.12.3.rel1.win32_1.1.0.202410251130/tools/bin/",
    "iar_binary_path": "C:/Program Files/IAR Systems/Embedded Workbench 9.1/common/bin/",
    // Full path to arm-none-eabi-objcopy.exe
    "objcopy_binary_path": "C:/ST/STM32CubeIDE_1.17.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.externaltools.gnu-tools-for-stm32.12.3.rel1.win32_1.1.0.202410251130/tools/bin/arm-none-eabi-objcopy.exe",
    // STM32CubeProgrammer CLI binary path
    "cubeProgrammerCLI_binary_path": "C:/Program Files/STMicroelectronics/STM32Cube/STM32CubeProgrammer/bin/STM32_Programmer_CLI.exe",
    "cubeide_path": "C:/ST/STM32CubeIDE_1.17.0/STM32CubeIDE"
}
```
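Since a wrong path in either of these files is a common source of failure, a small helper like the one below can be used to check that every path they reference exists before launching an evaluation. This is just a convenience sketch, not part of the ST tooling: it strips the `//` comment lines before parsing (the files are not strict JSON), and the install locations listed are the example ones from above, which may differ on your machine.

```python
# Sketch: check that every path referenced in the N6 config files exists (paths are examples).
import json
import re
from pathlib import Path

CONFIG_FILES = [
    "C:/ST/STEdgeAI_Core/2.0/scripts/N6_scripts/config_n6.json",
    "C:/ST/STEdgeAI_Core/2.0/scripts/N6_scripts/config.json",
]

for cfg in CONFIG_FILES:
    text = Path(cfg).read_text()
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)  # drop // comment lines
    for key, value in json.loads(text).items():
        if isinstance(value, str) and ("/" in value or "\\" in value):
            status = "OK     " if Path(value).exists() else "MISSING"
            print(f"{status} {key}: {value}")
```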
Please refer to the [ST Edge AI Core getting started guide on how to evaluate a model on an STM32N6 board](https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_getting_started.html#ref_tools_config_n6l_json) for more information on how it works and on the setup.
## Before launching the stm32ai_eval_on_target.py script:
The script used for the evaluation on target takes a configuration file as parameter. The one to use and to adapt is [evaluation_on_target_config.yaml](../../../src/config_file_examples/evaluation_on_target_config.yaml) in the config_file_examples folder.
Below are the main parameters to define.
In the general section:
* The `model_path`: path to the model you want to evaluate.
* The `config_path`: YAML configuration of the model you want to evaluate, as it contains information about the model itself, the pre-processing and other required settings.
* The `dataset_path`: path to the representative dataset for the evaluation. When running on the target, it should be of reasonable size because the inputs are transferred from the host to the target, so the evaluation could take time (see the sketch right after this list for one way to build a small subset).
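As mentioned for the dataset size, one simple way to keep the on-target evaluation short is to copy a small random subset of the test set into a dedicated folder and point the dataset parameter of the YAML file to it. The sketch below is one possible way to do this; the folder names and the sample count are arbitrary examples.

```python
# Sketch: build a small representative subset for on-target evaluation (paths are examples).
import random
import shutil
from pathlib import Path

SRC = Path("datasets/full_test_set")     # hypothetical full test set
DST = Path("datasets/small_test_set")    # smaller folder to reference in the YAML file
N_SAMPLES = 100                          # keep the host-to-target transfers reasonably short

files = [p for p in SRC.rglob("*") if p.is_file()]
random.seed(0)  # reproducible subset
for src_file in random.sample(files, min(N_SAMPLES, len(files))):
    # Keep the original sub-folder structure (often used to encode the class label).
    target = DST / src_file.relative_to(SRC)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, target)
```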
In the parameters section:
* `profile`: This relates to the user_neuralart.json file that contains various profiles defining the memory mapping and the compiler options. This is for advanced users and `profile_O3` is a very good starting point. More information in [this article](https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_neural_art_compiler.html#ref_built_in_tool_profiles).
* `input_type`: This is the input type provided by the pipeline, before entering the model. In this use case, it can be set to int8 as this is the expected audio input format (illustrated in the sketch below).
* `output_type`: This is the output type expected by the post-processing, after model execution. It can be set to float32 here.
* `input_chpos`: This refers to the input data layout (NHWC vs NCHW). As this is an ONNX model and we do not want to change the layout, it can be set to chfirst.
* `output_chpos`: This refers to the output data layout (NHWC vs NCHW). Here as well, for consistency, it should be set to chfirst.
* `evaluation_target`: This sets the type of evaluation (host, STM32 emulated on host, or on the STM32N6 hardware).

Please refer to the online documentation on the [I/O data type or layout changes](https://stedgeai-dc.st.com/assets/embedded-docs/how_to_change_io_data_type_format.html) for more information.
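To make the `input_type`, `output_type` and `*_chpos` settings above more concrete, the sketch below shows what the host-side pipeline conceptually does in this configuration: quantize a float32 feature patch to int8, reorder it to channel-first, and consume float32 scores at the output. The scale, zero-point and patch shape are purely illustrative; the real values come from the quantized model.

```python
# Sketch: what "int8 / chfirst" input and "float32" output mean on the pipeline side.
import numpy as np

# Illustrative quantization parameters -- the real ones are embedded in the quantized model.
INPUT_SCALE, INPUT_ZERO_POINT = 0.0235, -12

def to_model_input(patch_hwc: np.ndarray) -> np.ndarray:
    """Quantize a float32 (H, W, C) feature patch to int8 and reorder it to (1, C, H, W)."""
    q = np.clip(np.round(patch_hwc / INPUT_SCALE + INPUT_ZERO_POINT), -128, 127).astype(np.int8)
    return np.transpose(q, (2, 0, 1))[np.newaxis, ...]  # HWC -> CHW ("chfirst"), add batch dim

patch = np.random.rand(64, 96, 1).astype(np.float32)    # hypothetical 64x96x1 spectrogram patch
print(to_model_input(patch).shape)                      # (1, 1, 64, 96)

# With output_type set to float32, the post-processing (softmax, argmax, ...) can consume
# the model output directly, with no dequantization step on the application side.
```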
In the Tools section:
* `path_to_stedgeai`: This is the path to the ST Edge AI Core executable.
* `path_to_loader`: This is the path to the loader in charge of initializing the memories and loading the model and the firmware on the N6 device.

```yaml
general:
  model_path: ../../stm32ai-modelzoo/audio_event_detection/yamnet/ST_pretrainedmodel_public_dataset/esc10/yamnet_1024_64x96_tl/yamnet_1024_64x96_tl_qdq_int8.onnx
  config_path: ../../stm32ai-modelzoo/audio_event_detection/yamnet/ST_pretrainedmodel_public_dataset/esc10/yamnet_1024_64x96_tl/yamnet_1024_64x96_tl.yaml
  training_audio_path: ../../stm32ai-modelzoo-services/audio_event_detection/datasets/ESC-50-master/audio
  training_csv_path: ../../stm32ai-modelzoo-services/audio_event_detection/datasets/ESC-50-master/meta/esc50.csv

parameters:
  profile: profile_O3
  input_type: int8                  # int8 / uint8 / float32
  output_type: float32              # int8 / uint8 / float32
  input_chpos: chfirst              # chlast / chfirst
  output_chpos: chfirst             # chlast / chfirst
  evaluation_target: stedgeai_n6    # host, stedgeai_host, stedgeai_n6

tools:
  stedgeai:
    path_to_stedgeai: C:/ST/STEdgeAI_Core/2.0/Utilities/windows/stedgeai.exe
    path_to_loader: C:/ST/STEdgeAI_Core/2.0/scripts/N6_scripts/n6_loader.py

hydra:
  verbose: false
  job_logging:
    level: ERROR
  output_subdir: null
  run:
    dir: ./experiments_outputs
```
## Run the script:
Edit the evaluation_on_target_config.yaml file as explained above, then open a command prompt (make sure you are in the application folder containing the stm32ai_eval_on_target.py script). Finally, run the command:

```powershell
python stm32ai_eval_on_target.py --config-path ./src/config_file_examples --config-name evaluation_on_target_config.yaml
```
You can also use any other .yaml file with the command below:
```powershell
python stm32ai_eval_on_target.py --config-path=path_to_the_folder_of_the_yaml --config-name=name_of_your_yaml_file
```
## Script outcome:
Near the end of the log, you will see the patch and clip accuracy results for the evaluation of your model, based on the dataset you used.
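For context, the patch accuracy is computed on individual spectrogram patches, while the clip accuracy aggregates the predictions of all patches belonging to the same audio clip before comparing against the ground truth. The snippet below illustrates one common aggregation scheme (averaging the patch scores); it is a conceptual example with made-up numbers, not necessarily the exact rule implemented by the evaluation script.

```python
# Sketch: aggregating patch-level predictions into a clip-level prediction (illustrative data).
import numpy as np

# Hypothetical patch scores for one clip: 4 patches x 3 classes.
patch_scores = np.array([
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],
    [0.8, 0.1, 0.1],
    [0.6, 0.3, 0.1],
])

patch_predictions = patch_scores.argmax(axis=1)        # per-patch decisions -> patch accuracy
clip_prediction = patch_scores.mean(axis=0).argmax()   # averaged scores -> clip-level decision

print("patch predictions:", patch_predictions.tolist())  # [0, 1, 0, 0]
print("clip prediction:", int(clip_prediction))          # 0
```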