audio_event_detection/deployment/README.md (+1 -1)
@@ -47,7 +47,7 @@
 In this tutorial, we will be deploying a pretrained model from the STM32 model zoo.
 Pretrained model information can be found under the [pretrained_models](../pretrained_models/) folder. Each model has its own subfolder, and each subfolder contains a copy of the configuration file used to train the model. You can copy the `preprocessing` and `feature_extraction` sections into your own configuration file to ensure you use the correct preprocessing parameters.

-In this tutorial, we will deploy a quantized [Yamnet-256](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/yamnet/ST_pretrainedmodel_public_dataset/esc10/yamnet_256_64x96_tl/yamnet_256_64x96_tl_int8.tflite) that has been trained on ESC-10 using transfer learning.
+In this tutorial, we will deploy a quantized [Yamnet-256](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/audio_event_detection/yamnet/ST_pretrainedmodel_public_dataset/esc10/yamnet_256_64x96_tl/yamnet_256_64x96_tl_int8.tflite) that has been trained on ESC-10 using transfer learning.
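For reference, the two copied sections might look like the sketch below. The parameter values are illustrative assumptions in the spirit of the Yamnet configurations, not an excerpt from the actual `yamnet_256_64x96_tl` config file; always copy the values from the pretrained model's own `config.yaml`.

```yaml
# Illustrative sketch only -- copy the real values from the pretrained
# model's own config.yaml rather than reusing these.
preprocessing:
  target_rate: 16000      # resample all audio to 16 kHz
  top_db: 60              # threshold used when trimming silence
  frame_length: 3200
  hop_length: 3200

feature_extraction:
  patch_length: 96        # spectrogram frames per model input patch
  n_mels: 64              # number of mel bins
  n_fft: 512
  hop_length: 160
  window: hann
  fmin: 125
  fmax: 7500
```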
audio_event_detection/pretrained_models/README.md (+4 -4)
@@ -3,11 +3,11 @@
 The STM32 model zoo includes several TensorFlow models for the audio event detection use case, pre-trained on custom and public datasets.
 Under each model directory, you can find the `ST_pretrainedmodel_public_dataset` directory, which contains different audio event detection models trained on various public datasets following the [training section](../src/training/README.md) in the STM32 model zoo.

-**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/).**
+**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/).**

 You can get footprints and performance information for each model by following the links below:
audio_event_detection/src/README.md (+3 -3)
@@ -323,7 +323,7 @@
 In the model zoo, we convert the input waveform to a log-mel spectrogram. This spectrogram is then cut into several patches of fixed size, and each patch is fed as input to the model. When running the model on the board, patches are computed on the fly and passed as input to the model in real time.

-Different models expect spectrograms computed with different parameters. You can refer to the `config.yaml` files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/) to find out which parameters were used for each model.
+Different models expect spectrograms computed with different parameters. You can refer to the `config.yaml` files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/) to find out which parameters were used for each model.

 The `feature_extraction` section and its attributes are shown below.
@@ -489,7 +489,7 @@ training:
 The `model` subsection is used to specify a model that is available with the Model Zoo:
 - The `name` and `input_shape` attributes must always be present.
-- Additional attributes are needed depending on the type of model. For example, an `embedding_size` attribute is required for a Yamnet model, and `n_stacks` and `version` attributes are required for a Miniresnet model. To know which models require which attributes, please consult <a href="#appendix-a">Appendix-A: Models available with the Model Zoo</a> or the [models.json](training/doc/models.json) documentation. Additionally, you can refer to the configuration files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/).
+- Additional attributes are needed depending on the type of model. For example, an `embedding_size` attribute is required for a Yamnet model, and `n_stacks` and `version` attributes are required for a Miniresnet model. To know which models require which attributes, please consult <a href="#appendix-a">Appendix-A: Models available with the Model Zoo</a> or the [models.json](training/doc/models.json) documentation. Additionally, you can refer to the configuration files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/).
 - The optional `pretrained_weights` attribute can be used to load pretrained weights into the model before it gets trained, and thus perform transfer learning.
 If set to True, pretrained weights are loaded; if set to False, the model is trained from scratch. If you want to load pretrained weights and fine-tune the entire model (instead of just performing transfer learning by retraining the last layer), set the `fine_tune` parameter to True.
 Transfer learning is covered in the "Transfer learning" section of the documentation.
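To make the attributes above concrete, a `model` subsection for a Yamnet might look like the hedged sketch below. The attribute names (`name`, `input_shape`, `embedding_size`, `pretrained_weights`, `fine_tune`) come from the text above; the values are assumptions loosely modeled on the Yamnet-256 naming, not copied from a shipped config.

```yaml
# Hypothetical example -- take the exact values from the pretrained
# model's own config.yaml.
training:
  model:
    name: yamnet
    embedding_size: 256        # Yamnet-specific attribute
    input_shape: (64, 96, 1)   # assumed: n_mels x patch_length x 1
    pretrained_weights: True   # load pretrained weights for transfer learning
    fine_tune: False           # retrain only the classification head
```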
@@ -989,7 +989,7 @@
 The `model` sections that are shown below must be added to the `training` section of the configuration file.

-When using pretrained backbones with these models, you will want to use specific preprocessing and feature extraction parameters. Please refer to the configuration files provided in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/) for these parameters.
+When using pretrained backbones with these models, you will want to use specific preprocessing and feature extraction parameters. Please refer to the configuration files provided in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/) for these parameters.

 If you are fine-tuning or training from scratch, feel free to use whichever preprocessing and feature extraction parameters you desire!
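As a companion to the Yamnet sketch earlier, a Miniresnet entry might look like the following. The `n_stacks` and `version` attribute names are taken from the text above; the values and the input shape are illustrative assumptions, not a copy of any shipped configuration.

```yaml
# Hypothetical `model` section for a Miniresnet; values are illustrative.
training:
  model:
    name: miniresnet
    n_stacks: 2               # Miniresnet-specific attribute
    version: v1               # Miniresnet-specific attribute
    input_shape: (64, 50, 1)  # assumed shape, check the model's config.yaml
    pretrained_weights: True
```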
hand_posture/deployment/README.md (+1 -1)
@@ -35,7 +35,7 @@ Detailed instructions on installation are available in this [wiki article](https
 You can use the deployment service with a pre-trained model from the [STM32 model zoo on GH](../pretrained_models/README.md) or with your own Hand Posture model. Please refer to [deployment_config.yaml](../src/config_file_examples/deployment_config.yaml), a ready-made YAML file with all the necessary sections to be filled in, or update [user_config.yaml](../src/user_config.yaml) to the same effect.

-As an example, we will show how to deploy the model [CNN2D_ST_HandPosture_8classes.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/hand_posture/CNN2D_ST_HandPosture/ST_pretrainedmodel_custom_dataset/ST_VL53L8CX_handposture_dataset/CNN2D_ST_HandPosture_8classes/) pre-trained on the [ST_VL53L8CX_handposture_dataset](../datasets/) dataset.
+As an example, we will show how to deploy the model [CNN2D_ST_HandPosture_8classes.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/hand_posture/CNN2D_ST_HandPosture/ST_pretrainedmodel_custom_dataset/ST_VL53L8CX_handposture_dataset/CNN2D_ST_HandPosture_8classes/) pre-trained on the [ST_VL53L8CX_handposture_dataset](../datasets/) dataset.

 <ul><details open><summary><a href="#2-1">2.1 Setting the Model and the Operation Mode</a></summary><a id="2-1"></a>
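A minimal sketch of what these settings might look like, assuming the model zoo's usual `general.model_path` and `operation_mode` keys; the path shown is hypothetical, and [deployment_config.yaml](../src/config_file_examples/deployment_config.yaml) remains the authoritative reference for the section layout.

```yaml
# Illustrative sketch -- see deployment_config.yaml for the full,
# authoritative set of sections and keys.
general:
  model_path: <path-to>/CNN2D_ST_HandPosture_8classes.h5   # hypothetical path
operation_mode: deployment
```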
hand_posture/pretrained_models/README.md (+2 -2)
@@ -4,8 +4,8 @@ The STM32 model zoo includes several models for hand posture recognition use cas
 - `ST_pretrainedmodel_custom_dataset` contains different hand posture models trained on ST custom datasets using our [training scripts](../src/config_file_examples/training_config.yaml).

-**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/hand_posture/).**
+**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/hand_posture/).**

 You can get footprints and performance information for each model by following the links below:
human_activity_recognition/deployment/README.md (+3 -3)
@@ -42,9 +42,9 @@ The deployment of the model is driven by a configuration file written in the YAML
 This tutorial only describes enough settings for you to be able to deploy a pretrained model from the model zoo. Please refer to the [human_activity_recognition/README.md](../src/README.md) file for more information on the configuration file.

 In this tutorial, we will be deploying a pretrained model from the STM32 model zoo.
-Pretrained models can be found in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/). Each of the pretrained models has its own subfolder, and these subfolders contain a copy of the configuration file used to train the model. Copy the `preprocessing` section from the given model to your own configuration file [user_config.yaml](../src/user_config.yaml) to ensure you have the correct preprocessing parameters for that model.
+Pretrained models can be found in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/human_activity_recognition/). Each of the pretrained models has its own subfolder, and these subfolders contain a copy of the configuration file used to train the model. Copy the `preprocessing` section from the given model to your own configuration file [user_config.yaml](../src/user_config.yaml) to ensure you have the correct preprocessing parameters for that model.

-In this tutorial, we will deploy [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5), which has been trained on mobility_v1, a proprietary dataset collected by STMicroelectronics.
+In this tutorial, we will deploy [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5), which has been trained on mobility_v1, a proprietary dataset collected by STMicroelectronics.
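As an illustration of what gets copied, a HAR `preprocessing` section might look like the sketch below. The key names here are an assumption in the spirit of the HAR configurations, so copy the actual section from the chosen model's configuration file rather than this one.

```yaml
# Hypothetical sketch -- copy the real `preprocessing` section from the
# pretrained model's own configuration file.
preprocessing:
  gravity_rot_sup: True    # assumed key: rotate and suppress gravity component
  normalization: False     # assumed key: per-window normalization disabled
```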
@@ -113,7 +113,7 @@ For more details on this section, please consult section 3.5 and section 6 of th
 </details></ul>
 <ul><details open><summary><a href="#2-4">2.4 Data Preparation and Preprocessing</a></summary><a id="2-4"></a>

-When performing Human Activity Recognition, the data is not processed sample by sample; rather, the data is first framed using different lengths, depending on how often a prediction is to be made. Here, we are using a model trained with a framing length of 24, as suggested by the name [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5) (`wl` stands for window length). The first step of the data preparation is to frame the samples. This information is provided in the `training.model` section during training, as shown below:
+When performing Human Activity Recognition, the data is not processed sample by sample; rather, the data is first framed using different lengths, depending on how often a prediction is to be made. Here, we are using a model trained with a framing length of 24, as suggested by the name [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5) (`wl` stands for window length). The first step of the data preparation is to frame the samples. This information is provided in the `training.model` section during training, as shown below:
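The `training.model` entry referenced above is not reproduced in this diff. A hedged sketch of what it might contain for a window length of 24 follows; the input shape is an assumption based on the `wl_24` naming and three accelerometer axes, so defer to the `config.yaml` shipped alongside the model.

```yaml
# Illustrative sketch -- the real values live in the config.yaml shipped
# alongside ign_wl_24.h5.
training:
  model:
    name: ign
    input_shape: (24, 3, 1)   # assumed: window length 24, 3 accelerometer axes
```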
human_activity_recognition/pretrained_models/README.md (+3 -3)
@@ -5,10 +5,10 @@ The STM32 model zoo includes several models for the human activity recognition (
 - `ST_pretrainedmodel_custom_dataset` directory contains different human activity recognition models trained on ST custom datasets.
 - `ST_pretrainedmodel_public_dataset` directory contains different human activity recognition models trained on public datasets.

-**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/).**
+**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/human_activity_recognition/).**

 You can get footprints and performance information for each model by following the links below: