Commit 49741c7

Fix broken links in README files and a code typo.
Signed-off-by: khaoula boutiche <[email protected]>
Parent: 9bb116e

File tree: 29 files changed, +101 -94 lines

README.md

Lines changed: 34 additions & 34 deletions
Large diffs are not rendered by default.

audio_event_detection/deployment/README.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ This tutorial only describes enough settings for you to be able to deploy a pret
 In this tutorial, we will be deploying a pretrained model from the STM32 model zoo.
 Pretrained models informations can be found under the [pretrained_models](../pretrained_models/) folder. Each model has its own subfolder. Each of these subfolders has a copy of the configuration file used to train the model. You can copy the `preprocessing` and `feature_extraction` sections to your own configuration file, to ensure you have the correct preprocessing parameters.
-In this tutorial, we will deploy a quantized [Yamnet-256](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/yamnet/ST_pretrainedmodel_public_dataset/esc10/yamnet_256_64x96_tl/yamnet_256_64x96_tl_int8.tflite) that has been trained on ESC-10 using transfer learning.
+In this tutorial, we will deploy a quantized [Yamnet-256](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/audio_event_detection/yamnet/ST_pretrainedmodel_public_dataset/esc10/yamnet_256_64x96_tl/yamnet_256_64x96_tl_int8.tflite) that has been trained on ESC-10 using transfer learning.
 <ul><details open><summary><a href="#2-1">2.1 Operation mode</a></summary><a id="2-1"></a>

audio_event_detection/pretrained_models/README.md

Lines changed: 4 additions & 4 deletions
@@ -3,11 +3,11 @@
 The STM32 model zoo includes several Tensorflow models for the audio event detection use case pre-trained on custom and public datasets.
 Under each model directory, you can find the `ST_pretrainedmodel_public_dataset` directory, which contains different audio event detection models trained on various public datasets following the [training section](../src/training/README.md) in STM32 model zoo.
-**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/).**
+**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/).**
 You can get footprints and performance information for each model following links below:
-- [Mini Resnet v1](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/miniresnet/README.md)
-- [Mini Resnet v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/miniresnetv2/README.md)
-- [Yamnet](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/yamnet/README.md)
+- [Mini Resnet v1](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/audio_event_detection/miniresnet/README.md)
+- [Mini Resnet v2](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/audio_event_detection/miniresnetv2/README.md)
+- [Yamnet](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/audio_event_detection/yamnet/README.md)
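The link fix repeated throughout this commit follows one rule: a bare repository path such as `https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/` is not a valid github.com web URL; the web UI needs `tree/<branch>` inserted after the repository name for a directory and `blob/<branch>` for a file. A minimal sketch of that rewrite (the helper name, the trailing-slash heuristic, and the `master` default are illustrative assumptions, not part of the commit):

```python
# Hypothetical helper illustrating the URL fix applied throughout this
# commit: github.com repository links need "tree/<branch>" (directory)
# or "blob/<branch>" (file) inserted after the repository name.
REPO = "https://github.com/STMicroelectronics/stm32ai-modelzoo/"

def fix_modelzoo_url(url: str, branch: str = "master") -> str:
    if not url.startswith(REPO):
        return url
    tail = url[len(REPO):]
    if tail.startswith(("blob/", "tree/")):
        return url  # already a valid web URL, leave untouched
    # Heuristic: paths ending in '/' are directories, others are files.
    kind = "tree" if url.endswith("/") else "blob"
    return f"{REPO}{kind}/{branch}/{tail}"
```

Every changed link below matches this pattern: directory links (the model zoo folders) gain `tree/master`, file links (READMEs, `.tflite`, `.h5`) gain `blob/master`.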

audio_event_detection/src/README.md

Lines changed: 3 additions & 3 deletions
@@ -323,7 +323,7 @@ In a typical AED pipeline, once the temporal domain preprocessing has been perfo
 In the model zoo, we convert the input waveform to a log-mel spectrogram. This spectrogram is then cut into several patches of fixed size, and each patch is fed as input to the model. When running the model on the board, patches are computed on the fly, and passed as input to the model in realtime.
-Different models expect spectrograms computed with different parameters. You can reference the several `config.yaml` files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/) to find out which parameters were used for each model.
+Different models expect spectrograms computed with different parameters. You can reference the several `config.yaml` files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/) to find out which parameters were used for each model.
 The 'feature_extraction' section and its attributes is shown below.

@@ -489,7 +489,7 @@ training:
 The `model` subsection is used to specify a model that is available with the Model Zoo:
 - The `name` and `input_shape` attributes must always be present.
-- Additional attributes are needed depending on the type of model. For example, an `embedding_size` attribute is required for a Yamnet model and `n_stacks` and `version` attributes are required for a Miniresnet model. To know which models require which attributes, please consult <a href="#appendix-a">Appendix-A: Models available with the Model Zoo</a>, or the [models.json](training/doc/models.json) documentation. Additionally, you can reference the configuration files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/)
+- Additional attributes are needed depending on the type of model. For example, an `embedding_size` attribute is required for a Yamnet model and `n_stacks` and `version` attributes are required for a Miniresnet model. To know which models require which attributes, please consult <a href="#appendix-a">Appendix-A: Models available with the Model Zoo</a>, or the [models.json](training/doc/models.json) documentation. Additionally, you can reference the configuration files provided with the pretrained models in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/)
 - The optional `pretrained_weights` attribute can be used to load pretrained weights in the model before it gets trained, and perform transfer learning.
 If set to True, pretrained weights are loaded, and if set to False the model is trained from scratch. If you want to load pretrained weights, and fine-tune the entire model (instead of just performing transfer learning by retraining the last layer), you can set the `fine_tune` parameter to True.
 Transfer learning is covered in the "Transfer learning" section of the documentation.

@@ -989,7 +989,7 @@ The models that are available with the Model Zoo and their parameters are descri
 The 'model' sections that are shown below must be added to the 'training' section of the configuration file.
-When using pretrained backbones with these models, you will want to have specific preprocessing and feature extraction parameters. Please, refer to the configuration files provided in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/audio_event_detection/) for these parameters.
+When using pretrained backbones with these models, you will want to have specific preprocessing and feature extraction parameters. Please, refer to the configuration files provided in the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/audio_event_detection/) for these parameters.
 If you are fine-tuning, or training from scratch, feel free to use whichever preprocessing and feature extraction parameters you desire !
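The patch-cutting step this README describes (a log-mel spectrogram split into fixed-size patches, each one a model input) can be sketched in plain Python. The function name and the non-overlapping layout are illustrative assumptions; the real patch size and hop come from each model's `config.yaml`:

```python
def cut_into_patches(spectrogram, patch_frames):
    """Cut a log-mel spectrogram (list of n_mels rows, each n_frames
    long) into non-overlapping patches of patch_frames time frames.
    Sketch only: on the board the model zoo computes patches on the
    fly, with sizes taken from the model's config.yaml."""
    n_frames = len(spectrogram[0])
    n_patches = n_frames // patch_frames
    return [
        [row[i * patch_frames:(i + 1) * patch_frames] for row in spectrogram]
        for i in range(n_patches)
    ]
```

Each patch keeps every mel bin and a fixed number of time frames, matching the 64x96 input shape hinted at by model names like `yamnet_256_64x96_tl`.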

audio_event_detection/src/config_file_examples/chain_tqe_config.yaml

Lines changed: 7 additions & 0 deletions
@@ -105,6 +105,13 @@ training:
   patience: 60
   # trained_model_path: trained.h5 # Optional, use it if you want to save the best model at the end of the training to a path of your choice

+quantization:
+  quantizer: TFlite_converter
+  quantization_type: PTQ
+  quantization_input_type: int8
+  quantization_output_type: float
+  export_dir: quantized_models
+
 mlflow:
   uri: ./experiments_outputs/mlruns
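The new `quantization` section asks the TFLite converter for post-training quantization (PTQ) with int8 inputs and float outputs. As a rough numeric illustration of what int8 PTQ means, here is a toy symmetric quantizer; this is an assumption-laden sketch for intuition only, not the converter's actual per-tensor/per-channel algorithm:

```python
def quantize_int8(values):
    """Toy symmetric post-training quantization of floats to int8:
    the scale maps the largest magnitude in `values` to 127.
    Illustration only; the TFLite converter does this internally."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes and the scale."""
    return [q * scale for q in quantized]
```

Rounding to 8-bit codes is what shrinks the model and speeds up inference, at the cost of the small reconstruction error visible after dequantizing.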

common/stm32ai_local/stm_ai_tools.py

Lines changed: 2 additions & 2 deletions
@@ -106,9 +106,9 @@ def _get_stm32ai_cli(root_path: Union[str, Path], host_os: str) -> List[Tuple[ST
     version_10 = pattern_10.findall(lines[4]) if len(lines) > 4 else []
     if version_8 != []:
         version = version_8
-    if version_9 != []:
+    elif version_9 != []:
         version = version_9
-    if version_10 != []:
+    elif version_10 != []:
         version = version_10
     else:
         raise ValueError('Unable to find the CubeAi version')
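The "code typo" named in the commit message is this chain: with independent `if` statements, the trailing `else` binds only to the last `if`, so a version matched by an earlier pattern is discarded and a spurious error is raised whenever `version_10` is empty. A stripped-down sketch of the before/after behaviour (function names are illustrative, not from the source):

```python
def pick_version_buggy(version_8, version_9, version_10):
    # Pre-fix logic: the else pairs only with the last if, so a match
    # for version 8 or 9 still raises when version_10 is empty.
    if version_8 != []:
        version = version_8
    if version_9 != []:
        version = version_9
    if version_10 != []:
        version = version_10
    else:
        raise ValueError('Unable to find the CubeAi version')
    return version

def pick_version_fixed(version_8, version_9, version_10):
    # Post-fix logic: elif makes the else fire only when none of the
    # three pattern matches succeeded.
    if version_8 != []:
        version = version_8
    elif version_9 != []:
        version = version_9
    elif version_10 != []:
        version = version_10
    else:
        raise ValueError('Unable to find the CubeAi version')
    return version
```

With the fix, a CubeAI 8.x or 9.x install is detected correctly even when no 10.x match is found.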

hand_posture/deployment/README.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Detailed instructions on installation are available in this [wiki article](https
 You can use the deployment service by using a model zoo pre-trained model from the [STM32 model zoo on GH](../pretrained_models/README.md) or your own Hand Posture model. Please refer to the YAML file [deployment_config.yaml](../src/config_file_examples/deployment_config.yaml), which is a ready YAML file with all the necessary sections ready to be filled, or you can update the [user_config.yaml](../src/user_config.yaml) to use it.
-As an example, we will show how to deploy the model [CNN2D_ST_HandPosture_8classes.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/hand_posture/CNN2D_ST_HandPosture/ST_pretrainedmodel_custom_dataset/ST_VL53L8CX_handposture_dataset/CNN2D_ST_HandPosture_8classes/) pre-trained on the [ST_VL53L8CX_handposture_dataset](../datasets/) dataset .
+As an example, we will show how to deploy the model [CNN2D_ST_HandPosture_8classes.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/hand_posture/CNN2D_ST_HandPosture/ST_pretrainedmodel_custom_dataset/ST_VL53L8CX_handposture_dataset/CNN2D_ST_HandPosture_8classes/) pre-trained on the [ST_VL53L8CX_handposture_dataset](../datasets/) dataset .
 <ul><details open><summary><a href="#2-1">2.1 Setting the Model and the Operation Mode</a></summary><a id="2-1"></a>

hand_posture/pretrained_models/README.md

Lines changed: 2 additions & 2 deletions
@@ -4,8 +4,8 @@ The STM32 model zoo includes several models for hand posture recognition use cas
 - `ST_pretrainedmodel_custom_dataset` contains different hand posture models trained on ST custom datasets using our [training scripts](../src/config_file_examples/training_config.yaml).
-**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/hand_posture/).**
+**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/hand_posture/).**
 You can get footprints and performance information for each model following links below:
-- [CNN2D_ST_HandPosture](https://github.com/STMicroelectronics/stm32ai-modelzoo/hand_posture/CNN2D_ST_HandPosture/README.md)
+- [CNN2D_ST_HandPosture](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/hand_posture/CNN2D_ST_HandPosture/README.md)

human_activity_recognition/deployment/README.md

Lines changed: 3 additions & 3 deletions
@@ -42,9 +42,9 @@ The deployment of the model is driven by a configuration file written in the YAM
 This tutorial only describes enough settings for you to be able to deploy a pretrained model from the model zoo. Please refer to the [human_activity_recognition/README.md](../src/README.md) file for more information on the configuration file.
 In this tutorial, we will be deploying a pretrained model from the STM32 model zoo.
-Pretrained models can be found under the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/) folder. Each of the pretrained models has its own subfolder. These subfolders contain a copy of the configuration file used to train this model. Copy the `preprocessing` section from the given model to your own configuration file [user_config.yaml](../src/user_config.yaml), to ensure you have the correct preprocessing parameters for the given model.
+Pretrained models can be found under the [model zoo on GH](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/human_activity_recognition/) folder. Each of the pretrained models has its own subfolder. These subfolders contain a copy of the configuration file used to train this model. Copy the `preprocessing` section from the given model to your own configuration file [user_config.yaml](../src/user_config.yaml), to ensure you have the correct preprocessing parameters for the given model.
-In this tutorial, we will deploy an [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5) that has been trained on mobility_v1, a proprietary dataset collected by STMicroelectronics.
+In this tutorial, we will deploy an [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5) that has been trained on mobility_v1, a proprietary dataset collected by STMicroelectronics.
 <ul><details open><summary><a href="#2-1">2.1 Operation mode</a></summary><a id="2-1"></a>

@@ -113,7 +113,7 @@ For more details on this section, please consult section 3.5 and section 6 of th
 </details></ul>
 <ul><details open><summary><a href="#2-4">2.4 Data Preparation and Preprocessing</a></summary><a id="2-4"></a>
-When performing Human Activity Recognition, the data is not processed sample by sample; rather, the data is first framed using different lengths depending on how often a prediction is to be made. In this operation, we are using a model which used a framing of length 24, as suggested by the name: [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5), `wl` stands for window length. The first step of the data preparation is to do the framing of the samples. This information is provided in the section `training.model` as shown below while training:
+When performing Human Activity Recognition, the data is not processed sample by sample; rather, the data is first framed using different lengths depending on how often a prediction is to be made. In this operation, we are using a model which used a framing of length 24, as suggested by the name: [ign_wl_24.h5](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5), `wl` stands for window length. The first step of the data preparation is to do the framing of the samples. This information is provided in the section `training.model` as shown below while training:
 ```yaml
 training:
   model:
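The framing described in the hunk above (window length 24 for `ign_wl_24`, where `wl` stands for window length) can be sketched as follows; the non-overlapping split is an illustrative assumption, since the model zoo's preprocessing defines the actual windowing and overlap:

```python
def frame_signal(samples, window_length=24):
    """Group a stream of sensor samples into fixed-length frames;
    each frame becomes one model input. Sketch only: overlap and
    padding follow the model zoo's preprocessing, not shown here."""
    n_frames = len(samples) // window_length
    return [samples[i * window_length:(i + 1) * window_length]
            for i in range(n_frames)]
```

A longer window gives the model more temporal context per prediction but makes predictions less frequent, which is why the zoo ships variants with different `wl` values.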

human_activity_recognition/pretrained_models/README.md

Lines changed: 3 additions & 3 deletions
@@ -5,10 +5,10 @@ The STM32 model zoo includes several models for the human activity recognition (
 - `ST_pretrainedmodel_custom_dataset` directory contains different human activity recognition models trained on ST custom datasets.
 - `ST_pretrainedmodel_public_dataset` directory contains different human activity recognition models trained on public datasets.
-**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/).**
+**Feel free to explore the model zoo and get pre-trained models [here](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/master/human_activity_recognition/).**
 You can get footprints and performance information for each model following links below:
-- [IGN](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/README.md)
-- [GMP](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/gmp/README.md)
+- [IGN](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/human_activity_recognition/ign/README.md)
+- [GMP](https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/master/human_activity_recognition/gmp/README.md)
