Commit 7dbc9a1

committed: update with working instructions for INC
1 parent f58498a

File tree: 2 files changed (+10, -9 lines)


AI-and-Analytics/End-to-end-Workloads/LanguageIdentification/Inference/quantize_model.py

Lines changed: 1 addition & 3 deletions
@@ -18,8 +18,6 @@
 from neural_compressor.utils.pytorch import load
 from speechbrain.pretrained import EncoderClassifier
 
-DEFAULT_EVAL_DATA_PATH = "/data/commonVoice/dev"
-
 def prepare_dataset(path):
     data_list = []
     for dir_name in os.listdir(path):
@@ -33,7 +31,7 @@ def main(argv):
     import argparse
     parser = argparse.ArgumentParser()
     parser.add_argument('-p', type=str, required=True, help="Path to the model to be optimized")
-    parser.add_argument('-datapath', type=str, default=DEFAULT_EVAL_DATA_PATH, help="Path to evaluation dataset")
+    parser.add_argument('-datapath', type=str, required=True, help="Path to evaluation dataset")
     args = parser.parse_args()
 
     model_path = args.p

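For context, `prepare_dataset` (whose first lines appear in the hunk above) is the helper that walks the `-datapath` folder to build the evaluation set used during quantization. Its body is not shown in this diff, so the following is only a minimal hypothetical sketch, assuming the evaluation folder contains one subdirectory per language label with audio files inside; it is not the repository's actual implementation:

```python
import os

def prepare_dataset(path):
    # Hypothetical sketch: collect (audio_file, label) pairs, assuming the
    # evaluation folder holds one subdirectory per language label.
    data_list = []
    for dir_name in os.listdir(path):
        dir_path = os.path.join(path, dir_name)
        if not os.path.isdir(dir_path):
            continue
        for file_name in sorted(os.listdir(dir_path)):
            data_list.append((os.path.join(dir_path, file_name), dir_name))
    return data_list
```

With `-datapath` now required, the script no longer assumes the hardcoded `/data/commonVoice/dev` location and instead evaluates whichever folder the user passes in.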
AI-and-Analytics/End-to-end-Workloads/LanguageIdentification/README.md

Lines changed: 9 additions & 6 deletions
@@ -360,9 +360,9 @@ The following examples describe how to use the scripts to produce specific outco
 
 1. To improve inference latency, you can use the Intel® Neural Compressor (INC) to quantize the trained model from FP32 to INT8 by running `quantize_model.py`.
 ```bash
-python quantize_model.py -p ./lang_id_commonvoice_model -datapath $COMMON_VOICE_PATH/dev
+python quantize_model.py -p ./lang_id_commonvoice_model -datapath $COMMON_VOICE_PATH/processed_data/dev
 ```
-Use the `-datapath` argument to specify a custom evaluation dataset. By default, the datapath is set to the `$COMMON_VOICE_PATH/dev` folder that was generated from the data preprocessing scripts in the `Training` folder.
+Use the `-datapath` argument to specify a custom evaluation dataset. By default, the datapath is set to the `$COMMON_VOICE_PATH/processed_data/dev` folder that was generated from the data preprocessing scripts in the `Training` folder.
 
 After quantization, the model will be stored in `lang_id_commonvoice_model_INT8` and `neural_compressor.utils.pytorch.load` will have to be used to load the quantized model for inference. If `self.language_id` is the original model and `data_path` is the path to the audio file:
 ```
@@ -372,13 +372,16 @@ The following examples describe how to use the scripts to produce specific outco
 prediction = self.model_int8(signal)
 ```
 
-**(Optional) Comparing Predictions with Ground Truth**
+The code above is integrated into `inference_custom.py`. You can now run inference on your data using this INT8 model:
+```bash
+python inference_custom.py -p data_custom -d 3 -s 50 --vad --int8_model --verbose
+```
 
-You can choose to modify `audio_ground_truth_labels.csv` to include the name of the audio file and expected audio label (like, `en` for English), then run `inference_custom.py` with the `--ground_truth_compare` option. By default, this is disabled.
+>**Note**: The `--verbose` option is required to view the latency measurements.
 
-### Troubleshooting
+**(Optional) Comparing Predictions with Ground Truth**
 
-If the model appears to be giving the same output regardless of input, try running `clean.sh` to remove the `RIR_NOISES` and `speechbrain` folders. Redownload that data after cleaning by running `initialize.sh` and either `inference_commonVoice.py` or `inference_custom.py`.
+You can choose to modify `audio_ground_truth_labels.csv` to include the name of the audio file and expected audio label (like, `en` for English), then run `inference_custom.py` with the `--ground_truth_compare` option. By default, this is disabled.
 
 ## License
 
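The README hunk above shows only the last line of the inference snippet (`prediction = self.model_int8(signal)`). As a rough, self-contained sketch of the loading pattern it describes, assuming INC's `neural_compressor.utils.pytorch.load` and SpeechBrain's `EncoderClassifier` (variable names and the audio path here are illustrative, not the sample's exact code):

```python
from neural_compressor.utils.pytorch import load
from speechbrain.pretrained import EncoderClassifier

# FP32 model trained on Common Voice (plays the role of self.language_id in the sample).
language_id = EncoderClassifier.from_hparams(source="./lang_id_commonvoice_model")

# Restore the INT8 weights produced by quantize_model.py on top of the FP32 model.
model_int8 = load("./lang_id_commonvoice_model_INT8", language_id)

# Run inference on a single audio file (data_path in the README text).
data_path = "./data_custom/sample.wav"  # illustrative path
signal = language_id.load_audio(data_path)
prediction = model_int8(signal)
```

Because `load` restores the quantized weights on top of an existing model object, the original FP32 model must be constructed first, which is why the README keeps `self.language_id` around even when running the INT8 path.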