content/learning-paths/servers-and-cloud-computing/whisper/_demo.md (+2 -1)

@@ -1,5 +1,5 @@
 ---
-title: Demo - Audio transcription on Arm
+title: Demo - Whisper Voice Audio transcription on Arm
 overview: |
   This Learning Path shows you how to use a c8g.8xlarge AWS Graviton4 instance, powered by an Arm Neoverse CPU, to build a simple Transcription-as-a-Service server.
content/learning-paths/servers-and-cloud-computing/whisper/_index.md (+9 -12)

@@ -1,25 +1,22 @@
 ---
-title: Run OpenAI Whisper Audio Model efficiently on Arm with Hugging Face Transformers
-
-draft: true
-cascade:
-    draft: true
+title: Accelerate Whisper on Arm with Hugging Face Transformers

 minutes_to_complete: 15

-who_is_this_for: This Learning Path is for software developers looking to run the Whisper automatic speech recognition (ASR) model efficiently. You will use an Arm-based cloud instance to run and build speech transcription based applications.
+who_is_this_for: This Learning Path is for software developers familiar with basic machine learning concepts and looking to run the OpenAI Whisper Automatic Speech Recognition (ASR) model efficiently, using an Arm-based cloud instance.

 learning_objectives:
-    - Install the dependencies to run the Whisper Model
-    - Run the OpenAI Whisper model using Hugging Face Transformers.
+    - Install the dependencies for the Whisper ASR Model.
+    - Run the Whisper model using Hugging Face Transformers.
     - Enable performance-enhancing features for running the model on Arm CPUs.
-    - Compare the total time taken to generate transcript with Whisper.
+    - Evaluate transcript generation times using Whisper.

 prerequisites:
-    - An [Arm-based compute instance](/learning-paths/servers-and-cloud-computing/intro/) with 32 cores, 8GB of RAM, and 32GB disk space running Ubuntu.
-    - Basic understanding of Python and ML concepts.
-    - Understanding of Whisper ASR Model fundamentals.
+    - An [Arm-based compute instance](/learning-paths/servers-and-cloud-computing/intro/) running Ubuntu with 32 cores, 8GB of RAM, and 32GB of disk space.
+    - Basic knowledge of Python.
+    - Familiarity with machine learning concepts.
+    - Familiarity with the fundamentals of the Whisper ASR Model.
content/learning-paths/servers-and-cloud-computing/whisper/whisper.md (+34 -16)

@@ -1,35 +1,47 @@
 ---
 # User change
-title: "Setup the Whisper Model"
+title: "Set up the Whisper Model"

-weight: 2
+weight: 3

 # Do not modify these elements
 layout: "learningpathall"
 ---

 ## Before you begin

-This Learning Path demonstrates how to run the [whisper-large-v3-turbo model](https://huggingface.co/openai/whisper-large-v3-turbo) as an application that takes an audio input and computes the text transcript of it. The instructions in this Learning Path have been designed for Arm servers running Ubuntu 24.04 LTS. You need an Arm server instance with 32 cores, atleast 8GB of RAM and 32GB disk to run this example. The instructions have been tested on a AWS Graviton4 `c8g.8xlarge` instance.
+This Learning Path demonstrates how to run the [whisper-large-v3-turbo model](https://huggingface.co/openai/whisper-large-v3-turbo) as an application that accepts an audio input and computes its text transcript.

-## Overview
+The instructions in this Learning Path have been designed for Arm servers running Ubuntu 24.04 LTS. You will need an Arm server instance with 32 cores, at least 8GB of RAM, and 32GB of disk space.

-OpenAI Whisper is an open-source Automatic Speech Recognition (ASR) model trained on the multilingual and multitask data, which enables the transcript generation in multiple languages and translations from different languages to English. You will learn about the foundational aspects of speech-to-text transcription applications, specifically focusing on running OpenAI’s Whisper on an Arm CPU. Lastly, you will explore the implementation and performance considerations required to efficiently deploy Whisper using Hugging Face Transformers framework.
+These steps have been tested on an AWS Graviton4 `c8g.8xlarge` instance.
+
+## Overview and Focus of Learning Path
+
+OpenAI Whisper is an open-source Automatic Speech Recognition (ASR) model trained on multilingual, multitask data. It can generate transcripts in multiple languages and translate various languages into English.
+
+In this Learning Path, you will learn about the foundational aspects of speech-to-text transcription applications, with a focus on running OpenAI’s Whisper on an Arm CPU. You will explore the implementation and performance considerations required to efficiently deploy Whisper using the Hugging Face Transformers framework.

 ### Speech-to-text ML applications

-Speech-to-text (STT) transcription applications transform spoken language into written text, enabling voice-driven interfaces, accessibility tools, and real-time communication services. Audio is first cleaned and converted into a format suitable for processing, then passed through a deep learning model trained to recognize speech patterns. Advanced language models help refine the output, improving accuracy by predicting likely word sequences based on context. Whether running on cloud servers, STT applications must balance accuracy, latency, and computational efficiency to meet the needs of diverse use cases.
+Speech-to-text (STT) transcription applications transform spoken language into written text, enabling voice-driven interfaces, accessibility tools, and real-time communication services.
+
+Audio is first cleaned and converted into a format suitable for processing, then passed through a deep learning model trained to recognize speech patterns. Advanced language models help refine the output, improving accuracy by predicting likely word sequences based on context. When deployed on cloud servers, STT applications must balance accuracy, latency, and computational efficiency to meet diverse use cases.
+
+## Learning Path Setup

-## Install dependencies
+To get set up, follow these steps, copying the code snippets at each stage.

-Install the following packages on your Arm based server instance:
+### Install dependencies
+
+Install the following packages on your Arm-based server instance:

-## Create a python script for audio to text transcription
+### Create a Python Script for Audio-to-Text Transcription

-You will use the Hugging Face `transformers` framework to help process the audio. It contains classes that configures the model, and prepares it for inference. `pipeline` is an end-to-end function for NLP tasks. In the code below, it's configured to do pre- and post-processing of the sample in this example, as well as running the actual inference.
+Use the Hugging Face `Transformers` framework to process the audio. It provides classes to configure the model and prepare it for inference.
+
+The `pipeline` function is an end-to-end solution for NLP tasks. In the code below, it is configured to do pre- and post-processing of the sample in this example, as well as running inference.
+
+Using a file editor of your choice, create a Python file named `whisper-application.py` with the following content:

 ```python { file_name="whisper-application.py" }
 import torch
@@ -122,9 +138,11 @@ export DNNL_VERBOSE=1
 python3 whisper-application.py
 ```

-You should see output similar to the image below with a log output, transcript of the audio and the `Inference elapsed time`.
+You should see output similar to the image below, which includes the log output, the audio transcript, and the `Inferencing elapsed time`.

 

-You've now run the Whisper model successfully on your Arm-based CPU. Continue to the next section to configure flags that can increase the performance your running model.
+You have now run the Whisper model successfully on your Arm-based CPU.
+
+Continue to the next section to configure flags that can boost your model's performance.
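
The body of `whisper-application.py` is unchanged by this pull request, so the diff shows only its first line (`import torch`). As a rough, illustrative sketch of how such a script is typically built on the Hugging Face `pipeline` API (not the exact file from the Learning Path; the audio file name and the printed labels are assumptions), a minimal version could look like this:

```python
import time

import torch
from transformers import pipeline

# End-to-end ASR pipeline: it wraps pre-processing (feature extraction),
# model inference, and post-processing (decoding) in a single call.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    torch_dtype=torch.float32,
    device="cpu",
)

start = time.time()
# "sample.wav" is a placeholder; decoding a local audio file requires ffmpeg.
result = asr("sample.wav", return_timestamps=True)
elapsed = time.time() - start

print(result["text"])
print(f"Inference elapsed time: {elapsed:.2f} s")
```

Because `pipeline` handles tokenization, feature extraction, and decoding internally, the script needs no explicit pre- or post-processing code, which is why the Learning Path describes it as an end-to-end solution.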
content/learning-paths/servers-and-cloud-computing/whisper/whisper_deploy.md (+19 -8)

@@ -5,27 +5,38 @@ weight: 4
 layout: learningpathall
 ---

-## Setting environment variables that impact performance
+## Optimize Environment Variables to Boost Performance

-Speech-to-text applications often process large amounts of audio data in real time, requiring efficient computation to balance accuracy and speed. Low-level implementations of the kernels in the neural network enhance performance by reducing processing overhead. When tailored for specific hardware architectures, such as Arm CPUs, these kernels accelerate key tasks like feature extraction and neural network inference. Optimized kernels ensure that speech models like OpenAI’s Whisper can run efficiently, making high-quality transcription more accessible across various server applications.
+Speech-to-text applications often process large amounts of audio data in real time, requiring efficient computation to balance accuracy and speed. Low-level implementations of neural network kernels can enhance performance by reducing processing overhead.

-Other considerations below allow us to use the memory more efficiently. Things like allocating additional memory and threads for a certain task can increase performance. By enabling these hardware-aware options, applications achieve lower latency, reduced power consumption, and smoother real-time transcription.
+When tailored for specific hardware architectures, such as Arm CPUs, these kernels accelerate key tasks such as feature extraction and neural network inference. Optimized kernels ensure that speech models like OpenAI’s Whisper run efficiently, making high-quality transcription more accessible across various server applications.

-Use the following flags to enable fast math BFloat16(BF16) GEMM kernels, Linux Transparent Huge Page (THP) allocations, logs to confirm kernel and set LRU cache capacity and OMP_NUM_THREADS to run the Whisper efficiently on Arm machines.
+Other factors contribute to more efficient memory usage. For example, allocating additional memory and threads for specific tasks can boost performance. By leveraging these hardware-aware optimizations, applications can achieve lower latency, reduced power consumption, and smoother real-time transcription.
+
+Use the following flags to optimize performance on Arm machines:

 ```bash
 export DNNL_DEFAULT_FPMATH_MODE=BF16
 export THP_MEM_ALLOC_ENABLE=1
 export LRU_CACHE_CAPACITY=1024
 export OMP_NUM_THREADS=32
 ```
+These variables do the following:
+
+* `export DNNL_DEFAULT_FPMATH_MODE=BF16` - sets the default floating-point math mode for the oneDNN library to BF16 (bfloat16). This can improve performance and efficiency on hardware that supports BF16 precision.
+
+* `export THP_MEM_ALLOC_ENABLE=1` - enables an optimized memory allocation strategy, often leveraging transparent huge pages, which can enhance memory management and reduce fragmentation in frameworks like PyTorch.
+
+* `export LRU_CACHE_CAPACITY=1024` - configures the capacity of a Least Recently Used (LRU) cache to 1024 entries. This helps store and quickly retrieve recently used data, reducing redundant computations.
+
+* `export OMP_NUM_THREADS=32` - sets the number of threads for OpenMP-based parallel processing to 32, allowing your application to take full advantage of multi-core systems for faster performance.

 {{% notice Note %}}
 BF16 support is merged into PyTorch versions greater than 2.3.0.
 {{% /notice %}}

 ## Run Whisper File
-After setting the environment variables in the previous step, now lets run the Whisper model again and analyze the performance impact.
+After setting the environment variables in the previous step, run the Whisper model again and analyze the performance impact.

-You should now observe that the processing time has gone down compared to the last run:
+You should now see that the processing time has gone down compared to the last run:

 

-The output in the above image has the log containing `attr-fpmath:bf16`, which confirms that fast math BF16 kernels are used in the compute process to improve the performance.
+The output in the above image has the log containing `attr-fpmath:bf16`, which confirms that the compute process uses fast math BF16 kernels to improve performance.

-By enabling the environment variables as described in the learning path you can see the performance uplift with the Whisper using Hugging Face Transformers framework on Arm.
+You have now learned how configuring these environment variables can achieve performance uplift of OpenAI's Whisper model when using the Hugging Face Transformers framework on Arm-based systems.
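
These flags only take effect if they are exported in the shell that launches the script. A small pre-flight check, sketched below purely for illustration (it is not part of the Learning Path files), can confirm that the variables are visible to the Python process and that the installed PyTorch is recent enough for BF16 fast math:

```python
import os

import torch

# Variables exported in the shell are inherited by this process, so printing
# them confirms the flags will be seen by oneDNN, PyTorch, and OpenMP.
for name in (
    "DNNL_DEFAULT_FPMATH_MODE",
    "THP_MEM_ALLOC_ENABLE",
    "LRU_CACHE_CAPACITY",
    "OMP_NUM_THREADS",
):
    print(f"{name}={os.environ.get(name, '<not set>')}")

# BF16 fast math needs a recent PyTorch (greater than 2.3.0, per the note above).
print(f"PyTorch version: {torch.__version__}")

# With OMP_NUM_THREADS exported, this typically matches that value.
print(f"Intra-op threads: {torch.get_num_threads()}")
```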