content/learning-paths/servers-and-cloud-computing/whisper/_index.md (5 additions, 4 deletions)
@@ -3,18 +3,19 @@ title: Accelerate Whisper on Arm with Hugging Face Transformers
minutes_to_complete: 30
- who_is_this_for: This Learning Path is for software developers looking to run the Whisper Automatic Speech Recognition (ASR) model efficiently. You will use an Arm-based cloud instance to run and build speech transcription-based applications.
+ who_is_this_for: This Learning Path is for software developers who are familiar with basic machine learning concepts and want to run the OpenAI Whisper Automatic Speech Recognition (ASR) model efficiently on an Arm-based cloud instance.
learning_objectives:
- Install the dependencies for the Whisper ASR Model.
- - Run the OpenAI Whisper model using Hugging Face Transformers.
+ - Run the Whisper model using Hugging Face Transformers.
- Enable performance-enhancing features for running the model on Arm CPUs.
- Evaluate transcript generation times using Whisper.
prerequisites:
- - An [Arm-based compute instance](/learning-paths/servers-and-cloud-computing/intro/) running Ubuntu with 32 cores, 8GB of RAM, and 32GB disk space.
- - Basic knowledge of Python and machine learning concepts.
+ - An [Arm-based compute instance](/learning-paths/servers-and-cloud-computing/intro/) running Ubuntu with 32 cores, 8GB of RAM, and 32GB of disk space.
+ - Basic knowledge of Python.
+ - Familiarity with machine learning concepts.
- Familiarity with the fundamentals of the Whisper ASR Model.
content/learning-paths/servers-and-cloud-computing/whisper/whisper.md (19 additions, 11 deletions)
@@ -12,19 +12,27 @@ layout: "learningpathall"
This Learning Path demonstrates how to run the [whisper-large-v3-turbo model](https://huggingface.co/openai/whisper-large-v3-turbo) as an application that accepts an audio input and computes its text transcript.
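Before diving into the setup, it can help to see the overall shape of such an application. The sketch below uses the Hugging Face `pipeline` API and assumes `transformers` and `torch` are already installed (the install steps come later in this Learning Path); `audio.wav` is a placeholder file name, and the Learning Path's own `whisper-application.py` may differ in detail.

```python
# Minimal sketch: transcribe an audio file with whisper-large-v3-turbo.
# "audio.wav" is a placeholder; decoding a file path requires ffmpeg.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
)

result = transcriber("audio.wav")  # decodes the file and runs inference
print(result["text"])              # the transcript as a plain string
```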
- The instructions in this Learning Path have been designed for Arm servers running Ubuntu 24.04 LTS. You will need an Arm server instance with 32 cores, at least 8GB of RAM, and 32GB of disk space. These steps have been tested on an AWS Graviton4 `c8g.8xlarge` instance.
+ The instructions in this Learning Path have been designed for Arm servers running Ubuntu 24.04 LTS. You will need an Arm server instance with 32 cores, at least 8GB of RAM, and 32GB of disk space.
- ## Overview
+ These steps have been tested on an AWS Graviton4 `c8g.8xlarge` instance.
+ ## Overview and Focus of this Learning Path
OpenAI Whisper is an open-source Automatic Speech Recognition (ASR) model trained on multilingual, multitask data. It can generate transcripts in multiple languages and translate various languages into English.
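For instance, with the Transformers `pipeline` API the decoding task can be switched from transcription to English translation through `generate_kwargs`. This is an illustrative sketch rather than part of the Learning Path's code, and `french_audio.wav` is a placeholder file name:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
)

# "task": "translate" asks Whisper to emit English text for
# non-English speech instead of a same-language transcript.
out = asr("french_audio.wav", generate_kwargs={"task": "translate"})
print(out["text"])
```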
- In this Learning Path, you will learn about the foundational aspects of speech-to-text transcription applications, with a focus on running OpenAI’s Whisper on an Arm CPU. Finally, you will explore the implementation and performance considerations required to efficiently deploy Whisper using the Hugging Face Transformers framework.
+ In this Learning Path, you will learn about the foundational aspects of speech-to-text transcription applications, with a focus on running OpenAI’s Whisper on an Arm CPU. You will explore the implementation and performance considerations required to efficiently deploy Whisper using the Hugging Face Transformers framework.
+ ### Speech-to-text ML applications
+ Speech-to-text (STT) transcription applications transform spoken language into written text, enabling voice-driven interfaces, accessibility tools, and real-time communication services.
+ Audio is first cleaned and converted into a format suitable for processing, then passed through a deep learning model trained to recognize speech patterns. Advanced language models help refine the output, improving accuracy by predicting likely word sequences based on context. When deployed on cloud servers, STT applications must balance accuracy, latency, and computational efficiency to meet diverse use cases.
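To make the preprocessing step concrete, the sketch below resamples a recording to the 16 kHz rate Whisper expects and converts it into the log-Mel spectrogram features the model actually consumes. It assumes the `librosa` package in addition to Transformers, and `speech.wav` is a placeholder file name:

```python
import librosa
from transformers import WhisperProcessor

# Load the waveform and resample to 16 kHz, the rate Whisper was trained on.
waveform, rate = librosa.load("speech.wav", sr=16000)

# The processor turns the raw waveform into log-Mel spectrogram
# features, the tensor representation the model ingests.
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3-turbo")
inputs = processor(waveform, sampling_rate=rate, return_tensors="pt")
print(inputs.input_features.shape)  # e.g. torch.Size([1, 128, 3000])
```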
- ### Speech-to-Text ML applications
+ ## Learning Path Setup
- Speech-to-text (STT) transcription applications transform spoken language into written text, enabling voice-driven interfaces, accessibility tools, and real-time communication services. Audio is first cleaned and converted into a format suitable for processing, then passed through a deep learning model trained to recognize speech patterns. Advanced language models help refine the output, improving accuracy by predicting likely word sequences based on context. When deployed on cloud servers, STT applications must balance accuracy, latency, and computational efficiency to meet diverse use cases.
+ To get set up, follow these steps, copying the code snippets at each stage.
- ## Install dependencies
+ ### Install dependencies
Install the following packages on your Arm-based server instance:
content/learning-paths/servers-and-cloud-computing/whisper/whisper_deploy.md (17 additions, 10 deletions)
@@ -5,31 +5,38 @@ weight: 4
layout: learningpathall
---
- ## Setting Environment Variables that Impact Performance
+ ## Optimize Environment Variables to Boost Performance
- Speech-to-text applications often process large amounts of audio data in real time, requiring efficient computation to balance accuracy and speed. Low-level implementations of neural network kernels can enhance performance by reducing processing overhead. When tailored for specific hardware architectures, such as Arm CPUs, these kernels accelerate key tasks like feature extraction and neural network inference. Optimized kernels ensure that speech models like OpenAI’s Whisper run efficiently, making high-quality transcription more accessible across various server applications.
+ Speech-to-text applications often process large amounts of audio data in real time, requiring efficient computation to balance accuracy and speed. Low-level implementations of neural network kernels can enhance performance by reducing processing overhead.
- Other considerations allow for more efficient memory usage. For example, allocating additional memory and threads for specific tasks can increase performance. By enabling these hardware-aware options, applications achieve lower latency, reduced power consumption, and smoother real-time transcription.
+ When tailored for specific hardware architectures, such as Arm CPUs, these kernels accelerate key tasks such as feature extraction and neural network inference. Optimized kernels ensure that speech models like OpenAI’s Whisper run efficiently, making high-quality transcription more accessible across various server applications.
- Use the following flags to optimize performance on Arm machines:
+ Other factors contribute to more efficient memory usage. For example, allocating additional memory and threads for specific tasks can boost performance. By leveraging these hardware-aware optimizations, applications can achieve lower latency, reduced power consumption, and smoother real-time transcription.
- * Enable fast math BFloat16 (BF16) GEMM kernels.
- * Enable Linux Transparent Huge Page (THP) allocations.
- * Enable logs to confirm kernel and set LRU cache capacity and OMP_NUM_THREADS.
+ Use the following flags to optimize performance on Arm machines:
```bash
export DNNL_DEFAULT_FPMATH_MODE=BF16
export THP_MEM_ALLOC_ENABLE=1
export LRU_CACHE_CAPACITY=1024
export OMP_NUM_THREADS=32
```
+ These variables do the following:
+ * `export DNNL_DEFAULT_FPMATH_MODE=BF16` - sets the default floating-point math mode for the oneDNN library to BF16 (bfloat16). This can improve performance and efficiency on hardware that supports BF16 precision.
+ * `export THP_MEM_ALLOC_ENABLE=1` - enables an optimized memory allocation strategy, often leveraging transparent huge pages, which can enhance memory management and reduce fragmentation in frameworks like PyTorch.
+ * `export LRU_CACHE_CAPACITY=1024` - configures the capacity of a Least Recently Used (LRU) cache to 1024 entries. This helps store and quickly retrieve recently used data, reducing redundant computations.
+ * `export OMP_NUM_THREADS=32` - sets the number of threads for OpenMP-based parallel processing to 32, allowing your application to take full advantage of multi-core systems for faster performance.
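Because these are ordinary environment variables, they must be visible to the process before the libraries that read them initialize. As an alternative to exporting them in the shell, a self-contained script can set them in Python ahead of the `torch` import; this is a sketch, not part of the Learning Path's files:

```python
import os

# Set the performance variables before importing torch so the oneDNN
# and OpenMP runtimes see them when they initialize.
os.environ["DNNL_DEFAULT_FPMATH_MODE"] = "BF16"
os.environ["THP_MEM_ALLOC_ENABLE"] = "1"
os.environ["LRU_CACHE_CAPACITY"] = "1024"
os.environ["OMP_NUM_THREADS"] = "32"

import torch  # imported only after the environment is configured

print(torch.__version__)  # BF16 fast math needs PyTorch newer than 2.3.0
```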
{{% notice Note %}}
BF16 support is merged into PyTorch versions greater than 2.3.0.
{{% /notice %}}
## Run Whisper File
- After setting the environment variables in the previous step, you can now run the Whisper model again and analyze the performance impact.
+ After setting the environment variables in the previous step, run the Whisper model again and analyze the performance impact.
Run the `whisper-application.py` file:
@@ -43,6 +50,6 @@ You should now see that the processing time has gone down compared to the last r

- The output in the above image has the log containing `attr-fpmath:bf16`, which confirms that fast math BF16 kernels are used in the compute process to improve the performance.
+ The output in the above image has the log containing `attr-fpmath:bf16`, which confirms that the compute process uses fast math BF16 kernels to improve performance.
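To surface this log yourself, oneDNN offers a verbose mode controlled by one more environment variable. The exact line format varies across oneDNN versions, so treat the sample output in the comments below as indicative rather than exact:

```python
import os

# DNNL_VERBOSE=1 makes oneDNN print one trace line per executed
# primitive, including the active attributes.
os.environ["DNNL_VERBOSE"] = "1"

import torch  # import after setting the variable so oneDNN sees it

# Running the model now emits lines resembling:
# onednn_verbose,exec,cpu,matmul,...,attr-fpmath:bf16,...
```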
- Enable the environment variables detailed in this Learning Path to achieve performance uplift of OpenAI Whisper using Hugging Face Transformers framework on Arm.
+ You have now learned how configuring these environment variables can deliver a performance uplift for OpenAI's Whisper model when using the Hugging Face Transformers framework on Arm-based systems.