Commit a8fdea3

Merge pull request #2368 from NinaARM/feature/voice-assistant-updates
Update voice assistant LP with multimodal functionality
2 parents 189d132 + 44a7038 commit a8fdea3

18 files changed: +107 additions, -22 deletions

content/learning-paths/mobile-graphics-and-gaming/voice-assistant/1-prerequisites.md

Lines changed: 14 additions & 2 deletions
@@ -14,6 +14,7 @@ Begin by installing the latest version of [Android Studio](https://developer.and
 
 Next, install the following command-line tools:
 - `cmake`; a cross-platform build system.
+- `python3`; an interpreted programming language, used by the project to fetch dependencies and models.
 - `git`; a version control system that you use to clone the Voice Assistant codebase.
 - `adb`; Android Debug Bridge, used to communicate with and control Android devices.

@@ -22,9 +23,20 @@ Install these tools with the appropriate command for your OS:
 {{< tabpane code=true >}}
 {{< tab header="Linux/Ubuntu" language="bash">}}
 sudo apt update
-sudo apt install git adb cmake -y
+sudo apt install git adb cmake python3 -y
 {{< /tab >}}
 {{< tab header="macOS" language="bash">}}
-brew install git android-platform-tools cmake
+brew install git android-platform-tools cmake python
+{{< /tab >}}
+{{< /tabpane >}}
+
+Ensure the correct version of Python is installed; the project needs Python 3.9 or later:
+
+{{< tabpane code=true >}}
+{{< tab header="Linux/Ubuntu" language="bash">}}
+python3 --version
+{{< /tab >}}
+{{< tab header="macOS" language="bash">}}
+python3 --version
 {{< /tab >}}
 {{< /tabpane >}}
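The version check added above only prints the version string. As a minimal sketch of how you might gate on the 3.9 minimum, the snippet below parses that string with POSIX parameter expansion; the `version_line` value here is a hypothetical example, not output captured from a real machine:

```shell
# Sketch: gate on the Python 3.9 minimum the project requires.
# Assumption: `python3 --version` prints "Python X.Y.Z"; in practice you
# would capture it with: version_line=$(python3 --version 2>&1)
version_line="Python 3.11.4"   # hypothetical example value

version=${version_line#Python }   # strip the "Python " prefix -> "3.11.4"
major=${version%%.*}              # -> "3"
minor_patch=${version#*.}         # -> "11.4"
minor=${minor_patch%%.*}          # -> "11"

if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 9 ]; }; then
  echo "OK: Python $version meets the 3.9 minimum"
else
  echo "Too old: Python $version (3.9 or later required)" >&2
fi
```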

content/learning-paths/mobile-graphics-and-gaming/voice-assistant/2-overview.md

Lines changed: 49 additions & 0 deletions
@@ -33,6 +33,26 @@ This process includes the following stages:
 - A neural network analyzes these features to predict the most likely transcription based on grammar and context.
 - The recognized text is passed to the next stage of the pipeline.
 
+The voice assistant pipeline imports and builds a separate module to provide this STT functionality. You can access it at:
+
+```
+https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text
+```
+
+and build it for various platforms to benchmark the STT functionality independently:
+
+|Platform|Details|
+|---|---|
+|Linux|x86_64: KleidiAI is disabled by default; aarch64: KleidiAI is enabled by default.|
+|Android|Cross-compile for an Android device; ensure the Android NDK path is set and the correct toolchain file is provided. KleidiAI is enabled by default.|
+|macOS|Native or cross-compilation for a Mac device. KleidiAI and SME kernels can be used if available on the device.|
+
+Currently, this module uses [whisper.cpp](https://github.com/ggml-org/whisper.cpp) and wraps the backend library in a thin C++ layer. The module also provides JNI bindings for developers targeting Android-based applications.
+
+{{% notice %}}
+You can find more information on how to build and use this module [here](https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text/-/blob/main/README.md?ref_type=heads).
+{{% /notice %}}
+
 ## Large Language Model
 
 Large Language Models (LLMs) enable natural language understanding and, in this application, are used for question-answering.

@@ -41,8 +61,37 @@ The text transcription from the previous part of the pipeline is used as input t
 
 By default, the LLM runs asynchronously, streaming tokens as they are generated. The UI updates in real time with each token, which is also passed to the final pipeline stage.
 
+The voice assistant pipeline imports and builds a separate module to provide this LLM functionality. You can access it at:
+
+```
+https://gitlab.arm.com/kleidi/kleidi-examples/large-language-models
+```
+
+and build it for various platforms to benchmark the LLM functionality independently:
+
+|Platform|Details|
+|---|---|
+|Linux|x86_64: KleidiAI is disabled by default; aarch64: KleidiAI is enabled by default.|
+|Android|Cross-compile for an Android device; ensure the Android NDK path is set and the correct toolchain file is provided. KleidiAI is enabled by default.|
+|macOS|Native or cross-compilation for a Mac device. KleidiAI and SME kernels can be used if available on the device.|
+
+Currently, this module provides a thin C++ layer as well as JNI bindings for developers targeting Android-based applications. The supported backends are:
+
+|Framework|Dependency|Input modalities supported|Output modalities supported|Neural Network|
+|---|---|---|---|---|
+|llama.cpp|https://github.com/ggml-org/llama.cpp|`image`, `text`|`text`|phi-2, Qwen2-VL-2B-Instruct|
+|onnxruntime-genai|https://github.com/microsoft/onnxruntime-genai|`text`|`text`|phi-4-mini-instruct-onnx|
+|mediapipe|https://github.com/google-ai-edge/mediapipe|`text`|`text`|gemma-2b-it-cpu-int4|
+
+{{% notice %}}
+You can find more information on how to build and use this module [here](https://gitlab.arm.com/kleidi/kleidi-examples/large-language-models/-/blob/main/README.md?ref_type=heads).
+{{% /notice %}}
+
 ## Text-to-Speech
 
 This part of the application pipeline uses the Android Text-to-Speech API along with additional logic to produce smooth, natural speech.
 
 In synchronous mode, speech playback begins only after the full LLM response is received. By default, the application operates in asynchronous mode, where speech synthesis starts as soon as a full or partial sentence is ready. Remaining tokens are buffered and processed by the Android Text-to-Speech engine to ensure uninterrupted playback.
+
+You are now familiar with the building blocks of this application and can build them independently for various platforms. In the next step, you build the multi-modal Voice Assistant example, which runs on Android.
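The Android row of the platform tables mentions setting the NDK path and a toolchain file. As a hypothetical sketch only: the repository URL below comes from the diff, but the CMake invocation is an assumption based on standard NDK cross-compilation, not a confirmed build recipe; check each module's README for the exact options.

```shell
# Hypothetical Android build walkthrough for one of the modules.
# Only the final echo runs here; the clone/build steps are shown as
# comments because they need network access and an installed NDK.
repo="https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text"
src_dir="speech-to-text"
build_dir="build-android"

# Network/toolchain steps, for context only (option names are assumptions):
#   git clone "$repo" "$src_dir"
#   cmake -S "$src_dir" -B "$build_dir" \
#     -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK/build/cmake/android.toolchain.cmake" \
#     -DANDROID_ABI=arm64-v8a
#   cmake --build "$build_dir"

echo "would clone $repo into $src_dir and build in $build_dir"
```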

content/learning-paths/mobile-graphics-and-gaming/voice-assistant/4-run.md

Lines changed: 30 additions & 12 deletions
@@ -16,33 +16,51 @@ By default, Android devices ship with developer mode disabled. To enable it, fol
 
 Once developer mode is enabled, connect your phone to your computer with USB. It should appear as a running device in the top toolbar. Select the device and click **Run** (a small green triangle, as shown below). This transfers the app to your phone and launches it.
 
+In the graphic below, a Google Pixel 8 Pro phone is connected to the USB cable:
 
-In the graphic below, a Samsung Galaxy Z Flip 6 phone is connected to the USB cable:
 ![upload image alt-text#center](upload.png "Upload the Voice App")
-=======
+
 ## Launch the Voice Assistant
 
 The app starts with this welcome screen:
 
-![welcome image alt-text#center](voice_assistant_view1.jpg "Welcome Screen")
+![welcome image alt-text#center](voice_assistant_view1.png "Welcome Screen")
 
 Tap **Press to talk** at the bottom of the screen to begin speaking your request.
 
 ## Voice Assistant controls
 
-### View performance counters
+You can use the application controls to enable extra functionality or gather performance data.
 
-You can toggle performance counters such as:
-- Speech recognition time.
-- LLM encode tokens per second.
-- LLM decode tokens per second.
-- Speech generation time.
+|Button|Control name|Description|
+|---|---|---|
+|1|Performance counters|Performance counters are hidden by default. Click this to show the speech recognition time and the LLM encode and decode rates.|
+|2|Speech generation|Speech generation is disabled by default. Click this to use Android Text-to-Speech and get audible answers.|
+|3|Reset conversation|By default, the application keeps context so you can ask follow-up questions. Click this to reset the voice assistant's conversation history.|
 
 Click the icon circled in red in the top left corner to show or hide these metrics:
 
-![performance image alt-text#center](voice_assistant_view2.jpg "Performance Counters")
+![performance image alt-text#center](voice_assistant_view2.png "Performance Counters")
+
+### Multimodal question answering
+
+If you have built the application using the default `llama.cpp` backend, you can also use it in multimodal `(image + text)` question-answering mode.
+
+First, click the image button:
+
+![use image alt-text#center](voice_assistant_multimodal_1.png "Add image button")
+
+This brings up the photos you can choose from:
+
+![choose image alt-text#center](choose_image.png "Choose image from the gallery")
+
+Choose the image and add it for the voice assistant:
+
+![add image alt-text#center](add_image.png "Add image to the question")
+
+You can now ask questions related to this image; the large language model will use both the image and the text for multimodal question answering.
 
-To reset the Voice Assistant's conversation history, click the icon circled in red in the top right:
+![ask question image alt-text#center](voice_assistant_multimodal_2.png "Ask a question about the image")
 
-![reset image alt-text#center](voice_assistant_view3.jpg "Reset the Voice Assistant's Context")
+Now that you have explored how the Android application is set up and built, you can see in detail how the KleidiAI library is used in the next step.
 
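Before clicking **Run** in Android Studio, it can help to confirm the phone is actually visible over USB. A minimal sketch using the standard `adb devices` command (a device listed as `unauthorized` still needs the USB-debugging prompt accepted on the phone):

```shell
# Sketch: check that adb is installed and list connected devices.
# `adb devices` prints one line per device; an empty list usually means
# the cable, USB mode, or developer-mode setting needs attention.
if command -v adb >/dev/null 2>&1; then
  adb devices
  adb_available=yes
else
  echo "adb not found on PATH; install the Android platform tools first"
  adb_available=no
fi
```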
content/learning-paths/mobile-graphics-and-gaming/voice-assistant/5-kleidiai.md

Lines changed: 2 additions & 1 deletion
@@ -31,4 +31,5 @@ To disable KleidiAI during build:
 
 KleidiAI simplifies development by abstracting away low-level optimization: developers can write high-level code while the KleidiAI library selects the most efficient implementation at runtime based on the target hardware. This is possible thanks to its deeply optimized micro-kernels tailored for Arm architectures.
 
-As newer versions of the architecture become available, KleidiAI becomes even more powerful: simply updating the library allows applications like the Voice Assistant to take advantage of the latest architectural improvements, such as SME2, without requiring any code changes. This means better performance on newer devices with no additional effort from developers.
+As newer versions of the architecture become available, KleidiAI becomes even more powerful: simply updating the library allows applications like the multi-modal Voice Assistant to take advantage of the latest architectural improvements, such as SME2, without requiring any code changes. This means better performance on newer devices with no additional effort from developers.
+
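The hunk context above refers to a step for disabling KleidiAI during the build, but the actual option is outside this diff. As a purely hypothetical illustration of the shape such a switch usually takes, `KLEIDIAI_ENABLED` below is an assumed placeholder, not a real option of this project:

```shell
# Hypothetical placeholder only: the project's real flag name is not shown
# in this diff, so do not copy this literally -- check the Learning Path step.
kleidiai_option="-DKLEIDIAI_ENABLED=OFF"
echo "cmake -B build $kleidiai_option"
```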
content/learning-paths/mobile-graphics-and-gaming/voice-assistant/_index.md

Lines changed: 12 additions & 7 deletions
@@ -1,19 +1,23 @@
 ---
-title: Accelerate Voice Assistant performance with KleidiAI and SME2
+title: Accelerate multi-modal Voice Assistant performance with KleidiAI and SME2
 
 minutes_to_complete: 30
 
-who_is_this_for: This is an introductory topic for developers who want to accelerate Voice Assistant performance on Android devices using KleidiAI and SME2.
+who_is_this_for: This is an introductory topic for developers who want to see the pipeline of a multi-modal Voice Assistant application and accelerate its performance on Android devices using KleidiAI and SME2.
 
 learning_objectives:
-- Compile and run a Voice Assistant Android application.
-- Optimize performance using KleidiAI and SME2.
+- Learn about the multi-modal Voice Assistant pipeline and the different components used.
+- Learn about the functionality of the ML components used and how they can be built and benchmarked on various platforms.
+- Compile and run a multi-modal Voice Assistant example based on Android OS.
+- Optimize the performance of the multi-modal Voice Assistant using KleidiAI and SME2.
 
 prerequisites:
-- An Android phone that supports the i8mm Arm architecture feature (8-bit integer matrix multiplication). This Learning Path was tested on a Samsung Galaxy Z Flip 6.
+- An Android phone that supports the i8mm Arm architecture feature (8-bit integer matrix multiplication). This Learning Path was tested on a Google Pixel 8 Pro.
 - A development machine with [Android Studio](https://developer.android.com/studio) installed.
 
-author: Arnaud de Grandmaison
+author:
+- Arnaud de Grandmaison
+- Nina Drozd
 
 skilllevels: Introductory
 subjects: Performance and Architecture

@@ -22,10 +26,11 @@ armips:
 tools_software_languages:
 - Java
 - Kotlin
+- C++
 operatingsystems:
+- Android
 - Linux
 - macOS
-- Android
 
 further_reading:
Five binary image files changed (previews not shown): 183 KB, 262 KB, 6.23 KB, 84.1 KB, 9.81 KB.
