content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_index.md (+9 −8)
@@ -1,20 +1,21 @@
 ---
-title: Profile the performance of ML models on Arm
-
-draft: true
-cascade:
-    draft: true
+title: Profile the Performance of AI and ML Mobile Applications on Arm

 minutes_to_complete: 60

-who_is_this_for: This is an introductory topic for software developers who want to learn how to profile the performance of their ML models running on Arm devices.
+who_is_this_for: This is an introductory topic for software developers who want to learn how to profile the performance of Machine Learning (ML) models running on Arm devices.

 learning_objectives:
     - Profile the execution times of ML models on Arm devices.
     - Profile ML application performance on Arm devices.
+    - Describe how profiling can help optimize the performance of Machine Learning applications.

 prerequisites:
-    - An Arm-powered Android smartphone, and USB cable to connect with it.
+    - An Arm-powered Android smartphone, and a USB cable to connect to it.
+    - For profiling the ML inference, [ArmNN's ExecuteNetwork](https://github.com/ARM-software/armnn/releases).
+    - For profiling the application, [Arm Performance Studio's Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
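One of the learning objectives above is profiling the execution times of ML models. Before reaching for ExecuteNetwork or Streamline, an app can take a first-order baseline itself. The sketch below is illustrative only and not part of the changed files; it assumes LiteRT's Java bindings, and that the interpreter and its input/output objects are set up elsewhere.

```kotlin
import android.os.SystemClock
import org.tensorflow.lite.Interpreter

// Minimal sketch: average wall-clock time of repeated LiteRT inference calls.
// `interpreter`, `input`, and `output` are assumed to be created elsewhere.
fun averageInferenceMillis(
    interpreter: Interpreter,
    input: Any,
    output: Any,
    runs: Int = 50
): Double {
    interpreter.run(input, output) // warm-up run, so one-off init cost is excluded

    val start = SystemClock.elapsedRealtimeNanos()
    repeat(runs) { interpreter.run(input, output) }
    val elapsedNanos = SystemClock.elapsedRealtimeNanos() - start

    return elapsedNanos / 1_000_000.0 / runs // average milliseconds per inference
}
```

A simple average like this hides variance and says nothing about what happens inside the model, which is exactly the gap the tools in this learning path fill.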
content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md (+12 −12)
@@ -4,35 +4,35 @@ review:
         question: >
             Streamline Profiling lets you profile:
         answers:
-            - Arm CPU activity
-            - Arm GPU activity
-            - when your Neural Network is running
-            - All of the above
+            - Arm CPU activity.
+            - Arm GPU activity.
+            - When your Neural Network is running.
+            - All of the above.
         correct_answer: 4
         explanation: >
-            Streamline will show you CPU and GPU activity (and a lot more counters!), and if Custom Activity Maps are used, you can see when your Neural Network and other parts of your application are running.
+            Streamline shows you CPU and GPU activity (and a lot more counters!) and if Custom Activity Maps are used, you can see when your Neural Network and other parts of your application are running.

     - questions:
         question: >
             Does Android Studio have a profiler?
         answers:
-            - "Yes"
-            - "No"
+            - "Yes."
+            - "No."
         correct_answer: 1
         explanation: >
-            Yes, Android Studio has a built-in profiler that can be used to monitor the memory usage of your app among other things
+            Yes, Android Studio has a built-in profiler that can be used to monitor the memory usage of your application, amongst other functions.

     - questions:
         question: >
             Is there a way to profile what is happening inside your Neural Network?
         answers:
-            - Yes, Streamline just shows you out of the box
             - No.
-            - Yes, ArmNN's ExecuteNetwork can do this
-            - Yes, Android Studio Profiler can do this
+            - Yes, Streamline just shows you out of the box.
+            - Yes, ArmNN's ExecuteNetwork can do this.
+            - Yes, Android Studio Profiler can do this.
         correct_answer: 3
         explanation: >
-            Standard profilers don't have an easy way to see what is happening inside an ML framework to see a model running inside it. ArmNN's ExecuteNetwork can do this for TensorFlow Lite models, and ExecuTorch has tools that can do this for PyTorch models.
+            Standard profilers do not have an easy way to see what is happening inside an ML framework to see a model running inside it. ArmNN's ExecuteNetwork can do this for LiteRT models, and ExecuTorch has tools that can do this for PyTorch models.
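The second question above concerns Android Studio's built-in memory profiler. As a companion to it, an app can also snapshot its own memory use programmatically, which is handy for a quick sanity check against the profiler's numbers. This is an illustrative sketch, not part of the change set, using only standard Android APIs; the function name and tag are hypothetical.

```kotlin
import android.os.Debug
import android.util.Log

// Illustrative sketch: log approximate Java/Kotlin-heap and native-heap usage.
// A cheap cross-check on the figures Android Studio's profiler reports.
fun logMemorySnapshot(tag: String = "MemSnapshot") {
    val runtime = Runtime.getRuntime()
    val javaHeapUsedKb = (runtime.totalMemory() - runtime.freeMemory()) / 1024
    val nativeHeapUsedKb = Debug.getNativeHeapAllocatedSize() / 1024

    Log.i(tag, "Java/Kotlin heap: $javaHeapUsedKb KB, native heap: $nativeHeapUsedKb KB")
}
```

Calling this before and after loading a model gives a rough idea of how much of the footprint belongs to the ML framework on the native heap, which is where the next file's native-allocation profiling picks up.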
content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-android-studio.md (+49 −16)
@@ -7,39 +7,72 @@ layout: learningpathall
 ---

 ## Android Memory Profiling
-Memory is often a problem in ML, with ever bigger models and data. For profiling an Android app's memory, Android Studio has a built-in profiler. This can be used to monitor the memory usage of your app, and to find memory leaks.
+Memory is a common problem in ML, with ever-increasing model parameters and datasets. For profiling an Android app's memory, Android Studio has a built-in profiler. You can use this to monitor the memory usage of your app, and to detect memory leaks.

-To find the Profiler, open your project in Android Studio and click on the *View* menu, then *Tool Windows*, and then *Profiler*. This opens the Profiler window. Attach your device in Developer Mode with a USB cable, and then you should be able to select your app's process. Here there are a number of different profiling tasks available.
+### Set up the Profiler

-Most likely with an Android ML app you'll need to look at memory both from the Java/Kotlin side and the native side. The Java/Kotlin side is where the app runs, and may be where buffers are allocated for input and output if, for example, you're using LiteRT (formerly known as TensorFlow Lite). The native side is where the ML framework will run. Looking at the memory consumption for Java/Kotlin and native is 2 separate tasks in the Profiler: *Track Memory Consumption (Java/Kotlin Allocations)* and *Track Memory Consumption (Native Allocations)*.
+* To find the Profiler, open your project in Android Studio, and select the **View** menu.

-Before you start either task, you have to build your app for profiling. The instructions for this and for general profiling setup can be found [here](https://developer.android.com/studio/profile). You will want to start the correct profiling version of the app depending on the task.
+* Next, click **Tool Windows**, and then **Profiler**. This opens the Profiler window.

-[image: profiling tasks]
+* Attach your device in Developer Mode with a USB cable, and then select your app's process. There are a number of different profiling tasks available.

-For the Java/Kotlin side, you want the **debuggable** "Profile 'app' with complete data", which is based off the debug variant. For the native side, you want the **profileable** "Profile 'app' with low overhead", which is based off the release variant.
+Most likely with an Android ML app you will need to look at memory both from the Java/Kotlin side, and the native side:
+
+* The Java/Kotlin side is where the app runs, and might be where buffers are allocated for input and output if, for example, you are using LiteRT.
+* The native side is where the ML framework runs.
+
+{{% notice Note %}}
+Before you start either task, you must build your app for profiling. The instructions for this, and for general profiling setup, can be found at [Profile your app performance](https://developer.android.com/studio/profile) on the Android Studio website. You need to start the correct profiling version of the app depending on the task.
+{{% /notice %}}
+
+Looking at the memory consumption for Java/Kotlin and native, there are two separate tasks in the Profiler:
+
+* **Track Memory Consumption (Java/Kotlin Allocations)**
+* **Track Memory Consumption (Native Allocations)**
+
+[image: profiling tasks]
+
+For the Java/Kotlin side, select **Profile 'app' with complete data**, which is based off the debug variant. For the native side, you want the **profileable** "Profile 'app' with low overhead", which is based off the release variant.

 ### Java/Kotlin

-If you start looking at the [Java/Kotlin side](https://developer.android.com/studio/profile/record-java-kotlin-allocations), choose *Profiler: Run 'app' as debuggable*, and then select the *Track Memory Consumption (Java/Kotlin Allocations)* task. Navigate to the part of the app you wish to profile and then you can start profiling. At the bottom of the Profiling window it should look like Figure 2 below. Click *Start Profiler Task*.
+To investigate the Java/Kotlin side, see the notes on [Record Java/Kotlin allocations](https://developer.android.com/studio/profile/record-java-kotlin-allocations).
+
+Select **Profiler: Run 'app' as debuggable**, and then select the **Track Memory Consumption (Java/Kotlin Allocations)** task.
+
+Navigate to the part of the app that you would like to profile, and then you can start profiling.

-[image: Figure 2]
+The bottom of the profiling window should resemble Figure 4.

-When you're ready, *Stop* the profiling again. Now there will be a nice timeline graph of memory usage. While Android Studio has a nicer interface for the Java/Kotlin side than the native side, the key to the timeline graph may be missing. This key is shown below in Figure 3, so you can refer to the colors from this.
-[image: Figure 3]
+[image: Figure 4]

-The default height of the Profiling view, as well as the timeline graph within it is usually too small, so adjust these heights to get a sensible graph. You can click at different points of the graph to see the memory allocations at that time. If you look according to the key you can see how much memory is allocated by Java, Native, Graphics, Code etc.
+Click **Start profiler task**.

-Looking further down you can see the *Table* of Java/Kotlin allocations for your selected time on the timeline. With ML a lot of your allocations are likely to be byte[] for byte buffers, or possibly int[] for image data, etc. Clicking on the data type will open up the particular allocations, showing their size and when they were allocated. This will help to quickly narrow down their use, and whether they are all needed etc.
+When you're ready, select *Stop* to stop the profiling again.
+
+Now there will be a timeline graph of memory usage. While Android Studio has a more user-friendly interface for the Java/Kotlin side than the native side, the key to the timeline graph might be missing. This key is shown in Figure 3.
+
+[image: Figure 3, the timeline graph key]
+
+If you prefer, you can adjust the default height of the profiling view, as well as the timeline graph within it, as they are usually too small.
+
+Now click on different points of the graph to see the memory allocations at each specific time. Using the key on the graph, you can see how much memory is allocated by different categories of consumption, such as Java, Native, Graphics, and Code.
+
+If you look further down, you can see the **Table** of Java/Kotlin allocations for your selected time on the timeline. With ML, many of your allocations are likely to be scenarios such as byte[] for byte buffers, or possibly int[] for image data. Clicking on the data type opens up the particular allocations, showing their size and when they were allocated. This will help to quickly narrow down their use, and whether they are all needed.

 ### Native

-For the [native side](https://developer.android.com/studio/profile/record-native-allocations), the process is similar but with different options. Choose *Profiler: Run 'app' as profileable*, and then select the *Track Memory Consumption (Native Allocations)* task. Here you have to *Start profiler task from: Process Start*. Choose *Stop* once you've captured enough data.
+For the [native side](https://developer.android.com/studio/profile/record-native-allocations), the process is similar but with different options. Select **Profiler: Run 'app' as profileable**, and then select the **Track Memory Consumption (Native Allocations)** task. Here you have to **Start profiler task from: Process Start**. Select **Stop** once you've captured enough data.

-The Native view doesn't have the same nice timeline graph as the Java/Kotlin side, but it does have the *Table* and *Visualization* tabs. The *Table* tab no longer has a list of allocations, but options to *Arrange by allocation method* or *callstack*. Choose *Arrange by callstack* and then you can trace down which functions were allocating significant memory. Potentially more useful, you can also see Remaining Size.
+The Native view does not provide the same kind of timeline graph as the Java/Kotlin side, but it does have the **Table** and **Visualization** tabs. The **Table** tab no longer has a list of allocations, but options to **Arrange by allocation method** or **callstack**. Select **Arrange by callstack** and then you can trace down which functions allocate significant memory resource. There is also the **Remaining Size** tab, which is arguably more useful.

-In the Visualization tab you can see the callstack as a graph, and once again you can look at total Allocations Size or Remaining Size. If you look at Remaining Size, you can see what is still allocated at the end of the profiling, and by looking a few steps up the stack, probably see which allocations are related to the ML model, by seeing functions that relate to the framework you are using. A lot of the memory may be allocated by that framework rather than in your code, and you may not have much control over it, but it is useful to know where the memory is going.
+In the **Visualization** tab, you can see the callstack as a graph, and once again you can look at total **Allocations Size** or **Remaining Size**. If you look at **Remaining Size**, you can see what remains allocated at the end of the profiling, and by looking a few steps up the stack, probably see which allocations are related to the ML model, by seeing functions that relate to the framework you are using. A lot of the memory may be allocated by that framework rather than in your code, and you may not have much control over it, but it is useful to know where the memory is going.

 ## Other platforms

-On other platforms, you will need a different memory profiler. The objective of working out where the memory is being used is the same, and whether there are issues with leaks or just too much memory being used. There are often trade-offs between memory and speed, and they can be considered more sensibly if the numbers involved are known.
+On other platforms, you will need a different memory profiler. The objective is the same; to investigate memory consumption in terms of identifying whether there are issues with leaks or if there is too much memory being used.
+
+There are often trade-offs between memory and speed, and investigating memory consumption provides data that can help inform assessments of this balance.
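The Java/Kotlin allocation table described in the diff above tends to be dominated by buffer allocations when LiteRT is in use. As a purely illustrative sketch of the kind of code behind those byte[] entries (the model file name, 224x224x3 float input shape, and 1000-class output are all hypothetical):

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Illustrative sketch: the direct ByteBuffer below is the kind of allocation
// that shows up as byte[] entries in the profiler's Java/Kotlin allocation table.
// "model.tflite" and the tensor shapes are hypothetical placeholders.
fun runInference(modelFile: File): FloatArray {
    val interpreter = Interpreter(modelFile)

    // A direct buffer for a 224x224 RGB float input (4 bytes per value).
    val input = ByteBuffer.allocateDirect(4 * 224 * 224 * 3).order(ByteOrder.nativeOrder())
    // ... fill `input` with preprocessed pixel data ...

    val output = Array(1) { FloatArray(1000) } // hypothetical 1000-class output
    interpreter.run(input, output)

    interpreter.close()
    return output[0]
}
```

If the allocation table shows many short-lived byte[] or int[] entries at inference time, the usual fix is to allocate buffers like these once and reuse them across frames rather than recreating them per inference.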