`content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_index.md` (2 additions, 2 deletions)

```diff
@@ -12,8 +12,8 @@ learning_objectives:
 
 prerequisites:
     - An Arm-powered Android smartphone, and a USB cable to connect to it.
-    - For profiling the ML inference, [ArmNN's ExecuteNetwork](https://github.com/ARM-software/armnn/releases).
-    - For profiling the application, [Arm Performance Studio's Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
+    - For profiling the ML inference, [Arm NN ExecuteNetwork](https://github.com/ARM-software/armnn/releases).
+    - For profiling the application, [Arm Performance Studio with Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
```
`content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md` (2 additions, 2 deletions)

```diff
@@ -28,11 +28,11 @@ review:
         answers:
             - No.
             - Yes, Streamline just shows you out of the box.
-            - Yes, ArmNN's ExecuteNetwork can do this.
+            - Yes, Arm NN ExecuteNetwork can do this.
             - Yes, Android Studio Profiler can do this.
         correct_answer: 3
         explanation: >
-            Standard profilers do not have an easy way to see what is happening inside an ML framework to see a model running inside it. ArmNN's ExecuteNetwork can do this for LiteRT models, and ExecuTorch has tools that can do this for PyTorch models.
+            Standard profilers do not have an easy way to see what is happening inside an ML framework to see a model running inside it. Arm NN ExecuteNetwork can do this for LiteRT models, and ExecuTorch has tools that can do this for PyTorch models.
```
`content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-streamline.md` (1 addition, 1 deletion)

```diff
@@ -204,7 +204,7 @@ For the example project, add it into the `onCreate()` function of `MainActivity.
 
 In the example app, you can add this in the `onCreate()` function of `MainActivity.kt` after the `Module.load()` call to load the `model.pth`.
 
-This *colored marker with a string* annotation will add the string and time to Streamline's log view, and it appears like the image shown below in Streamline's timeline (in the example app, ArmNN is not used, so there are no white ArmNN markers):
+This *colored marker with a string* annotation will add the string and time to Streamline's log view, and it appears like the image shown below in Streamline's timeline (in the example app, Arm NN is not used, so there are no white Arm NN markers):
```
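The marker call itself is not shown in this hunk. As a rough sketch of what such a call site could look like — assuming a hypothetical JNI binding named `annotateMarker` over Streamline's native annotation API, which is not code taken from this Learning Path — consider:

```kotlin
import android.graphics.Color
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import org.pytorch.Module

class MainActivity : AppCompatActivity() {

    companion object {
        init {
            // Assumed name for a native library wrapping Streamline's
            // C annotation API via JNI; the real project may differ.
            System.loadLibrary("streamline_annotate")
        }
    }

    // Hypothetical JNI binding for a "colored marker with a string" annotation.
    private external fun annotateMarker(color: Int, text: String)

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Load the model first, as the Learning Path describes...
        val module = Module.load(filesDir.resolve("model.pth").absolutePath)
        // ...then emit the marker, so its timestamp lands right after the
        // model-load phase in Streamline's log view and timeline.
        annotateMarker(Color.BLUE, "model.pth loaded")
    }
}
```

The only point of the sketch is the ordering: the annotation is emitted immediately after `Module.load()`, so the marker brackets the end of model loading on the timeline.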
`content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md` (6 additions, 6 deletions)

```diff
@@ -6,18 +6,18 @@ weight: 6
 layout: learningpathall
 ---
 
-## ArmNN's Network Profiler
-One way of running LiteRT models is to use ArmNN, which is open-source network machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, ArmNN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model.
+## Arm NN Network Profiler
+One way of running LiteRT models is to use Arm NN, which is open-source network machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, Arm NN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model.
 
-If you are using LiteRT without ArmNN, then the output from `ExecuteNetwork` is more of an indication than a definitive answer, but it can still be useful in identifying any obvious problems.
+If you are using LiteRT without Arm NN, then the output from `ExecuteNetwork` is more of an indication than a definitive answer, but it can still be useful in identifying any obvious problems.
 
 ### Download a LiteRT Model
 
 To try this out, you can download a LiteRT model from the [Arm Model Zoo](https://github.com/ARM-software/ML-zoo). Specifically for this Learning Path, you will download [mobilenet tflite](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_int8/mobilenet_v2_1.0_224_INT8.tflite).
 
 ### Download and setup ExecuteNetwork
 
-You can download `ExecuteNetwork` from the [ArmNN GitHub](https://github.com/ARM-software/armnn/releases). Download the version appropriate for the Android phone that you are testing on, ensuring that it matches the Android version and architecture of the phone. If you are unsure of the architecture, you can use a lower one, but you might miss out on some optimizations.`ExecuteNetwork` is included inside the `tar.gz` archive that you download. Among the other release downloads on the ArmNN Github is a separate file for the `aar` delegate which you can also easily download.
+You can download `ExecuteNetwork` from the [Arm NN GitHub](https://github.com/ARM-software/armnn/releases). Download the version appropriate for the Android phone that you are testing on, ensuring that it matches the Android version and architecture of the phone. If you are unsure of the architecture, you can use a lower one, but you might miss out on some optimizations.`ExecuteNetwork` is included inside the `tar.gz` archive that you download. Among the other release downloads on the Arm NN Github is a separate file for the `aar` delegate which you can also easily download.
 
 To run `ExecuteNetwork,` you need to use `adb` to push the model and the executable to your phone, and then run it from the adb shell. `adb` is included with Android Studio, but you might need to add it to your path. Android Studio normally installs it to a location such as:
 Push all the `.so` library files that are in the base folder of the `tar.gz` archive you downloaded, alongside `ExecuteNetwork`, and all the `.so` files in the `delegate` sub-folder.
@@ -74,7 +74,7 @@ Depending on the size of your model, the output will probably be quite large. Yo
 
 At the top is the summary, with the setup time and inference time of the two runs, which look something like this:
```
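For orientation, the push-and-run flow this page describes might look like the following `adb` session. The destination directory, library layout, and the `-m`/`-c` flag spellings are assumptions based on typical Arm NN releases; run `./ExecuteNetwork --help` to see the options your build actually supports.

```bash
# Push the executable, the model, and the shared libraries to the phone.
# /data/local/tmp is a commonly used writable, executable location on Android.
adb push ExecuteNetwork /data/local/tmp/
adb push mobilenet_v2_1.0_224_INT8.tflite /data/local/tmp/
adb push ./*.so /data/local/tmp/            # libs from the archive base folder
adb push ./delegate/*.so /data/local/tmp/   # libs from the delegate sub-folder

# Open a shell on the device; the remaining commands run inside that shell.
adb shell
cd /data/local/tmp
chmod +x ExecuteNetwork
# Point the loader at the pushed libraries and run the model on the CPU backend.
LD_LIBRARY_PATH=. ./ExecuteNetwork -m mobilenet_v2_1.0_224_INT8.tflite -c CpuAcc
```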
`content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-general.md` (2 additions, 2 deletions)

```diff
@@ -11,8 +11,8 @@ App profilers provide a good overall view of performance, but you might want to
 
 With general profilers this is hard to do, as there needs to be annotation inside the ML framework code to retrieve the information. It is a complex task to write the profiling annotation throughout the framework, so it is easier to use tools from a framework or inference engine that already has the required instrumentation.
 
-Depending on the model you use, your choice of tools will vary. For example, if you are using LiteRT (formerly TensorFlow Lite), Arm provides the ArmNN delegate that you can run with the model running on Linux or Android, CPU or GPU.
+Depending on the model you use, your choice of tools will vary. For example, if you are using LiteRT (formerly TensorFlow Lite), Arm provides the Arm NN delegate that you can run with the model running on Linux or Android, CPU or GPU.
 
-ArmNN in turn provides a tool called ExecuteNetwork that can run the model and provide layer timings, amongst other useful information.
+Arm NN in turn provides a tool called ExecuteNetwork that can run the model and provide layer timings, amongst other useful information.
 
 If you are using PyTorch, you will probably use ExecuTorch, which is the on-device inference runtime for your Android phone. ExecuTorch has a profiler available alongside it.
```
`content/learning-paths/smartphones-and-mobile/totalcompute/build.md` (1 addition, 1 deletion)

```diff
@@ -7,7 +7,7 @@ weight: 2 # 1 is first, 2 is second, etc.
 # Do not modify these elements
 layout: "learningpathall"
 ---
-The [Arm Total Compute](https://developer.arm.com/Tools%20and%20Software/Total%20Compute) reference software stack is a fully integrated open-source stack, from firmware up to Android. he stack includes open-source code available from the relevant upstream projects: SCP firmware, Trusted firmware, Linux kernel, Android, and ArmNN.
+The [Arm Total Compute](https://developer.arm.com/Tools%20and%20Software/Total%20Compute) reference software stack is a fully integrated open-source stack, from firmware up to Android. he stack includes open-source code available from the relevant upstream projects: SCP firmware, Trusted firmware, Linux kernel, Android, and Arm NN.
```