Commit fc5abce

Merge pull request #1437 from jasonrandrews/spelling
Modify Arm NN spelling
2 parents ba70736 + 9fce79a commit fc5abce

File tree

8 files changed: +17 -17 lines changed


content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_index.md

Lines changed: 2 additions & 2 deletions
@@ -12,8 +12,8 @@ learning_objectives:
 
 prerequisites:
 - An Arm-powered Android smartphone, and a USB cable to connect to it.
-- For profiling the ML inference, [ArmNN's ExecuteNetwork](https://github.com/ARM-software/armnn/releases).
-- For profiling the application, [Arm Performance Studio's Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
+- For profiling the ML inference, [Arm NN ExecuteNetwork](https://github.com/ARM-software/armnn/releases).
+- For profiling the application, [Arm Performance Studio with Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
 - Android Studio Profiler.
 

content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md

Lines changed: 2 additions & 2 deletions
@@ -28,11 +28,11 @@ review:
 answers:
 - No.
 - Yes, Streamline just shows you out of the box.
-- Yes, ArmNN's ExecuteNetwork can do this.
+- Yes, Arm NN ExecuteNetwork can do this.
 - Yes, Android Studio Profiler can do this.
 correct_answer: 3
 explanation: >
-Standard profilers do not have an easy way to see what is happening inside an ML framework to see a model running inside it. ArmNN's ExecuteNetwork can do this for LiteRT models, and ExecuTorch has tools that can do this for PyTorch models.
+Standard profilers do not have an easy way to see what is happening inside an ML framework to see a model running inside it. Arm NN ExecuteNetwork can do this for LiteRT models, and ExecuTorch has tools that can do this for PyTorch models.
 
 

content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/app-profiling-streamline.md

Lines changed: 1 addition & 1 deletion
@@ -204,7 +204,7 @@ For the example project, add it into the `onCreate()` function of `MainActivity.
 
 In the example app, you can add this in the `onCreate()` function of `MainActivity.kt` after the `Module.load()` call to load the `model.pth`.
 
-This *colored marker with a string* annotation will add the string and time to Streamline's log view, and it appears like the image shown below in Streamline's timeline (in the example app, ArmNN is not used, so there are no white ArmNN markers):
+This *colored marker with a string* annotation will add the string and time to Streamline's log view, and it appears like the image shown below in Streamline's timeline (in the example app, Arm NN is not used, so there are no white Arm NN markers):
 
 ![Streamline image alt-text#center](streamline_marker.png "Figure 2. Streamline timeline markers")

content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md

Lines changed: 6 additions & 6 deletions
@@ -6,18 +6,18 @@ weight: 6
 layout: learningpathall
 ---
 
-## ArmNN's Network Profiler
-One way of running LiteRT models is to use ArmNN, which is open-source network machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, ArmNN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model.
+## Arm NN Network Profiler
+One way of running LiteRT models is to use Arm NN, which is open-source network machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, Arm NN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model.
 
-If you are using LiteRT without ArmNN, then the output from `ExecuteNetwork` is more of an indication than a definitive answer, but it can still be useful in identifying any obvious problems.
+If you are using LiteRT without Arm NN, then the output from `ExecuteNetwork` is more of an indication than a definitive answer, but it can still be useful in identifying any obvious problems.
 
 ### Download a LiteRT Model
 
 To try this out, you can download a LiteRT model from the [Arm Model Zoo](https://github.com/ARM-software/ML-zoo). Specifically for this Learning Path, you will download [mobilenet tflite](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_int8/mobilenet_v2_1.0_224_INT8.tflite).
 
 ### Download and setup ExecuteNetwork
 
-You can download `ExecuteNetwork` from the [ArmNN GitHub](https://github.com/ARM-software/armnn/releases). Download the version appropriate for the Android phone that you are testing on, ensuring that it matches the Android version and architecture of the phone. If you are unsure of the architecture, you can use a lower one, but you might miss out on some optimizations. `ExecuteNetwork` is included inside the `tar.gz` archive that you download. Among the other release downloads on the ArmNN GitHub is a separate file for the `aar` delegate, which you can also easily download.
+You can download `ExecuteNetwork` from the [Arm NN GitHub](https://github.com/ARM-software/armnn/releases). Download the version appropriate for the Android phone that you are testing on, ensuring that it matches the Android version and architecture of the phone. If you are unsure of the architecture, you can use a lower one, but you might miss out on some optimizations. `ExecuteNetwork` is included inside the `tar.gz` archive that you download. Among the other release downloads on the Arm NN GitHub is a separate file for the `aar` delegate, which you can also easily download.
 
 To run `ExecuteNetwork`, you need to use `adb` to push the model and the executable to your phone, and then run it from the adb shell. `adb` is included with Android Studio, but you might need to add it to your path. Android Studio normally installs it to a location such as:
 
@@ -31,7 +31,7 @@ adb push ExecuteNetwork /data/local/tmp/
 adb push libarm_compute.so /data/local/tmp/
 adb push libarmnn.so /data/local/tmp/
 adb push libarmnn_support_library.so /data/local/tmp/
-# more ArmNN .so library files
+# more Arm NN .so library files
 ```
 Push all the `.so` library files that are in the base folder of the `tar.gz` archive you downloaded, alongside `ExecuteNetwork`, and all the `.so` files in the `delegate` sub-folder.
 

@@ -74,7 +74,7 @@ Depending on the size of your model, the output will probably be quite large. Yo
 
 At the top is the summary, with the setup time and inference time of the two runs, which look something like this:
 
-```text
+```output
 Info: ArmNN v33.2.0
 Info: Initialization time: 7.20 ms.
 Info: ArmnnSubgraph creation
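
The run step itself falls in lines elided between the hunks above. As a minimal sketch of the end-to-end flow this file describes, assuming everything is pushed to `/data/local/tmp` (the raw model URL is inferred from the blob link above, and the `-m`, `-c`, and `-e` flags are assumptions based on recent Arm NN releases, so check `./ExecuteNetwork --help` for your version):

```bash
# 1. Fetch the INT8 MobileNet v2 model (URL inferred from the GitHub blob link).
curl -L -o mobilenet_v2_1.0_224_INT8.tflite \
  "https://github.com/ARM-software/ML-zoo/raw/master/models/image_classification/mobilenet_v2_1.0_224/tflite_int8/mobilenet_v2_1.0_224_INT8.tflite"

# 2. Push the model next to ExecuteNetwork and the Arm NN libraries.
adb push mobilenet_v2_1.0_224_INT8.tflite /data/local/tmp/

# 3. On the phone, run with layer profiling enabled (interactive adb shell).
adb shell
cd /data/local/tmp
chmod +x ExecuteNetwork
# -m model file, -c compute backend, -e enable profiling: assumed flag names.
LD_LIBRARY_PATH=/data/local/tmp ./ExecuteNetwork \
  -m mobilenet_v2_1.0_224_INT8.tflite \
  -c CpuAcc \
  -e
```

With profiling enabled, a summary like the one in the last hunk, with initialization and inference times, appears at the top of the output.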

content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-general.md

Lines changed: 2 additions & 2 deletions
@@ -11,8 +11,8 @@ App profilers provide a good overall view of performance, but you might want to
 
 With general profilers this is hard to do, as there needs to be annotation inside the ML framework code to retrieve the information. It is a complex task to write the profiling annotation throughout the framework, so it is easier to use tools from a framework or inference engine that already has the required instrumentation.
 
-Depending on the model you use, your choice of tools will vary. For example, if you are using LiteRT (formerly TensorFlow Lite), Arm provides the ArmNN delegate that you can run with the model running on Linux or Android, CPU or GPU.
+Depending on the model you use, your choice of tools will vary. For example, if you are using LiteRT (formerly TensorFlow Lite), Arm provides the Arm NN delegate that you can run with the model running on Linux or Android, CPU or GPU.
 
-ArmNN in turn provides a tool called ExecuteNetwork that can run the model and provide layer timings, amongst other useful information.
+Arm NN in turn provides a tool called ExecuteNetwork that can run the model and provide layer timings, amongst other useful information.
 
 If you are using PyTorch, you will probably use ExecuTorch, which is the on-device inference runtime for your Android phone. ExecuTorch has a profiler available alongside it.

content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/plan.txt

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ here's how to do that...
 Also Android Profiler, memory example
 
 Ml network, it will depend on the inference engine you are using
-- here's an example for if you are using ArmNN with TFLite
+- here's an example for if you are using Arm NN with TFLite
 - if you're not using it, it may still have some useful information, but different operators will be used and their performance will be different
 can see structure with netron or google model explorer to compare operators or different versions of networks
 may need to use a conversion tool to convert to TFLite (or whatever your inference engine wants)

content/learning-paths/smartphones-and-mobile/totalcompute/_review.md

Lines changed: 2 additions & 2 deletions
@@ -28,10 +28,10 @@ review:
 - "Trusted firmware"
 - "Android"
 - "CMSIS"
-- "ArmNN"
+- "Arm NN"
 correct_answer: 3
 explanation: >
-The stack includes open-source code available from these upstream projects: SCP firmware, Trusted firmware, Linux kernel, Android, and ArmNN.
+The stack includes open-source code available from these upstream projects: SCP firmware, Trusted firmware, Linux kernel, Android, and Arm NN.
 
 
 # ================================================================================

content/learning-paths/smartphones-and-mobile/totalcompute/build.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ weight: 2 # 1 is first, 2 is second, etc.
 # Do not modify these elements
 layout: "learningpathall"
 ---
-The [Arm Total Compute](https://developer.arm.com/Tools%20and%20Software/Total%20Compute) reference software stack is a fully integrated open-source stack, from firmware up to Android. The stack includes open-source code available from the relevant upstream projects: SCP firmware, Trusted firmware, Linux kernel, Android, and ArmNN.
+The [Arm Total Compute](https://developer.arm.com/Tools%20and%20Software/Total%20Compute) reference software stack is a fully integrated open-source stack, from firmware up to Android. The stack includes open-source code available from the relevant upstream projects: SCP firmware, Trusted firmware, Linux kernel, Android, and Arm NN.
 
 ## Download and install the FVP
 