Commit 7f6ae9a

Merge pull request #1571 from BmanClark/main
minor fixes
2 parents d8cac1e + 0c1deb0

2 files changed: +2 −2 lines changed

content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/_index.md

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ learning_objectives:
 
 prerequisites:
 - An Arm-powered Android smartphone, and a USB cable to connect to it.
-- For profiling the ML inference, [Arm NN ExecuteNetwork](https://github.com/ARM-software/armnn/releases).
+- For profiling the ML inference, [Arm NN ExecuteNetwork](https://github.com/ARM-software/armnn/releases) or [ExecuTorch](https://github.com/pytorch/executorch).
 - For profiling the application, [Arm Performance Studio with Streamline](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
 - Android Studio Profiler.

content/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/nn-profiling-executenetwork.md

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ layout: learningpathall
 ---
 
 ## Arm NN Network Profiler
-One way of running LiteRT models is to use Arm NN, which is open-source network machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, Arm NN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model.
+One way of running LiteRT models is to use Arm NN, which is open-source machine learning (ML) software. This is available as a delegate to the standard LiteRT interpreter. But to profile the model, Arm NN comes with a command-line utility called `ExecuteNetwork`. This program runs the model without the rest of the app. It is able to output layer timings and other useful information to report where there might be bottlenecks within your model.
 
 If you are using LiteRT without Arm NN, then the output from `ExecuteNetwork` is more of an indication than a definitive answer, but it can still be useful in identifying any obvious problems.
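The paragraph changed above describes running a model standalone through `ExecuteNetwork` to get layer timings. As a rough sketch of what such an invocation looks like (the flag names are recalled from memory and should be verified against `ExecuteNetwork --help` for your Arm NN release; `model.tflite` and the backend list are placeholders):

```shell
# Sketch only: run a LiteRT model through Arm NN's ExecuteNetwork.
# -m  : path to the model file (placeholder name here)
# -c  : preferred compute backends, tried in order (GpuAcc, CpuAcc, CpuRef)
# Profiling/iteration flags vary by release -- check --help before relying
# on any of them.
./ExecuteNetwork -m model.tflite -c GpuAcc,CpuAcc
```

When profiling output is enabled, the per-layer timings reported by the tool are what point at bottleneck layers in the model, as the commit's text describes.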
