
Commit dfa240b

Updating docs and other refs to xnn_executor_runner
Note: this change only affects the CMake build process. Similar changes will be necessary for the Buck build flow.
1 parent: 2e8d40c

File tree: 4 files changed, +14 −14 lines changed


backends/xnnpack/README.md

Lines changed: 3 additions & 3 deletions
@@ -92,7 +92,7 @@ After lowering to the XNNPACK Program, we can then prepare it for executorch and


### Running the XNNPACK Model with CMake
-After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the xnn_executor_runner, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build as follows:
+After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the executor_runner, which is a sample wrapper for the ExecuTorch Runtime. The XNNPACK Backend is enabled via the compilation flag `-DEXECUTORCH_BUILD_XNNPACK=ON`. We first begin by configuring the CMake build as follows:
```bash
# cd to the root of executorch repo
cd executorch
@@ -119,9 +119,9 @@ Then you can build the runtime components with
cmake --build cmake-out -j9 --target install --config Release
```

-Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`. You can run the executable with the model you generated as follows:
+Now you should be able to find the executable built at `./cmake-out/executor_runner`. You can run the executable with the model you generated as follows:
```bash
-./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_fp32.pte
+./cmake-out/executor_runner --model_path=./mv2_xnnpack_fp32.pte
```

## Help & Improvements
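
For context, the full configure-build-run sequence the updated README describes would look roughly like the sketch below. The `-DEXECUTORCH_BUILD_XNNPACK=ON` flag, the `cmake-out` directory, and the build and run commands come from the docs in this diff; `-DCMAKE_BUILD_TYPE=Release` is a generic CMake option assumed here for completeness.

```bash
# Configure from the repo root with the XNNPACK backend compiled in,
# so the generic executor_runner can execute XNNPACK-delegated models.
cd executorch
cmake -DCMAKE_BUILD_TYPE=Release \
      -DEXECUTORCH_BUILD_XNNPACK=ON \
      -Bcmake-out .

# Build and install the runtime components (command as shown in the README).
cmake --build cmake-out -j9 --target install --config Release

# Run the exported model with the renamed runner.
./cmake-out/executor_runner --model_path=./mv2_xnnpack_fp32.pte
```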

docs/source/backend-delegates-xnnpack-reference.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre
When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors.

#### **Profiling**
-Basic profiling for the XNNPACK delegate can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) to profile ExecuTorch models and use the Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `xnn_executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).
+Basic profiling for the XNNPACK delegate can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) to profile ExecuTorch models and use the Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)).

[comment]: <> (TODO: Refactor quantizer to a more official quantization doc)
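
For reference, the two flags named in this paragraph would be added at configure time roughly as below. This is a sketch: only the two profiling flags are taken from the doc text, and the rest of the invocation is assumed from the CMake setup used elsewhere in this commit.

```bash
# Sketch: compile the event tracer in; ENABLE_XNNPACK_PROFILING adds
# XNNPACK's per-operator detail on top of basic delegate profiling.
cmake -DCMAKE_BUILD_TYPE=Release \
      -DEXECUTORCH_BUILD_XNNPACK=ON \
      -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
      -DENABLE_XNNPACK_PROFILING=ON \
      -Bcmake-out .
cmake --build cmake-out -j9 --target install --config Release
```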

docs/source/tutorial-xnnpack-delegate-lowering.md

Lines changed: 5 additions & 5 deletions
@@ -141,7 +141,7 @@ Note in the example above,
The generated model file will be named `[model_name]_xnnpack_[qs8/fp32].pte` depending on the arguments supplied.

## Running the XNNPACK Model with CMake
-After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the xnn_executor_runner, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build as follows:
+After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the executor_runner, which is a sample wrapper for the ExecuTorch Runtime. The XNNPACK Backend is enabled via the compilation flag `-DEXECUTORCH_BUILD_XNNPACK=ON`. We first begin by configuring the CMake build as follows:
```bash
# cd to the root of executorch repo
cd executorch
@@ -168,15 +168,15 @@ Then you can build the runtime components with
cmake --build cmake-out -j9 --target install --config Release
```

-Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`. You can run the executable with the model you generated as follows:
+Now you should be able to find the executable built at `./cmake-out/executor_runner`. You can run the executable with the model you generated as follows:
```bash
-./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_fp32.pte
+./cmake-out/executor_runner --model_path=./mv2_xnnpack_fp32.pte
# or to run the quantized variant
-./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
+./cmake-out/executor_runner --model_path=./mv2_xnnpack_q8.pte
```

## Building and Linking with the XNNPACK Backend
You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83) and link it with your application binary, such as an Android or iOS application. For more information, you may take a look at this [resource](using-executorch-android.md) next.

## Profiling
-To enable profiling in the `xnn_executor_runner`, pass the flags `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` and `-DEXECUTORCH_BUILD_DEVTOOLS=ON` to the build command (add `-DENABLE_XNNPACK_PROFILING=ON` for additional details). This enables ETDump generation when running the inference and adds command line flags for profiling (see `xnn_executor_runner --help` for details).
+To enable profiling in the `executor_runner`, pass the flags `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` and `-DEXECUTORCH_BUILD_DEVTOOLS=ON` to the build command (add `-DENABLE_XNNPACK_PROFILING=ON` for additional details). This enables ETDump generation when running the inference and adds command line flags for profiling (see `executor_runner --help` for details).
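
Combining the flags quoted above, a profiling-enabled build and run might look like the sketch below. Only `--model_path` and `--help` are assumed as runner flags here; the ETDump-related options are whatever the `--help` output reports.

```bash
# Configure with Developer Tools and event tracing compiled in.
cmake -DEXECUTORCH_BUILD_XNNPACK=ON \
      -DEXECUTORCH_BUILD_DEVTOOLS=ON \
      -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \
      -DENABLE_XNNPACK_PROFILING=ON \
      -Bcmake-out .
cmake --build cmake-out -j9 --target install --config Release

# List the profiling-related command line flags this build exposes.
./cmake-out/executor_runner --help

# Run inference; with the event tracer built in, ETDump output can be
# generated for the Developer Tools' Inspector API.
./cmake-out/executor_runner --model_path=./mv2_xnnpack_fp32.pte
```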

examples/xnnpack/README.md

Lines changed: 5 additions & 5 deletions
@@ -24,7 +24,7 @@ The following command will produce a floating-point XNNPACK delegated model `mv2
python3 -m examples.xnnpack.aot_compiler --model_name="mv2" --delegate
```

-Once we have the model binary (.pte) file, let's run it with the ExecuTorch runtime using the `xnn_executor_runner`. With CMake, you first configure the build with the following:
+Once we have the model binary (.pte) file, let's run it with the ExecuTorch runtime using the `executor_runner`. With CMake, you first configure the build with the following:

```bash
# cd to the root of executorch repo
@@ -56,7 +56,7 @@ cmake --build cmake-out -j9 --target install --config Release
Now, finally, you should be able to run this model with the following command:

```bash
-./cmake-out/backends/xnnpack/xnn_executor_runner --model_path ./mv2_xnnpack_fp32.pte
+./cmake-out/executor_runner --model_path ./mv2_xnnpack_fp32.pte
```

## Quantization
@@ -80,7 +80,7 @@ python3 -m examples.xnnpack.quantization.example --help
```

## Running the XNNPACK Model with CMake
-After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the xnn_executor_runner, which is a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We first begin by configuring the CMake build as follows:
+After exporting the XNNPACK Delegated model, we can now try running it with example inputs using CMake. We can build and use the executor_runner, which is a sample wrapper for the ExecuTorch Runtime. The XNNPACK Backend is enabled via the compilation flag `-DEXECUTORCH_BUILD_XNNPACK=ON`. We first begin by configuring the CMake build as follows:
```bash
# cd to the root of executorch repo
cd executorch
@@ -107,9 +107,9 @@ Then you can build the runtime components with
cmake --build cmake-out -j9 --target install --config Release
```

-Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`. You can run the executable with the model you generated as follows:
+Now you should be able to find the executable built at `./cmake-out/executor_runner`. You can run the executable with the model you generated as follows:
```bash
-./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
+./cmake-out/executor_runner --model_path=./mv2_quantized.pte
```

## Delegating a Quantized Model
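
To connect the quantization section to the renamed runner, the end-to-end flow sketched below produces and runs the quantized model referenced above. The module path and the runner command come from this README; the `--model_name` argument to the quantization example is a hypothetical illustration, so consult its `--help` output for the real options.

```bash
# Show the quantization example's options (command from the README above).
python3 -m examples.xnnpack.quantization.example --help

# Hypothetical invocation; the exact flags that emit mv2_quantized.pte may
# differ from this sketch -- see the --help output above.
python3 -m examples.xnnpack.quantization.example --model_name="mv2"

# Run the quantized model with the renamed runner.
./cmake-out/executor_runner --model_path=./mv2_quantized.pte
```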
