| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "code", |
| 5 | + "execution_count": null, |
| 6 | + "metadata": {}, |
| 7 | + "outputs": [], |
| 8 | + "source": [ |
| 9 | + "# Copyright 2025 Arm Limited and/or its affiliates.\n", |
| 10 | + "#\n", |
| 11 | + "# This source code is licensed under the BSD-style license found in the\n", |
| 12 | + "# LICENSE file in the root directory of this source tree." |
| 13 | + ] |
| 14 | + }, |
| 15 | + { |
| 16 | + "cell_type": "markdown", |
| 17 | + "metadata": {}, |
| 18 | + "source": [ |
| 19 | + "# VGF Backend flow example\n", |
| 20 | + "\n", |
| 21 | + "This guide demonstrates the full flow for lowering a module using the VGF backend using ExecuTorch. \n", |
| 22 | + "Tested on Linux x86_64. If something is not working for you, please raise a GitHub issue and tag Arm.\n", |
| 23 | + "\n", |
| 24 | + "Before you begin:\n", |
| 25 | + "1. (In a clean virtual environment with a compatible Python version) Install executorch using `./install_executorch.sh`\n", |
| 26 | + "2. Install MLSDK and Tosa using `examples/arm/setup.sh --disable-ethos-u-deps --enable-mlsdk-deps (For further guidance, refer to https://docs.pytorch.org/executorch/main/tutorial-arm.html)\n", |
| 27 | + "3. Export vulkan environment variables and add MLSDK components to PATH and LD_LIBRARY_PATH using `examples/arm/ethos-u-scratch/setup_path.sh`\n", |
| 28 | + "\n", |
| 29 | + "With all commands executed from the base `executorch` folder.\n", |
| 30 | + "\n", |
| 31 | + "\n", |
| 32 | + "\n", |
| 33 | + "*Some scripts in this notebook produce long output logs: Configuring the 'Customizing Notebook Layout' settings to enable 'Output:scrolling' and setting 'Output:Text Line Limit' makes this more manageable*" |
| 34 | + ] |
| 35 | + }, |
| 36 | + { |
| 37 | + "cell_type": "markdown", |
| 38 | + "metadata": {}, |
| 39 | + "source": [ |
| 40 | + "## AOT Flow\n", |
| 41 | + "\n", |
| 42 | + "The first step is creating the PyTorch module and exporting it. Exporting converts the python code in the module into a graph structure. The result is still runnable python code, which can be displayed by printing the `graph_module` of the exported program. " |
| 43 | + ] |
| 44 | + }, |
| 45 | + { |
| 46 | + "cell_type": "code", |
| 47 | + "execution_count": null, |
| 48 | + "metadata": {}, |
| 49 | + "outputs": [], |
| 50 | + "source": [ |
| 51 | + "import torch\n", |
| 52 | + "\n", |
| 53 | + "class Add(torch.nn.Module):\n", |
| 54 | + " def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:\n", |
| 55 | + " return x + y\n", |
| 56 | + "\n", |
| 57 | + "example_inputs = (torch.ones(1,1,1,1),torch.ones(1,1,1,1))\n", |
| 58 | + "\n", |
| 59 | + "model = Add()\n", |
| 60 | + "model = model.eval()\n", |
| 61 | + "exported_program = torch.export.export_for_training(model, example_inputs)\n", |
| 62 | + "graph_module = exported_program.module()\n", |
| 63 | + "\n", |
| 64 | + "_ = graph_module.print_readable()" |
| 65 | + ] |
| 66 | + }, |
| 67 | + { |
| 68 | + "cell_type": "markdown", |
| 69 | + "metadata": {}, |
| 70 | + "source": [ |
| 71 | + "# VGF backend supports both INT and FP targets. \n", |
| 72 | + "\n", |
| 73 | + "To lower the graph_module for FP targets using the VGF backend, we run it through the default FP lowering pipeline. \n", |
| 74 | + "\n", |
| 75 | + "FP lowering can be customized for different subgraphs; the sequence shown here is the recommended workflow for VGF.\n", |
| 76 | + "Because we are staying in floating-point precision, no calibration with example inputs is required. \n", |
| 77 | + "\n", |
| 78 | + "If you print the module again, you will see that nodes are left in FP form (or annotated with any necessary casts) without any quantize/dequantize wrappers.\n" |
| 79 | + ] |
| 80 | + }, |
| 81 | + { |
| 82 | + "cell_type": "code", |
| 83 | + "execution_count": null, |
| 84 | + "metadata": {}, |
| 85 | + "outputs": [], |
| 86 | + "source": [ |
| 87 | + "from executorch.backends.arm.arm_backend import ArmCompileSpecBuilder\n", |
| 88 | + "from executorch.backends.arm.tosa_specification import ( \n", |
| 89 | + " TosaSpecification,\n", |
| 90 | + ")\n", |
| 91 | + "\n", |
| 92 | + "# Create a compilation spec describing the floating point target.\n", |
| 93 | + "tosa_spec = TosaSpecification.create_from_string(\"TOSA-1.0+FP\")\n", |
| 94 | + "\n", |
| 95 | + "spec_builder = ArmCompileSpecBuilder().vgf_compile_spec(tosa_spec)\n", |
| 96 | + "compile_spec = spec_builder.build()\n", |
| 97 | + "\n", |
| 98 | + "_ = graph_module.print_readable()\n", |
| 99 | + "\n", |
| 100 | + "# Create a new exported program using the graph_module\n", |
| 101 | + "exported_program = torch.export.export_for_training(graph_module, example_inputs)" |
| 102 | + ] |
| 103 | + }, |
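| | +  {
| | +   "cell_type": "markdown",
| | +   "metadata": {},
| | +   "source": [
| | +    "The FP `exported_program` can be lowered and serialized in the same way as the quantized program is further down in this notebook. The cell below is a minimal sketch of that flow: it reuses the FP `compile_spec` created above, and the `VgfPartitioner` and `to_edge_transform_and_lower` APIs are covered in more detail in the INT example."
| | +   ]
| | +  },
| | +  {
| | +   "cell_type": "code",
| | +   "execution_count": null,
| | +   "metadata": {},
| | +   "outputs": [],
| | +   "source": [
| | +    "# Minimal FP lowering sketch; the INT flow below walks through the same\n",
| | +    "# steps in more detail.\n",
| | +    "from executorch.backends.arm.vgf_partitioner import VgfPartitioner\n",
| | +    "from executorch.exir import EdgeCompileConfig, to_edge_transform_and_lower\n",
| | +    "\n",
| | +    "fp_partitioner = VgfPartitioner(compile_spec)\n",
| | +    "fp_edge_program_manager = to_edge_transform_and_lower(\n",
| | +    "    exported_program,\n",
| | +    "    partitioner=[fp_partitioner],\n",
| | +    "    compile_config=EdgeCompileConfig(_check_ir_validity=False),\n",
| | +    ")"
| | +   ]
| | +  },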
| 104 | + { |
| 105 | + "cell_type": "markdown", |
| 106 | + "metadata": {}, |
| 107 | + "source": [ |
| 108 | + "To lower the graph_module for INT targets using the VGF backend, we apply the arm_quantizer. \n", |
| 109 | + "\n", |
| 110 | + "Quantization can be performed in various ways and tailored to different subgraphs; the sequence shown here represents the recommended workflow for VGF. \n", |
| 111 | + "\n", |
| 112 | + "This step also requires calibrating the module with representative inputs. \n", |
| 113 | + "\n", |
| 114 | + "If you print the module again, you’ll see that each node is now wrapped in quantization/dequantization nodes that embed the calculated quantization parameters." |
| 115 | + ] |
| 116 | + }, |
| 117 | + { |
| 118 | + "cell_type": "code", |
| 119 | + "execution_count": null, |
| 120 | + "metadata": {}, |
| 121 | + "outputs": [], |
| 122 | + "source": [ |
| 123 | + "from executorch.backends.arm.quantizer import (\n", |
| 124 | + " VgfQuantizer,\n", |
| 125 | + " get_symmetric_quantization_config,\n", |
| 126 | + ")\n", |
| 127 | + "from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e\n", |
| 128 | + "\n", |
| 129 | + "# Create a compilation spec describing the target for configuring the quantizer\n", |
| 130 | + "tosa_spec = TosaSpecification.create_from_string(\"TOSA-1.0+INT\")\n", |
| 131 | + "\n", |
| 132 | + "spec_builder = ArmCompileSpecBuilder().vgf_compile_spec(tosa_spec)\n", |
| 133 | + "compile_spec = spec_builder.build()\n", |
| 134 | + "\n", |
| 135 | + "# Create and configure quantizer to use a symmetric quantization config globally on all nodes\n", |
| 136 | + "quantizer = VgfQuantizer(compile_spec)\n", |
| 137 | + "operator_config = get_symmetric_quantization_config(is_per_channel=False)\n", |
| 138 | + "quantizer.set_global(operator_config)\n", |
| 139 | + "\n", |
| 140 | + "# Post training quantization\n", |
| 141 | + "quantized_graph_module = prepare_pt2e(graph_module, quantizer)\n", |
| 142 | + "quantized_graph_module(*example_inputs) # Calibrate the graph module with the example input\n", |
| 143 | + "quantized_graph_module = convert_pt2e(quantized_graph_module)\n", |
| 144 | + "\n", |
| 145 | + "_ = quantized_graph_module.print_readable()\n", |
| 146 | + "\n", |
| 147 | + "# Create a new exported program using the quantized_graph_module\n", |
| 148 | + "quantized_exported_program = torch.export.export_for_training(quantized_graph_module, example_inputs)" |
| 149 | + ] |
| 150 | + }, |
| 151 | + { |
| 152 | + "cell_type": "markdown", |
| 153 | + "metadata": {}, |
| 154 | + "source": [ |
| 155 | + "# In the example below, we will make use of the quantized graph module.\n", |
| 156 | + "\n", |
| 157 | + "The lowering in the VGFBackend happens in five steps:\n", |
| 158 | + "\n", |
| 159 | + "1. **Lowering to core Aten operator set**: Transform module to use a subset of operators applicable to edge devices. \n", |
| 160 | + "2. **Partitioning**: Find subgraphs that will be lowered by the VGF backend.\n", |
| 161 | + "3. **Lowering to TOSA compatible operator set**: Perform transforms to make the VGF subgraph(s) compatible with TOSA \n", |
| 162 | + "4. **Serialization to TOSA**: Compiles the graph module into a TOSA graph \n", |
| 163 | + "5. **Compilation to VGF**: Compiles the FX GraphModule into a VGF representation using the model_converter and the previously created compile_spec. It also prints a network summary for each processed VGF partition.\n", |
| 164 | + "\n", |
| 165 | + "All of this happens behind the scenes in `to_edge_transform_and_lower`. Printing the graph module shows that what is left in the graph is two quantization nodes for `x` and `y` going into an `executorch_call_delegate` node, followed by a dequantization node." |
| 166 | + ] |
| 167 | + }, |
| 168 | + { |
| 169 | + "cell_type": "code", |
| 170 | + "execution_count": null, |
| 171 | + "metadata": {}, |
| 172 | + "outputs": [], |
| 173 | + "source": [ |
| 174 | + "import os\n", |
| 175 | + "from executorch.backends.arm.vgf_partitioner import VgfPartitioner\n", |
| 176 | + "from executorch.exir import (\n", |
| 177 | + " EdgeCompileConfig,\n", |
| 178 | + " ExecutorchBackendConfig,\n", |
| 179 | + " to_edge_transform_and_lower,\n", |
| 180 | + ")\n", |
| 181 | + "from executorch.extension.export_util.utils import save_pte_program\n", |
| 182 | + "\n", |
| 183 | + "# Create partitioner from compile spec\n", |
| 184 | + "partitioner = VgfPartitioner(compile_spec)\n", |
| 185 | + "\n", |
| 186 | + "# Lower the exported program to the VGF backend\n", |
| 187 | + "edge_program_manager = to_edge_transform_and_lower(\n", |
| 188 | + " quantized_exported_program,\n", |
| 189 | + " partitioner=[partitioner],\n", |
| 190 | + " compile_config=EdgeCompileConfig(\n", |
| 191 | + " _check_ir_validity=False,\n", |
| 192 | + " ),\n", |
| 193 | + ")\n", |
| 194 | + "\n", |
| 195 | + "# Convert edge program to executorch\n", |
| 196 | + "executorch_program_manager = edge_program_manager.to_executorch(\n", |
| 197 | + " config=ExecutorchBackendConfig(extract_delegate_segments=False)\n", |
| 198 | + ")\n", |
| 199 | + "\n", |
| 200 | + "executorch_program_manager.exported_program().module().print_readable()\n", |
| 201 | + "\n", |
| 202 | + "# Save pte file\n", |
| 203 | + "cwd_dir = os.getcwd()\n", |
| 204 | + "pte_base_name = \"simple_example\"\n", |
| 205 | + "pte_name = pte_base_name + \".pte\"\n", |
| 206 | + "pte_path = os.path.join(cwd_dir, pte_name)\n", |
| 207 | + "save_pte_program(executorch_program_manager, pte_name)\n", |
| 208 | + "assert os.path.exists(pte_path), \"Build failed; no .pte-file found\"" |
| 209 | + ] |
| 210 | + }, |
| 211 | + { |
| 212 | + "cell_type": "markdown", |
| 213 | + "metadata": {}, |
| 214 | + "source": [ |
| 215 | + "## Build executor runtime\n", |
| 216 | + "\n", |
| 217 | + "### Prerequisite\n", |
| 218 | + "With our VGF inside our PTE we now need to setup the runtime. To do this we will use the previously built MLSDK dependencies, but we will also need to setup a Vulkan environment externally to Executorch.\n", |
| 219 | + "Plese follow https://vulkan.lunarg.com/sdk/home in order to setup. \n", |
| 220 | + "\n", |
| 221 | + "\n", |
| 222 | + "After the AOT compilation flow is done, we need to build the executor_runner target. For this example the generic version will be used.\n", |
| 223 | + "To do this, please ensure the following commands are executed before moving onto the next step.\n", |
| 224 | + "\n", |
| 225 | + "Clean and configure the CMake build system. Compiled programs will appear in the executorch/cmake-out directory we create here.\n", |
| 226 | + "```\n", |
| 227 | + "cmake \\\n", |
| 228 | + " -DCMAKE_INSTALL_PREFIX=cmake-out \\\n", |
| 229 | + " -DCMAKE_BUILD_TYPE=Debug \\\n", |
| 230 | + " -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \\\n", |
| 231 | + " -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \\\n", |
| 232 | + " -DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \\\n", |
| 233 | + " -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \\\n", |
| 234 | + " -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \\\n", |
| 235 | + " -DEXECUTORCH_BUILD_XNNPACK=OFF \\\n", |
| 236 | + " -DEXECUTORCH_BUILD_VULKAN=ON \\\n", |
| 237 | + " -DEXECUTORCH_BUILD_VGF=ON \\\n", |
| 238 | + " -DEXECUTORCH_ENABLE_LOGGING=ON \\\n", |
| 239 | + " -DPYTHON_EXECUTABLE=python \\\n", |
| 240 | + " -Bcmake-out .\n", |
| 241 | + "```\n", |
| 242 | + "\n", |
| 243 | + "Build the executor_runner target\n", |
| 244 | + "`cmake --build cmake-out --target executor_runner`\n" |
| 245 | + ] |
| 246 | + }, |
| 247 | + { |
| 248 | + "cell_type": "markdown", |
| 249 | + "metadata": {}, |
| 250 | + "source": [ |
| 251 | + "# Run on VKML Emulator\n", |
| 252 | + "\n", |
| 253 | + "We can finally use the `backends/arm/scripts/run_vkml.sh` utility script to run the .pte end-to-end and proving out a backend’s kernel implementation. This Script runs the model with an input of ones, so the expected result of the addition should be close to 2." |
| 254 | + ] |
| 255 | + }, |
| 256 | + { |
| 257 | + "cell_type": "code", |
| 258 | + "execution_count": null, |
| 259 | + "metadata": {}, |
| 260 | + "outputs": [], |
| 261 | + "source": [ |
| 262 | + "import subprocess\n", |
| 263 | + "\n", |
| 264 | + "# Setup paths\n", |
| 265 | + "et_dir = os.path.join(cwd_dir, \"..\", \"..\")\n", |
| 266 | + "et_dir = os.path.abspath(et_dir)\n", |
| 267 | + "script_dir = os.path.join(et_dir, \"backends\", \"arm\", \"scripts\")\n", |
| 268 | + "\n", |
| 269 | + "args = f\"--model={pte_path}\"\n", |
| 270 | + "subprocess.run(os.path.join(script_dir, \"run_vkml.sh\") + \" \" + args, shell=True, cwd=et_dir)" |
| 271 | + ] |
| 272 | +  }
| 280 | + ], |
| 281 | + "metadata": { |
| 282 | + "kernelspec": { |
| 283 | + "display_name": "Python 3 (ipykernel)", |
| 284 | + "language": "python", |
| 285 | + "name": "python3" |
| 286 | + }, |
| 287 | + "language_info": { |
| 288 | + "codemirror_mode": { |
| 289 | + "name": "ipython", |
| 290 | + "version": 3 |
| 291 | + }, |
| 292 | + "file_extension": ".py", |
| 293 | + "mimetype": "text/x-python", |
| 294 | + "name": "python", |
| 295 | + "nbconvert_exporter": "python", |
| 296 | + "pygments_lexer": "ipython3", |
| 297 | + "version": "3.10.12" |
| 298 | + } |
| 299 | + }, |
| 300 | + "nbformat": 4, |
| 301 | + "nbformat_minor": 4 |
| 302 | +} |