# Core ML Backend

Core ML delegate is the ExecuTorch solution to take advantage of Apple's [Core ML framework](https://developer.apple.com/documentation/coreml) for on-device ML. With Core ML, a model can run on the CPU, GPU, and the Apple Neural Engine (ANE).

## Features

- Dynamic dispatch to the CPU, GPU, and ANE.
- Supports fp32 and fp16 computation.

## Target Requirements

Below are the minimum OS requirements on various hardware for running a Core ML-delegated ExecuTorch model:
- [macOS](https://developer.apple.com/macos) >= 13.0
- [iOS](https://developer.apple.com/ios/) >= 16.0
- [iPadOS](https://developer.apple.com/ipados/) >= 16.0
- [tvOS](https://developer.apple.com/tvos/) >= 16.0

## Development Requirements

To develop, you need:

- [macOS](https://developer.apple.com/macos) >= 13.0
- [Xcode](https://developer.apple.com/documentation/xcode) >= 14.1

Before starting, make sure you install the Xcode Command Line Tools:

```bash
xcode-select --install
```

Finally, install the Core ML backend dependencies by running the following script:

```bash
sh ./backends/apple/coreml/scripts/install_requirements.sh
```

----

## Using the Core ML Backend

To target the Core ML backend during the export and lowering process, pass an instance of `CoreMLPartitioner` to `to_edge_transform_and_lower`. The example below demonstrates this process using the MobileNet V2 model from torchvision.

```python
import torch
import torchvision.models as models
from torchvision.models.mobilenetv2 import MobileNet_V2_Weights
from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge_transform_and_lower

mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

et_program = to_edge_transform_and_lower(
    torch.export.export(mobilenet_v2, sample_inputs),
    partitioner=[CoreMLPartitioner()],
).to_executorch()

with open("mv2_coreml.pte", "wb") as file:
    et_program.write_to_file(file)
```

### Partitioner API

The Core ML partitioner API allows for configuration of the model delegation to Core ML. Passing a `CoreMLPartitioner` instance with no additional parameters will run as much of the model as possible on the Core ML backend with default settings. This is the most common use case. For advanced use cases, the partitioner exposes the following options via the [constructor](https://github.com/pytorch/executorch/blob/14ff52ff89a89c074fc6c14d3f01683677783dcd/backends/apple/coreml/partition/coreml_partitioner.py#L60); a usage sketch follows this list.

- `skip_ops_for_coreml_delegation`: Allows you to skip ops for delegation by Core ML. By default, all ops that Core ML supports will be delegated. See [here](https://github.com/pytorch/executorch/blob/14ff52ff89a89c074fc6c14d3f01683677783dcd/backends/apple/coreml/test/test_coreml_partitioner.py#L42) for an example of skipping an op for delegation.
- `compile_specs`: A list of `CompileSpec`s for the Core ML backend. These control low-level details of Core ML delegation, such as the compute unit (CPU, GPU, ANE), the iOS deployment target, and the compute precision (FP16, FP32). These are discussed further below.
- `take_over_mutable_buffer`: A boolean that indicates whether PyTorch mutable buffers in stateful models should be converted to [Core ML `MLState`](https://developer.apple.com/documentation/coreml/mlstate). If set to `False`, mutable buffers in the PyTorch graph are converted to graph inputs and outputs of the Core ML lowered module under the hood. Generally, setting `take_over_mutable_buffer` to `True` will result in better performance, but using `MLState` requires iOS >= 18.0, macOS >= 15.0, and Xcode >= 16.0.
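
For example, a minimal sketch of a non-default configuration might look like the following. The op name passed to `skip_ops_for_coreml_delegation` is purely illustrative; see the test linked above for the expected op-name format.

```python
from executorch.backends.apple.coreml.partition import CoreMLPartitioner

# Sketch of a non-default configuration: skip delegation of an op
# (op name is illustrative) and keep mutable buffers as graph inputs
# and outputs rather than converting them to MLState.
partitioner = CoreMLPartitioner(
    skip_ops_for_coreml_delegation=["aten.mm.default"],
    take_over_mutable_buffer=False,
)
```

This partitioner instance would then be passed to `to_edge_transform_and_lower` exactly as in the MobileNet V2 example above.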

#### Core ML CompileSpec

A list of `CompileSpec`s is constructed with [CoreMLBackend.generate_compile_specs](https://github.com/pytorch/executorch/blob/14ff52ff89a89c074fc6c14d3f01683677783dcd/backends/apple/coreml/compiler/coreml_preprocess.py#L210). Below are the available options; a usage sketch follows the list.
- `compute_unit`: Controls the compute units (CPU, GPU, ANE) that are used by Core ML. The default value is `coremltools.ComputeUnit.ALL`. The available options from coremltools are:
  - `coremltools.ComputeUnit.ALL` (uses the CPU, GPU, and ANE)
  - `coremltools.ComputeUnit.CPU_ONLY` (uses the CPU only)
  - `coremltools.ComputeUnit.CPU_AND_GPU` (uses both the CPU and GPU, but not the ANE)
  - `coremltools.ComputeUnit.CPU_AND_NE` (uses both the CPU and ANE, but not the GPU)
- `minimum_deployment_target`: The minimum iOS deployment target (e.g., `coremltools.target.iOS18`). The default value is `coremltools.target.iOS15`.
- `compute_precision`: The compute precision used by Core ML (`coremltools.precision.FLOAT16` or `coremltools.precision.FLOAT32`). The default value is `coremltools.precision.FLOAT16`. Note that the compute precision is applied regardless of the dtype of the exported PyTorch model. For example, an FP32 PyTorch model is converted to FP16 when delegating to the Core ML backend by default. Also note that the ANE only supports FP16 precision.
- `model_type`: Whether the model should be compiled to the Core ML [mlmodelc format](https://developer.apple.com/documentation/coreml/downloading-and-compiling-a-model-on-the-user-s-device) during .pte creation ([CoreMLBackend.MODEL_TYPE.COMPILED_MODEL](https://github.com/pytorch/executorch/blob/14ff52ff89a89c074fc6c14d3f01683677783dcd/backends/apple/coreml/compiler/coreml_preprocess.py#L71)), or whether it should be compiled to mlmodelc on device ([CoreMLBackend.MODEL_TYPE.MODEL](https://github.com/pytorch/executorch/blob/14ff52ff89a89c074fc6c14d3f01683677783dcd/backends/apple/coreml/compiler/coreml_preprocess.py#L70)). Using `CoreMLBackend.MODEL_TYPE.COMPILED_MODEL` and compiling ahead of time should improve the first on-device model load time.
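
Putting these options together, a sketch of building compile specs and handing them to the partitioner might look like this. The particular choices shown (an iOS 17 target, CPU plus ANE, ahead-of-time compilation) are illustrative, not recommended defaults.

```python
import coremltools as ct

from executorch.backends.apple.coreml.compiler import CoreMLBackend
from executorch.backends.apple.coreml.partition import CoreMLPartitioner

# Illustrative choices: FP16 compute on the CPU and ANE, an iOS 17
# deployment target, and ahead-of-time compilation to mlmodelc for a
# faster first on-device load.
compile_specs = CoreMLBackend.generate_compile_specs(
    compute_unit=ct.ComputeUnit.CPU_AND_NE,
    minimum_deployment_target=ct.target.iOS17,
    compute_precision=ct.precision.FLOAT16,
    model_type=CoreMLBackend.MODEL_TYPE.COMPILED_MODEL,
)

partitioner = CoreMLPartitioner(compile_specs=compile_specs)
```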

### Testing the Model

After generating the Core ML-delegated .pte, the model can be tested from Python using the ExecuTorch runtime Python bindings. This can be used to sanity-check the model and evaluate numerical accuracy. See [Testing the Model](using-executorch-export.md#testing-the-model) for more information.
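
A minimal sketch, assuming a macOS Python environment with the ExecuTorch runtime bindings and the Core ML backend installed, and the `mv2_coreml.pte` file produced earlier:

```python
import torch
from executorch.runtime import Runtime

# Load the delegated program and run a forward pass on a random input.
runtime = Runtime.get()
program = runtime.load_program("mv2_coreml.pte")
method = program.load_method("forward")
outputs = method.execute((torch.randn(1, 3, 224, 224),))
print(outputs[0].shape)  # MobileNet V2 logits: (1, 1000)
```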

---

### Runtime
