107 changes: 1 addition & 106 deletions backends/apple/coreml/README.md
@@ -1,8 +1,7 @@
# ExecuTorch Core ML Delegate


This subtree contains the Core ML Delegate implementation for ExecuTorch.
Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate supported operators when running on those devices. To learn how to use the Core ML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends-coreml.md).

## Layout
- `compiler/` : Lowers a module to Core ML backend.
@@ -19,110 +18,6 @@ Core ML is an optimized framework for running machine learning models on Apple devices
- `workspace` : Xcode workspace for the runtime.
- `third-party/`: External dependencies.

## Partition and Delegation

To delegate a Program to the **Core ML** backend, the client must call `to_backend` with the **CoreMLPartitioner**.

```python
import torch
import executorch.exir

from executorch.backends.apple.coreml.compiler import CoreMLBackend
from executorch.backends.apple.coreml.partition import CoreMLPartitioner

class Model(torch.nn.Module):
def __init__(self):
super().__init__()

def forward(self, x):
return torch.sin(x)

source_model = Model()
example_inputs = (torch.ones(1), )

# Export the source model to Edge IR representation
aten_program = torch.export.export(source_model, example_inputs)
edge_program_manager = executorch.exir.to_edge(aten_program)

# Delegate to Core ML backend
delegated_program_manager = edge_program_manager.to_backend(CoreMLPartitioner())

# Serialize delegated program
executorch_program = delegated_program_manager.to_executorch()
with open("model.pte", "wb") as f:
f.write(executorch_program.buffer)
```

The module will be fully or partially delegated to **Core ML**, depending on whether all or only some of its ops are supported by the **Core ML** backend. Users may force certain ops to be skipped during delegation via `CoreMLPartitioner(skip_ops_for_coreml_delegation=...)`, as in the sketch below.
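
For example, to keep `sin` on the CPU rather than delegating it, reusing the names from the snippet above (a minimal sketch; the exact op-name string format is an assumption and may differ for your model):

```python
# Hypothetical op-name string: skip_ops_for_coreml_delegation takes op names,
# assumed here to follow the "aten.<op>.<overload>" Edge-dialect form.
partitioner = CoreMLPartitioner(skip_ops_for_coreml_delegation=["aten.sin.default"])

# Ops in the skip list stay on the portable CPU backend; everything else
# that Core ML supports is still delegated.
delegated_program_manager = edge_program_manager.to_backend(partitioner)
```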

The `to_backend` implementation is a thin wrapper over [coremltools](https://apple.github.io/coremltools/docs-guides/); `coremltools` is responsible for converting an **ExportedProgram** to an **MLModel**. The converted **MLModel** data is saved, flattened, and returned as bytes to **ExecuTorch**.
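
Conceptually, the conversion step resembles calling `coremltools` directly (a rough sketch, assuming a recent `coremltools` release that accepts a `torch.export` **ExportedProgram**; the delegate handles this internally, so this is illustration only):

```python
import coremltools as ct

# Convert the exported program to an ML Program model -- roughly the same
# path the delegate takes internally via coremltools.
mlmodel = ct.convert(aten_program, convert_to="mlprogram")

# The saved model data is what gets flattened into bytes for ExecuTorch.
mlmodel.save("model.mlpackage")
```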

## Quantization

To quantize a Program in a Core ML-friendly way, the client may use **CoreMLQuantizer**.

```python
import torch
import executorch.exir

from torch.export import export_for_training
from torch.ao.quantization.quantize_pt2e import (
convert_pt2e,
prepare_pt2e,
prepare_qat_pt2e,
)

from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer
from coremltools.optimize.torch.quantization.quantization_config import (
LinearQuantizerConfig,
QuantizationScheme,
)

class Model(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.conv = torch.nn.Conv2d(
in_channels=3, out_channels=16, kernel_size=3, padding=1
)
self.relu = torch.nn.ReLU()

def forward(self, x: torch.Tensor) -> torch.Tensor:
a = self.conv(x)
return self.relu(a)

source_model = Model()
example_inputs = (torch.randn((1, 3, 256, 256)), )

pre_autograd_aten_dialect = export_for_training(source_model, example_inputs).module()

quantization_config = LinearQuantizerConfig.from_dict(
{
"global_config": {
"quantization_scheme": QuantizationScheme.symmetric,
"activation_dtype": torch.quint8,
"weight_dtype": torch.qint8,
"weight_per_channel": True,
}
}
)
quantizer = CoreMLQuantizer(quantization_config)

# For post-training quantization, use `prepare_pt2e`
# For quantization-aware training, use `prepare_qat_pt2e`
prepared_graph = prepare_pt2e(pre_autograd_aten_dialect, quantizer)

# Calibrate the prepared graph with representative inputs
prepared_graph(*example_inputs)
converted_graph = convert_pt2e(prepared_graph)
```

The `converted_graph` is the quantized torch model and can be delegated to **Core ML** through **CoreMLPartitioner** in the same way, as sketched below.
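
For instance, picking up where the snippet above leaves off (a minimal sketch that reuses `example_inputs` and the `executorch.exir` and `CoreMLPartitioner` imports from the previous sections):

```python
# Re-export the quantized graph and lower it exactly as in the
# partition-and-delegation flow shown earlier.
quantized_program = torch.export.export(converted_graph, example_inputs)
edge_program_manager = executorch.exir.to_edge(quantized_program)
delegated_program_manager = edge_program_manager.to_backend(CoreMLPartitioner())

# Serialize the quantized, delegated program.
executorch_program = delegated_program_manager.to_executorch()
with open("quantized_model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```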

## Runtime

To execute a Core ML delegated program, the application must link against the `coremldelegate` library. Once linked, no additional steps are required: when running the program, ExecuTorch calls the Core ML runtime to execute the Core ML-delegated parts of the program.

Please follow the instructions described in the [Core ML setup](/backends/apple/coreml/setup.md) to link the `coremldelegate` library.

## Help & Improvements
If you have problems or questions, or have suggestions for improving the implementation and testing, please create an issue on [GitHub](https://www.github.com/pytorch/executorch/issues).