diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 45e03bd36e1..eb6a3a60d62 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -24,7 +24,7 @@ For Apple, please refer to the [iOS documentation](docs/source/using-executorch-
 executorch
 ├── backends - Backend delegate implementations for various hardware targets. Each backend uses partitioner to split the graph into subgraphs that can be executed on specific hardware, quantizer to optimize model precision, and runtime components to execute the graph on target hardware. For details refer to the backend documentation and the Export and Lowering tutorial for more information.
 │   ├── apple - Apple-specific backends.
-│   │   ├── coreml - CoreML backend for Apple devices. See doc.
+│   │   ├── coreml - CoreML backend for Apple devices. See doc.
 │   │   └── mps - Metal Performance Shaders backend for Apple devices. See doc.
 │   ├── arm - ARM architecture backends. See doc.
 │   ├── cadence - Cadence-specific backends. See doc.
diff --git a/README-wheel.md b/README-wheel.md
index 7ae9b0aa2e0..40dca1e9631 100644
--- a/README-wheel.md
+++ b/README-wheel.md
@@ -12,7 +12,7 @@ The prebuilt `executorch.runtime` module included in this package provides a
 way to run ExecuTorch `.pte` files, with some restrictions:
 * Only [core ATen operators](docs/source/ir-ops-set-definition.md) are linked into the prebuilt module
 * Only the [XNNPACK backend delegate](docs/source/backends-xnnpack.md) is linked into the prebuilt module.
-* \[macOS only] [Core ML](docs/source/backends-coreml.md) and [MPS](docs/source/backends-mps.md) backend
+* \[macOS only] [Core ML](docs/source/backends/coreml/coreml-overview.md) and [MPS](docs/source/backends-mps.md) backend
   are also linked into the prebuilt module.

 Please visit the [ExecuTorch website](https://pytorch.org/executorch) for
diff --git a/backends/apple/coreml/README.md b/backends/apple/coreml/README.md
index d063dfc8b71..d72f04da1a1 100644
--- a/backends/apple/coreml/README.md
+++ b/backends/apple/coreml/README.md
@@ -1,7 +1,7 @@
 # ExecuTorch Core ML Delegate

 This subtree contains the Core ML Delegate implementation for ExecuTorch.
-Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices. To learn how to use the CoreML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends-coreml.md).
+Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices. To learn how to use the CoreML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends/coreml/coreml-overview.md).

 ## Layout
 - `compiler/` : Lowers a module to Core ML backend.
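
For context on the `executorch.runtime` module referenced in the README-wheel.md hunk, here is a minimal sketch of loading and running a `.pte` file with the prebuilt wheel. The model path, input shape, and method name are illustrative assumptions; confirm exact signatures against the runtime Python API documentation.

```python
# Minimal sketch: running an exported .pte file with the prebuilt
# executorch.runtime module (core ATen ops and the XNNPACK delegate only,
# per the restrictions listed in README-wheel.md).
import torch
from executorch.runtime import Runtime

runtime = Runtime.get()                      # process-wide runtime handle
program = runtime.load_program("model.pte")  # hypothetical path to an exported program
method = program.load_method("forward")      # method name is an assumption; check program.method_names
outputs = method.execute((torch.randn(1, 3, 224, 224),))  # placeholder input; must match the export
print(outputs)
```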