`.ci/scripts/test_llava.sh` (1 addition, 1 deletion)

```diff
@@ -154,7 +154,7 @@ run_and_verify() {
     EXPECTED_PREFIX="ASSISTANT: image captures a basketball game in progress, with several players on the court. One of the players is dribbling the ball, while the others are in various"
   else
     # set the expected prefix to be the same as prompt because there's a bug in sdpa_with_kv_cache that causes <unk> tokens.
```
`CONTRIBUTING.md` (6 additions, 13 deletions)

```diff
@@ -58,7 +58,7 @@ executorch
 │  ├── <a href="exir/verification">verification</a> - IR verification.
 ├── <a href="extension">extension</a> - Extensions built on top of the runtime.
 │  ├── <a href="extension/android">android</a> - ExecuTorch wrappers for Android apps. Please refer to the <a href="docs/source/using-executorch-android.md">Android documentation</a> and <a href="https://pytorch.org/executorch/main/javadoc/">Javadoc</a> for more information.
-│  ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/stable/apple-runtime.html">how to integrate into Apple platform</a> for more information.
+│  ├── <a href="extension/apple">apple</a> - ExecuTorch wrappers for iOS apps. Please refer to the <a href="docs/source/using-executorch-ios.md">iOS documentation</a> and <a href="https://pytorch.org/executorch/main/using-executorch-ios.html">how to integrate into Apple platform</a> for more information.
 │  ├── <a href="extension/aten_util">aten_util</a> - Converts to and from PyTorch ATen types.
 │  ├── <a href="extension/data_loader">data_loader</a> - 1st party data loader implementations.
 │  ├── <a href="extension/evalue_util">evalue_util</a> - Helpers for working with EValue objects.
```
```diff
@@ -102,6 +102,8 @@ executorch
 ## Contributing workflow
 We actively welcome your pull requests (PRs).
+
+If you're completely new to open-source projects, GitHub, or ExecuTorch, please see our [New Contributor Guide](./docs/source/new-contributor-guide.md) for a step-by-step walkthrough on making your first contribution. Otherwise, read on.
 1. [Claim an issue](#claiming-issues), if present, before starting work. If an
    issue doesn't cover the work you plan to do, consider creating one to provide
    context about it, and to build consensus about the scope and solution.
```
```diff
@@ -407,18 +409,9 @@ for basics.
 - If the reviewers have requests or questions, follow up with them.
 - The goal of the reviewer is to ensure that the code in the `main` branch of
   the repo is consistent, maintainable, and of high quality.
-1. Once the PR has been approved,
-   - If you have the "write permission" in this repo, you can merge it yourself
-     by clicking the "Squash and merge" button once it is green and all CI
-     signals are passing.
-   - If you don't have "write permission" in this repo, the reviewer will take
-     care of the PR. The reviewer may import the PR into Meta's internal system
-     to validate it against internal CI.
-   - If the PR is approved but not merged within 5 business days, please comment
-     on the PR to ask about its status.
-   - Note that if the `main` [CI](#continuous-integration) jobs are broken, we
-     will only merge PRs that fix the broken jobs until all critical jobs are
-     fixed.
+1. Once the PR has been approved, you can merge it yourself
+   by clicking the "Squash and merge" button once it is
```
`backends/apple/coreml/README.md`

```diff
 This subtree contains the Core ML Delegate implementation for ExecuTorch.
-Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices.
+Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices. To learn how to use the CoreML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends-coreml.md).
```

## Layout

- `compiler/` : Lowers a module to Core ML backend.
- `workspace` : Xcode workspace for the runtime.
- `third-party/` : External dependencies.

The following sections are removed from the README by this change (hunk `@@ -19,110 +18,6 @@`):
## Partition and Delegation

To delegate a Program to the **Core ML** backend, the client must call `to_backend` with the **CoreMLPartitioner**.

```python
import torch
import executorch.exir

from executorch.backends.apple.coreml.compiler import CoreMLBackend
from executorch.backends.apple.coreml.partition import CoreMLPartitioner


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return torch.sin(x)


source_model = Model()
example_inputs = (torch.ones(1), )

# Export the source model to Edge IR representation
aten_program = torch.export.export(source_model, example_inputs)
edge_program = executorch.exir.to_edge(aten_program)

# Delegate the exported program to the Core ML backend
delegated_program = edge_program.to_backend(CoreMLPartitioner())
```
The module will be fully or partially delegated to **Core ML**, depending on whether all or part of its ops are supported by the **Core ML** backend. The user may force certain ops to be skipped with `CoreMLPartitioner(skip_ops_for_coreml_delegation=...)`.
The `to_backend` implementation is a thin wrapper over [coremltools](https://apple.github.io/coremltools/docs-guides/); `coremltools` is responsible for converting an **ExportedProgram** to an **MLModel**. The converted **MLModel** data is saved, flattened, and returned as bytes to **ExecuTorch**.
## Quantization

To quantize a Program in a Core ML-favored way, the client may utilize **CoreMLQuantizer**.

```python
import torch
import executorch.exir

from torch.export import export_for_training
from torch.ao.quantization.quantize_pt2e import (
    convert_pt2e,
    prepare_pt2e,
    prepare_qat_pt2e,
)

from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer
from coremltools.optimize.torch.quantization.quantization_config import (
    LinearQuantizerConfig,
    QuantizationScheme,
)

# Static quantization config: symmetric 8-bit activations and weights
quantization_config = LinearQuantizerConfig.from_dict(
    {
        "global_config": {
            "quantization_scheme": QuantizationScheme.symmetric,
            "activation_dtype": torch.quint8,
            "weight_dtype": torch.qint8,
            "weight_per_channel": True,
        }
    }
)
quantizer = CoreMLQuantizer(quantization_config)

# Insert observers, calibrate on example inputs, then convert
training_gm = export_for_training(source_model, example_inputs).module()
prepared_graph = prepare_pt2e(training_gm, quantizer)
prepared_graph(*example_inputs)
converted_graph = convert_pt2e(prepared_graph)
```
The `converted_graph` is the quantized torch model, and it can be delegated to **Core ML** similarly through the **CoreMLPartitioner**.

## Runtime
To execute a Core ML-delegated program, the application must link against the `coremldelegate` library. Once linked, no additional steps are required: when running the program, ExecuTorch calls the Core ML runtime to execute the Core ML-delegated parts of the program.

Please follow the instructions described in the [Core ML setup](/backends/apple/coreml/setup.md) to link the `coremldelegate` library.
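A minimal sketch of that linking step, assuming a CMake-based app with a target named `my_app` (the target name and framework list are assumptions; the setup guide referenced above is authoritative):

```cmake
# Hypothetical sketch: linking an app against the Core ML delegate.
# `my_app` is an assumed target; see backends/apple/coreml/setup.md for
# the authoritative build and linking instructions.
target_link_libraries(my_app PRIVATE coremldelegate)

# The delegate relies on Apple system frameworks at runtime.
target_link_libraries(my_app PRIVATE "-framework CoreML" "-framework Accelerate")
```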
## Help & Improvements

If you have problems or questions, or have suggestions for ways to make implementation and testing better, please create an issue on [github](https://www.github.com/pytorch/executorch/issues).