
Commit 789b271

Commit message:

Update
[ghstack-poisoned]

2 parents: 822f235 + 637cc32

File tree

27 files changed: +709 −512 lines

.github/workflows/doc-build.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -84,8 +84,8 @@ jobs:
   needs: build
   if: github.repository == 'pytorch/executorch' && github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v'))
   permissions:
+    id-token: write
     contents: write
-    contents: read
   uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
   with:
     repository: pytorch/executorch
```

README.md

Lines changed: 42 additions & 28 deletions
```diff
@@ -1,9 +1,37 @@
-# ExecuTorch
-
-**ExecuTorch** is an end-to-end solution for enabling on-device inference
-capabilities across mobile and edge devices including wearables, embedded
-devices and microcontrollers. It is part of the PyTorch Edge ecosystem and
-enables efficient deployment of PyTorch models to edge devices.
+<div align="center">
+  <img src="./docs/source/_static/img/et-logo.png" alt="Logo" width="200">
+  <h1 align="center">ExecuTorch: A powerful on-device AI Framework</h1>
+</div>
+
+
+<div align="center">
+  <a href="https://github.com/pytorch/executorch/graphs/contributors"><img src="https://img.shields.io/github/contributors/pytorch/executorch?style=for-the-badge&color=blue" alt="Contributors"></a>
+  <a href="https://github.com/pytorch/executorch/stargazers"><img src="https://img.shields.io/github/stars/pytorch/executorch?style=for-the-badge&color=blue" alt="Stargazers"></a>
+  <a href="https://discord.gg/MeacgB7A"><img src="https://img.shields.io/badge/Discord-Join%20Us-purple?logo=discord&logoColor=white&style=for-the-badge" alt="Join our Discord community"></a>
+  <a href="https://pytorch.org/executorch/stable/index.html"><img src="https://img.shields.io/badge/Documentation-000?logo=googledocs&logoColor=FFE165&style=for-the-badge" alt="Check out the documentation"></a>
+  <hr>
+</div>
+
+**ExecuTorch** is an end-to-end solution for on-device inference and training. It powers much of Meta's on-device AI experiences across Facebook, Instagram, Meta Quest, Ray-Ban Meta Smart Glasses, WhatsApp, and more.
+
+It supports a wide range of models including LLMs (Large Language Models), CV (Computer Vision), ASR (Automatic Speech Recognition), and TTS (Text to Speech).
+
+Platform Support:
+- Operating Systems:
+  - iOS
+  - Mac
+  - Android
+  - Linux
+  - Microcontrollers
+
+- Hardware Acceleration:
+  - Apple
+  - Arm
+  - Cadence
+  - MediaTek
+  - Qualcomm
+  - Vulkan
+  - XNNPACK
 
 Key value propositions of ExecuTorch are:
 
@@ -17,35 +45,21 @@ Key value propositions of ExecuTorch are:
   experience due to a lightweight runtime and utilizing full hardware
   capabilities such as CPUs, NPUs, and DSPs.
 
-For a comprehensive technical overview of ExecuTorch and step-by-step tutorials,
-please visit our documentation website [for the latest release](https://pytorch.org/executorch/stable/index.html) (or the [main branch](https://pytorch.org/executorch/main/index.html)).
-
-Check out the [Getting Started](https://pytorch.org/executorch/stable/getting-started-setup.html#quick-setup-colab-jupyter-notebook-prototype) page for a quick spin.
-
-Check out the examples of [Llama](./examples/models/llama/README.md), [Llava](./examples/models/llava/README.md) and [other models](./examples/README.md) running on edge devices using ExecuTorch.
+## Getting Started
+To get started you can:
 
+- Visit the [Step by Step Tutorial](https://pytorch.org/executorch/main/index.html) on getting things running locally and deploy a model to a device
+- Use this [Colab Notebook](https://pytorch.org/executorch/stable/getting-started-setup.html#quick-setup-colab-jupyter-notebook-prototype) to start playing around right away
+- Jump straight into LLMs use cases by following specific instructions for [Llama](./examples/models/llama/README.md) and [Llava](./examples/models/llava/README.md)
 
-**[UPDATE - 10/24]** We have added support for running [Llama 3.2 Quantized 1B/3B](./examples/models/llama/README.md) models via ExecuTorch.
-
-## Feedback
+## Feedback and Engagement
 
 We welcome any feedback, suggestions, and bug reports from the community to help
-us improve our technology. Please use the [PyTorch
-Forums](https://discuss.pytorch.org/c/executorch) for discussion and feedback
-about ExecuTorch using the **ExecuTorch** category, and our [GitHub
-repository](https://github.com/pytorch/executorch/issues) for bug reporting.
-
-We recommend using the latest release tag from the
-[Releases](https://github.com/pytorch/executorch/releases) page when developing.
+us improve our technology. Check out the [Discussion Board](https://github.com/pytorch/executorch/discussions) or chat real time with us on [Discord](https://discord.gg/MeacgB7A)
 
 ## Contributing
 
-See [CONTRIBUTING.md](CONTRIBUTING.md) for details about issues, PRs, code
-style, CI jobs, and other development topics.
-
-To connect with us and other community members, we invite you to join PyTorch Slack community by filling out this [form](https://docs.google.com/forms/d/e/1FAIpQLSeADnUNW36fjKjYzyHDOzEB_abKQE9b6gqqW9NXse6O0MWh0A/viewform). Once you've joined, you can:
-* Head to the `#executorch-general` channel for general questions, discussion, and community support.
-* Join the `#executorch-contributors` channel if you're interested in contributing directly to project development.
+We welcome contributions. To get started review the [guidelines](CONTRIBUTING.md) and chat with us on [Discord](https://discord.gg/MeacgB7A)
 
 
 ## Directory Structure
```

backends/cadence/fusion_g3/operators/op_mean.cpp

Lines changed: 27 additions & 24 deletions
```diff
@@ -60,7 +60,7 @@ int prepare_data(
   return num_axis_dims;
 }
 
-Tensor& mean_dim_out(
+Tensor& mean_out(
     KernelRuntimeContext& ctx,
     const Tensor& in,
     optional<ArrayRef<int64_t>> dim_list,
@@ -169,29 +169,32 @@ Tensor& mean_dim_out(
       InvalidArgument,
       out);
 
-  ET_SWITCH_REALHB_TYPES(in.scalar_type(), ctx, "mean.out", CTYPE_IN, [&] {
-    ET_SWITCH_FLOATH_TYPES(
-        out.scalar_type(), ctx, "mean.out", CTYPE_OUT, [&] {
-          CTYPE_OUT* out_data = out.mutable_data_ptr<CTYPE_OUT>();
-          const size_t num =
-              torch::executor::get_reduced_dim_product(in, dim_list);
-          for (size_t out_ix = 0; out_ix < out.numel(); ++out_ix) {
-            CTYPE_OUT sum = 0;
-            if (in.numel() > 0) {
-              sum = torch::executor::
-                  map_reduce_over_dim_list<CTYPE_IN, CTYPE_OUT>(
-                      [](CTYPE_IN v) { return static_cast<CTYPE_OUT>(v); },
-                      [](CTYPE_OUT outv, CTYPE_OUT acc) {
-                        return acc + outv;
-                      },
-                      in,
-                      dim_list,
-                      out_ix);
-            }
-            out_data[out_ix] = sum / static_cast<float>(num);
-          }
-        });
-  });
+  ET_SWITCH_REALHBBF16_TYPES(
+      in.scalar_type(), ctx, "mean.out", CTYPE_IN, [&] {
+        ET_SWITCH_FLOATHBF16_TYPES(
+            out.scalar_type(), ctx, "mean.out", CTYPE_OUT, [&] {
+              CTYPE_OUT* out_data = out.mutable_data_ptr<CTYPE_OUT>();
+              const size_t num =
+                  torch::executor::get_reduced_dim_product(in, dim_list);
+              for (size_t out_ix = 0; out_ix < out.numel(); ++out_ix) {
+                CTYPE_OUT sum = 0;
+                if (in.numel() > 0) {
+                  sum = torch::executor::
+                      map_reduce_over_dim_list<CTYPE_IN, CTYPE_OUT>(
+                          [](CTYPE_IN v) {
+                            return static_cast<CTYPE_OUT>(v);
+                          },
+                          [](CTYPE_OUT outv, CTYPE_OUT acc) {
+                            return acc + outv;
+                          },
+                          in,
+                          dim_list,
+                          out_ix);
+                }
+                out_data[out_ix] = sum / static_cast<float>(num);
+              }
+            });
+      });
   }
 
   return out;
```
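The rewritten kernel keeps the same reduction recipe: each input element is cast to the accumulator type, summed over the dimensions in `dim_list`, and the sum is divided by `num`, the product of the reduced dimension sizes. As a minimal illustration of that map/reduce-over-dim-list structure, here is a pure-Python sketch on a flat row-major buffer (the helper name and layout are hypothetical, not the ExecuTorch API):

```python
import itertools

def mean_over_dims(data, shape, dim_list):
    # `data` is a flat row-major buffer. For each output index, sum the
    # elements along the reduced dims, then divide by their count --
    # mirroring map_reduce_over_dim_list followed by `sum / num`.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    kept = [i for i in range(len(shape)) if i not in dim_list]
    num = 1
    for d in dim_list:
        num *= shape[d]  # product of reduced dim sizes (the divisor)
    out = []
    for kept_idx in itertools.product(*(range(shape[i]) for i in kept)):
        total = 0.0
        for red_idx in itertools.product(*(range(shape[d]) for d in dim_list)):
            offset = sum(strides[i] * v for i, v in zip(kept, kept_idx))
            offset += sum(strides[d] * v for d, v in zip(dim_list, red_idx))
            total += data[offset]  # "map" step: accumulate each element
        out.append(total / num)
    return out

# 2x3 tensor [[1, 2, 3], [4, 5, 6]], mean over dim 1 -> [2.0, 5.0]
print(mean_over_dims([1, 2, 3, 4, 5, 6], [2, 3], [1]))
```

The real kernel additionally widens the dtype dispatch (`REALHBBF16` / `FLOATHBF16`) so BFloat16 inputs and outputs take the same path.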

backends/qualcomm/tests/test_qnn_delegate.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -37,8 +37,9 @@
     skip_annotation,
     update_spill_fill_size,
 )
+from executorch.examples.models.llama.llama_transformer import MOEFeedForward
 
-from executorch.examples.models.llama.llama_transformer import ModelArgs, MOEFeedForward
+from executorch.examples.models.llama.model_args import ModelArgs
 
 from executorch.examples.qualcomm.utils import setup_common_args_and_variables
 
```
(One changed file, 43.9 KB, is not rendered in this view.)

examples/cadence/operators/facto_util.py

Lines changed: 29 additions & 1 deletion
```diff
@@ -22,7 +22,16 @@ def apply_tensor_contraints(op_name: str, tensor_constraints: list[object]) -> N
             tensor_constraints.extend(
                 [
                     cp.Dtype.In(lambda deps: [torch.float]),
-                    cp.Rank.Le(lambda deps: 2**3),
+                    cp.Rank.Le(lambda deps: 2**2),
+                    cp.Value.Ge(lambda deps, dtype, struct: -2),
+                    cp.Value.Le(lambda deps, dtype, struct: 2),
+                ]
+            )
+        case "mean.dim":
+            tensor_constraints.extend(
+                [
+                    cp.Dtype.In(lambda deps: [torch.float]),
+                    cp.Rank.Le(lambda deps: 2**2),
                 ]
             )
         case "exp.default":
@@ -86,8 +95,27 @@ def facto_testcase_gen(op_name: str) -> List[Tuple[List[str], OrderedDict[str, s
                     cp.Value.Le(lambda deps, dtype: 2),
                 ]
             )
+        elif in_spec.type.is_scalar_type():
+            spec.inspec[index].constraints.extend(
+                [
+                    cp.Dtype.In(lambda deps: apply_scalar_contraints(op_name)),
+                ]
+            )
         elif in_spec.type.is_tensor():
             spec.inspec[index].constraints.extend(tensor_constraints)
+        elif in_spec.type.is_dim_list():
+            spec.inspec[index].constraints.extend(
+                [
+                    cp.Length.Ge(lambda deps: 1),
+                    cp.Optional.Eq(lambda deps: False),
+                ]
+            )
+        elif in_spec.type.is_bool():
+            spec.inspec[index].constraints.extend(
+                [
+                    cp.Dtype.In(lambda deps: [torch.bool]),
+                ]
+            )
 
     return [
         (posargs, inkwargs)
```
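The new `mean.dim` case restricts FACTO's generated inputs to float tensors of rank at most 4 (`2**2`), and the `sub.Tensor` case additionally clamps values to [-2, 2]. Conceptually, each `cp.*` entry is a predicate packaged as a callable, and a candidate spec must satisfy all of them. A hedged mini-version of that idea (plain dicts and lambdas; the real FACTO `cp` specification API is richer than this):

```python
# Hypothetical stand-in for FACTO-style constraints: each entry is a
# predicate over a candidate tensor spec, and a spec survives only if
# every predicate accepts it.
constraints = [
    lambda spec: spec["dtype"] == "float",
    lambda spec: spec["rank"] <= 2**2,   # rank at most 4
    lambda spec: spec["min_val"] >= -2,  # values clamped to [-2, 2]
    lambda spec: spec["max_val"] <= 2,
]

candidates = [
    {"dtype": "float", "rank": 3, "min_val": -1, "max_val": 1},
    {"dtype": "int",   "rank": 2, "min_val": 0,  "max_val": 1},  # wrong dtype
    {"dtype": "float", "rank": 5, "min_val": -1, "max_val": 1},  # rank too big
]

accepted = [c for c in candidates if all(check(c) for check in constraints)]
print(accepted)  # only the first candidate survives
```

The `is_dim_list` branch plays the same role for the `dim` argument: generated dim lists must be non-empty (`Length.Ge(1)`) and actually present rather than `None` (`Optional.Eq(False)`).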

examples/cadence/operators/test_g3_ops.py

Lines changed: 29 additions & 0 deletions
```diff
@@ -259,6 +259,35 @@ def test_g3__softmax_out(
 
         self.run_and_verify(model, (inputs,))
 
+    # pyre-ignore[16]: Module `parameterized.parameterized` has no attribute `expand`.
+    @parameterized.expand([*facto_util.facto_testcase_gen("mean.dim")])
+    def test_g3_mean_dim_out(
+        self,
+        posargs: List[int],
+        inkwargs: OrderedDict[str, str],
+    ) -> None:
+        class Meandim(nn.Module):
+            def forward(
+                self,
+                x: torch.Tensor,
+                dim_list: Tuple[int],
+                keepdim: bool,
+                dtype: torch.dtype = torch.float32,
+            ) -> torch.Tensor:
+                return torch.ops.aten.mean.dim(
+                    x,
+                    dim_list,
+                    keepdim,
+                    dtype=dtype,
+                )
+
+        model = Meandim()
+
+        self.run_and_verify(
+            model,
+            inputs=tuple(posargs),
+        )
+
 
 if __name__ == "__main__":
     unittest.main()
```
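The new test exercises `torch.ops.aten.mean.dim(x, dim_list, keepdim, dtype=dtype)` on FACTO-generated inputs through the G3 backend. As a reminder of the operator's `keepdim` semantics, here is a tiny torch-free sketch on a 2-D input reducing the last dimension (illustrative only; `mean_dim` is a hypothetical helper, not the aten op):

```python
def mean_dim(rows, keepdim):
    # Sketch of aten::mean.dim semantics on a 2-D list, reducing dim 1.
    # keepdim=True keeps the reduced dimension around as size 1.
    means = [sum(r) / len(r) for r in rows]
    return [[m] for m in means] if keepdim else means

x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
print(mean_dim(x, keepdim=False))  # [2.0, 5.0]
print(mean_dim(x, keepdim=True))   # [[2.0], [5.0]]
```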

examples/models/llama/TARGETS

Lines changed: 2 additions & 0 deletions
```diff
@@ -14,6 +14,8 @@ runtime.python_library(
     srcs = [
         "llama_transformer.py",
         "rope.py",
+        "attention.py",
+        "model_args.py",
     ],
     _is_external_target = True,
     base_module = "executorch.examples.models.llama",
```
