Commit 725f2aa

Merge branch 'main' into fix_getting_started_cpp_example
2 parents: 6749985 + 2fa749c

File tree: 113 files changed (+811 / -573 lines)


CONTRIBUTING.md

Lines changed: 6 additions & 10 deletions
@@ -1,7 +1,6 @@
 Thank you for your interest in contributing to ExecuTorch! We want to make
 it easy to contribute to this project.
 
-
 
 ## Dev Install
 
@@ -91,7 +90,7 @@ executorch
 │ └── <a href="runtime/platform">platform</a> - Layer between architecture specific code and portable C++.
 ├── <a href="schema">schema</a> - ExecuTorch PTE file format flatbuffer schemas.
 ├── <a href="scripts">scripts</a> - Utility scripts for building libs, size management, dependency management, etc.
-├── <a href="shim">shim</a> - Compatibility layer between OSS and Internal builds.
+├── <a href="shim_et">shim_et</a> - Compatibility layer between OSS and Internal builds.
 ├── <a href="test">test</a> - Broad scoped end-to-end tests.
 ├── <a href="third-party">third-party</a> - Third-party dependencies.
 ├── <a href="tools">tools</a> - Tools for building ExecuTorch from source, for different built tools (CMake, Buck).
@@ -192,9 +191,6 @@ in the Github repo.
 
 ## Coding Style
 
-Goal: Encourage standards that make it easier to read, edit, maintain, and debug
-the ExecuTorch code.
-
 ### lintrunner
 
 We use [`lintrunner`](https://pypi.org/project/lintrunner/) to help make sure the
@@ -259,7 +255,7 @@ toolchains, and having access to relatively modern C++ features.
 
 #### C/C++ standard library usage
 
-**Restricted usage of the C++ standard library.**
+**Restricted usage of the C++ standard library**
 
 Rationale: ExecuTorch is intended to be portable to bare-metal systems that lack
 certain features, like dynamic memory, threading, and locking, required by parts
@@ -280,7 +276,7 @@ careful to also manually destroy objects initialized in this way.
 
 #### C++ language features
 
-**Exceptions: Do not use.**
+**Exceptions: Do not use**
 - Rationale: Exceptions are not widely supported on some classes of
   microcontrollers and DSPs, and they can significantly increase binary size.
 
@@ -289,12 +285,12 @@ must work with threading**
 - Rationale: The core runtime must work on systems that do not have threading
   support.
 
-**RTTI, dynamic_cast, and `<typeid>`: Do not use.**
+**RTTI, dynamic_cast, and `<typeid>`: Do not use**
 - Rationale: RTTI adds extra data to every virtual class. ExecuTorch doesn't
   have a strong need for `dynamic_cast` and friends, so it's better to reduce
   the binary size.
 
-**Templates and template metaprogramming: Be careful and avoid if possible.**
+**Templates and template metaprogramming: Be careful and avoid if possible**
 - Rationale: Most templating results in code generation, and is one of the most
   common sources of binary bloat. Some use of templates is fine (e.g. an
   `ArrayRef<T>`, or code that handles multiple `ScalarType` types), but for the
@@ -359,7 +355,7 @@ docs](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/
 for basics.
 
 1. Push your branch to your fork of `pytorch/executorch`. Most people do not
-   have permission to push a branch directoy to the upstream repo.
+   have permission to push a branch directory to the upstream repo.
 1. Create your PR
    - Use the `main` branch as the base.
    - Give the PR a clear and descriptive title. It will become the title of the

README.md

Lines changed: 2 additions & 2 deletions
@@ -49,9 +49,9 @@ Key value propositions of ExecuTorch are:
 ## Getting Started
 To get started you can:
 
-- Visit the [Step by Step Tutorial](https://pytorch.org/executorch/main/index.html) on getting things running locally and deploy a model to a device
+- Visit the [Step by Step Tutorial](https://pytorch.org/executorch/main/index.html) to get things running locally and deploy a model to a device
 - Use this [Colab Notebook](https://pytorch.org/executorch/stable/getting-started-setup.html#quick-setup-colab-jupyter-notebook-prototype) to start playing around right away
-- Jump straight into LLMs use cases by following specific instructions for [Llama](./examples/models/llama/README.md) and [Llava](./examples/models/llava/README.md)
+- Jump straight into LLM use cases by following specific instructions for [Llama](./examples/models/llama/README.md) and [Llava](./examples/models/llava/README.md)
 
 ## Feedback and Engagement
 
backends/arm/test/models/test_mobilenet_v3_arm.py

Lines changed: 3 additions & 3 deletions
@@ -46,7 +46,7 @@ def test_mv3_tosa_BI():
         aten_op=[],
         exir_op=[],
         use_to_edge_transform_and_lower=True,
-        atol=0.3,
+        atol=0.5,
         qtol=1,
     )
     pipeline.run()
@@ -63,7 +63,7 @@ def test_mv3_u55_BI():
         exir_ops=[],
         run_on_fvp=True,
         use_to_edge_transform_and_lower=True,
-        atol=0.3,
+        atol=0.5,
         qtol=1,
     )
     pipeline.run()
@@ -80,7 +80,7 @@ def test_mv3_u85_BI():
         exir_ops=[],
         run_on_fvp=True,
         use_to_edge_transform_and_lower=True,
-        atol=0.3,
+        atol=0.5,
         qtol=1,
    )
    pipeline.run()
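
For context on this change: `atol` is the absolute tolerance the test pipeline allows between the quantized model's output and the reference output, so raising it from 0.3 to 0.5 accepts larger per-element error. A minimal sketch of what an absolute-tolerance comparison means (illustrative only; `outputs_match`, `out`, and `ref` are hypothetical names, not the Arm test pipeline's API):

```python
# Illustrative sketch only: what an absolute-tolerance (atol) comparison of a
# model output against a reference output looks like. The helper and tensors
# below are hypothetical; the Arm test pipeline's own comparison code may differ.
import torch


def outputs_match(output: torch.Tensor, reference: torch.Tensor, atol: float) -> bool:
    """True if every element of `output` is within `atol` of `reference`."""
    return torch.allclose(output, reference, rtol=0.0, atol=atol)


out = torch.tensor([0.10, 0.55])
ref = torch.tensor([0.00, 0.20])
assert outputs_match(out, ref, atol=0.5)       # passes with the widened tolerance
assert not outputs_match(out, ref, atol=0.3)   # 0.35 error exceeds the old tolerance
```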

backends/arm/test/ops/test_sigmoid_16bit.py

Lines changed: 6 additions & 4 deletions
@@ -81,7 +81,7 @@ def forward(self, x):
 
 
 @common.parametrize("test_data", test_data_suite)
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_tosa_BI(test_data):
     pipeline = TosaPipelineBI(
         Sigmoid(), (test_data(),), Sigmoid.aten_op, Sigmoid.exir_op
@@ -97,7 +97,7 @@ def test_sigmoid_tosa_BI(test_data):
         "ramp": "AssertionError: Output 0 does not match reference output. MLETORCH-787"
     },
 )
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_add_sigmoid_tosa_BI(test_data):
     pipeline = TosaPipelineBI(
         SigmoidAddSigmoid(), (test_data(),), Sigmoid.aten_op, Sigmoid.exir_op
@@ -110,6 +110,7 @@ def test_sigmoid_add_sigmoid_tosa_BI(test_data):
     "test_data",
     test_data_suite,
 )
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_tosa_u55(test_data):
     pipeline = OpNotSupportedPipeline(
         Sigmoid(), (test_data(),), "TOSA-0.80+BI+u55", {Sigmoid.exir_op: 1}
@@ -122,6 +123,7 @@ def test_sigmoid_tosa_u55(test_data):
     "test_data",
     test_data_suite,
 )
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_add_sigmoid_tosa_u55(test_data):
     pipeline = OpNotSupportedPipeline(
         SigmoidAddSigmoid(),
@@ -135,7 +137,7 @@ def test_sigmoid_add_sigmoid_tosa_u55(test_data):
 
 
 @common.parametrize("test_data", test_data_suite)
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 @common.XfailIfNoCorstone320
 def test_sigmoid_tosa_u85(test_data):
     pipeline = EthosU85PipelineBI(
@@ -152,7 +154,7 @@ def test_sigmoid_tosa_u85(test_data):
         "ramp": "AssertionError: Output 0 does not match reference output.",
     },
 )
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 @common.XfailIfNoCorstone320
 def test_sigmoid_add_sigmoid_tosa_u85(test_data):
     pipeline = EthosU85PipelineBI(
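
For readers unfamiliar with the marker: `@pytest.mark.flaky(reruns=N)` comes from the pytest-rerunfailures plugin and reruns a failing test up to N more times, reporting a failure only if every attempt fails, so bumping `reruns` from 5 to 32 gives the Vela-related flakiness many more chances to pass. A self-contained sketch of the mechanism (the test below is a toy example, not part of this suite, and assumes pytest-rerunfailures is installed):

```python
# Toy illustration of pytest-rerunfailures' flaky marker; not part of the
# ExecuTorch test suite. Requires the pytest-rerunfailures plugin.
import random

import pytest


@pytest.mark.flaky(reruns=32)  # retry up to 32 times before reporting failure
def test_sometimes_flaky():
    # Fails roughly half of the attempts; with 32 reruns the probability that
    # every attempt fails (and the test is reported as failing) is ~2**-33.
    assert random.random() < 0.5
```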

backends/arm/test/ops/test_sigmoid_32bit.py

Lines changed: 6 additions & 4 deletions
@@ -97,7 +97,7 @@ def forward(self, x):
 
 
 @common.parametrize("test_data", test_data_suite)
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_tosa_BI(test_data):
     pipeline = TosaPipelineBI(
         Sigmoid(),
@@ -110,7 +110,7 @@ def test_sigmoid_tosa_BI(test_data):
 
 
 @common.parametrize("test_data", test_data_suite)
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_add_sigmoid_tosa_BI(test_data):
     pipeline = TosaPipelineBI(
         SigmoidAddSigmoid(),
@@ -123,6 +123,7 @@ def test_sigmoid_add_sigmoid_tosa_BI(test_data):
 
 
 @common.parametrize("test_data", test_data_suite)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_tosa_u55(test_data):
     pipeline = OpNotSupportedPipeline(
         Sigmoid(), (test_data(),), "TOSA-0.80+BI+u55", {Sigmoid.exir_op: 1}
@@ -132,6 +133,7 @@ def test_sigmoid_tosa_u55(test_data):
 
 
 @common.parametrize("test_data", test_data_suite)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 def test_sigmoid_add_sigmoid_tosa_u55(test_data):
     pipeline = OpNotSupportedPipeline(
         SigmoidAddSigmoid(),
@@ -145,7 +147,7 @@ def test_sigmoid_add_sigmoid_tosa_u55(test_data):
 
 
 @common.parametrize("test_data", test_data_suite)
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 @common.XfailIfNoCorstone320
 def test_sigmoid_tosa_u85(test_data):
     pipeline = EthosU85PipelineBI(
@@ -162,7 +164,7 @@ def test_sigmoid_tosa_u85(test_data):
         "ramp": "AssertionError: Output 0 does not match reference output.",
     },
 )
-@pytest.mark.flaky(reruns=5)
+@pytest.mark.flaky(reruns=32)  # Flaky due to Vela bug: MLBEDSW-10642
 @common.XfailIfNoCorstone320
 def test_sigmoid_add_sigmoid_tosa_u85(test_data):
     pipeline = EthosU85PipelineBI(

backends/qualcomm/_passes/decompose_einsum.py

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,8 @@
 from executorch.exir.pass_base import ExportPass, PassResult
 from torch.fx.experimental.proxy_tensor import make_fx
 
+from .utils import copy_nn_module_stack
+
 
 class DecomposeEinsum(ExportPass):
     """
@@ -36,6 +38,7 @@ def call(self, graph_module: torch.fx.GraphModule) -> PassResult:
                 remap[f"arg1_{i+1}"] = arg
 
             for decomposed_node in decomposed_module.graph.nodes:
+                copy_nn_module_stack(node, decomposed_node)
                 # This is the arg[0] equation string, which is not required anymore after decomposition
                 if "arg0" in decomposed_node.name:
                     continue

backends/qualcomm/_passes/decompose_linalg_vector_norm.py

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,8 @@
 from executorch.exir import to_edge
 from executorch.exir.pass_base import ExportPass, PassResult
 
+from .utils import copy_nn_module_stack
+
 
 class LinalgVectorNorm(torch.nn.Module):
     def __init__(self, exp, dim, keepdim):
@@ -62,6 +64,7 @@ def call(self, graph_module: torch.fx.GraphModule) -> PassResult:
             remap = {"x": node.args[0]}
 
             for decomposed_node in decomposed_module.graph.nodes:
+                copy_nn_module_stack(node, decomposed_node)
                 # no need to copy existent 'output'
                 if decomposed_node.op == "output":
                     for user in node.users.copy():

backends/qualcomm/_passes/utils.py

Lines changed: 8 additions & 0 deletions
@@ -121,6 +121,14 @@ def get_passes_dependency_for_capture_program():
     }
 
 
+def copy_nn_module_stack(src, target):
+    """
+    Copy meta["nn_module_stack"] from src node to target node if existing.
+    """
+    if value := src.meta.get("nn_module_stack"):
+        target.meta["nn_module_stack"] = value
+
+
 def is_float_tensor(node: torch.fx.Node) -> bool:
     if "val" not in node.meta or not isinstance(node.meta["val"], FakeTensor):
         return False
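
Taken together with the two pass changes above, this helper lets a decomposition pass copy the original node's `nn_module_stack` metadata onto every node produced by the decomposition, so module-level source attribution survives the rewrite. A standalone sketch of that idea (the toy module, the second traced graph standing in for a decomposed graph, and the metadata value are all illustrative, not Qualcomm backend code):

```python
# Standalone sketch of preserving nn_module_stack metadata across a
# decomposition. The module, the second traced graph (standing in for the
# decomposed graph), and the metadata value are illustrative only.
import torch
from torch.fx import symbolic_trace


def copy_nn_module_stack(src, target):
    """Copy meta["nn_module_stack"] from src node to target node if present."""
    if value := src.meta.get("nn_module_stack"):
        target.meta["nn_module_stack"] = value


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)


original = symbolic_trace(TinyModel())
decomposed = symbolic_trace(TinyModel())  # stands in for the decomposed graph

# Give the original call_module node some module-stack metadata (torch.export
# populates this for real graphs), then copy it onto every decomposed node.
src_node = next(n for n in original.graph.nodes if n.op == "call_module")
src_node.meta["nn_module_stack"] = {"linear": ("linear", torch.nn.Linear)}

for node in decomposed.graph.nodes:
    copy_nn_module_stack(src_node, node)

assert all("nn_module_stack" in n.meta for n in decomposed.graph.nodes)
```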
