Commit 4127895

Update on "[ExecuTorch][to_backend] Enable passing Delegation Spec to to_backend"
Support the entire-graph delegation flow through EdgeProgramManager's to_backend.

### Motivation

A current use case for backend lowering is the `to_backend(backend_id, exported_program, compile_spec)` API, which lowers the entire exported program to the specified backend_id. However, lowering via the EdgeProgramManager only allows partitioner-based lowering. The EdgeProgramManager is the main component that enables support for multiple methods, so backends that rely on the old `to_backend(backend_id, ...)` API cannot export ExecuTorch models with multiple methods.

### Design

We extend EdgeProgramManager so that the Partitioner argument can be replaced by a DelegationSpec. A DelegationSpec is essentially a wrapper around the backend_id and the compile specs, so anywhere a partitioner would be supplied to lower a graph, a delegation spec can be used instead to lower the entire graph.

### Intended Flow

```
del_spec = DelegationSpec("BackendWithCompilerDemo", [CompileSpec(...)])

encode_graph = torch.export.export(Encoder(), sample_inputs)
decode_graph = torch.export.export(Decoder(), sample_inputs)

edge_manager = to_edge({
    "encode": encode_graph,
    "decode": decode_graph,
})

lowered_edge_manager = edge_manager.to_backend(del_spec)

# or if you want to specify which methods to lower with del_spec
lowered_edge_manager = edge_manager.to_backend({
    "encode": del_spec,
})
```

Differential Revision: [D69086565](https://our.internmc.facebook.com/intern/diff/D69086565/)

cc cccclai

[ghstack-poisoned]
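For context on the design above, the spec only needs to carry a backend identifier plus its compile specs. Below is a minimal sketch of such a wrapper, assuming dataclass-style fields named `backend_id` and `compile_specs`; the actual definition lives in ExecuTorch's `exir.backend` package and may differ in detail.

```python
from dataclasses import dataclass, field
from typing import List

from executorch.exir.backend.compile_spec_schema import CompileSpec


@dataclass
class DelegationSpec:
    """Sketch: pairs a backend id with the compile specs handed to its backend."""

    # Name the backend was registered under (e.g. "BackendWithCompilerDemo").
    backend_id: str
    # Backend-specific options forwarded verbatim to that backend's preprocess step.
    compile_specs: List[CompileSpec] = field(default_factory=list)
```

With such a wrapper, `edge_manager.to_backend(del_spec)` can behave like an "accept every node" partitioner pinned to a single backend, which is the role AllNodePartitioner plays in the diffs below.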
2 parents a0c6cbe + ddd25de commit 4127895

File tree

2 files changed: +6, -5 lines


exir/backend/canonical_partitioners/all_node_partitioner.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -16,6 +16,7 @@
 )
 from torch._export.utils import is_buffer, is_lifted_tensor_constant, is_param
 
+
 def is_non_tensor_placeholder(node: torch.fx.Node, ep: ExportedProgram) -> bool:
     """
     Returns true if the node is a placeholder node and it is not a tensor
```
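The change to this file is whitespace only, but the helper it touches is central to how AllNodePartitioner decides which placeholders to hand to the backend. The sketch below is an illustration built only from the visible signature, docstring, and imports; it is not the file's actual body, and the `node.meta["val"]` fallback is an assumption.

```python
import torch
from torch.export import ExportedProgram
from torch._export.utils import is_buffer, is_lifted_tensor_constant, is_param


def is_non_tensor_placeholder_sketch(node: torch.fx.Node, ep: ExportedProgram) -> bool:
    """Illustrative only: true if `node` is a placeholder that is not tensor-backed."""
    if node.op != "placeholder":
        return False
    # Parameters, buffers, and lifted tensor constants are always tensor inputs.
    if is_param(ep, node) or is_buffer(ep, node) or is_lifted_tensor_constant(ep, node):
        return False
    # For remaining placeholders, fall back to the example value recorded at export
    # time (e.g. SymInt or scalar inputs carry no tensor value here).
    return not isinstance(node.meta.get("val"), torch.Tensor)
```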

exir/backend/test/test_backends.py

Lines changed: 5 additions & 5 deletions
```diff
@@ -1278,7 +1278,7 @@ def __init__(self):
 
             def forward(self, x):
                 return [torch.sin(x)]
-
+
 
         sin_module = SinModule()
         model_inputs = (torch.ones(1),)
@@ -1372,14 +1372,14 @@ def inputs(self):
         sin_module = SinModule()
         max_value_sin = sin_module.inputs()[0].shape[0]
         sin_partitioner = AllNodePartitioner(
-            "BackendWithCompilerDemo",
+            "BackendWithCompilerDemo",
             [CompileSpec("max_value", bytes([max_value_sin]))],
         )
 
         add_mul_module = AddMulModule()
         max_value_add_mul = add_mul_module.inputs()[0].shape[0]
         add_mul_partitioner = AllNodePartitioner(
-            "BackendWithCompilerDemo",
+            "BackendWithCompilerDemo",
             [CompileSpec("max_value", bytes([max_value_add_mul]))],
         )
 
@@ -1430,8 +1430,8 @@ def inputs(self):
         executorch_module = _load_for_executorch_from_buffer(buff)
 
         for method_name, module in {
-            "sin": sin_module,
-            "add_mul": add_mul_module
+            "sin": sin_module,
+            "add_mul": add_mul_module,
         }.items():
             inputs_flattened, _ = tree_flatten(module.inputs())
             model_outputs = executorch_module.run_method(
```
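Taken together, these test changes exercise the multi-method path end to end: one AllNodePartitioner per method, a single EdgeProgramManager, and a runtime check of both methods. The sketch below reconstructs that flow under stated assumptions: only `SinModule.forward` is visible in the diff, so the `AddMulModule` body, input shapes, and surrounding scaffolding are plausible stand-ins, not the test's exact code.

```python
import torch
from executorch.exir import to_edge
from executorch.exir.backend.canonical_partitioners.all_node_partitioner import (
    AllNodePartitioner,
)
from executorch.exir.backend.compile_spec_schema import CompileSpec


class SinModule(torch.nn.Module):
    def forward(self, x):
        return [torch.sin(x)]

    def inputs(self):
        return (torch.ones(1),)


class AddMulModule(torch.nn.Module):
    # Assumed body: the diff only references this module by name.
    def forward(self, a, x, b):
        return torch.add(torch.mm(a, x), b)

    def inputs(self):
        return (torch.ones(2, 2), torch.ones(2, 2), torch.ones(2, 2))


sin_module = SinModule()
add_mul_module = AddMulModule()

# One ExportedProgram per method, combined into a single EdgeProgramManager.
edge_manager = to_edge({
    "sin": torch.export.export(sin_module, sin_module.inputs()),
    "add_mul": torch.export.export(add_mul_module, add_mul_module.inputs()),
})

# Per-method AllNodePartitioner: each delegates its whole method to the demo
# backend with a method-specific "max_value" compile spec, as in the test.
lowered = edge_manager.to_backend({
    "sin": AllNodePartitioner(
        "BackendWithCompilerDemo",
        [CompileSpec("max_value", bytes([sin_module.inputs()[0].shape[0]]))],
    ),
    "add_mul": AllNodePartitioner(
        "BackendWithCompilerDemo",
        [CompileSpec("max_value", bytes([add_mul_module.inputs()[0].shape[0]]))],
    ),
})

# `buff` is what the test then hands to _load_for_executorch_from_buffer(buff)
# before calling run_method("sin", ...) and run_method("add_mul", ...).
buff = lowered.to_executorch().buffer
```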
