Commit b05885e

Update base for Update on "[XNNPACK][Weights Cache] Enable in XNNPACK"
We enable the XNNPACK weights cache in the XNNPACK backend. The weights cache is initialized for the runtime with the named data map and a memory allocator (for now the memory allocator is not used, but in the future it could be used to manage the memory for packed weights).

Before creating the runtime, we first initialize the weights cache; this sets the finalization state to false. As we add weight/bias tensors to the graph, we load them through the named data map in the weights cache and keep a map from pointer to name. When XNNPACK creates the runtime and packs the weights, it calls the weights cache method look_up_or_insert. We use the pointers provided in the cache key to look up their names and concatenate them (e.g. "weightsbias"), then insert the packed weights under that key. On future lookups, we reuse the buffer cached under that packed-tensor key, which saves us from packing again.

After creating the runtime and packing the weights, we finalize the cache, which sets is_finalized to true. We also free all unpacked buffers loaded from the named data map, as they are no longer needed, and we keep reference counts for the packed weights, incrementing the counts of those used by this runtime. We return a vector of all the packed weight names to the XNNExecutor. When the XNNExecutor is destroyed, we decrement the counts of the packed buffers and destroy them if necessary.

Note that this feature is gated behind the XNN_ENABLE_WEIGHTS_CACHE flag.

Since the weights_cache is a global member of the singleton XNNPACK backend class, and it is read/write, we add a mutex to ensure that access to the weights_cache is thread safe. With the new mutex, the lock hierarchy is: workspace_mutex_ -> weights_cache_mutex_

Differential Revision: [D70885926](https://our.internmc.facebook.com/intern/diff/D70885926/)

[ghstack-poisoned]
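For illustration, here is a minimal C++ sketch of the bookkeeping described above. The class, method names, and signatures are hypothetical and do not match the actual ExecuTorch/XNNPACK weights-cache interface; the sketch only shows the idea: unpacked buffer pointers are mapped to names, the names are concatenated to form the packed-weights key, and packed buffers are reference counted and freed when the last executor using them goes away.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical, simplified sketch of the weights-cache bookkeeping.
// Names and signatures are illustrative only.
class WeightsCacheSketch {
 public:
  // Graph-building time: a weight/bias buffer was loaded through the named
  // data map; remember its name keyed by pointer so it can be identified
  // again when XNNPACK packs it.
  void register_unpacked(const void* ptr, std::string name) {
    unpacked_ptr_to_name_[ptr] = std::move(name);
  }

  // Packing time: the pointers in the cache key are mapped back to their
  // names and concatenated (e.g. "weight" + "bias" -> "weightbias"). A hit
  // reuses the previously packed buffer; a miss stores the new one. Either
  // way the reference count for that packed buffer is bumped.
  void* look_up_or_insert(const std::vector<const void*>& key_ptrs,
                          std::vector<uint8_t> packed) {
    std::string key;
    for (const void* p : key_ptrs) {
      key += unpacked_ptr_to_name_.at(p);
    }
    auto it = packed_buffers_.find(key);
    if (it == packed_buffers_.end()) {
      it = packed_buffers_.emplace(key, std::move(packed)).first;
    }
    ++ref_counts_[key];
    return it->second.data();
  }

  // After runtime creation: mark the cache finalized and drop the
  // unpacked-buffer map, since those buffers are no longer needed.
  void finalize() {
    is_finalized_ = true;
    unpacked_ptr_to_name_.clear();
  }

  bool is_finalized() const { return is_finalized_; }

  // Executor destruction: decrement the count and free the packed buffer
  // once no live runtime references it.
  void release(const std::string& key) {
    if (--ref_counts_[key] == 0) {
      packed_buffers_.erase(key);
      ref_counts_.erase(key);
    }
  }

 private:
  bool is_finalized_ = false;
  std::unordered_map<const void*, std::string> unpacked_ptr_to_name_;
  std::unordered_map<std::string, std::vector<uint8_t>> packed_buffers_;
  std::unordered_map<std::string, int> ref_counts_;
};
```

In the real backend this logic is gated behind XNN_ENABLE_WEIGHTS_CACHE and guarded by weights_cache_mutex_, acquired after workspace_mutex_ per the lock hierarchy above.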
1 parent a76d47f commit b05885e

File tree

2 files changed (+3, -1)
backends/xnnpack/test/targets.bzl

Lines changed: 2 additions & 0 deletions
@@ -27,13 +27,15 @@ def define_common_targets():
             third_party_dep("FP16"),
             "//executorch/runtime/core/exec_aten/testing_util:tensor_util",
             "//executorch/runtime/core/exec_aten/util:scalar_type_util",
+            "//executorch/backends/xnnpack:xnnpack_backend",
         ],
     )

     runtime.cxx_test(
         name = "test_xnn_weights_cache",
         srcs = ["runtime/test_xnn_weights_cache.cpp"],
         deps = [
+            third_party_dep("XNNPACK"),
             "//executorch/backends/xnnpack:xnnpack_backend",
             "//executorch/runtime/executor:pte_data_map",
             "//executorch/extension/data_loader:file_data_loader",

exir/backend/test/test_backend_with_named_data_map.py

Lines changed: 1 addition & 1 deletion
@@ -67,7 +67,7 @@ def false_branch(self, x):

     def forward(self, x, y):
         z = x / y
-        z = torch.cond(z > 1, self.true_branch, self.false_branch, [x])
+        z = torch.cond(z.sum() > 0, self.true_branch, self.false_branch, [x])
         return z - z

 ep = to_edge(torch.export.export(M(), (torch.randn(1, 2), torch.randn(1, 2))))
