
Commit 515614f

Update base for Update on "[Executorch] Add quantized kv cache to oss ci"
Fixes to make sure the quantized kv cache works in OSS.

Differential Revision: [D66269487](https://our.internmc.facebook.com/intern/diff/D66269487/)

[ghstack-poisoned]
1 parent d9627a3 · commit 515614f

File tree: 1 file changed (+11 −4 lines)


extension/llm/custom_ops/custom_ops.py

Lines changed: 11 additions & 4 deletions
@@ -17,16 +17,23 @@
 
 from torch.library import impl
 
-# TODO rename this file to custom_ops_meta_registration.py
 try:
     op = torch.ops.llama.sdpa_with_kv_cache.default
     assert op is not None
     op2 = torch.ops.llama.fast_hadamard_transform.default
     assert op2 is not None
 except:
-    path = Path(__file__).parent.resolve()
-    logging.info(f"Looking for libcustom_ops_aot_lib.so in {path}")
-    libs = list(path.glob("libcustom_ops_aot_lib.*"))
+    import glob
+
+    import executorch
+
+    executorch_package_path = executorch.__path__[0]
+    logging.info(f"Looking for libcustom_ops_aot_lib.so in {executorch_package_path}")
+    libs = list(
+        glob.glob(
+            f"{executorch_package_path}/**/libquantized_ops_aot_lib.*", recursive=True
+        )
+    )
     assert len(libs) == 1, f"Expected 1 library but got {len(libs)}"
     logging.info(f"Loading custom ops library: {libs[0]}")
     torch.ops.load_library(libs[0])
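For readers less familiar with this fallback pattern: the new except branch locates the compiled AOT ops shared library inside the installed executorch package (via executorch.__path__ and a recursive glob) and registers it with torch.ops.load_library. The sketch below restates that pattern as a standalone helper; the function name load_custom_ops_from_package, the narrowed exception handling, and the early-return check are illustrative assumptions rather than part of this commit, while the library filename pattern is taken from the diff above.

    import glob
    import logging

    import torch


    def load_custom_ops_from_package() -> None:
        # Hypothetical helper: load the quantized/custom ops AOT library from the
        # installed executorch package if the llama ops are not yet registered.
        try:
            # PyTorch raises AttributeError (RuntimeError in some versions) when an
            # op namespace entry does not exist, so this doubles as a registration check.
            _ = torch.ops.llama.sdpa_with_kv_cache.default
            return  # ops already registered, nothing to load
        except (AttributeError, RuntimeError):
            pass

        import executorch  # imported lazily, as in the diff above

        package_path = executorch.__path__[0]
        logging.info(f"Looking for the custom ops library under {package_path}")

        # Filename pattern copied from the diff; the suffix is left as a wildcard
        # because the shared-library extension differs across platforms.
        libs = glob.glob(
            f"{package_path}/**/libquantized_ops_aot_lib.*", recursive=True
        )
        assert len(libs) == 1, f"Expected 1 library but got {len(libs)}"

        logging.info(f"Loading custom ops library: {libs[0]}")
        torch.ops.load_library(libs[0])

Compared with the bare except: in the diff, catching only AttributeError and RuntimeError avoids swallowing signals such as KeyboardInterrupt; whether that matters here is a style choice rather than a functional change.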
