
Commit a605023

Update on "[Executorch][custom ops] Change lib loading logic to account for package dir"
Just looking at the location of the source file, in this case custom_ops.py, can, and does, yield the wrong location depending on where custom_ops is imported from. If custom_ops is imported from another source file inside the extension folder, e.g. builder.py in extension/llm/export, then, I think, it resolves to the copy installed in site-packages (the pip package). But if it is imported from, say, examples/models/llama/source_transformations/quantized_kv_cache.py (as in the next PR), it seems to resolve to the source location, which in one of the CI jobs is /pytorch/executorch. The library is then searched for in whichever directory the file path resolves to, and that of course does not work when the path resolves to the source location. This PR changes the logic to resolve to the package location instead.

Differential Revision: [D66385480](https://our.internmc.facebook.com/intern/diff/D66385480/)

[ghstack-poisoned]
2 parents: 56d02f7 + 0db3af0
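For context, the gist of the change described above is to anchor the shared-library lookup at the installed package rather than at whatever source file happens to be executing. The sketch below is only an illustration of that idea, not ExecuTorch's actual implementation: the helper name, the library filename (libcustom_ops_aot_lib.so), and the importlib-based lookup are assumptions.

```python
# Minimal sketch, assuming an AOT shared library shipped inside the installed
# package. Helper name, library filename, and lookup strategy are illustrative.
import importlib.util
from pathlib import Path

import torch


def _load_custom_ops_lib() -> None:
    # Before: Path(__file__).parent could resolve to the source checkout
    # (e.g. /pytorch/executorch in CI) depending on where the import came from.
    # After: resolve relative to the installed executorch package instead.
    spec = importlib.util.find_spec("executorch.extension.llm.custom_ops")
    package_dir = Path(spec.origin).parent  # directory of the installed package
    lib_path = package_dir / "libcustom_ops_aot_lib.so"  # assumed library name
    if lib_path.exists():
        torch.ops.load_library(str(lib_path))
```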

examples/models/llama/eval_llama_lib.py

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ def __init__(
 
         # Note: import this after portable_lib
         from executorch.extension.llm.custom_ops import (  # noqa
-            sdpa_with_kv_cache,  # usort: skip
+            custom_ops,  # usort: skip
         )
         from executorch.kernels import quantized  # noqa
 
