
Commit 8bb3c18

Update on "Reuse GELU implementation from PyTorch core"
kernels/optimized doesn't need to support embedded systems, so it can just take a header-only dep on PyTorch. Note that, because we will pick up Sleef internally and ignore it externally thanks to ATen vec, this PR gets to enable optimized GELU in OSS.

Testing: CI to make sure this doesn't break mobile build modes; happy to take advice on anything not currently covered that might break.

Differential Revision: [D66335522](https://our.internmc.facebook.com/intern/diff/D66335522/)

[ghstack-poisoned]
2 parents: abc7e4e + 70b5627
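For context, a minimal sketch of the kind of kernel this header-only dep enables: a float GELU built on ATen's `at::vec::Vectorized<T>`. The function name `gelu_f32` and the loop structure are illustrative assumptions, not the actual kernels/optimized code:

```cpp
// Sketch only: shows how a GELU kernel can lean on ATen's header-only
// Vectorized<T>. Vectorized<float>::erf() dispatches to Sleef when ATen is
// built with it and to a portable fallback otherwise, so the same source
// works both internally and in OSS.
#include <ATen/cpu/vec/vec.h>

#include <cmath>
#include <cstddef>

// Erf-formulation GELU: gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))).
void gelu_f32(const float* in, float* out, std::size_t n) {
  using Vec = at::vec::Vectorized<float>;
  constexpr float kInvSqrt2 = 0.70710678118654752440f;
  std::size_t i = 0;
  // Vectorized main loop over full SIMD lanes.
  for (; i + Vec::size() <= n; i += Vec::size()) {
    const Vec x = Vec::loadu(in + i);
    const Vec y = Vec(0.5f) * x * (Vec(1.0f) + (x * Vec(kInvSqrt2)).erf());
    y.store(out + i);
  }
  // Scalar tail for the remaining elements.
  for (; i < n; ++i) {
    const float x = in[i];
    out[i] = 0.5f * x * (1.0f + std::erf(x * kInvSqrt2));
  }
}
```

Because the erf call resolves through ATen vec, the kernel source never mentions Sleef directly, which is what lets the optimized GELU ship in OSS while still picking up Sleef in internal builds.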

File tree

1 file changed: +2 -2 lines changed
examples/models/llama/runner/CMakeLists.txt

```diff
@@ -38,7 +38,7 @@ include(${EXECUTORCH_SRCS_FILE})
 list(TRANSFORM _llama_runner__srcs PREPEND "${EXECUTORCH_ROOT}/")
 
 target_include_directories(
-  extension_module INTERFACE ${_common_include_directories} ${EXECUTORCH_INCLUDE_DIRS}
+  extension_module INTERFACE ${_common_include_directories}
 )
 
 list(
@@ -82,6 +82,6 @@ set(llama_runner_deps executorch extension_data_loader extension_module
 target_link_libraries(llama_runner PUBLIC ${llama_runner_deps})
 
 target_include_directories(
-  llama_runner INTERFACE ${_common_include_directories} ${EXECUTORCH_INCLUDE_DIRS}
+  llama_runner INTERFACE ${_common_include_directories} ${EXECUTORCH_ROOT}
 )
 target_compile_options(llama_runner PUBLIC ${_preprocessor_flag})
```
