1 parent f89cc69 commit 7ee7577
CMakeLists.txt
@@ -122,6 +122,7 @@ if (LLAMA_BUILD)
         llama_cpp_python_install_target(ggml-rpc)
         llama_cpp_python_install_target(ggml-sycl)
         llama_cpp_python_install_target(ggml-vulkan)
+        llama_cpp_python_install_target(ggml-webgpu)

     # Workaround for Windows + CUDA https://github.com/abetlen/llama-cpp-python/issues/563
     if (WIN32)
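
The change registers the ggml-webgpu backend library with the project's existing `llama_cpp_python_install_target` helper, so the WebGPU shared library gets installed alongside the other ggml backends when `LLAMA_BUILD` is enabled. The exact definition of that helper lives elsewhere in the repository's CMakeLists.txt; as a rough, hypothetical sketch of what such a per-backend install helper typically does (the body below is an assumption, not the project's actual implementation):

```cmake
# Hypothetical sketch of a per-backend install helper.
# The real llama_cpp_python_install_target in llama-cpp-python
# may differ; this only illustrates the general pattern.
function(example_install_target target)
    # Skip silently if this backend was not built (e.g. WebGPU
    # support disabled at configure time).
    if (NOT TARGET ${target})
        return()
    endif()
    # Copy the backend's shared library into the Python package
    # directory so it ships inside the wheel.
    install(
        TARGETS ${target}
        LIBRARY DESTINATION llama_cpp/lib
        RUNTIME DESTINATION llama_cpp/lib
    )
endfunction()
```

Guarding on `TARGET` keeps the call safe on configurations where a given backend (here, ggml-webgpu) is not compiled in.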