vulkan: fuse adds #15252


Merged: 3 commits into ggml-org:master on Aug 16, 2025

Conversation

@jeffbolznv (Collaborator)

Fuse adds that have the same shape, which are common in MoE models. It currently fuses up to 6 adds, because we assume no more than 8 descriptors per dispatch; this could be changed.
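
The 6-add cap falls out of descriptor arithmetic: fusing N same-shape adds (out = a0 + a1 + ... + aN) binds N+1 input buffers plus one output buffer, i.e. N+2 descriptors, and 6+2 = 8. A minimal sketch of the counting rule under that assumption (standalone C++, not the actual ggml-vulkan code):

```cpp
// Hedged sketch, not the PR's code: with an assumed budget of 8
// descriptors per dispatch, N+2 descriptors for N fused adds caps N at 6.
#include <array>
#include <cstdio>
#include <vector>

struct Node {                                  // hypothetical graph node
    bool                is_add;
    std::array<long, 4> ne;                    // tensor shape, ggml-style
};

constexpr int kMaxDescriptors = 8;             // assumed per-dispatch limit
constexpr int kMaxFusedAdds   = kMaxDescriptors - 2;  // = 6

// Count how many consecutive same-shape adds starting at `start` could be
// folded into a single dispatch.
static int count_fusable_adds(const std::vector<Node> & nodes, size_t start) {
    int count = 1;
    while (start + count < nodes.size() &&
           count < kMaxFusedAdds &&
           nodes[start + count].is_add &&
           nodes[start + count].ne == nodes[start].ne) {
        count++;
    }
    return count;
}

int main() {
    std::vector<Node> graph(10, Node{true, {16, 5, 4, 3}});
    std::printf("fusable adds: %d\n", count_fusable_adds(graph, 0));  // 6
}
```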

5090 before:

Z:\github\jeffbolznv\llama.cpp\build\bin\RelWithDebInfo>llama-bench.exe -fa 1 -n 128 -p 512 -r 10 --prio 1 -m c:\models\bartowski\DeepSeek-Coder-V2-Lite-Instruct-GGUF\DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf -m c:\models\Qwen_Qwen3-30B-A3B-Q4_K_M.gguf -m c:\models\gpt-oss-20b-mxfp4.gguf -m c:\models\deepseek-v2-lite-safetensors\deepseek-v2-lite-Q4_K_M.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 5090 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |      7544.08 ± 70.50 |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        249.36 ± 1.58 |
| qwen3moe 30B.A3B Q4_K - Medium |  17.35 GiB |    30.53 B | Vulkan     |  99 |  1 |           pp512 |      3971.06 ± 37.61 |
| qwen3moe 30B.A3B Q4_K - Medium |  17.35 GiB |    30.53 B | Vulkan     |  99 |  1 |           tg128 |        175.16 ± 0.48 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           pp512 |     6264.83 ± 251.65 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           tg128 |        212.95 ± 0.73 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |     6936.34 ± 183.22 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        233.17 ± 0.66 |

5090 after:

ggml_vulkan: 0 = NVIDIA GeForce RTX 5090 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |     7530.23 ± 127.67 |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        263.83 ± 0.93 |
| qwen3moe 30B.A3B Q4_K - Medium |  17.35 GiB |    30.53 B | Vulkan     |  99 |  1 |           pp512 |      3999.28 ± 41.87 |
| qwen3moe 30B.A3B Q4_K - Medium |  17.35 GiB |    30.53 B | Vulkan     |  99 |  1 |           tg128 |        188.21 ± 0.95 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           pp512 |     6327.27 ± 161.58 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           tg128 |        218.28 ± 1.83 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |     6916.37 ± 206.58 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        244.11 ± 0.78 |

4070 before:

Z:\github\jeffbolznv\llama.cpp\build\bin\RelWithDebInfo>llama-bench.exe -fa 1 -n 128 -p 512 -r 10 --prio 1 -m c:\models\bartowski\DeepSeek-Coder-V2-Lite-Instruct-GGUF\DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf -m c:\models\gpt-oss-20b-mxfp4.gguf -m c:\models\deepseek-v2-lite-safetensors\deepseek-v2-lite-Q4_K_M.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 4070 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |      2777.76 ± 10.68 |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        181.91 ± 0.37 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           pp512 |      2550.84 ± 25.51 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           tg128 |        120.20 ± 0.20 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |       1983.36 ± 9.40 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        162.10 ± 0.29 |

4070 after:

ggml_vulkan: 0 = NVIDIA GeForce RTX 4070 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |       2790.31 ± 7.72 |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        188.98 ± 0.25 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           pp512 |      2562.82 ± 28.59 |
| gpt-oss ?B MXFP4 MoE           |  11.27 GiB |    20.91 B | Vulkan     |  99 |  1 |           tg128 |        121.95 ± 0.28 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |       1984.36 ± 9.35 |
| deepseek2 16B Q4_K - Medium    |   9.65 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |        166.07 ± 1.04 |

@jeffbolznv requested a review from 0cc4m as a code owner on August 11, 2025 21:14
@github-actions bot added the testing, Vulkan, and ggml labels on Aug 11, 2025
@0cc4m (Collaborator) left a comment

Looks good on AMD and Nvidia, but I can't get it to run on Intel.

terminate called after throwing an instance of 'vk::DeviceLostError'
  what():  vk::Device::waitForFences: ErrorDeviceLost

I'll investigate further later.

@jeffbolznv (Collaborator, Author)

> Looks good on AMD and Nvidia, but I can't get it to run on Intel.

Strange. Any validation failures? Does the backend test fail, or just in real models?

@0cc4m (Collaborator)

0cc4m commented Aug 15, 2025

> > Looks good on AMD and Nvidia, but I can't get it to run on Intel.
>
> Strange. Any validation failures? Does the backend test fail, or just in real models?

Yeah, the test fails too on Intel:

[ADD] NMSE = 18.017009325 > 0.000000100 ADD(type=f32,ne=[16,5,4,3],nr=[1,1,1,1],nf=16): FAIL

Edit: No validation failures. Probably a driver bug.
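
For reference, the NMSE in that failure line is the squared error of the backend output against the CPU reference, normalized by the reference's energy; a hedged sketch of the metric (signature assumed, not copied from test-backend-ops):

```cpp
// Hedged sketch of the metric behind "NMSE = 18.017... > 0.000000100":
// squared error of backend output b vs. reference a, normalized by the
// reference's own energy.
#include <cstddef>

static double nmse(const float * a, const float * b, size_t n) {
    double err = 0.0, ref = 0.0;
    for (size_t i = 0; i < n; i++) {
        const double d = double(a[i]) - double(b[i]);
        err += d * d;
        ref += double(a[i]) * double(a[i]);
    }
    return err / ref;
}
```

An NMSE near 18 against a 1e-7 threshold means the values are wrong outright, not merely imprecise, which is consistent with a driver bug rather than a rounding-mode issue.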

@jeffbolznv (Collaborator, Author)

> Edit: No validation failures. Probably a driver bug.

Shall I just disable the optimization for Intel?

@0cc4m (Collaborator)

0cc4m commented Aug 16, 2025

> > Edit: No validation failures. Probably a driver bug.
>
> Shall I just disable the optimization for Intel?

Yeah, I don't see why it's failing.

@jeffbolznv jeffbolznv merged commit 1fe0029 into ggml-org:master Aug 16, 2025
50 of 51 checks passed
@rillomas (Contributor)

> Looks good on AMD and Nvidia, but I can't get it to run on Intel.
>
> terminate called after throwing an instance of 'vk::DeviceLostError'
>   what():  vk::Device::waitForFences: ErrorDeviceLost

Hi @0cc4m. I wanted to reproduce the crash you were seeing on the Intel GPU, but so far I haven't been able to. How exactly were you testing this?

The test I ran was the following:

  • Environment: i9-12900K + Arc A770, GPU Driver: 32.0.101.6989, Windows 11 24H2 10.0.26100.4946
  • Apply the following diff to b6189 and build llama.cpp with the Vulkan backend.
diff --git a/ggml/src/ggml-vulkan/ggml-vulkan.cpp b/ggml/src/ggml-vulkan/ggml-vulkan.cpp
index 7ef93806..24ede177 100644
--- a/ggml/src/ggml-vulkan/ggml-vulkan.cpp
+++ b/ggml/src/ggml-vulkan/ggml-vulkan.cpp
@@ -3575,7 +3575,7 @@ static vk_device ggml_vk_get_device(size_t idx) {
         device->multi_add = vk12_props.shaderRoundingModeRTEFloat16 &&
                             device->properties.limits.maxPushConstantsSize >= sizeof(vk_op_multi_add_push_constants) &&
                             vk12_features.runtimeDescriptorArray &&
-                            device->vendor_id != VK_VENDOR_ID_INTEL &&
+                            // device->vendor_id != VK_VENDOR_ID_INTEL &&
                             getenv("GGML_VK_DISABLE_MULTI_ADD") == nullptr;

         if (device->subgroup_size_control) {

The execution log follows:

λ build\bin\Release\llama-bench.exe -m ..\..\Downloads\DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(TM) A770 Graphics (Intel Corporation) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none
| model                          |       size |     params | backend    | ngl | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           pp512 |        129.88 ± 0.21 |
| deepseek2 16B Q2_K - Medium    |   5.99 GiB |    15.71 B | Vulkan     |  99 |  1 |           tg128 |         65.12 ± 0.33 |

build: e5155e69 (6189)

@0cc4m (Collaborator)

0cc4m commented Aug 18, 2025

> > Looks good on AMD and Nvidia, but I can't get it to run on Intel.
> >
> > terminate called after throwing an instance of 'vk::DeviceLostError'
> >   what():  vk::Device::waitForFences: ErrorDeviceLost
>
> Hi @0cc4m. I wanted to reproduce the crash you were seeing on the Intel GPU, but so far I haven't been able to. How exactly were you testing this?

Hi @rillomas,

I ran this on Linux; from past reports I have already gathered that the Linux ANV driver is less stable than the proprietary Windows driver.

I can reproduce the crash with your diff like this: build_vk/bin/llama-bench -m models/Qwen3-30B-A3B-Q4_K_M.gguf -ngl 40.

Crash log:
» build_vk/bin/llama-bench -m models/Qwen3-30B-A3B-Q4_K_M.gguf -ngl 40
pci id for fd 9: 10de:2204, driver (null)
kmsro: driver missing
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
[New LWP 28657]
[New LWP 28656]
[New LWP 28655]
[New LWP 28654]
[New LWP 28653]
[New LWP 28652]
[New LWP 28651]
[New LWP 28650]
[New LWP 28649]
[New LWP 28648]
[New LWP 28647]
[New LWP 28646]
[New LWP 28645]
[New LWP 28644]
[New LWP 28643]
[New LWP 28610]
[New LWP 28607]
[New LWP 28606]
[New LWP 28605]
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/liblber.so.2
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libbrotlidec.so.1
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libbrotlicommon.so.1
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libtinfo.so.6
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libcap.so.2
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007044675107e3 in __GI___wait4 (pid=28741, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30     ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
#0  0x00007044675107e3 in __GI___wait4 (pid=28741, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30      in ../sysdeps/unix/sysv/linux/wait4.c
#1  0x000070446a07aba6 in ggml_print_backtrace () from /home/user/upstream-llama.cpp/build_vk/bin/libggml-base.so
#2  0x000070446a08e5f6 in ggml_uncaught_exception() () from /home/user/upstream-llama.cpp/build_vk/bin/libggml-base.so
#3  0x00007044678bb0da in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007044678a5a55 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00007044678bb391 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x0000704467e01c7f in ggml_vk_wait_for_fence(ggml_backend_vk_context*) () from /home/user/upstream-llama.cpp/build_vk/bin/libggml-vulkan.so
#7  0x0000704467dd0896 in ggml_vk_build_graph(ggml_backend_vk_context*, ggml_cgraph*, int, ggml_tensor*, int, bool, bool, bool, bool) () from /home/user/upstream-llama.cpp/build_vk/bin/libggml-vulkan.so
#8  0x0000704467dcaaa4 in ggml_backend_vk_graph_compute(ggml_backend*, ggml_cgraph*) () from /home/user/upstream-llama.cpp/build_vk/bin/libggml-vulkan.so
#9  0x000070446a09573b in ggml_backend_sched_graph_compute_async () from /home/user/upstream-llama.cpp/build_vk/bin/libggml-base.so
#10 0x0000704469e91471 in llama_context::graph_compute(ggml_cgraph*, bool) () from /home/user/upstream-llama.cpp/build_vk/bin/libllama.so
#11 0x0000704469e910f7 in llama_context::process_ubatch(llama_ubatch const&, llm_graph_type, llama_memory_context_i*, ggml_status&) () from /home/user/upstream-llama.cpp/build_vk/bin/libllama.so
#12 0x0000704469e924ee in llama_context::decode(llama_batch const&) () from /home/user/upstream-llama.cpp/build_vk/bin/libllama.so
#13 0x0000704469e965ab in llama_decode () from /home/user/upstream-llama.cpp/build_vk/bin/libllama.so
#14 0x00005d4e4831c0e8 in test_prompt(llama_context*, int, int, int) ()
#15 0x00005d4e48317e99 in main ()
[Inferior 1 (process 28594) detached]
terminate called after throwing an instance of 'vk::DeviceLostError'
  what():  vk::Device::waitForFences: ErrorDeviceLost
[1]    28594 IOT instruction (core dumped)  build_vk/bin/llama-bench -m ~/koboldcpp/models/Qwen3-30B-A3B-Q4_K_M.gguf -ngl

It works if I disable multi_add using GGML_VK_DISABLE_MULTI_ADD=1:

GGML_VK_DISABLE_MULTI_ADD=1 build_vk/bin/llama-bench -m models/Qwen3-30B-A3B-Q4_K_M.gguf -ngl 40
pci id for fd 9: 10de:2204, driver (null)
kmsro: driver missing
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3moe 30B.A3B Q4_K - Medium |  17.28 GiB |    30.53 B | Vulkan     |  40 |           pp512 |         33.84 ± 0.04 |
| qwen3moe 30B.A3B Q4_K - Medium |  17.28 GiB |    30.53 B | Vulkan     |  40 |           tg128 |         25.43 ± 0.57 |

Also, test-backend-ops fails in the test that was added in this PR: [ADD] NMSE = 17.031918146 > 0.000000100 ADD(type=f32,ne=[16,5,4,3],nr=[1,1,1,1],nf=16): FAIL
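
For context, the nf parameter appears to denote how many chained adds the test builds; a minimal sketch of such a graph in the ggml API (helper name hypothetical, shape taken from the failing case):

```cpp
// Hedged sketch: ADD(type=f32,ne=[16,5,4,3],nf=16) plausibly corresponds
// to a chain of 16 same-shape f32 adds, which the Vulkan backend folds
// into multi_add dispatches when the optimization is enabled.
#include "ggml.h"

static struct ggml_tensor * build_add_chain(struct ggml_context * ctx, int nf) {
    struct ggml_tensor * cur = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, 16, 5, 4, 3);
    for (int i = 0; i < nf; i++) {
        struct ggml_tensor * b = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, 16, 5, 4, 3);
        cur = ggml_add(ctx, cur, b);  // same shape at every step -> fusable
    }
    return cur;
}
```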

Can you run this with build_vk/bin/test-backend-ops -o ADD and see if it also fails on Windows?

Environment:

  • CPU: AMD EPYC 7302
  • GPUs: Nvidia RTX 3090, AMD Radeon Pro VII, Intel A770
  • OS: Ubuntu 24.04.3 LTS
  • Intel GPU Driver: Intel open-source Mesa driver, Mesa 25.3.0-devel (git-1f490c836b)

Let me know if you need more info.

@rillomas (Contributor)

@0cc4m
Thanks! The following is the result log of test-backend-ops -o ADD. I don't see any errors, so it's probably a Linux-specific thing like you say. I'll test/benchmark on a few other platforms to see if there are significant performance improvements from this feature.

Log output:
λ build\bin\Release\test-backend-ops.exe -o ADD
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(TM) A770 Graphics (Intel Corporation) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none
Testing 2 devices

Backend 1/2: Vulkan0
  Device description: Intel(R) Arc(TM) A770 Graphics
  Device memory: 16256 MB (16256 MB free)

  ADD(type=f16,ne=[1,1,8,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,1,1],nr=[32,1,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,320,320],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[10,5,1,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[10,5,4,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[2,1,1,1],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[1,2,1,1],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[1,1,2,1],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[1,1,1,2],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[1,1,2,2],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[1,2,2,2],nf=1): OK
  ADD(type=f16,ne=[10,5,4,3],nr=[2,2,2,2],nf=1): OK
  ADD(type=f16,ne=[1280,1,1,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[1280,1,1,1],nr=[1,16,16,1],nf=1): OK
  ADD(type=f16,ne=[1280,16,16,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[1280,1,1,1],nr=[1,256,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,1280,1],nr=[16,16,1,1],nf=1): OK
  ADD(type=f16,ne=[16,16,1280,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,1920,1],nr=[16,16,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,2560,1],nr=[16,16,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,1280,1],nr=[32,32,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,1920,1],nr=[32,32,1,1],nf=1): OK
  ADD(type=f16,ne=[1,1,640,1],nr=[32,32,1,1],nf=1): OK
  ADD(type=f16,ne=[5120,1,1,1],nr=[1,256,1,1],nf=1): OK
  ADD(type=f16,ne=[640,1,1,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,8,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,1,1],nr=[32,1,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,320,320],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,1,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[2,1,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,2,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,1,2,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,1,1,2],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,1,2,2],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,2,2,2],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[2,2,2,2],nf=1): OK
  ADD(type=f32,ne=[1280,1,1,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[1280,1,1,1],nr=[1,16,16,1],nf=1): OK
  ADD(type=f32,ne=[1280,16,16,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[1280,1,1,1],nr=[1,256,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,1280,1],nr=[16,16,1,1],nf=1): OK
  ADD(type=f32,ne=[16,16,1280,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,1920,1],nr=[16,16,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,2560,1],nr=[16,16,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,1280,1],nr=[32,32,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,1920,1],nr=[32,32,1,1],nf=1): OK
  ADD(type=f32,ne=[1,1,640,1],nr=[32,32,1,1],nf=1): OK
  ADD(type=f32,ne=[5120,1,1,1],nr=[1,256,1,1],nf=1): OK
  ADD(type=f32,ne=[640,1,1,1],nr=[1,1,1,1],nf=1): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[2,1,1,1],nf=2): OK
  ADD(type=f32,ne=[16,5,4,3],nr=[1,2,1,1],nf=3): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,1,2,1],nf=4): OK
  ADD(type=f32,ne=[16,5,4,3],nr=[1,1,1,2],nf=5): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,1,2,2],nf=6): OK
  ADD(type=f32,ne=[10,5,4,3],nr=[1,2,2,2],nf=7): OK
  ADD(type=f32,ne=[16,5,4,3],nr=[2,2,2,2],nf=8): OK
  ADD(type=f32,ne=[16,5,4,3],nr=[1,1,1,1],nf=16): OK
  10844/10844 tests passed
  Backend Vulkan0: OK
Backend 2/2: CPU
  Skipping CPU backend
2/2 backends passed
OK

@0cc4m (Collaborator)

0cc4m commented Aug 18, 2025

Is the proper way to report this directly in the Mesa issue tracker, or do you have a more direct connection to the driver team?

@rillomas (Contributor)

I'm a Windows guy, so I don't have connections with the Linux driver team. I can check, but it's probably better to report to Mesa first.
