@izelnakri commented Jun 30, 2025

Hi there!

I found a fix for the compile-time errors I've been getting here:

#3297

I'm by no means a C++ expert, and I'm sure there is a more elegant way to fix this.

This fixed the compilation for me. I can now run whisper models with SYCL hardware acceleration on my Intel Ultra Series 2 laptop, an Omnibook Ultra Flip 14", running NixOS (with a very hacky & advanced flake.nix)!

CPU: Intel Ultra 7 258V (8) @ 4.800GHz
GPU: Intel Arc Graphics 130V / 140V
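For anyone trying to reproduce this, a minimal sketch of a SYCL-enabled whisper.cpp build is shown below. The option names (GGML_SYCL, icx/icpx) follow the upstream SYCL build instructions; the oneAPI install path, model, and sample paths are assumptions and not part of this PR:

```sh
# Load the Intel oneAPI environment (assumed default install path).
source /opt/intel/oneapi/setvars.sh

# Configure with SYCL enabled, using the oneAPI C/C++ compilers.
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Build in Release mode.
cmake --build build --config Release -j

# Sanity check: transcribe the bundled sample (assumed model/sample paths).
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```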

@Rbiessy (Contributor) commented Jun 30, 2025

I'm not keen to merge this workaround until we spend a bit of time understanding whether there is a better solution. I believe I've seen a similar workaround before; I just can't find what was done about it.
I won't have a lot of time to investigate it myself until later in July, but I threw out some ideas to start the investigation in ggml-org/llama.cpp#14440.
