Conversation

0cc4m (Contributor) commented Oct 11, 2025:

I don't have a device to test this right now, so this is my best guess.

@ggerganov Maybe we should add an `is_gpu` helper function for this GPU-or-iGPU selection?
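For reference, a minimal sketch of what such a helper might look like, written in C against the ggml backend API. The name `is_gpu` is only the suggestion above, not an existing function; the sketch assumes the device-type enum exposes both `GGML_BACKEND_DEVICE_TYPE_GPU` and the `GGML_BACKEND_DEVICE_TYPE_IGPU` value that iGPU support introduces, queried via `ggml_backend_dev_type()`:

```c
#include <stdbool.h>
#include "ggml-backend.h"

// Sketch of the proposed helper (name and placement hypothetical):
// treat discrete and integrated GPUs uniformly when selecting a device.
// Assumes GGML_BACKEND_DEVICE_TYPE_IGPU is the enum value added by
// iGPU support, alongside the existing GGML_BACKEND_DEVICE_TYPE_GPU.
static bool is_gpu(ggml_backend_dev_t dev) {
    const enum ggml_backend_dev_type type = ggml_backend_dev_type(dev);
    return type == GGML_BACKEND_DEVICE_TYPE_GPU ||
           type == GGML_BACKEND_DEVICE_TYPE_IGPU;
}
```

Callers that currently compare the device type against `GGML_BACKEND_DEVICE_TYPE_GPU` alone would then use this single check instead, so integrated GPUs are picked up by the same selection path.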

ggerganov (Member) commented:

> @ggerganov Maybe we should add an `is_gpu` helper function for this GPU-or-iGPU selection?

Thanks for digging into this. Yes, sounds like a good idea to me. We can implement it in llama.cpp and I'll make sure to propagate it here.

Let's see if this fixes the issues in #3455.

BruBetIta commented:

#3469 fixes the problem for me.

ggerganov merged commit c3b5c4d into ggml-org:master on Oct 11, 2025 (66 checks passed).
0cc4m deleted the support-igpus branch on October 11, 2025 at 19:58.