
Conversation

@jeffbolznv (Collaborator)

This fixes some test failures on Turing where the "round to zero" float-to-f16 conversion rounds overflowing values to the maximum finite f16 value, while the CPU reference value is infinite.

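For context: the largest finite f16 value is 65504 (bit pattern 0x7BFF) and f16 infinity is 0x7C00, so a conversion that rounds toward zero truncates an overflowing float down to 0x7BFF, whereas the CPU reference yields infinity. The snippet below is a minimal C sketch of that overflow-to-infinity mapping; the function name, constants, and structure are illustrative assumptions, not the actual shader or ggml code touched by this PR.

```c
#include <stdint.h>
#include <stdio.h>

// Illustrative sketch only (assumed helper, not code from this PR).
// A round-to-zero hardware conversion truncates floats well beyond the f16
// range down to the max finite value 0x7BFF; the CPU reference instead
// produces signed infinity for such inputs, which is what this sketch mimics.
static uint16_t f32_to_f16_overflow_to_inf(float x) {
    const float    F16_MAX     = 65504.0f; // largest finite half-precision value
    const uint16_t F16_POS_INF = 0x7C00;
    const uint16_t F16_NEG_INF = 0xFC00;

    if (x >  F16_MAX) return F16_POS_INF;
    if (x < -F16_MAX) return F16_NEG_INF;

    // In-range values would be converted normally here; the conversion itself
    // is omitted because only the overflow handling matters for this sketch.
    return 0;
}

int main(void) {
    printf("0x%04x\n", (unsigned) f32_to_f16_overflow_to_inf( 1e6f)); // 0x7c00 (+inf)
    printf("0x%04x\n", (unsigned) f32_to_f16_overflow_to_inf(-1e6f)); // 0xfc00 (-inf)
    return 0;
}
```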
@jeffbolznv requested a review from 0cc4m as a code owner September 22, 2025 03:15
@github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels Sep 22, 2025
@0cc4m merged commit a20d810 into ggml-org:master Sep 22, 2025
60 of 68 checks passed
struct pushed a commit to struct/llama.cpp that referenced this pull request Sep 26, 2025
yael-works pushed a commit to yael-works/llama.cpp that referenced this pull request Oct 15, 2025
pwilkin pushed a commit to pwilkin/llama.cpp that referenced this pull request Oct 23, 2025

Labels

ggml (changes relating to the ggml tensor library for machine learning), Vulkan (Issues specific to the Vulkan backend)

2 participants