CUDA: fuse rope + set_rows #16884
base: master
Conversation
Force-pushed from 406c867 to dc814b8
While the fusion itself is quite simple, I would still recommend adding a test for it to test-backend-ops.
There is already a test added in the Vulkan PR #16769.
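For context, here is a minimal sketch of the op pattern this fusion targets, built with the public ggml API. The shapes, the rope mode, and the exact ggml_set_rows argument layout are illustrative assumptions, not taken from either PR:

```cpp
#include "ggml.h"

// rope -> set_rows as it occurs in a KV-cache write: the roped K rows are
// scattered into the cache buffer, and a backend that implements the fusion
// can execute both ops as a single kernel.
static struct ggml_tensor * build_rope_set_rows(struct ggml_context * ctx) {
    const int64_t head_dim = 128, n_head = 8, n_tokens = 32, n_cache = 4096;

    struct ggml_tensor * k     = ggml_new_tensor_3d(ctx, GGML_TYPE_F32, head_dim, n_head, n_tokens);
    struct ggml_tensor * pos   = ggml_new_tensor_1d(ctx, GGML_TYPE_I32, n_tokens); // token positions
    struct ggml_tensor * rows  = ggml_new_tensor_1d(ctx, GGML_TYPE_I64, n_tokens); // cache slot per token
    struct ggml_tensor * cache = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, head_dim*n_head, n_cache);

    struct ggml_tensor * k_rope = ggml_rope(ctx, k, pos, head_dim, GGML_ROPE_TYPE_NEOX);

    // scatter the roped rows into the cache (converting F32 -> F16 on the way)
    return ggml_set_rows(ctx, cache, ggml_reshape_2d(ctx, k_rope, head_dim*n_head, n_tokens), rows);
}
```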
dst[idst + 0] = D(x[ix + 0]);
dst[idst + 1] = D(x[ix + 1]);
I would suggest using ggml_cuda_cast, defined in convert.cuh. Otherwise there will potentially be issues with FP16 <-> BF16 conversions.
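As a sketch, the suggested change could look like this, assuming ggml_cuda_cast keeps its current form as a template over the destination type:

```cpp
#include "convert.cuh"

// ggml_cuda_cast<D> routes conversions through float where needed,
// avoiding a direct FP16 <-> BF16 conversion
dst[idst + 0] = ggml_cuda_cast<D>(x[ix + 0]);
dst[idst + 1] = ggml_cuda_cast<D>(x[ix + 1]);
```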
dst[idst + 0] = D(x0 * cos_theta - x1 * sin_theta);
dst[idst + 1] = D(x0 * sin_theta + x1 * cos_theta);
Since you're already working on optimizing RoPE: I think the memory access pattern here is suboptimal because there are gaps between the writes of adjacent threads, and I don't know whether the compiler is smart enough to combine the first and second write into a single one. I would suggest grouping the values as float2/half2 and either casting dst to that type or using ggml_cuda_memcpy_1.
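A hedged sketch of what the grouped store could look like; the ggml_cuda_memcpy_1 template signature is assumed from common.cuh:

```cpp
// compute both rotated values first, then issue one 2*sizeof(D)-byte store
// instead of two scalar stores with a per-thread gap between them
D vals[2];
vals[0] = ggml_cuda_cast<D>(x0 * cos_theta - x1 * sin_theta);
vals[1] = ggml_cuda_cast<D>(x0 * sin_theta + x1 * cos_theta);
ggml_cuda_memcpy_1<2*sizeof(D)>(dst + idst, vals);
```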
Based on #16769.
On a 4090: