Conversation

ikawrakow merged commit 36e6e88 into main May 11, 2025
ikawrakow pushed a commit that referenced this pull request May 11, 2025:

This reverts commit 36e6e88.
I should have tested. We get NaNs.
ubergarm (Contributor) commented May 12, 2025

Just saw what looks like a small patch to mainline's earlier ggml-org/llama.cpp#13438, just updated in #13469 (linked here).

Could this be related to my issue with DDDD showing up at longer contexts, which I attributed to -ser, as we were discussing here?

Though hrmm, yours already has this in a similar area, so it may not be relevant:

      if (np > 1) {
          __syncthreads();
      }

FWIW, I tested the following small change and am still seeing DDDD with longer context and -ser, so it might not be related.

--- a/ggml/src/ggml-cuda/fattn-mma-f16.cuh
+++ b/ggml/src/ggml-cuda/fattn-mma-f16.cuh
@@ -734,9 +734,10 @@ static __device__ __forceinline__ void flash_attn_ext_f16_process_tile(
             float2 * dstk_fixup_meta = dstk_fixup + (gridDim.x + blockIdx.x)*ncols;
             dstk_fixup_meta[(threadIdx.y/np)*cols_per_warp + threadIdx.x] = make_float2(KQ_cmn, KQ_crs);
         }
-    }
-
-    if (np > 1) {
+    } else if (np > 1) {
+        // Warps with threadIdx.y % np == 0 execute a __syncthreads() in the if branch.
+        // Therefore, all other warps also need to execute a __syncthreads().
+        // Otherwise the points at which warps synchronize with each other would become misaligned.
         __syncthreads();
     }
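
For context, here is a minimal CUDA sketch of the hazard that comment describes (hypothetical kernels, not code from fattn-mma-f16.cuh): if only some warps in a block execute a __syncthreads(), the block's barriers pair up out of order, which is undefined behavior and can surface as NaNs or garbage output.

    // Hypothetical illustration, not code from this repository.
    // Broken: only warp 0 executes the barrier inside the branch, so it
    // arrives at the final __syncthreads() one barrier ahead of the other
    // warps. The block's sync points are now misaligned (undefined behavior).
    __global__ void barriers_misaligned(float * out) {
        const int warp_id = threadIdx.x / 32;
        if (warp_id == 0) {
            __syncthreads(); // only warp 0 reaches this barrier
        }
        __syncthreads();     // pairs warp 0's 2nd barrier with the others' 1st
        out[threadIdx.x] = 1.0f;
    }

    // Fixed: mirror the barrier on the other side of the branch, the same
    // pattern as the `else if (np > 1)` hunk above. Every warp now executes
    // the same number of __syncthreads(), so the sync points stay aligned.
    __global__ void barriers_aligned(float * out) {
        const int warp_id = threadIdx.x / 32;
        if (warp_id == 0) {
            __syncthreads();
        } else {
            __syncthreads(); // keep all warps' barrier counts equal
        }
        __syncthreads();
        out[threadIdx.x] = 1.0f;
    }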

ikawrakow (Owner, Author)

> Could be related to my issue with DDDD showing up for longer contexts which I attributed to -ser #386 (comment)?

Thanks for the alert. But isn't it easier to rerun without -ser, so that there aren't two potential causes at the same time? There has been a new report about SER not working, this time CPU-only.
