UPSTREAM PR #17495: HIP: Add RDNA3 WMMA support to MMF #319
Conversation
Key Changes Made:
1. ggml/src/ggml-cuda/common.cuh:
- Extended the AMD_WMMA_AVAILABLE macro to include both RDNA3 and RDNA4 (see the sketch after this list)
- Updated amd_wmma_available() to return true for both architectures
2. ggml/src/ggml-cuda/mma.cuh:
- Tile structures: Added RDNA3-specific tile sizes:
- RDNA4: 4 half2 = 8 FP16 elements (compact layout)
- RDNA3: 8 half2 = 16 FP16 elements (duplicate layout required by hardware)
- MMA operations: Added RDNA3 intrinsics:
- FP16: __builtin_amdgcn_wmma_f32_16x16x16_f16_w32 (no gfx12 suffix)
- BF16: __builtin_amdgcn_wmma_f32_16x16x16_bf16_w32
- Uses halfx16_t/bf16x16_t for RDNA3 vs halfx8_t/bf16x8_t for RDNA4
- Load operations: Added conditional handling for 32-byte RDNA3 tiles using two 16-byte copies (see the load sketch further below)
3. ggml/src/ggml-cuda/mmf.cu:
- Updated to use amd_wmma_available() for both RDNA3 and RDNA4
Explore the complete analysis inside the Version Insights.

Performance Analysis Summary: PR #319 - HIP RDNA3 WMMA Support

Overview
This PR introduces WMMA (Wave Matrix Multiply-Accumulate) support for AMD RDNA3 GPUs (RX 7000 series), backporting functionality from RDNA4. The changes modify 4 files in the GGML CUDA backend, adding architecture-specific code paths for FP16 and BF16 matrix operations while restricting unsupported integer WMMA operations on RDNA3. Performance analysis shows a 0.0% power consumption change across all 16 binaries, indicating the modifications are properly isolated to RDNA3-specific code paths with no impact on other architectures or CPU-based inference.

Key Findings
Performance-Critical Areas Impact: The changes target the GGML Backend System, specifically the matrix multiplication kernels used during model inference. No functions show measurable response time or throughput changes in the static analysis, as the modifications are architecture-specific (RDNA3 only) and do not affect the baseline x86_64 CPU execution path.

Inference Performance: Core inference functions (llama_decode, llama_encode, llama_tokenize) show no response time or throughput changes. The tokens-per-second metric remains unaffected for CPU-based inference on the reference platform (12th Gen Intel Core i7-1255U). RDNA3 GPU users may experience improved tokens per second for FP16/BF16 models, but this is hardware-specific and not reflected in the CPU baseline measurements.

Power Consumption: All binaries maintain identical power consumption between versions; sub-nanojoule variations are within measurement precision and represent no functional change.

Code Changes: The implementation adds compile-time conditionals and runtime detection for RDNA3 WMMA capabilities, adjusts per-lane tile sizes from 8 to 16 FP16 elements to match RDNA3 hardware specifications, and restricts integer WMMA to RDNA4 only. These changes enable hardware acceleration for RDNA3 GPUs without modifying non-AMD code paths, maintaining zero performance impact on CPU and NVIDIA GPU execution.
Force-pushed from 92ef8cd to 7dd50b8
Mirrored from ggml-org/llama.cpp#17495
Based on the work by @zhang-hui-yulo for RDNA4, I attempted to backport the WMMA MMF support to RDNA3.
The differences from RDNA4 are summarized in the key changes above.
The results for granite 1b 400m look great:
The results for a more realistic GPT-OSS 20b (this is the Q8_0 GGUF) show a very mixed picture:
Help appreciated. I'm a novice when it comes to HIP and GPU intrinsics.
CC @jiachengjason