Further enhance Blackwell SM100 Attention kernels in example 77.
Add fused reduction kernel support for CUTLASS MLA.
Add softmax skip correction.
Add GQA support to the FMHA backward kernel.
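In GQA, several query heads share a single key/value head, so the backward kernel must accumulate dK/dV contributions across all query heads in a group. The head grouping can be sketched as follows (a minimal illustration of the GQA mapping, not CUTLASS's actual implementation; the function name is hypothetical):

```python
# Minimal sketch of grouped-query attention (GQA) head mapping.
# num_q_heads is a multiple of num_kv_heads, and each contiguous group of
# num_q_heads // num_kv_heads query heads attends against one K/V head.

def kv_head_for_query_head(q_head: int, num_q_heads: int, num_kv_heads: int) -> int:
    """Map a query head index to the K/V head it shares."""
    assert num_q_heads % num_kv_heads == 0, "GQA requires an integral group size"
    group_size = num_q_heads // num_kv_heads
    return q_head // group_size

# Example: 8 query heads grouped over 2 K/V heads (group size 4).
mapping = [kv_head_for_query_head(h, 8, 2) for h in range(8)]
print(mapping)  # [0, 0, 0, 0, 1, 1, 1, 1]
```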
Fix an issue where get_unmasked_trip_count may return a negative value.
Fix an issue where mbarriers are initialized with a zero arrival count.
Fix a corner case issue where the sequence length of q is not a multiple of tile_q.
Remove TMA padding for forward kernel inputs.
Add Blackwell SM100 kernels for MoEs (focusing on low-latency inference performance): example 92. The kernels use TMA (for weights) and CPASYNC (for tokens) to load the input matrices, and allow only one problem dimension to vary across groups/experts, unlike general grouped GEMMs. Note: further API simplifications and kernel improvements are upcoming. Any feedback on the API is welcome.
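The single-varying-dimension constraint can be illustrated with a toy shape computation (a hedged sketch; the token counts and weight dimensions below are made up, not taken from example 92):

```python
# Toy illustration of the MoE grouped-GEMM shape constraint: across experts,
# the weight dimensions (N, K) are fixed, and only M (the number of tokens
# routed to each expert) varies. A general grouped GEMM would instead let
# M, N, and K all differ per group.

def moe_problem_shapes(tokens_per_expert, n, k):
    """Build per-expert (M, N, K) problem sizes with only M varying."""
    return [(m, n, k) for m in tokens_per_expert]

# Example: 4 experts receiving different token counts, including zero.
shapes = moe_problem_shapes(tokens_per_expert=[128, 7, 0, 513], n=4096, k=1024)
print(shapes)
# [(128, 4096, 1024), (7, 4096, 1024), (0, 4096, 1024), (513, 4096, 1024)]
```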
Further enhance blockwise and groupwise GEMMs on Hopper and Blackwell:
On Blackwell SM120, add a blockwise GEMM kernel: example 87.
On Hopper, add K-major scale factor support for SM90 blockwise kernels.
On Hopper, relax the restriction that the k dimension of the problem size must be a multiple of the k dimension of the tile size.
On Hopper, the grouped version supports the case where k = 0.
Support Blackwell SM120 mixed input blockscaled grouped GEMM.
Instantiate more Blackwell kernels in the profiler:
Blackwell SM100 and SM103 kernels support CUTLASS_LIBRARY_INSTANTIATION_LEVEL to instantiate all possible combinations.
To use this feature, CUTLASS_LIBRARY_KERNELS must be non-empty. The profiler combines CUTLASS_LIBRARY_KERNELS with CUTLASS_LIBRARY_INSTANTIATION_LEVEL to instantiate specific kernels.
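As a hedged example, a profiler build restricted to a kernel-name filter plus an instantiation level might be configured like this (the architecture, filter string, and level value are illustrative placeholders, not recommendations from this release):

```shell
# Configure the CUTLASS profiler so the instantiation level applies to
# kernels matched by the (non-empty) kernel filter. The values below are
# placeholders; consult the profiler documentation for supported levels.
cmake .. \
  -DCUTLASS_NVCC_ARCHS=100a \
  -DCUTLASS_LIBRARY_KERNELS=cutlass3x_sm100 \
  -DCUTLASS_LIBRARY_INSTANTIATION_LEVEL=max
```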