[CUDA] Support FP8 (E4M3) KV Cache for Group Query Attention #70444

Triggered via pull request · February 14, 2026 04:12
Status: Success
Total duration: 37s
Artifacts

Validation: 17s