ONNX: Fix FP8 quantization for the second MLP in LayerNormMLP #14109
Re-run triggered January 9, 2026 22:04
Status Success
Total duration 1h 19m 18s
Artifacts

build.yml

on: pull_request

Annotations

2 notices:
- sccache stats: 92% - 524 hits, 43 misses, 0 errors
- sccache stats: 100% - 211 hits, 0 misses, 0 errors
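The sccache percentages reported above are simply hits divided by total compilations (hits plus misses). A minimal sketch of that calculation, assuming whole-number rounding (the helper name is illustrative, not part of sccache):

```python
def hit_rate(hits: int, misses: int) -> int:
    """Return the cache hit rate as a whole-number percentage."""
    total = hits + misses
    return round(100 * hits / total) if total else 0

# The two notices from this run:
print(hit_rate(524, 43))  # first notice: 92
print(hit_rate(211, 0))   # second notice: 100
```

A 100% rate on the second job indicates every compilation was served from cache, which is why the re-run completed without recompiling those objects.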