Commit 3f3769b
ggml : Enable MMA for BF16 in llamafile_sgemm (ggml-org#13148)
This patch upstreams llamafile's CPU matrix multiplication kernels for ppc64le, using MMA builtins for the BF16 data type.
The change yields 9x - 40x gains in total speed S t/s (i.e. all tokens / total time) across the batch sizes tested with the llama-batched-bench benchmark.
The patch was tested with the Meta-Llama-3-8B
and Mistral-7B models (BF16 models generated with llama-quantize from the corresponding FP32 models) on an IBM POWER10 machine.
Signed-off-by: Shalini Salomi Bodapati <[email protected]>
1 parent 2f56761
1 file changed: +501 -0 lines