Conversation

shalinib-ibm (Owner) commented on Aug 5, 2025

This patch improves GEMM for the FP32 data type on PowerPC.

Implements GEMM on large blocks with configurable block sizes mc, nc, kc (default: 256, 256, 256).
Packing function optimized to access blocks as per the memory layout.
GEMM optimized to work on larger blocks.
Isolated packing from the GEMM operation for better MMA utilization (see the sketch after this list).
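
For illustration only, here is a minimal scalar sketch of the blocking scheme described above. It is not the PR's code: the names (`MC`/`NC`/`KC`, `pack_block`, `sgemm_blocked`) are hypothetical, and the actual kernel replaces the inner scalar loops with POWER MMA intrinsics.

```cpp
// Minimal cache-blocked SGEMM sketch: C (MxN) += A (MxK) * B (KxN), row-major.
// Block sizes mirror the defaults described above; a real kernel would use
// POWER MMA intrinsics in place of the scalar micro-kernel below.
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t MC = 256, NC = 256, KC = 256; // mc, nc, kc defaults

// Copy a rows x cols block into a contiguous buffer so the inner kernel
// streams it with unit stride (packing kept separate from the GEMM loops).
static void pack_block(const float * src, float * dst,
                       std::size_t rows, std::size_t cols, std::size_t ld) {
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            dst[i * cols + j] = src[i * ld + j];
}

static void sgemm_blocked(std::size_t M, std::size_t N, std::size_t K,
                          const float * A, const float * B, float * C) {
    std::vector<float> Ap(MC * KC), Bp(KC * NC); // packed block buffers
    for (std::size_t jc = 0; jc < N; jc += NC) {
        const std::size_t nc = std::min(NC, N - jc);
        for (std::size_t pc = 0; pc < K; pc += KC) {
            const std::size_t kc = std::min(KC, K - pc);
            pack_block(B + pc * N + jc, Bp.data(), kc, nc, N);
            for (std::size_t ic = 0; ic < M; ic += MC) {
                const std::size_t mc = std::min(MC, M - ic);
                pack_block(A + ic * K + pc, Ap.data(), mc, kc, K);
                // Scalar stand-in for the MMA micro-kernel (C must start zeroed).
                for (std::size_t i = 0; i < mc; ++i)
                    for (std::size_t k = 0; k < kc; ++k) {
                        const float a = Ap[i * kc + k];
                        for (std::size_t j = 0; j < nc; ++j)
                            C[(ic + i) * N + jc + j] += a * Bp[k * nc + j];
                    }
            }
        }
    }
}
```

Packing each block into a contiguous buffer keeps the inner loops on unit-stride memory, which is what lets the MMA accumulators stay fed once the scalar micro-kernel is swapped for intrinsics.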

Verified functionality and correctness using llama-cli and a standalone test case (performs matmul and compares the final matrix C result against the base).
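
A toy version of such a correctness check, reusing the hypothetical `sgemm_blocked` sketch above as the kernel under test (sizes and tolerance are illustrative, not taken from the PR):

```cpp
// Compare the blocked kernel against a naive reference matmul
// (same translation unit as the sgemm_blocked sketch above).
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const std::size_t M = 300, N = 257, K = 513; // deliberately not multiples of 256
    std::vector<float> A(M * K), B(K * N), C(M * N, 0.0f), Cref(M * N, 0.0f);
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    for (auto & x : A) x = dist(rng);
    for (auto & x : B) x = dist(rng);

    sgemm_blocked(M, N, K, A.data(), B.data(), C.data());

    for (std::size_t i = 0; i < M; ++i) // naive reference
        for (std::size_t k = 0; k < K; ++k)
            for (std::size_t j = 0; j < N; ++j)
                Cref[i * N + j] += A[i * K + k] * B[k * N + j];

    float max_err = 0.0f;
    for (std::size_t i = 0; i < M * N; ++i)
        max_err = std::max(max_err, std::fabs(C[i] - Cref[i]));
    std::printf("max abs error: %g\n", max_err);
    // FP32 summation order differs between the two, so expect a small nonzero error.
    return max_err < 1e-3f ? 0 : 1;
}
```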

Performance Testing:

Observed a 50-70% improvement in prompt processing speed, measured using llama-bench with the Meta-Llama3-8B FP32 model. Similar gains were observed with the Mistral-7b-Instruct-v0.3 model.

| model            | Size      | Params | Backend | Threads | Test   | Patch (t/s) | Base (t/s) |
| ---------------- | --------- | ------ | ------- | ------- | ------ | ----------- | ---------- |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp512  | 98.58       | 60.30      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp1024 | 95.88       | 57.36      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp2048 | 85.46       | 53.26      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp4096 | 68.66       | 45.78      |
| llama 8B all F32 | 29.92 GiB | 8.03 B | CPU     | 20      | pp6144 | 57.35       | 40.44      |

A 25-30% improvement in prompt processing speed was observed with llama-batched-bench and Meta-Llama3-8B for large prompts (256, 512, 1024, 2048, and 4096 tokens) across various batch sizes (1, 2, 4, 8, 16).


shalinib-ibm force-pushed the tb_ppc_sgemm_opt branch 4 times, most recently from 082d7a3 to 96b1f4d on August 5, 2025 at 11:09