
Commit 77ad802

llamafile: fix fp16 loading typo

Signed-off-by: Aaron Teo <[email protected]>

1 parent f84a37b

1 file changed: 1 addition, 1 deletion

ggml/src/ggml-cpu/llamafile/sgemm.cpp

Lines changed: 1 addition & 1 deletion
@@ -253,7 +253,7 @@ template <> inline float32x4_t load(const ggml_fp16_t * p) {
     float tmp[4];
 
     for (int i = 0; i < 4; i++) {
-        tmp[i] = GGML_FP16_TO_FP32(x[i]);
+        tmp[i] = GGML_FP16_TO_FP32(p[i]);
     }
 
     return vec_xl(0, (const float *)(tmp));
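
For context, the fix is purely a naming correction: the function's pointer parameter is p, while the removed line indexed an undeclared x. Below is a minimal sketch of how the corrected loader reads after this commit; the float32x4_t typedef, ggml_fp16_t, GGML_FP16_TO_FP32, and the vec_xl intrinsic all come from ggml's headers and the target's vector extensions, and are assumed context here rather than part of the change.

template <> inline float32x4_t load(const ggml_fp16_t * p) {
    float tmp[4];

    // Widen each half-precision element to fp32 via ggml's conversion
    // helper; the pre-fix code indexed a nonexistent `x` instead of the
    // parameter `p`.
    for (int i = 0; i < 4; i++) {
        tmp[i] = GGML_FP16_TO_FP32(p[i]);
    }

    // Unaligned vector load of the four converted floats.
    return vec_xl(0, (const float *)(tmp));
}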
