
Conversation


@junchao-loongson junchao-loongson commented Sep 8, 2025

Compiling with the -DGGML_LASX=OFF parameter produces the compilation errors below. This is due to some 128-bit SIMD code not being correctly placed with respect to the #if defined(__loongarch_asx) ... #endif guards, as well as some missing explicit type conversions.

/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h: In function ‘__lsx_f16x4_load’:
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h:1055:12: error: incompatible types when returning type ‘__vector(2) long long int’ but ‘__m128’ {aka ‘__vector(4) float’} was expected
     return __lsx_vld(tmp, 0);
            ^~~~~~~~~
In file included from /home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/quants.c:5:
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h: In function ‘__lsx_f16x4_load’:
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h:1055:12: error: incompatible types when returning type ‘__vector(2) long long int’ but ‘__m128’ {aka ‘__vector(4) float’} was expected
     return __lsx_vld(tmp, 0);
            ^~~~~~~~~
In file included from /home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:14:
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/vec.h: In function ‘ggml_vec_dot_f16_unroll’:
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/vec.h:236:67: error: incompatible types when initializing type ‘float’ using type ‘__vector(2) long long int’
         GGML_F16_VEC sum[GGML_VEC_DOT_UNROLL][GGML_F16_ARR] = { { GGML_F16_VEC_ZERO } };
                                                                   ^~~~~~~~~~~~~~~~~
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/vec.h:255:13: error: incompatible types when initializing type ‘__m128’ {aka ‘const __vector(4) float’} using type ‘__vector(2) long long int’
             GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
             ^~~~~~~~~~~~~~~~~~~
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/vec.h: In function ‘ggml_vec_mad_f32’:
/home/junchao/work/ai/llama.cpp/ggml/src/ggml-cpu/vec.h:370:27: error: incompatible types when initializing type ‘__m128’ {aka ‘__vector(4) float’} using type ‘__vector(2) long long int’
         GGML_F32_VEC vx = GGML_F32_VEC_SET1(v);
                           ^~~~~~~~~~~~~~~~~
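For context, here is a minimal sketch of the kind of explicit conversion the errors above call for: __lsx_vld yields an integer vector (the "__vector(2) long long int" in the messages), so returning its result from a function declared to return __m128 needs an explicit cast. The helper name and buffer handling below are simplified placeholders, not the actual ggml code.

    // Sketch only: GCC's vector extensions allow an explicit cast between
    // same-sized vector types, which resolves the "incompatible types" error.
    #include <lsxintrin.h>

    static inline __m128 lsx_load_f32x4_sketch(const float * tmp) {
        // __lsx_vld returns an integer vector; cast it to the float vector
        // type expected by the caller instead of returning it directly.
        return (__m128)__lsx_vld(tmp, 0);
    }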

  • Added explicit type conversions to prevent the compilation errors
  • Moved the LSX-implemented functions into the appropriate macro blocks (see the sketch below)
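A rough sketch of the guard layout the second point refers to, using placeholder helper names rather than the actual ggml symbols: 128-bit LSX helpers belong under __loongarch_sx so they remain available when LASX is compiled out with -DGGML_LASX=OFF, while 256-bit LASX helpers stay under __loongarch_asx.

    #include <lsxintrin.h>
    #if defined(__loongarch_asx)
    #include <lasxintrin.h>
    #endif

    #if defined(__loongarch_sx)
    // 128-bit (LSX) helper: compiled whenever LSX is enabled, with or without LASX.
    static inline __m128 add_f32x4_sketch(__m128 a, __m128 b) {
        return __lsx_vfadd_s(a, b);
    }
    #endif

    #if defined(__loongarch_asx)
    // 256-bit (LASX) helper: compiled only when LASX is enabled.
    static inline __m256 add_f32x8_sketch(__m256 a, __m256 b) {
        return __lasx_xvfadd_s(a, b);
    }
    #endif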

Make sure to read the contributing guidelines before submitting a PR

@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Sep 8, 2025
@junchao-loongson junchao-loongson changed the title Fix loongarch lsx compilation error CPU:Fix loongarch lsx compilation error Sep 9, 2025
@junchao-loongson junchao-loongson changed the title CPU:Fix loongarch lsx compilation error CPU: Fix loongarch lsx compilation error Sep 9, 2025

@ggerganov ggerganov left a comment


We've improved support for adding self-hosted runners to the CI: https://github.com/ggml-org/llama.cpp/blob/master/ci/README.md

You can consider adding a loongarch runner to make things more stable in the future.

@ggerganov ggerganov merged commit aa719c2 into ggml-org:master Sep 25, 2025
46 of 48 checks passed
pwilkin pushed a commit to pwilkin/llama.cpp that referenced this pull request Sep 25, 2025
struct pushed a commit to struct/llama.cpp that referenced this pull request Sep 26, 2025
yael-works pushed a commit to yael-works/llama.cpp that referenced this pull request Oct 15, 2025
pwilkin pushed a commit to pwilkin/llama.cpp that referenced this pull request Oct 23, 2025
