@yeahdongcn yeahdongcn commented Aug 20, 2025


This PR introduces a new GGML_UNUSED_VARS macro that marks multiple unused variables in a single statement, replacing the verbose per-variable GGML_UNUSED() calls used throughout the codebase.

The current approach requires one line per unused variable:

GGML_UNUSED(var1);
GGML_UNUSED(var2);
GGML_UNUSED(var3);

This adds visual clutter and makes the code harder to maintain whenever variables are added to or removed from the unused list.
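As a minimal sketch of how such a variadic macro could work, the arguments can be folded into a comma expression inside `sizeof`, which references every variable without evaluating any of them. This expansion is an assumption for illustration; the exact definition merged in the PR may differ.

```c
#include <assert.h>

/* Existing single-variable form in ggml. */
#define GGML_UNUSED(x) (void)(x)

/* Hypothetical sketch of the variadic form: sizeof's operand is
 * unevaluated, so every argument is referenced (silencing unused
 * warnings) at zero runtime cost. The actual GGML_UNUSED_VARS
 * definition in the PR may differ from this. */
#define GGML_UNUSED_VARS(...) do { (void)sizeof((__VA_ARGS__, 0)); } while (0)

/* Example: three unused parameters silenced in one statement
 * instead of three separate GGML_UNUSED() calls. */
static int demo(int used, int a, int b, int c) {
    GGML_UNUSED_VARS(a, b, c);
    return used * 2;
}
```

A caller with several dead variables then writes `GGML_UNUSED_VARS(var1, var2, var3);` on one line where three `GGML_UNUSED(...)` statements were needed before.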

Testing Done

  • MUSA build passed with no warnings:

    cmake -B build -DGGML_MUSA=ON
    cmake --build build -j $(nproc) --config Release

  • Docker CUDA server image built successfully:

    docker build -t local/llama.cpp:server-cuda --target server -f .devops/cuda.Dockerfile .

@github-actions github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels Aug 20, 2025
Signed-off-by: Xiaodong Ye <[email protected]>
@yeahdongcn yeahdongcn force-pushed the xd/GGML_UNUSED_VARS branch from cb9418d to f5278fa on August 21, 2025 01:54
@yeahdongcn yeahdongcn marked this pull request as ready for review August 21, 2025 02:31
@yeahdongcn yeahdongcn merged commit 8ad038c into ggml-org:master Aug 21, 2025
47 checks passed
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Aug 22, 2025
