
Conversation

@ckastner (Collaborator)

A llama.cpp build could make use of an already installed libggml using the latter's cmake package. This would be the typical case for Linux distributions.

Note: This requires a ggml build where GGML_BIN_DIR is commented out, see 82d9ca4.

# Build and install ggml to some custom path
$ git clone https://github.com/ggml-org/ggml.git && cd ggml
$ cmake -B build -DCMAKE_INSTALL_PREFIX=/tmp/system-ggml -DCMAKE_INSTALL_RPATH='$ORIGIN'
$ cmake --build build && cmake --install build

# Build and install llama against this ggml, using ggml's cmake package
$ cmake -B build \
	-DLLAMA_USE_SYSTEM_GGML=ON \
	-DCMAKE_PREFIX_PATH=/tmp/system-ggml/lib/cmake/ggml \
	-DLLAMA_BUILD_EXAMPLES=ON \
	-DLLAMA_BUILD_TESTS=ON
$ cmake --build build
$ build/bin/llama-bench
...
$ bash ./ci/run.sh ./tmp/results ./tmp/mnt
...

# Negative check, to make sure we didn't break the regular build
$ cmake -B build \
	-DLLAMA_USE_SYSTEM_GGML=OFF \
	-DLLAMA_BUILD_EXAMPLES=ON \
	-DLLAMA_BUILD_TESTS=ON
$ cmake --build build
...
$ bash ./ci/run.sh ./tmp/results ./tmp/mnt
...
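For a downstream project other than llama.cpp, consuming the installed package might look like the following minimal sketch. Note this is hypothetical: the `ggml::ggml` target name is an assumption about what ggml's cmake package exports; adjust to whatever `ggml-config.cmake` actually provides.

```cmake
# Hypothetical consumer of the ggml installed above.
# Configure with: cmake -B build -DCMAKE_PREFIX_PATH=/tmp/system-ggml
cmake_minimum_required(VERSION 3.14)
project(ggml-consumer C)

# Locates lib/cmake/ggml/ggml-config.cmake under CMAKE_PREFIX_PATH
find_package(ggml REQUIRED)

add_executable(demo main.c)
target_link_libraries(demo PRIVATE ggml::ggml)  # assumed exported target name
```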

The tests passed, but convert_hf_to_gguf.py failed with a ModuleNotFoundError for torch, which is unexpected. I haven't investigated this yet.

llama.cpp's build requires it, too, and we may want to make use of it
without add_subdirectory(ggml).
@github-actions bot added the build (Compilation issues), examples, and ggml (changes relating to the ggml tensor library for machine learning) labels Mar 10, 2025
This facilitates package maintenance for Linux distributions, where the
libggml library most likely will be shipped as an individual package
upon which a llama.cpp package depends.
@@ -1,3 +1,5 @@
include("ggml/cmake/common.cmake")
Member

I think it would be better to have llama.cpp-specific flags instead of reusing the ggml flags like this. But this also works.

@ckastner (Collaborator, Author)

I agree. I didn't know whether the re-use of ggml_get_flags was intentional, so I went with the least intrusive option. It can easily be changed once the need arises.
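For context, the usage under discussion looks roughly like this. This is a sketch reconstructed from the patch context above, not the verbatim build script; the assumption is that ggml_get_flags() sets GF_C_FLAGS/GF_CXX_FLAGS in the parent scope, as the helper in ggml/cmake/common.cmake does.

```cmake
# Pull in the factored-out flag helper without add_subdirectory(ggml)
include("ggml/cmake/common.cmake")

# Inspects the compiler ID/version and sets GF_C_FLAGS and
# GF_CXX_FLAGS with the matching warning flags
ggml_get_flags(${CMAKE_CXX_COMPILER_ID} ${CMAKE_CXX_COMPILER_VERSION})
list(APPEND CXX_FLAGS ${GF_CXX_FLAGS})
```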

@ggerganov ggerganov merged commit 374101f into ggml-org:master Mar 17, 2025
47 checks passed
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Mar 19, 2025
* cmake: Factor out compiler flag function from ggml
* cmake: Enable building against system ggml
@ckastner ckastner deleted the system-ggml branch May 26, 2025 08:31