# llama.cpp Base Images Release v2508242014-a7e6c5d
**Released on:** 2025-08-24 20:14 (per version tag `2508242014`)
**Git commit:** `a7e6c5d`
## Available Images

All images include embedded version metadata and are ready for BodhiApp integration.

### Build Results
✅ **CPU Runtime** (Platform: linux/amd64)

```sh
docker pull ghcr.io/bodhisearch/llama.cpp:cpu-2508242014-a7e6c5d
```

✅ **CUDA Runtime** (Platform: linux/amd64; includes CPU fallback)

```sh
docker pull ghcr.io/bodhisearch/llama.cpp:cuda-2508242014-a7e6c5d
```

✅ **ROCm Runtime** (Platform: linux/amd64)

```sh
docker pull ghcr.io/bodhisearch/llama.cpp:rocm-2508242014-a7e6c5d
```

✅ **Vulkan Runtime** (Platform: linux/amd64)

```sh
docker pull ghcr.io/bodhisearch/llama.cpp:vulkan-2508242014-a7e6c5d
```
## Usage with BodhiApp

```dockerfile
# Use as the base image in a BodhiApp build
ARG BASE_VARIANT=cpu
FROM ghcr.io/bodhisearch/llama.cpp:${BASE_VARIANT}-2508242014-a7e6c5d AS runtime-base

# Your BodhiApp build continues...
# The llama-server binary is available at /app/bin/llama-server
```
## Version Information

Each image contains version metadata accessible at runtime:

```sh
# Via Docker labels
docker inspect ghcr.io/bodhisearch/llama.cpp:cpu-2508242014-a7e6c5d

# Via the version file inside the container
docker run --rm ghcr.io/bodhisearch/llama.cpp:cpu-2508242014-a7e6c5d cat /app/version.json
```
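The exact schema of `/app/version.json` is not documented in these notes. A minimal sketch of consuming it, assuming the file carries `version` and `commit` fields (both field names are assumptions) that together reconstruct the image tag:

```python
import json

# Sample payload; the "version" and "commit" field names are assumptions
# about /app/version.json, not a documented schema. In practice the raw
# text would come from:
#   docker run --rm ghcr.io/bodhisearch/llama.cpp:cpu-... cat /app/version.json
raw = '{"version": "2508242014", "commit": "a7e6c5d"}'

meta = json.loads(raw)

# The image tags above follow "<version>-<commit>", so the two fields
# should reconstruct the tag used in the pull commands.
tag = f"{meta['version']}-{meta['commit']}"
print(tag)  # → 2508242014-a7e6c5d
```

This kind of check lets a BodhiApp build assert at runtime that the embedded llama.cpp base matches the tag it was built from.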
## Changelog

- **Simplified Architecture:** focus on the llama-server binary only
- **Embedded Metadata:** version, commit, and timestamp info embedded in the images
- **Multi-platform Support:** ARM64 support for the CPU and Vulkan variants
- **BodhiApp Integration:** clean inheritance pattern with predictable paths
- **Tag-based Versioning:** timestamp-based versions for chronological ordering
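The timestamp tags appear to encode `YYMMDDHHMM` (an assumption inferred from this release's `2508242014` tag); a small sketch of splitting such a tag and why these tags sort chronologically:

```python
from datetime import datetime

def parse_tag(tag: str):
    """Split a '<timestamp>-<commit>' image tag.

    Assumes the timestamp is YYMMDDHHMM, which is inferred from this
    release's tag, not a documented format.
    """
    ts, commit = tag.split("-", 1)
    return datetime.strptime(ts, "%y%m%d%H%M"), commit

built_at, commit = parse_tag("2508242014-a7e6c5d")
print(built_at.isoformat(), commit)  # → 2025-08-24T20:14:00 a7e6c5d

# Because the timestamp is fixed-width and zero-padded, plain string
# comparison of tags orders them chronologically as well.
assert "2508242014-a7e6c5d" < "2509010000-deadbee"
```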