llama.cpp server + small language model in Docker container

llama.cpp server and a small language model bundled together inside a Docker image for easy deployment, similar to a llamafile. Inference runs on the CPU and requires AVX2 support (Intel Haswell / AMD Excavator or later generations). Server settings can be overridden with environment variables.

docker run -d --name llama1b --init -p 8001:8080/tcp ghcr.io/kth8/llama-server:llama-3.2-1b-instruct
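As a sketch of overriding server settings, llama.cpp maps its command-line arguments to LLAMA_ARG_* environment variables, which can be passed with docker run -e. The variable names below (LLAMA_ARG_CTX_SIZE, LLAMA_ARG_THREADS) come from llama.cpp's argument-to-environment mapping; which ones take effect depends on the llama.cpp version baked into the image.

```shell
# Example: raise the context window and cap CPU threads via
# llama.cpp's LLAMA_ARG_* environment variables (assumes the
# image forwards them to llama-server).
docker run -d --name llama1b --init -p 8001:8080/tcp \
  -e LLAMA_ARG_CTX_SIZE=4096 \
  -e LLAMA_ARG_THREADS=4 \
  ghcr.io/kth8/llama-server:llama-3.2-1b-instruct
```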

Verify that the server is running by opening http://127.0.0.1:8001 in your web browser or from the terminal:

curl http://127.0.0.1:8001/v1/chat/completions -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Hello"}]}'
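The endpoint speaks the OpenAI chat-completions format, so the reply text sits under choices[0].message.content in the JSON response. A small sketch that extracts just the reply with python3's standard-library json module (avoiding extra dependencies like jq); the URL and port assume the docker run command above:

```shell
# POST a chat message and print only the assistant's reply.
# choices[0].message.content is the standard OpenAI-style response path.
curl -s http://127.0.0.1:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
```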

Check if your CPU supports AVX2 on Linux:

grep -o 'avx2' /proc/cpuinfo
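That command prints one avx2 match per CPU core when the flag is present and nothing otherwise. Since grep also signals the result through its exit status, the check is easy to script; a minimal sketch:

```shell
# Report AVX2 support using grep's exit status (Linux only):
# grep -q exits 0 if 'avx2' appears anywhere in /proc/cpuinfo.
if grep -q avx2 /proc/cpuinfo; then
  echo "AVX2 supported"
else
  echo "AVX2 not supported"
fi
```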

Available tags:

ghcr.io/kth8/llama-server:llama-3.2-1b-instruct

ghcr.io/kth8/llama-server:llama-3.2-3b-instruct

ghcr.io/kth8/llama-server:mistralai_ministral-3-3b-reasoning-2512

ghcr.io/kth8/llama-server:gemma-3-270m-it

ghcr.io/kth8/llama-server:granite-4.0-350m

ghcr.io/kth8/llama-server:qwen3-0.6b

ghcr.io/kth8/llama-server:granite-4.0-h-350m

ghcr.io/kth8/llama-server:granite-4.0-h-1b

ghcr.io/kth8/llama-server:qwen3-1.7b

ghcr.io/kth8/llama-server:gemma-3-1b-it

ghcr.io/kth8/llama-server:gemma-3-4b-it

ghcr.io/kth8/llama-server:granite-4.0-1b

ghcr.io/kth8/llama-server:qwen3-4b-thinking-2507

ghcr.io/kth8/llama-server:granite-4.0-h-micro

ghcr.io/kth8/llama-server:granite-4.0-micro

ghcr.io/kth8/llama-server:ministral-3-3b-instruct-2512

ghcr.io/kth8/llama-server:smollm3-3b

ghcr.io/kth8/llama-server:qwen3-4b-instruct-2507

ghcr.io/kth8/llama-server:ministral-3-3b-reasoning-2512

ghcr.io/kth8/llama-server:lfm2-2.6b-exp

All model GGUF files are provided by bartowski.
