llama.cpp: server-cuda-b6408 (public, latest)
Install from the command line:

```shell
docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b6408
```
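Once pulled, the image can be started as an HTTP inference server. The following is a sketch of a typical invocation: the host model directory, the `.gguf` filename, and the port mapping are placeholders, and `--gpus all` assumes the NVIDIA Container Toolkit is installed on the host.

```shell
# Run the llama.cpp server with CUDA acceleration.
# /path/to/models and model.gguf are placeholders for your local
# model directory and GGUF file; --gpus all requires the NVIDIA
# Container Toolkit so the container can see the GPU.
docker run --gpus all -p 8080:8080 \
  -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server-cuda-b6408 \
  -m /models/model.gguf \
  --host 0.0.0.0 --port 8080
```

With the container running, the server exposes an OpenAI-compatible API on the mapped port, so clients can point their base URL at `http://localhost:8080`.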
Details
- Owner: ggml-org
- Repository: llama.cpp
- License: MIT
- Stars: 86.2k
- Last published: 7 hours ago
- Discussions: 2.55K
- Issues: 881