llama.cpp server-cuda-b6408 (public, latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b6408
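Once pulled, the image can be started as a containerized llama.cpp HTTP server. This is a minimal sketch: GPU passthrough, the model path, and the port mapping are illustrative assumptions, not details from this page.

```shell
# Pull the CUDA-enabled llama.cpp server image at the tag shown above
docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b6408

# Run the server with GPU access (requires the NVIDIA Container Toolkit).
# /path/to/models and the GGUF filename are placeholders you must supply.
docker run --gpus all \
  -v /path/to/models:/models \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-cuda-b6408 \
  -m /models/your-model.gguf --host 0.0.0.0 --port 8080
```

After startup, the server exposes an HTTP API on the mapped port (here, `http://localhost:8080`).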

Recent tagged image versions

  • Published about 7 hours ago · 302 downloads
    Digest: sha256:335039bc714be4ec6aa9f6eecb1cb2c4153922d94055fa30649c988f1f62e267
  • Published about 7 hours ago · 1 download
    Digest: sha256:299a64eeccaa765921e2da0a9fd67326f9f1afe4ef6e9b8a9e0291ec48de8c6b
  • Published about 7 hours ago · 6 downloads
    Digest: sha256:cc7cedd3867ad6270d14dc2f67e108e9f3585ced58920978facb3cb29e68c983
  • Published about 7 hours ago · 0 downloads
    Digest: sha256:d5eb05af56cd3197ca8bc82cfdc765e70411e94d4c7d8491343ec3796dd14834
  • Published about 7 hours ago · 67 downloads
    Digest: sha256:de49d08464910a187e3afa34953a34b8c03e3e3091edb2fdc4c059b30f9b8299

Details

  • Last published: 7 hours ago
  • Discussions: 2.55K
  • Issues: 881
  • Total downloads: 451K