llama.cpp server-cuda-b4102 (Public, latest)

Install from the command line
$ docker pull ghcr.io/autonomicperfectionist/llama.cpp:server-cuda-b4102
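Once pulled, the image can be run as a llama.cpp HTTP server. A minimal sketch, assuming this fork keeps upstream llama.cpp's server entrypoint and default port; the model path and filename below are placeholders, not part of this package listing:

```shell
# Run the CUDA server image, exposing llama.cpp's HTTP API on port 8080.
# Assumes: NVIDIA Container Toolkit is installed, and a GGUF model exists
# at /path/to/models on the host (placeholder path).
docker run --gpus all \
  -v /path/to/models:/models \
  -p 8080:8080 \
  ghcr.io/autonomicperfectionist/llama.cpp:server-cuda-b4102 \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

With the container running, the server's OpenAI-compatible endpoints (e.g. `/completion`, `/health`) should be reachable at `http://localhost:8080`, per upstream llama.cpp server conventions.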

Recent tagged image versions

  • Published over 1 year ago · Digest sha256:6e5052d9b0e9985130586d31d29eeede99d5bb968a48fee06bbbf00463a390e4 · 10 version downloads
  • Published over 1 year ago · Digest sha256:dab3c9bef2538a6d0ed932d84586ff7f17a28d7a929d73de198dd248861f320d · 9 version downloads
  • Published over 1 year ago · Digest sha256:4615b0f037637f8841f636776f8f55a7b8404e730355208927135b304c18de2d · 10 version downloads
  • Published over 1 year ago · Digest sha256:dbce8c3313ac1345ac2c901535a576ad33b2e8faca524ad6128c7b4ea9d454a2 · 9 version downloads
  • Published over 1 year ago · Digest sha256:d4cc806729731ed9eebc88053bd294efb7bfaf73c2b99854b0669773a967873a · 10 version downloads

Last published: over 1 year ago
Total downloads: 200