llama.cpp server-cuda-b6098 (public package, latest)

Install from the command line
$ docker pull ghcr.io/stevenkuang-tencent/llama.cpp:server-cuda-b6098
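After pulling, the image can be started like any llama.cpp server container. A minimal sketch is below; it assumes the image uses the standard llama.cpp server entrypoint and that a GGUF model file exists on the host (the `/path/to/models` directory and `model.gguf` filename are placeholders, not part of this package listing):

```shell
# Pull the tagged CUDA server image (from this package page)
docker pull ghcr.io/stevenkuang-tencent/llama.cpp:server-cuda-b6098

# Run the server with GPU access, mounting a host directory that
# contains the model file (paths below are illustrative placeholders)
docker run --gpus all \
  -v /path/to/models:/models \
  -p 8080:8080 \
  ghcr.io/stevenkuang-tencent/llama.cpp:server-cuda-b6098 \
  -m /models/model.gguf \
  --host 0.0.0.0 --port 8080 \
  --n-gpu-layers 99
```

`--gpus all` requires the NVIDIA Container Toolkit on the host; `--n-gpu-layers 99` asks llama.cpp to offload as many layers as fit onto the GPU.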

Recent tagged image versions

  • Published about 4 hours ago · Digest
    sha256:bb5640b07e5f9c5c581562f2918296291cc927835f0c2eaf91d877312801b539
    0 Version downloads
  • Published about 4 hours ago · Digest
    sha256:4afe7ca031969fc0995cf11411d119655c72977ca6bbc863a14c729c91327a2c
    0 Version downloads
  • Published about 4 hours ago · Digest
    sha256:f828a95ecc1450648658329677fec9ceed3e051f57fbd1d72ebdeee937a122c9
    0 Version downloads
  • Published about 4 hours ago · Digest
    sha256:2cc9c86f025922b4c9daac48e707b48fd13be7698a7ebd96410c83730cf9b80e
    0 Version downloads
  • Published about 5 hours ago · Digest
    sha256:e6fa217cd6853695f4405dff2a00ae2469f5763d91d4145caed7f02a2ae0256b
    0 Version downloads

Last published: 4 hours ago
Total downloads: 0