llama.cpp full-cuda--b1-8f824ff (Public, Latest)
Install from the command line:

$ docker pull ghcr.io/standby24x7/llama.cpp:full-cuda--b1-8f824ff

Platform-specific digests:

linux/amd64
$ docker pull ghcr.io/standby24x7/llama.cpp:full-cuda--b1-8f824ff@sha256:6a31d33f06afe8644c420578b8af1af46025d3619dd8b4f47c4122672f850bf2

unknown/unknown (typically a buildx attestation manifest rather than a runnable image)
$ docker pull ghcr.io/standby24x7/llama.cpp:full-cuda--b1-8f824ff@sha256:0e9840a45d2bf01f7825a1897a0530d8c7975664826f42719578a51d0a67bb47
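Once pulled, the image can be run with GPU access. The command below is a minimal sketch, not taken from this package's documentation: it assumes an NVIDIA GPU with the NVIDIA Container Toolkit installed, a GGUF/GGML model mounted from a local directory (the /path/to/models path, model filename, and prompt are placeholders), and that this build follows the upstream llama.cpp "full" image convention of a tools entrypoint that accepts --run:

$ docker run --gpus all -v /path/to/models:/models \
    ghcr.io/standby24x7/llama.cpp:full-cuda--b1-8f824ff \
    --run -m /models/model.gguf -p "Building a website can be done in 10 simple steps:" -n 128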