
llama.cpp: full-cuda--b1-8f824ff (public package, latest tag)

Install from the command line
$ docker pull ghcr.io/standby24x7/llama.cpp:full-cuda--b1-8f824ff
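Running the pulled image (a sketch, not verified against this particular build): assuming this tag keeps the upstream llama.cpp full-image entrypoint, which accepts a --run subcommand followed by the usual llama.cpp options, and that the host has the NVIDIA Container Toolkit installed so --gpus all works, an invocation would look roughly like the command below. The mount path /path/to/models and the model file /models/model-q4_0.gguf are placeholders for a GGUF model provided by you.

$ docker run --gpus all -v /path/to/models:/models \
    ghcr.io/standby24x7/llama.cpp:full-cuda--b1-8f824ff \
    --run -m /models/model-q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 256 --n-gpu-layers 35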


Details

Last published: 1 year ago
Total downloads: 951