llama-cpp-python          0.1.62
```

**(4) Download a v3 ggml model**
- **ggmlv3**
- file name ends with **q4_0.bin**, indicating it is 4-bit quantized with quantization method 0

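As a quick illustration of the naming convention above, the quantization tag can be pulled out of a filename like this (the helper name and regex are my own, not part of any library):

```python
import re

def parse_quant_tag(filename: str):
    """Return (bits, method) parsed from a name like '...q4_0.bin', or None."""
    # qN_M.bin: N = bit width, M = quantization method, as described above.
    m = re.search(r"q(\d+)_(\d+)\.bin$", filename)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_quant_tag("ggml-vic13b-q4_0.bin"))  # (4, 0): 4-bit, method 0
```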
https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GGML

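One way to fetch a single q4_0 file from that repo is with a direct download; the exact filename below is an assumption based on the repo's usual naming scheme, so verify it against the "Files" tab first:

```shell
# Download one quantized model file directly.
# The filename is assumed -- check the repository's file listing before running.
wget https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GGML/resolve/main/open-llama-7b-open-instruct.ggmlv3.q4_0.bin
```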
**(6) run the llama-cpp-python API server with macOS Metal GPU support**
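A typical invocation for this step might look like the following; the model path is a placeholder for whichever ggmlv3 q4_0 file you downloaded, and `--n_gpu_layers 1` is what enables Metal offload in llama-cpp-python builds compiled with Metal support:

```shell
# Launch the OpenAI-compatible API server with Metal GPU offload.
# The model path is a placeholder -- point it at your downloaded q4_0 file.
python3 -m llama_cpp.server \
  --model ./open-llama-7b-open-instruct.ggmlv3.q4_0.bin \
  --n_gpu_layers 1
```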