Releases: bartowski1182/llama.cpp

b2943 (20 May 00:19, commit b442ab0)

Merge branch 'ggerganov:master' into master

b2940 (19 May 17:52, commit 063b0d4)

Merge branch 'ggerganov:master' into master

b2937 (19 May 17:31)

Add Smaug tokenizer support

b2936 (19 May 17:28, commit 5ca49cb)

ggml: implement quantized KV cache for FlashAttention (#7372)