# Croco.Cpp (CCPP) :

<details>
<summary>Unroll DISCLAIMER:</summary>

Croco.Cpp is a fork of KoboldCPP, formerly known as KoboldCPP Frankenstein, Frankenfork, or KCPP-F for short.
The name change is due to my boredom with the Frankenstein marker I myself initiated a year ago.
As usual, the Croco.Cpp builds are NOT supported by the KoboldCPP (KCPP) team, Github, or Discord channel.
They are for testing and amusement only.
Any support found for them is a courtesy, not a due.
My CCPP version number bumps as soon as the version number in the official experimental branch bumps, in the following way x.xxx : (KCPP)x.xx.x.(CCPP)xx (for instance, CCPP 1.71010 corresponds to KCPP 1.71.0 plus CCPP revision 10).
CCPP builds are not "upgrades" over the official version, and they might be bugged at times : only the official KCPP releases are to be considered correctly numbered, reliable and "fixed".
The LlamaCPP version plus the additional PRs integrated follow my CCPP versioning in the title, so everybody knows what version they deal with.
Important : new models sometimes integrated in my builds (like recently Mistral Nemo, which posed problems for several users) are for personal testing only, and CAN'T be fixed if they fail, because their support comes from third-party LlamaCPP PRs merged "savagely" into my builds, sometimes before even being merged on LlamaCPP master.

</details>

## Presentation :

Croco.Cpp (CCPP) is a fork of the experimental branch of KoboldCPP (KCPP), mainly aimed at NVidia CUDA users (I'm using Ampere GPUs myself; it MIGHT support the other backends as well, since everything but Hipblas/ROCm is compiled, but that remains untested), with a few modifications according to my own needs :
20+
- More context steps in GUI, as well as more Blas Batch Size (supports MMVQ 1-8 for example)
21+
- 26 different modes of quantization for the context cache (F16, 20 KV modes with Flash Attention, 5 K modes without Flash Attention for models like Gemma)
22+
- A slightly different benchmark (one flag per column instead of a single flag space).
23+
- 8 Stories slots instead of 6 in the web-interface (KLite).
24+
- Often some PRs unsupported/not yet supported in KCPP.
25+
- More infos displayed in the CLI, without activating debug mode.
26+
- Smartcontext instead of contextshift by default in GUI for compatibility with Gemma
27+
- Since 1.71010, an enhanced model layers autoloader on GPU, based on Concedo's code and Pyroserenus formulas, but different from Henky's subsequent commit on KCPP-official. It's compatible with KV_Quants, works in single and multi-GPU, is accessible in CLI and GUI modes, and can be configured easily in tandem with tensor split for an entirely customized loading accordingly to one's rig and needs.
28+
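
For illustration only, here is a minimal sketch of what such an autoloader has to estimate. The function name, signature and the fixed overhead figure are assumptions for this sketch, not the actual Concedo/Pyroserenus code :

```python
# Hypothetical sketch of a GPU layers autoloader, NOT the actual KCPP code.
# It estimates how many model layers fit in free VRAM, reserving room for
# the (possibly quantized) KV cache and a fixed compute-buffer overhead.

def estimate_gpu_layers(n_layers: int, model_bytes: int, free_vram_bytes: int,
                        ctx_size: int, n_embd: int, kv_bpw: float = 16.0,
                        overhead_bytes: int = 512 * 1024**2) -> int:
    """Return how many of the model's layers to offload to the GPU."""
    weight_bytes_per_layer = model_bytes / n_layers
    # KV cache per layer: ctx tokens * 2 tensors (K and V) * n_embd values,
    # at kv_bpw bits per value (16 for F16, 8.5 for Q8_0, 4.5 for Q4_0).
    kv_bytes_per_layer = ctx_size * 2 * n_embd * kv_bpw / 8
    usable = free_vram_bytes - overhead_bytes
    fit = int(usable // (weight_bytes_per_layer + kv_bytes_per_layer))
    return max(0, min(fit, n_layers))
```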

Recommended settings for Command Line Interface / GUI :

```
--flashattention (except for Gemma)
--blasbatchsize 128 (256 for Gemma)
--usecublas mmq (for NVidia users, MMQ mode is faster)
```

Check the help section (koboldcpp.exe --help or python koboldcpp.py --help) for more info.
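
Put together, a typical launch might look like this (the model path is a placeholder; adjust context size and KV quant mode to your rig) :

```
python koboldcpp.py --model ./model.gguf --usecublas mmq --flashattention --blasbatchsize 128 --contextsize 8192 --quantkv 1
```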

## Croco.Cpp specifics :

<details>
<summary>Unroll the 26 KV cache options (all should be considered experimental except F16, KV Q8_0, and KV Q4_0)</summary>

With Flash Attention :
- F16 -> Foolproof (the usual KV quant since the beginning of LCPP/KCPP)
- K F16 with : V Q8_0, Q5_1, Q5_0, Q4_1, Q4_0
- K Q8_0 with : V F16, Q8_0 (stable, my current main, part of the LCPP/KCPP main triplet), Q5_1 (maybe unstable), Q5_0 (maybe unstable), Q4_1 (maybe stable), Q4_0 (maybe stable); the rest is untested beyond benches
- K Q5_1 with : V Q5_1, Q5_0, Q4_1, Q4_0
- K Q5_0 with : V Q5_0, Q4_1, Q4_0
- K Q4_1 with : V Q4_1 (stable), Q4_0 (maybe stable)
- KV Q4_0 (quite stable, if we consider that it's part of the LCPP/KCPP main triplet)

Works in command line, normally also via the GUI, and normally saves to .KCPPS config files.

Without Flash Attention or MMQ (for models like Gemma) :
- V F16 with K Q8_0, Q5_1, Q5_0, Q4_1, and Q4_0.

</details>

<details>
<summary>Unroll the options to set KV Quants</summary>

KCPP official KV quantized modes (modes 1 and 2 require Flash Attention) :

0 = 1616/F16 (16 BPW),
1 = FA8080/KVq8_0 (8.5 BPW),
2 = FA4040/KVq4_0 (4.5 BPW)

CCPP unofficial KV quantized modes (require Flash Attention) :

3 = FA1680/Kf16-Vq8_0 (12.25 BPW),
4 = FA1651/Kf16-Vq5_1 (11 BPW),
5 = FA1650/Kf16-Vq5_0 (10.75 BPW),
6 = FA1641/Kf16-Vq4_1 (10.5 BPW),
7 = FA1640/Kf16-Vq4_0 (10.25 BPW),
8 = FA8051/Kq8_0-Vq5_1 (7.25 BPW),
9 = FA8050/Kq8_0-Vq5_0 (7 BPW),
10 = FA8041/Kq8_0-Vq4_1 (6.75 BPW),
11 = FA8040/Kq8_0-Vq4_0 (6.5 BPW),
12 = FA5151/KVq5_1 (6 BPW),
13 = FA5150/Kq5_1-Vq5_0 (5.75 BPW),
14 = FA5141/Kq5_1-Vq4_1 (5.5 BPW),
15 = FA5140/Kq5_1-Vq4_0 (5.25 BPW),
16 = FA5050/Kq5_0-Vq5_0 (5.5 BPW),
17 = FA5041/Kq5_0-Vq4_1 (5.25 BPW),
18 = FA5040/Kq5_0-Vq4_0 (5 BPW),
19 = FA4141/Kq4_1-Vq4_1 (5 BPW),
20 = FA4140/Kq4_1-Vq4_0 (4.75 BPW)

21 = 1616/F16 (16 BPW) (same as 0, I just used it for the GUI slider)

22 = 8016/Kq8_0-Vf16 (12.25 BPW), both FA and no-FA

23 = 5116/Kq5_1-Vf16 (11 BPW), no-FA
24 = 5016/Kq5_0-Vf16 (10.75 BPW), no-FA
25 = 4116/Kq4_1-Vf16 (10.50 BPW), no-FA
26 = 4016/Kq4_0-Vf16 (10.25 BPW), no-FA

In the CLI argument : choices=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26], default=0

</details>
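
To make the BPW figures concrete, here is a rough sizing sketch. It assumes the listed BPW is the average over K and V entries and ignores GQA (which shrinks the KV width on many modern models), so treat the results as upper-bound estimates :

```python
# Back-of-the-envelope KV cache sizing from the BPW figures above.
# Assumption: listed BPW averages K and V, so
#   bytes = ctx * n_layers * 2 (K and V) * n_embd * bpw / 8.
def kv_cache_bytes(ctx: int, n_layers: int, n_embd: int, bpw: float) -> float:
    return ctx * n_layers * 2 * n_embd * bpw / 8

# Example: a hypothetical 32-layer, n_embd-4096 model at 8192 context.
for mode, bpw in [("0: F16", 16.0), ("1: KVq8_0", 8.5),
                  ("8: Kq8_0-Vq5_1", 7.25), ("2: KVq4_0", 4.5)]:
    print(f"{mode:>15}: {kv_cache_bytes(8192, 32, 4096, bpw) / 1024**3:.2f} GiB")
```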

<details>
<summary>Unroll the details about Emphasisfsm by Yoshqu</summary>

A common problem during text generation is misplaced emphasis characters.

*looks at you "why* this is here?"

while it should be

*looks at you* "why this is here?"

Emphasisfsm solves this with a simple (and fast) grammar expressed as a deterministic finite state machine.

![Letters](emphasis-dfsm-letters.png)

Single letters are not practical in LLMs, as tokens often contain more than one.

Emphasisfsm uses LLM tokens as its alphabet, making it very fast.

![Tokens](emphasis-dfsm-tokens.png)

Those are only the most obvious examples. There are more, e.g. ' "***' is a valid token to transition from quote to star, and '*this' is valid for quote->none or none->quote.

### Usage

To support a variety of GUIs, this extension shamefully exploits the GBNF grammar string. *This is not a proper GBNF grammar; it only uses the field which is easily editable in most GUIs.*

![KoboldCpp hack](gbnf-kob.png) ![SillyTavern hack](gbnf-st.png)

emphasisfsm "_bias_[D][_emph1_][,_emphn_]"

With an empty string, emphasisfsm is disabled. The easiest way to enable it is

emphasisfsm "-20"

which defaults to

emphasisfsm "-20 \" \" * *"

(no debug; only * and " are considered)
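
As a reading aid only, here is a hypothetical sketch of parsing that option string under the format above (bias, optional D debug flag, then open/close emphasis pairs); this is not the actual parser :

```python
# Hypothetical parser for the emphasisfsm option string, assuming the
# "_bias_[D][_emph1_][,_emphn_]" format described above. Not the real code.
def parse_emphasisfsm(cfg: str):
    if not cfg:
        return None                                 # empty string: disabled
    parts = cfg.split()
    bias = float(parts[0])                          # e.g. -20
    debug = len(parts) > 1 and parts[1] == "D"
    chars = parts[2:] if debug else parts[1:]
    chars = chars or ['"', '"', "*", "*"]           # documented default
    pairs = list(zip(chars[::2], chars[1::2]))      # (open, close) pairs
    return bias, debug, pairs

print(parse_emphasisfsm('-20'))  # (-20.0, False, [('"', '"'), ('*', '*')])
```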

### How it works

The main loop is extended from:

- retrieve logits
- sample logits, select token (top_k and friends)
- output token

to:

- retrieve logits
- ban forbidden emphasisfsm transitions from the current state (setting their logits low)
- sample logits, select token (top_k and friends)
- emphasisfsm transition on the selected token
- output token
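
In code terms, a minimal sketch of that extended step could look like this (the names and the greedy pick are assumptions for illustration; the real sampler lives in the C++ code) :

```python
# Minimal sketch of the extended sampling step. Forbidden transitions from
# the current FSM state get their logits biased low ("banned") before the
# regular sampler runs; the FSM then follows the token actually emitted.
def sample_with_emphasisfsm(logits: list[float],
                            allowed: dict[int, set[int]],    # state -> allowed tokens
                            next_state: dict[tuple[int, int], int],
                            state: int, bias: float = -20.0) -> tuple[int, int]:
    biased = [l if t in allowed[state] else l + bias
              for t, l in enumerate(logits)]
    tok = max(range(len(biased)), key=biased.__getitem__)  # stand-in for top_k sampling
    return tok, next_state.get((state, tok), state)
```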

### TODO

- detect UTF-8 letters split over more than one token (I don't plan to support it, but a warning would be nice)
- ban end-token generation inside of emphasis, forcing the LLM to finish its 'thought'?

### Meta-Llama-3-8B stats for the default (" *) emphasisfsm

```
empcats_gen: ban bias: -17.500000
empcats_gen: emphasis indifferent tokens: 126802
empcats_gen: tokens for emphasis '"' '"': 1137
empcats_gen: tokens for emphasis '*' '*': 315
empcats_gen: always banned tokens: 2
empcats_gen: total tokens: 128256
```

The always banned tokens are :

<pre>' "*"', ' "*"'</pre>

### Tests

emphasisfsm "-20 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 0 0"

This forces every digit to act as a quotation mark, so an example text completion looks like:

```
Give me math vector of random numbers.Here is a 3-dimensional math vector with random numbers:
Vector:
[
3.445,
-5.117,
7.992
]
```

There is no other digit between two 3s, two 4s, two 5s, and so on.

</details>

<details>
<summary>Unroll the DRY sampler recommended settings</summary>

- Multiplier : 0.8
- Base : 1.75
- Allowed length : 2
- Range : uses the repetition penalty range (the usual parameter is 2048)
- Usual sequence breakers : '\n', ':', '"', '*'

</details>
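
For instance, these settings could be passed through the KoboldCpp generate API roughly as follows (the exact field names are an assumption from recent builds; verify against your version's API documentation) :

```python
# Hypothetical request applying the DRY settings above via the KoboldCpp API.
# Field names are assumptions; check your build's /api documentation.
import requests

payload = {
    "prompt": "Once upon a time",
    "max_length": 200,
    "rep_pen_range": 2048,               # the DRY range follows this parameter
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_sequence_breakers": ["\n", ":", '"', "*"],
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```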

# Croco.Cpp notes :

- I often mislabel the CUDA specifics of the builds. Here's the right nomenclature :
  - Cuda 12.2 arch 60617075 : Cu12.2_SMC2_Ar60617075_DmmvX64Y2_MMY2_KQIt2
  - Cuda 12.1 arch 52617075 : CuCML_ArCML_SMC2_DmmvX32Y1 (CML : CMakeList)
  - Cuda 11.4.4/11.5 arch 35375052 : CuCML_ArCML_SMC2_DmmvX32Y1

# koboldcpp-experimental

KoboldCpp-experimental is a slightly extended KoboldCpp with [custom](experimental/README.md) functionality.

# koboldcpp

KoboldCpp is an easy-to-use AI text-generation software for GGML and GGUF models, inspired by the original **KoboldAI**. It's a single self-contained distributable from Concedo, that builds off llama.cpp, and adds a versatile **KoboldAI API endpoint**, additional format support, Stable Diffusion image generation, speech-to-text, backward compatibility, as well as a fancy UI with persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and everything KoboldAI and KoboldAI Lite have to offer.

- [Stable Diffusion 1.5 and SDXL safetensor models](https://github.com/LostRuins/koboldcpp/wiki#can-i-generate-images-with-koboldcpp)
- [LLaVA based Vision models and multimodal projectors (mmproj)](https://github.com/LostRuins/koboldcpp/wiki#what-is-llava-and-mmproj)
- [Whisper models for Speech-To-Text](https://huggingface.co/koboldcpp/whisper/tree/main)