README.md: 7 additions & 3 deletions
@@ -25,14 +25,18 @@ The app supports three types of suggestion services:

 - Models with chat completions API
 - Models with completions API
+- Models with FIM API
 - [Tabby](https://tabby.tabbyml.com)

 If you are new to running a model locally, you can try [Ollama](https://ollama.com) and [LM Studio](https://lmstudio.ai).

 ### Recommended Settings

 - Use Tabby since they have extensive experience in code completion.
-- Use models with completions API with Fill-in-the-Middle support (for example, codellama:7b-code), and use the "Codellama Fill-in-the-Middle" strategy.
+- Use models with completions API with Fill-in-the-Middle support (for example, codellama:7b-code), and use the "Fill-in-the-Middle" strategy.
+- Use models with FIM API.

 When using custom models to generate suggestions, it is recommended to set up a lower suggestion limit for faster generation.

 ### Others
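The three custom-service types above differ mainly in how the code around the cursor reaches the model. A rough sketch of the request payloads, assuming OpenAI-compatible conventions (the model names, field names, and the CodeLlama token format are illustrative; check what your server actually accepts):

```python
# Illustrative payloads for the three suggestion-service types.
# Assumption: an OpenAI-compatible server; adjust model and field names.

prefix = "def fib(n):\n    "    # code before the cursor
suffix = "\nprint(fib(10))"     # code after the cursor

# 1. Chat completions API: the context is packed into chat messages.
chat_request = {
    "model": "codellama:7b-instruct",
    "messages": [
        {"role": "system", "content": "Complete the user's code."},
        {"role": "user", "content": prefix},
    ],
}

# 2. Completions API: a single prompt string; if the model supports FIM,
#    the client must encode prefix/suffix with the model's special tokens.
completions_request = {
    "model": "codellama:7b-code",
    "prompt": f"<PRE> {prefix} <SUF>{suffix} <MID>",
}

# 3. FIM API: the server takes prefix and suffix as separate fields and
#    assembles the model-specific infilling prompt itself.
fim_request = {
    "model": "codellama:7b-code",
    "prompt": prefix,
    "suffix": suffix,
}
```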
@@ -53,8 +57,8 @@ The template format differs in different tools.

 - Default: This strategy meticulously explains the context to the model, prompting it to generate a suggestion.
 - Naive: This strategy rearranges the code in a naive way to trick the model into believing it's appending code at the end of a file.
 - Continue: This strategy employs the "Please Continue" technique to persuade the model that it has started a suggestion and must continue to complete it. (Only effective with the chat completions API.)
-- CodeLlama Fill-in-the-Middle: It uses special tokens to guide the models to generate suggestions. The models need to support FIM to use it (codellama:xb-code, starcoder, etc.). This strategy uses the special tokens documented by CodeLlama.
-- CodeLlama Fill-in-the-Middle with System Prompt: The previous one doesn't have a system prompt telling it what to do. You can try to use it with models that don't support FIM.
+- Fill-in-the-Middle: It uses special tokens to guide the models to generate suggestions. The models need to support FIM to use it (codellama:xb-code, starcoder, etc.). You need to set up a prompt format for it to work properly. The default prompt format is for codellama.
+- Fill-in-the-Middle with System Prompt: The previous one doesn't have a system prompt telling it what to do. You can try it with models that don't support FIM.
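As a minimal sketch of how a Fill-in-the-Middle prompt is assembled with CodeLlama's documented infilling tokens (the helper name is illustrative, not the app's actual code; other FIM-capable models such as StarCoder use different tokens, which is why the strategy lets you set up the prompt format):

```python
def codellama_fim_prompt(prefix: str, suffix: str) -> str:
    """Build a CodeLlama infilling prompt.

    CodeLlama's format puts the code before the cursor after <PRE>,
    the code after the cursor after <SUF>, and the model generates
    the missing middle after <MID>.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = codellama_fim_prompt(
    "def add(a, b):\n    return ",  # text before the cursor
    "\n\nprint(add(1, 2))",          # text after the cursor
)
```

The model's continuation after `<MID>` becomes the suggestion inserted at the cursor.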