codecompanion-workspace.json: 24 additions, 0 deletions

```diff
@@ -81,6 +81,30 @@
         "path": "tests/helpers.lua"
       }
     ]
+  },
+  {
+    "name": "Adapters",
+    "system_prompt": "In the CodeCompanion plugin, adapters are used to connect to LLMs. The adapters contain various options for the LLM's endpoint alongside a defined schema for properties such as the model, temperature, top k and top p. The adapters also contain various handler functions which define how messages sent to the LLM should be formatted, alongside how output from the LLM should be received and displayed in the chat buffer. The adapters are defined in the `adapters` directory.",
+    "opts": {
+      "remove_config_system_prompt": true
+    },
+    "vars": {
+      "base_dir": "lua/codecompanion"
+    },
+    "files": [
+      {
+        "description": "Each LLM has its own adapter. This allows LLM settings to be generated from the schema table in an adapter before they're sent to the LLM via the http file.",
+        "path": "${base_dir}/adapters/init.lua"
+      },
+      {
+        "description": "Adapters are then passed to the HTTP client, which sends requests to LLMs via curl:",
+        "path": "${base_dir}/http.lua"
+      },
+      {
+        "description": "Adapters must follow a schema. The validation, and how schema values are extracted from the table schema, is defined in:",
```
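The workspace entry above describes adapters as endpoint options plus a schema and a set of handler functions. As a rough illustration of that shape (the field names, handler names, and endpoint here are assumptions for the sketch, not the plugin's exact adapter interface):

```lua
-- Illustrative sketch only: an adapter bundles endpoint options, a schema
-- of tunable properties, and handlers that format input and parse output.
-- Names below are hypothetical, not CodeCompanion's real interface.
local adapter = {
  name = "my_llm",
  url = "https://api.example.com/v1/chat/completions", -- hypothetical endpoint
  schema = {
    model = { default = "example-model" },
    temperature = { default = 0.7 },
  },
  handlers = {
    -- Shape outgoing chat messages into the request body the endpoint expects
    form_messages = function(messages)
      return { messages = messages }
    end,
    -- Pull displayable text out of a decoded response table
    chat_output = function(data)
      return data.choices and data.choices[1].message.content or nil
    end,
  },
}

local out = adapter.handlers.chat_output({
  choices = { { message = { content = "Hello" } } },
})
print(out) --> Hello
```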
doc/extending/adapters.md: 4 additions, 1 deletion

````diff
@@ -18,6 +18,8 @@ Let's take a look at the interface of an adapter as per the `adapter.lua` file:
 ---@field env_replaced? table Replacement of environment variables with their actual values
 ---@field headers table The headers to pass to the request
 ---@field parameters table The parameters to pass to the request
+---@field body table Additional body parameters to pass to the request
+---@field chat_prompt string The system chat prompt to send to the LLM
 ---@field raw? table Any additional curl arguments to pass to the request
 ---@field opts? table Additional options for the adapter
 ---@field handlers table Functions which link the output from the request to CodeCompanion
@@ -448,4 +450,5 @@ temperature = {
 },
 ```
-You'll see we've specified a function call for the `condition` key. We're simply checking that the model name doesn't begin with `o1`, as these models don't accept temperature as a parameter. You'll also see we've specified a function call for the `validate` key. We're simply checking that the value of the temperature is between 0 and 2
+You'll see we've specified a function call for the `condition` key. We're simply checking that the model name doesn't begin with `o1`, as these models don't accept temperature as a parameter. You'll also see we've specified a function call for the `validate` key. We're simply checking that the value of the temperature is between 0 and 2.
````
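The `condition`/`validate` pattern described in that paragraph can be sketched as follows. The `check` helper and the `condition(model)` signature are assumptions made for illustration, not the plugin's actual API:

```lua
-- Sketch of a schema entry with `condition` and `validate` callbacks,
-- mirroring the documentation above. `check` is a hypothetical helper.
local schema = {
  temperature = {
    default = 0.7,
    -- Skip this parameter for o1 models, which reject temperature
    condition = function(model)
      return not string.find(model, "^o1")
    end,
    -- Accept only values between 0 and 2
    validate = function(n)
      return n >= 0 and n <= 2, "Must be between 0 and 2"
    end,
  },
}

local function check(model, value)
  local entry = schema.temperature
  if entry.condition and not entry.condition(model) then
    return false, "parameter not applicable to " .. model
  end
  return entry.validate(value)
end

print(check("gpt-4", 0.5))   --> true
print(check("o1-mini", 0.5)) --> false	parameter not applicable to o1-mini
```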
lua/codecompanion/adapters/copilot.lua: 1 addition, 1 deletion

```diff
@@ -254,7 +254,7 @@ return {
       order = 4,
       mapping = "parameters",
       type = "integer",
-      default = 4096,
+      default = 15000,
       desc = "The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.",
```
```diff
+      desc = "The maximum number of tokens to include in a response candidate. Note: The default value varies by model",
+      validate = function(n)
+        return n > 0, "Must be greater than 0"
+      end,
+    },
+    temperature = {
+      order = 3,
+      mapping = "body.generationConfig",
+      type = "number",
+      optional = true,
+      default = nil,
+      desc = "Controls the randomness of the output.",
+      validate = function(n)
+        return n >= 0 and n <= 2, "Must be between 0 and 2"
+      end,
+    },
+    topP = {
+      order = 4,
+      mapping = "body.generationConfig",
+      type = "integer",
+      optional = true,
+      default = nil,
+      desc = "The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and Top-p (nucleus) sampling. Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability.",
+      validate = function(n)
+        return n > 0, "Must be greater than 0"
+      end,
+    },
+    topK = {
+      order = 5,
+      mapping = "body.generationConfig",
+      type = "integer",
+      optional = true,
+      default = nil,
+      desc = "The maximum number of tokens to consider when sampling",
+      validate = function(n)
+        return n > 0, "Must be greater than 0"
+      end,
+    },
+    presencePenalty = {
+      order = 6,
+      mapping = "body.generationConfig",
+      type = "number",
+      optional = true,
+      default = nil,
+      desc = "Presence penalty applied to the next token's logprobs if the token has already been seen in the response",
+    },
+    frequencyPenalty = {
+      order = 7,
+      mapping = "body.generationConfig",
+      type = "number",
+      optional = true,
+      default = nil,
+      desc = "Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far.",
```
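Each entry above carries a `mapping` such as `body.generationConfig`, indicating where its value belongs in the outgoing request. A minimal sketch of how such dotted mappings could be expanded into a nested request table follows; `build_params` is a hypothetical helper written for illustration, not the plugin's actual extraction code:

```lua
-- Hypothetical helper: place validated schema values into a nested request
-- table by walking each entry's dotted `mapping` path.
local function build_params(schema, values)
  local request = {}
  for name, entry in pairs(schema) do
    local value = values[name]
    if value == nil then value = entry.default end
    if value ~= nil then
      if entry.validate then
        assert(entry.validate(value))
      end
      -- Walk the dotted mapping path, creating tables as needed
      local node = request
      for key in string.gmatch(entry.mapping, "[^%.]+") do
        node[key] = node[key] or {}
        node = node[key]
      end
      node[name] = value
    end
  end
  return request
end

local schema = {
  temperature = { mapping = "body.generationConfig", default = nil },
  topK = {
    mapping = "body.generationConfig",
    validate = function(n) return n > 0, "Must be greater than 0" end,
  },
}

local req = build_params(schema, { topK = 40, temperature = 0.9 })
-- req.body.generationConfig now holds { topK = 40, temperature = 0.9 }
```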