feat/move-to-function-calling #1141
Conversation
Deploying codecompanion with Cloudflare Pages

- Latest commit: 6bafcaf
- Status: ✅ Deploy successful!
- Preview URL: https://9d7c6d3d.codecompanion.pages.dev
- Branch Preview URL: https://feat-move-to-function-callin.codecompanion.pages.dev
Hi @olimorris. Nice to see that you have started on function calling. From the code, I see that tools no longer support a system prompt. I understand that this keeps the tools super simple and easy to build, but will you consider keeping the system prompt? E.g., in the case of mcphub.nvim's tools: are there any issues with keeping the architecture this way? I know you have to consider wider use cases. Eager to hear your thoughts on this. Thank you
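(A minimal sketch of the kind of per-tool system prompt being discussed; the field names below are illustrative, not codecompanion's actual tool schema:)

```lua
-- Illustrative only: a tool that carries its own system prompt so the
-- tool author can teach the LLM how and when to call it. Field names
-- are hypothetical, not codecompanion's real interface.
local tool = {
  name = "mcp",
  system_prompt = function()
    return "You can access MCP servers through the `mcp` tool. "
      .. "Call it whenever the user asks to interact with an external service."
  end,
}
```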
@ravitemer - none at all. I'll revert those changes later today. For you and @Davidyz, I'm hoping the only changes will be on the schema side of things, and I'm keen to keep it that way. As I mentioned yesterday, this PR will not be merged until you're both happy with it.
Thank you 😃.
From my side, I'd also prefer to have a system prompt (whatever it's called, as long as there's a way for me to write instructions that teach the LLM how to call the tool). Otherwise, I'm quite happy with the existing implementation, so if there are no significant changes to the interface, it should be easy for me to keep up with. To be completely honest, I think the VectorCode codecompanion tool should eventually be deprecated in favour of MCP. I've been receiving feedback from CopilotChat devs, and it looks like they're doing their own tool stuff. It'll be a lot easier if I can just maintain one tool implementation instead of one for each AI plugin. The MCP server is not quite ready though, so I'll keep maintaining the tool for codecompanion and copilotchat for the time being.
This certainly feels like the direction of travel. I can't open a newsletter or browse Reddit without seeing reference to MCP. |
A common pattern in LLM-related frameworks, when implementing function calling, is to leverage type annotations to automatically parse a function's source code into the JSON representation of the function call. With the high level of code introspection offered by Python, plus the syntactic sugar of function decorators, this translates to a "@tool" decorator applied to a type-annotated function. I wonder if something similar is achievable in Lua through lua-language-server (which is already used extensively in this project, judging by the LuaCATS annotations scattered through the code base). This would simply be a parser from annotated Lua code to a JSON/Lua-table LLM tool API:

```mermaid
flowchart LR
    subgraph s1["code2tool"]
        n1["lua function<br>with string annotation<br>(source code)"]
        n2["OpenAI tool repr.<br>(json or lua table)"]
        n5["Anthropic tool repr.<br>(json or lua table)"]
        n6["Other tool repr.<br>(json or lua table)"]
        n1 --> n2
        n1 --> n5
        n1 --> n6
    end
    n2 -- openai adapter --> n7
    n5 -- anthropic adapter --> n7
    n6 -- other adapter --> n7
    n7["codecompanion.nvim"]
```
This would simplify the creation of new tools: they'd just be Lua functions with annotations, as in the sketch below.
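A minimal sketch of the idea, assuming a hypothetical `code2tool` parser; the annotated function and the OpenAI-format tool table it might emit are both illustrative:

```lua
---Search the indexed codebase for files relevant to a query.
---@param query string The natural-language search query
---@param count number Maximum number of documents to return
local function vectorcode_query(query, count)
  -- ... the actual search implementation would go here ...
end

-- What a hypothetical code2tool parser could emit from the LuaCATS
-- annotations above, in OpenAI's function-calling format:
local tool = {
  type = "function",
  ["function"] = {
    name = "vectorcode_query",
    description = "Search the indexed codebase for files relevant to a query.",
    parameters = {
      type = "object",
      properties = {
        query = { type = "string", description = "The natural-language search query" },
        count = { type = "number", description = "Maximum number of documents to return" },
      },
      required = { "query", "count" },
    },
  },
}
```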
@S1M0N38 thanks for sharing those links. My plan is to get OpenAI working first, executing the tools as expected, ensure I've got test coverage etc. Then I'll investigate the other adapters. I'm hoping that I can have a transformation class that can map between the adapters, although Gemini's looks sufficiently different that this may be troublesome. I'll hopefully know early next week.
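As a rough sketch of such a transformation (assuming OpenAI's `{type, function = {name, description, parameters}}` shape and Anthropic's `{name, description, input_schema}` shape; this is not codecompanion's actual implementation):

```lua
-- Map an OpenAI-format tool definition to Anthropic's format.
-- Anthropic uses a flat table with "input_schema" where OpenAI
-- nests the JSON schema under function.parameters.
local function openai_to_anthropic(tool)
  local fn = tool["function"]
  return {
    name = fn.name,
    description = fn.description,
    input_schema = fn.parameters,
  }
end
```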
Yep, and I've merged the
I've done the PR for the function-calling VectorCode tool and I'm looking into openai-compatible adapters, and it looks like there's no `tools` being sent. In the chat buffer I have:

```
## Me
> Context:
> - <tool>vectorcode</tool>

@vectorcode reranker implementation in this project?
```

but in the request body there's no `tools` field:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct-128K",
  "stream_options": { "include_usage": true },
  "stream": true,
  "messages": [
    // some messages here
  ]
}
```
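(For reference, a minimal sketch of what an OpenAI-compatible request body looks like once tools are included; the names and values below are illustrative, shown as the Lua table an adapter would serialise:)

```lua
-- Illustrative request body carrying a tool definition. The "tools"
-- array is the field missing from the request above.
local body = {
  model = "Qwen/Qwen2.5-72B-Instruct-128K",
  stream = true,
  stream_options = { include_usage = true },
  messages = {
    { role = "user", content = "reranker implementation in this project?" },
  },
  tools = {
    {
      type = "function",
      ["function"] = {
        name = "vectorcode",
        description = "Query the VectorCode index for relevant files.",
        parameters = { type = "object", properties = {} },
      },
    },
  },
}
```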
Should be good now |
Sadly no, but a patch from Gemini did the trick: with that patch, gemini 2.0 flash exp on OpenRouter works flawlessly.
Ah so dumb of me. Included that in the latest commit.
So the provider isn't OpenAI-compatible when it comes to tools? Or is there an issue in how we're putting the tool calls back in the messages request?
There was an extra
EDIT: deepseek from the same provider worked fine, so it's just qwen2.5 being dumb. It'll be nice to have the aforementioned fail-safe for when the tool output can't be decoded, though. Btw, with the outcome we're getting from this PR, I'll merge Davidyz/VectorCode#59 by tomorrow. It'll work for both function-calling and XML-based tools, depending on
Okay and when you say "froze the chat", I assume the chat buffer wasn't unlocked, ready for you to add another user prompt? I must have run 100+ tests now and I haven't had a model produce incorrect JSON (I thought they were fine-tuned to be able to call functions ffs 😆). So, I'll make sure we let the LLM know of the error in the chat. |
It's stuck here, with the buffer locked and no
I've addressed this now. If the JSON can't be decoded then an error's thrown, the chat buffer's updated and the lock is released. I'll handle any additional errors that arise as users raise them after the merge. |
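A minimal sketch of that kind of fail-safe, assuming Neovim's `vim.json.decode`; the `chat` methods used here are hypothetical, not the plugin's real API:

```lua
-- Defensively decode the LLM's tool-call arguments. On malformed JSON,
-- report the error into the chat and release the buffer lock instead
-- of leaving the chat frozen.
local function decode_tool_args(chat, raw_args)
  local ok, args = pcall(vim.json.decode, raw_args)
  if not ok then
    chat:add_message({
      role = "user",
      content = "Tool call error: the arguments were not valid JSON.",
    })
    chat:unlock()
    return nil
  end
  return args
end
```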
🚀 |
Description
This PR moves the plugin to native function calling (sometimes called tool use) in the adapters listed in the checklist section below. This will result in much more accurate tool usage for those adapters and likely more useful responses from models.
It also moves the plugin away from the XML-schema implementation that I built c. 12 months ago. The impact of this is that adapters that aren't listed below will no longer have access to tools within CodeCompanion.
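For context, "native" here means the model returns structured tool calls in its response rather than emitting XML inside its text. A sketch of an OpenAI-style tool call as it appears in an assistant message (values illustrative, shown as a Lua table):

```lua
-- An OpenAI-style native tool call: structured data in the assistant
-- message, rather than an XML block the plugin has to parse out of text.
local assistant_message = {
  role = "assistant",
  tool_calls = {
    {
      id = "call_123", -- illustrative id
      type = "function",
      ["function"] = {
        name = "vectorcode",
        arguments = '{"query": "reranker", "count": 5}', -- JSON-encoded string
      },
    },
  },
}
```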
As a solo maintainer, I can't support two different versions of a tools implementation at the same time, so I will be asking those affected by this PR to use a separate branch. I will keep the branch live and open to PRs from the community that add features from the main branch, keeping the two in sync (somewhat).
This PR is scoped to just adding function calling for the adapters below, although this is subject to change.
Checklist
Function calling added for:
- Ollama