gemini 3 support #2405
Replies: 3 comments 7 replies
- Sigh... thanks for sharing this. The sooner we can move to a standardized protocol for LLM comms, the better. Can you give me more background on this? What was your prompt, etc.? I can see that gemini-3 works with Copilot for conversation but not for function calling.
- This has now been added in #2411 👍🏼
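Once that lands, selecting the model should be a small config change. A minimal sketch, assuming the standard codecompanion adapter-extension pattern documented in the plugin's README (exact key names can differ between plugin versions):

```lua
-- Sketch: point the built-in gemini adapter at the new model.
require("codecompanion").setup({
  adapters = {
    gemini = function()
      return require("codecompanion.adapters").extend("gemini", {
        schema = {
          model = {
            default = "gemini-3-pro-preview",
          },
        },
      })
    end,
  },
})
```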
- I know it's a slight tangent, but in the past I was able to basically copy-paste the gemini adapter to make a vertex adapter (here). Sure, there are a couple of small changes to make for the endpoint and the env vars, but nothing too tricky.

  I did the same after your latest changes and sent a request the usual way, but codecompanion doesn't display the response from the model. There is no error in the UI (if I use an endpoint that doesn't exist, I do get a big, unmissable red error). Here I'm not getting any error; see the screenshot. I checked the logs, and this is what I got: I have no tools enabled for this session, and I even disabled the tools in the vertex adapter (in the opts map). If I change the model to gemini-2.5-pro, it works and I get a response.

  Would you have any idea what's happening? If you could point me in the right direction, I could try to fix the problem myself. Right now I'm not sure the problem is caused by codecompanion; I'd be tempted to say the error comes from the API.
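For reference, the copy-the-gemini-adapter approach described above might look roughly like this. This is a sketch only: the URL shape, the env var names (`VERTEX_ACCESS_TOKEN`, `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION`), and the `opts` key are illustrative assumptions, not codecompanion's actual Vertex support.

```lua
-- Sketch of a Vertex adapter built by extending the built-in gemini one.
-- All names below are illustrative; check your codecompanion version's
-- adapter docs for the real keys.
local adapters = require("codecompanion.adapters")

return adapters.extend("gemini", {
  name = "vertex",
  -- Vertex AI serves Gemini from a regional endpoint rather than
  -- generativelanguage.googleapis.com (illustrative URL shape):
  url = "https://${location}-aiplatform.googleapis.com/v1/projects/${project}"
    .. "/locations/${location}/publishers/google/models/${model}:streamGenerateContent",
  env = {
    -- Hypothetical env vars: Vertex normally wants an OAuth access token
    -- (e.g. from `gcloud auth print-access-token`), not an API key.
    api_key = "VERTEX_ACCESS_TOKEN",
    project = "GOOGLE_CLOUD_PROJECT",
    location = "GOOGLE_CLOUD_LOCATION",
  },
  opts = {
    tools = false, -- the "disable tools" experiment mentioned above
  },
})
```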

- When naively trying to use the gemini-3-pro-preview model, it requires a thought_signature; see https://ai.google.dev/gemini-api/docs/thought-signatures. It would be great if that could be added to the plugin. I'd be happy to test out any changes.
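For anyone hitting this, the linked doc boils down to: gemini-3-pro-preview attaches an opaque `thoughtSignature` to parts of its responses (function calls in particular), and the client must send those signatures back verbatim in the conversation history, otherwise function calling fails. A minimal sketch of the history shape, written as a Lua table for illustration (field names follow the public Gemini REST API; the weather tool and all values are made up):

```lua
-- Illustrative conversation history for gemini-3-pro-preview with tools.
-- The point is the thoughtSignature field, which must round-trip unchanged.
local history = {
  { role = "user", parts = { { text = "What's the weather in Paris?" } } },
  {
    role = "model",
    parts = {
      {
        functionCall = { name = "get_weather", args = { city = "Paris" } },
        -- Opaque token returned by the model. Dropping or rewriting it
        -- is what breaks function calling on Gemini 3.
        thoughtSignature = "<opaque token from the previous response>",
      },
    },
  },
  {
    -- Role naming for tool results varies between API surfaces; "user"
    -- matches the Gemini API REST examples.
    role = "user",
    parts = {
      { functionResponse = { name = "get_weather", response = { temp_c = 12 } } },
    },
  },
}
```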