Let a cheaper LLM fix tool call issues instead of retrying with the same provider #2521
CaliLuke started this conversation in Feature Requests
Replies: 2 comments
-
Great idea
-
Please run a local LLM that does the job
-
Gemini makes a lot of tool-call mistakes, and the constant round-tripping gets expensive. It would be cheaper and faster if a local LLM or a cheaper model like 2.0-flash (or a different LLM such as Claude) handled those errors until the edit succeeds, then handed control back to the original LLM to continue the job.
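The loop being proposed could look roughly like the sketch below. This is a minimal illustration, not a real implementation: `primary_model`, `cheap_model`, and `run_tool_call` are hypothetical stand-ins for provider API wrappers, and the "repair" step just re-asks the cheaper model to fix the malformed call instead of retrying the expensive one.

```python
import json

def primary_model(prompt):
    # Hypothetical expensive model; here it emits a malformed
    # (truncated) tool call to simulate the failure mode.
    return '{"tool": "edit_file", "args": {"path": "main.py"'

def cheap_model(prompt):
    # Hypothetical cheap/local model that repairs the broken call.
    return '{"tool": "edit_file", "args": {"path": "main.py"}}'

def run_tool_call(raw):
    # Validate the tool call; raises json.JSONDecodeError if malformed.
    return json.loads(raw)

def call_with_repair(prompt, max_repairs=3):
    raw = primary_model(prompt)
    for _ in range(max_repairs):
        try:
            # Success: control would return to the primary model here.
            return run_tool_call(raw)
        except json.JSONDecodeError:
            # Route the fix to the cheaper model instead of
            # round-tripping to the expensive provider again.
            raw = cheap_model(f"Fix this malformed tool call:\n{raw}")
    raise RuntimeError("tool call could not be repaired")
```

The key design point is that only the repair loop touches the cheap model; once `run_tool_call` succeeds, the conversation continues with the original model as before.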