OpenManus with Gemini 2 million token #714
Replies: 18 comments 1 reply
-
How much do you pay?
-
Gemini 2.0 is free. You can create an API key here.
-
Can you show a config example without your real API key? It's not working for me and shows:
-
Can you show which API base URL you used for Gemini?
-
```toml
# Global LLM configuration
# Optional configuration for specific LLM models
```
-
2025-03-10 09:01:08.741 | ERROR | app.llm:ask_tool:256 - Authentication failed. Check API key.
-
if I use
-
This config works for me:
taken from https://ai.google.dev/gemini-api/docs/openai#python
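For reference, a config along these lines would look roughly like the sketch below. This assumes OpenManus reads a `[llm]` section from `config/config.toml` with these field names; the base URL and model name are the ones given in the linked Google docs, and the key is a placeholder:

```toml
# config/config.toml -- sketch of a Gemini setup, not the exact shipped file
[llm]
model = "gemini-2.0-flash"
base_url = "https://generativelanguage.googleapis.com/v1beta/openai/"
api_key = "YOUR_GEMINI_API_KEY"   # create one in Google AI Studio
max_tokens = 4096                 # output token limit, not the context window
temperature = 0.0
```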
-
Look here: https://ai.google.dev/gemini-api/docs/openai#python
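The linked page documents Gemini's OpenAI-compatible endpoint. As a minimal stdlib-only sketch of what that compatibility layer expects (the endpoint URL is from the docs; `build_chat_request` is a hypothetical helper, and nothing is sent unless a `GEMINI_API_KEY` environment variable is set):

```python
import json
import os
import urllib.request

# Gemini's OpenAI-compatible endpoint, per the linked Google docs.
BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"


def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request against the Gemini endpoint."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


if __name__ == "__main__":
    key = os.environ.get("GEMINI_API_KEY")
    if key:  # only perform a real network call when a key is configured
        req = build_chat_request(
            key, "gemini-2.0-flash", [{"role": "user", "content": "Hello"}]
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

In practice you would use the `openai` Python client with `base_url` set to the same endpoint, as the docs show; this sketch just makes the wire format explicit.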
-
Thanks! It's working.
-
But there are other problems:
-
Thanks, I have supplied one.
-
Can we use thinking models as well?
-
Thank you for the config, it works, but it leads to rate-limiting errors. This is the free 1M-token API, so the extra calls can't be coming from the server side, and from the front end I only see one or a few actual requests made, so something is off in that setup.
-
Using the Gemini API will end up with: Validation error in ask_tool: Message must contain either 'content' or 'tool_calls'
-
You can easily bypass this by editing app/agent/toolcall.py: add a while clause (retry loop) before calling the LLM.
It will retry the call until the LLM provides an action to take. Sometimes the LLM is not able to provide the next step, which is what causes the issue.
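The workaround described above can be sketched like this. All names here are illustrative, not the real OpenManus internals: `ask_tool` stands in for whatever function actually calls the LLM inside app/agent/toolcall.py, and the retry cap is an arbitrary choice:

```python
# Sketch of the retry workaround (hypothetical names, not the real OpenManus API).

MAX_RETRIES = 3


def ask_until_actionable(ask_tool, messages, max_retries=MAX_RETRIES):
    """Re-ask the LLM until the response carries 'content' or 'tool_calls'."""
    for _ in range(max_retries):
        response = ask_tool(messages)
        # A usable response must name an action: either text content or tool calls.
        if response.get("content") or response.get("tool_calls"):
            return response
    raise RuntimeError(f"LLM returned no actionable response after {max_retries} tries")
```

A bounded loop is safer than a bare `while True`, since an unconditional retry against a rate-limited free tier could spin forever.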
-
Would this resolve an issue with "Error in PlanningFlow: Parameter 'title' is required for command: create"?
-
If Gemini allows millions of tokens, why keep max_tokens = 4096? Wasn't it supposed to be max_tokens = 1056768?
Beta Was this translation helpful? Give feedback.
-
I managed to use it with Gemini 2.0 with a 2 million token context. Very useful project.