Token limit rate problem #328

therenansimoes started this conversation in Ideas

I'd like to suggest a menu option to set a maximum tokens per minute for requests, so the conversation isn't lost to rate limits. Or, if codex gets a rate-limit error, it could just wait instead of crashing and losing the session.
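A minimal sketch of the wait-and-retry idea, in TypeScript. The 429 status code and the Retry-After header are standard rate-limit behavior; sendRequest and the wrapper shape are hypothetical stand-ins, not codex's actual client:

```ts
// Sketch: retry on 429 with backoff instead of crashing the session.
// `sendRequest` is a hypothetical stand-in for however codex issues
// its API calls; only the 429/Retry-After handling is standard.
async function sendWithRetry(
  sendRequest: () => Promise<Response>,
  maxRetries = 5,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await sendRequest();
    if (res.status !== 429 || attempt >= maxRetries) {
      return res; // success, a non-rate-limit error, or retries exhausted
    }
    // Honor the server's Retry-After hint when present;
    // otherwise fall back to exponential backoff (1s, 2s, 4s, ...).
    const retryAfter = Number(res.headers.get("retry-after"));
    const delayMs =
      Number.isFinite(retryAfter) && retryAfter > 0
        ? retryAfter * 1000
        : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

Bounding the retries means a genuinely exhausted quota still surfaces as an error eventually, but a transient limit no longer kills the session.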
Replies: 2 comments

-
Even simpler: add a --ratelimit argument that specifies a usage tier, and have simple internal logic throttle accordingly. https://platform.openai.com/settings/organization/limits Or the limits could even be extracted from the HTTP headers, if I'm not mistaken. https://platform.openai.com/docs/guides/rate-limits#usage-tiers I won't have time to look into this until next weekend, but it seems achievable.
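For reference, a rough sketch of the header-based throttling idea. The x-ratelimit-* response header names are documented in the rate-limits guide linked above; the parsing and throttle shape here are assumptions, not codex internals:

```ts
// Sketch: throttle outgoing requests using the x-ratelimit-* response
// headers. Header names come from the rate-limits guide; everything
// else (types, throttle shape) is an assumed design.
interface TokenBudget {
  remainingTokens: number; // x-ratelimit-remaining-tokens
  resetMs: number; // x-ratelimit-reset-tokens, parsed to milliseconds
}

function readBudget(headers: Headers): TokenBudget | null {
  const remaining = headers.get("x-ratelimit-remaining-tokens");
  const reset = headers.get("x-ratelimit-reset-tokens");
  if (remaining === null || reset === null) return null;
  return { remainingTokens: Number(remaining), resetMs: parseReset(reset) };
}

// Reset values look like "1s" or "6m12s"; convert them to milliseconds.
function parseReset(value: string): number {
  let ms = 0;
  for (const [, num, unit] of value.matchAll(/(\d+(?:\.\d+)?)(ms|s|m|h)/g)) {
    const n = Number(num);
    ms += unit === "h" ? n * 3_600_000
        : unit === "m" ? n * 60_000
        : unit === "s" ? n * 1_000
        : n;
  }
  return ms;
}

// Before sending, wait for the window to reset if the remaining budget
// looks too small for the next request.
async function throttle(budget: TokenBudget | null, estimatedTokens: number) {
  if (budget && budget.remainingTokens < estimatedTokens) {
    await new Promise((resolve) => setTimeout(resolve, budget.resetMs));
  }
}
```

A --ratelimit tier flag could then seed the budget with the tier's published tokens-per-minute cap before the first response arrives, so throttling starts immediately.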
-
Yeah, this needs to be fixed. When working with a big file you literally just waste money.