Log better error messages for streaming reply from ollama /v1/chat/completions API #14
JarbasAl merged 2 commits into OpenVoiceOS:dev
Conversation
I was getting a cryptic log message `ERROR - choices` in my logs and couldn't figure it out. It turns out ollama was returning errors in a specific format; by adding a few lines here we get a more useful error message: `ERROR - API returned an error: model "gpt-3.5-turbo" not found, try pulling it first`
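For context, a minimal sketch of the kind of handling described above. The `data: ` prefix and the `{"error": {"message": ...}}` payload shape are assumptions inferred from the error message quoted in this PR, not taken from the ollama documentation:

```python
import json

# Hypothetical error line from a streaming response; the exact shape is
# an assumption based on the behaviour described in this PR.
raw = b'data: {"error": {"message": "model \\"gpt-3.5-turbo\\" not found, try pulling it first"}}'

# Strip the SSE "data: " prefix and parse the JSON body.
chunk = json.loads(raw.decode("utf-8").split("data: ", 1)[-1])

# Surface the server-side error instead of a cryptic KeyError later on.
if "error" in chunk and chunk["error"].get("message"):
    print("API returned an error:", chunk["error"]["message"])
```

This turns the opaque `ERROR - choices` failure into a message that names the actual problem reported by the server.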
```python
if chunk:
    chunk = chunk.decode("utf-8")
    chunk = json.loads(chunk.split("data: ", 1)[-1])
    if chunk["error"] and chunk["error"]["message"]:
```
Should be `chunk.get("error")` or `"error" in chunk`, as the key will be missing when things go as expected.
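The suggestion above can be sketched as follows: a plain `chunk["error"]` lookup raises `KeyError` on a normal chunk that has no `"error"` key, while `.get()` returns `None` so the check falls through safely (the chunk dicts here are illustrative, not actual ollama payloads):

```python
# A normal streaming chunk (no "error" key) and an error chunk.
ok_chunk = {"choices": [{"delta": {"content": "hi"}}]}
err_chunk = {"error": {"message": "model not found"}}

for chunk in (ok_chunk, err_chunk):
    # chunk["error"] would raise KeyError on ok_chunk;
    # .get() returns None, so the condition is simply False.
    err = chunk.get("error")
    if err and err.get("message"):
        print("API returned an error:", err["message"])
```

Only the error chunk triggers the log line; the normal chunk passes through untouched.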
Thanks, I am not a Python pro. Posting a revision which I'm hoping will be better; feel free to suggest more improvements.