feat: prevent UI from showing raw JSON when AI response is empty #95
Conversation
- Fixed backend LLM response handling to properly fall back when values are empty
- Replaced `dict.get(key, default)` with value-or-default fallback logic
- Added a shared "no response" message constant on the frontend
- Updated frontend to show a friendly message instead of raw JSON responses
Thanks for the pull request, @Pavilion4ik! This repository is currently maintained by the team listed for this project. Once you've gone through the following steps, feel free to tag them in a comment and let them know that your changes are ready for engineering review.

🔘 Get product approval: If you haven't already, check this list to see if your contribution needs to go through the product review process.
🔘 Provide context: To help your reviewers and other members of the community understand the purpose and larger context of your changes, feel free to add as much of the following information to the PR description as you can.
🔘 Get a green build: If one or more checks are failing, continue working on your changes until this is no longer the case and your build turns green.

Where can I find more information? If you'd like to get more details on all aspects of the review process for open source pull requests (OSPRs), check out the following resources.

When can I expect my changes to be merged? Our goal is to get community contributions seen and reviewed as efficiently as possible. However, the amount of time that it takes to review and merge a PR can vary significantly based on factors such as:

💡 As a result it may take up to several weeks or months to complete a review and merge your PR.
Codecov Report
✅ All modified and coverable lines are covered by tests.
@@ Coverage Diff @@
## main #95 +/- ##
=======================================
Coverage 90.58% 90.58%
=======================================
Files 47 47
Lines 4310 4310
Branches 271 271
=======================================
Hits 3904 3904
Misses 317 317
Partials 89 89
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@Pavilion4ik the PR makes sense. Were you able to consistently reproduce the error? I know it happened for me sometimes, but I could not tell whether it was related to streaming and functions at the same time.
frontend/src/services/constants.js (Outdated)
@@ -0,0 +1 @@
+ export const NO_RESPONSE_MSG = 'The server responded, but no readable content was found. Please try again or contact support if the issue persists.';
For consistency I think we should leave it the same as the backend: "No response available"
Agreed — I’ve updated it to match the backend value.
- Updated NO_RESPONSE_MSG to match the backend default message
To reproduce the error, I mocked the LLM response to return None. This can be done either in the llm_processor or in the orchestrator. Such a situation may occur, for example, when the AI service is busy. There are several cases in which the AI service can return an empty response (e.g., service overload, timeouts, internal model errors, or safety filtering).

The problem is reproducible only in the non-streaming flow. The reason is that all streaming flows always yield strings (either chunks of the AI response or error messages), whereas the non-streaming flow always returns a dictionary (JSON structure).

The issue is caused by incorrect logic when handling dictionary responses from the AI. Specifically, the .get() method only applies the default value when the key is missing, not when the key exists but its value is None. Example:
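The `.get()` behavior can be demonstrated in isolation (the payload shape and key name below are illustrative, not the project's actual structure):

```python
# Hypothetical payload shape; the real structure comes from the LLM service.
llm_payload = {"response": None}  # e.g. overload, timeout, or safety filtering

# .get() applies the default only when the key is MISSING.
value = llm_payload.get("response", "No response available")
print(value)  # None -- the key exists, so the default is ignored

# A genuinely missing key, by contrast, does trigger the default.
print({}.get("response", "No response available"))  # No response available
```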
When this reaches the orchestrator, the previous logic attempted to handle the response using .get() with a default value. However, since the response key exists in the dictionary, the default value is ignored, and None is passed through to the frontend.

As a result, when the frontend receives this JSON, it checks for required keys. If the expected values are None, the frontend falls back to converting the entire JSON object into a string and rendering it in the UI.
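The value-or-default fallback described in the PR can be sketched like this (the function and constant names here are hypothetical, not the repository's actual identifiers):

```python
NO_RESPONSE_DEFAULT = "No response available"

def normalize_llm_response(raw: dict) -> dict:
    # `or` falls back for None and empty strings, not just missing keys,
    # so the frontend never receives a null response value.
    return {"response": raw.get("response") or NO_RESPONSE_DEFAULT}

print(normalize_llm_response({"response": None}))  # {'response': 'No response available'}
print(normalize_llm_response({}))                  # {'response': 'No response available'}
print(normalize_llm_response({"response": "Hi"}))  # {'response': 'Hi'}
```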