
Conversation

@Pavilion4ik
Contributor

What

  • Fixed an issue where raw JSON responses were shown to the user (affected only the non-streaming flow)
  • Improved backend LLM response handling to properly fallback when response values are empty
  • Replaced dict.get(key, default) with explicit value-or-default logic (see the sketch below)
  • Added a shared frontend constant for no-response scenarios
  • Updated frontend logic to display a user-friendly fallback message

Why

  • The issue affected non-streaming responses only, since streaming mode yields plain text chunks and does not operate on JSON payloads
  • In the non-streaming flow, dict.get(key, default) returned empty values when keys existed but contained no data
  • This caused empty or unreadable LLM responses to reach the frontend and be rendered as raw JSON
  • The change ensures consistent backend behavior and improves UX by always showing a clear fallback message

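A minimal sketch of the value-or-default pattern described above (illustrative only; the key name and the "No response available" fallback text are taken from the discussion later in this thread, not from the actual diff):

# Illustrative sketch, not the project's actual code.
llm_result = {"response": None}  # the key exists, but the value is empty

# dict.get(key, default) falls back only when the key is MISSING,
# so this still evaluates to None:
old_value = llm_result.get("response", "No response available")

# value-or-default also covers None and empty strings:
new_value = llm_result.get("response") or "No response available"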
@openedx-webhooks added the open-source-contribution (PR author is not from Axim or 2U) label on Jan 7, 2026
@openedx-webhooks

Thanks for the pull request, @Pavilion4ik!

This repository is currently maintained by @felipemontoya.

Once you've gone through the following steps feel free to tag them in a comment and let them know that your changes are ready for engineering review.

🔘 Get product approval

If you haven't already, check this list to see if your contribution needs to go through the product review process.

  • If it does, you'll need to submit a product proposal for your contribution, and have it reviewed by the Product Working Group.
    • This process (including the steps you'll need to take) is documented here.
  • If it doesn't, simply proceed with the next step.
🔘 Provide context

To help your reviewers and other members of the community understand the purpose and larger context of your changes, feel free to add as much of the following information to the PR description as you can:

  • Dependencies

    This PR must be merged before / after / at the same time as ...

  • Blockers

    This PR is waiting for OEP-1234 to be accepted.

  • Timeline information

    This PR must be merged by XX date because ...

  • Partner information

    This is for a course on edx.org.

  • Supporting documentation
  • Relevant Open edX discussion forum threads
🔘 Get a green build

If one or more checks are failing, continue working on your changes until this is no longer the case and your build turns green.

Where can I find more information?

If you'd like to get more details on all aspects of the review process for open source pull requests (OSPRs), check out the following resources:

When can I expect my changes to be merged?

Our goal is to get community contributions seen and reviewed as efficiently as possible.

However, the amount of time that it takes to review and merge a PR can vary significantly based on factors such as:

  • The size and impact of the changes that it introduces
  • The need for product review
  • Maintenance status of the parent repository

💡 As a result it may take up to several weeks or months to complete a review and merge your PR.

@codecov

codecov bot commented Jan 7, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 90.58%. Comparing base (a8956e3) to head (8d784a3).

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #95   +/-   ##
=======================================
  Coverage   90.58%   90.58%           
=======================================
  Files          47       47           
  Lines        4310     4310           
  Branches      271      271           
=======================================
  Hits         3904     3904           
  Misses        317      317           
  Partials       89       89           
Flag Coverage Δ
unittests 90.58% <ø> (ø)


@mphilbrick211 moved this from Needs Triage to Ready for Review in Contributions on Jan 7, 2026
@felipemontoya self-assigned this on Jan 9, 2026
@felipemontoya
Member

@Pavilion4ik the PR makes sense. Were you able to consistently reproduce the error? I know it happened for me sometimes, but I could not tell whether it was related to using streaming and functions at the same time.

@@ -0,0 +1 @@
export const NO_RESPONSE_MSG = 'The server responded, but no readable content was found. Please try again or contact support if the issue persists.';
Member


For consistency I think we should leave it the same as the backend: "No response available"

Contributor Author


Agreed — I’ve updated it to match the backend value.

Pavilion4ik and others added 2 commits on January 9, 2026 at 18:22
@Pavilion4ik
Contributor Author

To reproduce the error, I mocked the LLM response to return None; this can be done either in the llm_processor or in the orchestrator. The AI service can return an empty response in several real situations, for example service overload, timeouts, internal model errors, or safety filtering.

The problem is reproducible only in the non-streaming flow. The reason is that the streaming flows always yield strings (either chunks of the AI response or error messages), whereas the non-streaming flow returns a dictionary (a JSON structure).
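
A minimal sketch of that shape difference (illustrative only; the function names here are invented, not the project's actual API):

# Illustrative sketch; names are invented for the example.
def stream_flow():
    # Streaming mode always yields plain text chunks (or error strings),
    # so there is no JSON payload to mis-handle.
    yield "partial answer..."

def non_stream_flow():
    # Non-streaming mode returns a JSON-style dict that the caller inspects.
    return {"response": None, "status": "success"}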

The issue is caused by incorrect logic when handling dictionary responses from the AI. Specifically, the .get() method only applies the default value when the key is missing, not when the key exists but its value is None.

Example:

content = response.choices[0].message.content
Here, content can be None, and we then return:

return {
    "response": content,
    "tokens_used": total_tokens,
    "model_used": self.provider,
    "status": "success",
}

When this reaches the orchestrator, the previous logic attempted to handle the response using .get() with a default value. However, since the response key exists in the dictionary, the default value is ignored, and None is passed through to the frontend:

response_data = {
    'response': llm_result.get('response', 'No response available'),
    'status': 'completed',
    'metadata': {
        'tokens_used': llm_result.get('tokens_used'),
        'model_used': llm_result.get('model_used')
    }
}
return response_data

As a result, when the frontend receives this JSON, it checks for required keys. If the expected values are None, the frontend falls back to converting the entire JSON object into a string and renders it in the UI:

setResponse(JSON.stringify(data, null, 2));
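
For completeness, here is a minimal sketch of the value-or-default handling that the fix moves to (illustrative only: the helper name build_response_data is invented for this example, and the actual diff may differ in details):

NO_RESPONSE_MSG = 'No response available'

def build_response_data(llm_result: dict) -> dict:
    # Fall back whenever the value is missing, None, or an empty string,
    # not only when the key is absent.
    return {
        'response': llm_result.get('response') or NO_RESPONSE_MSG,
        'status': 'completed',
        'metadata': {
            'tokens_used': llm_result.get('tokens_used'),
            'model_used': llm_result.get('model_used'),
        },
    }

On the frontend, the shared constant added in this PR was updated to carry the same "No response available" text (see the review thread above), so the fallback message stays consistent end to end.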

