
fix: send Content-Type: application/json when calling inference service#25

Merged
JiriPapousek merged 1 commit into RedHatInsights:main from JiriPapousek:fix-inference-request
Feb 27, 2026

Conversation

@JiriPapousek
Contributor

Additional Context

Add Content-Type: application/json to upgrades-inference requests, since the header became required by default with the move to a newer version of FastAPI. This is blocking the automatic version bump in RedHatInsights/ccx-upgrades-inference#22

Explanation from Claude:

FastAPI 0.133.1 (with newer starlette) is stricter about requiring Content-Type: application/json for JSON body parsing

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Bump-up dependent library (no changes in the code)

Testing steps

NA

Checklist

  • pre-commit run --all passes
  • updated documentation wherever necessary
  • added or modified tests if necessary
  • updated schemas and validators in insights-data-schemas in case of input/output change

@coderabbitai

coderabbitai bot commented Feb 27, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 79e70b6 and cdf373e.

📒 Files selected for processing (1)
  • ccx_upgrades_data_eng/inference.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • ccx_upgrades_data_eng/inference.py

Summary by CodeRabbit

Release Notes

  • Refactor
    • Improved how inference requests serialize and send JSON payloads, switching to a native JSON structure for more reliable transmission. No changes to request method, timeouts, status handling, or downstream processing—behavior remains consistent while reducing serialization fragility.

Walkthrough

Modified the inference request to send a JSON-serializable dictionary via json=risk_predictors.model_dump() instead of a pre-stringified payload data=risk_predictors.json(), preserving HTTP method, timeout, and downstream status handling.

Changes

Cohort / File(s): Inference Request Serialization (ccx_upgrades_data_eng/inference.py)
Summary: Replaced data=risk_predictors.json() with json=risk_predictors.model_dump() to send a JSON-serializable dictionary payload instead of a JSON string.
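A minimal sketch of why this switch also fixes the missing header: with requests, a pre-stringified data= body is sent verbatim with no Content-Type, while json= serializes the dict and sets application/json automatically. The endpoint URL and payload dict below are hypothetical stand-ins for the real inference_endpoint and risk_predictors.model_dump().

```python
import json

import requests

# Hypothetical stand-in for risk_predictors.model_dump() in
# ccx_upgrades_data_eng/inference.py.
payload = {"cluster_id": "00000000-0000-0000-0000-000000000000"}

# Old style: a pre-stringified body. requests sends it verbatim and
# sets no Content-Type header, which newer FastAPI/Starlette rejects.
old = requests.Request(
    "GET", "http://inference.example/predict", data=json.dumps(payload)
).prepare()
print(old.headers.get("Content-Type"))  # None

# New style: a dict passed via json=. requests serializes it and adds
# Content-Type: application/json automatically.
new = requests.Request(
    "GET", "http://inference.example/predict", json=payload
).prepare()
print(new.headers.get("Content-Type"))  # application/json
```

Building the PreparedRequest without sending it makes the header difference visible without any network call.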

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks: ✅ 3 passed
  • Title check ✅ Passed: The title accurately describes the main change: adding the Content-Type: application/json header to inference service requests.
  • Description check ✅ Passed: The description covers the required template sections, including Additional Context, Type of change, and Testing steps, with clear explanations.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ccx_upgrades_data_eng/inference.py`:
- Line 29: The requests.get call assigned to inference_response is misformatted
for Black; reformat the call to requests.get(inference_endpoint,
json=risk_predictors.model_dump(), timeout=5) by breaking arguments onto
separate lines (one argument per line and trailing comma) so Black will accept
it — locate the call using the inference_response variable name and the
requests.get invocation with inference_endpoint and risk_predictors.model_dump()
and update its formatting accordingly.
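The Black-friendly layout the review asks for can be sketched as below. The function wrapper and the payload parameter are illustrative additions; inference_endpoint, the json= keyword, and timeout=5 come from the review comment, and the real code passes risk_predictors.model_dump() as the payload.

```python
import requests


def fetch_inference(inference_endpoint: str, payload: dict) -> requests.Response:
    """Call the inference service, with one argument per line as Black expects."""
    inference_response = requests.get(
        inference_endpoint,
        json=payload,  # dict payload; requests adds Content-Type: application/json
        timeout=5,
    )
    return inference_response
```

The trailing comma after timeout=5 is Black's "magic trailing comma": it tells Black to keep the call exploded onto one argument per line instead of collapsing it.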

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between ee18e92 and 79e70b6.

📒 Files selected for processing (1)
  • ccx_upgrades_data_eng/inference.py

@JiriPapousek JiriPapousek merged commit 708cee9 into RedHatInsights:main Feb 27, 2026
8 checks passed

3 participants