fix: handle support for partial success introduced in dbt 1.9 with microbatching #2041
Conversation
👋 @nicor88
Walkthrough
Added a new enum member, `PARTIAL_SUCCESS`, for the "partial success" alert status and updated the alert filter unit tests to cover it.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/unit/monitor/api/alerts/test_alert_filters.py (1)
855-876: Update assertions to account for the new partial success alert.
With `model_alert_4` added (status `"partial success"`), the default filter now excludes two model alerts: `model_alert_3` (skipped) and `model_alert_4` (partial success). The assertions need updating. Apply this diff:
```diff
     assert len(filter_test_alerts) == len(test_alerts)
-    assert len(filter_model_alerts) == len(model_alerts) - 1  # 1 skipped model alert
+    assert len(filter_model_alerts) == len(model_alerts) - 2  # 1 skipped, 1 partial success

     # Test that passing no filter returns all the alerts.
     filter_test_alerts = filter_alerts(test_alerts)
     filter_model_alerts = filter_alerts(model_alerts)
     assert len(filter_test_alerts) == len(test_alerts)
-    assert len(filter_model_alerts) == len(model_alerts) - 1  # 1 skipped model alert
+    assert len(filter_model_alerts) == len(model_alerts) - 2  # 1 skipped, 1 partial success
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- `elementary/monitor/data_monitoring/schema.py` (1 hunks)
- `tests/unit/monitor/api/alerts/test_alert_filters.py` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unit/monitor/api/alerts/test_alert_filters.py (4)
- `elementary/monitor/fetchers/alerts/schema/pending_alerts.py` (3)
  - `PendingAlertSchema` (35-94), `AlertTypes` (23-26), `AlertStatus` (29-32)
- `elementary/monitor/fetchers/alerts/schema/alert_data.py` (1)
  - `ModelAlertDataSchema` (234-286)
- `elementary/monitor/data_monitoring/schema.py` (4)
  - `ResourceType` (27-30), `FiltersSchema` (146-342), `FilterSchema` (62-126), `FilterType` (33-37)
- `elementary/monitor/api/alerts/alert_filters.py` (1)
  - `filter_alerts` (80-95)
🔇 Additional comments (2)
elementary/monitor/data_monitoring/schema.py (1)
24-24: LGTM, but verify default filter behavior.
The enum addition is correct and follows the existing naming convention. However, note that `PARTIAL_SUCCESS` is not included in the default status filter (lines 137-143), meaning alerts with this status will be excluded by default, similar to `SKIPPED`. Please confirm this is the intended behavior for microbatch partial success statuses.
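For context, here is a minimal sketch of the change and of the default-filter behavior described above. It is not the repository's actual code: the surrounding enum members and the default status set are assumptions drawn from this review.

```python
from enum import Enum


# Hypothetical stand-in for the Status enum in elementary/monitor/data_monitoring/schema.py;
# all members other than PARTIAL_SUCCESS are assumed from this review.
class Status(str, Enum):
    FAIL = "fail"
    WARN = "warn"
    ERROR = "error"
    SKIPPED = "skipped"
    PARTIAL_SUCCESS = "partial success"  # added in this PR for dbt 1.9 microbatch runs


# Assumed default status set (the review notes PARTIAL_SUCCESS is not part of it),
# so "partial success" alerts are dropped unless a status filter explicitly includes them.
DEFAULT_STATUSES = {Status.FAIL, Status.WARN, Status.ERROR}


def passes_default_filter(status: Status) -> bool:
    return status in DEFAULT_STATUSES


assert passes_default_filter(Status.FAIL)
assert not passes_default_filter(Status.SKIPPED)
assert not passes_default_filter(Status.PARTIAL_SUCCESS)  # excluded by default, like SKIPPED
```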
tests/unit/monitor/api/alerts/test_alert_filters.py (1)
235-261: LGTM, but this addition will impact existing tests.
The new alert with `status="partial success"` is correctly structured and serves to test the new `PARTIAL_SUCCESS` status. However, be aware that:
- The default filter excludes `PARTIAL_SUCCESS` (like `SKIPPED`), so tests at lines 863 and 869 will need updating
- If node names are extracted from `model_unique_id`, the test at line 757 might also be affected
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/unit/monitor/api/alerts/test_alert_filters.py (1)
879-894: LGTM! Default filter correctly excludes both skipped and partial success.
The updated logic properly accounts for the new "partial success" status being excluded by default filters, alongside "skipped" status. The comment and assertion changes are accurate and well-documented.
Consider adding an explicit test case that filters by `Status.PARTIAL_SUCCESS` to ensure the new enum value works correctly in positive filtering scenarios, similar to the existing status filter tests (lines 788-824). This would provide comprehensive coverage for the new status value. Example:
```python
filter = FiltersSchema(
    statuses=[StatusFilterSchema(values=[Status.PARTIAL_SUCCESS], type=FilterType.IS)]
)
filter_model_alerts = filter_alerts(model_alerts, filter)
assert len(filter_model_alerts) == 1
assert filter_model_alerts[0].id == "model_alert_4"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- `tests/unit/monitor/api/alerts/test_alert_filters.py` (14 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unit/monitor/api/alerts/test_alert_filters.py (3)
- `elementary/monitor/fetchers/alerts/schema/pending_alerts.py` (3)
  - `PendingAlertSchema` (35-94), `AlertTypes` (23-26), `AlertStatus` (29-32)
- `elementary/monitor/data_monitoring/schema.py` (2)
  - `ResourceType` (27-30), `FiltersSchema` (146-342)
- `elementary/monitor/api/alerts/alert_filters.py` (1)
  - `filter_alerts` (80-95)
🔇 Additional comments (2)
tests/unit/monitor/api/alerts/test_alert_filters.py (2)
235-261: LGTM! Well-structured test fixture for microbatch partial success.
The new `model_alert_4` correctly represents a microbatch incremental model with "partial success" status, providing comprehensive test coverage for the dbt v1.9 feature. All required fields are properly populated, and the distinct owner ("alice") and model_id ("model_id_3") enable effective filtering test scenarios.
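For illustration only, a rough sketch of the kind of data such a fixture might carry. The field names and structure below are assumptions, not the repository's real `PendingAlertSchema`/`ModelAlertDataSchema` shape; only the values come from this review.

```python
# Hypothetical fixture data for the new alert; field names are illustrative.
model_alert_4_data = {
    "id": "model_alert_4",
    "model_unique_id": "model_id_3",
    "status": "partial success",       # new dbt 1.9 microbatch status
    "materialization": "incremental",  # microbatch incremental model
    "tags": ["microbatch"],
    "owners": ["alice"],
}
```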
389-392: LGTM! All test updates correctly account for model_alert_4.
The updated assertions throughout all filter test scenarios are accurate:
- Tag-based filters correctly include/exclude model_alert_4 based on its ["microbatch"] tag
- Owner-based filters correctly handle the new "alice" owner
- Model-based filters correctly recognize "model_id_3"
- Status-based IS_NOT filters correctly include "partial success" as it differs from FAIL/WARN/ERROR (see the sketch at the end of this comment)
- Node name filter correctly matches "model_id_3"
All expected counts and sorted ID lists have been properly adjusted.
Also applies to: 414-416, 439-444, 474-479, 504-508, 540-544, 604-606, 628-632, 696-698, 720-723, 777-778, 837-841
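As a companion to the IS-filter example suggested in the earlier nitpick, here is a sketch of the IS_NOT case referenced in the list above. It mirrors the names used elsewhere in this review (`FiltersSchema`, `StatusFilterSchema`, `FilterType`, `filter_alerts`) and is an assumption about their usage, not the repository's exact API.

```python
# A "status IS_NOT FAIL/WARN/ERROR" filter should keep the partial-success alert,
# since "partial success" differs from all three excluded statuses.
filter = FiltersSchema(
    statuses=[
        StatusFilterSchema(
            values=[Status.FAIL, Status.WARN, Status.ERROR],
            type=FilterType.IS_NOT,
        )
    ]
)
filter_model_alerts = filter_alerts(model_alerts, filter)
assert "model_alert_4" in [alert.id for alert in filter_model_alerts]
```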
Fix #1894
Tweaked unit tests to handle the new model failure.
Remarks
I didn't validate that the above works in a real production environment. However, if I remove the new enum value for `partial success` and run the unit tests changed in this PR, I get exactly the same error that I had in production:

```
FAILED tests/unit/monitor/api/alerts/test_alert_filters.py::test_filter_alerts - ValueError: 'partial success' is not a valid Status
```

Putting the enum value back lets the unit tests run properly.
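That error matches standard Python enum behavior: looking up an enum by a value that has no corresponding member raises `ValueError`. A minimal standalone illustration, using a stand-in enum rather than the project's real `Status` class:

```python
from enum import Enum


# Stand-in for the Status enum before PARTIAL_SUCCESS is added.
class Status(str, Enum):
    FAIL = "fail"
    WARN = "warn"
    ERROR = "error"
    SKIPPED = "skipped"


try:
    Status("partial success")  # value lookup fails without the new member
except ValueError as exc:
    print(exc)  # 'partial success' is not a valid Status
```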
Summary by CodeRabbit
New Features
- Added support for the "partial success" alert status introduced by dbt 1.9 microbatch models.
Tests
- Updated the alert filter unit tests to cover the new status.