feat(preprod): Hook size analysis detector to diff #108209
chromy wants to merge 1 commit into chromy/2026-03-12-add-grouptype from
Conversation
This PR has a migration; here is the generated SQL:

```sql
--
-- Create model SizeAnalysisSubscription
--
CREATE TABLE "sentry_sizeanalysissubscription" (
    "id" bigint NOT NULL PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    "date_updated" timestamp with time zone NOT NULL,
    "date_added" timestamp with time zone NOT NULL,
    "project_id" bigint NOT NULL
);
ALTER TABLE "sentry_sizeanalysissubscription" ADD CONSTRAINT "sentry_sizeanalysiss_project_id_41e3355e_fk_sentry_pr" FOREIGN KEY ("project_id") REFERENCES "sentry_project" ("id") DEFERRABLE INITIALLY DEFERRED NOT VALID;
ALTER TABLE "sentry_sizeanalysissubscription" VALIDATE CONSTRAINT "sentry_sizeanalysiss_project_id_41e3355e_fk_sentry_pr";
CREATE INDEX CONCURRENTLY "sentry_sizeanalysissubscription_project_id_41e3355e" ON "sentry_sizeanalysissubscription" ("project_id");
```
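Note the lock-friendly migration pattern: the foreign key is added `NOT VALID` and validated in a separate statement, and the index is built `CONCURRENTLY`. For orientation, a model along these lines would generate that table (a sketch with plain Django fields; the PR presumably uses Sentry's own model base classes and `FlexibleForeignKey` rather than these):

```python
from django.db import models

class SizeAnalysisSubscription(models.Model):
    # Sketch inferred from the generated SQL; Sentry's DefaultFieldsModel
    # normally supplies these timestamp columns.
    date_updated = models.DateTimeField(auto_now=True)
    date_added = models.DateTimeField(auto_now_add=True)
    # A ForeignKey gets its own index by default, matching the
    # CREATE INDEX CONCURRENTLY statement above.
    project = models.ForeignKey("sentry.Project", on_delete=models.CASCADE)

    class Meta:
        app_label = "sentry"
        db_table = "sentry_sizeanalysissubscription"
```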
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```python
def validate_data_sources(self, value: list[Any]) -> list[Any]:
    if not value:
        raise serializers.ValidationError("At least one data source is required")
    return value
```
Validation bypass when dataSources field is omitted entirely
Low Severity
The validate_data_sources method enforces "at least one data source is required," but because the data_sources field is declared with required=False, DRF never calls validate_data_sources when the field is omitted entirely from the request. Only an explicitly empty list (dataSources: []) is caught. Omitting dataSources altogether allows creating a detector with no data sources, making it non-functional since no subscriptions or data sources would exist to trigger it.
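A minimal sketch of one way to close the gap (class and field names here are assumptions, not the PR's actual code): DRF's object-level `validate()` runs even when a field is absent from the payload, so the check can live there instead.

```python
from typing import Any
from rest_framework import serializers

class SizeAnalysisDetectorValidator(serializers.Serializer):  # name assumed
    data_sources = serializers.ListField(required=False)

    def validate(self, attrs: dict[str, Any]) -> dict[str, Any]:
        # DRF only invokes validate_<field>() when the field appears in the
        # payload; validate() runs unconditionally, so an omitted dataSources
        # key is rejected here along with an explicitly empty list.
        if not attrs.get("data_sources"):
            raise serializers.ValidationError(
                {"dataSources": "At least one data source is required"}
            )
        return attrs
```

Alternatively, declaring the field with `required=True` would let DRF reject the omission outright.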
```python
extra={
    "subscription_id": subscription.id,
    "detector_count": len(results),
},
```
Parameter base_metric is now unused after refactoring
Low Severity
The base_metric parameter is accepted by both maybe_emit_issues and _maybe_emit_issues but is never read in the new function body. The old code passed it to diff_to_occurrence for artifact metadata, but after the refactoring to use process_data_packet, only head_metric is used (to derive the project). This leaves base_metric as dead code in the function signature and at the call site.
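A sketch of the suggested cleanup, with hypothetical signatures based on the comment (the PR's real parameters may differ): drop `base_metric` from both functions and the call site.

```python
# Hypothetical signatures; only the parameter removal is the point here.
def maybe_emit_issues(head_metric, results):
    _maybe_emit_issues(head_metric, results)

def _maybe_emit_issues(head_metric, results):
    # After the move to process_data_packet(), only head_metric is read,
    # to derive the project; base_metric had become dead code.
    project = head_metric.project
    ...
```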
Force-pushed: a50e1e0 → 5317fb3, 5317fb3 → affc2c2, cb6d43f → 7b2967d, affc2c2 → 258f072, 7b2967d → f5ae972.


Replace direct Kafka occurrence production with the workflow engine's
DataPacket pipeline. maybe_emit_issues() now looks up
SizeAnalysisSubscriptions for the project and feeds size deltas into
process_data_packet(), which resolves linked detectors and evaluates
them instead of using a hardcoded threshold.
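A rough sketch of that flow, assuming only the names the description mentions (`SizeAnalysisSubscription`, `maybe_emit_issues`, `process_data_packet`); the module paths, the `DataPacket` fields, and the delta-building helper are illustrative, not the PR's actual code.

```python
# Module paths and signatures assumed for illustration.
from sentry.workflow_engine.models import DataPacket
from sentry.workflow_engine.processors.data_packet import process_data_packet

def maybe_emit_issues(head_metric, results):
    project = head_metric.project
    size_deltas = build_size_deltas(results)  # hypothetical helper
    for subscription in SizeAnalysisSubscription.objects.filter(project=project):
        packet = DataPacket(source_id=str(subscription.id), packet=size_deltas)
        # The workflow engine resolves the detectors linked to this data
        # source and evaluates their conditions, replacing the previous
        # hardcoded threshold and direct Kafka occurrence production.
        process_data_packet(packet, data_source_type)  # type assumed
```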
PRs:
Design doc