fix: "commits where fail" number calculation #354
Codecov Report

❌ Your patch status has failed because the patch coverage (70.58%) is below the target coverage (90.00%). You can increase the patch coverage or adjust the target coverage.

@@            Coverage Diff             @@
##             main     #354      +/-   ##
==========================================
- Coverage   94.17%   94.17%   -0.01%
==========================================
  Files        1270     1272       +2
  Lines       47088    47102      +14
  Branches     1497     1497
==========================================
+ Hits        44344    44356      +12
- Misses       2438     2440       +2
  Partials      306      306
CodSpeed Performance Report: merging #354 will not alter performance.
Force-pushed from 63e52d5 to 37e6ae9.
Resolved review comment (outdated) on ...red/django_apps/ta_timeseries/migrations/0028_branchtestaggregatedaily_testaggregatedaily.py
Force-pushed from 37e6ae9 to 872724d.
Resolved review comment (outdated) on ...red/django_apps/ta_timeseries/migrations/0028_branchtestaggregatedaily_testaggregatedaily.py
Force-pushed from fb3963b to fdb706f.
Force-pushed from 4752941 to e8d4189.
I made a mistake in a previous migration: I wanted to create an index using TimescaleDB's transaction_per_chunk option, which is its alternative to CREATE INDEX CONCURRENTLY. Since we let Django manage the hypertables, I created the index on the model, but Django's migration state was never updated. So I'm introducing ts_create_index_on_managed_hypertable to avoid this situation in the future, along with a migration that fixes the earlier mistake.
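The SQL side of the helper described above could be sketched roughly as below. The name ts_create_index_on_managed_hypertable comes from the PR itself, but this SQL-building function, its name, and its signature are illustrative assumptions, not the PR's actual code; in a real Django migration the generated SQL would be paired with a state-only AddIndex (e.g. via SeparateDatabaseAndState) so Django's migration state stays in sync with the database.

```python
def ts_create_index_sql(table: str, index_name: str, columns: list[str]) -> str:
    """Hypothetical sketch: build a CREATE INDEX statement using
    TimescaleDB's transaction_per_chunk option, Timescale's
    alternative to CREATE INDEX CONCURRENTLY for hypertables."""
    cols = ", ".join(columns)
    return (
        f"CREATE INDEX {index_name} ON {table} ({cols}) "
        f"WITH (timescaledb.transaction_per_chunk)"
    )
```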
This contains the update to the application logic that uses the new CAs we defined in the previous commit. We're also improving the filtering of skipped tests by using information from the CAs, which lets us avoid more expensive queries on every API call. We also use a window function to calculate slow tests instead of subqueries. Finally, I chose to refactor the code into modules, one per type of data, plus a utils module for code reuse.
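To illustrate the window-function approach to slow tests, here is a minimal pure-Python sketch of the ranking logic. The function name, the 5% threshold, and the "rank by average duration" rule are assumptions for illustration, not the PR's implementation; in SQL this corresponds to ranking rows with ROW_NUMBER() or PERCENT_RANK() over duration in a single pass, instead of running a correlated subquery per test.

```python
def slow_tests(durations: dict[str, float], slow_pct: float = 0.05) -> set[str]:
    # Rank tests by average duration, descending, the way a window
    # function like ROW_NUMBER() OVER (ORDER BY avg_duration DESC)
    # would, then flag the top slow_pct fraction as "slow".
    ranked = sorted(durations, key=durations.get, reverse=True)
    cutoff = max(1, int(len(ranked) * slow_pct))  # always at least one
    return set(ranked[:cutoff])
```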
e8d4189
to
e04b4b6
Compare
LGTM
depends on #427
We have to store the individual commit SHAs in the aggregate to get an accurate number; otherwise we might double count and inflate it. So we redefine the CAs to use an array_agg(distinct ...), then use our custom array_merge_dedup aggregate function in the query that runs against the CAs. That gives us a list of all the commit SHAs where there were failures; we take its cardinality and coalesce with 0 in case a NULL appeared at any point of the computation.
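The counting logic above can be sketched in Python. merge_dedup and commits_where_fail are illustrative names, not the PR's code; they mirror the SQL steps named in the comment: array_agg(distinct ...) per bucket, the custom array_merge_dedup aggregate across buckets, then coalesce(cardinality(...), 0).

```python
def merge_dedup(arrays):
    # Mimic the custom array_merge_dedup aggregate: merge per-bucket
    # arrays of commit SHAs, dropping duplicates across buckets.
    merged: set = set()
    for arr in arrays:
        if arr is not None:  # a NULL array contributes nothing
            merged.update(arr)
    return merged or None  # empty result behaves like SQL NULL

def commits_where_fail(arrays):
    # cardinality(...) of the merged array, COALESCE'd to 0 when the
    # whole computation yields NULL (e.g. no failing commits at all).
    merged = merge_dedup(arrays)
    return 0 if merged is None else len(merged)
```

Deduplicating across buckets is the point: a commit that fails in two daily buckets appears in both arrays, and counting bucket rows directly would count it twice.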