
Reduce compressed stats per operation from 100 -> 20 #610

Merged
hansott merged 1 commit into main from reduce-compressed-stats on May 26, 2025

Reduce compressed stats per operation from 100 -> 20#610
hansott merged 1 commit intomainfrom
reduce-compressed-stats

Conversation

@hansott
Member

@hansott hansott commented May 16, 2025

We keep durations of invocations to measure how long our algo takes to detect attacks. So per `fs.readFile` or `mysql.query`...

When 5000 samples are reached, we compress the durations into percentiles.

100 compressed blocks × 5000 samples = 500K durations

That's a lot per operation. I think we can reduce this to 100K invocations per operation :)

So we only keep 20 compressed percentile blocks per operation.

If we go over 20, we drop the oldest compressed percentiles.

@codecov

codecov bot commented May 16, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅


@hansott hansott merged commit 031b06c into main May 26, 2025
14 checks passed
@hansott hansott deleted the reduce-compressed-stats branch May 26, 2025 12:23