
Commit 309f9dc

Reduce compressed stats per operation from 100 -> 20
We keep durations of invocations to measure how long our algorithm takes to detect attacks, per operation (e.g. `fs.readFile` or `mysql.query`). When 5000 samples are collected, we compress the durations into percentiles.

With 100 compressed stats per operation, that's 100 * 5000 = 500,000 durations per operation. That's a lot. I think we can reduce this to 100,000 invocations per operation :) by keeping only 20 compressed percentile blocks per operation. If we go over 20, we drop the oldest compressed percentiles.
1 parent 192443f commit 309f9dc
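The bounded compression scheme described above can be sketched as follows. This is a minimal illustration, not the actual `InspectionStatistics` implementation; the class and field names here (`OperationStats`, `CompressedBlock`, `onInvocation`) are hypothetical, while the two limits mirror `maxPerfSamplesInMemory: 5000` and `maxCompressedStatsInMemory: 20` from the diff:

```typescript
// Hypothetical sketch of per-operation duration tracking: raw durations are
// collected until maxPerfSamplesInMemory is reached, then compressed into a
// percentile summary. At most maxCompressedStatsInMemory summaries are kept;
// the oldest summary is evicted when the cap is exceeded.

type CompressedBlock = { p50: number; p75: number; p90: number; p99: number };

class OperationStats {
  private samples: number[] = [];
  private compressed: CompressedBlock[] = [];

  constructor(
    private readonly maxPerfSamplesInMemory = 5000,
    private readonly maxCompressedStatsInMemory = 20
  ) {}

  onInvocation(durationMs: number): void {
    this.samples.push(durationMs);
    if (this.samples.length >= this.maxPerfSamplesInMemory) {
      this.compress();
    }
  }

  private percentile(sorted: number[], p: number): number {
    const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
    return sorted[idx];
  }

  private compress(): void {
    const sorted = [...this.samples].sort((a, b) => a - b);
    this.compressed.push({
      p50: this.percentile(sorted, 50),
      p75: this.percentile(sorted, 75),
      p90: this.percentile(sorted, 90),
      p99: this.percentile(sorted, 99),
    });
    this.samples = []; // raw durations are discarded after compression
    // Bound memory: drop the oldest compressed blocks beyond the limit.
    while (this.compressed.length > this.maxCompressedStatsInMemory) {
      this.compressed.shift();
    }
  }

  get blocks(): readonly CompressedBlock[] {
    return this.compressed;
  }
}

// With the new limit, at most 20 * 5000 = 100,000 invocations are summarized.
const stats = new OperationStats(5000, 20);
for (let i = 0; i < 5000 * 25; i++) {
  stats.onInvocation(i % 100);
}
console.log(stats.blocks.length); // capped at 20 despite 25 compressions
```

Dropping the oldest blocks means the summaries always reflect the most recent ~100K invocations, which is what matters for detecting current attack latency.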

File tree

1 file changed: +1, -1


library/agent/Agent.ts

Lines changed: 1 addition & 1 deletion
```diff
@@ -54,7 +54,7 @@ export class Agent {
   private rateLimiter: RateLimiter = new RateLimiter(5000, 120 * 60 * 1000);
   private statistics = new InspectionStatistics({
     maxPerfSamplesInMemory: 5000,
-    maxCompressedStatsInMemory: 100,
+    maxCompressedStatsInMemory: 20, // per operation
   });
   private middlewareInstalled = false;
   private attackLogger = new AttackLogger(1000);
```
