File: develop-docs/sdk/telemetry/telemetry-buffer/backend-telemetry-buffer.mdx (23 additions, 11 deletions)
```diff
@@ -64,7 +64,8 @@ Introduce a `Buffer` layer between the `Client` and the `Transport`. This `Buffe
 #### How the Buffer works
 
 - **Smart batching**: Logs are batched into single requests; errors, transactions, and monitors are sent immediately.
-- **Pre-send rate limiting**: The scheduler checks rate limits before dispatching, avoiding unnecessary requests while keeping items buffered.
+- **Pre-send rate limiting**: The scheduler checks rate limits before serialization to avoid unnecessary processing. When a telemetry category is rate-limited, the selected batch should
+be dropped to avoid filling up the buffers.
 - **Category isolation**: Separate ring buffers for each telemetry type prevent head-of-line blocking.
 - **Weighted scheduling**: High-priority telemetry gets sent more frequently via round-robin selection.
 - **Transport compatibility**: Works with existing HTTP transport implementations without modification.
```
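The weighted round-robin selection mentioned above can be illustrated with a short sketch. This is not the SDK's implementation; the category names and weights are illustrative assumptions, showing only how weights translate into how often each category is picked per scheduling cycle.

```go
package main

import "fmt"

// weights maps each telemetry category to a scheduling weight.
// Higher-weight categories are selected more often per cycle.
// These values are illustrative, not taken from the SDK.
var weights = map[string]int{
	"error":       4,
	"monitor":     4,
	"transaction": 2,
	"log":         1,
}

// buildSchedule expands the weight map into a flat round-robin
// order: a category with weight 4 appears 4 times per cycle.
func buildSchedule(w map[string]int, order []string) []string {
	var schedule []string
	for _, cat := range order {
		for i := 0; i < w[cat]; i++ {
			schedule = append(schedule, cat)
		}
	}
	return schedule
}

func main() {
	order := []string{"error", "monitor", "transaction", "log"}
	s := buildSchedule(weights, order)
	fmt.Println(len(s))            // 11 slots per cycle (4+4+2+1)
	fmt.Println(s[0], s[len(s)-1]) // error log
}
```

With these assumed weights, errors are offered four sending opportunities for every one that logs receive, which is the head-of-line-avoidance property the bullets describe.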
```diff
@@ -136,11 +137,20 @@ The scheduler runs as a background worker, coordinating the flow of telemetry fr
 #### Transport
 
-The transport layer handles HTTP communication with Sentry's ingestion endpoints:
+The transport layer handles HTTP communication with Sentry's ingestion endpoints.
+
+<Alert level="info">
+
+The only layer responsible for dropping events is the Buffer. If the transport is full, the Buffer should drop the batch.
+
+</Alert>
 
 ### Configuration
 
-#### Buffer Options
+#### Transport Options
+
+- **Capacity**: 1000 items.
+
+#### Telemetry Buffer Options
 - **Capacity**: 100 items for errors and check-ins, 10*BATCH_SIZE for logs, 1000 for transactions.
 - **Overflow policy**: `drop_oldest`.
 - **Batch size**: 1 for errors and monitors (immediate send), 100 for logs.
```
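The `drop_oldest` overflow policy from the options above can be sketched as a minimal bounded buffer. The type and field names here are illustrative, not the SDK's actual types; the sketch only demonstrates that when capacity is reached, the oldest item is evicted rather than the newest one being rejected.

```go
package main

import "fmt"

// RingBuffer is a minimal sketch of a per-category buffer with
// the drop_oldest overflow policy. Names are illustrative.
type RingBuffer struct {
	items    []string
	capacity int
	dropped  int // count of items evicted due to overflow
}

// Add appends an item; when the buffer is full, the oldest
// item is evicted first (drop_oldest).
func (r *RingBuffer) Add(item string) {
	if len(r.items) == r.capacity {
		r.items = r.items[1:] // evict oldest
		r.dropped++
	}
	r.items = append(r.items, item)
}

func main() {
	buf := &RingBuffer{capacity: 3}
	for _, it := range []string{"a", "b", "c", "d", "e"} {
		buf.Add(it)
	}
	fmt.Println(buf.items, buf.dropped) // [c d e] 2
}
```

Under this policy a burst of telemetry keeps the newest items, which suits a buffer whose consumer (the scheduler) may lag behind producers.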
```diff
@@ -258,13 +268,15 @@ func (s *Scheduler) processNextBatch() {
 	// Find ready buffer for this priority
 	for category, buffer := range s.buffers {
-		if buffer.Priority() == priority &&
-			!s.transport.IsRateLimited(category) &&
-			buffer.IsReadyToFlush() {
-			items := buffer.PollIfReady()
-			s.sendItems(category, items)
-			// only process one batch per tick
-			break
+		if buffer.Priority() == priority && buffer.IsReadyToFlush() {
+			items := buffer.PollIfReady()
+			if s.transport.IsRateLimited(category) {
+				// drop the batch and return
+				return
+			}
+			s.sendItems(category, items)
+			// only process one batch per tick
+			break
 		}
 	}
 }
```
````diff
@@ -276,7 +288,7 @@ func (s *Scheduler) processNextBatch() {
 ```go
 func (s *Scheduler) flush() {
 	// should process all store buffers and send to transport
````
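The `flush()` comment above says all buffers should be drained to the transport. A minimal sketch of that behavior, assuming hypothetical `buffer` and `send` shapes rather than the SDK's real types, might look like:

```go
package main

import "fmt"

// buffer is a stand-in for a per-category telemetry buffer.
type buffer struct{ items []string }

// drain removes and returns all buffered items.
func (b *buffer) drain() []string {
	out := b.items
	b.items = nil
	return out
}

// flush drains every buffer and hands each non-empty batch to the
// transport via send, bypassing the normal scheduling cadence.
func flush(buffers map[string]*buffer, send func(category string, items []string)) {
	for category, b := range buffers {
		if items := b.drain(); len(items) > 0 {
			send(category, items)
		}
	}
}

func main() {
	buffers := map[string]*buffer{
		"error": {items: []string{"e1"}},
		"log":   {items: []string{"l1", "l2"}},
	}
	sent := 0
	flush(buffers, func(cat string, items []string) { sent += len(items) })
	fmt.Println(sent) // 3
}
```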