changefeedccl: gate Kafka v2 message too large error detail behind
cluster setting
A recent change added detailed logging for Kafka v2 changefeed
messages that exceed the broker's size limit. These logs now
include the message key, size, and MVCC timestamp to aid in
debugging.
To make this safe for backporting, the behavior is now gated behind
the cluster setting:
changefeed.kafka_v2_error_details.enabled
In the main branch, this setting defaults to true to preserve the
enhanced observability. In release branch backports, it will default
to false.
When enabled, the log will include:
- The key of the offending message
- Combined key + value size
- MVCC timestamp
When disabled, the log reverts to the previous, minimal format.
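The gating described above can be sketched as follows. This is a hypothetical illustration of the pattern, not CockroachDB's actual implementation: the function name `formatTooLargeErr` and the exact message format are invented for the example, with the boolean standing in for the cluster setting's current value.

```go
package main

import "fmt"

// formatTooLargeErr sketches how the error detail could be gated behind a
// boolean setting: the minimal message is always produced, and the key,
// combined key+value size, and MVCC timestamp are appended only when the
// setting is enabled. (Hypothetical; not the actual changefeedccl code.)
func formatTooLargeErr(includeDetails bool, key string, keyValSize int, mvcc string) string {
	const base = "kafka server: message was too large"
	if !includeDetails {
		// Previous, minimal format (the default on release-branch backports).
		return base
	}
	// Enhanced format (the default on the main branch).
	return fmt.Sprintf("%s (key=%s, size=%d bytes, mvcc=%s)", base, key, keyValSize, mvcc)
}

func main() {
	fmt.Println(formatTooLargeErr(true, "[42]", 2097152, "1712345678.000000001,0"))
	fmt.Println(formatTooLargeErr(false, "[42]", 2097152, "1712345678.000000001,0"))
}
```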
Release note (general change): Kafka v2 changefeed sinks now support
a cluster setting that enables detailed error logging for messages
exceeding the Kafka message size limit.
docs/generated/settings/settings-for-tenants.txt (1 addition, 0 deletions)
@@ -18,6 +18,7 @@ changefeed.event_consumer_worker_queue_size integer 16 if changefeed.event_consu
 changefeed.event_consumer_workers integer 0 the number of workers to use when processing events: <0 disables, 0 assigns a reasonable default, >0 assigns the setting value. for experimental/core changefeeds and changefeeds using parquet format, this is disabled application
 changefeed.fast_gzip.enabled boolean true use fast gzip implementation application
 changefeed.span_checkpoint.lag_threshold (alias: changefeed.frontier_highwater_lag_checkpoint_threshold) duration 10m0s the amount of time a changefeed's lagging (slowest) spans must lag behind its leading (fastest) spans before a span-level checkpoint to save leading span progress is written; if 0, span-level checkpoints due to lagging spans is disabled application
+changefeed.kafka_v2_error_details.enabled boolean true if enabled, Kafka v2 sinks will include the message key, size, and MVCC timestamp in message too large errors application
 changefeed.memory.per_changefeed_limit byte size 512 MiB controls amount of data that can be buffered per changefeed application
 changefeed.resolved_timestamp.min_update_interval (alias: changefeed.min_highwater_advance) duration 0s minimum amount of time that must have elapsed since the last time a changefeed's resolved timestamp was updated before it is eligible to be updated again; default of 0 means no minimum interval is enforced but updating will still be limited by the average time it takes to checkpoint progress application
 changefeed.node_throttle_config string specifies node level throttling configuration for all changefeeeds application
docs/generated/settings/settings.html (1 addition, 0 deletions)
@@ -23,6 +23,7 @@
 <tr><td><div id="setting-changefeed-event-consumer-workers" class="anchored"><code>changefeed.event_consumer_workers</code></div></td><td>integer</td><td><code>0</code></td><td>the number of workers to use when processing events: <0 disables, 0 assigns a reasonable default, >0 assigns the setting value. for experimental/core changefeeds and changefeeds using parquet format, this is disabled</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
 <tr><td><div id="setting-changefeed-fast-gzip-enabled" class="anchored"><code>changefeed.fast_gzip.enabled</code></div></td><td>boolean</td><td><code>true</code></td><td>use fast gzip implementation</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
 <tr><td><div id="setting-changefeed-frontier-highwater-lag-checkpoint-threshold" class="anchored"><code>changefeed.span_checkpoint.lag_threshold<br/>(alias: changefeed.frontier_highwater_lag_checkpoint_threshold)</code></div></td><td>duration</td><td><code>10m0s</code></td><td>the amount of time a changefeed's lagging (slowest) spans must lag behind its leading (fastest) spans before a span-level checkpoint to save leading span progress is written; if 0, span-level checkpoints due to lagging spans is disabled</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
+<tr><td><div id="setting-changefeed-kafka-v2-error-details-enabled" class="anchored"><code>changefeed.kafka_v2_error_details.enabled</code></div></td><td>boolean</td><td><code>true</code></td><td>if enabled, Kafka v2 sinks will include the message key, size, and MVCC timestamp in message too large errors</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
 <tr><td><div id="setting-changefeed-memory-per-changefeed-limit" class="anchored"><code>changefeed.memory.per_changefeed_limit</code></div></td><td>byte size</td><td><code>512 MiB</code></td><td>controls amount of data that can be buffered per changefeed</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
 <tr><td><div id="setting-changefeed-min-highwater-advance" class="anchored"><code>changefeed.resolved_timestamp.min_update_interval<br/>(alias: changefeed.min_highwater_advance)</code></div></td><td>duration</td><td><code>0s</code></td><td>minimum amount of time that must have elapsed since the last time a changefeed's resolved timestamp was updated before it is eligible to be updated again; default of 0 means no minimum interval is enforced but updating will still be limited by the average time it takes to checkpoint progress</td><td>Serverless/Dedicated/Self-Hosted</td></tr>
 <tr><td><div id="setting-changefeed-node-throttle-config" class="anchored"><code>changefeed.node_throttle_config</code></div></td><td>string</td><td><code></code></td><td>specifies node level throttling configuration for all changefeeeds</td><td>Serverless/Dedicated/Self-Hosted</td></tr>