Conversation

@fimanishi (Member)

What changed?
Fixed support for parallelizing the replication task fetcher across multiple goroutines, and fixed the metrics that measure fetch performance.
Metrics:

  • replication_task_fetch_latency_ns - Exponential histogram tracking RPC latency for fetching replication tasks from remote clusters
  • replication_tasks_fetched_size - Histogram tracking the number of tasks fetched per batch
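
For illustration only, here is a minimal sketch of recording these two histograms around a single fetch RPC, using tally directly; the package name, bucket layouts, and the fetch callback are assumptions for the sketch, not the actual fetcher or Cadence metrics-client code:

```go
// Hypothetical sketch: emit the two new metrics around one fetch RPC.
// Bucket layouts and the fetch callback are illustrative placeholders.
package replicationsketch

import (
	"time"

	"github.com/uber-go/tally"
)

// recordFetch times the RPC and records how many tasks came back.
func recordFetch(scope tally.Scope, fetch func() (numTasks int, err error)) error {
	latency := scope.Histogram(
		"replication_task_fetch_latency_ns",
		tally.DurationBuckets{time.Millisecond, 10 * time.Millisecond, 100 * time.Millisecond, time.Second, 10 * time.Second},
	)
	fetchedSize := scope.Histogram(
		"replication_tasks_fetched_size",
		tally.ValueBuckets{0, 1, 10, 50, 100, 200, 500, 1000},
	)

	start := time.Now()
	n, err := fetch() // RPC to the remote cluster
	latency.RecordDuration(time.Since(start))
	if err != nil {
		return err
	}
	fetchedSize.RecordValue(float64(n))
	return nil
}
```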

The fetcher now supports multiple concurrent fetch goroutines (controlled by ReplicationTaskFetcherParallelism config), with shards distributed across goroutines using modulo arithmetic to maintain deterministic routing.
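
A minimal sketch of that modulo routing, with made-up names and a one-shot assignment just to show the rule (not the actual fetcher code):

```go
// Sketch of modulo-based shard routing; names are illustrative.
package replicationsketch

// assignShards buckets shard IDs onto `parallelism` fetch goroutines.
// A given shard always maps to the same goroutine (shardID % parallelism),
// so its fetch requests are always batched by that goroutine.
func assignShards(shardIDs []int, parallelism int) map[int][]int {
	assignment := make(map[int][]int, parallelism)
	for _, shardID := range shardIDs {
		worker := shardID % parallelism
		assignment[worker] = append(assignment[worker], shardID)
	}
	return assignment
}
```

With parallelism set to 1, every shard maps to goroutine 0, which is why leaving the config at its default keeps the previous single-fetcher behavior.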

Why?
Parallelization allows the fetcher to handle more shards concurrently without losing the benefits of request batching. The new metrics provide visibility into whether fetch latency or batch sizing is the bottleneck.

How did you test it?
Unit tests and simulation.

Potential risks
Memory usage and RPC load may increase as the number of task fetcher goroutines grows. Setting high parallelism with low rate limits wastes resources, since goroutines just sit waiting on the rate limiters, so the value requires careful tuning.

The default value is 1, which matches the previous behavior; if ReplicationTaskFetcherParallelism is left unchanged, this change is a no-op.
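
To make the tuning interaction concrete, here is a small self-contained sketch (assuming, for illustration, that all fetch goroutines share one golang.org/x/time/rate limiter; the actual fetcher's rate-limiting wiring may differ): raising parallelism well above what the rate limit allows just leaves the extra goroutines blocked in Wait.

```go
// Sketch: N fetch workers sharing a single rate limiter.
// With a low limit, the extra workers mostly block in limiter.Wait.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(2), 1) // 2 fetches/sec, shared by all workers
	parallelism := 8                             // high parallelism, low limit

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	var wg sync.WaitGroup
	for worker := 0; worker < parallelism; worker++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for {
				if err := limiter.Wait(ctx); err != nil {
					return // context expired
				}
				fmt.Printf("worker %d fetched a batch\n", worker)
			}
		}(worker)
	}
	wg.Wait() // total throughput is capped by the limiter, not by parallelism
}
```

Overall throughput stays capped by the limiter, so parallelism and the fetcher rate limits should be raised together.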

Release notes
Fixed Feature: Replication task fetchers now support parallel fetch operations via the ReplicationTaskFetcherParallelism dynamic config (defaults to 1 for backward compatibility). This improves replication throughput in large deployments with thousands of shards.

New Metrics:

  • replication_task_fetch_latency_ns: Histogram measuring fetch operation latency
  • replication_tasks_fetched_size: Histogram measuring batch sizes

Documentation Changes

Review comment on the metric definitions (the existing ReplicationTasksFetchedSize Gauge is replaced by a Histogram):

    ExponentialReplicationTaskLatency:      {metricName: "replication_task_latency_ns", metricType: Histogram, exponentialBuckets: Mid1ms24h},
    ExponentialReplicationTaskFetchLatency: {metricName: "replication_task_fetch_latency_ns", metricType: Histogram, exponentialBuckets: Mid1ms24h},
    ReplicationTasksFetchedSize:            {metricName: "replication_tasks_fetched_size", metricType: Gauge},
    ReplicationTasksFetchedSize:            {metricName: "replication_tasks_fetched_size", metricType: Histogram, buckets: tally.ValueBuckets{0, 1, 10, 50, 100, 200, 500, 1000, 2000, 5000, 10000, 20000, 50000}},
A reviewer (Member) commented:

Any reason why the existing exponential buckets don't fit?

@fimanishi (Member, author) replied:

Made a change to use an existing similar bucket
