---
title: "Partition Ring with Multi-AZ Replication"
linkTitle: "Partition Ring Multi-AZ Replication"
weight: 1
slug: partition-ring-multi-az-replication
---

- Author: [Daniel Blando](https://github.com/danielblando)
- Date: July 2025
- Status: Proposed

## Background

Distributors use a token-based ring to shard data across ingesters. Each ingester owns random tokens (32-bit numbers) in a hash ring. For each incoming series, the distributor:

1. Hashes the series labels to get a hash value
2. Finds the primary ingester (the owner of the smallest token greater than the hash value)
3. When replication is enabled, selects additional replicas by moving clockwise around the ring
4. Ensures replicas are distributed across different availability zones

The issue arises when replication is enabled: because each series in a request is hashed independently, different series route to different groups of ingesters.

```mermaid
graph TD
    A[Write Request] --> B[Distributor]
    B --> C[Hash Series 1] --> D[Ingesters: 5,7,9]
    B --> E[Hash Series 2] --> F[Ingesters: 5,3,10]
    B --> G[Hash Series 3] --> H[Ingesters: 7,27,28]
    B --> I[...] --> J[Different ingester sets<br/>for each series]
```

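For illustration, here is a minimal, self-contained sketch of the lookup described above. The `tokenOwner` struct, `replicasFor` function, and the FNV-based hash are illustrative assumptions, not the actual Cortex implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// tokenOwner is a hypothetical, simplified view of one token in the ring.
type tokenOwner struct {
	token    uint32
	ingester string
	zone     string
}

// hashSeries maps a series' labels into the 32-bit token space.
func hashSeries(labels string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(labels))
	return h.Sum32()
}

// replicasFor finds the owner of the smallest token greater than the hash,
// then walks clockwise, skipping zones already used, until it has
// replicationFactor ingesters in distinct availability zones.
func replicasFor(ring []tokenOwner, hash uint32, replicationFactor int) []string {
	sort.Slice(ring, func(i, j int) bool { return ring[i].token < ring[j].token })
	start := sort.Search(len(ring), func(i int) bool { return ring[i].token > hash })

	var replicas []string
	usedZones := map[string]bool{}
	for i := 0; i < len(ring) && len(replicas) < replicationFactor; i++ {
		owner := ring[(start+i)%len(ring)] // wrap around the ring
		if usedZones[owner.zone] {
			continue
		}
		usedZones[owner.zone] = true
		replicas = append(replicas, owner.ingester)
	}
	return replicas
}

func main() {
	ring := []tokenOwner{
		{token: 1_000_000_000, ingester: "ing-5", zone: "az1"},
		{token: 2_000_000_000, ingester: "ing-7", zone: "az2"},
		{token: 3_000_000_000, ingester: "ing-9", zone: "az3"},
		{token: 4_000_000_000, ingester: "ing-3", zone: "az1"},
	}
	series := `{name="http_request_latency",api="/push",status="2xx"}`
	fmt.Println(replicasFor(ring, hashSeries(series), 3))
}
```

Because every series is hashed independently, each one can land on a different replica set; that behavior is exactly what the proposal below changes.
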
## Problem

### Limited AZ Failure Tolerance with Replication Factor

While the token ring effectively distributes load across the ingester fleet, the independent hashing and routing of each series creates an amplification effect where a single ingester failure can impact a large number of write requests.

Consider a ring with 30 ingesters, where each series is distributed to three different ingesters:

```
Sample 1: {name="http_request_latency",api="/push", status="2xx"}
  → Ingesters: ing-5, ing-7, ing-9
Sample 2: {name="http_request_latency",api="/push", status="4xx"}
  → Ingesters: ing-5, ing-3, ing-10
Sample 3: {name="http_request_latency",api="/push", status="3xx"}
  → Ingesters: ing-7, ing-27, ing-28
...
```

If ingesters `ing-15` and `ing-18` (in different AZs) are offline, any request containing a series that needs to write to both of these ingesters will fail completely:

```
Sample 15: {name="http_request_latency",api="/push", status="5xx"}
  → Ingesters: ing-10, ing-15, ing-18 // Request fails
```

As request batch sizes grow, the probability of request failure becomes critical in replicated deployments. Given two failed ingesters in different AZs, each individual series has only a small chance of requiring both failed ingesters. However, as request batch sizes increase, the probability that at least one series in the batch hashes to both failed ingesters approaches certainty.

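To make that concrete, here is a rough back-of-envelope sketch. The numbers are assumptions for illustration only: 3 AZs of 10 ingesters each with roughly uniform token ownership, so a single series needs one specific ingester in a given AZ with probability about 1/10.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumption: with ~10 ingesters per AZ and roughly uniform token
	// ownership, a series lands on one specific ingester in a given AZ with
	// probability ~1/10, so it needs both failed ingesters (in two different
	// AZs) with probability ~1/10 * 1/10.
	pSeriesNeedsBothFailed := 0.1 * 0.1

	// A request fails if at least one of its series needs both failed ingesters.
	for _, batchSize := range []int{1, 100, 500, 1000} {
		pRequestFails := 1 - math.Pow(1-pSeriesNeedsBothFailed, float64(batchSize))
		fmt.Printf("batch size %4d -> P(request fails) ≈ %.4f\n", batchSize, pRequestFails)
	}
}
```

Under these assumptions a 1000-series request fails with probability well above 0.99, even though any single series is affected only 1% of the time.
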
**Note**: This problem specifically affects Cortex deployments that use replication. Deployments with a replication factor of 1 are not impacted by this availability amplification issue.

## Proposed Solution

### Partition Ring Architecture

A new Partition Ring is proposed where the ring is divided into partitions, with each partition containing a set of tokens and a group of ingesters. Ingesters are allocated to partitions based on their order in the zonal StatefulSet, ensuring that scaling operations align with the StatefulSet's LIFO behavior. Each partition contains a number of ingesters equal to the replication factor, with exactly one ingester per availability zone.

This approach provides **reduced failure probability**: the chance that two failed ingesters land in the same partition is significantly lower than the chance that two random ingester failures affect some series in a token ring. It also enables **deterministic replication**, where data sent to `ing-az1-1` always replicates to `ing-az2-1` and `ing-az3-1`, making system behavior more predictable and easier to troubleshoot.

```mermaid
graph TD
    subgraph "Partition Ring"
        subgraph "Partition 3"
            P1A[ing-az1-3]
            P1B[ing-az2-3]
            P1C[ing-az3-3]
        end
        subgraph "Partition 2"
            P2A[ing-az1-2]
            P2B[ing-az2-2]
            P2C[ing-az3-2]
        end
        subgraph "Partition 1"
            P3A[ing-az1-1]
            P3B[ing-az2-1]
            P3C[ing-az3-1]
        end
    end

    T1[Tokens 34] --> P1A
    T2[Tokens 56] --> P2A
    T3[Tokens 12] --> P3A
```

Within each partition, ingesters maintain identical data, acting as true replicas of each other. Distributors maintain similar hashing logic but select a partition instead of individual ingesters. Data is then forwarded to all ingesters within the selected partition, making the replication pattern deterministic.

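A minimal sketch of this distributor-side difference, under the same illustrative assumptions as the earlier example; the `partition` struct and `partitionFor` function are hypothetical, not the proposal's API:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// partition is a hypothetical, simplified view of one partition in the ring:
// a set of tokens plus one ingester per availability zone.
type partition struct {
	name      string
	tokens    []uint32
	ingesters []string // e.g. ing-az1-1, ing-az2-1, ing-az3-1
}

// partitionFor hashes the series and picks the partition that owns the
// smallest token greater than the hash. All replicas then come from that
// partition, so the replica set is deterministic per partition.
func partitionFor(partitions []partition, series string) partition {
	h := fnv.New32a()
	h.Write([]byte(series))
	hash := h.Sum32()

	type tokenRef struct {
		token uint32
		idx   int
	}
	var ring []tokenRef
	for i, p := range partitions {
		for _, t := range p.tokens {
			ring = append(ring, tokenRef{t, i})
		}
	}
	sort.Slice(ring, func(i, j int) bool { return ring[i].token < ring[j].token })
	pos := sort.Search(len(ring), func(i int) bool { return ring[i].token > hash })
	return partitions[ring[pos%len(ring)].idx] // wrap around the ring
}

func main() {
	partitions := []partition{
		{"partition-1", []uint32{1_000_000_000, 3_000_000_000}, []string{"ing-az1-1", "ing-az2-1", "ing-az3-1"}},
		{"partition-2", []uint32{2_000_000_000, 4_000_000_000}, []string{"ing-az1-2", "ing-az2-2", "ing-az3-2"}},
	}
	p := partitionFor(partitions, `{name="http_request_latency",api="/push",status="2xx"}`)
	fmt.Printf("series -> %s, written to all of %v\n", p.name, p.ingesters)
}
```

The key difference from the token-ring sketch is that the zone-aware clockwise walk disappears: every series that maps to a partition is written to that partition's fixed set of ingesters.
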
### Protocol Buffer Definitions

```protobuf
message PartitionRingDesc {
  map<string, PartitionDesc> partitions = 1;
}

message PartitionDesc {
  PartitionState state = 1;
  repeated uint32 tokens = 2;
  map<string, InstanceDesc> instances = 3;
  int64 registered_timestamp = 4;
}

// Unchanged from current implementation
message InstanceDesc {
  string addr = 1;
  int64 timestamp = 2;
  InstanceState state = 3;
  string zone = 7;
  int64 registered_timestamp = 8;
}
```

### Partition States

Partitions maintain a simplified state model. It provides **clear ownership**, where each series belongs to exactly one partition, but requires **additional state management** for partition states and lifecycle:

```go
type PartitionState int

const (
    NON_READY PartitionState = iota // Insufficient ingesters
    ACTIVE                          // Fully operational
    READONLY                        // Scale-down in progress
)
```

State transitions:

```mermaid
stateDiagram-v2
    [*] --> NON_READY
    NON_READY --> ACTIVE : Required ingesters joined<br/>across all AZs
    ACTIVE --> READONLY : Scale-down initiated
    ACTIVE --> NON_READY : Ingester removed
    READONLY --> NON_READY : Ingesters removed
    NON_READY --> [*] : Partition deleted
```

### Partition Lifecycle Management

#### Creating Partitions

When a new ingester joins the ring:

1. Check if a suitable partition exists with available slots
2. If no partition exists, create a new partition in `NON_READY` state
3. Add the partition's tokens to the ring
4. Add the ingester to the partition
5. Wait for the required number of ingesters across all AZs (one per AZ)
6. Once all AZs are represented, transition the partition to `ACTIVE` (see the sketch after this list)

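The state check behind steps 5–6 could look roughly like the sketch below. It is a hypothetical helper, not part of the proposal's API, reusing the `PartitionState` values and the generated `InstanceDesc` struct from the definitions above and assuming a configured `expectedZones` list:

```go
// partitionStateFor derives a partition's state from the zones of its
// registered instances. READONLY is not derived here; it is set explicitly
// when a scale-down starts.
func partitionStateFor(instances map[string]InstanceDesc, expectedZones []string) PartitionState {
	zones := map[string]bool{}
	for _, inst := range instances {
		zones[inst.Zone] = true
	}
	for _, az := range expectedZones {
		if !zones[az] {
			// At least one AZ has no ingester yet (or lost one): NON_READY.
			return NON_READY
		}
	}
	// One ingester registered in every expected AZ: the partition is ACTIVE.
	return ACTIVE
}
```
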
#### Removing Partitions

The scale-down process follows these steps:

1. **Mark READONLY**: Partition stops accepting new writes but continues serving reads
2. **Data Transfer**: Wait for all ingesters in the partition to transfer data and become empty
3. **Coordinated Removal**: Remove one ingester from each AZ simultaneously
4. **State Transition**: Partition automatically transitions to `NON_READY` (insufficient replicas)
5. **Cleanup**: Remove remaining ingesters and delete the partition from the ring

If READONLY mode is not used, removing an ingester marks the partition as `NON_READY`. When all ingesters are removed, the last one deletes the partition if the `unregister_on_shutdown` configuration option is true.

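A sketch of the scale-down gate implied by steps 1–2. The helper and the `seriesCount` callback are hypothetical, and the generated `PartitionDesc` field names are assumed from the protobuf definitions above:

```go
// canRemovePartition reports whether a READONLY partition is ready for the
// coordinated removal in step 3: every ingester in it must report zero
// series, i.e. the data transfer from step 2 has completed.
func canRemovePartition(p PartitionDesc, seriesCount func(addr string) (int, error)) (bool, error) {
	if p.State != READONLY {
		return false, nil // only partitions marked READONLY are scale-down candidates
	}
	for _, inst := range p.Instances {
		n, err := seriesCount(inst.Addr)
		if err != nil {
			return false, err
		}
		if n > 0 {
			return false, nil // at least one ingester still holds data
		}
	}
	return true, nil
}
```
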
### Multi-Ring Migration Strategy

To address the migration challenge for production clusters currently running token-based rings, this proposal also introduces a multi-ring infrastructure that allows gradual traffic shifting from token-based to partition-based rings:

```mermaid
sequenceDiagram
    participant C as Client
    participant D as Distributor
    participant MR as Multi-Ring Router
    participant TR as Token Ring
    participant PR as Partition Ring

    C->>D: Write Request (1000 series)
    D->>MR: Route request
    MR->>MR: Check percentage config<br/>(e.g., 80% token, 20% partition)
    MR->>TR: Route 800 series to Token Ring
    MR->>PR: Route 200 series to Partition Ring

    Note over TR,PR: Both rings process their portion
    TR->>D: Response for 800 series
    PR->>D: Response for 200 series
    D->>C: Combined response
```

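A sketch of the per-series split the diagram describes. The `routeSeries` function and the way the percentage is applied (series hash modulo 100) are illustrative assumptions; the actual mechanism would be defined during implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// routeSeries decides, per series, whether to send it to the partition ring
// or to the existing token ring, based on a configured percentage. Hashing
// the series (rather than picking randomly) keeps the decision stable, so
// the same series always goes to the same ring for a given percentage.
func routeSeries(series string, partitionRingPercent uint32) string {
	h := fnv.New32a()
	h.Write([]byte(series))
	if h.Sum32()%100 < partitionRingPercent {
		return "partition-ring"
	}
	return "token-ring"
}

func main() {
	seriesBatch := []string{
		`{name="http_request_latency",api="/push",status="2xx"}`,
		`{name="http_request_latency",api="/push",status="4xx"}`,
		`{name="http_request_latency",api="/push",status="5xx"}`,
	}
	counts := map[string]int{}
	for _, s := range seriesBatch {
		counts[routeSeries(s, 20)]++ // e.g. route 20% of series to the partition ring
	}
	fmt.Println(counts)
}
```
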
Migration phases for production clusters:

1. **Phase 1**: Deploy partition ring alongside existing token ring (0% traffic)
2. **Phase 2**: Route 10% of traffic to the partition ring
3. **Phase 3**: Gradually increase to 50% traffic
4. **Phase 4**: Route 90% of traffic to the partition ring
5. **Phase 5**: Complete the migration (100% partition ring)

This multi-ring approach solves the migration problem for existing production deployments that cannot afford downtime during the transition from token-based to partition-based rings. It provides **zero downtime migration** with **rollback capability** and **incremental validation** at each step. However, it requires **dual ring participation**, where ingesters must participate in both rings during migration, causes **increased memory usage**, and demands **migration coordination** with careful percentage management and monitoring.

#### Read Path Considerations

During migration, the read path (queriers and rulers) must have visibility into both rings to ensure all functionality works correctly:

- **Queriers** must check both token and partition rings to locate series data, as data may be distributed across both ring types during migration
- **Rulers** must evaluate rules against data from both rings to ensure complete rule evaluation
- **Ring-aware components** (like shuffle sharding) must operate correctly across both ring types
- **Metadata operations** (like label queries) must aggregate results from both rings

All existing Cortex functionality must continue to work seamlessly during the migration period, requiring components to transparently handle the dual-ring architecture.

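A sketch of this fan-out-and-merge behavior, with a hypothetical `Ring` interface standing in for both ring types; the real querier and ruler APIs differ:

```go
// Ring is a hypothetical read-path abstraction over both ring types.
type Ring interface {
	// IngestersForQuery returns the ingester addresses that may hold data
	// for the queried series matchers.
	IngestersForQuery(matchers []string) []string
}

// ingestersForDualRead queries both rings during migration and deduplicates
// the result, since an ingester may participate in both rings at once.
func ingestersForDualRead(tokenRing, partitionRing Ring, matchers []string) []string {
	seen := map[string]bool{}
	var addrs []string
	for _, r := range []Ring{tokenRing, partitionRing} {
		for _, addr := range r.IngestersForQuery(matchers) {
			if !seen[addr] {
				seen[addr] = true
				addrs = append(addrs, addr)
			}
		}
	}
	return addrs
}
```
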

0 commit comments

Comments
 (0)