`README.md` (3 additions & 3 deletions)

It distributes reconciliation of Kubernetes objects across multiple controller instances.
For this, the project applies proven sharding mechanisms used in distributed databases to Kubernetes controllers.
The project introduces a `sharder` component that implements sharding in a generic way and can be applied to any Kubernetes controller (independent of the used programming language and controller framework).
The `sharder` component is installed into the cluster along with a `ControllerRing` custom resource.
A `ControllerRing` declares a virtual ring of sharded controller instances and specifies API resources that should be distributed across shards in the ring.
It configures sharding on the cluster-scope level (i.e., objects in all namespaces), hence the `ControllerRing` name.
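A `ControllerRing` for such a setup might look roughly like the following sketch. The `spec.resources[]` and `controlledResources[]` structure is taken from the design docs; the `apiVersion`, the group/resource notation, and all other field names are assumptions for illustration:

```yaml
# Hypothetical manifest; only spec.resources[] and controlledResources[]
# are documented fields, everything else is illustrative.
apiVersion: sharding.timebertt.dev/v1alpha1
kind: ControllerRing
metadata:
  name: example
spec:
  resources:
  - group: ""            # core API group
    resource: configmaps
    controlledResources:
    - group: ""
      resource: secrets  # owned objects follow their owner's shard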
The watch cache is an expensive part of a controller regarding network transfer, CPU (decoding), and memory (local copy of all objects).
When running multiple instances of a controller, each instance must therefore watch only the subset of objects it is responsible for.
`docs/design.md` (6 additions & 6 deletions)

Notably, no leader election is performed, and there is no designated single active instance.
Instead, each controller instance maintains an individual shard `Lease` labeled with the ring's name, allowing them to announce themselves to the sharder for membership and failure detection.
The sharder watches these leases to build a hash ring with the available instances.
### The `ControllerRing` Resource and Sharder Webhook
Rings of controllers are configured through the use of the `ControllerRing` custom resource.
The sharder creates a `MutatingWebhookConfiguration` for each `ControllerRing` to perform assignments for objects associated with the ring.
The sharder webhook is called on `CREATE` and `UPDATE` requests for configured resources, but only for objects that don't have the ring-specific shard label, i.e., for unassigned objects.
The sharder uses the consistent hashing ring to determine the desired shard and adds the shard label during admission accordingly.
Shards then use a label selector for the shard label with their own instance name to restrict the cache and controller to the subset of objects assigned to them.
For the controller's "main" object (configured in `ControllerRing.spec.resources[]`), the object's `apiVersion`, `kind`, `namespace`, and `name` are concatenated to form its hash key.
For objects controlled by other objects (configured in `ControllerRing.spec.resources[].controlledResources[]`), the sharder utilizes information about the controlling object (`ownerReference` with `controller=true`) to calculate the object's hash key.
This ensures that owned objects are consistently assigned to the same shard as their owner.
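The assignment scheme above can be sketched in a few lines of Python. This is purely illustrative: the actual sharder is written in Go, and its hash function, virtual-node count, and all shard/object names below are assumptions; only the hash-key construction (concatenated coordinates for main objects, the owner's key for controlled objects) follows the text:

```python
import hashlib
from bisect import bisect

def _hash(key: str) -> int:
    # Stable 64-bit integer from the key. The real sharder uses its own
    # hash function; SHA-256 here is only for illustration.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class HashRing:
    """Minimal consistent hash ring with virtual nodes (a sketch, not the
    actual sharder implementation)."""

    def __init__(self, shards, virtual_nodes_per_shard=100):
        self._ring = sorted(
            (_hash(f"{shard}-{i}"), shard)
            for shard in shards
            for i in range(virtual_nodes_per_shard)
        )
        self._tokens = [token for token, _ in self._ring]

    def shard_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        index = bisect(self._tokens, _hash(key)) % len(self._ring)
        return self._ring[index][1]

def hash_key(api_version, kind, namespace, name, owner_key=None):
    # Main objects hash on their own coordinates; controlled objects reuse
    # the hash key of their controlling owner instead.
    if owner_key is not None:
        return owner_key
    return f"{api_version}/{kind}/{namespace}/{name}"

ring = HashRing(["shard-a", "shard-b", "shard-c"])  # hypothetical shard names
cm_key = hash_key("v1", "ConfigMap", "default", "foo")
secret_key = hash_key("v1", "Secret", "default", "foo-data", owner_key=cm_key)
# The owned Secret lands on the same shard as its owning ConfigMap.
assert ring.shard_for(cm_key) == ring.shard_for(secret_key)
```

Because a controlled object hashes to exactly its owner's key, owner and ownee always map to the same ring position, which is the consistency property the design relies on.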
### Object Movements and Rebalancing
The comparisons show that the sharder's resource consumption is almost constant.
### Minimize Impact on the Critical Path
While the use of mutating webhooks might allow dropping watches for the sharded objects, they can have a significant impact on API requests, e.g., regarding request latency.
To minimize the impact of the sharder's webhook on the overall request latency, the webhook is configured to only react on precisely the set of objects configured in the `ControllerRing` and only for `CREATE` and `UPDATE` requests of unassigned objects.
With this, the webhook is only on the critical path during initial object creation and whenever the set of available shards requires reassignments.
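This scoping could translate into a webhook configuration roughly like the following sketch. The `CREATE`/`UPDATE` operations and the "unassigned objects only" selector follow the text; the webhook name is invented, and the shard label key is borrowed from the development docs' example ring:

```yaml
# Hypothetical excerpt of a sharder-generated MutatingWebhookConfiguration.
webhooks:
- name: sharder.sharding.timebertt.dev  # illustrative name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
    operations: ["CREATE", "UPDATE"]    # never on reads or deletes
  objectSelector:
    matchExpressions:
    # only unassigned objects, i.e., those without the ring's shard label
    - key: shard.alpha.sharding.timebertt.dev/50d858e0-example
      operator: DoesNotExist
```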
Furthermore, webhooks can cause API requests to fail entirely.
`docs/development.md` (13 additions & 12 deletions)

Assuming a fresh kind cluster:

```bash
make run
```
Now, create the `example` `ControllerRing` and run a local shard:
```bash
make run-shard
```
You should see that the shard successfully announced itself to the sharder:
```bash
$ kubectl get lease -L alpha.sharding.timebertt.dev/controllerring,alpha.sharding.timebertt.dev/state
NAME             HOLDER           AGE   CONTROLLERRING   STATE
shard-fkpxhjk8   shard-fkpxhjk8   18s   example          ready

$ kubectl get controllerring
NAME      READY   AVAILABLE   SHARDS   AGE
example   True    1           1        34s
```
Running the shard locally gives you the option to test non-graceful termination, i.e., a scenario where the shard fails to renew its lease in time.
## Testing the Sharding Setup
Independent of the setup used (skaffold-based or running on the host machine), you should be able to create sharded `ConfigMaps` in the `default` namespace as configured in the `example` `ControllerRing`.
The `Secrets` created by the example shard controller should be assigned to the same shard as the owning `ConfigMap`:
```bash
$ kubectl create cm foo
configmap/foo created

$ kubectl get cm,secret -L shard.alpha.sharding.timebertt.dev/50d858e0-example
```