incorrectly or objects being garbage collected mistakenly.

### Non-Goals
* Change cluster installation procedures (no new certs etc)
* Lock particular clients to particular versions
## Proposal

This might be a good place to talk about core concepts and how they relate.
Cluster admins might not read the release notes and realize they should enable
network/firewall connectivity between apiservers. In this case clients will
receive 503s instead of transparently being proxied. 503 is still safer than
today's behavior.
Requests will consume egress bandwidth for 2 apiservers when proxied. We can cap
with a metric.
There could be a large volume of requests for a specific resource which might result in the identified apiserver being unable to serve the proxied requests. This scenario should not occur too frequently, since resource types which have large request volume should not be added or removed during an upgrade -- that would cause other problems, too.
We should ensure at most one proxy, rather than proxying the request over and over again (if the source apiserver has an incorrect understanding of what the destination apiserver can serve).
To prevent server-side request forgery, we will not give users control, via REST APIs, over the apiserver IP/endpoint information or the trust bundle used to authenticate the destination server while proxying.
## Design Details
### Aggregation Layer

1. A new filter will be added to the [handler chain] of the aggregation layer. This filter will maintain an internal map with the key being the group-version-resource and the value being a list of server IDs of the apiservers that are capable of serving that group-version-resource.
   1. This internal map is populated using an informer for StorageVersion objects. An event handler will be added for this informer that will get the apiserver ID of the requested group-version-resource and update the internal map accordingly.

2. This filter will pass on the request to the next handler in the local aggregator chain, if:
   1. It is a non-resource request.
   2. The StorageVersion informer cache hasn't synced yet, or `StorageVersionManager.Completed()` has returned false. We will serve error 503 in this case.
   3. The request has a header that indicates that it has already been proxied once. If for some reason the resource is not found locally, we will serve error 503.
   4. No StorageVersion was retrieved for it, meaning the request is for an aggregated API or for a custom resource.
   5. The local apiserver ID is found in the list of serviceable-by server IDs from the internal map.

3. If the local apiserver ID is not found in the list of serviceable-by server IDs, a random apiserver ID will be selected from the retrieved list and the request will be proxied to that apiserver.

4. If there is no apiserver ID retrieved for the requested GVR, we will serve 404 with error `GVR <group_version_resource> is not served by anything in this cluster`.

5. If the proxy call fails for network issues or any other reason, we serve 503 with error `Error while proxying request to destination apiserver`.
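
The routing decision in the steps above can be sketched as a small Go function. The names and the map shape are illustrative assumptions, not the actual kube-apiserver code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// route decides how the aggregation-layer filter handles a resource request,
// given the informer-backed map of GVR -> capable apiserver IDs and the local
// server ID. Return values mirror the KEP: "local" (pass to the next handler),
// a destination server ID (proxy), or "404" (no apiserver serves the GVR).
func route(serversByGVR map[string][]string, gvr, localID string) string {
	ids, ok := serversByGVR[gvr]
	if !ok || len(ids) == 0 {
		return "404" // nothing in this cluster serves the GVR
	}
	for _, id := range ids {
		if id == localID {
			return "local" // serve from the local aggregator chain
		}
	}
	// proxy to a randomly selected capable apiserver
	return ids[rand.Intn(len(ids))]
}

func main() {
	m := map[string][]string{
		"apps/v1/deployments": {"apiserver-a", "apiserver-b"},
		"batch/v1/cronjobs":   {"apiserver-b"},
	}
	fmt.Println(route(m, "apps/v1/deployments", "apiserver-a")) // local
	fmt.Println(route(m, "batch/v1/cronjobs", "apiserver-a"))   // apiserver-b
	fmt.Println(route(m, "example.com/v1/widgets", "apiserver-a"))
}
```

The error responses (503 on proxy failure, 503 for an unsynced cache) would wrap around this decision in the real filter.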

StorageVersion API currently tells us whether a particular StorageVersion can be read from etcd by the listed apiserver. We will enhance this API to also include the apiserver ID of the server that can serve this StorageVersion.
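
An enhanced StorageVersion object might look roughly like this. The field names follow this KEP's own reference to `serverStorageVersions[*].apiServerID`; the exact shape of the final API may differ:

```yaml
# Illustrative sketch, not the final API shape.
apiVersion: internal.apiserver.k8s.io/v1alpha1
kind: StorageVersion
metadata:
  name: apps.deployments
status:
  serverStorageVersions:
  - apiServerID: apiserver-a   # new: identifies the server that can serve this StorageVersion
    encodingVersion: apps/v1
    decodableVersions:
    - apps/v1
```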

* TODO: We need to find a place to store and retrieve the destination apiserver's host and port information given the server's ID. We do not want to store this information in:
  * StorageVersion: because we do not want to expose the network identity of the apiservers in this API, which can be listed in multiple places where it may be unnecessary/redundant to do so.
  * Endpoint reconciler lease: because the IP present here could be that of a load balancer for the apiservers, but we need to know the definite address of the identified destination apiserver.
#### Proxy transport between apiservers and authn
For mTLS between the source and destination apiservers, we will do the following:

1. For server authentication by the client (source apiserver): the client needs to validate the server certs (presented by the destination apiserver), for which it needs to know the CA bundle of the authority that signed those certs. We will introduce a new flag `--peer-ca-file` that must be passed to the kube-apiserver to verify the other kube-apiserver's serving certs.

2. For client authentication by the server (destination apiserver): the destination apiserver will check the source apiserver's certs to determine that the proxy request is from an authenticated client. The destination apiserver will use requestheader authentication (and NOT client cert authentication) for this, using the kube-aggregator proxy client cert/key and the `--requestheader-client-ca-file` passed to the apiserver upon bootstrap.
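
A minimal sketch of the client side of step 1, assuming the source apiserver builds a TLS config that trusts only the bundle supplied via `--peer-ca-file`. The helper names are invented, and the demo generates a throwaway self-signed CA in memory instead of reading the flag's file:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// newPeerClientTLSConfig builds the client-side TLS config a source apiserver
// could use when proxying: it trusts only the peer CA bundle (the contents of
// --peer-ca-file) for verifying the destination apiserver's serving certs.
func newPeerClientTLSConfig(peerCAPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(peerCAPEM) {
		return nil, fmt.Errorf("no CA certificates found in peer CA bundle")
	}
	return &tls.Config{RootCAs: pool}, nil
}

// selfSignedCAPEM generates a throwaway CA certificate for the demo; a real
// cluster would read the bundle from the --peer-ca-file path instead.
func selfSignedCAPEM() ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demo-peer-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	caPEM, err := selfSignedCAPEM()
	if err != nil {
		panic(err)
	}
	cfg, err := newPeerClientTLSConfig(caPEM)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.RootCAs != nil)
}
```

The server side (requestheader authentication of the proxy client) reuses the existing aggregator machinery, so no new sketch is needed there.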
### Discovery Merging
TODO: detailed description of discovery merging. (not scheduled until beta.)
### Test Plan

We expect no non-infra related flakes in the last month as a GA graduation criteria.
#### Alpha
- Proxying implemented (behind feature flag)
- mTLS or other secure system used for proxying
#### Beta
- Discovery document merging implemented
#### GA
These goals will help you determine what you need to measure (SLIs) in the next
question.
-->
This feature depends on the `StorageVersion` feature, which generates objects with a `storageVersion.status.serverStorageVersions[*].apiServerID` field that is used to find the destination apiserver's network location.
###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
<!--
Pick one or more of these and delete the rest.
-->
659
701
660
-
-[ ] Metrics
661
-
- Metric name:
662
-
-[Optional] Aggregation method:
663
-
- Components exposing the metric:
664
-
-[ ] Other (treat as last resort)
665
-
- Details:
702
+
- [X] Metrics
  - Metric name: `kubernetes_uvip_count`
  - Components exposing the metric: kube-apiserver
###### Are there any missing metrics that would be useful to have to improve observability of this feature?
This section must be completed when targeting beta to a release.
###### Does this feature depend on any specific services running in the cluster?
No, but it does depend on the `StorageVersion` feature in kube-apiserver.
<!--
Think about both cluster-level services (e.g. metrics-server) as well
as node-level agents (e.g. specific version of CRI). Focus on external or