node: podresources: port to the latest KEP template
This change aims to be as mechanical a translation as possible; as a
consequence, we now have many gaps and TODOs, which will be filled in a
follow-up PR.
Signed-off-by: Francesco Romani <[email protected]>
- [Add v1alpha1 Kubelet GRPC service, at <code>/var/lib/kubelet/pod-resources/kubelet.sock</code>, which returns a list of <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto#L734">CreateContainerRequest</a>s used to create containers.](#add-v1alpha1-kubelet-grpc-service-at--which-returns-a-list-of-createcontainerrequests-used-to-create-containers)
- [Add a field to Pod Status.](#add-a-field-to-pod-status)
Items marked with (R) are required *prior to targeting to a milestone / release*.

- [X] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements](https://github.com/kubernetes/enhancements/issues/606)
- [X] (R) KEP approvers have approved the KEP status as `implementable`
- [X] (R) Design details are appropriately documented
- [X] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [X] e2e Tests for all Beta API Operations (endpoints)
  - [X] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [X] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [X] (R) Graduation criteria is in place
  - [X] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [X] (R) Production readiness review completed
- [X] (R) Production readiness review approved
- [X] "Implementation History" section is up-to-date for milestone
- [X] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [X] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
* Deprecate and remove current device-specific knowledge from the kubelet, such as [accelerator metrics](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go#L229)
* Enable external device monitoring agents to provide metrics relevant to Kubernetes
### Non-Goals
TBD
## Proposal
### User Stories
#### Story 1: Cluster administrators: easier monitoring
As a _Cluster Administrator_, I provide a set of devices from various vendors in my cluster. Each vendor independently maintains their own agent, so I run monitoring agents only for devices I provide. Each agent adheres to the [node monitoring guidelines](https://docs.google.com/document/d/1_CdNWIjPBqVDMvu82aJICQsSCbh2BR-y9a8uXjQm4TI/edit?usp=sharing), so I can use a compatible monitoring pipeline to collect and analyze metrics from a variety of agents, even though they are maintained by different vendors.
#### Story 2: Device Vendors: decouple monitoring from device lifecycle management
As a _Device Vendor_, I manufacture devices and I have deep domain expertise in how to run and monitor them. Because I maintain my own Device Plugin implementation, as well as a Device Monitoring Agent, I can provide consumers of my devices an easy way to consume and monitor my devices without requiring open-source contributions. The Device Monitoring Agent doesn't have any dependencies on the Device Plugin, so I can decouple monitoring from device lifecycle management. My Device Monitoring Agent works by periodically querying the `/devices/<ResourceName>` endpoint to discover which devices are being used, and to get the container/pod metadata associated with the metrics.
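A minimal sketch of such an agent is shown below. It assumes the Go client generated from the kubelet pod resources proto (the `k8s.io/kubelet/pkg/apis/podresources/v1alpha1` import path, the `PodResourcesLister` service, and the `List` response fields are taken from the current in-tree implementation, not mandated by this KEP) and connects over the socket path described above:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"

	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1alpha1"
)

// socketPath is the kubelet pod resources socket described in this KEP.
const socketPath = "/var/lib/kubelet/pod-resources/kubelet.sock"

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Connect to the kubelet over the local unix domain socket.
	conn, err := grpc.DialContext(ctx, socketPath,
		grpc.WithInsecure(),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}

	// Correlate device IDs with the pod/container metadata the agent
	// attaches to its metrics.
	for _, pod := range resp.GetPodResources() {
		for _, ctr := range pod.GetContainers() {
			for _, dev := range ctr.GetDevices() {
				fmt.Printf("%s/%s/%s: %s -> %v\n",
					pod.GetNamespace(), pod.GetName(), ctr.GetName(),
					dev.GetResourceName(), dev.GetDeviceIds())
			}
		}
	}
}
```

An agent like this would typically run as a privileged daemonset that mounts `/var/lib/kubelet/pod-resources/` via hostPath, which is also why the questionnaire below suggests looking at hostPath mounts of privileged containers to tell whether the feature is in use.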
[X] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.
Given that the API allows observing which device has been associated with which container, we need to test different configurations, such as:
* Pods without devices assigned to any containers.
* Pods with devices assigned to some but not all containers.
E2E tests are explicitly not written because they would require us to generate and deploy a custom container.
The infrastructure required is expensive and it is not clear what additional testing (and hence risk reduction) this would provide compared to node e2e tests.
##### Prerequisite testing updates
TBD
##### Unit tests
TBD
##### Integration tests
TBD
##### e2e tests
TBD
### Graduation Criteria
#### Alpha
Kubelet will always be backwards compatible, so going forward existing plugins are not expected to break.
## Production Readiness Review Questionnaire
### Feature Enablement and Rollback
###### How can this feature be enabled / disabled in a live cluster?
- [X] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: `KubeletPodResources`.
  - Components depending on the feature gate: N/A
###### Does enabling the feature change any default behavior?
No.
###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
Yes, through feature gates.
###### What happens if we reenable the feature if it was previously rolled back?
The service recovers its state from the kubelet.
###### Are there any tests for feature enablement/disablement?
No; however, no data is created or deleted.
### Rollout, Upgrade and Rollback Planning
###### How can a rollout or rollback fail? Can it impact already running workloads?
Kubelet would fail to start. Errors would be caught in the CI.
###### What specific metrics should inform a rollback?
Not applicable; metrics wouldn't be available.
###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
Not applicable.
###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
No.
### Monitoring Requirements

###### How can an operator determine if the feature is in use by workloads?

- Look at the `pod_resources_endpoint_requests_total` metric exposed by the kubelet.
- Look at hostPath mounts of privileged containers.

###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
###### Are there any missing metrics that would be useful to have to improve observability of this feature?
No.
### Dependencies
###### Does this feature depend on any specific services running in the cluster?
Not applicable.
### Scalability
###### Will enabling / using this feature result in any new API calls?
No.
###### Will enabling / using this feature result in introducing new API types?
No.
###### Will enabling / using this feature result in any new calls to the cloud provider?
No.
###### Will enabling / using this feature result in increasing size or count of the existing API objects?
No.
###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
No. The feature is outside of any existing paths in the kubelet.
###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
In 1.18, DDoSing the API can lead to resource exhaustion. This is planned to be addressed as part of GA.
The feature only collects data when requests come in; the data is then garbage collected. The data collected is proportional to the number of pods on the node.
###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
TBD
### Troubleshooting
###### How does this feature react if the API server and/or etcd is unavailable?

No effect.

###### What are other known failure modes?

No known failure modes.

###### What steps should be taken if SLOs are not being met to determine the problem?

N/A