- [Add v1alpha1 Kubelet GRPC service, at <code>/var/lib/kubelet/pod-resources/kubelet.sock</code>, which returns a list of <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto#L734">CreateContainerRequest</a>s used to create containers.](#add-v1alpha1-kubelet-grpc-service-at--which-returns-a-list-of-createcontainerrequests-used-to-create-containers)
- [Add a field to Pod Status.](#add-a-field-to-pod-status)
Items marked with (R) are required *prior to targeting to a milestone / release*.

- [X] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements](https://github.com/kubernetes/enhancements/issues/606)
- [X] (R) KEP approvers have approved the KEP status as `implementable`
- [X] (R) Design details are appropriately documented
- [X] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [X] e2e Tests for all Beta API Operations (endpoints)
  - [X] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [X] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [X] (R) Graduation criteria is in place
  - [X] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [X] (R) Production readiness review completed
- [X] (R) Production readiness review approved
- [X] "Implementation History" section is up-to-date for milestone
- [X] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [X] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
* Deprecate and remove current device-specific knowledge from the kubelet, such as [accelerator metrics](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go#L229)
* Enable external device monitoring agents to provide metrics relevant to Kubernetes
### Non-Goals
* Enable cluster components to consume the API. The API is node-local only.
## Proposal
### User Stories
#### Story 1: Cluster administrators: easier monitoring

As a _Cluster Administrator_, I provide a set of devices from various vendors in my cluster. Each vendor independently maintains their own agent, so I run monitoring agents only for devices I provide. Each agent adheres to the [node monitoring guidelines](https://docs.google.com/document/d/1_CdNWIjPBqVDMvu82aJICQsSCbh2BR-y9a8uXjQm4TI/edit?usp=sharing), so I can use a compatible monitoring pipeline to collect and analyze metrics from a variety of agents, even though they are maintained by different vendors.

#### Story 2: Device Vendors: decouple monitoring from device lifecycle management

As a _Device Vendor_, I manufacture devices and I have deep domain expertise in how to run and monitor them. Because I maintain my own Device Plugin implementation, as well as my own Device Monitoring Agent, I can provide consumers of my devices an easy way to consume and monitor my devices without requiring open-source contributions. The Device Monitoring Agent doesn't have any dependencies on the Device Plugin, so I can decouple monitoring from device lifecycle management. My Device Monitoring Agent works by periodically querying the `/devices/<ResourceName>` endpoint to discover which devices are being used, and to get the container/pod metadata associated with the metrics.

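To make that flow concrete, here is a minimal, illustrative Go sketch of such an agent polling the kubelet's pod-resources socket. It is not part of this KEP's design: the `k8s.io/kubelet` v1alpha1 client package, the polling interval, and the logging are assumptions chosen for the example.

```go
// Illustrative monitoring-agent sketch: periodically list device-to-container
// assignments from the kubelet pod-resources gRPC endpoint and log the
// pod/container metadata a real agent would attach to its device metrics.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1alpha1"
)

const kubeletSocket = "unix:///var/lib/kubelet/pod-resources/kubelet.sock"

func main() {
	conn, err := grpc.Dial(kubeletSocket, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("connecting to pod-resources socket: %v", err)
	}
	defer conn.Close()
	client := podresourcesapi.NewPodResourcesListerClient(conn)

	// Poll on a fixed interval; each List response maps in-use device IDs to
	// the pod and container they were assigned to.
	for range time.Tick(30 * time.Second) {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
		cancel()
		if err != nil {
			log.Printf("listing pod resources: %v", err)
			continue
		}
		for _, pod := range resp.GetPodResources() {
			for _, ctr := range pod.GetContainers() {
				for _, dev := range ctr.GetDevices() {
					log.Printf("pod=%s/%s container=%s resource=%s devices=%v",
						pod.GetNamespace(), pod.GetName(), ctr.GetName(),
						dev.GetResourceName(), dev.GetDeviceIds())
				}
			}
		}
	}
}
```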
[X] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.
Given that the API allows observing which device has been associated with which container, we need to test different configurations, such as:
* Pods without devices assigned to any containers.
* Pods with devices assigned to some but not all containers.
E2E tests are explicitly not written because they would require us to generate and deploy a custom container.
The infrastructure required is expensive and it is not clear what additional testing (and hence risk reduction) this would provide compared to node e2e tests.
###### Are there any missing metrics that would be useful to have to improve observability of this feature?
No.
### Dependencies
###### Does this feature depend on any specific services running in the cluster?
Not applicable.
### Scalability
###### Will enabling / using this feature result in any new API calls?
No.
###### Will enabling / using this feature result in introducing new API types?
No.
###### Will enabling / using this feature result in any new calls to the cloud provider?
No.
###### Will enabling / using this feature result in increasing size or count of the existing API objects?
No.
###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
No. The feature is outside any existing paths in the kubelet.
###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
In 1.18, DDoSing the API can lead to resource exhaustion. This is planned to be addressed as part of GA.
The feature only collects data when a request comes in; the data is then garbage collected. The amount of data collected is proportional to the number of pods on the node.
###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
No. Clients consume the API through the gRPC interface exposed on the unix domain socket. Only a single socket is created and managed by the kubelet, shared among all the clients (typically one). No resources are reserved when a client connects, and the API is stateless (no state is preserved across calls, no concept of a session). All the data needed to serve the calls is fetched from existing data structures internal to the resource managers.

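As a rough illustration of that serving pattern (a single unix-domain-socket listener with a stateless handler backed by existing in-memory state), a sketch might look like the following. The `podResourcesServer` type and the `currentAssignments` helper are hypothetical stand-ins, not the kubelet's actual implementation.

```go
// Sketch only: one gRPC server on a single unix domain socket, shared by all
// clients, with a stateless List handler that reads already-tracked state.
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1alpha1"
)

type podResourcesServer struct{} // hypothetical server type for illustration

// List is stateless: nothing is cached per client or per call; it simply
// snapshots the assignments tracked elsewhere (stubbed out below).
func (s *podResourcesServer) List(ctx context.Context, _ *podresourcesapi.ListPodResourcesRequest) (*podresourcesapi.ListPodResourcesResponse, error) {
	return &podresourcesapi.ListPodResourcesResponse{PodResources: currentAssignments()}, nil
}

// currentAssignments is a hypothetical accessor standing in for the resource
// managers' internal data structures.
func currentAssignments() []*podresourcesapi.PodResources { return nil }

func main() {
	const socketPath = "/var/lib/kubelet/pod-resources/kubelet.sock"
	_ = os.Remove(socketPath) // remove a stale socket from a previous run
	lis, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatalf("listening on %s: %v", socketPath, err)
	}
	srv := grpc.NewServer()
	podresourcesapi.RegisterPodResourcesListerServer(srv, &podResourcesServer{})
	log.Fatal(srv.Serve(lis)) // the single socket is shared by all clients
}
```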
### Troubleshooting
###### How does this feature react if the API server and/or etcd is unavailable?

No effect.

###### What are other known failure modes?

No known failure modes.

###### What steps should be taken if SLOs are not being met to determine the problem?

N/A