# KEP-2907: Secrets Store CSI Provider

<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
  - [Goals](#goals)
  - [Non-Goals](#non-goals)
- [Proposal](#proposal)
  - [User Stories (Optional)](#user-stories-optional)
    - [Story 1](#story-1)
  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
  - [Risks and Mitigations](#risks-and-mitigations)
    - [Directory traversal vulnerabilities](#directory-traversal-vulnerabilities)
    - [Authenticating to external secret APIs](#authenticating-to-external-secret-apis)
- [Design Details](#design-details)
  - [Test Plan](#test-plan)
  - [Graduation Criteria](#graduation-criteria)
    - [GA](#ga)
    - [Deprecation](#deprecation)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
  - [Monitoring Requirements](#monitoring-requirements)
  - [Dependencies](#dependencies)
  - [Scalability](#scalability)
  - [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
<!-- /toc -->

## Release Signoff Checklist

<!--
**ACTION REQUIRED:** In order to merge code into a release, there must be an
issue in [kubernetes/enhancements] referencing this KEP and targeting a release
milestone **before the [Enhancement Freeze](https://git.k8s.io/sig-release/releases)
of the targeted release**.

For enhancements that make changes to code or processes/procedures in core
Kubernetes—i.e., [kubernetes/kubernetes], we require the following Release
Signoff checklist to be completed.

Check these off as they are completed for the Release Team to track. These
checklist items _must_ be updated for the enhancement to be released.
-->

Items marked with (R) are required *prior to targeting to a milestone / release*.

- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [ ] e2e Tests for all Beta API Operations (endpoints)
  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

<!--
**Note:** This checklist is iterative and should be reviewed and updated every time this enhancement is being considered for a milestone.
-->

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website

## Summary

The [secrets-store-csi-driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver) project provides a portable method for applications to consume secrets from external secret APIs through the filesystem. The effort was added as a `sig-auth` subproject in February 2020, and there are currently providers for Azure, AWS, GCP, and HashiCorp Vault. All providers for the driver are out-of-tree. This KEP covers making the core functionality of the driver GA.

## Motivation

<!--
This section is for explicitly listing the motivation, goals, and non-goals of
this KEP. Describe why the change is important and the benefits to users. The
motivation section can optionally provide links to [experience reports] to
demonstrate the interest in a KEP within the wider Kubernetes community.

[experience reports]: https://github.com/golang/go/wiki/ExperienceReports
-->

### Goals

- Signal the stability of the driver interface and implementation for the core task of making secrets available to the pod filesystem.

### Non-Goals

- Extending the Kubernetes Secret object
- Introducing a new Kubernetes type
- Consume only: the proposed CRD and implementation do not provide a way to add/write/edit secrets in the external stores.

## Proposal

This project introduces a new Container Storage Interface (CSI) driver for fetching secrets and writing them to a `tmpfs` mount in the Pod filesystem. The driver is deployed as a `DaemonSet`. A new Custom Resource Definition (CRD) called `SecretProviderClass` informs the driver of which external secret storage API to contact and how to map the secrets from that API to file paths. The driver communicates with the external secret provider processes through a gRPC interface over a Unix domain socket.

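To make the moving parts concrete, here is a minimal sketch of how the pieces fit together: a `SecretProviderClass` selects a provider and its parameters, and a Pod mounts it through an inline CSI volume. The resource names, the chosen provider, and the provider-specific `parameters` below are illustrative only; the real parameter schema is defined by each provider.

```yaml
# Illustrative sketch only: provider name and parameters vary by provider.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets                 # example name
spec:
  provider: vault                   # e.g. azure, aws, gcp, vault
  parameters:                       # provider-specific; contents here are hypothetical
    roleName: "example-role"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest     # example image
      volumeMounts:
        - name: secrets-store
          mountPath: /mnt/secrets                # secrets appear as files here
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```

From the application's point of view only the mount path matters; swapping providers is a change to the `SecretProviderClass`, not to the Pod spec.
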
### User Stories (Optional)

#### Story 1

1. Application reads secret from filesystem on startup
2. Application watches secret for rotation
3. Application Pod YAML remains unchanged and works across secret providers

### Notes/Constraints/Caveats (Optional)

Since the proposal is a storage driver, native support for presenting secrets to a process through environment variables is not possible. The driver includes a method to sync the mounted content to a Kubernetes Secret. This feature is optional, is not enabled by default, and will not be considered a GA feature of the driver.

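As a hedged sketch of that optional sync feature (the secret and object names below are hypothetical; the structure follows the `secretObjects` portion of the `SecretProviderClass` CRD), the mounted content can be mirrored into a Kubernetes Secret:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets-synced
spec:
  provider: vault                    # illustrative
  parameters: {}                     # provider-specific, omitted here
  secretObjects:                     # optional sync: Secret exists only while a pod mounts the volume
    - secretName: app-db             # Kubernetes Secret to create
      type: Opaque
      data:
        - objectName: db-password    # file written into the mount by the provider
          key: password              # key in the resulting Secret
```

The synced Secret can then be referenced with a normal `secretKeyRef` in the Pod's `env`, which is how environment-variable consumption works today.
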
### Risks and Mitigations

#### Directory traversal vulnerabilities

[CVE-2020-8567](https://github.com/kubernetes-sigs/secrets-store-csi-driver/issues/384) is an example of this risk. Since then, the interface between the driver and provider processes has been updated to:

- eliminate the `kubelet/pods` `hostPath` volume mount from providers
- consolidate all filesystem IO in the driver process

The driver protects against directory traversal vulnerabilities by re-using the `atomic_writer` used by Kubernetes Secrets and ConfigMaps, which includes protections against writing to unintended paths.

Providers still need a single `hostPath` to share a Unix domain socket file with the driver process, but this path has no other system-critical or security-sensitive access.

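For illustration, a provider `DaemonSet` fragment showing that single remaining `hostPath`; the image, labels, and socket directory path below are examples, not the required values (the actual socket directory is whatever the driver is configured to watch):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-provider
spec:
  selector:
    matchLabels:
      app: example-provider
  template:
    metadata:
      labels:
        app: example-provider
    spec:
      containers:
        - name: provider
          image: registry.example.com/example-provider:latest   # example image
          volumeMounts:
            - name: providers-dir
              mountPath: /provider           # provider writes example-provider.sock here
      volumes:
        - name: providers-dir
          hostPath:
            path: /var/run/secrets-store-csi-providers   # example path, shared with the driver DaemonSet
            type: DirectoryOrCreate
```
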
#### Authenticating to external secret APIs

Authentication and authorization are largely up to the external API and its provider process, but the driver itself includes a few features that enable scoping access to secrets to individual pods.

[Pod info](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/603-csi-pod-info) is propagated to the external provider.

Additionally, [KEP 1855](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1855-csi-driver-service-account-token) allows the driver to propagate a service account token to the provider. This enables providers to impersonate the pod when fetching secrets without needing broad RBAC grants.

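A sketch of how this is wired up through the driver's `CSIDriver` object; the field names are standard `storage.k8s.io/v1` CSIDriver spec fields, while the audience value is illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true           # pod name/namespace/UID/service account passed in the volume context
  requiresRepublish: true        # kubelet periodically re-publishes so short-lived tokens can be refreshed
  tokenRequests:                 # kubelet requests a bound service account token for this audience
    - audience: "example-audience"
  volumeLifecycleModes:
    - Ephemeral
```
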
## Design Details

### Test Plan

- Automated pre- and post-submit end-to-end integration tests
- Periodic end-to-end integration tests
- Supported providers each have their own integration test suites, along with a reference provider

### Graduation Criteria

#### GA

- A month or more of soak time for a minor release
- Completion of milestone requirements
- Agreement on stability from 3+ provider maintainers, documented on a community call
- User-facing API groups (`SecretProviderClass` and `SecretProviderClassPodStatus` CRDs) promoted to v1

#### Deprecation

There are currently no planned deprecations. The following rules will be followed if a deprecation is needed:

- Announce deprecation and support policy of the existing flag
- Two versions passed since introducing the functionality that deprecates the flag (to address version skew)
- Address feedback on usage/changed behavior, provided on GitHub issues
- Deprecate the flag

## Production Readiness Review Questionnaire

<!--
Production readiness reviews are intended to ensure that features merging into
Kubernetes are observable, scalable and supportable; can be safely operated in
production environments, and can be disabled or rolled back in the event they
cause increased failures in production. See more in the PRR KEP at
https://git.k8s.io/enhancements/keps/sig-architecture/1194-prod-readiness.

The production readiness review questionnaire must be completed and approved
for the KEP to move to `implementable` status and be included in the release.

In some cases, the questions below should also have answers in `kep.yaml`. This
is to enable automation to verify the presence of the review, and to reduce review
burden and latency.

The KEP must have an approver from the
[`prod-readiness-approvers`](http://git.k8s.io/enhancements/OWNERS_ALIASES)
team. Please reach out on the
[#prod-readiness](https://kubernetes.slack.com/archives/CPNHUMN74) channel if
you need any help or guidance.
-->

### Feature Enablement and Rollback

This will be deployed to clusters as a standalone component, separate from core Kubernetes.

### Monitoring Requirements

###### How can an operator determine if the feature is in use by workloads?

<!--
Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
-->

###### How can someone using this feature know that it is working for their instance?

- Non-zero `total_node_publish` metrics indicate the CSI driver is used by workloads.
- `total_sync_k8s_secret` metrics indicate the optional sync-as-Kubernetes-Secret feature is used by workloads.
- `total_rotation_reconcile` metrics indicate the optional rotation reconciliation feature is used by workloads.

<!--
For instance, if this is a pod-related feature, it should be possible to determine if the feature is functioning properly
for each individual pod.
Pick one or more of these and delete the rest.
Please describe all items visible to end users below with sufficient detail so that they can verify correct enablement
and operation of this feature.
Recall that end users cannot usually observe component logs or access metrics.
-->

###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?

- `total_node_publish_error`
  - Any rising count of this metric indicates a problem with mounting the volume for a pod.
- `total_node_unpublish_error`
  - Any rising count of this metric indicates a problem with unmounting the volume for a pod.

<!--
This is your opportunity to define what "normal" quality of service looks like
for a feature.

It's impossible to provide comprehensive guidance, but at the very
high level (needs more precise definitions) those may be things like:
- per-day percentage of API calls finishing with 5XX errors <= 1%
- 99% percentile over day of absolute value from (job creation time minus expected
  job creation time) for cron job <= 10%
- 99.9% of /health requests per day finish with 200 code

These goals will help you determine what you need to measure (SLIs) in the next
question.
-->

###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?

<!--
Pick one or more of these and delete the rest.
-->

- [x] Metrics
  - Metric name: `total_node_publish`
  - Components exposing the metric: `secrets-store-csi-driver`
- [x] Other
  - Details: The CSI driver is configured with liveness and readiness probes. The liveness check is performed using the [liveness-probe](https://github.com/kubernetes-csi/livenessprobe) sidecar container, which exposes an HTTP `/healthz` endpoint that serves as the kubelet's livenessProbe hook for monitoring the health of the CSI driver. The sidecar uses the CSI `Probe()` RPC call to check that the driver is healthy.

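A sketch of the probe wiring in the driver's pod spec (a fragment of the DaemonSet pod template, with illustrative image tags and port number), assuming the standard kubernetes-csi livenessprobe sidecar:

```yaml
containers:
  - name: secrets-store
    image: registry.k8s.io/csi-secrets-store/driver:v1.0.0     # illustrative tag
    ports:
      - name: healthz
        containerPort: 9808          # illustrative health port
    livenessProbe:
      httpGet:
        path: /healthz               # served by the liveness-probe sidecar's health server
        port: healthz
      initialDelaySeconds: 30
      periodSeconds: 15
  - name: liveness-probe
    image: registry.k8s.io/sig-storage/livenessprobe:v2.7.0    # illustrative tag
    args:
      - --csi-address=/csi/csi.sock  # driver's CSI socket, checked via the CSI Probe() RPC
      - --health-port=9808
    volumeMounts:
      - name: socket-dir
        mountPath: /csi
```
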
###### Are there any missing metrics that would be useful to have to improve observability of this feature?

<!--
Describe the metrics themselves and the reasons why they weren't added (e.g., cost,
implementation difficulties, etc.).
-->

### Dependencies

- [Kubernetes Container Storage Interface](https://github.com/kubernetes/community/blob/98b3d97d2e7f91bb62b8e88710c29c1675efb689/contributors/design-proposals/storage/container-storage-interface.md)
- [KEP 596: CSI Inline Volume Support GA](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/596-csi-inline-volumes)
- [KEP 1855: Service Account Token for CSI Driver](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1855-csi-driver-service-account-token)
- Windows container support (Kubernetes v1.18+)

The driver uses CSI inline volumes to mount the external secrets-store objects in the pod. The CSI inline volumes feature is enabled by default in Kubernetes 1.16+. For Windows containers, it is enabled by default in Kubernetes 1.18+.

The minimum supported Kubernetes version is 1.16 for Linux and 1.18 for Windows.

###### Does this feature depend on any specific services running in the cluster?

- Kubelet
  - If the kubelet service is not running, pods that reference a volume backed by the CSI driver will fail to start.

<!--
Think about both cluster-level services (e.g. metrics-server) as well
as node-level agents (e.g. specific version of CRI). Focus on external or
optional services that are needed. For example, if this feature depends on
a cloud provider API, or upon an external software-defined storage or network
control plane.

For each of these, fill in the following—thinking about running existing user workloads
and creating new ones, as well as about cluster-level services (e.g. DNS):
  - [Dependency name]
    - Usage description:
      - Impact of its outage on the feature:
      - Impact of its degraded performance or high-error rates on the feature:
-->

### Scalability

Load test results: https://secrets-store-csi-driver.sigs.k8s.io/load-tests.html

<!--
For alpha, this section is encouraged: reviewers should consider these questions
and attempt to answer them.

For beta, this section is required: reviewers must answer these questions.

For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field.
-->

###### Will enabling / using this feature result in any new API calls?

<!--
Describe them, providing:
- API call type (e.g. PATCH pods)
- estimated throughput
- originating component(s) (e.g. Kubelet, Feature-X-controller)
Focusing mostly on:
- components listing and/or watching resources they didn't before
- API calls that may be triggered by changes of some Kubernetes resources
  (e.g. update of object X triggers new updates of object Y)
- periodic API calls to reconcile state (e.g. periodic fetching state,
  heartbeats, leader election, etc.)
-->

###### Will enabling / using this feature result in introducing new API types?

<!--
Describe them, providing:
- API type
- Supported number of objects per cluster
- Supported number of objects per namespace (for namespace-scoped objects)
-->

###### Will enabling / using this feature result in any new calls to the cloud provider?

<!--
Describe them, providing:
- Which API(s):
- Estimated increase:
-->

###### Will enabling / using this feature result in increasing size or count of the existing API objects?

<!--
Describe them, providing:
- API type(s):
- Estimated increase in size: (e.g., new annotation of size 32B)
- Estimated amount of new objects: (e.g., new Object X for every existing Pod)
-->

###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?

<!--
Look at the [existing SLIs/SLOs].

Think about adding additional work or introducing new steps in between
(e.g. need to do X to start a container), etc. Please describe the details.

[existing SLIs/SLOs]: https://git.k8s.io/community/sig-scalability/slos/slos.md#kubernetes-slisslos
-->

###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?

<!--
Things to keep in mind include: additional in-memory state, additional
non-trivial computations, excessive access to disks (including increased log
volume), significant amount of data sent and/or received over network, etc.
Think through this both in small and large cases, again with respect to the
[supported limits].

[supported limits]: https://git.k8s.io/community//sig-scalability/configs-and-limits/thresholds.md
-->

### Troubleshooting

https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html

## Implementation History

- Dec 2018 - [First commit](https://github.com/kubernetes-sigs/secrets-store-csi-driver/commit/56fb54bfdb2058ef043ff36363b90787a52c51b7#diff-bc37d034bad564583790a46f19d807abfe519c5671395fd494d8cce506c42947)
- Feb 2020 - [Incorporated into sig-auth with a pluggable provider model](https://github.com/kubernetes/org/issues/1245)
- May 2020 - [v0.0.10](https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/tag/v0.0.10) (first release)
- July 2021 - [v0.1.0](https://github.com/kubernetes-sigs/secrets-store-csi-driver/releases/tag/v0.1.0) (first minor release)

## Drawbacks

- Environment variables: there appears to be a strong desire to consume secrets through environment variables, but currently the only way for this to work is by syncing the mounted content to Kubernetes Secrets.

## Alternatives

<!--
What other approaches did you consider, and why did you rule them out? These do
not need to be as detailed as the proposal, but should include enough
information to express the idea and why it was not acceptable.
-->
