Commit ee1b7ef

Merge pull request #34291 from shannonxtreme/secrets-good-practices

Add good practices page for secrets

2 parents b38e3e1 + b8ac776

File tree

4 files changed: +166 -87 lines changed

content/en/docs/concepts/configuration/secret.md

13 additions, 58 deletions
@@ -36,11 +36,12 @@ Additionally, anyone who is authorized to create a Pod in a namespace can use th
 In order to safely use Secrets, take at least the following steps:
 
 1. [Enable Encryption at Rest](/docs/tasks/administer-cluster/encrypt-data/) for Secrets.
-1. [Enable or configure RBAC rules](/docs/reference/access-authn-authz/authorization/) that
-   restrict reading and writing the Secret. Be aware that secrets can be obtained
-   implicitly by anyone with the permission to create a Pod.
-1. Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed
-   to create new Secrets or replace existing ones.
+1. [Enable or configure RBAC rules](/docs/reference/access-authn-authz/authorization/) with least-privilege access to Secrets.
+1. Restrict Secret access to specific containers.
+1. [Consider using external Secret store providers](https://secrets-store-csi-driver.sigs.k8s.io/concepts.html#provider-for-the-secrets-store-csi-driver).
+
+For more guidelines to manage and improve the security of your Secrets, refer to
+[Good practices for Kubernetes Secrets](/docs/concepts/security/secrets-good-practices).
 
 {{< /caution >}}

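The new least-privilege RBAC recommendation in this hunk can be sketched as a namespaced Role/RoleBinding pair. The namespace, Secret name, and ServiceAccount below are hypothetical, chosen only to illustrate the pattern:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader      # hypothetical Role name
  namespace: my-app        # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-tls"]  # limit access to one named Secret
  verbs: ["get"]              # deliberately omit list and watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-tls
  namespace: my-app
subjects:
- kind: ServiceAccount
  name: my-app-sa          # hypothetical ServiceAccount
  namespace: my-app
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Omitting the `list` and `watch` verbs and pinning `resourceNames` keeps a compromised workload from enumerating every Secret in the namespace.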
@@ -174,7 +175,7 @@ systems on your behalf.
 
 Secret volume sources are validated to ensure that the specified object
 reference actually points to an object of type Secret. Therefore, a Secret
-needs to be created before any Pods that depend on it. 
+needs to be created before any Pods that depend on it.
 
 If the Secret cannot be fetched (perhaps because it does not exist, or
 due to a temporary lack of connection to the API server) the kubelet
@@ -324,7 +325,7 @@ secret volume mount have permission `0400`.
 {{< note >}}
 If you're defining a Pod or a Pod template using JSON, beware that the JSON
 specification doesn't support octal notation. You can use the decimal value
-for the `defaultMode` (for example, 0400 in octal is 256 in decimal) instead. 
+for the `defaultMode` (for example, 0400 in octal is 256 in decimal) instead.
 If you're writing YAML, you can write the `defaultMode` in octal.
 {{< /note >}}

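As a sketch of the `defaultMode` note above, a Secret volume declared in YAML can use the octal value directly (names here are illustrative; a JSON equivalent would use `"defaultMode": 256`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod    # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret   # illustrative Secret name
      defaultMode: 0400         # octal in YAML; 256 in decimal for JSON
```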
@@ -931,7 +932,7 @@ data:
 After creating the Secret, wait for Kubernetes to populate the `token` key in the `data` field.
 
 See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/)
-documentation for more information on how service accounts work. 
+documentation for more information on how service accounts work.
 You can also check the `automountServiceAccountToken` field and the
 `serviceAccountName` field of the
 [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
@@ -1280,61 +1281,15 @@ Secrets that a Pod requests are potentially visible within its containers.
 Therefore, one Pod does not have access to the Secrets of another Pod.
 
 {{< warning >}}
-Any privileged containers on a node are liable to have access to all Secrets used
-on that node.
+Any containers that run with `privileged: true` on a node can access all
+Secrets used on that node.
 {{< /warning >}}
 
 
-### Security recommendations for developers
-
-- Applications still need to protect the value of confidential information after reading it
-  from an environment variable or volume. For example, your application must avoid logging
-  the secret data in the clear or transmitting it to an untrusted party.
-- If you are defining multiple containers in a Pod, and only one of those
-  containers needs access to a Secret, define the volume mount or environment
-  variable configuration so that the other containers do not have access to that
-  Secret.
-- If you configure a Secret through a {{< glossary_tooltip text="manifest" term_id="manifest" >}},
-  with the secret data encoded as base64, sharing this file or checking it in to a
-  source repository means the secret is available to everyone who can read the manifest.
-  Base64 encoding is _not_ an encryption method, it provides no additional confidentiality
-  over plain text.
-- When deploying applications that interact with the Secret API, you should
-  limit access using
-  [authorization policies](/docs/reference/access-authn-authz/authorization/) such as
-  [RBAC](/docs/reference/access-authn-authz/rbac/).
-- In the Kubernetes API, `watch` and `list` requests for Secrets within a namespace
-  are extremely powerful capabilities. Avoid granting this access where feasible, since
-  listing Secrets allows the clients to inspect the values of every Secret in that
-  namespace.
-
-### Security recommendations for cluster administrators
-
-{{< caution >}}
-A user who can create a Pod that uses a Secret can also see the value of that Secret. Even
-if cluster policies do not allow a user to read the Secret directly, the same user could
-have access to run a Pod that then exposes the Secret.
-{{< /caution >}}
-
-- Reserve the ability to `watch` or `list` all secrets in a cluster (using the Kubernetes
-  API), so that only the most privileged, system-level components can perform this action.
-- When deploying applications that interact with the Secret API, you should
-  limit access using
-  [authorization policies](/docs/reference/access-authn-authz/authorization/) such as
-  [RBAC](/docs/reference/access-authn-authz/rbac/).
-- In the API server, objects (including Secrets) are persisted into
-  {{< glossary_tooltip term_id="etcd" >}}; therefore:
-  - only allow cluster administrators to access etcd (this includes read-only access);
-  - enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
-    for Secret objects, so that the data of these Secrets are not stored in the clear
-    into {{< glossary_tooltip term_id="etcd" >}};
-  - consider wiping / shredding the durable storage used by etcd once it is
-    no longer in use;
-  - if there are multiple etcd instances, make sure that etcd is
-    using SSL/TLS for communication between etcd peers.
-
 ## {{% heading "whatsnext" %}}
 
+- For guidelines to manage and improve the security of your Secrets, refer to
+  [Good practices for Kubernetes Secrets](/docs/concepts/security/secrets-good-practices).
 - Learn how to [manage Secrets using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
 - Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
 - Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
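The "Restrict Secret access to specific containers" step added to this file comes down to mounting the Secret volume only into the container that needs it. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod    # hypothetical Pod name
spec:
  containers:
  - name: frontend           # does NOT mount the Secret volume
    image: nginx
  - name: backend            # the only container with access
    image: registry.example/backend:v1   # hypothetical image
    volumeMounts:
    - name: db-creds
      mountPath: /etc/db-creds
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: database-credentials   # hypothetical Secret
```

Because the `frontend` container declares no `volumeMounts` entry for `db-creds`, the secret data never appears in its filesystem.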

content/en/docs/concepts/security/multi-tenancy.md

24 additions, 25 deletions
@@ -1,7 +1,7 @@
 ---
 title: Multi-tenancy
 content_type: concept
-weight: 70
+weight: 80
 ---
 
 <!-- overview -->
@@ -27,7 +27,7 @@ The first step to determining how to share your cluster is understanding your us
 evaluate the patterns and tools available. In general, multi-tenancy in Kubernetes clusters falls
 into two broad categories, though many variations and hybrids are also possible.
 
-### Multiple teams 
+### Multiple teams
 
 A common form of multi-tenancy is to share a cluster between multiple teams within an
 organization, each of whom may operate one or more workloads. These workloads frequently need to
@@ -39,7 +39,7 @@ automation tools. There is often some level of trust between members of differen
 Kubernetes policies such as RBAC, quotas, and network policies are essential to safely and fairly
 share clusters.
 
-### Multiple customers 
+### Multiple customers
 
 The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor
 running multiple instances of a workload for customers. This business model is so strongly
@@ -83,7 +83,7 @@ services.
 
 There are several ways to design and build multi-tenant solutions with Kubernetes. Each of these
 methods comes with its own set of tradeoffs that impact the isolation level, implementation
-effort, operational complexity, and cost of service. 
+effort, operational complexity, and cost of service.
 
 A Kubernetes cluster consists of a control plane which runs Kubernetes software, and a data plane
 consisting of worker nodes where tenant workloads are executed as pods. Tenant isolation can be
@@ -95,7 +95,7 @@ implies strong isolation, and “soft” multi-tenancy, which implies weaker iso
 often from security and resource sharing perspectives (e.g. guarding against attacks such as data
 exfiltration or DoS). Since data planes typically have much larger attack surfaces, "hard"
 multi-tenancy often requires extra attention to isolating the data-plane, though control plane
-isolation also remains critical. 
+isolation also remains critical.
 
 However, the terms "hard" and "soft" can often be confusing, as there is no single definition that
 will apply to all users. Rather, "hardness" or "softness" is better understood as a broad
@@ -118,7 +118,7 @@ your needs or capabilities change.
 ## Control plane isolation
 
 Control plane isolation ensures that different tenants cannot access or affect each others'
-Kubernetes API resources. 
+Kubernetes API resources.
 
 ### Namespaces

@@ -161,15 +161,15 @@ are less useful for multi-tenant clusters.
 
 In a multi-team environment, RBAC must be used to restrict tenants' access to the appropriate
 namespaces, and ensure that cluster-wide resources can only be accessed or modified by privileged
-users such as cluster administrators. 
+users such as cluster administrators.
 
 If a policy ends up granting a user more permissions than they need, this is likely a signal that
 the namespace containing the affected resources should be refactored into finer-grained
 namespaces. Namespace management tools may simplify the management of these finer-grained
 namespaces by applying common RBAC policies to different namespaces, while still allowing
 fine-grained policies where necessary.
 
-### Quotas 
+### Quotas
 
 Kubernetes workloads consume node resources, like CPU and memory. In a multi-tenant environment,
 you can use [Resource Quotas](/docs/concepts/policy/resource-quotas/) to manage resource usage of
@@ -188,7 +188,7 @@ than built-in quotas.
 
 Quotas prevent a single tenant from consuming greater than their allocated share of resources
 hence minimizing the “noisy neighbor” issue, where one tenant negatively impacts the performance
-of other tenants' workloads. 
+of other tenants' workloads.
 
 When you apply a quota to namespace, Kubernetes requires you to also specify resource requests and
 limits for each container. Limits are the upper bound for the amount of resources that a container
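The per-tenant quota mechanism discussed in this hunk can be sketched as one ResourceQuota per tenant namespace; the namespace name and limits below are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a      # one quota object per tenant namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requests across all Pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"        # total CPU limits across all Pods
    limits.memory: 16Gi
```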
@@ -242,11 +242,11 @@ However, they can be significantly more complex to manage and may not be appropr
 
 Kubernetes offers several types of volumes that can be used as persistent storage for workloads.
 For security and data-isolation, [dynamic volume provisioning](/docs/concepts/storage/dynamic-provisioning/)
-is recommended and volume types that use node resources should be avoided. 
+is recommended and volume types that use node resources should be avoided.
 
 [StorageClasses](/docs/concepts/storage/storage-classes/) allow you to describe custom "classes"
 of storage offered by your cluster, based on quality-of-service levels, backup policies, or custom
-policies determined by the cluster administrators. 
+policies determined by the cluster administrators.
 
 Pods can request storage using a [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
 A PersistentVolumeClaim is a namespaced resource, which enables isolating portions of the storage
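A StorageClass plus a namespaced PersistentVolumeClaim referencing it might look like the sketch below. The class name, CSI provisioner, and size are assumptions for illustration; the provisioner in particular depends on the cluster's storage backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-standard        # illustrative class name
provisioner: ebs.csi.aws.com   # illustrative CSI provisioner; varies by platform
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: tenant-a          # PVCs are namespaced, isolating storage per tenant
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: tenant-standard
  resources:
    requests:
      storage: 10Gi
```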
@@ -291,7 +291,7 @@ sandboxing implementations are available:
   userspace kernel, written in Go, with limited access to the underlying host.
 * [Kata Containers](https://katacontainers.io/) is an OCI compliant runtime that allows you to run
   containers in a VM. The hardware virtualization available in Kata offers an added layer of
-  security for containers running untrusted code. 
+  security for containers running untrusted code.
 
 ### Node Isolation

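Sandboxed runtimes such as gVisor or Kata Containers are typically selected through a RuntimeClass. The handler name below assumes the node's container runtime (e.g. containerd) has been configured with a `runsc` handler, so treat it as an assumption:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc               # assumes the CRI runtime exposes a runsc handler
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod        # illustrative name
spec:
  runtimeClassName: gvisor   # run this Pod under the sandboxed runtime
  containers:
  - name: app
    image: nginx
```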
@@ -308,7 +308,7 @@ services. A skilled attacker could use the permissions assigned to the kubelet o
 running on the node to move laterally within the cluster and gain access to tenant workloads
 running on other nodes. If this is a major concern, consider implementing compensating controls
 such as seccomp, AppArmor or SELinux or explore using sandboxed containers or creating separate
-clusters for each tenant. 
+clusters for each tenant.
 
 Node isolation is a little easier to reason about from a billing standpoint than sandboxing
 containers since you can charge back per node rather than per pod. It also has fewer compatibility
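One common way to implement the node isolation described here is to label and taint each tenant's node pool and pin that tenant's Pods to it. The label and taint keys below are hypothetical:

```yaml
# Nodes for tenant-a would first be labeled and tainted, e.g.:
#   kubectl label node <node> tenant=tenant-a
#   kubectl taint node <node> tenant=tenant-a:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-workload    # hypothetical name
  namespace: tenant-a
spec:
  nodeSelector:
    tenant: tenant-a         # only schedule onto tenant-a nodes
  tolerations:
  - key: "tenant"
    operator: "Equal"
    value: "tenant-a"
    effect: "NoSchedule"     # tolerate the taint that keeps other tenants off
  containers:
  - name: app
    image: nginx
```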
@@ -332,7 +332,7 @@ feature that allows you to assign a priority to certain pods running within the
 When an application calls the Kubernetes API, the API server evaluates the priority assigned to pod.
 Calls from pods with higher priority are fulfilled before those with a lower priority.
 When contention is high, lower priority calls can be queued until the server is less busy or you
-can reject the requests. 
+can reject the requests.
 
 Using API priority and fairness will not be very common in SaaS environments unless you are
 allowing customers to run applications that interface with the Kubernetes API, for example,
@@ -346,7 +346,7 @@ service that comes with fewer performance guarantees and features and a for-fee
 specific performance guarantees. Fortunately, there are several Kubernetes constructs that can
 help you accomplish this within a shared cluster, including network QoS, storage classes, and pod
 priority and preemption. The idea with each of these is to provide tenants with the quality of
-service that they paid for. Let’s start by looking at networking QoS. 
+service that they paid for. Let’s start by looking at networking QoS.
 
 Typically, all pods on a node share a network interface. Without network QoS, some pods may
 consume an unfair share of the available bandwidth at the expense of other pods.
@@ -356,7 +356,7 @@ for networking that allows you to use Kubernetes resources constructs, i.e. requ
 apply rate limits to pods by using Linux tc queues.
 Be aware that the plugin is considered experimental as per the
 [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping)
-documentation and should be thoroughly tested before use in production environments. 
+documentation and should be thoroughly tested before use in production environments.
 
 For storage QoS, you will likely want to create different storage classes or profiles with
 different performance characteristics. Each storage profile can be associated with a different
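With the experimental bandwidth CNI plugin referenced in this hunk, per-Pod rate limits are expressed as annotations; the Pod name and limits below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited-pod     # illustrative name
  annotations:
    kubernetes.io/ingress-bandwidth: 1M   # cap inbound traffic at ~1 Mbit/s
    kubernetes.io/egress-bandwidth: 1M    # cap outbound traffic at ~1 Mbit/s
spec:
  containers:
  - name: app
    image: nginx
```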
@@ -396,14 +396,14 @@ that supports multiple tenants.
 [Operators](/docs/concepts/extend-kubernetes/operator/) are Kubernetes controllers that manage
 applications. Operators can simplify the management of multiple instances of an application, like
 a database service, which makes them a common building block in the multi-consumer (SaaS)
-multi-tenancy use case. 
+multi-tenancy use case.
 
 Operators used in a multi-tenant environment should follow a stricter set of guidelines.
-Specifically, the Operator should: 
+Specifically, the Operator should:
 
 * Support creating resources within different tenant namespaces, rather than just in the namespace
   in which the Operator is deployed.
-* Ensure that the Pods are configured with resource requests and limits, to ensure scheduling and fairness. 
+* Ensure that the Pods are configured with resource requests and limits, to ensure scheduling and fairness.
 * Support configuration of Pods for data-plane isolation techniques such as node isolation and
   sandboxed containers.

@@ -416,7 +416,7 @@ There are two primary ways to share a Kubernetes cluster for multi-tenancy: usin
 plane per tenant).
 
 In both cases, data plane isolation, and management of additional considerations such as API
-Priority and Fairness, is also recommended. 
+Priority and Fairness, is also recommended.
 
 Namespace isolation is well-supported by Kubernetes, has a negligible resource cost, and provides
 mechanisms to allow tenants to interact appropriately, such as by allowing service-to-service
@@ -492,25 +492,24 @@ referred to as a _virtual control plane_.
 A virtual control plane typically consists of the Kubernetes API server, the controller manager,
 and the etcd data store. It interacts with the super cluster via a metadata synchronization
 controller which coordinates changes across tenant control planes and the control plane of the
-super-cluster. 
+super-cluster.
 
 By using per-tenant dedicated control planes, most of the isolation problems due to sharing one
 API server among all tenants are solved. Examples include noisy neighbors in the control plane,
 uncontrollable blast radius of policy misconfigurations, and conflicts between cluster scope
 objects such as webhooks and CRDs. Hence, the virtual control plane model is particularly
 suitable for cases where each tenant requires access to a Kubernetes API server and expects the
-full cluster manageability. 
+full cluster manageability.
 
 The improved isolation comes at the cost of running and maintaining an individual virtual control
 plane per tenant. In addition, per-tenant control planes do not solve isolation problems in the
 data plane, such as node-level noisy neighbors or security threats. These must still be addressed
 separately.
 
 The Kubernetes [Cluster API - Nested (CAPN)](https://github.com/kubernetes-sigs/cluster-api-provider-nested/tree/main/virtualcluster)
-project provides an implementation of virtual control planes. 
+project provides an implementation of virtual control planes.
 
 #### Other implementations
 
 * [Kamaji](https://github.com/clastix/kamaji)
-* [vcluster](https://github.com/loft-sh/vcluster)
-
+* [vcluster](https://github.com/loft-sh/vcluster)
