Commit 678f09c

merged zh master feature-gates changes to dev-1.21 feature-gates

2 parents: 6453dc5 + 6d25262
File tree: 14 files changed (+1584 -1038 lines)

assets/scss/_custom.scss

Lines changed: 7 additions & 0 deletions

```diff
@@ -639,3 +639,10 @@ body.td-documentation {
   }
 }
 
+.td-content {
+  table code {
+    background-color: inherit !important;
+    color: inherit !important;
+    font-size: inherit !important;
+  }
+}
```
New file — Lines changed: 74 additions & 0 deletions

---
layout: blog
title: "PodSecurityPolicy Deprecation: Past, Present, and Future"
date: 2021-04-06
slug: podsecuritypolicy-deprecation-past-present-and-future
---

**Author:** Tabitha Sable (Kubernetes SIG Security)

PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week. This starts the countdown to its removal, but doesn’t change anything else. PodSecurityPolicy will continue to be fully functional for several more releases before being removed completely. In the meantime, we are developing a replacement for PSP that covers key use cases more easily and sustainably.

What are Pod Security Policies? Why did we need them? Why are they going away, and what’s next? How does this affect you? These key questions come to mind as we prepare to say goodbye to PSP, so let’s walk through them together. We’ll start with an overview of how features get removed from Kubernetes.

## What does deprecation mean in Kubernetes?

Whenever a Kubernetes feature is set to go away, our [deprecation policy](/docs/reference/using-api/deprecation-policy/) is our guide. First the feature is marked as deprecated, then after enough time has passed, it can finally be removed.

Kubernetes 1.21 starts the deprecation process for PodSecurityPolicy. As with all feature deprecations, PodSecurityPolicy will continue to be fully functional for several more releases. The current plan is to remove PSP from Kubernetes in the 1.25 release.

Until then, PSP is still PSP. There will be at least a year during which the newest Kubernetes releases will still support PSP, and nearly two years until PSP will pass fully out of all supported Kubernetes versions.
21+
22+
## What is PodSecurityPolicy?
23+
24+
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) is a built-in [admission controller](/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/) that allows a cluster administrator to control security-sensitive aspects of the Pod specification.
25+
26+
First, one or more PodSecurityPolicy resources are created in a cluster to define the requirements Pods must meet. Then, RBAC rules are created to control which PodSecurityPolicy applies to a given pod. If a pod meets the requirements of its PSP, it will be admitted to the cluster as usual. In some cases, PSP can also modify Pod fields, effectively creating new defaults for those fields. If a Pod does not meet the PSP requirements, it is rejected, and cannot run.
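
The flow above can be sketched in manifest form. This is an illustrative example only — the policy name, namespace, and field choices are assumptions, not taken from this post: a PSP that forbids privileged pods, a ClusterRole granting `use` of it, and a RoleBinding applying it to every service account in one namespace.

```yaml
# Hypothetical example: a restrictive PodSecurityPolicy plus the RBAC
# objects that let pods in one namespace validate against it.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted          # illustrative name
spec:
  privileged: false                 # reject pods that request privileged mode
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot          # validates (and can default) the security context
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir", "projected"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-example-restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["example-restricted"]
  verbs: ["use"]                    # "use" is the PSP-specific RBAC verb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-example-restricted
  namespace: team-a                 # illustrative namespace
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:team-a   # every service account in team-a
roleRef:
  kind: ClusterRole
  name: psp-example-restricted
  apiGroup: rbac.authorization.k8s.io
```

With these objects in place, pods created in `team-a` are admitted only if they satisfy the policy.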

One more important thing to know about PodSecurityPolicy: it’s not the same as [PodSecurityContext](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).

A part of the Pod specification, PodSecurityContext (and its per-container counterpart `SecurityContext`) is the collection of fields that specify many of the security-relevant settings for a Pod. The security context dictates to the kubelet and container runtime how the Pod should actually be run. In contrast, the PodSecurityPolicy only constrains (or defaults) the values that may be set on the security context.
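
For comparison, here is roughly what those fields look like in a Pod manifest (a minimal sketch; the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo   # placeholder name
spec:
  securityContext:              # PodSecurityContext: applies to all containers
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: nginx                # placeholder image
    securityContext:            # per-container SecurityContext overrides
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```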

The deprecation of PSP does not affect PodSecurityContext in any way.

## Why did we need PodSecurityPolicy?

In Kubernetes, we define resources such as Deployments, StatefulSets, and Services that represent the building blocks of software applications. The various controllers inside a Kubernetes cluster react to these resources, creating further Kubernetes resources or configuring some software or hardware to accomplish our goals.

In most Kubernetes clusters, RBAC (Role-Based Access Control) [rules](/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) control access to these resources. `list`, `get`, `create`, `edit`, and `delete` are the sorts of API operations that RBAC cares about, but _RBAC does not consider what settings are being put into the resources it controls_. For example, a Pod can be almost anything from a simple webserver to a privileged command prompt offering full access to the underlying server node and all the data. It’s all the same to RBAC: a Pod is a Pod is a Pod.
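
To illustrate the point, a minimal Role that allows creating Pods might look like the sketch below (the names are hypothetical). Notice that the RBAC grammar has nowhere to express anything about the Pod's contents:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator      # hypothetical name
  namespace: dev         # hypothetical namespace
rules:
- apiGroups: [""]        # "" is the core API group, where Pods live
  resources: ["pods"]
  verbs: ["create", "get", "list", "delete"]
# RBAC stops at the verb: this rule cannot distinguish a plain webserver
# Pod from one that sets privileged: true or mounts the host filesystem.
```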

To control what sorts of settings are allowed in the resources defined in your cluster, you need Admission Control in addition to RBAC. Since Kubernetes 1.3, PodSecurityPolicy has been the built-in way to do that for security-related Pod fields. Using PodSecurityPolicy, you can prevent “create Pod” from automatically meaning “root on every cluster node,” without needing to deploy additional external admission controllers.
41+
42+
## Why is PodSecurityPolicy going away?
43+
44+
In the years since PodSecurityPolicy was first introduced, we have realized that PSP has some serious usability problems that can’t be addressed without making breaking changes.
45+
46+
The way PSPs are applied to Pods has proven confusing to nearly everyone that has attempted to use them. It is easy to accidentally grant broader permissions than intended, and difficult to inspect which PSP(s) apply in a given situation. The “changing Pod defaults” feature can be handy, but is only supported for certain Pod settings and it’s not obvious when they will or will not apply to your Pod. Without a “dry run” or audit mode, it’s impractical to retrofit PSP to existing clusters safely, and it’s impossible for PSP to ever be enabled by default.
47+
48+
For more information about these and other PSP difficulties, check out SIG Auth’s KubeCon NA 2019 Maintainer Track session video: {{< youtube "SFtHRmPuhEw?start=953" youtube-quote-sm >}}
49+
50+
Today, you’re not limited only to deploying PSP or writing your own custom admission controller. Several external admission controllers are available that incorporate lessons learned from PSP to provide a better user experience. [K-Rail](https://github.com/cruise-automation/k-rail), [Kyverno](https://github.com/kyverno/kyverno/), and [OPA/Gatekeeper](https://github.com/open-policy-agent/gatekeeper/) are all well-known, and each has its fans.
51+
52+
Although there are other good options available now, we believe there is still value in having a built-in admission controller available as a choice for users. With this in mind, we turn toward building what’s next, inspired by the lessons learned from PSP.
53+
54+
## What’s next?
55+
56+
Kubernetes SIG Security, SIG Auth, and a diverse collection of other community members have been working together for months to ensure that what’s coming next is going to be awesome. We have developed a Kubernetes Enhancement Proposal ([KEP 2579](https://github.com/kubernetes/enhancements/issues/2579)) and a prototype for a new feature, currently being called by the temporary name "PSP Replacement Policy." We are targeting an Alpha release in Kubernetes 1.22.
57+
58+
PSP Replacement Policy starts with the realization that since there is a robust ecosystem of external admission controllers already available, PSP’s replacement doesn’t need to be all things to all people. Simplicity of deployment and adoption is the key advantage a built-in admission controller has compared to an external webhook, so we have focused on how to best utilize that advantage.
59+
60+
PSP Replacement Policy is designed to be as simple as practically possible while providing enough flexibility to really be useful in production at scale. It has soft rollout features to enable retrofitting it to existing clusters, and is configurable enough that it can eventually be active by default. It can be deactivated partially or entirely, to coexist with external admission controllers for advanced use cases.
61+
62+
## What does this mean for you?
63+
64+
What this all means for you depends on your current PSP situation. If you’re already using PSP, there’s plenty of time to plan your next move. Please review the PSP Replacement Policy KEP and think about how well it will suit your use case.
65+
66+
If you’re making extensive use of the flexibility of PSP with numerous PSPs and complex binding rules, you will likely find the simplicity of PSP Replacement Policy too limiting. Use the next year to evaluate the other admission controller choices in the ecosystem. There are resources available to ease this transition, such as the [Gatekeeper Policy Library](https://github.com/open-policy-agent/gatekeeper-library).
67+
68+
If your use of PSP is relatively simple, with a few policies and straightforward binding to service accounts in each namespace, you will likely find PSP Replacement Policy to be a good match for your needs. Evaluate your PSPs compared to the Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/) to get a feel for where you’ll be able to use the Restricted, Baseline, and Privileged policies. Please follow along with or contribute to the KEP and subsequent development, and try out the Alpha release of PSP Replacement Policy when it becomes available.

If you’re just beginning your PSP journey, you will save time and effort by keeping it simple. You can approximate the functionality of PSP Replacement Policy today by using the Pod Security Standards’ PSPs. If you set the cluster default by binding a Baseline or Restricted policy to the `system:serviceaccounts` group, and then make a more-permissive policy available as needed in certain Namespaces [using ServiceAccount bindings](/docs/concepts/policy/pod-security-policy/#run-another-pod), you will avoid many of the PSP pitfalls and have an easy migration to PSP Replacement Policy. If your needs are much more complex than this, your effort is probably better spent adopting one of the more fully-featured external admission controllers mentioned above.
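
That cluster-wide default could be sketched like this, assuming you have already created a PSP named `baseline` from the Pod Security Standards (the object names here are illustrative, not prescribed by this post):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-baseline-user
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["baseline"]    # assumes this PSP already exists
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-psp-for-all-serviceaccounts
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts   # every service account, in every namespace
roleRef:
  kind: ClusterRole
  name: psp-baseline-user
  apiGroup: rbac.authorization.k8s.io
```

A more-permissive PSP can then be granted per-namespace by binding its ClusterRole to individual ServiceAccounts, which overrides this default for just those workloads.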

We’re dedicated to making Kubernetes the best container orchestration tool we can, and sometimes that means we need to remove longstanding features to make space for better things to come. When that happens, the Kubernetes deprecation policy ensures you have plenty of time to plan your next move. In the case of PodSecurityPolicy, several options are available to suit a range of needs and use cases. Start planning ahead now for PSP’s eventual removal, and please consider contributing to its replacement! Happy securing!

**Acknowledgment:** It takes a wonderful group to make wonderful software. Thanks are due to everyone who has contributed to the PSP replacement effort, especially (in alphabetical order) Tim Allclair, Ian Coldwater, and Jordan Liggitt. It’s been a joy to work with y’all on this.

content/en/docs/reference/command-line-tools-reference/feature-gates.md

Lines changed: 1 addition & 4 deletions

```diff
@@ -582,10 +582,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
   [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
 - `CustomResourceWebhookConversion`: Enable webhook-based conversion
   on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
-  troubleshoot a running Pod.
 - `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
   [default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints).
-- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
+- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
   based resource provisioning on nodes.
 - `DisableAcceleratorUsageMetrics`:
   [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics).
@@ -786,8 +785,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
   topology of the cluster. See
   [ServiceTopology](/docs/concepts/services-networking/service-topology/)
   for more details.
-- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes.
-  See [volumes](/docs/concepts/storage/volumes) for more details.
 - `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain
   Name(FQDN) as the hostname of a pod. See
   [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
```

content/ja/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -75,7 +75,7 @@ kind: ClusterConfiguration
 kubernetesVersion: v1.16.0
 scheduler:
   extraArgs:
-    address: 0.0.0.0
+    bind-address: 0.0.0.0
     config: /home/johndoe/schedconfig.yaml
     kubeconfig: /home/johndoe/kubeconfig.yaml
 ```
````

content/ja/docs/tasks/run-application/run-stateless-application-deployment.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -49,7 +49,6 @@ You can run an application by creating a Kubernetes Deployment object.
 
 The output is similar to this:
 
-    user@computer:~/website$ kubectl describe deployment nginx-deployment
     Name:                   nginx-deployment
     Namespace:              default
     CreationTimestamp:      Tue, 30 Aug 2016 18:11:37 -0700
```

content/ja/docs/tutorials/stateless-application/guestbook.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -185,7 +185,7 @@ The Deployment creates Pods based on the configuration written in the manifest file.
 1. Query the list of Pods to verify that the three frontend replicas are running:
 
    ```shell
-   kubectl get pods -l app=guestbook -l tier=frontend
+   kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend
    ```
 
    The response should look similar to this:
````

content/pt/docs/concepts/architecture/master-node-communication.md renamed to content/pt/docs/concepts/architecture/control-plane-node-communication.md

Lines changed: 23 additions & 20 deletions

```diff
@@ -1,15 +1,12 @@
 ---
-reviewers:
-- dchen1107
-- liggitt
-title: Communication between Node and Master
+title: Communication between Node and Control Plane
 content_type: concept
 weight: 20
 ---
 
 <!-- overview -->
 
-This document catalogs the communication paths between the master (the
+This document catalogs the communication paths between the control plane (the
 apiserver) and the Kubernetes cluster. The intent is to allow users to
 customize their installation to harden the network configuration so the
 cluster can be run on an untrusted network (or on fully public IPs on a
@@ -20,10 +17,10 @@ cloud provider).
 
 <!-- body -->
 
-## Cluster to the Master
+## Cluster to the Control Plane
 
-All communication paths from the cluster to the master terminate at the
-apiserver (none of the other master components are designed to expose
+All communication paths from the cluster to the control plane terminate at the
+apiserver (none of the other control plane components are designed to expose
 remote services). In a typical deployment, the apiserver is configured to listen for
 remote connections on a secure HTTPS port (443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
 One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/)
@@ -41,30 +38,30 @@ for automated provisioning of kubelet client certificates.
 Pods that wish to connect to the apiserver can do so securely by leveraging a
 service account so that Kubernetes will automatically inject the public root
 certificate and a valid bearer token into the pod when it is instantiated.
-The `kubernetes` service (in all namespaces) is configured with a virtual IP
+The `kubernetes` service (in the `default` namespace) is configured with a virtual IP
 address that is redirected (via kube-proxy) to the HTTPS endpoint on the
 apiserver.
 
-The core components also communicate with the cluster apiserver over the secure port.
+The control plane components also communicate with the cluster apiserver over the secure port.
 
 As a result, the default operating mode for connections from the cluster
-(nodes and pods running on the nodes) to the master is secured by default
-and can traverse untrusted and / or public networks.
+(nodes and pods running on the nodes) to the control plane is secured by default
+and can traverse untrusted and/or public networks.
 
-## Master to the Cluster
+## Control Plane to the Cluster
 
-There are two primary communication paths from the master (apiserver) to the
-cluster. The first is from the apiserver to the kubelet process that runs on
-each node in the cluster. The second is from the apiserver to any node, pod,
+There are two primary communication paths from the control plane (apiserver) to the nodes.
+The first is from the apiserver to the kubelet process that runs on
+each node in the cluster. The second is from the apiserver to any node, pod,
 or service through the apiserver's proxy functionality.
 
 ### apiserver to the kubelet
 
 The connections from the apiserver to the kubelet are used for:
 
 * Fetching logs for pods.
-  * Attaching (through kubectl) to running pods.
-  * Providing the kubelet's port-forwarding functionality.
+* Attaching (through kubectl) to running pods.
+* Providing the kubelet's port-forwarding functionality.
 
 These connections terminate at the kubelet's HTTPS endpoint. By default,
 the apiserver does not verify the kubelet's serving certificate,
@@ -94,12 +91,18 @@ These connections **are not currently safe** to run over untrusted networks.
 
 ### SSH Tunnels
 
-Kubernetes supports SSH tunnels to protect the master -> cluster communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node
+Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node
 in the cluster (connecting to the ssh server listening on port 22) and passes
 all traffic destined for a kubelet, node, pod, or service through the tunnel.
 This tunnel ensures that the traffic is not exposed outside of the network in
 which the nodes are running.
 
-SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. A replacement for this communication channel is being designed.
+SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel.
 
+### Konnectivity service
+
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+
+As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections.
+
+See the [Konnectivity task](docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster.
```

content/vi/docs/reference/glossary/api-group.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,7 +2,7 @@
 title: API Group
 id: api-group
 date: 2019-12-16
-full_link: /docs/concepts/overview/kubernetes-api/#api-groups
+full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
 short_description: >
   A set of related paths in the Kubernetes API.
 
@@ -18,4 +18,4 @@ A set of related paths in the Kubernetes API.
 
 You can enable or disable each API group by changing the configuration on your API server. You can also disable or enable paths for specific resources. API groups make it easier to extend the Kubernetes API. They are specified in a REST path and in the `apiVersion` field of a serialized object.
 
-- Read more about [API Group](/docs/concepts/overview/kubernetes-api/#api-groups).
+- Read more about [API Group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning).
```
