- - Depending on your cluster environment, this may just expose the service to your corporate network,
+ - Depending on your cluster environment, this may only expose the service to your corporate network,
    or it may expose it to the internet. Think about whether the service being exposed is secure.
    Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging,
@@ -148,15 +148,15 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
<!--
#### Manually constructing apiserver proxy URLs
- As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
+ As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
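For illustration only, a sketch of what such a constructed proxy URL can look like; the host, namespace, service name, and path below are hypothetical placeholders, not values taken from this change:

```
https://<apiserver-host>:<port>/api/v1/namespaces/<namespace>/services/<service-name>[:<port-name>]/proxy/<path>

# e.g. (hypothetical)
https://192.0.2.10:6443/api/v1/namespaces/default/services/my-service:http/proxy/healthz
```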
Examples of critical add-ons becoming pending include: the cluster being highly utilized; the space vacated by an evicted critical add-on Pod being taken by a previously pending Pod; or the total amount of resources available on the node changing for some other reason.
+ <!--
+ Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable.
+ A static pod marked as critical cannot be evicted. However, non-static pods marked as critical are always rescheduled.
+ -->
+ Note that marking a Pod as critical does not mean the Pod will never be evicted; it only prevents the Pod from becoming permanently unavailable.
+ Static Pods marked as critical cannot be evicted. However, non-static Pods marked as critical are always rescheduled.
<!-- body -->
@@ -29,12 +34,9 @@ vacated by the evicted critical add-on pod or the amount of resources available
### Marking pod as critical
<!--
- To be considered critical, the pod has to run in the `kube-system` namespace (configurable via flag) and
- * Have the priorityClassName set as "system-cluster-critical" or "system-node-critical", the latter being the highest for entire cluster. Alternatively, you could add an annotation `scheduler.alpha.kubernetes.io/critical-pod` as key and empty string as value to your pod, but this annotation is deprecated as of version 1.13 and will be removed in 1.14.
+ To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`.
-->
- To mark a pod as critical, the pod must run in the kube-system namespace (configurable via a flag).
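As a minimal sketch of the new guidance (the Pod name and image below are hypothetical placeholders, not part of this change):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon-example   # hypothetical name
  namespace: kube-system
spec:
  # Request the highest built-in priority; system-cluster-critical is the
  # slightly lower alternative for cluster-level (non per-node) add-ons.
  priorityClassName: system-node-critical
  containers:
  - name: addon
    image: registry.k8s.io/pause:3.9   # placeholder image
```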
content/zh/docs/tasks/administer-cluster/securing-a-cluster.md (18 additions, 17 deletions)
@@ -29,7 +29,7 @@ and provides recommendations on overall security.
<!--
## Controlling access to the Kubernetes API
- As Kubernetes is entirely APIdriven, controlling and limiting who can access the cluster and what actions
+ As Kubernetes is entirely API-driven, controlling and limiting who can access the cluster and what actions
they are allowed to perform is the first line of defense.
-->
## Controlling access to the Kubernetes API
@@ -53,7 +53,7 @@ Kubernetes expects that all API communication in the cluster uses TLS by default
### API Authentication
Choose an authentication mechanism for the API servers to use that matches the common access patterns
- when you install a cluster. For instance, small singleuser clusters may wish to use a simple certificate
+ when you install a cluster. For instance, small single-user clusters may wish to use a simple certificate
or static Bearer token approach. Larger clusters may wish to integrate an existing OIDC or LDAP server that
allow users to be subdivided into groups.
@@ -80,7 +80,7 @@ Consult the [authentication reference document](/docs/reference/access-authn-aut
Once authenticated, every API call is also expected to pass an authorization check. Kubernetes ships
an integrated [Role-Based Access Control (RBAC)](/docs/reference/access-authn-authz/rbac/) component that matches an incoming user or group to a
set of permissions bundled into roles. These permissions combine verbs (get, create, delete) with
- resources (pods, services, nodes) and can be namespace or clusterscoped. A set of out of thebox
+ resources (pods, services, nodes) and can be namespace-scoped or cluster-scoped. A set of out-of-the-box
roles are provided that offer reasonable default separation of responsibility depending on what
actions a client might want to perform. It is recommended that you use the [Node](/docs/reference/access-authn-authz/node/) and [RBAC](/docs/reference/access-authn-authz/rbac/) authorizers together, in combination with the
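For illustration, a hedged sketch of a namespace-scoped RBAC role and binding of the kind described above; the namespace, role name, and user are hypothetical, not taken from this change:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev            # hypothetical namespace
  name: pod-reader          # hypothetical role name
rules:
- apiGroups: [""]           # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```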
@@ -110,8 +110,8 @@ With authorization, it is important to understand how updates on one object may
other places. For instance, a user may not be able to create pods directly, but allowing them to
create a deployment, which creates pods on their behalf, will let them create those pods
indirectly. Likewise, deleting a node from the API will result in the pods scheduled to that node
- being terminated and recreated on other nodes. The out of thebox roles represent a balance
- between flexibility and the common use cases, but more limited roles should be carefully reviewed
+ being terminated and recreated on other nodes. The out-of-the-box roles represent a balance
+ between flexibility and common use cases, but more limited roles should be carefully reviewed
to prevent accidental escalation. You can make roles specific to your use case if the out-of-box ones don't meet your needs.

Consult the [authorization reference section](/docs/reference/access-authn-authz/authorization/) for more information.
@@ -183,7 +183,7 @@ reserved resources like memory, or to provide default limits when none are speci
### Controlling what privileges containers run with

A pod definition contains a [security context](/docs/tasks/configure-pod-container/security-context/)
- that allows it to request access to running as a specific Linux user on a node (like root),
+ that allows it to request access to run as a specific Linux user on a node (like root),
access to run privileged or access the host network, and other controls that would otherwise
allow it to run unfettered on a hosting node. [Pod security policies](/docs/concepts/policy/pod-security-policy/)
can limit which users or service accounts can provide dangerous security context settings. For example, pod security policies can limit volume mounts, especially `hostPath`, which are aspects of a pod that should be controlled.
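A minimal sketch of a Pod security context along the lines discussed above; the name and image are placeholders, and the exact fields you restrict will depend on your policy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example       # hypothetical name
spec:
  hostNetwork: false             # do not share the node's network namespace
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000              # run as an unprivileged UID
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
```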
@@ -227,11 +227,11 @@ now respect network policy.
<!--
Quota and limit ranges can also be used to control whether users may request node ports or
- loadbalanced services, which on many clusters can control whether those users applications
+ load-balanced services, which on many clusters can control whether those users' applications
are visible outside of the cluster.

- Additional protections may be available that control network rules on a perplugin or per
- environment basis, such as per-node firewalls, physically separating cluster nodes to
+ Additional protections may be available that control network rules on a per-plugin or
+ per-environment basis, such as per-node firewalls, physically separating cluster nodes to
prevent cross talk, or advanced networking policy.
-->
On many clusters where this determines whether users' applications are visible outside the cluster, quota and limit ranges can also be used to
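As a sketch of the quota idea mentioned in this hunk, a ResourceQuota that forbids NodePort and LoadBalancer Services in one namespace; the name and namespace are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: limit-exposed-services   # hypothetical name
  namespace: team-a              # hypothetical namespace
spec:
  hard:
    services.nodeports: "0"      # no NodePort Services may be created
    services.loadbalancers: "0"  # no LoadBalancer Services may be created
```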
@@ -248,7 +248,7 @@ By default these APIs are accessible by pods running on an instance and can cont
credentials for that node, or provisioning data such as kubelet credentials. These credentials
can be used to escalate within the cluster or to other cloud services under the same account.

- When running Kubernetes on a cloud platform limit permissions given to instance credentials, use
+ When running Kubernetes on a cloud platform, limit permissions given to instance credentials, use
[network policies](/docs/tasks/administer-cluster/declare-network-policy/) to restrict pod access
to the metadata API, and avoid using provisioning data to deliver secrets.
-->
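One possible sketch of such a network policy: an egress rule that allows all destinations except the link-local metadata address commonly used by cloud providers. The policy name and namespace are hypothetical, and your network plugin must support egress policies for this to be enforced:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access    # hypothetical name
  namespace: default
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32    # cloud instance metadata endpoint
```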
@@ -268,7 +268,7 @@ to the metadata API, and avoid using provisioning data to deliver secrets.
By default, there are no restrictions on which nodes may run a pod. Kubernetes offers a
[rich set of policies for controlling placement of pods onto nodes](/docs/concepts/configuration/assign-pod-node/)
- and the [taintbased pod placement and eviction](/docs/concepts/configuration/taint-and-toleration/)
+ and the [taint-based pod placement and eviction](/docs/concepts/configuration/taint-and-toleration/)
that are available to end users. For many clusters use of these policies to separate workloads
can be a convention that authors adopt or enforce via tooling.
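As an illustrative sketch of taint-based separation (the taint key, value, and node label below are hypothetical): a node would carry a taint such as `dedicated=sensitive:NoSchedule`, and only pods that both tolerate the taint and select the node can land there:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload        # hypothetical name
spec:
  nodeSelector:
    dedicated: sensitive         # hypothetical node label
  tolerations:
  - key: "dedicated"             # matches the hypothetical node taint
    operator: "Equal"
    value: "sensitive"
    effect: "NoSchedule"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
```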
The shorter the lifetime of a secret or credential the harder it is for an attacker to make
use of that credential. Set short lifetimes on certificates and automate their rotation. Use
an authentication provider that can control how long issued tokens are available and use short
- lifetimes where possible. If you use serviceaccount tokens in external integrations, plan to
+ lifetimes where possible. If you use service-account tokens in external integrations, plan to
rotate those tokens frequently. For example, once the bootstrap phase is complete, a bootstrap token used for setting up nodes should be revoked or its authorization removed.
-->
### Rotate infrastructure credentials frequently
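One way to keep service-account tokens short-lived in external integrations is a projected token volume with an explicit expiration; a minimal sketch, where the audience, expiry, and names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: integration-client        # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    volumeMounts:
    - name: short-lived-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: short-lived-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600        # token rotates roughly hourly
          audience: external-service     # hypothetical audience
```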
@@ -406,9 +406,10 @@ and may grant an attacker significant visibility into the state of your cluster.
your backups using a well reviewed backup and encryption solution, and consider using full disk
encryption where possible.

- Kubernetes 1.7 contains [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/), an alpha feature that will encrypt `Secret` resources in etcd, preventing
+ Kubernetes supports [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/), a feature
+ introduced in 1.7, and beta since 1.13. This will encrypt `Secret` resources in etcd, preventing
parties that gain access to your etcd backups from viewing the content of those secrets. While
- this feature is currently experimental, it may offer an additional level of defense when backups
+ this feature is currently beta, it offers an additional level of defense when backups
are not encrypted or an attacker gains read access to etcd.
-->
### Encrypting Secrets at rest
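A minimal sketch of an encryption-at-rest configuration, passed to the API server with `--encryption-provider-config`; the key material shown is a placeholder and must be replaced with a locally generated 32-byte, base64-encoded key:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_32_BYTE_KEY>   # placeholder
      - identity: {}    # fallback for reading data written before encryption was enabled
```

Provider order matters: the first provider is used to encrypt new writes, while the others are only tried when reading existing data.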
@@ -417,9 +418,9 @@ are not encrypted or an attacker gains read access to etcd.
content/zh/docs/tasks/administer-cluster/topology-manager.md (10 additions, 6 deletions)
@@ -132,6 +132,15 @@ To align CPU resources with other requested resources in a Pod Spec, the CPU Man
See [Control CPU Management Policies](/zh/docs/tasks/administer-cluster/cpu-management-policies/).
{{< /note >}}

+ <!--
+ To align memory (and hugepages) resources with other requested resources in a Pod Spec, the Memory Manager should be enabled and proper Memory Manager policy should be configured on a Node. Examine [Memory Manager](/docs/tasks/administer-cluster/memory-manager/) documentation.
+ -->
+ {{< note >}}
+ To align the memory (and hugepages) resources in a Pod spec with the other requested resources, the Memory Manager must be enabled,
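A hedged sketch of a kubelet configuration that enables the Memory Manager alongside the CPU and Topology Managers. The reserved values are illustrative; for the `Static` memory policy the `reservedMemory` total is expected to match the node's reserved memory plus the hard eviction threshold, and on older releases the `MemoryManager` feature gate may also need to be enabled:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
memoryManagerPolicy: Static
topologyManagerPolicy: single-numa-node
systemReserved:
  cpu: "500m"
  memory: "1Gi"
evictionHard:
  "memory.available": "100Mi"
reservedMemory:
  - numaNode: 0
    limits:
      memory: 1124Mi    # 1Gi systemReserved + 100Mi eviction threshold
```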
@@ -487,15 +496,10 @@ Using this information the Topology Manager calculates the optimal hint for the
1. The maximum number of NUMA nodes that Topology Manager allows is 8. With more than 8 NUMA nodes there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.

2. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.

- 3. The Device Manager and the CPU Manager are the only components to adopt the Topology Manager's HintProvider interface. This means that NUMA alignment can only be achieved for resources managed by the CPU Manager and the Device Manager. Memory or Hugepages are not considered by the Topology Manager for NUMA alignment.