Commit 093df63

Merge pull request #22207 from fancc/konnectivity
translate 'set up Konnectivity service' into chinese
2 parents 0db6469 + 31aa397 commit 093df63

File tree

5 files changed: +242 -0 lines changed

Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
---
title: 设置 Konnectivity 服务
content_type: task
weight: 70
---

<!-- overview -->
<!--
The Konnectivity service provides a TCP level proxy for the control plane to cluster
communication.
-->
Konnectivity 服务为控制平面到集群的通信提供 TCP 层级的代理。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->
<!--
## Configure the Konnectivity service

The following steps require an egress configuration, for example:

{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}}

You need to configure the API Server to use the Konnectivity service
and direct the network traffic to the cluster nodes:

1. Create an egress configuration file such as `admin/konnectivity/egress-selector-configuration.yaml`.
1. Set the `--egress-selector-config-file` flag of the API Server to the path of
your API Server egress configuration file.
-->
## 配置 Konnectivity 服务

接下来的步骤需要出口配置,比如:

{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}}

你需要配置 API 服务器来使用 Konnectivity 服务,并将网络流量定向到集群节点:

1. 创建一个出口配置文件,比如 `admin/konnectivity/egress-selector-configuration.yaml`。
1. 将 API 服务器的 `--egress-selector-config-file` 参数设置为你的 API 服务器出口配置文件的路径。

<!--
Next, you need to deploy the Konnectivity server and agents.
[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy)
is a reference implementation.

Deploy the Konnectivity server on your control plane node. The provided
`konnectivity-server.yaml` manifest assumes
that the Kubernetes components are deployed as a {{< glossary_tooltip text="static Pod"
term_id="static-pod" >}} in your cluster. If not, you can deploy the Konnectivity
server as a DaemonSet.
-->
接下来,你需要部署 Konnectivity 服务器和代理。
[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy)
是一个参考实现。

在控制平面节点上部署 Konnectivity 服务器。下面提供的 `konnectivity-server.yaml` 配置清单假定
Kubernetes 组件在你的集群中以{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}}形式部署。
如果不是,你可以将 Konnectivity 服务器部署为 DaemonSet。

{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}}

<!--
Then deploy the Konnectivity agents in your cluster:
-->
然后在你的集群中部署 Konnectivity 代理:

{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}}

<!--
Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:
-->
最后,如果你的集群启用了 RBAC,请创建相关的 RBAC 规则:

{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}}
Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use the
# "cluster" as the name. Other supported values are "etcd", and "master".
- name: cluster
  connection:
    # This controls the protocol between the API Server and the Konnectivity
    # server. Supported values are "GRPC" and "HTTPConnect". There is no
    # end user visible difference between the two modes. You need to set the
    # Konnectivity server to work in the same mode.
    proxyProtocol: GRPC
    transport:
      # This controls what transport the API Server uses to communicate with the
      # Konnectivity server. UDS is recommended if the Konnectivity server
      # locates on the same machine as the API Server. You need to configure the
      # Konnectivity server to listen on the same UDS socket.
      # The other supported transport is "tcp". You will need to set up TLS
      # config to secure the TCP transport.
      uds:
        udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket
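The comments in this file stress that the Konnectivity server must be configured to match the egress configuration: same protocol mode, same UDS socket. As a minimal sketch of that relationship (the `egress_config` dict and `matching_server_flags` helper below are illustrative assumptions, not part of Kubernetes or the apiserver-network-proxy tooling):

```python
# Hypothetical helper (not part of Kubernetes): given the egress selector
# configuration above, derive the konnectivity-server flags that must agree
# with it, per the comments in the file.
egress_config = {
    "apiVersion": "apiserver.k8s.io/v1beta1",
    "kind": "EgressSelectorConfiguration",
    "egressSelections": [
        {
            "name": "cluster",
            "connection": {
                "proxyProtocol": "GRPC",
                "transport": {
                    "uds": {
                        "udsName": "/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket"
                    }
                },
            },
        }
    ],
}

def matching_server_flags(config):
    """Return the konnectivity-server flags implied by the egress config."""
    cluster = next(s for s in config["egressSelections"] if s["name"] == "cluster")
    conn = cluster["connection"]
    # "GRPC" in the config corresponds to --mode=grpc on the server.
    flags = ["--mode=" + conn["proxyProtocol"].lower()]
    uds = conn["transport"].get("uds")
    if uds is not None:
        # The server must listen on the same UDS socket.
        flags.append("--uds-name=" + uds["udsName"])
    return flags

print(matching_server_flags(egress_config))
# → ['--mode=grpc', '--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket']
```

The derived values correspond to the `--mode` and `--uds-name` arguments that appear in the `konnectivity-server.yaml` manifest below.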
Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
apiVersion: apps/v1
# Alternatively, you can deploy the agents as Deployments. It is not necessary
# to have an agent on each node.
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: konnectivity-agent
  namespace: kube-system
  name: konnectivity-agent
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
          name: konnectivity-agent
          command: ["/proxy-agent"]
          args: [
                  "--logtostderr=true",
                  "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
                  # Since the konnectivity server runs with hostNetwork=true,
                  # this is the IP address of the master machine.
                  "--proxy-server-host=35.225.206.7",
                  "--proxy-server-port=8132",
                  "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
                  ]
          volumeMounts:
            - mountPath: /var/run/secrets/tokens
              name: konnectivity-agent-token
          livenessProbe:
            httpGet:
              port: 8093
              path: /healthz
            initialDelaySeconds: 15
            timeoutSeconds: 15
      serviceAccountName: konnectivity-agent
      volumes:
        - name: konnectivity-agent-token
          projected:
            sources:
              - serviceAccountToken:
                  path: konnectivity-agent-token
                  audience: system:konnectivity-server
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:konnectivity-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:konnectivity-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
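The string `system:konnectivity-server` appears in three places across these manifests and must be identical in all of them: the `ClusterRoleBinding` subject here, the projected token `audience` in `konnectivity-agent.yaml`, and the `--authentication-audience` flag in `konnectivity-server.yaml`. A tiny illustrative sketch of that invariant (the variables below are hypothetical, copied from the manifests, not produced by any Kubernetes tool):

```python
# Illustrative consistency check (hypothetical, not an official tool):
# the three manifests that name "system:konnectivity-server" must agree.
rbac_subject_user = "system:konnectivity-server"  # konnectivity-rbac.yaml subject
token_audience = "system:konnectivity-server"     # konnectivity-agent.yaml projected token
server_flag = "--authentication-audience=system:konnectivity-server"  # konnectivity-server.yaml

flag_value = server_flag.split("=", 1)[1]
assert rbac_subject_user == token_audience == flag_value
print("shared identity:", flag_value)
# → shared identity: system:konnectivity-server
```

If any one of the three is edited without the others, agents fail token authentication against the server.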
Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-server
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  hostNetwork: true
  containers:
    - name: konnectivity-server-container
      image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
      command: ["/proxy-server"]
      args: [
              "--log-file=/var/log/konnectivity-server.log",
              "--logtostderr=false",
              "--log-file-max-size=0",
              # This needs to be consistent with the value set in egressSelectorConfiguration.
              "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
              # The following two lines assume the Konnectivity server is
              # deployed on the same machine as the apiserver, and the certs and
              # key of the API Server are at the specified location.
              "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
              "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
              # This needs to be consistent with the value set in egressSelectorConfiguration.
              "--mode=grpc",
              "--server-port=0",
              "--agent-port=8132",
              "--admin-port=8133",
              "--agent-namespace=kube-system",
              "--agent-service-account=konnectivity-agent",
              "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
              "--authentication-audience=system:konnectivity-server"
              ]
      livenessProbe:
        httpGet:
          scheme: HTTP
          host: 127.0.0.1
          port: 8133
          path: /healthz
        initialDelaySeconds: 30
        timeoutSeconds: 60
      ports:
        - name: agentport
          containerPort: 8132
          hostPort: 8132
        - name: adminport
          containerPort: 8133
          hostPort: 8133
      volumeMounts:
        - name: varlogkonnectivityserver
          mountPath: /var/log/konnectivity-server.log
          readOnly: false
        - name: pki
          mountPath: /etc/srv/kubernetes/pki
          readOnly: true
        - name: konnectivity-uds
          mountPath: /etc/srv/kubernetes/konnectivity-server
          readOnly: false
  volumes:
    - name: varlogkonnectivityserver
      hostPath:
        path: /var/log/konnectivity-server.log
        type: FileOrCreate
    - name: pki
      hostPath:
        path: /etc/srv/kubernetes/pki
    - name: konnectivity-uds
      hostPath:
        path: /etc/srv/kubernetes/konnectivity-server
        type: DirectoryOrCreate
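The server and agent manifests encode matching port assignments: the agent's `--proxy-server-port` must equal the server's `--agent-port` (8132, also exposed as a `hostPort`), and the liveness probe targets the server's `--admin-port` (8133). A small sketch of checking those pairings (the `flag_value` helper and hardcoded args lists are illustrative assumptions, copied from the manifests above, not a Kubernetes utility):

```python
# Illustrative check (hypothetical, not part of Kubernetes): the
# konnectivity-agent dials the port the konnectivity-server exposes for
# agents, and the liveness probe targets the server's admin port.
server_args = ["--mode=grpc", "--server-port=0", "--agent-port=8132", "--admin-port=8133"]
agent_args = ["--proxy-server-host=35.225.206.7", "--proxy-server-port=8132"]

def flag_value(args, name):
    """Return the value of a --name=value flag in an args list, or None."""
    prefix = "--" + name + "="
    return next((a[len(prefix):] for a in args if a.startswith(prefix)), None)

# Agent-to-server connection port must line up.
assert flag_value(server_args, "agent-port") == flag_value(agent_args, "proxy-server-port")
# Liveness probe port (from the Pod spec) must be the admin port.
liveness_port = 8133
assert flag_value(server_args, "admin-port") == str(liveness_port)
print("agent port:", flag_value(server_args, "agent-port"))
# → agent port: 8132
```

Since the server runs with `hostNetwork: true`, the agent reaches it at the control-plane machine's IP on that same port, which is why the agent manifest hardcodes the node address.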
