Commit ac1d864 (1 parent: b8df304)

Author: Chao Xu

Instructions on how to set up the Konnectivity service.

File tree

6 files changed: +210 −0 lines changed

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
title: "Setup Konnectivity Service"
weight: 20
---
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
---
title: Setup Konnectivity Service
content_template: templates/task
weight: 110
---

The Konnectivity service provides a TCP-level proxy for the Master → Cluster
communication.

You can set it up with the following steps.

First, configure the API Server to use the Konnectivity service to direct its
network traffic to cluster nodes:

1. Set the `--egress-selector-config-file` flag of the API Server to the path
   of the API Server egress configuration file.
2. At that path, create a configuration file. For example:

{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}}
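The flag itself is typically set where the API Server command is defined. A minimal sketch, assuming the API Server runs as a static pod; the manifest excerpt and the configuration file path below are illustrative, not prescriptive:

```yaml
# Hypothetical excerpt of a kube-apiserver static pod manifest.
# The egress configuration file path is an example; use the path
# where you created the file in step 2.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --egress-selector-config-file=/etc/srv/kubernetes/egress-selector-configuration.yaml
```

If the configuration uses the UDS transport, the API Server process also needs access to the directory holding the Konnectivity server socket, for example via a hostPath volume mount on the same directory.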
Next, deploy the Konnectivity server and agents.
[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy)
is a reference implementation.

Deploy the Konnectivity server on your master node. The provided yaml assumes
that the Kubernetes components are deployed as a {{< glossary_tooltip text="static pod"
term_id="static-pod" >}} in your cluster. If not, you can deploy it as a
DaemonSet for reliability.

{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}}
30+
31+
Then deploy the Konnectivity agents in your cluster:
32+
33+
{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}}
Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:

{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}}
Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Since we want to control the egress traffic to the cluster, we use
# "cluster" as the name. Other supported values are "etcd" and "master".
- name: cluster
  connection:
    # This controls the protocol between the API Server and the Konnectivity
    # server. Supported values are "GRPC" and "HTTPConnect". There is no
    # end user visible difference between the two modes. You need to set the
    # Konnectivity server to work in the same mode.
    proxyProtocol: GRPC
    transport:
      # This controls what transport the API Server uses to communicate with
      # the Konnectivity server. UDS is recommended if the Konnectivity server
      # is located on the same machine as the API Server. You need to configure
      # the Konnectivity server to listen on the same UDS socket.
      # The other supported transport is "tcp". You will need to set up TLS
      # config to secure the TCP transport.
      uds:
        udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket
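For the case the comments mention, where the Konnectivity server does not share a machine with the API Server, the `transport` stanza would use "tcp" instead of "uds". A hedged sketch of that alternative; the host, port, and certificate paths are illustrative assumptions:

```yaml
# Hypothetical alternative transport block for a remote Konnectivity server.
transport:
  tcp:
    # Address of the Konnectivity server's server port (example value).
    url: https://konnectivity-server.example.com:8131
    # TLS config securing the TCP transport; paths are illustrative.
    tlsConfig:
      caBundle: /etc/srv/kubernetes/pki/konnectivity-server-ca.crt
      clientKey: /etc/srv/kubernetes/pki/konnectivity-client.key
      clientCert: /etc/srv/kubernetes/pki/konnectivity-client.crt
```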
Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
apiVersion: apps/v1
# Alternatively, you can deploy the agents as Deployments. It is not necessary
# to have an agent on each node.
kind: DaemonSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: konnectivity-agent
  namespace: kube-system
  name: konnectivity-agent
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  template:
    metadata:
      labels:
        k8s-app: konnectivity-agent
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
          name: konnectivity-agent
          command: ["/proxy-agent"]
          args: [
            "--logtostderr=true",
            "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
            # Since the konnectivity server runs with hostNetwork=true,
            # this is the IP address of the master machine.
            "--proxy-server-host=35.225.206.7",
            "--proxy-server-port=8132",
            "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
          ]
          volumeMounts:
            - mountPath: /var/run/secrets/tokens
              name: konnectivity-agent-token
          livenessProbe:
            httpGet:
              port: 8093
              path: /healthz
            initialDelaySeconds: 15
            timeoutSeconds: 15
      serviceAccountName: konnectivity-agent
      volumes:
        - name: konnectivity-agent-token
          projected:
            sources:
              - serviceAccountToken:
                  path: konnectivity-agent-token
                  audience: system:konnectivity-server
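As the comment at the top of this file notes, an agent on every node is not required; the agents could also run as a Deployment. A minimal sketch of that variant, assuming the same pod template as the DaemonSet above; the replica count is an arbitrary illustrative choice:

```yaml
# Hypothetical alternative: a fixed number of agents instead of one per node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: konnectivity-agent
  namespace: kube-system
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: konnectivity-agent
  # template: reuse the pod template from the DaemonSet above unchanged
```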
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:konnectivity-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:konnectivity-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: konnectivity-agent
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
apiVersion: v1
kind: Pod
metadata:
  name: konnectivity-server
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical
  hostNetwork: true
  containers:
    - name: konnectivity-server-container
      image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
      command: ["/proxy-server"]
      args: [
        "--log-file=/var/log/konnectivity-server.log",
        "--logtostderr=false",
        "--log-file-max-size=0",
        # This needs to be consistent with the value set in egressSelectorConfiguration.
        "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
        # The following two lines assume the Konnectivity server is
        # deployed on the same machine as the apiserver, and the certs and
        # key of the API Server are at the specified location.
        "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
        "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
        # This needs to be consistent with the value set in egressSelectorConfiguration.
        "--mode=grpc",
        "--server-port=0",
        "--agent-port=8132",
        "--admin-port=8133",
        "--agent-namespace=kube-system",
        "--agent-service-account=konnectivity-agent",
        "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
        "--authentication-audience=system:konnectivity-server"
      ]
      livenessProbe:
        httpGet:
          scheme: HTTP
          host: 127.0.0.1
          port: 8133
          path: /healthz
        initialDelaySeconds: 30
        timeoutSeconds: 60
      ports:
        - name: agentport
          containerPort: 8132
          hostPort: 8132
        - name: adminport
          containerPort: 8133
          hostPort: 8133
      volumeMounts:
        - name: varlogkonnectivityserver
          mountPath: /var/log/konnectivity-server.log
          readOnly: false
        - name: pki
          mountPath: /etc/srv/kubernetes/pki
          readOnly: true
        - name: konnectivity-uds
          mountPath: /etc/srv/kubernetes/konnectivity-server
          readOnly: false
  volumes:
    - name: varlogkonnectivityserver
      hostPath:
        path: /var/log/konnectivity-server.log
        type: FileOrCreate
    - name: pki
      hostPath:
        path: /etc/srv/kubernetes/pki
    - name: konnectivity-uds
      hostPath:
        path: /etc/srv/kubernetes/konnectivity-server
        type: DirectoryOrCreate
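The `--kubeconfig` flag above points at a kubeconfig the Konnectivity server uses to reach the API Server, for example to validate agent service account tokens. A minimal sketch of what such a file could look like; the server address and certificate paths are illustrative assumptions, while the `system:konnectivity-server` identity matches the RBAC binding above:

```yaml
# Hypothetical kubeconfig at /etc/srv/kubernetes/konnectivity-server/kubeconfig.
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    # Example address; the server runs on the same machine as the apiserver here.
    server: https://127.0.0.1:443
    certificate-authority: /etc/srv/kubernetes/pki/ca.crt
users:
- name: system:konnectivity-server
  user:
    # Client certificate whose identity matches the ClusterRoleBinding subject.
    client-certificate: /etc/srv/kubernetes/pki/konnectivity-server.crt
    client-key: /etc/srv/kubernetes/pki/konnectivity-server.key
contexts:
- name: system:konnectivity-server@kubernetes
  context:
    cluster: kubernetes
    user: system:konnectivity-server
current-context: system:konnectivity-server@kubernetes
```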
