Boot one or more CoreOS nodes which will be used as Kubernetes Workers. You must use CoreOS version 962.0.0 or later so that the /usr/lib/coreos/kubelet-wrapper script is present in the image. See kubelet-wrapper for more information.
See the CoreOS Documentation for guides on launching nodes on supported platforms.
Place the TLS keypairs generated previously in the following locations. Note that each keypair is unique and should be installed on the worker node it was generated for:
- File: /etc/kubernetes/ssl/ca.pem
- File: /etc/kubernetes/ssl/${WORKER_FQDN}-worker.pem
- File: /etc/kubernetes/ssl/${WORKER_FQDN}-worker-key.pem
And make sure you've set the proper permissions on the private key:
$ sudo chmod 600 /etc/kubernetes/ssl/*-key.pem
$ sudo chown root:root /etc/kubernetes/ssl/*-key.pem
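To double-check the result, you can inspect the key's mode and ownership with stat (part of coreutils, available on CoreOS); it should report 600 and root:root for each key:

$ stat -c '%a %U:%G %n' /etc/kubernetes/ssl/*-key.pem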
Create symlinks to the worker-specific certificate and key so that the remaining configurations on the workers do not have to be unique per worker.

$ cd /etc/kubernetes/ssl/
$ sudo ln -s ${WORKER_FQDN}-worker.pem worker.pem
$ sudo ln -s ${WORKER_FQDN}-worker-key.pem worker-key.pem
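Optionally, you can confirm that the symlinked certificate and key actually belong together before going further. One way, assuming the openssl CLI is available on the node and the RSA keys generated earlier in this guide, is to compare the public-key modulus of each; the two digests should be identical:

# optional sanity check: both commands should print the same md5 digest
$ openssl x509 -noout -modulus -in /etc/kubernetes/ssl/worker.pem | openssl md5
$ openssl rsa -noout -modulus -in /etc/kubernetes/ssl/worker-key.pem | openssl md5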
Note: If the pod-network is being managed independently of flannel, then the flannel parts of this guide can be skipped. It's recommended that Calico is still used for providing network policy. See kubernetes networking for more detail.

Just like earlier, create /etc/flannel/options.env and modify these values:
- Replace ${ADVERTISE_IP} with this node's publicly routable IP.
- Replace ${ETCD_ENDPOINTS} with your list of etcd endpoints.
/etc/flannel/options.env
FLANNELD_IFACE=${ADVERTISE_IP}
FLANNELD_ETCD_ENDPOINTS=${ETCD_ENDPOINTS}
Next, create a systemd drop-in, which will use the above configuration when flannel starts:

/etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
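Before relying on flannel, it can be worth confirming that this worker can reach etcd and that the pod network configuration written during master setup is present. A rough check, assuming the etcd v2 API and flannel's default /coreos.com/network/config key:

# optional: verify etcd is reachable and the pod network config exists
$ etcdctl --endpoints="${ETCD_ENDPOINTS}" cluster-health
$ etcdctl --endpoints="${ETCD_ENDPOINTS}" get /coreos.com/network/config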
Note: If the pod-network is being managed independently, this step can be skipped. See kubernetes networking for more detail.

Require that flanneld is running before Docker starts.

Create /etc/systemd/system/docker.service.d/40-flannel.conf:
/etc/systemd/system/docker.service.d/40-flannel.conf
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
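If you want to confirm that systemd will pick up this drop-in, you can reload units and print the merged definition; the drop-in path should be listed above the unit body. This is purely an optional sanity check:

$ sudo systemctl daemon-reload
$ systemctl cat docker.service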
Create the Docker CNI Options file:

/etc/kubernetes/cni/docker_opts_cni.env
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""

If using Flannel for networking, set up the Flannel CNI configuration as shown below. If you intend to use Calico for networking, follow Set Up the CNI config (optional) instead.

/etc/kubernetes/cni/net.d/10-flannel.conf
{
    "name": "podnet",
    "type": "flannel",
    "delegate": {
        "isDefaultGateway": true
    }
}
Create /etc/systemd/system/kubelet.service and substitute the following variables:

- Replace ${MASTER_HOST}
- Replace ${ADVERTISE_IP} with this node's publicly routable IP.
- Replace ${DNS_SERVICE_IP}
- Replace ${K8S_VER} This will map to the quay.io/coreos/hyperkube:${K8S_VER} release, e.g. v1.5.2_coreos.0.
- If using Calico for network policy:
  - Replace ${NETWORK_PLUGIN} with cni
  - Add the following to RKT_OPTS:
    --volume cni-bin,kind=host,source=/opt/cni/bin \
    --mount volume=cni-bin,target=/opt/cni/bin
  - Add ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
- Decide if you will use additional features such as:
/etc/systemd/system/kubelet.service
[Service]
Environment=KUBELET_VERSION=${K8S_VER}
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=${MASTER_HOST} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime=docker \
--register-node=true \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override=${ADVERTISE_IP} \
--cluster_dns=${DNS_SERVICE_IP} \
--cluster_domain=cluster.local \
--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
--tls-cert-file=/etc/kubernetes/ssl/worker.pem \
--tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
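Before moving on, you can ask systemd to lint the unit you just wrote; systemd-analyze verify reports obvious problems such as syntax errors, and may also print unrelated warnings that are safe to ignore:

$ sudo systemd-analyze verify /etc/systemd/system/kubelet.service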
Create /etc/kubernetes/manifests/kube-proxy.yaml:

- Replace ${MASTER_HOST}
/etc/kubernetes/manifests/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=${MASTER_HOST}
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: "ssl-certs"
    - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "etc-kube-ssl"
      readOnly: true
  volumes:
  - name: "ssl-certs"
    hostPath:
      path: "/usr/share/ca-certificates"
  - name: "kubeconfig"
    hostPath:
      path: "/etc/kubernetes/worker-kubeconfig.yaml"
  - name: "etc-kube-ssl"
    hostPath:
      path: "/etc/kubernetes/ssl"
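The kubelet treats any manifest placed under /etc/kubernetes/manifests (the --pod-manifest-path set above) as a static pod and runs it without involving the API server. Once the kubelet has been started at the end of this guide, one simple way to confirm the proxy container came up is:

$ docker ps | grep kube-proxy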
In order to facilitate secure communication between Kubernetes components, kubeconfig can be used to define authentication settings. In this case, the kubelet and proxy are reading this configuration to communicate with the API.

Create /etc/kubernetes/worker-kubeconfig.yaml:
/etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
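If kubectl happens to be installed on the worker (it is not by default), this kubeconfig can be inspected directly to confirm the certificate paths and context are wired up as intended; a full end-to-end check is easier from the machine where kubectl is configured later in this series.

# optional, only if kubectl is present on the worker
$ kubectl --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml config view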
Now we can start the Worker services.

Tell systemd to rescan the units on disk:
$ sudo systemctl daemon-reload

Start the kubelet, which will start the proxy.
$ sudo systemctl start flanneld
$ sudo systemctl start kubelet
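The first start can take a few minutes while the hyperkube image is fetched. To watch progress, or to debug a kubelet that does not come up, you can follow the journal and list the rkt pod started by kubelet-wrapper:

$ journalctl -u kubelet -f
$ sudo rkt list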
Ensure that the services start on each boot:

$ sudo systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /etc/systemd/system/flanneld.service.
$ sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

To check the health of the kubelet systemd unit that we created, run systemctl status kubelet.service.
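Beyond the unit status, the kubelet also exposes a local health endpoint. Assuming the default healthz port of 10248 (this guide does not override it), a quick check from the worker itself should return ok:

# 10248 is the kubelet's default --healthz-port, bound to localhost
$ curl http://127.0.0.1:10248/healthz
ok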
Is the kubelet running?
Yes, ready to configure `kubectl`