Alpha Cluster Deployment
The alpha cluster is deployed under the same architecture described for the test cluster.
The alpha cluster deploys the following instances of each class:
1 master/control plane server:
- kubmaster01
2 node/worker servers:
- kubnode01
- kubnode02
1 NFS Storage server:
- kubvol01
All of the deployed alpha nodes are one of two types of instance:
- 1GB instance: (kubmaster01, kubvol01)
- 2GB instance: (kubnode01, kubnode02)
All instances were deployed in the same datacenter of the same provider in order to enable private network communication.
1GB instance:
- 1GB RAM
- 1 vCPU
- 20GB Storage
2GB instance:
- 2GB RAM
- 1 vCPU
- 30GB Storage
The three Kubernetes machines (kubmaster01, kubnode01, and kubnode02) are deployed as Fedora 25 instances. The following steps are applied to each:
- Set the system hostname
- Apply shared cluster configurations
- Disable password logins for the root user
- Install netdata for node monitoring
- Open firewall port for netdata
- Secure public ports
- Allow private network traffic
- Disable SELinux
Before copy/pasting, set shell variable:
- host: desired machine hostname
(
set -e
hostnamectl set-hostname ${host?}
dnf -y install git-core
git clone https://github.com/CodeForPhilly/ops.git /opt/ops
(
cd /opt/ops
ln -s ../../kubernetes/alpha-cluster/post-merge .git/hooks/post-merge
.git/hooks/post-merge
)
sed -i 's/^PermitRootLogin yes/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl restart sshd
curl -Ss 'https://raw.githubusercontent.com/firehol/netdata-demo-site/master/install-required-packages.sh' >/tmp/kickstart.sh && bash /tmp/kickstart.sh -i netdata-all && rm -f /tmp/kickstart.sh
git clone https://github.com/firehol/netdata.git --depth=1
( cd netdata && ./netdata-installer.sh --install /opt )
firewallctl zone '' -p add port 19999/tcp
firewallctl zone '' -p remove service cockpit
firewallctl zone internal -p add source 192.168.0.0/16
firewall-cmd --permanent --zone=internal --set-target=ACCEPT # for some inexplicable reason, this version of firewallctl does not provide a way to do this
firewallctl reload
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0
)
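As an optional sanity check on each Fedora node (nothing below is part of the original bootstrap; it only inspects the result), the firewall state and the netdata port can be verified:
(
set -e
# the public zone should now expose 19999/tcp and no longer offer cockpit
firewall-cmd --list-all
# netdata should answer on its default port
curl -sI http://localhost:19999 | head -n 1
)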
The NFS storage server (kubvol01) is deployed as an openSUSE Leap 42.2 instance. The following steps are applied:
- Set the system hostname
- Apply the shared cluster configurations
- Disable password logins for the root user
- Install the man command (don't ask me why it's not there to start with, or why it depends on 30 f'ing packages)
- Install netdata for node monitoring
- Lockdown public firewall
- Open firewall to private network
Before copy/pasting, set shell variable:
- host: desired machine hostname
(
set -e
hostnamectl set-hostname ${host?}
zypper in -y git-core
git clone https://github.com/CodeForPhilly/ops.git /opt/ops
(
cd /opt/ops
ln -s ../../kubernetes/alpha-cluster/post-merge .git/hooks/post-merge
.git/hooks/post-merge
)
sed -i 's/^PermitRootLogin yes/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl restart sshd
zypper in -y man
curl -Ss 'https://raw.githubusercontent.com/firehol/netdata-demo-site/master/install-required-packages.sh' >/tmp/kickstart.sh && bash /tmp/kickstart.sh -i netdata-all && rm -f /tmp/kickstart.sh
git clone https://github.com/firehol/netdata.git --depth=1
( cd netdata && ./netdata-installer.sh --install /opt )
zypper in -y firewalld
systemctl start firewalld
systemctl enable firewalld
firewallctl zone '' -p add interface eth0
firewallctl zone '' -p add port 19999/tcp
firewallctl zone internal -p add source 192.168.0.0/16
firewall-cmd --permanent --zone=internal --set-target=ACCEPT
firewallctl reload
)
These instructions presume that the workstation from which the administrator is working has been appropriately configured with the necessary workstation resources.
These nodes are deployed using the kubernetes contrib ansible playbooks. The python environment from which ansible is run will require the python-netaddr module in order to use the playbooks.
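How the python-netaddr dependency gets satisfied depends on how ansible is installed on the workstation; a minimal sketch, assuming either a Fedora workstation or a pip-managed environment:
# pick whichever matches the workstation's python environment
sudo dnf -y install python-netaddr   # Fedora system package
pip install netaddr                  # pip / virtualenv alternative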
Once the dependencies are satisfied, the following steps will provision the kubernetes nodes and master:
- Apply cluster configuration data to ansible playbooks
- Run ansible playbooks
- Open API port on master for remote use of kubectl
Before copy/pasting, set shell variables:
- repo_contrib: path to the kubernetes contrib repo
- repo_ops: path to the ops repo
(
set -e
cp "${repo_ops?}/kubernetes/alpha-cluster/workstation-resources/kubernetes-contrib.patch" "${repo_contrib?}/kubernetes-contrib.patch"
cd "${repo_contrib?}"
git apply kubernetes-contrib.patch
cd ansible/scripts
./deploy-cluster.sh
ssh root@kubmaster01 'firewallctl zone "" -p add service https && firewallctl reload'
)
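Assuming the playbooks have left the workstation's kubectl pointed at https://kubmaster01 (this is not guaranteed; adjust the kubeconfig as needed), a quick check that the master and both nodes registered:
(
set -e
kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces
)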
Back on the storage server (kubvol01), the following steps prepare it to serve container volumes:
- Install ZFS and NFS
- Load ZFS kernel module
- Create ZFS pool for container volumes
- Run NFS server
- Run ZFS programs
(
set -e
zypper ar obs://filesystems filesystems
zypper in zfs-kmp-default zfs yast2-nfs-server
modprobe zfs
echo zfs > /etc/modules-load.d/zfs.conf
zpool create -f kubvols /dev/sdc
systemctl start nfs-server
systemctl enable nfs-server
systemctl start zfs.target
systemctl enable zfs.target
)
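As a sketch of how a project volume might later be provisioned on kubvol01 (the dataset name, quota, and export options below are illustrative assumptions, not part of the deployment), create a per-project ZFS dataset and export it over NFS to the private network:
(
set -e
# hypothetical project dataset under the kubvols pool (mounted at /kubvols)
zfs create -o quota=5G kubvols/exampleproject
echo '/kubvols/exampleproject 192.168.0.0/16(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra
exportfs -v
)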
There are still issues which need to be considered and handled before this cluster is ready to host projects:
Project owners need to be able to manage the files on persistent volumes for their containers.
One possible solution is to enable a FTP daemon on the NFS server and create chrooted FTP users which have access to each container's volumes.
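A rough sketch of that approach using vsftpd on kubvol01 (the package, configuration keys, user, and volume path below are assumptions for illustration, not a tested setup):
(
set -e
zypper in -y vsftpd
# jail local FTP users to their home directory and allow uploads;
# allow_writeable_chroot is needed because the volume root itself is writable
cat >>/etc/vsftpd.conf <<'EOF'
local_enable=YES
write_enable=YES
chroot_local_user=YES
allow_writeable_chroot=YES
EOF
# hypothetical per-project account whose home is the project's volume
useradd -d /kubvols/exampleproject exampleproject-ftp
passwd exampleproject-ftp
systemctl enable vsftpd
systemctl start vsftpd
)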
Docker container logs are being shipped into an elasticsearch backend by default. Project owners need to be able to view and search these logs.
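Until project owners have a proper interface for this, one way to confirm logs are arriving is to query elasticsearch directly; the service name, port, and index pattern below are guesses based on the default fluentd/elasticsearch addon and may need adjusting for this cluster:
# run from somewhere that can reach the logging backend
curl -s 'http://elasticsearch-logging:9200/_cat/indices?v'
curl -s 'http://elasticsearch-logging:9200/logstash-*/_search?q=exampleproject&size=10&pretty'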
Project owners need to be able to tell the cluster that a new version of their container is available without the intervention of a cluster administrator.
The best solution for this will involve providing project-specific API keys which are permitted to perform API operations on their own containers.
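If the kubernetes version deployed here has RBAC available, a rough sketch of that model (namespace, account, and binding names are hypothetical) is to give each project its own namespace and a service account bound to the built-in edit role within it, then hand the project that account's token:
(
set -e
kubectl create namespace exampleproject
kubectl create serviceaccount deployer --namespace exampleproject
# grant the built-in "edit" permissions, scoped to the project's namespace only
kubectl create rolebinding exampleproject-deployer \
  --clusterrole=edit \
  --serviceaccount=exampleproject:deployer \
  --namespace exampleproject
)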
Incoming HTTP and HTTPS traffic needs to be routed to the appropriate container.
For HTTP traffic, the best solution will be to configure an ingress controller and configure project-specific ingress resources.
Unfortunately, no current ingress controller implementations include support for SNI, so for the time being, we should deploy an HAProxy container which can receive all incoming HTTPS traffic and handle SSL termination.
For applications which wish to redirect all HTTP traffic to HTTPS, we should configure an nginx container whose sole job is to redirect every incoming HTTP request to the matching HTTPS URL, and route all of the application's HTTP traffic to this container.
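For reference, that redirect-only nginx container needs little more than a catch-all server block; a sketch of generating the configuration (where it gets baked into an image or mounted is left open):
# write a catch-all redirect config for the nginx image
cat >/etc/nginx/conf.d/redirect.conf <<'EOF'
server {
    listen 80 default_server;
    # send every plain-HTTP request to the same URL over HTTPS
    return 301 https://$host$request_uri;
}
EOF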
There is some shared infrastructure which it would be in our interest to deploy (e.g., shared databases). Project-specific access to such shared resources would obviously require the creation and distribution of credentials.
The best solution to solve this problem will be to store such credentials as kubernetes secrets which are exposed to the requiring container as environment variables.
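A minimal sketch of that pattern (the secret, key, and namespace names are hypothetical): create the credential as a secret, then have the container spec map it to an environment variable via secretKeyRef:
(
set -e
kubectl create secret generic exampleproject-db \
  --namespace exampleproject \
  --from-literal=password='changeme'
# in the container spec, the key is then surfaced as an env var:
#   env:
#   - name: DB_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: exampleproject-db
#         key: password
)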