# Alpha Cluster Deployment
The alpha cluster is deployed under the same architecture described for the test cluster. It deploys the following instances of each class:
- 1 master/control plane server:
  - kubmaster01
- 2 node/worker servers:
  - kubnode01
  - kubnode02
- 1 NFS storage server:
  - kubvol01
All of the deployed alpha nodes are one of two types of instances:

- 1GB instance (kubmaster01, kubvol01): 1GB RAM, 1 vCPU, 20GB storage
- 2GB instance (kubnode01, kubnode02): 2GB RAM, 1 vCPU, 30GB storage

All instances were deployed in the same datacenter of the same provider in order to enable private network communication.
The master and worker machines (kubmaster01, kubnode01, kubnode02) are all deployed as Fedora 25 instances. Setting up each of them involves the following steps:
- Set the system hostname
- Apply shared cluster configurations
- Disable password logins for the root user
- Install netdata for node monitoring
- Open firewall port for netdata
- Secure public ports
- Allow private network traffic
- Disable SELinux
These steps assume you are connected to the server being configured using SSH agent forwarding (`ssh -A $host`), and that the SSH key being forwarded is associated with a GitHub account.
Before copy/pasting, set this shell variable:

- `host`: desired machine hostname
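For example, to prepare for configuring the master:

```sh
host=kubmaster01
```

Then run the full configuration script: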
```sh
(
set -e

# Set the system hostname
hostnamectl set-hostname ${host?}

# Apply shared cluster configurations from the ops repository
dnf -y install git-core
git clone git@github.com:CodeForPhilly/ops.git /opt/ops
(
cd /opt/ops
ln -s kubernetes/alpha-cluster/post-merge .git/hooks/post-merge
.git/hooks/post-merge
)

# Disable password logins for the root user
sed -i 's/^PermitRootLogin yes/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl restart sshd

# Install netdata for node monitoring
curl -Ss 'https://raw.githubusercontent.com/firehol/netdata-demo-site/master/install-required-packages.sh' >/tmp/kickstart.sh && bash /tmp/kickstart.sh -i netdata-all && rm -f /tmp/kickstart.sh
git clone https://github.com/firehol/netdata.git --depth=1
( cd netdata && ./netdata-installer.sh --install /opt )

# Open the firewall port for netdata, secure public ports, and allow private network traffic
firewallctl zone '' -p add port 19999/tcp
firewallctl zone '' -p remove service cockpit
firewallctl zone internal -p add source 192.168.0.0/16
firewall-cmd --permanent --zone=internal --set-target=ACCEPT # for some inexplicable reason, this version of firewallctl does not provide a way to do this
firewallctl reload

# Disable SELinux enforcement, now and after reboot
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0
)
```
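Once the script finishes, a quick way to sanity-check it from your workstation is to hit netdata on the port the script just opened (substitute the node's hostname or public address as needed):

```sh
# netdata listens on 19999/tcp; the script opened this port in the public zone
curl -sS "http://kubmaster01:19999/" >/dev/null && echo "netdata is up"
```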
The NFS storage server (kubvol01) is deployed as an openSUSE Leap 42.2 instance. Setting it up involves the following steps:
- Set the system hostname
- Apply the shared cluster configurations
- Disable password logins for the root user
- Install the `man` command (don't ask me why it's not there to start with... or why it depends on 30 f'ing packages)
- Install netdata for node monitoring
- Lock down public firewall
- Open firewall to private network
As before, set the `host` shell variable before copy/pasting:

```sh
(
set -e

# Set the system hostname
hostnamectl set-hostname ${host?}

# Apply the shared cluster configurations from the ops repository
zypper in -y git-core
git clone git@github.com:CodeForPhilly/ops.git /opt/ops
(
cd /opt/ops
ln -s kubernetes/alpha-cluster/post-merge .git/hooks/post-merge
.git/hooks/post-merge
)

# Disable password logins for the root user
sed -i 's/^PermitRootLogin yes/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl restart sshd

# Install the man command
zypper in -y man

# Install netdata for node monitoring
curl -Ss 'https://raw.githubusercontent.com/firehol/netdata-demo-site/master/install-required-packages.sh' >/tmp/kickstart.sh && bash /tmp/kickstart.sh -i netdata-all && rm -f /tmp/kickstart.sh
git clone https://github.com/firehol/netdata.git --depth=1
( cd netdata && ./netdata-installer.sh --install /opt )

# Lock down the public firewall and open it to the private network
zypper in -y firewalld
systemctl start firewalld
systemctl enable firewalld
firewallctl zone '' -p add interface eth0
firewallctl zone '' -p add port 19999/tcp
firewallctl zone internal -p add source 192.168.0.0/16
firewall-cmd --permanent --zone=internal --set-target=ACCEPT
firewallctl reload
)
```
These instructions presume that the workstation from which the administrator is working has been appropriately configured with the necessary workstation resources. Setting up storage on kubvol01 involves the following steps:
- Install ZFS and NFS
- Load ZFS kernel module
- Create ZFS pool for container volumes
- Run NFS server
- Run ZFS programs
```sh
(
set -e
# Install ZFS and NFS from the openSUSE filesystems repository
zypper ar obs://filesystems filesystems
zypper in zfs-kmp-default zfs yast2-nfs-server
# Load the ZFS kernel module, now and on every boot
modprobe zfs
echo zfs > /etc/modules-load.d/zfs.conf
# Create the ZFS pool for container volumes (assumes the data disk is /dev/sdc)
zpool create -f kubvols /dev/sdc
# Run the NFS server
systemctl start nfs-server
systemctl enable nfs-server
# Run the ZFS programs
systemctl start zfs.target
systemctl enable zfs.target
)
```
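A few read-only commands can confirm that the pool and services came up as expected:

```sh
zpool status kubvols   # pool should report state ONLINE
zfs list               # container volumes created later will appear here
systemctl is-active nfs-server
```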
Creating and exposing new volumes for use by containers is a two-step process:
- Create the volume on the NFS server
- Create a Kubernetes PersistentVolume resource which can be claimed
All container volumes should be contained within a top-level volume which bears the container's name. The top-level volume does not need any special properties applied to it; its only purpose is hierarchical organization. For example, if a container named "nginx" has two volumes, one for config files and one for publicly served files, the volume hierarchy should look like:

```
kubvols/nginx
kubvols/nginx/configs
kubvols/nginx/html
```
Each subvolume of a container should be configured with the following ZFS properties:
- `quota=AMOUNT`: The appropriate amount may vary based on container and purpose, but each volume should be given a quota
- `compression=lz4`: Highly efficient compression helps make the most of available space
- `sharenfs=on`: Export the volume as an NFS share
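These properties can also be applied to an already-created volume with `zfs set`; for example, on the nginx config volume from the hierarchy above (the quota value here is illustrative):

```sh
zfs set quota=1GB kubvols/nginx/configs
zfs set compression=lz4 kubvols/nginx/configs
zfs set sharenfs=on kubvols/nginx/configs
```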
The specific steps to create new container volumes are as follows:
1. Create the top-level container volume if necessary. Otherwise, move on to step 2.
2. Create the purpose-specific subvolume
Before copy/pasting, set these shell variables:

- `container_name`: Name of the container the volumes belong to
- `volume_name`: Name of the specific volume being created
- `quota_size`: A quota for the volume; e.g., 5GB, 512MB, etc.
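For example, to create the html volume from the nginx hierarchy above:

```sh
container_name=nginx
volume_name=html
quota_size=5GB   # illustrative; size the quota to the volume's purpose
```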
```sh
(
set -e
# Create the top-level container volume if it doesn't already exist
zfs list "kubvols/${container_name?}" >/dev/null 2>&1 || zfs create "kubvols/${container_name?}"
# Create the purpose-specific subvolume with the required properties
zfs create -o quota=${quota_size?} -o compression=lz4 -o sharenfs=on "kubvols/${container_name?}/${volume_name?}"
)
```
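The second step, the Kubernetes PersistentVolume, is not covered by the script above. A minimal sketch follows, assuming the nginx/html volume from the example, the ZFS default mountpoint of /kubvols, and that kubvol01 resolves over the private network (substitute its private address otherwise); the capacity should match the ZFS quota:

```sh
# Hypothetical PersistentVolume for the kubvols/nginx/html volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-html
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: kubvol01
    path: /kubvols/nginx/html
EOF
```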