Alpha Cluster Deployment

Architecture

The alpha cluster is deployed under the same architecture described for the test cluster.

Deployed servers

The alpha cluster deploys the following instances of each class:

1 master/control plane server:

  • kubmaster01

2 node/worker servers:

  • kubnode01
  • kubnode02

1 NFS Storage server:

  • kubvol01

Instance information

All of the deployed alpha nodes are one of two types of instance:

  • 1GB instance: (kubmaster01, kubvol01)
  • 2GB instance: (kubnode01, kubnode02)

All instances were deployed in the same datacenter of the same provider in order to enable private network communication.
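As a quick sanity check (not part of the deployment itself), each instance should be able to reach its peers over the private network; this assumes the provider uses 192.168.0.0/16 private addressing, as the firewall rules below do:

  ip -4 addr show | grep 192.168.   # confirm a private address is assigned
  ping -c 3 192.168.0.2             # substitute another instance's private IP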

1GB Instance

  • 1GB RAM
  • 1 vCPU
  • 20GB Storage

2GB Instance

  • 2GB RAM
  • 1 vCPU
  • 30GB Storage

Base system deployment

kubmaster01, kubnode01, kubnode02

All three of these machines are deployed as Fedora 25 instances.

Post-deployment configuration

  1. Set the system hostname
  2. Apply shared cluster configurations
  3. Disable password logins for the root user
  4. Install netdata for node monitoring
  5. Open firewall port for netdata
  6. Secure public ports
  7. Allow private network traffic
  8. Disable SELinux

These steps assume you are connected to the server being configured using SSH agent forwarding (ssh -A $host), and that the SSH key being forwarded is associated with a GitHub account.

Before copy/pasting, set shell variable:

  • host: desired machine hostname
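For example, to configure the master:

  ssh-add -l              # confirm the GitHub-associated key is loaded locally
  ssh -A root@$master_ip  # $master_ip is a placeholder for the machine's public address
  host=kubmaster01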
(
  set -e
  hostnamectl set-hostname ${host?}
  dnf -y install git-core
  git clone [email protected]:CodeForPhilly/ops.git /opt/ops
  (
    cd /opt/ops
    ln -s kubernetes/alpha-cluster/post-merge .git/hooks/post-merge
    .git/hooks/post-merge  
  )
  sed -i 's/^PermitRootLogin yes/PermitRootLogin without-password/' /etc/ssh/sshd_config
  systemctl restart sshd
  curl -Ss 'https://raw.githubusercontent.com/firehol/netdata-demo-site/master/install-required-packages.sh' >/tmp/kickstart.sh && bash /tmp/kickstart.sh -i netdata-all && rm -f /tmp/kickstart.sh
  git clone https://github.com/firehol/netdata.git --depth=1
  ( cd netdata && ./netdata-installer.sh --install /opt )
  firewallctl zone '' -p add port 19999/tcp
  firewallctl zone '' -p remove service cockpit
  firewallctl zone internal -p add source 192.168.0.0/16
  firewall-cmd --permanent --zone=internal --set-target=ACCEPT  # for some inexplicable reason, this version of firewallctl does not provide a way to do this
  firewallctl reload
  sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
  setenforce 0
)
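Once the block completes, the result can be spot-checked (a quick sanity pass, not part of the official procedure):

  hostnamectl status                         # hostname applied
  sshd -T | grep -i permitrootlogin          # expect "permitrootlogin without-password"
  firewall-cmd --list-all                    # 19999/tcp open, cockpit removed
  getenforce                                 # expect "Permissive"
  curl -sI http://localhost:19999 | head -1  # netdata responding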

kubvol01

This machine is deployed as an openSUSE Leap 42.2 instance.

Post-deployment configuration

  1. Set the system hostname
  2. Apply the shared cluster configurations
  3. Disable password logins for the root user
  4. Install the man command (don't ask me why it's not there to start with, or why it depends on 30-odd packages)
  5. Install netdata for node monitoring
  6. Lockdown public firewall
  7. Open firewall to private network
(
  set -e
  hostnamectl set-hostname ${host?}
  zypper in -y git-core
  git clone [email protected]:CodeForPhilly/ops.git /opt/ops
  (
    cd /opt/ops
    ln -s kubernetes/alpha-cluster/post-merge .git/hooks/post-merge
    .git/hooks/post-merge  
  )
  sed -i 's/^PermitRootLogin yes/PermitRootLogin without-password/' /etc/ssh/sshd_config
  systemctl restart sshd
  zypper in -y man
  curl -Ss 'https://raw.githubusercontent.com/firehol/netdata-demo-site/master/install-required-packages.sh' >/tmp/kickstart.sh && bash /tmp/kickstart.sh -i netdata-all && rm -f /tmp/kickstart.sh
  git clone https://github.com/firehol/netdata.git --depth=1
  ( cd netdata && ./netdata-installer.sh --install /opt )
  zypper in -y firewalld
  systemctl start firewalld
  systemctl enable firewalld
  firewallctl zone '' -p add interface eth0
  firewallctl zone '' -p add port 19999/tcp
  firewallctl zone internal -p add source 192.168.0.0/16
  firewall-cmd --permanent --zone=internal --set-target=ACCEPT
  firewallctl reload
)
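As on the Fedora machines, a quick spot-check after the block completes:

  firewall-cmd --get-active-zones            # expect eth0 under the default zone and 192.168.0.0/16 under internal
  curl -sI http://localhost:19999 | head -1  # netdata responding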

Cluster provisioning

These instructions presume that the administrator's workstation has been configured with the necessary workstation resources.

kubvol01

  1. Add the filesystems repository and install ZFS and NFS
  2. Load the ZFS kernel module (now and on every boot)
  3. Create a ZFS pool for container volumes
  4. Start and enable the NFS server
  5. Start and enable the ZFS services
(
  set -e
  zypper ar obs://filesystems filesystems
  zypper in -y zfs-kmp-default zfs yast2-nfs-server
  modprobe zfs
  echo zfs > /etc/modules-load.d/zfs.conf
  zpool create -f kubvols /dev/sdc
  systemctl start nfs-server
  systemctl enable nfs-server
  systemctl start zfs.target
  systemctl enable zfs.target
)
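A quick spot-check that the storage stack came up:

  zpool status kubvols                       # pool should be ONLINE
  zfs list                                   # kubvols should be mounted at /kubvols
  systemctl is-active nfs-server zfs.target  # both should report "active"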

Administrative tasks

Create and expose container volume

Creating and exposing new volumes for use by containers is a two-step process:

  1. Create volume on NFS server
  2. Create a Kubernetes PersistentVolume resource which can be claimed (a sketch follows at the end of this section)

Create volume on NFS server

All container volumes should be contained within a top-level volume that bears the container's name. The top-level volume does not need any special properties applied to it; its only purpose is hierarchical organization. For example, if a container named "nginx" has two volumes, one for config files and one for publicly served files, the volume hierarchy should look like:

kubvols/nginx
kubvols/nginx/configs
kubvols/nginx/html
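Once created (using the procedure below), such a hierarchy can be inspected with:

  zfs list -r kubvols/nginx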

Each subvolume of a container should be configured with the following ZFS properties:

  • quota=AMOUNT: The appropriate amount varies by container and purpose, but every volume should be given a quota
  • compression=lz4: Highly efficient compression helps make the most of available space
  • sharenfs=on: Export the volume as an NFS share
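These properties can be verified, or adjusted after the fact, with the standard ZFS tools; for example, against the hypothetical nginx/html volume from above:

  zfs get quota,compression,sharenfs kubvols/nginx/html
  zfs set quota=10GB kubvols/nginx/html   # e.g., raise the quota later if needed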

The specific steps to creating new container volumes are as follows:

  1. Create the top-level container volume if necessary; otherwise, move on to step 2.
  2. Create purpose-specific subvolume

Before copy/pasting, set shell variables:

  • container_name: Name of container volumes belong to
  • volume_name: Name of specific volume being created
  • quota_size: A quota for the volume; e.g., 5GB, 512MB, etc.
(
  set -e
  # "zfs list" exits non-zero if the dataset does not exist yet ("zfs get" requires a property argument and would fail here)
  zfs list "kubvols/${container_name?}" >/dev/null 2>&1 || zfs create "kubvols/${container_name?}"
  zfs create -o quota=${quota_size?} -o compression=lz4 -o sharenfs=on "kubvols/${container_name?}/${volume_name?}"
)
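Step 2 is not scripted here, but a minimal sketch follows, assuming the nginx/html example with a 5GB quota. ZFS mounts datasets under /<pool> by default, so the export path would be /kubvols/nginx/html; the server name assumes kubvol01 resolves from the nodes over the private network:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nginx-html
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteMany
    nfs:
      server: kubvol01
      path: /kubvols/nginx/html
  EOF

ReadWriteMany is appropriate for NFS, which allows the volume to be mounted read-write by multiple nodes at once; containers then claim the volume through a matching PersistentVolumeClaim.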
