Alpha Cluster Administration
The cluster can be managed entirely from a local workstation using the kubectl command. These instructions assume kubectl has already been installed on the workstation in question.
The steps to use the shared kubectl configuration are as follows:
- Decrypt kubectl configuration
- Place decrypted configuration in kubectl config directory
- Verify kubectl can connect to the cluster
From within the ops repo directory:
(
set -e
# Decrypt the shared kubectl configuration
gpg -do kubectl.kubeconfig kubernetes/alpha-cluster/workstation-resources/kubectl.kubeconfig.asc
# Place the decrypted configuration in the kubectl config directory
test -e ~/.kube || mkdir ~/.kube
mv -i kubectl.kubeconfig ~/.kube/config
# Verify kubectl can connect to the cluster
kubectl get nodes
)
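The decrypted configuration contains cluster credentials, so it is worth restricting its permissions as well (a suggested hardening step, not part of the procedure above):
# Make the kubeconfig readable only by the current user
chmod 600 ~/.kube/config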
The alpha cluster runs several services, including web-accessible resources such as Kibana and Grafana. Access to these resources must be proxied through the Kubernetes master using kubectl. The steps to access a service are as follows; the Kibana dashboard is used as the example.
- Acquire proxy information for cluster services
- Open proxy to kubernetes master
- Access resource via HTTP proxy
Note: This example is illustrative and is not suitable for copy/paste.
$ kubectl cluster-info
Kubernetes master is running at https://kubmaster01:443
Elasticsearch is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
# This command will block until you send SIGINT
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
# Now you would access http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging from your browser
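The proxied endpoints can also be exercised from the command line. A minimal sketch, assuming the kubectl proxy above is still running on 127.0.0.1:8001:
# Fetch the Kibana service root through the local proxy
curl -s http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/
# List the services available for proxying in the kube-system namespace
kubectl get services --namespace=kube-system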
Creating and exposing new volumes for use by containers is a two-step process:
- Create volume on NFS server
- Create kubernetes PersistentVolume resource which can be claimed
To create a new volume for a container, define the container volume in the pillar of whichever storage machine will house the volume, then highstate that machine.
The creation of the corresponding Kubernetes PersistentVolume resources is automated based on the same pillar data used to create the container volume on the NFS server. From the master, ensure pillar has been enabled and perform a highstate to create and update these resources.
(
set -e
# Enable the cluster pillar if it is not already linked
test -e /srv/pillar || ln -s /ops/kubernetes/alpha-cluster/pillar /srv/pillar
# Apply the highstate locally to create/update the PersistentVolume resources
salt-call --local state.highstate
)
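Once the highstate completes, the new resources can be verified from the workstation, and the volume claimed by a container. The manifests below are a minimal sketch of what the automation produces; the names, server, path, and capacity are hypothetical:
# Confirm the PersistentVolume exists and shows as Available
kubectl get pv
# For reference, an equivalent NFS-backed PersistentVolume created by hand
# (normally the highstate above does this for you):
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfsserver01
    path: /exports/example-volume
EOF
# Containers claim the volume through a PersistentVolumeClaim:
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF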