
Commit b7641c4

Merge branch 'k8s-tutorials' into argocd-install

2 parents 59a0a9b + 2522780

4 files changed: +65 −31 lines changed

docs/tutorials.md

Lines changed: 1 addition & 0 deletions
````diff
@@ -21,6 +21,7 @@ tutorials/debug_generic_ioc
 tutorials/support_module
 tutorials/setup_k8s
 tutorials/setup_k8s_new_beamline
+tutorials/add_k8s_ioc
 tutorials/rtems_setup
 tutorials/rtems_ioc
 ```
````

docs/tutorials/add_k8s_ioc.md

Lines changed: 9 additions & 0 deletions
````diff
@@ -0,0 +1,9 @@
+# Add an IOC to the Kubernetes Beamline
+
+In this tutorial we will add an additional IOC to the Kubernetes Simulation Beamline created in the previous tutorial.
+
+This IOC will be a Simulation Area Detector IOC, very like the one we made in {any}`create_ioc`.
+
+Here we will also take a look at the helm chart and resulting Kubernetes manifest that is created as part of this process.
+
+TODO: WIP
````

docs/tutorials/setup_k8s.md

Lines changed: 23 additions & 19 deletions
````diff
@@ -12,35 +12,21 @@ For this reason DLS users should skip this tutorial unless you have a spare linu

 ## Introduction

-This is a very easy set of instructions for setting up an experimental
-single-node Kubernetes cluster, ready to test deployment of EPICS IOCs.
+This is a very easy set of instructions for setting up an experimental single-node Kubernetes cluster, ready for a test deployment of EPICS IOCs.

 ## Bring Your Own Cluster

 If you already have a Kubernetes cluster then you can skip this section
 and go straight to the next tutorial.

-IMPORTANT: you will require appropriate permissions on the cluster to work
-with epics-containers. In particular you will need to be able to create
-pods that run with network=host. This is to allow Channel Access traffic
-to be routed to and from the IOCs. You will also need to be able to create
-a namespace and a service account, although you could use an existing
-namespace and service account as long as it has network=host capability.
+IMPORTANT: you will require appropriate permissions on the cluster to work with epics-containers. In particular you will need to be able to create pods that run with network=host. This is to allow Channel Access traffic to be routed to and from the IOCs. You will also need to be able to create a namespace and a service account, although you could use an existing namespace and service account as long as it has network=host capability. The alternative to running with network=host is to run a ca-gateway in the cluster and expose the IOCs' PVs to clients via the gateway.

 Cloud based K8S offerings may not be appropriate because of the Channel Access
 routing requirement.

 ## Platform Choice

-These instructions have been tested on the following platforms. The simplest
-option is to use a linux distribution that is supported by k3s.
-
-```{eval-rst}
-========================== ============================================
-Ubuntu 22.04 and newer     any modern linux distro should also work
-Raspberry Pi OS 2021-05-07 See `raspberry`
-========================== ============================================
-```
+These instructions have been tested on Ubuntu 22.04; however, any modern Linux distribution that is supported by k3s and running on a modern x86 machine should also work.

 Note that K3S provides a good uninstaller that will clean up your system if you decide to back out. So there is no harm in trying it out.
````
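The network=host requirement called out in this hunk corresponds to the `hostNetwork` field of a Kubernetes pod spec. As a purely illustrative sketch (the pod name and image below are placeholders, not anything defined by the tutorial), the kind of pod an IOC needs permission to run looks like this:

```yaml
# Illustration only: a pod that shares the host's network namespace so that
# Channel Access broadcasts can reach the IOC directly.
apiVersion: v1
kind: Pod
metadata:
  name: example-ioc                    # placeholder name
spec:
  hostNetwork: true                    # the capability your cluster role must allow
  containers:
    - name: ioc
      image: example-ioc-image:latest  # placeholder image
```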

````diff
@@ -99,7 +85,7 @@ mkdir ~/.kube
 sudo scp /etc/rancher/k3s/k3s.yaml <YOUR_ACCOUNT>@<YOUR_WORKSTATION>:.kube/config
 ```

-If you do have separate workstation then edit the file .kube/config replacing 127.0.0.1 with your server's IP Address. For a single machine the file is leftas is.
+If you do have a separate workstation then edit the file .kube/config, replacing 127.0.0.1 with your server's IP address. For a single machine the file is left as is.

 ### Install helm
````
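The kubeconfig edit described in this hunk is a one-line substitution. A sketch, assuming the default k3s server entry of `https://127.0.0.1:6443`; `<SERVER_IP>` is a placeholder for your server's real address:

```bash
# Point the copied kubeconfig at the remote k3s server instead of loopback.
sed -i 's/127\.0\.0\.1/<SERVER_IP>/' ~/.kube/config
```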

````diff
@@ -168,7 +154,25 @@ To install the `argocd` cli tool:
 ```
 curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
 sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
-rm argocd-linux-amd64
+rm argocd-linux-amd64
+```
+
+### Install persistent volume support
+
+As per <https://docs.k3s.io/storage/>, the "Longhorn" distributed block storage system can be set up in our cluster. This is done in order to get support for ReadWriteMany persistent volume claims, which is not supported by the out-of-the-box "Local Path Provisioner".
+
+```bash
+# Install dependencies
+sudo apt-get update; sudo apt-get install -y open-iscsi nfs-common jq
+
+# Set up Longhorn
+kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.0/deploy/longhorn.yaml
+
+# Monitor while Longhorn starts up
+kubectl get pods --namespace longhorn-system --watch
+
+# Confirm ready
+kubectl get storageclass
 ```

 ### Completed
````
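Once `kubectl get storageclass` lists the `longhorn` class, a ReadWriteMany claim of the kind this new section prepares for could be declared along these lines (a sketch only; the claim name and size are placeholders):

```yaml
# Sketch: a ReadWriteMany claim backed by Longhorn.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-shared-data        # placeholder name
spec:
  accessModes:
    - ReadWriteMany                # the mode the Local Path Provisioner cannot offer
  storageClassName: longhorn       # created by the Longhorn install above
  resources:
    requests:
      storage: 1Gi                 # placeholder size
```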

docs/tutorials/setup_k8s_new_beamline.md

Lines changed: 32 additions & 12 deletions
````diff
@@ -8,9 +8,9 @@ Helm is a package manager for Kubernetes that allows you to define a set of reso

 Previously our beamline repository contained a **services** folder. Each subfolder of **services** contained a **compose.yaml** with details of the generic IOC container image, plus a **config** folder that provided an IOC instance definition.

-In the Kubernetes world, each folder under **services** will be an individually deployable Helm Chart. This means that instead of a **compose.yaml** file we will have a **Chart.yaml** which describes the dependencies of the chart and a **values.yaml** that describes some arguments to it. There is also a file **services/values.yaml** that describes the default arguments for all the charts in the repository.
+In the Kubernetes world the structure is very similar. Each folder under **services** will be an individually deployable Helm Chart. This means that instead of a **compose.yaml** file we will have a **Chart.yaml** which describes the dependencies of the chart and a **values.yaml** that describes some arguments to it. There is also a file **services/values.yaml** that describes the default arguments for all the charts in the repository.

-In this tutorial we will create a new beamline in a Kubernetes cluster. Here we assume that the cluster is already setup and that there is a namespace configured for use by the beamline. See the previous tutorial for how to set one up if you do not have this already.
+In this tutorial we will create a new simulation beamline in a Kubernetes cluster. Here we assume that the cluster is already set up and that there is a namespace configured for use by the beamline. See the previous tutorial for how to set one up if you do not have this already.

 :::{note}
 DLS users: you should use your personal namespace in the test cluster **Pollux**. Your personal namespace is named after your *fedid*
````
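To picture the layout described in this hunk, here is a rough sketch of how the repository might look after the change; only the file and folder names called out in the tutorial text are taken from it, the nesting is illustrative:

```
t03-services/
├── environment.sh           # sourced so the tooling knows how to talk to the cluster
└── services/
    ├── values.yaml          # default arguments shared by every chart
    └── t03-ea-test-01/      # one deployable Helm Chart per IOC instance
        ├── Chart.yaml       # dependencies of the chart
        ├── values.yaml      # arguments for this instance
        └── config/          # IOC instance definition, as in the compose layout
```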
````diff
@@ -20,12 +20,11 @@ DLS users: you should use your personal namespace in the test cluster **Pollux**

 As before, we will use a copier template to create the new beamline repository. The steps are similar to the first tutorial {any}`create_beamline`.

-1. We are going to call the new beamline **bl03t** with the repository name **t03-services**. It will be created in the namespace **bl03t** on the local cluster that we created in the last tutorial OR your *fedid* namespace on the **Pollux** cluster if you are using the DLS cluster.
+1. We are going to call the new beamline **bl03t** with the repository name **t03-services**. It will be created in the namespace **t03-beamline** on the local cluster that we created in the last tutorial **OR** your *fedid* namespace on the **Pollux** cluster if you are using the DLS cluster.

 ```bash
 # make sure your Python virtual environment is active and copier is pip installed
 copier copy gh:epics-containers/services-template-helm t03-services
-code t03-services
 ```

 Answer the copier template questions as follows for your own local cluster:
````
````diff
@@ -101,12 +100,12 @@ If you have brought your own cluster then you may need to edit the **environment

 ## Setup the epics containers CLI

-To deploy and manage IOC istances requires helm and kubectl command line tools. However we supply a simple wrapper for these tools that saves typing and helps with learning the commands. This is the `ec` command line tool. Go ahead and add the `ec` python package to your virtual environment.
+To deploy and manage IOC instances requires the **helm** and **kubectl** command line tools. However we supply a simple wrapper for these tools that saves typing and helps with learning the commands. Go ahead and add the `ec-cli` python package to your virtual environment.

 ```bash
 # make sure your Python virtual environment is active, then:
 pip install ec-cli
-# make sure ec is not currently aliased to docker compose
+# make sure ec is not currently aliased to docker compose! (maybe that was a bad idea?)
 unalias ec
 # setup the environment for ec to know how to talk to the cluster
 # (make sure you are in the t03-services directory)
````
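A quick sanity check after the install shown above is simply to ask the wrapper for its help text; its subcommands are explored later in this tutorial:

```bash
# Confirm the ec wrapper is installed and on the PATH
ec --help
```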
````diff
@@ -115,24 +114,23 @@ source ./environment.sh

 ## Deploy an Example IOC Instance

-The new repository has an example IOC that it comes with the template and is called t03-ea-test-01. It is a simple example IOC that is used for testing the deployment of IOCs to the cluster.
+The new repository comes with a simple example IOC from the template, called t03-ea-test-01.

 For a new beamline we will also need to deploy the shared resources that all IOCs expect to find in their namespace, namely:
 - epics-pvcs: some persistent volumes (Kubernetes data stores) for the IOCs to store autosave files, GUI files and other data
 - epics-opis: an nginx web server that serves the IOC GUI files out of one of the above persistent volumes

-The ec tool can help with version tracking by deploying tagged version of services. So first lets go ahead and tag the current state of the repository.
+The ec tool can help with version tracking by deploying versions of services from tagged commits in the git repo. So first let's go ahead and tag the current state of the repository.

 ```bash
 # make sure you are in the t03-services directory, then:
 git tag 2024.9.1
 git push origin 2024.9.1
 ```

-Now you can deploy the shared resources and the example IOC instance to the cluster. Using the version we just tagged. We will use the -v option which shows you the underlying commands that are being run.
+Now you can deploy the shared resources to the cluster, using the version we just tagged. We will use the -v option which shows the underlying commands that are being run.

 ```bash
-source environment.sh
 ec -v deploy epics-pvcs 2024.9.1
 ec -v deploy epics-opis 2024.9.1
 ```
````
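The example IOC instance itself is presumably deployed with the same pattern once the shared resources are up; a sketch using the tag created above:

```bash
# Sketch: deploy the template's example IOC at the tagged version
ec -v deploy t03-ea-test-01 2024.9.1
```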
````diff
@@ -152,10 +150,32 @@ ec ps
 You could also investigate the other commands that `ec` provides by running `ec --help`.

 :::{note}
+When things are not working as expected or you want to examine the resources you are deploying, you can use the `kubectl describe` command. If you prefer a more interactive approach, then look at the Kubernetes Dashboard.
+
+For a k3s local cluster refer to the notes on installing the dashboard in the previous tutorial. TODO add link when available.
+
 At DLS you can get to a Kubernetes Dashboard for your beamline via a landing page `https://pollux.diamond.ac.uk` for test beamlines on `Pollux` - remember to select the namespace from the dropdown in the top left.

 For production beamlines with dedicated clusters, you can find the landing page for example:
 `https://k8s-i22.diamond.ac.uk/` for BL22I.
-`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01B.
-in this case the namespace will be ixx-beamline.
+`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01C.
+In this case the namespace will be i22-beamline, b01-1-beamline, etc.
 :::
+
+## Verify that the IOC is working
+
+Right now you cannot see PVs from your IOC because it is running in a container network and channel access clients won't be able to contact it.
+
+For k3s users you can simply fix this by setting 'hostNetwork: true' in **services/values.yaml**. Then re-deploy the IOC instance (by pushing the change and making a new tag).
+
+DLS users do not have permission to run host network in their personal namespaces.
+
+The best solution is to use a channel access gateway to bridge the container network to the host network. We will do this in a later tutorial.
+
+For now you can check your IOC by launching a shell inside its container and using the `caget` command. All IOC containers have the epics-base tools installed. Try the following commands to confirm that the IOC is running and that the PVs are accessible.
+
+```bash
+$ ec exec t03-ea-test-01
+root@t03-ea-test-01-0:/# caget T03:IBEK:A
+T03:IBEK:A 2.54
+```
````
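The `hostNetwork: true` workaround mentioned in this hunk lands in **services/values.yaml**. A minimal sketch, assuming the repository's charts pass a top-level value straight through to the pod spec as the text implies:

```yaml
# k3s-only workaround: run IOC pods on the host network so Channel Access
# clients outside the cluster can reach their PVs.
# Assumes the charts forward this value to the pod spec; re-deploy the IOC
# instance (push the change and make a new tag) after editing.
hostNetwork: true
```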
