
Commit 2522780

setup_k8s_new_beamline ready for review
1 parent 6fc7cd9 commit 2522780

File tree: 3 files changed, +42 -12 lines changed

docs/tutorials.md

Lines changed: 1 addition & 0 deletions
````diff
@@ -21,6 +21,7 @@ tutorials/debug_generic_ioc
 tutorials/support_module
 tutorials/setup_k8s
 tutorials/setup_k8s_new_beamline
+tutorials/add_k8s_ioc
 tutorials/rtems_setup
 tutorials/rtems_ioc
 ```
````

docs/tutorials/add_k8s_ioc.md

Lines changed: 9 additions & 0 deletions
````diff
@@ -0,0 +1,9 @@
+# Add an IOC to the Kubernetes Beamline
+
+In this tutorial we will add an additional IOC to the Kubernetes Simulation Beamline created in the previous tutorial.
+
+This IOC will be a Simulation Area Detector IOC, much like the one we made in {any}`create_ioc`.
+
+Here we will also take a look at the Helm chart and the resulting Kubernetes manifest that is created as part of this process.
+
+TODO: WIP
````

docs/tutorials/setup_k8s_new_beamline.md

Lines changed: 32 additions & 12 deletions
````diff
@@ -8,9 +8,9 @@ Helm is a package manager for Kubernetes that allows you to define a set of reso
 
 Previously our beamline repository contained a **services** folder. Each subfolder of **services** contained a **compose.yaml** with details of the generic IOC container image, plus a **config** folder that provided an IOC instance definition.
 
-In the Kubernetes world, each folder under **services** will be an individually deployable Helm Chart. This means that instead of a **compose.yaml** file we will have a **Chart.yaml** which describes the dependencies of the chart and a **values.yaml** that describes some arguments to it. There is also a file **services/values.yaml** that describes the default arguments for all the charts in the repository.
+In the Kubernetes world the structure is very similar. Each folder under **services** will be an individually deployable Helm Chart. This means that instead of a **compose.yaml** file we will have a **Chart.yaml**, which describes the dependencies of the chart, and a **values.yaml**, which supplies its arguments. There is also a file **services/values.yaml** that holds the default arguments for all the charts in the repository.
 
-In this tutorial we will create a new beamline in a Kubernetes cluster. Here we assume that the cluster is already setup and that there is a namespace configured for use by the beamline. See the previous tutorial for how to set one up if you do not have this already.
+In this tutorial we will create a new simulation beamline in a Kubernetes cluster. Here we assume that the cluster is already set up and that there is a namespace configured for use by the beamline. See the previous tutorial for how to set one up if you do not have this already.
 
 :::{note}
 DLS users: you should use your personal namespace in the test cluster **Pollux**. Your personal namespace is named after your *fedid*
````
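For orientation, here is a sketch of how a chart folder under **services** might be laid out after templating. The names and nesting are illustrative only, based on the description in the hunk above, not a guaranteed listing of what services-template-helm produces:

```bash
# hypothetical layout of a helm-based services folder
tree t03-services/services
# services/
# ├── values.yaml           # default arguments shared by all charts in the repo
# └── t03-ea-test-01/       # one individually deployable Helm Chart per IOC
#     ├── Chart.yaml        # chart metadata and dependencies
#     ├── values.yaml       # arguments for this IOC instance
#     └── config/
#         └── ioc.yaml      # the IOC instance definition
```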
````diff
@@ -20,12 +20,11 @@ DLS users: you should use your personal namespace in the test cluster **Pollux**
 
 As before, we will use a copier template to create the new beamline repository. The steps are similar to the first tutorial {any}`create_beamline`.
 
-1. We are going to call the new beamline **bl03t** with the repository name **t03-services**. It will be created in the namespace **bl03t** on the local cluster that we created in the last tutorial OR your *fedid* namespace on the **Pollux** cluster if you are using the DLS cluster.
+1. We are going to call the new beamline **bl03t** with the repository name **t03-services**. It will be created in the namespace **t03-beamline** on the local cluster that we created in the last tutorial **OR** your *fedid* namespace on the **Pollux** cluster if you are using the DLS cluster.
 
 ```bash
 # make sure your Python virtual environment is active and copier is pip installed
 copier copy gh:epics-containers/services-template-helm t03-services
-code t03-services
 ```
 
 Answer the copier template questions as follows for your own local cluster:
````
@@ -101,12 +100,12 @@ If you have brought your own cluster then you may need to edit the **environment
101100

102101
## Setup the epics containers CLI
103102

104-
To deploy and manage IOC istances requires helm and kubectl command line tools. However we supply a simple wrapper for these tools that saves typing and helps with learning the commands. This is the `ec` command line tool. Go ahead and add the `ec` python package to your virtual environment.
103+
To deploy and manage IOC istances requires **helm** and **kubectl** command line tools. However we supply a simple wrapper for these tools that saves typing and helps with learning the commands. Go ahead and add the `ec-cli` python package to your virtual environment.
105104

106105
```bash
107106
# make sure your Python virtual environment is active, then:
108107
pip install ec-cli
109-
# make sure ec is not currently aliased to docker compose
108+
# make sure ec is not currently aliased to docker compose! (maybe that was a bad idea?)
110109
unalias ec
111110
# setup the environment for ec to know how to talk to the cluster
112111
# (make sure you are in the t03-services directory)
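Once the environment is sourced, a quick sanity check along these lines should work; `ec ps` and `ec --help` are both used later in this tutorial, so this just pulls them forward:

```bash
# confirm the wrapper is installed and shows its available commands
ec --help
# list the IOC instances currently running in the configured namespace
# (empty output is expected before anything has been deployed)
ec ps
```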
````diff
@@ -115,24 +114,23 @@ source ./environment.sh
 
 ## Deploy an Example IOC Instance
 
-The new repository has an example IOC that it comes with the template and is called t03-ea-test-01. It is a simple example IOC that is used for testing the deployment of IOCs to the cluster.
+The new repository comes with a simple example IOC from the template, called t03-ea-test-01.
 
 For a new beamline we will also need to deploy the shared resources that all IOCs expect to find in their namespace, namely:
 - epics-pvcs: some persistent volumes (Kubernetes data stores) for the IOCs to store autosave files, GUI files and other data
 - epics-opis: an nginx web server that serves the IOC GUI files out of one of the above persistent volumes
 
-The ec tool can help with version tracking by deploying tagged version of services. So first lets go ahead and tag the current state of the repository.
+The ec tool can help with version tracking by deploying versions of services from tagged commits in the git repo. So first let's go ahead and tag the current state of the repository.
 
 ```bash
 # make sure you are in the t03-services directory, then:
 git tag 2024.9.1
 git push origin 2024.9.1
 ```
 
-Now you can deploy the shared resources and the example IOC instance to the cluster. Using the version we just tagged. We will use the -v option which shows you the underlying commands that are being run.
+Now you can deploy the shared resources to the cluster, using the version we just tagged. We will use the -v option, which shows the underlying commands that are being run.
 
 ```bash
-source environment.sh
 ec -v deploy epics-pvcs 2024.9.1
 ec -v deploy epics-opis 2024.9.1
 ```
````
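The hunk above stops at the shared resources; presumably the example IOC instance itself is deployed the same way before the `ec ps` check that follows in the next hunk. A sketch, reusing the deploy pattern shown above:

```bash
# deploy the example IOC instance at the version we just tagged
ec -v deploy t03-ea-test-01 2024.9.1
# confirm it appears in the list of running IOCs
ec ps
```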
````diff
@@ -152,10 +150,14 @@ ec ps
 You could also investigate the other commands that `ec` provides by running `ec --help`.
 
 :::{note}
+When things are not working as expected or you want to examine the resources you are deploying, you can use the `kubectl describe` command. If you prefer a more interactive approach, then look at the Kubernetes Dashboard.
+
+For a k3s local cluster refer to the notes on installing the dashboard in the previous tutorial. TODO add link when available.
+
 At DLS you can get to a Kubernetes Dashboard for your beamline via a landing page `https://pollux.diamond.ac.uk` for test beamlines on `Pollux` - remember to select the namespace from the dropdown in the top left.
 
 For production beamlines with dedicated clusters, you can find the landing page, for example:
 `https://k8s-i22.diamond.ac.uk/` for BL22I.
-`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01B.
-in this case the namespace will be ixx-beamline.
+`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01C.
+In this case the namespace will be **i22-beamline**, **b01-1-beamline**, etc.
 :::
````
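To make the `kubectl describe` suggestion concrete, a couple of hedged examples. The resource kind and namespace here are assumptions: the pod name t03-ea-test-01-0 seen below suggests the IOC runs as a StatefulSet, and **t03-beamline** is the namespace chosen earlier in this tutorial:

```bash
# describe the IOC's pod to see its events, volumes and container state
kubectl describe pod t03-ea-test-01-0 -n t03-beamline
# describe the (assumed) StatefulSet that manages that pod
kubectl describe statefulset t03-ea-test-01 -n t03-beamline
```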
````diff
@@ -161,0 +164,18 @@
+
+## Verify that the IOC is working
+
+Right now you cannot see PVs from your IOC because it is running in a container network and channel access clients won't be able to contact it.
+
+k3s users can simply fix this by setting `hostNetwork: true` in **services/values.yaml** and then re-deploying the IOC instance (by pushing the change and making a new tag) - see the sketch below.
+
+DLS users do not have permission to run host networking in their personal namespaces.
+
+The best solution is to use a channel access gateway to bridge the container network to the host network. We will do this in a later tutorial.
+
+For now you can check your IOC by launching a shell inside its container and using the `caget` command. All IOC containers have the epics-base tools installed. Try the following commands to confirm that the IOC is running and that its PVs are accessible.
+
+```bash
+$ ec exec t03-ea-test-01
+root@t03-ea-test-01-0:/# caget T03:IBEK:A
+T03:IBEK:A 2.54
+```
````
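For k3s users, a minimal sketch of the host-network fix described above; it assumes **services/values.yaml** accepts a top-level `hostNetwork` key as stated, and reuses the tag-and-deploy pattern from earlier (the tag number is illustrative):

```bash
# edit services/values.yaml so that it contains:  hostNetwork: true
# then commit, tag and re-deploy the IOC at the new version
git commit -am "enable host networking for local PV access"
git tag 2024.9.2
git push origin 2024.9.2
ec -v deploy t03-ea-test-01 2024.9.2
# PVs should now be reachable from the host
# (requires the epics-base tools installed locally)
caget T03:IBEK:A
```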
