docs/tutorials/setup_k8s_new_beamline.md
Previously our beamline repository contained a **services** folder. Each subfolder of **services** contained a **compose.yaml** with details of the generic IOC container image, plus a **config** folder that provided an IOC instance definition.
In the Kubernetes world the structure is very similar: each folder under **services** will be an individually deployable Helm chart. This means that instead of a **compose.yaml** file we will have a **Chart.yaml**, which describes the dependencies of the chart, and a **values.yaml**, which describes some arguments to it. There is also a file **services/values.yaml** that provides the default arguments for all the charts in the repository.
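As a rough illustration, the layout might look something like the listing below (the file names are an assumption based on the description above and on the example IOC used later in this tutorial, not a definitive listing):

```bash
# top level of the repository: one sub-folder per deployable chart,
# plus the shared defaults file
ls services
# e.g.  t03-ea-test-01/  values.yaml

# inside a chart folder: the chart definition and its own arguments
ls services/t03-ea-test-01
# e.g.  Chart.yaml  values.yaml
```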
In this tutorial we will create a new simulation beamline in a Kubernetes cluster. Here we assume that the cluster is already set up and that there is a namespace configured for use by the beamline. See the previous tutorial for how to set one up if you do not have this already.
:::{note}
DLS users: you should use your personal namespace in the test cluster **Pollux**. Your personal namespace is named after your *fedid*.
:::
As before, we will use a copier template to create the new beamline repository. The steps are similar to the first tutorial {any}`create_beamline`.
1. We are going to call the new beamline **bl03t** with the repository name **t03-services**. It will be created in the namespace **t03-beamline** on the local cluster that we created in the last tutorial **OR** your *fedid* namespace on the **Pollux** cluster if you are using the DLS cluster.
```bash
# make sure your Python virtual environment is active and copier is pip installed
pip install copier
# then run copier against the beamline services template to generate t03-services
```

Answer the copier template questions as follows for your own local cluster:
## Set up the epics-containers CLI
Deploying and managing IOC instances requires the **helm** and **kubectl** command line tools. However, we supply a simple wrapper for these tools that saves typing and helps with learning the commands: the `ec` command line tool. Go ahead and add the `ec-cli` Python package to your virtual environment.
```bash
# make sure your Python virtual environment is active, then:
pip install ec-cli
# make sure ec is not currently aliased to docker compose
unalias ec
# set up the environment for ec to know how to talk to the cluster
# (make sure you are in the t03-services directory)
source ./environment.sh
```
## Deploy an Example IOC Instance
The new repository comes with a simple example IOC from the template, called t03-ea-test-01.
For a new beamline we will also need to deploy the shared resources that all IOCs expect to find in their namespace, namely:
- epics-pvcs: some persistent volumes (Kubernetes data stores) for the IOCs to store autosave files, GUI files and other data
- epics-opis: an nginx web server that serves the IOC GUI files out of one of the above persistent volumes
The ec tool can help with version tracking by deploying versions of services from tagged commits in the git repo. So first let's go ahead and tag the current state of the repository.
```bash
# make sure you are in the t03-services directory, then:
git tag 2024.9.1
git push origin 2024.9.1
```
Now you can deploy the shared resources to the cluster, using the version we just tagged. We will use the `-v` option, which shows the underlying commands that are being run.
```bash
ec -v deploy epics-pvcs 2024.9.1
ec -v deploy epics-opis 2024.9.1
```
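The example IOC instance can be deployed in the same way, and `ec ps` can then be used to list what is running in the namespace. A minimal sketch, assuming the tag created above:

```bash
# deploy the example IOC instance using the same tagged version
ec -v deploy t03-ea-test-01 2024.9.1
# list the IOCs and services currently deployed in the namespace
ec ps
```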
You could also investigate the other commands that `ec` provides by running `ec --help`.
:::{note}
When things are not working as expected or you want to examine the resources you are deploying, you can use the `kubectl describe` command. If you prefer a more interactive approach, then look at the Kubernetes Dashboard.
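For example, a quick sketch (the pod name below is an assumption; use `kubectl get pods` to find the real name of your IOC's pod):

```bash
# list the pods in the currently selected namespace
kubectl get pods
# show detailed status, events and configuration for one pod
kubectl describe pod t03-ea-test-01-0   # pod name is illustrative
```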
For a k3s local cluster refer to the notes on installing the dashboard in the previous tutorial. TODO add link when available.
At DLS you can get to a Kubernetes Dashboard for your beamline via a landing page: `https://pollux.diamond.ac.uk` for test beamlines on `Pollux` - remember to select the namespace from the dropdown in the top left.
For production beamlines with dedicated clusters, each beamline has its own landing page, for example:
`https://k8s-i22.diamond.ac.uk/` for BL22I.
`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01C.
In this case the namespace will be i22-beamline, b01-1-beamline, etc.
:::
## Verify that the IOC is working
Right now you cannot see PVs from your IOC, because it is running in a container network and channel access clients outside that network won't be able to contact it.
For k3s users, you can fix this simply by setting `hostNetwork: true` in **services/values.yaml**. Then re-deploy the IOC instance (by pushing the change and making a new tag), as sketched below.
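A minimal sketch of that re-deploy cycle, assuming you bump the tag to a hypothetical 2024.9.2:

```bash
# commit and push the change to services/values.yaml
git commit -am "enable hostNetwork for local k3s testing"
git push
# tag the new state of the repository and push the tag
git tag 2024.9.2
git push origin 2024.9.2
# re-deploy the example IOC instance at the new version
ec -v deploy t03-ea-test-01 2024.9.2
```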
DLS users do not have permission to run host network in their personal namespaces.
The best solution is to use a channel access gateway to bridge the container network to the host network. We will do this in a later tutorial.
For now you can check your IOC by launching a shell inside its container and using the `caget` command. All IOC containers have the epics-base tools installed. Try the following commands to confirm that the IOC is running and that the PVs are accessible.
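A minimal sketch of what that check might look like (the pod name and PV name are assumptions; use `kubectl get pods` to find the real pod name and inspect the IOC's startup script for the PVs it serves):

```bash
# open a shell inside the example IOC's container (pod name is illustrative)
kubectl exec -it t03-ea-test-01-0 -- bash

# inside the container, query one of the IOC's PVs (PV name is illustrative)
caget t03-ea-test-01:UPTIME
```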
0 commit comments