docs/tutorials/setup_k8s.md
@@ -12,35 +12,21 @@ For this reason DLS users should skip this tutorial unless you have a spare linu
## Introduction
This is a very easy set of instructions for setting up an experimental single-node Kubernetes cluster, ready for a test deployment of EPICS IOCs.
## Bring Your Own Cluster
If you already have a Kubernetes cluster then you can skip this section and go straight to the next tutorial.
IMPORTANT: you will require appropriate permissions on the cluster to work with epics-containers. In particular, you will need to be able to create pods that run with network=host. This is to allow Channel Access traffic to be routed to and from the IOCs. You will also need to be able to create a namespace and a service account, although you could use an existing namespace and service account as long as it has network=host capability. The alternative to running with network=host is to run a ca-gateway in the cluster and expose the IOCs' PVs via the gateway.
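If you are unsure whether your account has these permissions, `kubectl auth can-i` gives a quick answer. A minimal sketch, assuming a namespace called bl03t (substitute your own):

```bash
# check namespaced permissions in your target namespace
kubectl auth can-i create pods --namespace bl03t
kubectl auth can-i create serviceaccounts --namespace bl03t
# creating namespaces is a cluster-scoped permission
kubectl auth can-i create namespaces
```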
Cloud based K8S offerings may not be appropriate because of the Channel Access routing requirement.
## Platform Choice
These instructions have been tested on Ubuntu 22.04; however, any modern Linux distribution that is supported by k3s and running on a modern x86 machine should also work.
Note that K3S provides a good uninstaller that will clean up your system if you decide to back out. So there is no harm in trying it out.
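For reference, per the k3s documentation the standard install places an uninstall script on the server that removes the cluster and its data completely:

```bash
# remove k3s and its data from this node (server install)
/usr/local/bin/k3s-uninstall.sh
```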
If you do have a separate workstation then edit the file .kube/config, replacing 127.0.0.1 with your server's IP address. For a single machine the file is left as is.
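A sketch of that edit, assuming the server's address is 192.168.1.50 (substitute your own):

```bash
# point kubectl on the workstation at the k3s server instead of localhost
sed -i 's/127.0.0.1/192.168.1.50/' ~/.kube/config
# confirm the cluster is reachable
kubectl get nodes
```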
### Install helm
@@ -168,7 +154,25 @@ To install the `argocd` cli tool:
As per <https://docs.k3s.io/storage/>, the "Longhorn" distributed block storage system can be set up in our cluster. This is done in order to get support for ReadWriteMany persistent volume claims, which are not supported by the out-of-the-box "Local Path Provisioner".
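A minimal sketch of the Helm-based install described in the Longhorn documentation (check there for the currently recommended chart values):

```bash
# add the Longhorn chart repository and install it into its own namespace
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
# watch the Longhorn pods come up
kubectl get pods -n longhorn-system --watch
```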
docs/tutorials/setup_k8s_new_beamline.md
@@ -8,9 +8,9 @@ Helm is a package manager for Kubernetes that allows you to define a set of reso
Previously our beamline repository contained a **services** folder. Each subfolder of **services** contained a **compose.yaml** with details of the generic IOC container image, plus a **config** folder that provided an IOC instance definition.
In the Kubernetes world the structure is very similar. Each folder under **services** will be an individually deployable Helm Chart. This means that instead of a **compose.yaml** file we will have a **Chart.yaml** which describes the dependencies of the chart and a **values.yaml** that describes some arguments to it. There is also a file **services/values.yaml** that describes the default arguments for all the charts in the repository.
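An illustrative sketch of the resulting layout (everything except the example IOC name is hypothetical):

```
services/
├── values.yaml        # default arguments shared by every chart in the repository
└── t03-ea-test-01/    # one deployable Helm Chart per service
    ├── Chart.yaml     # describes the chart's dependencies
    ├── values.yaml    # arguments specific to this IOC instance
    └── config/        # the IOC instance definition
```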
In this tutorial we will create a new simulation beamline in a Kubernetes cluster. Here we assume that the cluster is already set up and that there is a namespace configured for use by the beamline. See the previous tutorial for how to set one up if you do not have this already.
:::{note}
DLS users: you should use your personal namespace in the test cluster **Pollux**. Your personal namespace is named after your *fedid*
@@ -20,12 +20,11 @@ DLS users: you should use your personal namespace in the test cluster **Pollux**
As before, we will use a copier template to create the new beamline repository. The steps are similar to the first tutorial {any}`create_beamline`.
1. We are going to call the new beamline **bl03t** with the repository name **t03-services**. It will be created in the namespace **t03-beamline** on the local cluster that we created in the last tutorial **OR** your *fedid* namespace on the **Pollux** cluster if you are using the DLS cluster.
```bash
# make sure your Python virtual environment is active and copier is pip installed
```
Answer the copier template questions as follows for your own local cluster:
@@ -101,12 +100,12 @@ If you have brought your own cluster then you may need to edit the **environment
## Setup the epics containers CLI
Deploying and managing IOC instances requires the **helm** and **kubectl** command line tools. However, we supply a simple wrapper for these tools, `ec`, that saves typing and helps with learning the commands. Go ahead and add the `ec-cli` python package to your virtual environment.
```bash
# make sure your Python virtual environment is active, then:
pip install ec-cli
# make sure ec is not currently aliased to docker compose! (maybe that was a bad idea?)
unalias ec
# setup the environment for ec to know how to talk to the cluster
# (make sure you are in the t03-services directory)
source ./environment.sh
```
## Deploy an Example IOC Instance
The new repository has a simple example IOC that comes with the template, called t03-ea-test-01.
For a new beamline we will also need to deploy the shared resources that all IOCs expect to find in their namespace, namely:
- epics-pvcs: some persistent volumes (Kubernetes data stores) for the IOCs to store autosave files, GUI files and other data
- epics-opis: an nginx web server that serves the IOC GUI files out of one of the above persistent volumes
The ec tool can help with version tracking by deploying versions of services from tagged commits in the git repo. So first let's go ahead and tag the current state of the repository.
```bash
# make sure you are in the t03-services directory, then:
git tag 2024.9.1
git push origin 2024.9.1
```
Now you can deploy the shared resources to the cluster, using the version we just tagged. We will use the -v option which shows the underlying commands that are being run.
```bash
ec -v deploy epics-pvcs 2024.9.1
ec -v deploy epics-opis 2024.9.1
```
@@ -152,10 +150,32 @@ ec ps
You could also investigate the other commands that `ec` provides by running `ec --help`.
:::{note}
When things are not working as expected or you want to examine the resources you are deploying, you can use the `kubectl describe` command (see the sketch just after this note). If you prefer a more interactive approach, then look at the Kubernetes Dashboard.
For a k3s local cluster refer to the notes on installing the dashboard in the previous tutorial. TODO add link when available.
At DLS you can get to a Kubernetes Dashboard for your beamline via a landing page `https://pollux.diamond.ac.uk` for test beamlines on `Pollux` - remember to select the namespace from the dropdown in the top left.
For production beamlines with dedicated clusters, you can find the landing page, for example:
`https://k8s-i22.diamond.ac.uk/` for BL22I.
`https://k8s-b01-1.diamond.ac.uk/` for the 2nd branch of BL01C.
In this case the namespace will be i22-beamline, b01-1-beamline, etc.
:::
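As a sketch of the `kubectl describe` route mentioned in the note above (the namespace and pod names are illustrative):

```bash
# list the pods in your beamline namespace
kubectl get pods -n t03-beamline
# show detailed status and recent events for one of them
kubectl describe pod <pod-name> -n t03-beamline
```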
## Verify that the IOC is working
Right now you cannot see PVs from your IOC because it is running in a container network, and Channel Access clients won't be able to contact it.
For k3s users you can simply fix this by setting `hostNetwork: true` in **services/values.yaml**. Then re-deploy the IOC instance (by pushing the change and making a new tag).
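A sketch of that workflow, assuming a new hypothetical tag 2024.9.2:

```bash
# after setting hostNetwork: true in services/values.yaml
git commit -am "run IOCs with host networking"
git tag 2024.9.2
git push origin 2024.9.2
# re-deploy the example IOC at the new tag
ec -v deploy t03-ea-test-01 2024.9.2
```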
DLS users do not have permission to run host network in their personal namespaces.
The best solution is to use a channel access gateway to bridge the container network to the host network. We will do this in a later tutorial.
For now you can check your IOC by launching a shell inside its container and using the `caget` command. All IOC containers have the epics-base tools installed. Try the following commands to confirm that the IOC is running and that the PVs are accessible.
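A minimal sketch using kubectl directly (the namespace, pod and PV names are illustrative):

```bash
# find the example IOC's pod and open a shell inside its container
kubectl get pods -n t03-beamline
kubectl exec -it -n t03-beamline <ioc-pod-name> -- bash
# inside the container, use the epics-base tools to read one of the IOC's PVs
caget <a-pv-published-by-the-ioc>
```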