Cluster Options
---------------

Three cluster topology approaches were considered for this project.

:Cluster per beamline:
  This could be as simple as a single server: the K3S installation
  described in `setup_kubernetes` may be sufficient (a minimal install
  sketch is shown below). The documentation at
  https://rancher.com/docs/k3s/ also details how to make a
  high-availability cluster, requiring a minimum of 4 servers.

  This approach keeps the configuration of the clusters quite
  straightforward, but at the cost of having multiple separate clusters
  to maintain. It also requires control plane servers for every beamline,
  whereas a centralized approach would need only a handful of control
  plane servers for the entire facility.
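
  For illustration, a minimal sketch of the single-server case, using the
  standard K3S install script; any site-specific options are omitted (see
  the K3S documentation linked above for the full procedure):

  .. code-block:: bash

     # Install a single-server K3S cluster on the beamline server; the
     # install script is documented at https://rancher.com/docs/k3s/.
     curl -sfL https://get.k3s.io | sh -

     # k3s bundles kubectl: confirm the new node has come up.
     sudo k3s kubectl get nodes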

:Central Facility Cluster:
  A central facility cluster that runs all IOCs on its own nodes would
  keep everything centralized and provide economy of scale. However,
  there are significant issues with routing Channel Access, PVA and some
  device protocols to IOCs running in a different subnet to the beamline.
  DLS spent some time working around these issues but eventually
  abandoned this approach.

:Beamline Worker Nodes:
  This approach uses the central cluster but adds remote beamline nodes
  located in the beamline itself, connected to the beamline subnet. This
  has all the benefits of central management but is able to overcome the
  problems with protocol routing; a sketch of joining such a node is
  shown below.
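
  A minimal sketch, assuming the same K3S tooling as above and its agent
  join mechanism; the server address, node name and ``beamline`` label
  are hypothetical placeholders, not the actual DLS configuration:

  .. code-block:: bash

     # On the beamline server: join the central cluster as a worker
     # (agent) node. SERVER and TOKEN are placeholders for the central
     # control plane address and its node token, which lives at
     # /var/lib/rancher/k3s/server/node-token on the server.
     SERVER="https://central-k8s.example.org:6443"
     TOKEN="<node-token>"
     curl -sfL https://get.k3s.io | K3S_URL="$SERVER" K3S_TOKEN="$TOKEN" sh -

     # From a host with cluster credentials: label the node so that this
     # beamline's IOCs can be scheduled onto it.
     kubectl label node bl23i-serv01 beamline=bl23i

  IOC deployments can then set a ``nodeSelector`` on such a label so that
  each beamline's IOCs run only on that beamline's own nodes.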

  The DLS argus cluster configuration described below is an example of