#### Generic IOCs and instances
An important principle of the approach presented here is that an IOC container image represents a 'Generic' IOC. The Generic IOC image is used for all IOC instances that connect to a given class of device. For example the Generic IOC image here: [ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2](https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravisruntime) uses the AreaDetector driver ADAravis to connect to GigE cameras.
An IOC instance runs in a container runtime by loading two things:
Today it is by far the dominant orchestration technology for containers.

In this project we use Kubernetes and Helm to provide a standard way of implementing these features:

- Auto start IOCs when the cluster comes up from power off
- Manually Start and Stop IOCs
- Monitor IOC health and versions
- Deploy versioned IOCs to the beamline
- Rollback IOCs to a previous version
- Allocate a server with adequate resources on which to run each IOC
- Failover to another server (for soft IOCs not tied to hardware in a server)
- View the current log
- Debug an IOC by starting a bash shell inside its container
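In practice these features map onto standard kubectl and helm operations. The helper functions below are an illustrative sketch only, assuming each IOC instance runs as a single-replica StatefulSet in a per-beamline namespace; all names are hypothetical:

```shell
#!/bin/bash
# Hypothetical helpers wrapping kubectl/helm for common IOC operations.
# Usage example (invented names): ioc_logs bl01t bl01t-ea-ioc-01

ioc_health()   { kubectl get pods -n "$1"; }                          # monitor health and versions
ioc_logs()     { kubectl logs -f -n "$1" "$2-0"; }                    # view the current log
ioc_debug()    { kubectl exec -it -n "$1" "$2-0" -- bash; }           # bash shell inside the container
ioc_stop()     { kubectl scale statefulset -n "$1" "$2" --replicas=0; }
ioc_start()    { kubectl scale statefulset -n "$1" "$2" --replicas=1; }
ioc_rollback() { helm rollback -n "$1" "$2"; }                        # previous Helm release revision
```

Scaling the StatefulSet to zero replicas stops the IOC without deleting its definition, which is why stop/start are modelled that way here.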
### Kubernetes Alternative
If you do not have the resources to maintain a Kubernetes cluster then this project supports installing IOC instances directly into the local docker or podman runtime on the current server. Where a beamline has multiple servers the distribution of IOCs across those servers is managed by the user. These tools would replace Kubernetes and Helm in the technology stack. Docker compose allows us to describe a set of IOCs and other services for each beamline server.
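A compose file for one beamline server might look like the following minimal sketch; the service layout, image names and tags are illustrative assumptions, not the project's actual definitions:

```shell
# Write a minimal (hypothetical) compose file describing two IOC services
# for a single beamline server.
cat > /tmp/bl01t-compose.yml <<'EOF'
services:
  bl01t-ea-ioc-01:
    image: ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2
    restart: unless-stopped      # auto start when the server boots
    network_mode: host           # expose Channel Access directly
  bl01t-ea-ioc-02:
    image: ghcr.io/epics-containers/ioc-adsimdetector-runtime:2024.2.2
    restart: unless-stopped
    network_mode: host
EOF

# Bring the whole set up on this server (requires docker compose):
#   docker compose -f /tmp/bl01t-compose.yml up -d
grep -c 'restart: unless-stopped' /tmp/bl01t-compose.yml   # -> 2
```

`restart: unless-stopped` gives a rough equivalent of Kubernetes auto-start after power loss, while `network_mode: host` lets Channel Access traffic reach the IOC without port mapping.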
If you choose to use this approach then you may find it useful to have another
tool for viewing and managing the set of containers you have deployed across
Portainer is a paid-for product
that provides excellent visibility and control of your containers through a
web interface. It is very easy to install.
The downsides of not using Kubernetes are:

- manually managing resources, i.e. deciding up front which server to run each IOC on.
- no automatic failover to another server if the current server fails or becomes overloaded.
- no continuous deployment.
### Helm
IOCs. Which performs the following steps:
- Clone the beamline repository at a specific tag to a temporary folder
- install the resulting chart into the cluster
- remove the temporary folder
- helm templating for making multiple similar IOCs is not available
- no centralized access to the IOC shell or debug shell; instead you must ssh to the correct server first.
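The clone/install/remove steps can be sketched as a small shell function. This is a hypothetical illustration of the workflow, not the project's actual script, and it assumes the helm chart sits at the repository root:

```shell
#!/bin/bash
# Hypothetical deploy function: clone a beamline repo at a tag, install
# its chart, then clean up.
# Usage (invented names): deploy_beamline <repo-url> <tag> <release> <namespace>
deploy_beamline() {
  tmp=$(mktemp -d)
  # 1. clone the beamline repository at a specific tag to a temporary folder
  git clone --depth 1 --branch "$2" "$1" "$tmp"
  # 2. install the resulting chart into the cluster
  helm upgrade --install "$3" "$tmp" --namespace "$4"
  # 3. remove the temporary folder
  rm -rf "$tmp"
}
```

Using `helm upgrade --install` makes the operation idempotent: the same command works for first-time deployment and for upgrading to a new tag.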
### Repositories
All of the assets required to manage a set of IOC Instances for a beamline are held in repositories.
Thus all version control is done via these repositories and no special locations in a shared filesystem are required. (The legacy approach at DLS relied heavily on known locations in a shared filesystem.)
In the **epics-containers** examples all repositories are held in the same GitHub organization. This is nicely contained and means that only one set of credentials is required to access all the resources.
There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the POC.
The classes of repository are as follows:
Holds the source code but also provides the Continuous Integration actions for testing, building and publishing to the image / helm repositories. These have been tested:
- GitHub
- GitLab (on premises)
:Generic IOC Source Repositories:
Define how a Generic IOC image is built. This does not typically include source code; instead it is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed.
:Services Source Repositories:
Define the IOC instances and other services for a beamline, accelerator domain or any other grouping strategy. This includes the IOC boot scripts and any other configuration required to make the IOC instance unique. For **ibek** based IOCs, each IOC instance is defined by an **ibek** yaml file only.
:An OCI Image Repository:
There are these types of CI:
- publishes the image to GitHub Packages (only if the commit is tagged) or other OCI registry
:services source:
- prepares a helm chart from each IOC instance or other service definition
- tests that the helm chart is deployable (but does not deploy it)
- locally launches each IOC instance and loads its configuration to
verify that the configuration is valid (no system tests because this
Python soft IOCs are also supported.
GUI generation for engineering screens is supported via the PVI project. See <https://github.com/epics-containers/pvi>.
## Additional Tools
### edge-containers-cli
This is the 'outside of the container' helper tool. The command
line entry point is **ec**.
The project is a Python package featuring simple command line functions for deploying and monitoring IOC instances. It is a thin wrapper around the argocd, kubectl, helm and git commands. This tool can be used by developers and beamline staff to get a quick CLI based view of IOCs running in the cluster, as well as to stop/start them and obtain their logs.
See {any}`CLI` for more details.
### **ibek**
container images. It is used:
- at container build time: to fetch and build EPICS support modules
- at container run time: to extract all useful build artifacts into a runtime image
- inside the developer container: to assist with testing and debugging.
See <https://github.com/epics-containers/ibek>.
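As described in the Repositories section above, an **ibek** based IOC instance is defined by a single yaml file. A hypothetical sketch of such a file follows; the entity names and parameters here are invented for illustration, so consult the ibek repository for the real schema:

```shell
# Write a hypothetical ibek IOC instance definition (invented schema details).
cat > /tmp/bl01t-ea-ioc-01.yaml <<'EOF'
ioc_name: bl01t-ea-ioc-01
description: GigE camera IOC for test beamline 01
entities:
  - type: ADAravis.aravisCamera   # driver entity provided by the Generic IOC
    PORT: cam1
    ID: 192.168.0.10
EOF
grep -c 'ioc_name' /tmp/bl01t-ea-ioc-01.yaml   # -> 1
```

The point of the single-file definition is that everything unique to one IOC instance lives in this yaml; the Generic IOC image supplies all compiled support and boilerplate startup code.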