@@ -46,25 +46,26 @@ An important principal of the approach presented here is that an IOC container
 image represents a 'Generic' IOC. The Generic IOC image is used for all
 IOC instances that connect to a given class of device. For example the
 Generic IOC image here:
-ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
+`ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
+<https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravis-linux-runtime>`_
 uses the AreaDetector driver ADAravis to connect to GigE cameras.

 An IOC instance runs in a container runtime by loading two things:

 - The Generic IOC image passed to the container runtime.
 - The IOC instance configuration. This is mapped into the container at
-  runtime by mounting it into the filesystem at runtime. The mount point
-  for this configuration is alway /epics/ioc/config.
+  runtime by mounting it into the filesystem. The mount point
+  for this configuration is always /epics/ioc/config.

 The configuration will bootstrap the unique properties of that instance.
 The following contents for the configuration are supported:

 - ioc.yaml: an **ibek** IOC description file which **ibek** will use to generate
-  st.cmd and ioc.subst
-- st.cmd, ioc.subst: an IOC shell startup script and an optional substitution file
-  st.cmd can refer any additional files in the configuration directory
+  st.cmd and ioc.subst.
+- st.cmd, ioc.subst: an IOC shell startup script and an optional substitution file.
+  st.cmd can refer to any additional files in the configuration directory.
 - start.sh a bash script to fully override the startup of the IOC. start.sh
-  can refer to any additional files in the configuration directory
+  can refer to any additional files in the configuration directory.


 This approach reduces the number of images required and saves disk. It also
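
To make those two ingredients concrete, an IOC instance could be expressed as a
compose-style service: the Generic IOC image named above plus a bind mount of the
instance's config folder onto /epics/ioc/config. This is a minimal sketch only;
the service name and host path are illustrative and not taken from the
epics-containers repositories.

.. code-block:: yaml

    # Minimal sketch: one IOC instance = Generic IOC image + mounted config.
    # The service name and host path are hypothetical examples.
    services:
      bl01t-ea-cam-01:
        image: ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
        volumes:
          # the instance configuration is always mounted at /epics/ioc/config
          - ./bl01t-ea-cam-01/config:/epics/ioc/config
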
@@ -79,6 +80,8 @@ Kubernetes
 https://kubernetes.io/

 Kubernetes easily and efficiently manages containers across clusters of hosts.
+When deploying an IOC into a Kubernetes cluster, you request the resources
+required by the IOC and Kubernetes will then schedule the IOC onto a suitable host.

 It builds upon 15 years of experience of running production workloads at
 Google, combined with best-of-breed ideas and practices from the community,
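
The resource request mentioned above uses the standard Kubernetes requests/limits
mechanism. The snippet below is a minimal illustrative Pod spec, not the manifests
that the Helm Charts described later actually generate; the instance name and
numbers are placeholders.

.. code-block:: yaml

    # Illustrative only: how an IOC container can declare its resource needs
    # so that the Kubernetes scheduler can pick a suitable host.
    apiVersion: v1
    kind: Pod
    metadata:
      name: bl01t-ea-cam-01          # hypothetical IOC instance name
    spec:
      containers:
        - name: ioc
          image: ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
          resources:
            requests:                # used for scheduling decisions
              cpu: 100m
              memory: 256Mi
            limits:                  # hard caps enforced at runtime
              cpu: "2"
              memory: 1Gi
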
@@ -106,12 +109,22 @@ Kubernetes Alternative
 ~~~~~~~~~~~~~~~~~~~~~~

 If you do not have the resources to maintain a Kubernetes cluster then this project
-is experimentally supporing the use of podman-compose or docker-compose to deploy
-IOCs to a single server. Where a beamline has multiple servers the distribution of
+supports installing IOC instances directly into the local docker or podman runtime
+on the current server. Where a beamline has multiple servers the distribution of
 IOCs across those servers is managed by the user. These tools would replace
 Kubernetes and Helm in the technology stack.

-TODO: more on this once we have a working example.
+If you choose to use this approach then you may find it useful to have another
+tool for viewing and managing the set of containers you have deployed across
+your beamline servers. There are various solutions for this; one that has
+been tested with **epics-containers** is Portainer https://www.portainer.io/.
+Portainer is a paid-for product
+that provides excellent visibility and control of your containers through a
+web interface. It is very easy to install.
+
+The downside of this approach is that you will need to manually manage the
+resources available to each IOC instance and manually decide which server to
+run each IOC on.

 Helm
 ~~~~
@@ -130,9 +143,9 @@ much like docker.

 In this project we use Helm Charts to define and deploy IOC instances.
 Each beamline (or accelerator domain) has its own git repository that holds
-the beamline Helm Chart for its IOCs. Each IOC instance need only provide a
-values.yaml file to override the default values in the Helm Chart and a config folder
-as described in `generic iocs`.
+the domain Helm Chart for its IOC Instances. Each IOC instance need only
+provide a values.yaml file to override the default values in the domain
+Helm Chart and a config folder as described in `generic iocs`.

 **epics-containers** does not use helm repositories for storing IOC instances.
 Such repositories only hold a zipped version of the chart and a values.yaml file,
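
As an illustration of how small an IOC instance definition can be, a values.yaml
override might contain little more than the image to run. The keys shown here are
hypothetical, since the real schema is defined by the domain Helm Chart.

.. code-block:: yaml

    # values.yaml for one IOC instance - hypothetical keys, the real schema
    # comes from the domain (beamline) Helm Chart defaults that it overrides.
    image: ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
    resources:
      requests:
        memory: 256Mi
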
@@ -154,13 +167,12 @@ Repositories
 ~~~~~~~~~~~~

 All of the assets required to manage a
-set of IOCs for a beamline are held in repositories.
+set of IOC Instances for a beamline are held in repositories.

-Thus all version control is done
-via these repositories and no special locations in
-a shared filesystem are required
-(The legacy approach at DLS relied heavily on
-know locations in a shared filesystem).
+Thus all version control is done via these repositories and no special
+locations in a shared filesystem are required.
+(The legacy approach at DLS relied heavily on known locations in a
+shared filesystem.)

 In the **epics-containers** examples all repositories are held in the same
 github organization. This is nicely contained and means that only one set
@@ -176,7 +188,9 @@ The 2 classes of repository are as follows:

   - Holds the source code but also provides the
     Continuous Integration actions for testing, building and publishing to
-    the image / helm repositories. These have been tested:
+    the image / helm repositories.
+
+    These have been tested:

     - github
     - gitlab (on premises)
@@ -185,17 +199,24 @@ The 2 classes of repository are as follows:

   - Generic IOC source. Defines how a Generic IOC image is built, this does
     not typically include source code, but instead is a set of instructions
-    for building the Generic IOC image.
+    for building the Generic IOC image by compiling source from a number
+    of upstream support module repositories. Boilerplate IOC source code
+    is also included in the Generic IOC source repository and can be
+    customized if needed.
+
   - Beamline / Accelerator Domain source. Defines the IOC instances for a
-    beamline or Accelerator Domain. This includes the IOC boot scripts and
+    Domain. This includes the IOC boot scripts and
     any other configuration required to make the IOC instance unique.
     For **ibek** based IOCs, each IOC instance is defined by an **ibek**
     yaml file only.

 :An OCI image repository:

   - Holds the Generic IOC container images and their
-    dependencies. The following have been tested:
+    dependencies. Also used to hold the helm charts that define the shared
+    elements between all domains.
+
+    The following have been tested:

     - Github Container Registry
     - DockerHub
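
The ibek yaml file mentioned above describes an IOC instance as a list of entities
to instantiate. The sketch below shows only the general shape; the entity type and
its parameters are hypothetical, since the real ones are declared by the support
YAML built into each Generic IOC.

.. code-block:: yaml

    # General shape of an ibek IOC instance file (ioc.yaml) - the entity
    # type and parameters below are hypothetical placeholders.
    ioc_name: bl01t-ea-cam-01
    description: an example GigE camera IOC instance
    entities:
      - type: ADAravis.aravisCamera      # hypothetical entity type
        P: BL01T-EA-CAM-01               # hypothetical PV prefix parameter
        ID: 192.168.0.10                 # hypothetical camera address parameter
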
@@ -206,26 +227,27 @@ Continuous Integration
 ~~~~~~~~~~~~~~~~~~~~~~

 Our examples all use continuous integration to get from pushed source
-to the published images.
+to the published images, IOC instance helm charts and documentation.

 This allows us to maintain a clean code base that is continually tested for
-integrity and also to maintain a direct relationship between source code tags
-and the tags of their built resources.
+integrity and also to maintain a direct relationship between source code version
+tags and the tags of their built resources.

 There are these types of CI:

 :Generic IOC source:
   - builds a Generic IOC container image
-  - runs some tests against that image
+  - runs some tests against that image - these will eventually include
+    system tests that talk to simulated hardware
   - publishes the image to github packages (only if the commit is tagged)
     or other OCI registry

 :beamline definition source:
-  - builds a helm chart from each ioc definition
+  - prepares a helm chart from each ioc instance definition
   - tests that the helm chart is deployable (but does not deploy it)
   - locally launches each IOC instance and loads its configuration to
     verify that the configuration is valid (no system tests because this
-    would require talking to beamline hardware).
+    would require talking to real hardware instances).

 :documentation source:
   - builds the sphinx docs
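
The "publish only if the commit is tagged" rule above is straightforward to express
in CI. The workflow below is a minimal GitHub Actions sketch, not the actual
epics-containers CI: it builds on every push but only pushes the image to the
registry for tag builds, so image tags always correspond to source tags. All names
are illustrative.

.. code-block:: yaml

    # Minimal sketch, not the real epics-containers workflow: build always,
    # publish to the OCI registry only when the commit is tagged.
    name: build-generic-ioc
    on:
      push:
        branches: [main]
        tags: ["*"]

    jobs:
      build:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          packages: write
        steps:
          - uses: actions/checkout@v4
          - uses: docker/login-action@v3
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          - uses: docker/build-push-action@v5
            with:
              # push only for tag builds; PR and branch builds just verify the image
              push: ${{ startsWith(github.ref, 'refs/tags/') }}
              tags: ghcr.io/epics-containers/ioc-adaravis-linux-runtime:${{ github.ref_name }}
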
@@ -236,26 +258,33 @@ Scope
 This project initially targets x86_64 Linux Soft IOCs and RTEMS IOC running
 on MVME5500 hardware. Soft IOCs that require access to hardware on the
 server (e.g. USB or PCIe) will be supported by mounting the hardware into
-the container (theses IOCS will not support failover).
+the container (these IOCs will not support Kubernetes failover).

 Other linux architectures could be added to the Kubernetes cluster. We have
 tested arm64 native builds and will add this as a supported architecture
 in the future.

+Python soft IOCs are also supported.
+
+GUI generation for engineering screens will be supported via the PVI project.
+See https://github.com/epics-containers/pvi.
+

 Additional Tools
 ----------------

 epics-containers-cli
 ~~~~~~~~~~~~~~~~~~~~
-This define the developer's 'outside of the container' helper tool. The command
+This is the developer's 'outside of the container' helper tool. The command
 line entry point is **ec**.

 The project is a python package featuring simple command
 line functions for deploying, monitoring building and debugging
 Generic IOCs and IOC instances. It is a wrapper
 around the standard command line tools kubectl, podman/docker, helm, and git
-but saves typing and provides help and command line completion.
+but saves typing and provides help and command line completion. It also
+can teach you how to use these tools by showing you the commands it is
+running.

 See `CLI` for details.

@@ -266,20 +295,23 @@ IOC Builder for EPICS and Kubernetes is the developer's 'inside the container'
 helper tool. It is a python package that is installed into the Generic IOC
 container images. It is used:

-- at container build time to fetch and build EPICS support modules
-- to generate the IOC source code and compile it
-- to extract all useful build artifacts into a runtime image
+- at container build time: to fetch and build EPICS support modules
+- at container build time: to generate the IOC source code and compile it
+- at container run time: to extract all useful build artifacts into a
+  runtime image

 See https://github.com/epics-containers/ibek.

 PVI
 ~~~
-Process Variables Interface is a python package that is installed into the
-Generic IOC container images. It is used to give structure to the IOC's PVI
-interface allowing us to:
+The Process Variables Interface project is a python package that is installed
+inside Generic IOC container images. It is used to give structure to the IOC's
+Process Variables, allowing us to:

-- add metadata to the IOCs DB records for use by bluesky
+- add metadata to the IOC's DB records for use by `Bluesky`_ and `Ophyd`_
 - auto generate screens for the device (as bob, adl or edm files)

+.. _Bluesky: https://blueskyproject.io/
+.. _Ophyd: https://github.com/bluesky/ophyd-async
