An important principle of the approach presented here is that an IOC container image represents a 'Generic' IOC. The Generic IOC image is used for all IOC instances that connect to a given class of device. For example, the Generic IOC image here: [ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2024.1.2](https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravis-linux-runtime) uses the AreaDetector driver ADAravis to connect to GigE cameras.
An IOC instance runs in a container runtime by loading two things:
- The Generic IOC image passed to the container runtime.
- The IOC instance configuration. This is mapped into the container at
runtime by mounting it into the filesystem. The mount point
for this configuration is always `/epics/ioc/config`.
The configuration will bootstrap the unique properties of that instance.
The following contents for the configuration are supported:
- start.sh a bash script to fully override the startup of the IOC. start.sh
can refer to any additional files in the configuration directory.
This approach reduces the number of images required and saves disk and memory. It also makes for simpler configuration management.
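For example, here is a minimal sketch of running an IOC instance locally with podman, using the Generic IOC image named above (the instance name and host path are hypothetical):

```bash
# Launch an IOC instance: a Generic IOC image plus an instance config folder
# mounted at the fixed path /epics/ioc/config (the host path is hypothetical).
podman run -it --rm \
  -v /path/to/bl01t-ea-ioc-01/config:/epics/ioc/config \
  ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2024.1.2
```

A real GigE camera IOC would typically also need access to the camera network (for example `--network host`), but the pattern of one Generic IOC image plus one mounted configuration is the same for every instance.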
Throughout this documentation we will use the terms Generic IOC and
IOC Instance. The word IOC without this context is ambiguous.
### Kubernetes

Kubernetes easily and efficiently manages containers across clusters of hosts.
When deploying an IOC into a Kubernetes cluster, you request the resources
required by the IOC and Kubernetes will then schedule the IOC onto a suitable host.
It builds upon years of experience running production workloads at Google, combined with best-of-breed ideas and practices from the community since it was open-sourced in 2014.
Today it is by far the dominant orchestration technology for containers.
In this project we use Kubernetes and helm to provide a standard way of
implementing these features:
- Auto start IOCs when the cluster comes up from power off
- Manually start and stop IOCs
- Monitor IOC health and versions
- Deploy versioned IOCs to the beamline
- Roll back to a previous IOC version
- Allocate a server with adequate resources on which to run each IOC
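As a rough illustration, stopping, starting and rolling back an IOC correspond to standard Kubernetes and Helm operations like the following; the namespace, release name and the assumption that the IOC runs as a StatefulSet are hypothetical, not a fixed convention:

```bash
# Hypothetical examples: assumes an IOC instance deployed as a Helm release
# named bl01t-ea-ioc-01 in namespace bl01t, backed by a StatefulSet.
kubectl scale statefulset bl01t-ea-ioc-01 --replicas=0 -n bl01t   # stop the IOC
kubectl scale statefulset bl01t-ea-ioc-01 --replicas=1 -n bl01t   # start it again
helm rollback bl01t-ea-ioc-01 1 -n bl01t                          # roll back to a previous revision
```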
### Helm
Helm also supports registries for storing version history of charts,
much like Docker.
In this project we use Helm Charts to define and deploy IOC instances. Each beamline or accelerator area has its own git {any}`ec-services-repo` that holds the Helm Charts for its IOC Instances. Each IOC instance need only provide:
- a values.yaml file to override the default values in the repository's global Helm Chart
- a config folder as described in {any}`generic-iocs`.
**epics-containers** does not use helm repositories for storing IOC instances.
Such repositories only hold a zipped version of the chart and a values.yaml file,
and as such do not hold enough information to fully reproduce an IOC instance.
Instead we provide a command line tool for installing and updating
IOCs, which performs the following steps (a manual equivalent is sketched below the list):
- clone the beamline repository at a specific tag to a temporary folder
- install the resulting chart into the cluster
- remove the temporary folder
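A minimal sketch of those steps performed by hand is shown below; the repository URL, tag, chart path and instance name are all hypothetical placeholders:

```bash
# Hypothetical manual equivalent of the deployment steps above.
git clone --depth 1 --branch 2024.1.1 \
    https://github.com/example-org/bl01t-services /tmp/bl01t-services      # clone at a specific tag
helm install bl01t-ea-ioc-01 /tmp/bl01t-services/services/bl01t-ea-ioc-01  # install the chart for one instance
rm -rf /tmp/bl01t-services                                                 # remove the temporary folder
```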
### Repositories
There are many alternative services for storing these repositories, both
in the cloud and on premises. Below we list the choices we have tested
during the POC.

The classes of repository are as follows:

```{eval-rst}
:Source Repository:
  Holds the source code but also provides the Continuous Integration actions for
  testing, building and publishing to the image / helm repositories. These have
  been tested:

  - GitHub
  - GitLab (on premises)

:Generic IOC Source Repositories:
  Define how a Generic IOC image is built. These repositories do not typically
  include source code; instead they hold a set of instructions for building the
  Generic IOC image by compiling source from a number of upstream support module
  repositories. Boilerplate IOC source code is also included in the Generic IOC
  source repository and can be customized if needed.

:EC Services Source Repositories:
  Define the IOC instances for a beamline or accelerator area. This includes the
  IOC boot scripts and any other configuration required to make the IOC instance
  unique. For ibek-based IOCs, each IOC instance is defined by an ibek yaml file
  only.

:An OCI Image Repository:
  Holds the Generic IOC container images and their dependencies. Also used to
  hold the helm charts that define the shared elements between all domains.

  The following have been tested:

  - GitHub Container Registry
  - DockerHub
  - Google Cloud Container Registry
```
### Continuous Integration
There are these types of CI:

```{eval-rst}
:Generic IOC source:
  - builds a Generic IOC container image
  - runs some tests against that image - these will eventually include
    system tests that talk to simulated hardware
  - publishes the image to github packages (only if the commit is tagged)
    or other OCI registry

:`ec-services-repo` source:
  - prepares a helm chart from each IOC instance definition
  - tests that the helm chart is deployable (but does not deploy it)
  - locally launches each IOC instance and loads its configuration to
    verify that the configuration is valid (no system tests because this
    would require access to real hardware)

:documentation source:
  - builds the sphinx docs
  - publishes it to github.io pages with version tag or branch tag.

:global helm chart source:
  - ``ec-helm-charts`` repo only
  - packages a helm chart from source
  - publishes it to github packages (only if the commit is tagged)
    or other OCI registry
```
## Scope
See <https://github.com/epics-containers/pvi>.
## Additional Tools
### edge-containers-cli
This is the developer's 'outside of the container' helper tool. The command
line entry point is **ec**.
The project is a Python package featuring simple command
line functions for deploying and monitoring IOC instances. It is a wrapper
around the standard command line tools kubectl, podman/docker, helm, and git
but saves typing and provides help and command line completion. It also
can teach you how to use these tools by showing you the commands it is
running.
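For example, the kind of kubectl commands that **ec** wraps when monitoring an IOC instance look like this (the namespace and pod name are hypothetical):

```bash
# Hypothetical examples of the underlying commands that ec saves you from typing.
kubectl get pods -n bl01t                            # list the IOC pods in a beamline namespace
kubectl logs -n bl01t bl01t-ea-ioc-01-0 -f           # follow the log of one IOC instance
kubectl exec -it -n bl01t bl01t-ea-ioc-01-0 -- bash  # open a shell inside the IOC container
```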
See {any}`CLI` for more details.
### **ibek**
**ibek** is the IOC Builder for EPICS and Kubernetes: the developer's 'inside the container' helper tool. It is a Python package that is installed into the Generic IOC
container images. It is used:
- at container build time: to fetch and build EPICS support modules
- at container run time: to extract all useful build artifacts into a light-weight runtime image
(ec-services-repo)=
## ec-services repository
A repository that contains the definitions for a group of IOC and service instances that are deployed in a Kubernetes cluster. The grouping of instances is up to the facility. At DLS the instances are grouped by beamline for beamline IOCs. Accelerator IOC groupings are by location or by technical domain as appropriate.
(edge-containers-cli)=
## edge-containers-cli
A Python command line tool for the developer that runs *outside* of containers. It provides features for deploying and managing service and IOC instances within an [](ec-services-repo).
The name refers to 'edge' containers: these services all run close to the hardware. The command line entry point is `ec`.
(ibek)=
## ibek
A Python command line tool that provides services *inside* the Generic IOC container, such as:
- building support modules at build time
- configuring global assets such as the RELEASE file at build time
- converting developer containers into light-weight runtime containers
- generating startup assets for an IOC Instance from a set of yaml files at runtime