`docs/explanations/introduction.md`

An important principle of the approach presented here is that an IOC container image represents a 'Generic' IOC. The Generic IOC image is used for all IOC instances that connect to a given class of device. For example the Generic IOC image here: [ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2](https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravisruntime) uses the AreaDetector driver ADAravis to connect to GigE cameras.

The Generic IOC image contains:

- a set of compiled support modules
- a compiled IOC binary that links all those modules
- a dbd file for all the support modules.

It does not contain a startup script or EPICS database; these are instance specific and added at runtime.

An IOC instance runs in a container runtime by loading two things:

- The Generic IOC image passed to the container runtime.
- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is always `/epics/ioc/config`.

The configuration will bootstrap the unique properties of that instance. The following contents for the configuration are supported:

- `ioc.yaml`: an **ibek** IOC description file which **ibek** will use to generate st.cmd and ioc.subst (see the sketch after this list).
- `st.cmd` and `ioc.subst`: an IOC shell startup script and an optional substitution file. st.cmd can refer to any additional files in the configuration directory.
- `start.sh`: a bash script to fully override the startup of the IOC. start.sh can refer to any additional files in the configuration directory.
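
To make this concrete, a minimal `ioc.yaml` might look something like the sketch below. This is an illustration only: the entity types and parameter names are assumptions, and the real schema is defined by **ibek** and by the support yaml shipped inside whichever Generic IOC image you are using.

```yaml
# Hypothetical ibek IOC description - entity types and parameters are illustrative only
ioc_name: bl01t-ea-cam-01          # unique name for this IOC instance
description: Example GigE camera IOC instance

entities:
  # each entity instantiates a definition supplied by a support module's ibek support yaml
  - type: ADAravis.aravisCamera    # assumed entity name from the ADAravis support yaml
    PORT: CAM1                     # assumed asyn port name parameter
    P: BL01T-EA-CAM-01             # assumed PV prefix parameter
    ID: 192.168.0.10               # assumed camera address parameter
```
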
This approach reduces the number of images required and saves disk and memory. It also makes for simpler configuration management.

Throughout this documentation we will use the terms Generic IOC and IOC Instance. The word IOC without this context is ambiguous.

### Kubernetes

<https://kubernetes.io/>

Kubernetes efficiently manages containers across clusters of hosts. It builds upon years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community, since it was open-sourced in 2014.

Today Kubernetes is by far the dominant orchestration system for containers. It is managed by the Cloud Native Computing Foundation (CNCF), which is part of the Linux Foundation. You can read about its active community here: <https://www.cncf.io/>.

When deploying an IOC into a Kubernetes cluster, you request the resources needed by the IOC and Kubernetes will then schedule the IOC onto a suitable host with sufficient resources.

In this project we use Kubernetes and Helm (the package manager for Kubernetes) to provide a standard way of implementing these features:

- Auto start IOCs when the cluster comes up from power off
- Allocate a server with adequate resources on which to run each IOC
- Manually start and stop IOCs
- Monitor IOC health and automatically restart IOCs that have failed
- Deploy versioned IOCs to the beamline
- Report the versions, uptime, restarts and other metadata of the IOCs
- Rollback an IOC to a previous version
- Failover to another server (for soft IOCs not tied to hardware in a server) if the server fails
- View the current log
- View historical logs (via graylog or other centralized logging system)
- Connect to an IOC and interact with its shell
- Debug an IOC by starting a bash shell inside its container
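
As a sketch of what "requesting resources" means in practice, each IOC's pod specification (or the Helm values that generate it) typically carries a Kubernetes `resources` block like the one below. The key names are standard Kubernetes, but the values and the exact place they appear in the epics-containers charts are illustrative assumptions.

```yaml
# Illustrative Kubernetes resource requests/limits for a single IOC container
resources:
  requests:
    cpu: 100m        # the scheduler only places the IOC on a node with this much spare CPU
    memory: 256Mi
  limits:
    cpu: "1"         # the IOC is throttled or restarted if it exceeds its limits
    memory: 512Mi
```
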

### Kubernetes Alternatives

If you do not wish to maintain a Kubernetes cluster then you could simply install IOCs directly into the local docker or podman instance on each server. Instead of using Kubernetes and Helm, you can use docker compose to manage such IOC instances. This is just one example: epics-containers is intended to be modular, so that you can make use of any part of it without adopting the whole framework as used at DLS.

We provide a template services project that uses docker compose with docker or podman as the runtime engine. Docker compose allows us to describe a set of IOCs and other services for each beamline server, similar to the way Helm does. Where a beamline has multiple servers, the distribution of IOCs across those servers could be managed by maintaining a separate docker compose file for each server in the root of the services repository.
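
For illustration, a compose file for one beamline server might look roughly like the sketch below. The Generic IOC image tag and the config mount point are taken from this page; the service name and the remaining options are assumptions rather than a copy of the epics-containers template.

```yaml
# Illustrative compose file for one beamline server (service name and options are examples)
services:
  bl01t-ea-cam-01:
    image: ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2
    restart: unless-stopped
    network_mode: host        # assumed: IOCs commonly need host networking for Channel Access
    volumes:
      # the IOC instance configuration is mounted at the standard location
      - ./services/bl01t-ea-cam-01/config:/epics/ioc/config
```
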

If you choose to use this approach then you may find it useful to have another tool for viewing and managing the set of containers you have deployed across your beamline servers. There are various solutions for this; one that has been tested with **epics-containers** is Portainer <https://www.portainer.io/>. Portainer is a paid-for product that provides excellent visibility and control of your containers through a web interface. Such a tool could allow you to centrally manage the containers on all your servers.

The multi-server orchestration tool Docker Swarm might also serve to replace some of the functionality of Kubernetes. The epics-containers team have not yet tried Swarm, but it is compatible with the docker compose files we template.

### Helm

The packages are called Helm Charts. They contain templated YAML files to define a set of resources to apply to a Kubernetes cluster.

Helm has functions to deploy Charts to a cluster and manage multiple versions of the Chart within the cluster.

It also supports registries for storing version history of Charts, much like docker.

In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area, but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC Instances and any other services we require. Each IOC instance folder need only contain:

- a values.yaml file to override the default values in the repository's global Helm Chart (a sketch is given below this list)
- a config folder as described in {any}`generic-iocs`.
- a couple of boilerplate files that are the same for all IOCs.
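
As a sketch of the first item in this list, an instance's values.yaml might look like the following. The key names are hypothetical; the actual schema is defined by the global Helm Chart in use.

```yaml
# Hypothetical values.yaml for one IOC instance (key names are illustrative only)
image: ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2   # which Generic IOC image to run
ioc_name: bl01t-ea-cam-01                                       # unique instance name
# anything not overridden here falls back to the defaults in the repository's global Helm Chart
```
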

**epics-containers** does not use Helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead, a single global Helm Chart represents the shared elements between all IOC instances and is stored in a Helm registry. Each folder in the services repository is itself a Helm Chart that includes the global Chart as a dependency.
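
The "global Chart as a dependency" arrangement can be pictured with a Chart.yaml along these lines. The dependency name, version constraint and registry URL are placeholders, not the real epics-containers values.

```yaml
# Hypothetical Chart.yaml for one IOC instance folder in a services repository
apiVersion: v2
name: bl01t-ea-cam-01
version: 0.1.0
dependencies:
  - name: ioc-instance                          # placeholder name for the shared global Chart
    version: ">=1.0.0"                          # placeholder version constraint
    repository: oci://ghcr.io/epics-containers  # placeholder Helm/OCI registry location
```
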
### Repositories

All of the assets required to manage all of the IOC Instances for a facility are held in repositories.

Thus all version control is done via these repositories and no special locations in a shared filesystem are required. (The legacy approach at DLS relied heavily on known locations in a shared filesystem.)

In the **epics-containers** examples all repositories are held in the same github organization. This is nicely contained and means that only one set of credentials is required to access all the resources.

There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the proof of concept.

The classes of repository are as follows:

:Services Source Repositories:
  Define the IOC instances and other services for a beamline, accelerator domain or any other grouping strategy.

:An OCI Image Repository:
  Holds the Generic IOC container images and their dependencies. Also used to hold the Helm Charts that define the shared elements between all domains.

The following have been tested:

### Continuous Integration

Our examples all use continuous integration to get from pushed source to the published images, IOC instance Helm Charts and documentation.

This allows us to maintain a clean code base that is continually tested for integrity and also to maintain a direct relationship between source code version

:services source:
  - prepares a Helm Chart from each IOC instance or other service definition
  - tests that the Helm Chart is deployable (but does not deploy it)
  - locally launches each IOC instance and loads its configuration to verify that the configuration is valid (no system tests because this would require talking to real hardware instances).

:documentation source:
  - builds the sphinx docs that you are reading now
  - publishes it to github.io pages with version tag or branch tag.

:global Helm Chart source:
  - `ec-helm-charts` repo only
  - packages a Helm Chart from source
  - publishes it to github packages (only if the commit is tagged) or other OCI registry
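
As an illustration of one of these CI flows — building a Generic IOC container image and publishing it to an OCI registry — a minimal GitHub Actions job might look like the sketch below. This is not the actual epics-containers workflow; the trigger, registry details and tagging scheme are simplified assumptions.

```yaml
# Minimal illustrative CI job: build a Generic IOC image and push it for tagged commits
name: build-generic-ioc
on:
  push:
    tags: ["*"]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      packages: write                 # needed to push to the GitHub container registry
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}-runtime:${{ github.ref_name }}
```
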

This is the 'outside of the container' helper tool. The command line entry point is **ec**.

The project is a python package featuring simple command line functions for deploying and monitoring IOC instances. It is a thin wrapper around the ArgoCD, kubectl, helm and git commands. This tool can be used by developers and beamline staff to get a quick CLI based view of IOCs running in the cluster, as well as stop/start and obtain logs from them.

See {any}`CLI` for more details.

### **ibek**

IOC Builder for EPICS and Kubernetes is the developer's 'inside the container' helper tool. It is a python package that is installed into the Generic IOC container images. It is used:

- at container build time: to fetch and build EPICS support modules
- at container run time: to generate the runtime assets, e.g. the st.cmd and ioc.db files, from the ioc.yaml configuration file.
- inside the developer container: to assist with testing and debugging.

See <https://github.com/epics-containers/ibek>.

### PVI

The Process Variables Interface project is a python package that is installed inside Generic IOC container images. It is used to give structure to the IOC's Process Variables, allowing us to:

- add metadata to the IOC's DB records for use by [Bluesky] and [Ophyd]
- auto generate screens for the device (as bob, adl or edm files)

`docs/index.md`

Update for February 2024

The tutorials have now been updated. Recent changes include:

- epics-containers-cli has been renamed to edge-containers-cli. It now supports the deployment of general services as well as IOCs. It still has the entrypoint `ec` but the namespace `ioc` has been dropped and its functions are now in the root (e.g. `ec ioc deploy` is now `ec deploy`).
- Improved CI for {any}`services-repo`s and generic IOC repos.
- copier template based creation of new beamline, accelerator and generic IOC repos.
- This provides greatly improved ability to adopt updates to the template into your own repositories.

`docs/reference/cli.md`

The CLI is just a thin wrapper around the underlying tools that do the real work:

:git: the git version control system client

`ec` is useful because it saves typing and provides a consistent interface when working on multiple {any}`services-repo` s. This is because it uses the environment setup by the beamline repo's `environment.sh` script. See {any}`environment`.

`docs/reference/glossary.md`

# Glossary

(services-repo)=
## services repository

A repository that contains the definitions for a group of IOC instances and other services. The grouping of instances is up to the facility. At DLS the instances are grouped by beamline for beamline IOCs. Accelerator IOCs are grouped by technical domain as appropriate.

epics-containers supports two kinds of services repositories:

- **Kubernetes** services repositories. These are for deployment into a Kubernetes cluster. Each repository contains a set of **Helm Charts**, all of which will deploy into a single namespace in a single Kubernetes cluster.
- **Local Machine** services repositories. These are for deployment to a local machine using docker compose. Each repository contains a set of *compose.yaml* files that describe how to deploy a set of services to the local machine. These could potentially be used for production at a facility which does not use Kubernetes, but are primarily for development, testing and the earlier tutorials in this documentation.

(edge-containers-cli)=
## edge-containers-cli

A Python command line tool for the developer that runs *outside* of containers. It provides simple features for monitoring and managing IOC instances within a [](services-repo).

The name refers to 'edge' containers because these services all run close to the hardware. It uses the command line entry point `ec`.