Commit 76c2567: review essential concepts
1 parent 79cbf6b


File tree: 7 files changed, +63 -81 lines


docs/explanations/introduction.md

Lines changed: 48 additions & 71 deletions
@@ -41,74 +41,66 @@ forks of these repositories.

An important principle of the approach presented here is that an IOC container image represents a 'Generic' IOC. The Generic IOC image is used for all IOC instances that connect to a given class of device. For example the Generic IOC image here: [ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2](https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravisruntime) uses the AreaDetector driver ADAravis to connect to GigE cameras.

The generic IOC image contains:

- a set of compiled support modules
- a compiled IOC binary that links all those modules
- a dbd file for all the support modules.

It does not contain a startup script or an EPICS database; these are instance specific and added at runtime.
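
As a rough sketch, the layout inside a Generic IOC image follows the conventional `/epics` root; the exact paths below are assumptions for illustration only.

```text
/epics/epics-base/   # compiled EPICS base (assumed path)
/epics/support/      # compiled support module tree (assumed path)
/epics/ioc/          # IOC binary and dbd; config/ is mounted below here at runtime
```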

An IOC instance runs in a container runtime by loading two things:

- The Generic IOC image passed to the container runtime.
- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is always `/epics/ioc/config`.

The configuration will bootstrap the unique properties of that instance. The following contents for the configuration are supported (a sketch follows this list):

- ``ioc.yaml``: an **ibek** IOC description file which **ibek** will use to generate st.cmd and ioc.subst.
- ``st.cmd`` and ``ioc.subst``: an IOC shell startup script and an optional substitution file. st.cmd can refer to any additional files in the configuration directory.
- ``start.sh``: a bash script to fully override the startup of the IOC. start.sh can refer to any additional files in the configuration directory.
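
To make this concrete, below is a minimal sketch of an instance `config` folder's `ioc.yaml`. It is illustrative only: entity types and their parameters are defined by the support YAML inside the particular Generic IOC, so the camera entity and its parameter names here are hypothetical.

```yaml
# config/ioc.yaml - hypothetical instance definition for an ADAravis Generic IOC
ioc_name: bl01t-ea-cam-01            # hypothetical instance name
description: an example GigE camera IOC instance

entities:
  # entity type and parameters are assumptions; the real schema comes
  # from the support YAML baked into the Generic IOC image
  - type: ADAravis.aravisCamera
    PORT: CAM1
    ID: 192.168.0.10
```

**ibek** expands a file like this into st.cmd and ioc.subst when the container starts.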

This approach reduces the number of images required and saves disk and memory. It also makes for simpler configuration management.

Throughout this documentation we will use the terms Generic IOC and IOC Instance. The word IOC without this context is ambiguous.

### Kubernetes

<https://kubernetes.io/>

Kubernetes efficiently manages containers across clusters of hosts. It builds upon years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community, since it was open-sourced in 2014.

Today Kubernetes is by far the dominant orchestration system for containers. It is managed by the Cloud Native Computing Foundation (CNCF), which is part of the Linux Foundation. You can read about its active community at <https://www.cncf.io/>.

When deploying an IOC into a Kubernetes cluster, you request the resources needed by the IOC and Kubernetes will then schedule the IOC onto a suitable host with sufficient resources.

In this project we use Kubernetes and Helm (the package manager for Kubernetes) to provide a standard way of implementing these features (the sketch after this list shows how some of them map onto everyday commands):

- Auto start IOCs when the cluster comes up from power off
- Allocate a server with adequate resources on which to run each IOC
- Manually start and stop IOCs
- Monitor IOC health and automatically restart IOCs that have failed
- Deploy versioned IOCs to the beamline
- Report the versions, uptime, restarts and other metadata of the IOCs
- Roll back an IOC to a previous version
- Fail over to another server (for soft IOCs not tied to hardware in a server) if the server fails
- View the current log
- View historical logs (via Graylog or other centralized logging system)
- Connect to an IOC and interact with its shell
- Debug an IOC by starting a bash shell inside its container
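
Several of these features map onto standard kubectl and helm commands, as this hedged sketch shows; the namespace `bl01t` and name `bl01t-ea-cam-01` are hypothetical, and the `deploy/` prefix assumes the IOC runs as a Kubernetes Deployment.

```bash
# view the current log of an IOC
kubectl -n bl01t logs deploy/bl01t-ea-cam-01

# debug an IOC by starting a bash shell inside its container
kubectl -n bl01t exec -it deploy/bl01t-ea-cam-01 -- bash

# roll an IOC back to the previous Helm revision
helm -n bl01t rollback bl01t-ea-cam-01
```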

### Kubernetes Alternatives

If you do not wish to maintain a Kubernetes cluster then you could simply install IOCs directly into the local docker or podman instance on each server. Instead of using Kubernetes and Helm, you can use docker compose to manage such IOC instances. But this is just an example; epics-containers is intended to be modular so that you can make use of any parts of it without adopting the whole framework as used at DLS.

We provide a template services project that uses docker compose with docker or podman as the runtime engine. Docker compose allows us to describe a set of IOCs and other services for each beamline server, similar to the way Helm does. Where a beamline has multiple servers, the distribution of IOCs across those servers could be managed by maintaining a separate docker-compose file for each server in the root of the services repository (see the sketch below).

If you choose to use this approach then you may find it useful to have another tool for viewing and managing the set of containers you have deployed across your beamline servers. There are various solutions for this; one that has been tested with **epics-containers** is Portainer <https://www.portainer.io/>. Portainer is a paid-for product that provides excellent visibility and control of your containers through a web interface. Such a tool could allow you to centrally manage the containers on all your servers.

The multi-server orchestration tool Docker Swarm might also serve to replace some of the functionality of Kubernetes. The epics-containers team have not yet tried Swarm, but it is compatible with the docker-compose files we template.

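As a hedged illustration of the docker compose approach, one beamline server's compose file might contain entries like the sketch below; the service name and volume path are hypothetical and the real templated files will differ.

```yaml
# compose.yaml - illustrative sketch for a single beamline server
services:
  bl01t-ea-cam-01:                  # hypothetical IOC instance name
    image: ghcr.io/epics-containers/ioc-adaravis-runtime:2024.2.2
    restart: unless-stopped         # auto start when the server boots
    network_mode: host              # assumption: host networking for EPICS Channel Access
    volumes:
      # mount the instance configuration at the standard location
      - ./services/bl01t-ea-cam-01/config:/epics/ioc/config:ro
```
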
### Helm

@@ -120,38 +112,28 @@ The packages are called Helm Charts. They contain templated YAML files to
define a set of resources to apply to a Kubernetes cluster.

Helm has functions to deploy Charts to a cluster and manage multiple versions of the Chart within the cluster.

It also supports registries for storing version history of Charts, much like docker.

In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC Instances and any other services we require. Each IOC instance folder need only contain (see the sketch after this list):

- a values.yaml file to override the default values in the repository's global Helm Chart
- a config folder as described in {any}`generic-iocs`.
- a couple of boilerplate files that are the same for all IOCs.
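
For illustration, the boilerplate might look like the sketch below; the global chart name, registry location and version constraint are assumptions, so consult the template services repository for the real files.

```yaml
# services/bl01t-ea-cam-01/Chart.yaml - hedged sketch of the boilerplate
apiVersion: v2
name: bl01t-ea-cam-01               # hypothetical IOC instance name
version: 1.0.0
dependencies:
  # pull in the shared global chart; name, registry and version are assumptions
  - name: ioc-instance
    repository: oci://ghcr.io/epics-containers
    version: "3.x.x"
```

The instance's values.yaml then typically just selects the Generic IOC image to run and any instance-specific settings.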

**epics-containers** does not use helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead a single global helm Chart represents the shared elements between all IOC instances and is stored in a helm registry. Each folder in the services repository is itself a helm Chart that includes the global Chart as a dependency.

### Repositories

All of the assets required to manage all of the IOC Instances for a facility are held in repositories.

Thus all version control is done via these repositories and no special locations in a shared filesystem are required. (The legacy approach at DLS relied heavily on known locations in a shared filesystem.)

In the **epics-containers** examples all repositories are held in the same github organization. This is nicely contained and means that only one set of credentials is required to access all the resources.

There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the proof of concept.

The classes of repository are as follows:

@@ -169,11 +151,11 @@ The classes of repository are as follows:
:Services Source Repositories:

  Define the IOC instances and other services for a beamline, accelerator domain or any other grouping strategy.

:An OCI Image Repository:

  Holds the Generic IOC container images and their dependencies. Also used to hold the helm Charts that define the shared elements between all domains.

The following have been tested:
@@ -185,7 +167,7 @@ The classes of repository are as follows:

### Continuous Integration

Our examples all use continuous integration to get from pushed source to the published images, IOC instance helm Charts and documentation.

This allows us to maintain a clean code base that is continually tested for integrity and also to maintain a direct relationship between source code version
@@ -203,19 +185,19 @@ There are these types of CI:
or other OCI registry

:services source:
  - prepares a helm Chart from each IOC instance or other service definition
  - tests that the helm Chart is deployable (but does not deploy it)
  - locally launches each IOC instance and loads its configuration to verify that the configuration is valid (no system tests because this would require talking to real hardware instances).

:documentation source:
  - builds the sphinx docs that you are reading now
  - publishes them to github.io pages with version tag or branch tag.

:global helm Chart source:
  - ``ec-helm-charts`` repo only
  - packages a helm Chart from source
  - publishes it to github packages (only if the commit is tagged) or other OCI registry
```
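
As a hedged illustration, the 'tests that the helm Chart is deployable' step can be performed with standard helm commands such as these; the instance path is hypothetical.

```bash
# render and lint an instance Chart locally without deploying it
helm dependency update services/bl01t-ea-cam-01   # fetch the global chart
helm template services/bl01t-ea-cam-01            # render the manifests
helm lint services/bl01t-ea-cam-01                # static checks
```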
@@ -242,28 +224,23 @@ GUI generation for engineering screens is supported via the PVI project. See <ht
This is the 'outside of the container' helper tool. The command line entry point is **ec**.

The project is a python package featuring simple command line functions for deploying and monitoring IOC instances. It is a thin wrapper around the ArgoCD, kubectl, helm and git commands. This tool can be used by developers and beamline staff to get a quick CLI based view of IOCs running in the cluster, as well as stop/start and obtain logs from them.

See {any}`CLI` for more details.
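
A hedged sketch of typical **ec** usage follows. `ec --help` and `ec deploy` are referenced elsewhere in these docs; the instance name, version and exact argument order below are assumptions, and `ec logs` is shown as a hypothetical subcommand for the log feature described above.

```bash
ec --help                            # list the available commands
ec deploy bl01t-ea-cam-01 2024.2.1   # deploy a versioned IOC instance (arguments assumed)
ec logs bl01t-ea-cam-01              # hypothetical: fetch logs from a running IOC
```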

### **ibek**

IOC Builder for EPICS and Kubernetes is the developer's 'inside the container' helper tool. It is a python package that is installed into the Generic IOC container images. It is used:

- at container build time: to fetch and build EPICS support modules
- at container run time: to generate all useful runtime artifacts, e.g. the st.cmd and ioc.db files, from the ioc.yaml configuration file.
- inside the developer container: to assist with testing and debugging.

See <https://github.com/epics-containers/ibek>.

### PVI

The Process Variables Interface project is a python package that is installed inside Generic IOC container images. It is used to give structure to the IOC's Process Variables, allowing us to:

- add metadata to the IOC's DB records for use by [Bluesky] and [Ophyd]
- auto generate screens for the device (as bob, adl or edm files)

docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ Update for February 2024
The tutorials have now been updated. Recent changes include:

- epics-containers-cli has been renamed to edge-containers-cli. It now supports the deployment of general services as well as IOCs. It still has the entrypoint `ec` but the namespace `ioc` has been dropped and its functions are now in the root (e.g. `ec ioc deploy` is now `ec deploy`).
- Improved CI for {any}`services-repo`s and generic IOC repos.
- copier template based creation of new beamline, accelerator and generic IOC repos. This provides greatly improved ability to adopt updates to the template into your own repositories.

docs/overview.md

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ There are 5 themes to this strategy:
No shared file systems required.

:Continuous Integration / Deployment:
  Source repositories automatically build containers and helm charts, delivering them to OCI registries. Services repositories automatically deploy IOCs to Kubernetes clusters.
```

docs/reference/cli.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ The CLI is just a thin wrapper around the underlying tools that do the real work
:git: the git version control system client
```

`ec` is useful because it saves typing and provides a consistent interface when working on multiple {any}`services-repo` s. This is because it uses the environment setup by the beamline repo's `environment.sh` script. See {any}`environment`.

To see the available commands, run `ec --help`.

docs/reference/glossary.md

Lines changed: 9 additions & 4 deletions
@@ -1,15 +1,20 @@
# Glossary

(services-repo)=
## services repository

A repository that contains the definitions for a group of IOC instances and other services. The grouping of instances is up to the facility. At DLS the instances are grouped by beamline for beamline IOCs. Accelerator IOC groupings are by technical domain as appropriate.

epics-containers supports two kinds of services repositories:

- **Kubernetes** services repositories. These are for deployment into a Kubernetes cluster. Each repository contains a set of **Helm Charts**, all of which will deploy into a single namespace in a single Kubernetes cluster.
- **Local Machine** services repositories. These are for deployment to a local machine using docker-compose. Each repository contains a set of *compose.yaml* files that describe how to deploy a set of services to the local machine. These could potentially be used for production at a facility which does not use Kubernetes, but are primarily for development, testing and the earlier tutorials in this documentation.

(edge-containers-cli)=
## edge-containers-cli

A Python command line tool for the developer that runs *outside* of containers. It provides simple features for monitoring and managing IOC instances within a [](services-repo).

So named 'edge' containers because these services all run close to the hardware. Uses the command line entry point `ec`.