docs/explanations/introduction.md: 27 additions & 31 deletions
@@ -23,7 +23,7 @@ and their APIs such that container images can be interchanged between
 different frameworks.
 
 Thus, in this project we develop, build and test our container images
-using docker or docker but the images can be run under Kubernetes' own
+using docker or podman but the images can be run under Kubernetes' own
 container runtime.
 
 This article does a good job of explaining the relationship between docker /
@@ -52,9 +52,9 @@ It does not contain a startup script or EPICS database, these are instance specific
 An IOC instance runs in a container runtime by loading two things:
 
 - The Generic IOC image passed to the container runtime.
-- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is always `/epics/ioc/config`.
+- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is usually the folder `/epics/ioc/config`.
 
-The configuration will bootstrap the unique properties of that instance. The following contents for the configuration are supported:
+The configuration will bootstrap the unique properties of that instance. The following contents for the configuration folder are supported:
 
 - ``ioc.yaml``: an **ibek** IOC description file which **ibek** will use to generate
   st.cmd and ioc.subst.
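For readers new to **ibek**, a rough sketch of what the `config/ioc.yaml` mentioned above can look like is shown below. This is an illustrative sketch only: the entity types and parameter names are assumptions made for the example, not definitions taken from a real `ibek-support` module.

```yaml
# config/ioc.yaml -- illustrative sketch of an ibek IOC description
ioc_name: bl01t-ea-ioc-01            # name this IOC instance runs under
description: Example simulated detector IOC

entities:
  # hypothetical entity providing IOC monitoring records
  - type: devIocStats.iocAdminSoft
    IOC: BL01T-EA-IOC-01

  # hypothetical entity instantiating a simulated detector driver
  - type: ADSimDetector.simDetector
    PORT: DET.CAM
    P: BL01T-EA-DET-01
    R: ":CAM:"
```

At container startup **ibek** reads this description and generates the `st.cmd` and `ioc.subst` files referred to above.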
@@ -94,9 +94,9 @@ implementing these features:
 - debug an IOC by starting a bash shell inside its container
 
 ### Kubernetes Alternatives
-If you do not wish to maintain a Kubernetes cluster then you could simply install IOCs directly into the local docker or docker instance on each server. Instead of using Kubernetes and Helm, you can use docker compose to manage such IOC instances. But this is just an example, epics-containers is intended to be modular so that you can make use of any parts of it without adopting the whole framework as used at DLS.
+If you do not wish to maintain a Kubernetes cluster then you could simply install IOCs directly into the local docker or podman instance on each server. Instead of using Kubernetes and Helm, you can use docker compose to manage such IOC instances. But this is just an example, epics-containers is intended to be modular so that you can make use of any parts of it without adopting the whole framework as used at DLS.
 
-We provide a template services project that uses docker compose with docker or docker as the runtime engine. Docker compose allows us to describe a set of IOCs and other services for each beamline server, similar to the way Helm does. Where a beamline has multiple servers, the distribution of IOCs across those servers could be managed by maintaining a separate docker-compose file for each server in the root of the services repository.
+We provide a template services project that uses docker compose with docker or podman as the runtime engine. Docker compose allows us to describe a set of IOCs and other services for each beamline server, similar to the way Helm does. Where a beamline has multiple servers, the distribution of IOCs across those servers could be managed by maintaining a separate docker-compose file for each server in the root of the services repository.
 
 If you choose to use this approach then you may find it useful to have another tool for viewing and managing the set of containers you have deployed across your beamline servers. There are various solutions for this, one that has been tested with **epics-containers** is Portainer <https://www.portainer.io/>. Portainer is a paid for product that provides excellent visibility and control of your containers through a web interface. Such a tool could allow you to centrally manage the containers on all your servers.
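As a concrete illustration of the docker compose approach described in the hunk above, a per-server compose file might define one service per IOC instance. The service name, image tag and paths below are assumptions for the sake of the example, not the template project's actual layout.

```yaml
# docker-compose.yml -- hypothetical sketch of a single IOC instance service
services:
  bl01t-ea-ioc-01:
    image: ghcr.io/epics-containers/ioc-adsimdetector-runtime:2024.4.1   # a Generic IOC image (example tag)
    restart: unless-stopped
    network_mode: host            # host networking so EPICS Channel Access traffic reaches clients
    volumes:
      # mount this instance's configuration at the conventional path discussed earlier
      - ./services/bl01t-ea-ioc-01/config:/epics/ioc/config
```

Running `docker compose up -d` (or the podman equivalent) on each beamline server then starts the IOC instances defined in that server's file.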
@@ -117,13 +117,13 @@ of the Chart within the cluster.
 It also supports registries for storing version history of Charts,
 much like docker.
 
-In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC Instances and any other services we require. Each IOC instance folder need only container:e
+In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC Instances and any other services we require. Each IOC instance folder need only contain:
 
-- a values.yaml file to override the default values in the repository's global Helm Chart
+- a values.yaml file to override the default values in the repository's global values.yaml.
 - a config folder as described in {any}`generic-iocs`.
-- a couple of boilerplate files that are the same for all IOCs.
+- a couple of boilerplate Helm files that are the same for all IOCs.
 
-**epics-containers** does not use helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead a single global helm chart represents the shared elements between all IOC instances and is stored in a helm registry. Each folder in the services repository is itself a helm chart that includes the global chart as a dependency.
+**epics-containers** does not use helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead a single global helm chart represents the shared elements between all IOC instances and is stored in a helm registry. Each folder in the services repository is itself a helm chart that includes that global chart as a dependency.
 
 ### Repositories
 
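To make the per-instance Helm layout described in the hunk above concrete, the sketch below shows the two small files an IOC instance folder might hold. The global chart name, version and registry URL are assumptions for illustration; the real chart schema may differ.

```yaml
# services/bl01t-ea-ioc-01/Chart.yaml -- boilerplate pulling in the assumed shared global chart
apiVersion: v2
name: bl01t-ea-ioc-01
version: 1.0.0
dependencies:
  - name: ioc-instance                          # assumed name of the global chart
    version: "4.0.0"                            # example version
    repository: oci://ghcr.io/epics-containers  # assumed OCI registry location
---
# services/bl01t-ea-ioc-01/values.yaml -- overrides of the global chart's defaults (keys are assumptions)
image: ghcr.io/epics-containers/ioc-adsimdetector-runtime:2024.4.1
```

Because the instance chart only declares a dependency and a handful of values, the shared behaviour lives in one place and each IOC folder stays tiny.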
@@ -135,25 +135,26 @@ In the **epics-containers** examples all repositories are held in the same github
 
 There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the proof of concept.
 
-The classes of repository are as follows:
+The most common classes of repository are as follows:
 
 ```{eval-rst}
-:Source Repository:
-
-    Holds the source code but also provides the Continuous Integration actions for testing, building and publishing to the image / helm repositories. These have been tested:
+:Generic IOC Source Repositories:
+
+    Define how a Generic IOC image is built, this does not typically include source code, but instead is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed. These have been tested:
 
     - GitHub
    - GitLab (on premises)
 
-:Generic IOC Source Repositories:
-
-    Define how a Generic IOC image is built, this does not typically include source code, but instead is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed.
-
-:Services Source Repositories:
-
-    Define the IOC instances and other services for a beamline, accelerator domain or any other grouping strategy.
+:Services Source Repositories:
+
+    Define the IOC instances and other services for a beamline, accelerator technical area or any other grouping strategy. These have been tested:
+
+    - GitHub
+    - GitLab (on premises)
 
-:An OCI Image Repository:
+:An OCI Image Registry:
 
     Holds the Generic IOC container images and their dependencies. Also used to hold the helm Charts that define the shared elements between all domains.
@@ -167,7 +168,7 @@ The classes of repository are as follows:
 ### Continuous Integration
 
 Our examples all use continuous integration to get from pushed source
-to the published images, IOC instances helm Charts and documentation.
+to the published images, IOC instance Helm Charts and documentation.
 
 This allows us to maintain a clean code base that is continually tested for
 integrity and also to maintain a direct relationship between source code version
@@ -184,18 +185,18 @@ There are these types of CI:
    - publishes the image to github packages (only if the commit is tagged)
      or other OCI registry
 
-:services source:
+:Services Source:
    - prepares a helm Chart from each IOC instance or other service definition
    - tests that the helm Chart is deployable (but does not deploy it)
    - locally launches each IOC instance and loads its configuration to
      verify that the configuration is valid (no system tests because this
      would require talking to real hardware instances).
 
-:documentation source:
+:Documentation Source:
    - builds the sphinx docs that you are reading now
    - publishes it to github.io pages with version tag or branch tag.
 
-:global helm Chart source:
+:Global Helm Chart Source:
    - ``ec-helm-charts`` repo only
    - packages a helm Chart from source
    - publishes it to github packages (only if the commit is tagged)
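One way to implement the "tests that the helm Chart is deployable (but does not deploy it)" step listed above is to render every chart with `helm template`. The workflow below is an illustrative sketch under assumed paths and job names, not the actual epics-containers CI.

```yaml
# .github/workflows/helm-check.yml -- hypothetical services-source CI job
name: helm-check
on: [push, pull_request]
jobs:
  helm-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Render each IOC instance chart without deploying it
        run: |
          for chart in services/*/; do
            helm dependency update "$chart"      # pull in the shared global chart
            helm lint "$chart"                   # static checks on the chart
            helm template "$chart" > /dev/null   # fails if the chart cannot be rendered
          done
```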
@@ -204,24 +205,19 @@ There are these types of CI:
 
 ### Continuous Deployment
 
-ArgoCD is a Kubernetes controller that continuously monitors running applications and compares the current state with the desired state described in a git repository. If the current state does not match the desired state, ArgoCD will attempt to reconcile the two.
+ArgoCD is a Kubernetes controller that continuously monitors running applications and compares their current state with the desired state described in a git repository. If the current state does not match the desired state, ArgoCD will attempt to reconcile the two.
 
-For this purpose each services repository will have a companion deployment repository which tracks which versions of each IOC in the services repository should currently be deployed to the cluster. This list of IOC versions is in a single YAML file and updating this file and pushing it to the deployment repository will trigger ArgoCD to update the cluster.
+For this purpose each services repository will have a companion deployment repository which tracks which version of each IOC in the services repository should currently be deployed to the cluster. This list of IOC versions is in a single YAML file and updating this file and pushing it to the deployment repository will trigger ArgoCD to update the cluster accordingly.
 
 In this fashion changes to IOC versions are tracked in git and it is easy to roll back to the same state as a given date because there is a complete record.
 
 ## Scope
 
-This project initially targets x86_64 Linux Soft IOCs and RTEMS IOC running
-on MVME5500 hardware. Soft IOCs that require access to hardware on the
-server (e.g. USB or PCIe) will be supported by mounting the hardware into
-the container (these IOCS will not support Kubernetes failover).
+This project initially targets x86_64 Linux Soft IOCs and RTEMS 'hard' IOCs running on MVME5500 hardware. Soft IOCs that require access to hardware on the server (e.g. USB or PCIe) will be supported by mounting the hardware into the container (these IOCs will not support Kubernetes failover).
 
-Other linux architectures could be added to the Kubernetes cluster. We have
-tested arm64 native builds and will add this as a supported architecture
-in the future.
+Other linux architectures could be added to the Kubernetes cluster. We have tested arm64 native builds and will add this as a supported architecture in the future.
 
-Python soft IOCs are also supported.
+Python soft IOCs are also supported. See <https://github.com/DiamondLightSource/pythonSoftIOC>
 
 GUI generation for engineering screens is supported via the PVI project. See <https://github.com/epics-containers/pvi>.
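The single YAML file of IOC versions mentioned under Continuous Deployment above can be as small as a map from service name to released version. The file name and layout here are hypothetical, intended only to illustrate the idea.

```yaml
# versions.yaml in the deployment repository -- hypothetical layout
bl01t-ea-ioc-01: 2024.4.1
bl01t-ea-ioc-02: 2024.3.2
bl01t-di-cam-01: 2024.4.1
```

Rolling a beamline back to an earlier state is then just a matter of reverting this file to the corresponding git commit and letting ArgoCD reconcile the cluster.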
docs/tutorials/deploy_example.md: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 
 This tutorial will show you how to deploy and manage the example IOC Instance that came with the template beamline repository. You will need to have your own `t01-services` beamline repository from the previous tutorial.
 
-For these early tutorials we are not using Kubernetes and instead are deploying IOCs to the local docker or docker instance. These kind of deployments are ideal for testing and development on a developer workstation. They could also potentially be used for production deployments to beamline servers where Kubernetes is not available.
+For these early tutorials we are not using Kubernetes and instead are deploying IOCs to the local docker or podman instance. These kinds of deployments are ideal for testing and development on a developer workstation. They could also potentially be used for production deployments to beamline servers where Kubernetes is not available.
docs/tutorials/ioc_changes2.md: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ the `Remote` menu with `Ctrl+Alt+O` and select `Reopen in Container`.
 The `build` script does two things.
 
 - it fetches the git submodule called `ibek-support`. This submodule is shared between all the Generic IOC container images and contains the support YAML files that tell `ibek` how to build support modules inside the container environment and how to use them at runtime.
-- it builds the Generic IOC container image developer target locally using docker or docker.
+- it builds the Generic IOC container image developer target locally using docker or podman.
docs/tutorials/setup_workstation.md: 1 addition & 1 deletion
@@ -188,7 +188,7 @@ You don't need Kubernetes yet.
 
 The following tutorials will take you through creating, deploying and debugging IOC instances, generic IOCs and support modules.
 
-For simplicity we don't encourage using Kubernetes at this stage. Instead we will deploy containers to the local workstation's docker or docker instance using docker compose.
+For simplicity we don't encourage using Kubernetes at this stage. Instead we will deploy containers to the local workstation's docker or podman instance using docker compose.
 
 If you are planning not to use Kubernetes at all then now might be a good time to install an alternative container management platform such as [Portainer](https://www.portainer.io/). Such tools will help you visualise and manage your containers across a number of servers. These are not required and you could just manage everything from the docker compose command line if you prefer.
0 commit comments