**docs/explanations/introduction.md** (+30 −26)
It does not contain a startup script or EPICS database; these are instance specific.

An IOC instance runs in a container runtime by loading two things:

- The Generic IOC image passed to the container runtime.
- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is usually the folder `/epics/ioc/config`.

The configuration will bootstrap the unique properties of that instance. The following contents for the configuration folder are supported:

- ``ioc.yaml``: an **ibek** IOC description file which **ibek** will use to generate st.cmd and ioc.subst.
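As an illustration, a minimal ``ioc.yaml`` might look like the sketch below. The entity types and parameters available are defined entirely by the support YAML built into the particular Generic IOC, so the names here are placeholders rather than a real schema:

```yaml
# Hypothetical ibek IOC description; entity types and their parameters
# come from the Generic IOC's support YAML, so these are placeholders.
ioc_name: bl01t-ea-ioc-01
description: an example simulation detector IOC
entities:
  - type: devIocStats.iocAdminSoft   # illustrative entity type
    IOC: bl01t-ea-ioc-01
```

**ibek** expands each entity into the corresponding startup-script and substitution-file snippets when the container starts.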
… of the Chart within the cluster. It also supports registries for storing version history of Charts, much like docker.

In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area, but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC instances and any other services we require. Each IOC instance folder need only contain:

- a values.yaml file to override the default values in the repository's global values.yaml.
- a config folder as described in {any}`generic-iocs`.
- a couple of boilerplate Helm files that are the same for all IOCs.

**epics-containers** does not use helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead a single global helm chart represents the shared elements between all IOC instances and is stored in a helm registry. Each folder in the services repository is itself a helm chart that includes that global chart as a dependency.
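For example, the boilerplate `Chart.yaml` in an instance folder might pull in the shared chart roughly like this. The chart name, version and OCI repository URL are assumptions for illustration, not the project's actual values:

```yaml
# Hypothetical Chart.yaml for one IOC instance folder; the shared chart's
# name, version and registry URL are illustrative placeholders.
apiVersion: v2
name: bl01t-ea-ioc-01
version: 0.1.0
dependencies:
  - name: ioc-instance            # the global chart holding the shared elements
    version: "3.5.1"
    repository: oci://ghcr.io/epics-containers
```

Because every instance folder depends on the same global chart, a fix to the shared templates propagates to all IOCs by bumping one dependency version.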
### Repositories
In the **epics-containers** examples all repositories are held in the same GitHub organization.

There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the proof of concept.

The most common classes of repository are as follows:

```{eval-rst}
:Generic IOC Source Repositories:

    Define how a Generic IOC image is built. These do not typically include source code; instead each is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed. These have been tested:

    - GitHub
    - GitLab (on premises)

:Services Source Repositories:

    Define the IOC instances and other services for a beamline, accelerator technical area or any other grouping strategy. These have been tested:

    - GitHub
    - GitLab (on premises)

:An OCI Image Registry:

    Holds the Generic IOC container images and their dependencies. Also used to hold the helm Charts that define the shared elements between all domains.
```
### Continuous Integration

Our examples all use continuous integration to get from pushed source to the published images, IOC instance Helm Charts and documentation.

This allows us to maintain a clean code base that is continually tested for integrity, and also to maintain a direct relationship between source code version …
There are these types of CI:

```{eval-rst}
:Generic IOC Source:

    - builds the Generic IOC container image
    - publishes the image to github packages (only if the commit is tagged) or other OCI registry

:Services Source:

    - prepares a helm Chart from each IOC instance or other service definition
    - tests that the helm Chart is deployable (but does not deploy it)
    - locally launches each IOC instance and loads its configuration to verify that the configuration is valid (no system tests because this would require talking to real hardware instances)

:Documentation Source:

    - builds the sphinx docs that you are reading now
    - publishes them to github.io pages with a version tag or branch tag

:Global Helm Chart Source:

    - ``ec-helm-charts`` repo only
    - packages a helm Chart from source
    - publishes it to github packages (only if the commit is tagged) or other OCI registry
```
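As a sketch of the tag-triggered publishing described above, a GitHub Actions job could look like the following. The action versions and registry path are illustrative assumptions, not the project's actual workflow:

```yaml
# Illustrative workflow: build a container image and push it to ghcr.io
# only when the commit is tagged. Not the project's real CI definition.
name: build
on:
  push:
    tags: ["*"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```

Triggering only on tags keeps the registry free of untagged work-in-progress images while every push still runs the test jobs.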
### Continuous Deployment

ArgoCD is a Kubernetes controller that continuously monitors running applications and compares their current state with the desired state described in a git repository. If the current state does not match the desired state, ArgoCD will attempt to reconcile the two.

For this purpose each services repository has a companion deployment repository which tracks which version of each IOC in the services repository should currently be deployed to the cluster. This list of IOC versions is held in a single YAML file; updating this file and pushing it to the deployment repository triggers ArgoCD to update the cluster accordingly.
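The exact schema of that YAML file is defined by the deployment repository's Argo CD apps; conceptually it is just a map from service name to pinned version, something like this purely hypothetical shape:

```yaml
# Hypothetical shape only: one pinned chart version per IOC instance.
# Editing a version here and pushing is the whole deployment action.
bl01t-ea-ioc-01: 2024.2.1
bl01t-ea-ioc-02: 2024.1.3
```
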
In this fashion changes to IOC versions are tracked in git, and it is easy to roll back to the state at a given date because there is a complete record.
## Scope

This project initially targets x86_64 Linux Soft IOCs and RTEMS 'hard' IOCs running on MVME5500 hardware. Soft IOCs that require access to hardware on the server (e.g. USB or PCIe) will be supported by mounting the hardware into the container (these IOCs will not support Kubernetes failover).

Other Linux architectures could be added to the Kubernetes cluster. We have tested arm64 native builds and will add this as a supported architecture in the future.

Python soft IOCs are also supported. See <https://github.com/DiamondLightSource/pythonSoftIOC>.
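A minimal Python soft IOC, closely following the introductory example in the pythonSoftIOC documentation (the device prefix and record names here are arbitrary examples):

```python
# Minimal soft IOC using pythonSoftIOC (pip install softioc).
# Device prefix and record names are arbitrary illustrations.
from softioc import softioc, builder, asyncio_dispatcher

dispatcher = asyncio_dispatcher.AsyncioDispatcher()

builder.SetDeviceName("BL01T-EA-TST-01")
ai = builder.aIn("AI", initial_value=5)           # analogue input record
ao = builder.aOut("AO", initial_value=12.45,
                  on_update=lambda v: ai.set(v))  # mirror writes into AI

builder.LoadDatabase()
softioc.iocInit(dispatcher)
softioc.interactive_ioc(globals())  # drop into an interactive shell
```

Packaged in a container with its configuration mounted, such an IOC deploys exactly like the compiled Generic IOCs described above.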
GUI generation for engineering screens is supported via the PVI project. See <https://github.com/epics-containers/pvi>.
There are two main kinds of source repositories used in epics-containers:

- Generic IOC Source
- Beamline / Accelerator Services Repositories for IOC instances and other services

### Generic IOC Source Repositories

For public Generic IOC container images, the GitHub Container Registry is a good choice. It allows the containers to live at a URL related to the source repository that generated them. The default ioc-template comes with GitHub Actions that build the container and push it to the GitHub Container Registry.

The intention is that a Generic IOC container image is a reusable component that can be used by multiple IOC instances in multiple domains. Because …
Integration files for Generic IOCs work with GitHub Actions, but can also work with DLS's internal GitLab instance (this could be adapted for other facilities' internal GitLab instances or an alternative CI system).

### IOC Services Repositories

These repositories are very much specific to a particular domain or beamline in a particular facility. For this reason there is no strong reason to make them public, other than to share with others how you are using epics-containers.

At DLS we have a private GitLab instance and we store our Services Repositories there.

The CI for these repos works both with GitHub Actions and with DLS's internal GitLab instance (this could be adapted for other facilities' internal GitLab instances or an alternative CI system).
### p45-services

The test/example beamline at DLS for epics-containers is p45. The domain repository for this is at <https://github.com/epics-containers/p45-services>. This will always be kept in a public repository as it is a live example of a domain repo.

This beamline is deployed to Kubernetes at DLS using Argo CD continuous deployment. The repository containing the Argo CD apps that control the deployment is at <https://github.com/epics-containers/p45-deployment>.
**docs/how-to/own_tools.md** (+1 −1)

epics-containers has been tested with:

- vscode
- GitHub Codespaces
If you prefer console based editors like neovim or emacs, then you will get the best results by launching the development containers defined in epics-containers using the devcontainer CLI, as described here: <https://containers.dev/supporting#devcontainer-cli>.
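For instance, assuming Node.js and a container runtime are installed, the reference devcontainer CLI can be driven from a terminal roughly as follows (the repository path is a placeholder for whatever repo you have cloned):

```shell
# Install the reference devcontainer CLI (see containers.dev), then
# start and enter the repository's development container.
npm install -g @devcontainers/cli
cd my-services-repo                       # placeholder: your cloned repo
devcontainer up --workspace-folder .      # build/start the dev container
devcontainer exec --workspace-folder . bash
```

From the resulting shell you can run your console editor of choice inside the same environment that vscode users get.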
In addition you could install your editor inside the developer container by adding an apt-install command into the `epics-containers` user personalization file. See [details here](https://github.com/epics-containers/epics-containers.github.io/blob/3a87e808e1c7983430a30c5a4dcd0d0661895d60/.devcontainer/postCreateCommand#L23-L27)