
Commit 7a78dd8
proof read
1 parent 4a37680

6 files changed: 32 additions & 36 deletions

docs/explanations/introduction.md

Lines changed: 27 additions & 31 deletions
@@ -23,7 +23,7 @@ and their APIs such that container images can be interchanged between
 different frameworks.
 
 Thus, in this project we develop, build and test our container images
-using docker or docker but the images can be run under Kubernetes' own
+using docker or podman but the images can be run under Kubernetes' own
 container runtime.
 
 This article does a good job of explaining the relationship between docker /
@@ -52,9 +52,9 @@ It does not contain a startup script pr EPICS database, these are instance speci
 An IOC instance runs in a container runtime by loading two things:
 
 - The Generic IOC image passed to the container runtime.
-- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is always `/epics/ioc/config`.
+- The IOC instance configuration. This is mapped into the container at runtime by mounting it into the filesystem. The mount point for this configuration is usually the folder `/epics/ioc/config`.
 
-The configuration will bootstrap the unique properties of that instance. The following contents for the configuration are supported:
+The configuration will bootstrap the unique properties of that instance. The following contents for the configuration folder are supported:
 
 - ``ioc.yaml``: an **ibek** IOC description file which **ibek** will use to generate
 st.cmd and ioc.subst.
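To make the `ioc.yaml` bullet in the hunk above concrete, here is a minimal sketch of the shape such a file takes. The instance name, entity type and arguments are illustrative assumptions, not content from this commit; real entity types come from the `ibek-support` definitions compiled into the Generic IOC.

```yaml
# Hypothetical ioc.yaml sketch: ibek reads a file of this shape at container
# startup to generate st.cmd and ioc.subst for the instance.
ioc_name: bl01t-ea-test-01          # illustrative instance name
description: Example simulation detector IOC

entities:
  # each entity instantiates a definition from an ibek-support YAML file;
  # this type and its arguments are assumptions for illustration only
  - type: ADSimDetector.simDetector
    PORT: DET.CAM
    PREFIX: BL01T-EA-TST-01
```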
@@ -94,9 +94,9 @@ implementing these features:
 - debug an IOC by starting a bash shell inside it's container
 
 ### Kubernetes Alternatives
-If you do not wish to maintain a Kubernetes cluster then you could simply install IOCs directly into the local docker or docker instance on each server. Instead of using Kubernetes and Helm, you can use docker compose to manage such IOC instances. But this is just an example, epics-containers is intended to be modular so that you can make use of any parts of it without adopting the whole framework as used at DLS.
+If you do not wish to maintain a Kubernetes cluster then you could simply install IOCs directly into the local docker or podman instance on each server. Instead of using Kubernetes and Helm, you can use docker compose to manage such IOC instances. But this is just an example, epics-containers is intended to be modular so that you can make use of any parts of it without adopting the whole framework as used at DLS.
 
-We provide a template services project that uses docker compose with docker or docker as the runtime engine. Docker compose allows us to describe a set of IOCs and other services for each beamline server, similar to the way Helm does. Where a beamline has multiple servers, the distribution of IOCs across those servers could be managed by maintaining a separate docker-compose file for each server in the root of the services repository.
+We provide a template services project that uses docker compose with docker or podman as the runtime engine. Docker compose allows us to describe a set of IOCs and other services for each beamline server, similar to the way Helm does. Where a beamline has multiple servers, the distribution of IOCs across those servers could be managed by maintaining a separate docker-compose file for each server in the root of the services repository.
 
 If you choose to use this approach then you may find it useful to have another tool for viewing and managing the set of containers you have deployed across your beamline servers. There are various solutions for this, one that has been tested with **epics-containers** is Portainer <https://www.portainer.io/>. Portainer is a paid for product that provides excellent visibility and control of your containers through a web interface. Such a tool could allow you to centrally manage the containers on all your servers.
 
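To make the docker compose alternative in the hunk above concrete, one service entry in such a compose file could look like the sketch below; the service name, image tag and paths are hypothetical.

```yaml
# Hypothetical docker-compose.yml fragment for one beamline server.
services:
  bl01t-ea-ioc-01:
    # a Generic IOC image; this registry path and tag are assumptions
    image: ghcr.io/epics-containers/ioc-adsimdetector-runtime:2024.1.1
    restart: unless-stopped
    network_mode: host    # exposes Channel Access directly on the host network
    volumes:
      # mount the instance configuration at the conventional mount point
      - ./services/bl01t-ea-ioc-01/config:/epics/ioc/config
```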
@@ -117,13 +117,13 @@ of the Chart within the cluster.
 It also supports registries for storing version history of Charts,
 much like docker.
 
-In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC Instances and any other services we require. Each IOC instance folder need only container:e
+In this project we use Helm Charts to define and deploy IOC instances. IOCs are grouped into a {any}`services-repo`. Typical services repositories represent a beamline or accelerator technical area but any grouping is allowed. Each of these repositories holds the Helm Charts for its IOC Instances and any other services we require. Each IOC instance folder need only contain:
 
-- a values.yaml file to override the default values in the repository's global Helm Chart
+- a values.yaml file to override the default values in the repository's global values.yaml.
 - a config folder as described in {any}`generic-iocs`.
-- a couple of boilerplate files that are the same for all IOCs.
+- a couple of boilerplate Helm files that are the same for all IOCs.
 
-**epics-containers** does not use helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead a single global helm chart represents the shared elements between all IOC instances and is stored in a helm registry. Each folder in the services repository is itself a helm chart that includes the global chart as a dependency.
+**epics-containers** does not use helm registries for storing each IOC instance. Such registries only hold a zipped version of the Chart files, and this is redundant when we have a git repository holding the same information. Instead a single global helm chart represents the shared elements between all IOC instances and is stored in a helm registry. Each folder in the services repository is itself a helm chart that includes that global chart as a dependency.
 
 ### Repositories
 
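A sketch of the boilerplate Helm file an IOC instance folder might carry under the scheme the hunk above describes; the chart name, versions and registry URL are assumptions, not taken from this commit.

```yaml
# Hypothetical Chart.yaml for one IOC instance folder: the instance is itself
# a chart that pulls in the shared global chart as a dependency.
apiVersion: v2
name: bl01t-ea-ioc-01              # illustrative instance name
version: 1.0.0
dependencies:
  - name: ioc-instance             # assumed name of the shared global chart
    version: "3.5.1"               # hypothetical chart version
    repository: oci://ghcr.io/epics-containers   # assumed helm registry
```

The per-instance values.yaml then only needs to override whatever differs from the global chart's defaults, typically the image and the IOC name.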
@@ -135,25 +135,26 @@ In the **epics-containers** examples all repositories are held in the same githu
 
 There are many alternative services for storing these repositories, both in the cloud and on premises. Below we list the choices we have tested during the proof of concept.
 
-The classes of repository are as follows:
+The most common classes of repository are as follows:
 
 ```{eval-rst}
-:Source Repository:
 
-Holds the source code but also provides the Continuous Integration actions for testing, building and publishing to the image / helm repositories. These have been tested:
+:Generic IOC Source Repositories:
+
+Define how a Generic IOC image is built, this does not typically include source code, but instead is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed. These have been tested:
 
 - GitHub
 - GitLab (on premises)
 
-:Generic IOC Source Repositories:
+:Services Source Repositories:
 
-Define how a Generic IOC image is built, this does not typically include source code, but instead is a set of instructions for building the Generic IOC image by compiling source from a number of upstream support module repositories. Boilerplate IOC source code is also included in the Generic IOC source repository and can be customized if needed.
+Define the IOC instances and other services for a beamline, accelerator technical area or any other grouping strategy. These have been tested:
 
-:Services Source Repositories:
+- GitHub
+- GitLab (on premises)
 
-Define the IOC instances and other services for a beamline, accelerator domain or any other grouping strategy.
 
-:An OCI Image Repository:
+:An OCI Image Registry:
 
 Holds the Generic IOC container images and their dependencies. Also used to hold the helm Charts that define the shared elements between all domains.
 
@@ -167,7 +168,7 @@ The classes of repository are as follows:
 ### Continuous Integration
 
 Our examples all use continuous integration to get from pushed source
-to the published images, IOC instances helm Charts and documentation.
+to the published images, IOC instances Helm Charts and documentation.
 
 This allows us to maintain a clean code base that is continually tested for
 integrity and also to maintain a direct relationship between source code version
@@ -184,18 +185,18 @@ There are these types of CI:
 - publishes the image to github packages (only if the commit is tagged)
 or other OCI registry
 
-:services source:
+:Services Source:
 - prepares a helm Chart from each IOC instance or other service definition
 - tests that the helm Chart is deployable (but does not deploy it)
 - locally launches each IOC instance and loads its configuration to
 verify that the configuration is valid (no system tests because this
 would require talking to real hardware instances).
 
-:documentation source:
+:Documentation Source:
 - builds the sphinx docs that you are reading now
 - publishes it to github.io pages with version tag or branch tag.
 
-:global helm Chart source:
+:Global Helm Chart Source:
 - ``ec-helm-chars`` repo only
 - packages a helm Chart from source
 - publishes it to github packages (only if the commit is tagged)
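As an illustration of the Generic IOC class of CI summarised above, a skeletal GitHub Actions workflow might look like the following; the file name, action versions and registry are assumptions, not the project's actual workflow.

```yaml
# Hypothetical .github/workflows/build.yml for a Generic IOC source repository.
name: build generic IOC
on:
  pull_request:
  push:
    tags: ["*"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true          # pull in the ibek-support submodule
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          # publish only if the commit is tagged, as described above
          push: ${{ startsWith(github.ref, 'refs/tags/') }}
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```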
@@ -204,24 +205,19 @@ There are these types of CI:
 
 ### Continuous Deployment
 
-ArgoCD is a Kubernetes controller that continuously monitors running applications and compares the current state with the desired state described in a git repository. If the current state does not match the desired state, ArgoCD will attempt to reconcile the two.
+ArgoCD is a Kubernetes controller that continuously monitors running applications and compares their current state with the desired state described in a git repository. If the current state does not match the desired state, ArgoCD will attempt to reconcile the two.
 
-For this purpose each services repository will have a companion deployment repository which tracks which versions of each IOC in the services repository should currently be deployed to the cluster. This list of IOC versions is in a single YAML file and updating this file and pushing it to the deployment repository will trigger ArgoCD to update the cluster.
+For this purpose each services repository will have a companion deployment repository which tracks which version of each IOC in the services repository should currently be deployed to the cluster. This list of IOC versions is in a single YAML file and updating this file and pushing it to the deployment repository will trigger ArgoCD to update the cluster accordingly.
 
 In this fashion changes to IOC versions are tracked in git and it is easy to roll back to the same state as a given date because there is a complete record.
 
 ## Scope
 
-This project initially targets x86_64 Linux Soft IOCs and RTEMS IOC running
-on MVME5500 hardware. Soft IOCs that require access to hardware on the
-server (e.g. USB or PCIe) will be supported by mounting the hardware into
-the container (these IOCS will not support Kubernetes failover).
+This project initially targets x86_64 Linux Soft IOCs and RTEMS 'hard' IOCs running on MVME5500 hardware. Soft IOCs that require access to hardware on the server (e.g. USB or PCIe) will be supported by mounting the hardware into the container (these IOCS will not support Kubernetes failover).
 
-Other linux architectures could be added to the Kubernetes cluster. We have
-tested arm64 native builds and will add this as a supported architecture
-in the future.
+Other linux architectures could be added to the Kubernetes cluster. We have tested arm64 native builds and will add this as a supported architecture in the future.
 
-Python soft IOCs are also supported.
+Python soft IOCs are also supported. See <https://github.com/DiamondLightSource/pythonSoftIOC>
 
 GUI generation for engineering screens is supported via the PVI project. See <https://github.com/epics-containers/pvi>.
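To picture the Continuous Deployment flow described above: the companion deployment repository's single YAML file might map IOC names to versions along these lines (names, versions and the exact schema are hypothetical).

```yaml
# Hypothetical versions file in a companion deployment repository.
# ArgoCD watches this repository; editing a version here and pushing the
# change triggers ArgoCD to reconcile the cluster to match.
bl01t-ea-ioc-01: 2024.1.1
bl01t-ea-ioc-02: 2024.2.3
bl01t-mo-ioc-01: 2023.11.2
```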

docs/reference/environment.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ be adjusted to suit your domain. The variables are as follows:
 need to create a namespace for your domain. This is the name you should
 use here. If you are not using Kubernetes then you can leave this as
 `EC_K8S_NAMESPACE=local` and this will deploy IOC Instances to the local server's
-docker or docker instance.
+docker or podman instance.
 
 - **EC_SERVICES_REPO**: this is a link back to the repository that defines this
 domain. For example the bl38p reference beamline repository uses

docs/tutorials/deploy_example.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 
 This tutorial will show you how to deploy and manage the example IOC Instance that came with the template beamline repository. You will need to have your own `t01-services` beamline repository from the previous tutorial.
 
-For these early tutorials we are not using Kubernetes and instead are deploying IOCs to the local docker or docker instance. These kind of deployments are ideal for testing and development on a developer workstation. They could also potentially be used for production deployments to beamline servers where Kubernetes is not available.
+For these early tutorials we are not using Kubernetes and instead are deploying IOCs to the local docker or podman instance. These kind of deployments are ideal for testing and development on a developer workstation. They could also potentially be used for production deployments to beamline servers where Kubernetes is not available.
 
 ## Continuous Integration
 
docs/tutorials/ioc_changes2.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ the `Remote` menu with `Ctrl+Alt+O` and select `Reopen in Container`.
 The `build` script does two things.
 
 - it fetches the git submodule called `ibek-support`. This submodule is shared between all the Generic IOC container images and contains the support YAML files that tell `ibek` how to build support modules inside the container environment and how to use them at runtime.
-- it builds the Generic IOC container image developer target locally using docker or docker.
+- it builds the Generic IOC container image developer target locally using docker or podman.
 
 ## Verify the Example IOC Instance is working
 
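For context on the `ibek-support` submodule mentioned above, its support YAML files roughly take the shape sketched below; the field names are recalled from ibek's documentation and may not match the current schema exactly.

```yaml
# Hypothetical ibek-support definition for one support module: it tells ibek
# what arguments an entity takes and (elsewhere) how to build the module.
module: ADSimDetector

defs:
  - name: simDetector
    description: Instantiate a simulated area detector
    args:
      - type: str
        name: PORT
        description: Asyn port name for the detector
      - type: str
        name: PREFIX
        description: PV prefix for the detector records
```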
docs/tutorials/setup_k8s_new_beamline.md

Lines changed: 1 addition & 1 deletion
@@ -154,7 +154,7 @@ with network host.
 
 Every beamline repository has an `environment.sh` file used to configure
 your shell so that the command line tools know which cluster to talk to.
-Up to this point we have been using the local docker or docker instance,
+Up to this point we have been using the local docker or podman instance,
 but here we will configure it to use the beamline cluster.
 
 For the detail of what goes into `environment.sh` see

docs/tutorials/setup_workstation.md

Lines changed: 1 addition & 1 deletion
@@ -188,7 +188,7 @@ You don't need Kubernetes yet.
 
 The following tutorials will take you through creating, deploying and debugging IOC instances, generic IOCs and support modules.
 
-For simplicity we don't encourage using Kubernetes at this stage. Instead we will deploy containers to the local workstation's docker or docker instance using docker compose.
+For simplicity we don't encourage using Kubernetes at this stage. Instead we will deploy containers to the local workstation's docker or podman instance using docker compose.
 
 If you are planning not to use Kubernetes at all then now might be a good time to install an alternative container management platform such as [Portainer](https://www.portainer.io/). Such tools will help you visualise and manage your containers across a number of servers. These are not required and you could just manage everything from the docker compose command line if you prefer.
 