Commit afc2308

refresh introductory pages

1 parent 04b0357 commit afc2308

File tree: 5 files changed, +89 -97 lines

docs/index.rst

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 
 Update for October 2023
 =======================
-The final round of improvements done. The latest framework has
+The final round of improvements are done. The latest framework has
 become much simpler and has a good developer experience.
 
 **WARNING**

docs/user/explanations/introduction.rst

Lines changed: 71 additions & 39 deletions
@@ -46,25 +46,26 @@ An important principal of the approach presented here is that an IOC container
 image represents a 'Generic' IOC. The Generic IOC image is used for all
 IOC instances that connect to a given class of device. For example the
 Generic IOC image here:
-ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
+`ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
+<https://github.com/epics-containers/ioc-adaravis/pkgs/container/ioc-adaravis-linux-runtime>`_
 uses the AreaDetector driver ADAravis to connect to GigE cameras.
 
 An IOC instance runs in a container runtime by loading two things:
 
 - The Generic IOC image passed to the container runtime.
 - The IOC instance configuration. This is mapped into the container at
-  runtime by mounting it into the filesystem at runtime. The mount point
-  for this configuration is alway /epics/ioc/config.
+  runtime by mounting it into the filesystem. The mount point
+  for this configuration is always /epics/ioc/config.
 
 The configuration will bootstrap the unique properties of that instance.
 The following contents for the configuration are supported:
 
 - ioc.yaml: an **ibek** IOC description file which **ibek** will use to generate
-  st.cmd and ioc.subst
-- st.cmd, ioc.subst: an IOC shell startup script and an optional substitution file
-  st.cmd can refer any additional files in the configuration directory
+  st.cmd and ioc.subst.
+- st.cmd, ioc.subst: an IOC shell startup script and an optional substitution file.
+  st.cmd can refer to any additional files in the configuration directory.
 - start.sh a bash script to fully override the startup of the IOC. start.sh
-  can refer to any additional files in the configuration directory
+  can refer to any additional files in the configuration directory.
 
 
 This approach reduces the number of images required and saves disk. It also
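For illustration, launching an IOC instance by hand looks something like this (a minimal sketch: the instance name and host-side config path are hypothetical, while the image and the /epics/ioc/config mount point come from the text above)::

    # run a Generic IOC image as an IOC instance; the mounted config
    # folder is what makes this instance unique
    podman run -it --name bl01t-ea-cam-01 \
        -v /repos/bl01t/iocs/bl01t-ea-cam-01/config:/epics/ioc/config \
        ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1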
@@ -79,6 +80,8 @@ Kubernetes
 https://kubernetes.io/
 
 Kubernetes easily and efficiently manages containers across clusters of hosts.
+When deploying an IOC into a Kubernetes cluster, you request the resources
+required by the IOC and Kubernetes will then schedule the IOC onto a suitable host.
 
 It builds upon 15 years of experience of running production workloads at
 Google, combined with best-of-breed ideas and practices from the community,
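As a sketch, such a resource request in a pod spec looks like this (the values are illustrative only; Kubernetes schedules the IOC onto a host that can satisfy the requests and caps usage at the limits)::

    resources:
      requests:
        cpu: 100m        # one tenth of a CPU core
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi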
@@ -106,12 +109,22 @@ Kubernetes Alternative
 ~~~~~~~~~~~~~~~~~~~~~~
 
 If you do not have the resources to maintain a Kubernetes cluster then this project
-is experimentally supporing the use of podman-compose or docker-compose to deploy
-IOCs to a single server. Where a beamline has multiple servers the distribution of
+supports installing IOC instances directly into the local docker or podman runtime
+on the current server. Where a beamline has multiple servers the distribution of
 IOCs across those servers is managed by the user. These tools would replace
 Kubernetes and Helm in the technology stack.
 
-TODO: more on this once we have a working example.
+If you choose to use this approach then you may find it useful to have another
+tool for viewing and managing the set of containers you have deployed across
+your beamline servers. There are various solutions for this; one that has
+been tested with **epics-containers** is Portainer https://www.portainer.io/.
+Portainer is a paid-for product
+that provides excellent visibility and control of your containers through a
+web interface. It is very easy to install.
+
+The downside of this approach is that you will need to manually manage the
+resources available to each IOC instance and manually decide which server to
+run each IOC on.
 
 Helm
 ~~~~
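Before moving on to Helm: the direct-to-runtime alternative described above can be sketched as a single command on a chosen server (names, paths and the restart policy are illustrative; picking the server is the manual decision Kubernetes would otherwise make)::

    podman run -d --restart always --name bl01t-ea-cam-01 \
        -v /iocs/bl01t-ea-cam-01/config:/epics/ioc/config \
        ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1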
@@ -130,9 +143,9 @@ much like docker.
 
 In this project we use Helm Charts to define and deploy IOC instances.
 Each beamline (or accelerator domain) has its own git repository that holds
-the beamline Helm Chart for its IOCs. Each IOC instance need only provide a
-values.yaml file to override the default values in the Helm Chart and a config folder
-as described in `generic iocs`.
+the domain Helm Chart for its IOC Instances. Each IOC instance need only
+provide a values.yaml file to override the default values in the domain
+Helm Chart and a config folder as described in `generic iocs`.
 
 **epics-containers** does not use helm repositories for storing IOC instances.
 Such repositories only hold a zipped version of the chart and a values.yaml file,
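To illustrate how small a per-instance definition is, a values.yaml might look like this (the field names are hypothetical; the real keys are defined by the domain Helm Chart)::

    # values.yaml: override only what is unique to this IOC instance,
    # the domain Helm Chart supplies all other defaults
    image: ghcr.io/epics-containers/ioc-adaravis-linux-runtime:2023.10.1
    hostname: bl01t-ea-serv-01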
@@ -154,13 +167,12 @@ Repositories
 ~~~~~~~~~~~~
 
 All of the assets required to manage a
-set of IOCs for a beamline are held in repositories.
+set of IOC Instances for a beamline are held in repositories.
 
-Thus all version control is done
-via these repositories and no special locations in
-a shared filesystem are required
-(The legacy approach at DLS relied heavily on
-know locations in a shared filesystem).
+Thus all version control is done via these repositories and no special
+locations in a shared filesystem are required.
+(The legacy approach at DLS relied heavily on known locations in a
+shared filesystem.)
 
 In the **epics-containers** examples all repositories are held in the same
 github organization. This is nicely contained and means that only one set
@@ -176,7 +188,9 @@ The 2 classes of repository are as follows:
 
 - Holds the source code but also provides the
   Continuous Integration actions for testing, building and publishing to
-  the image / helm repositories. These have been tested:
+  the image / helm repositories.
+
+  These have been tested:
 
   - github
   - gitlab (on premises)
@@ -185,17 +199,24 @@ The 2 classes of repository are as follows:
 
 - Generic IOC source. Defines how a Generic IOC image is built, this does
   not typically include source code, but instead is a set of instructions
-  for building the Generic IOC image.
+  for building the Generic IOC image by compiling source from a number
+  of upstream support module repositories. Boilerplate IOC source code
+  is also included in the Generic IOC source repository and can be
+  customized if needed.
+
 - Beamline / Accelerator Domain source. Defines the IOC instances for a
-  beamline or Accelerator Domain. This includes the IOC boot scripts and
+  Domain. This includes the IOC boot scripts and
   any other configuration required to make the IOC instance unique.
   For **ibek** based IOCs, each IOC instance is defined by an **ibek**
   yaml file only.
 
 :An OCI image repository:
 
 - Holds the Generic IOC container images and their
-  dependencies. The following have been tested:
+  dependencies. Also used to hold the helm charts that define the shared
+  elements between all domains.
+
+  The following have been tested:
 
   - Github Container Registry
   - DockerHub
@@ -206,26 +227,27 @@ Continuous Integration
 ~~~~~~~~~~~~~~~~~~~~~~
 
 Our examples all use continuous integration to get from pushed source
-to the published images.
+to the published images, IOC instance helm charts and documentation.
 
 This allows us to maintain a clean code base that is continually tested for
-integrity and also to maintain a direct relationship between source code tags
-and the tags of their built resources.
+integrity and also to maintain a direct relationship between source code version
+tags and the tags of their built resources.
 
 There are these types of CI:
 
 :Generic IOC source:
   - builds a Generic IOC container image
-  - runs some tests against that image
+  - runs some tests against that image - these will eventually include
+    system tests that talk to simulated hardware
   - publishes the image to github packages (only if the commit is tagged)
    or other OCI registry
 
 :beamline definition source:
-  - builds a helm chart from each ioc definition
+  - prepares a helm chart from each ioc instance definition
   - tests that the helm chart is deployable (but does not deploy it)
   - locally launches each IOC instance and loads its configuration to
    verify that the configuration is valid (no system tests because this
-    would require talking to beamline hardware).
+    would require talking to real hardware instances).
 
 :documentation source:
   - builds the sphinx docs
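The tag-gated publish step can be pictured with a CI fragment like the following (a hypothetical GitHub Actions sketch, not the project's actual workflow; the image name and test script are assumptions, and registry login is omitted)::

    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # build the Generic IOC image and test it on every push
          - run: docker build -t ghcr.io/my-org/my-generic-ioc:${{ github.ref_name }} .
          - run: ./tests/run-tests.sh ghcr.io/my-org/my-generic-ioc:${{ github.ref_name }}
          # publish only when the commit is tagged
          - if: github.ref_type == 'tag'
            run: docker push ghcr.io/my-org/my-generic-ioc:${{ github.ref_name }}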
@@ -236,26 +258,33 @@ Scope
 This project initially targets x86_64 Linux Soft IOCs and RTEMS IOC running
 on MVME5500 hardware. Soft IOCs that require access to hardware on the
 server (e.g. USB or PCIe) will be supported by mounting the hardware into
-the container (theses IOCS will not support failover).
+the container (these IOCs will not support Kubernetes failover).
 
 Other linux architectures could be added to the Kubernetes cluster. We have
 tested arm64 native builds and will add this as a supported architecture
 in the future.
 
+Python soft IOCs are also supported.
+
+GUI generation for engineering screens will be supported via the PVI project.
+See https://github.com/epics-containers/pvi.
+
 
 Additional Tools
 ----------------
 
 epics-containers-cli
 ~~~~~~~~~~~~~~~~~~~~
-This define the developer's 'outside of the container' helper tool. The command
+This is the developer's 'outside of the container' helper tool. The command
 line entry point is **ec**.
 
 The project is a python package featuring simple command
 line functions for deploying, monitoring building and debugging
 Generic IOCs and IOC instances. It is a wrapper
 around the standard command line tools kubectl, podman/docker, helm, and git
-but saves typing and provides help and command line completion.
+but saves typing and provides help and command line completion. It also
+can teach you how to use these tools by showing you the commands it is
+running.
 
 See `CLI` for details.
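To give a flavour of what **ec** wraps, these are the kinds of raw commands it saves you from typing (plain kubectl/helm equivalents with hypothetical names and namespaces; the actual **ec** subcommands are described in `CLI`)::

    kubectl get pods -n bl01t                  # which IOCs are running?
    kubectl logs bl01t-ea-ioc-01-0 -n bl01t    # read an IOC's log
    helm upgrade --install bl01t-ea-ioc-01 ./iocs/bl01t-ea-ioc-01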

@@ -266,20 +295,23 @@ IOC Builder for EPICS and Kubernetes is the developer's 'inside the container'
 helper tool. It is a python package that is installed into the Generic IOC
 container images. It is used:
 
-- at container build time to fetch and build EPICS support modules
-- to generate the IOC source code and compile it
-- to extract all useful build artifacts into a runtime image
+- at container build time: to fetch and build EPICS support modules
+- at container build time: to generate the IOC source code and compile it
+- at container run time: to extract all useful build artifacts into a
+  runtime image
 
 See https://github.com/epics-containers/ibek.
 
 PVI
 ~~~
-Process Variables Interface is a python package that is installed into the
-Generic IOC container images. It is used to give structure to the IOC's PVI
-interface allowing us to:
+The Process Variables Interface project is a python package that is installed
+inside Generic IOC container images. It is used to give structure to the IOC's
+Process Variables allowing us to:
 
-- add metadata to the IOCs DB records for use by bluesky
+- add metadata to the IOCs DB records for use by `Bluesky`_ and `Ophyd`_
 - auto generate screens for the device (as bob, adl or edm files)
 
+.. _Bluesky: https://blueskyproject.io/
+.. _Ophyd: https://github.com/bluesky/ophyd-async
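For orientation, the ioc.yaml that **ibek** consumes is a short declarative file along these lines (a rough sketch: the available entity types and their parameters are defined by each Generic IOC's support YAML, so treat every name here as illustrative)::

    ioc_name: bl01t-ea-cam-01
    description: GigE camera IOC for beamline BL01T
    entities:
      - type: ADAravis.aravisCamera   # hypothetical entity type
        PORT: CAM1
        ID: 192.168.0.10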

docs/user/how-to/own_tools.rst

Lines changed: 9 additions & 34 deletions
@@ -2,48 +2,23 @@ Choose Your Developer Environment
 =================================
 
 The tutorials walk through the use of a standard set of developer tools. You
-can use others if you wish as detailed below.
+can use others if you wish but support is currently limited.
 
 .. _own_editor:
 
 Working with your own code editor
 ---------------------------------
 
 If you have your own preferred code editor, you can use it instead of
-vscode. This does mean that you will forgo the benefits of the devcontainer
-integration.
+vscode. We recommend developing generic IOCs using
+a devcontainer. Devcontainer supporting tools are listed here:
+https://containers.dev/supporting.
 
-TODO: update dev-e7 to have its own launcher like
-https://github.com/dls-controls/dev-c7
-then link to it's documentation and discuss how to use it with epics-containers.
+epics-containers has been tested with:
 
-If you do not want to use a devcontainer at all and instead prefer to install all
-the tools natively on your workstation then please see below.
+- vscode
+- Github Codespaces
 
-.. _no_devcontainer:
 
-Working without a Devcontainer
-------------------------------
-
-**Not recommended.**
-
-If you do not want to do development inside of a container then you can
-install all the tools natively on your workstation. You are then responsible
-for keeping these updated as necessary.
-You will also be responsible for the configuration of these tools.
-
-The tools required are (at least):-
-
-- Python 3.10 or greater
-- pip
-- python package epics-containers-cli
-- docker (or podman)
-- kubernetes client tools appropriate to your cluster K8S version
-
-  - helm >= 4.2.0
-  - kubectl >= 1.23.0
-  - oidc-login (or whichever tool you use to authenticate to your cluster)
-
-- git
-- build essentials tools
-- EPICS V7 client tools
+TODO: add instructions for using other editors. Potentially we could enhance
+the epics-containers-cli to support other editors with minimal effort.
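For any editor with devcontainer support, the entry point is a devcontainer.json in the repository. A minimal hypothetical sketch (the image tag is illustrative only; real Generic IOC repositories ship their own devcontainer configuration)::

    {
        // open the Generic IOC's developer image as the workspace container
        "image": "ghcr.io/epics-containers/ioc-adaravis-linux-developer:2023.10.1",
        "customizations": {
            "vscode": { "extensions": ["ms-python.python"] }
        }
    }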

docs/user/overview.rst

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ There are 5 themes to this strategy:
   Package IOC software and execute it in a lightweight virtual environment.
 
 :Kubernetes:
-  Centrally orchestrate all IOCs at a facility.
+  Centrally orchestrates all IOCs at a facility.
 
 :Helm Charts:
   Deploy IOCs into Kubernetes with version management.
@@ -17,5 +17,5 @@ There are 5 themes to this strategy:
   No shared file systems required.
 
 :Continuous Integration / Delivery:
-  Source repositories automatically build container and helm charts
+  Source repositories automatically build containers and helm charts
   delivering them to OCI registries.

docs/user/tutorials/setup_workstation.rst

Lines changed: 6 additions & 21 deletions
@@ -7,12 +7,10 @@ The only tools you need to install are:
 
 - Visual Studio Code
 - a container platform, either podman or docker
+- Python 3.9 or later
+- a Python virtual environment
 
-That's it. The reason the list is so short is that we will be using
-a developer container which includes all the tools needed. Thus you only need
-docker or podman to get the devcontainer up and running.
-
-Visual Studio Code is also recommended because it has excellent integration with
+Visual Studio Code is recommended because it has excellent integration with
 devcontainers. It also has useful extensions for working with Kubernetes,
 EPICS, WSL2 and more.
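A sketch of the Python part of this setup (the package name is the one these docs use elsewhere; the venv path is arbitrary)::

    python3 -m venv ~/venvs/ec
    source ~/venvs/ec/bin/activate
    pip install epics-containers-cli   # provides the ec command
    ec --help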

@@ -26,13 +24,11 @@ Options
 -------
 
 You are not required to use the tools above to develop with epics-containers.
-If you have your own preferred code editor you can use that. If you prefer
-not to work inside a container to do development that is also a possibility.
+If you have your own preferred code editor you can use that.
 
 See these how-to pages for more information:
 
 - `own_editor`
-- `no_devcontainer`
 
 Platform Support
 ----------------
@@ -77,9 +73,8 @@ devcontainer in the next tutorial.
 .. _Download Visual Studio Code: https://code.visualstudio.com/download
 
 
-Next install docker or podman as the your container platform. I am using
-podman 4.2.0 on RHEL8, docker *could* also be supported but note the warning below.
-All commands in these tutorials will use ``podman`` cli commands.
+Next install docker or podman as your container platform. epics-containers
+has been tested with podman 4.4.1 on RedHat 8, and docker is also supported.
 If you are using docker, simply replace ``podman`` with ``docker`` in the commands.
 
 The podman version required is 4.0 or later. This is not easy to obtain on debian
@@ -101,13 +96,3 @@ CLI tools by clicking on the appropriate linux distribution link.
 .. _Install docker: https://docs.docker.com/engine/install/
 .. _Install podman: https://podman.io/getting-started/installation
 
-.. Warning::
-
-   To support docker we need to do one of two things: 1) use the docker cli
-   in user mode or 2) set the user id and gid when launching the container.
-   If we don't do this then all files written to mounted volumes will be owned
-   by root.
-
-   **TODO**: write up how to do this. **TODO** the container image may
-   need some minor modifications to support docker. (I recently got this
-   working `here <https://github.com/gilesknap/gphotos-sync/issues/279#issuecomment-1475317852>`_)
