Commit 71d220b

Merge pull request #26 from epics-containers/dev
Added first pass of RTEMS tutorials.
2 parents 3449bcf + 3c63fb3 commit 71d220b

8 files changed: +350 additions, -38 deletions
.github/CONTRIBUTING.rst

Lines changed: 1 addition & 1 deletion
@@ -26,4 +26,4 @@ The `Developer Guide`_ contains information on setting up a development
 environment, building docs and what standards the documentation
 should follow.
 
-.. _Developer Guide: https://diamondlightsource.github.io/epics-containers.github.io/main/developer/how-to/contribute.html
+.. _Developer Guide: https://epics-containers.github.io/main/developer/how-to/contribute.html

docs/user/index.rst

Lines changed: 2 additions & 2 deletions
@@ -25,8 +25,8 @@ side-bar.
     tutorials/debug_generic_ioc
     tutorials/test_generic_ioc
     tutorials/support_module
-
-
+    tutorials/rtems_setup
+    tutorials/rtems_ioc
 
 Tutorials for installation and typical usage. New users start here.

docs/user/tutorials/epics_devcontainer.rst

Lines changed: 7 additions & 4 deletions
@@ -5,10 +5,13 @@ The EPICS Devcontainer
 Introduction
 ------------
 
-You can setup a single devcontainer for managing all of your IOCs. In
-`devcontainer` we launched a devcontainer for a single project. Here we
-will create a workspace that is managed by a devcontainer. This will allow
-you to manage multiple projects in a single devcontainer.
+For working with epics-containers we provide a developer container with
+all the tools you need to build and deploy EPICS IOCs already installed.
+In this tutorial we will install and configure this devcontainer.
+
+In `devcontainer` we demonstrated launching a container for a single project.
+Here we will create a workspace that will allow
+you to manage multiple projects within a single devcontainer.
 
 The base container is defined in https://github.com/epics-containers/dev-e7
 but we will use a customizable image derived from that. The customizable

docs/user/tutorials/rtems_ioc.rst

Lines changed: 214 additions & 0 deletions
@@ -0,0 +1,214 @@
RTEMS - Deploying an Example IOC
================================

The previous tutorials walked through how to create a generic linux soft
IOC and how to deploy an IOC instance using that generic IOC.

epics-containers also supports RTEMS 5 running on the MVME5500. We will now
look at the differences for this architecture. Further architectures will
be supported in the future.

Each beamline or accelerator domain will require a server for serving the
IOC binaries and instance files to the RTEMS devices. This needs to be set
up for your test beamline before proceeding, see `rtems_setup`.

Once you have the file server set up, deploying an IOC instance that uses
an RTEMS Generic IOC is very similar to `deploy_example`.

We will be adding a new IOC instance to the ``bl01t`` beamline that we
created in the previous tutorials. You will need to have worked through the
previous tutorials in order to complete this one.
Preparing the RTEMS Boot Loader
-------------------------------

To try this tutorial you will need a VME crate with an MVME5500 processor
card installed. You will also need access to the serial console over
ethernet, using a terminal server or similar.

.. note::

    **DLS Users**: for details of setting up RTEMS on your VME crates see
    this `internal link <https://confluence.diamond.ac.uk/pages/viewpage.action?spaceKey=CNTRLS&title=RTEMS>`_.

The following crate is already running RTEMS and can be used for this
tutorial, but check with the accelerator controls team before using it:

:console: ts0001 7007
:crate monitor: ts0001 7008

It is likely already set up as per the example below.
Use telnet to connect to the console of your target IOC, e.g.
``telnet ts0001 7007``. We want to get to the MOTLoad prompt, which should
look like ``MVME5500>``. If you see an IOC shell prompt instead, hit
``Ctrl-D`` to exit and then press ``Esc`` when you see
``Boot Script - Press <ESC> to Bypass, <SPC> to Continue``.
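
The sequence might look something like this sketch (the hostname, port and
exact boot messages are illustrative and will vary with your setup):

.. code-block::

    $ telnet ts0001 7007
    ...
    Boot Script - Press <ESC> to Bypass, <SPC> to Continue
    <ESC>
    MVME5500>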
Now you want to set the boot script to load the IOC binary from the network
via TFTP and mount the instance files from the network via NFS. The command
``gevShow`` will show you the current state of the global environment
variables, e.g.

.. code-block::

    MVME5500> gevShow
    mot-/dev/enet0-cipa=172.23.250.15
    mot-/dev/enet0-snma=255.255.240.0
    mot-/dev/enet0-gipa=172.23.240.254
    mot-boot-device=/dev/em1
    rtems-client-name=bl01t-ea-ioc-02
    epics-script=172.23.168.203:/iocs:bl01t/bl01t-ea-ioc-02/config/st.cmd
    mot-script-boot
    dla=malloc 0x230000
    tftpGet -d/dev/enet1 -fbl01t/bl01t-ea-ioc-02/bin/RTEMS-beatnik/ioc.boot -m255.255.240.0 -g172.23.240.254 -s172.23.168.203 -c172.23.250.15 -adla
    go -a0095F000

    Total Number of GE Variables =7, Bytes Utilized =427, Bytes Free =3165
Now use ``gevEdit`` to change the global variables to the values you need.
For this tutorial we will create an IOC called bl01t-ea-ioc-02, and for the
example we assume the file server is on 172.23.168.203. For the details of
setting up these parameters see your site documentation, but the important
values to change for this tutorial IOC would be:

:rtems-client-name: bl01t-ea-ioc-02
:epics-script: 172.23.168.203:/iocs:bl01t/bl01t-ea-ioc-02/config/st.cmd
:mot-script-boot (2nd line): tftpGet -d/dev/enet1 -fbl01t/bl01t-ea-ioc-02/bin/RTEMS-beatnik/ioc.boot -m255.255.240.0 -g172.23.240.254 -s172.23.168.203 -c172.23.250.15 -adla

Now your ``gevShow`` output should look similar to the example above.

Meaning of the parameters:

:rtems-client-name: a name for the IOC crate
:epics-script: an NFS address for the IOC's root folder
:mot-script-boot: a TFTP address for the IOC's binary boot file

Note that the IP parameters to the tftpGet command are, respectively:
net mask, gateway, server address, client address.
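
To make the boot line easier to read, here is the same ``tftpGet`` command
broken out flag by flag. This is annotation only - MOTLoad expects it all on
a single line - and the readings of ``-d``, ``-f`` and ``-a`` are our
interpretation based on the example above:

.. code-block::

    tftpGet
      -d/dev/enet1                                        network device to use
      -fbl01t/bl01t-ea-ioc-02/bin/RTEMS-beatnik/ioc.boot  boot binary path on the TFTP server
      -m255.255.240.0                                     net mask
      -g172.23.240.254                                    gateway
      -s172.23.168.203                                    server address
      -c172.23.250.15                                     client address
      -adla                                               load into the buffer from 'dla=malloc' above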
Creating an RTEMS IOC Instance
------------------------------

We will be adding a new IOC instance to the ``bl01t`` beamline that we
created in :doc:`create_beamline`. The first step is to make a copy of our
existing IOC instance and make some modifications to it. We will call this
new IOC instance ``bl01t-ea-ioc-02``.

.. code-block:: bash

    cd bl01t
    cp -r iocs/bl01t-ea-ioc-01 iocs/bl01t-ea-ioc-02
    # we don't need this file for the new IOC
    rm iocs/bl01t-ea-ioc-02/config/extra.db

We are going to make a very basic IOC with a hand-coded database containing
a couple of simple records. Therefore the generic IOC that we use can just
be ``ioc-template``.
Generic IOCs have multiple targets: they always have a ``developer`` target,
which is used for building and debugging the generic IOC, and a ``runtime``
target, which is lightweight and usually used when running the IOC in the
cluster. The matrix of targets also includes an architecture dimension; at
present the ioc-template supports two architectures, ``linux`` and
``rtems``, so there are 4 targets in total, as follows:

- ghcr.io/epics-containers/ioc-template-linux-runtime
- ghcr.io/epics-containers/ioc-template-linux-developer
- ghcr.io/epics-containers/ioc-template-rtems-runtime
- ghcr.io/epics-containers/ioc-template-rtems-developer
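
For example, you could pull the RTEMS runtime target locally to take a look
at it (a sketch - the ``latest`` tag is illustrative; use a pinned version
in practice):

.. code-block:: bash

    # pull the RTEMS runtime target of the template generic IOC
    docker pull ghcr.io/epics-containers/ioc-template-rtems-runtime:latest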
We want to run the RTEMS runtime target on the cluster, so this will appear
at the top of the ``values.yaml`` file. In addition there are a number of
environment variables required for the RTEMS target that we also specify in
``values.yaml``. Edit the file ``iocs/bl01t-ea-ioc-02/values.yaml`` to look
like this:

.. code-block:: yaml
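
    # NOTE: the original listing was not captured in this diff view, so this
    # body is an illustrative sketch only - the exact keys come from the
    # ioc-template chart schema at your site; the values are taken from the
    # gevShow example and terminal server details above.
    image: ghcr.io/epics-containers/ioc-template-rtems-runtime:latest

    env:
      - name: K8S_IOC_ADDRESS          # IP address of the hard IOC (mot-/dev/enet0-cipa)
        value: "172.23.250.15"
      - name: RTEMS_VME_CONSOLE_ADDR   # terminal server for console access
        value: "ts0001"
      - name: RTEMS_VME_CONSOLE_PORT   # terminal server port for console access
        value: "7007"
      - name: RTEMS_VME_AUTO_REBOOT    # reboot the hard IOC when the IOC container changes
        value: "true"
      - name: RTEMS_VME_AUTO_PAUSE     # pause/unpause when the IOC container stops/starts
        value: "true"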
If you are not at DLS you will need to change the above to match the
parameters of your RTEMS IOC. The environment variables are:

:K8S_IOC_ADDRESS: The IP address of the IOC (mot-/dev/enet0-cipa above)
:RTEMS_VME_CONSOLE_ADDR: Address of terminal server for console access
:RTEMS_VME_CONSOLE_PORT: Port of terminal server for console access
:RTEMS_VME_AUTO_REBOOT: true to reboot the hard IOC when the IOC container changes
:RTEMS_VME_AUTO_PAUSE: true to pause/unpause when the IOC container stops/starts
Edit the file ``iocs/bl01t-ea-ioc-02/Chart.yaml`` and change the first 4
lines to represent this new IOC (the rest of the file is boilerplate):

.. code-block:: yaml

    apiVersion: v2
    name: bl01t-ea-ioc-02
    description: |
      example RTEMS IOC for bl01t
For configuration we will create a simple database with a few records and a
basic startup script. Add the following files to the
``iocs/bl01t-ea-ioc-02/config`` directory.

.. code-block::
    :caption: bl01t-ea-ioc-02.db

    record(calc, "bl01t-ea-ioc-02:SUM") {
        field(DESC, "Sum A and B")
        field(CALC, "A+B")
        field(SCAN, ".1 second")
        field(INPA, "bl01t-ea-ioc-02:A")
        field(INPB, "bl01t-ea-ioc-02:B")
    }

    record(ao, "bl01t-ea-ioc-02:A") {
        field(DESC, "A voltage")
        field(EGU, "Volts")
        field(VAL, "0.0")
    }

    record(ao, "bl01t-ea-ioc-02:B") {
        field(DESC, "B voltage")
        field(EGU, "Volts")
        field(VAL, "0.0")
    }
.. code-block::
    :caption: st.cmd

    # RTEMS Test IOC bl01t-ea-ioc-02

    dbLoadDatabase "/iocs/bl01t/bl01t-ea-ioc-02/dbd/ioc.dbd"
    ioc_registerRecordDeviceDriver(pdbbase)

    # db files from the support modules are all held in this folder
    epicsEnvSet(EPICS_DB_INCLUDE_PATH, "/iocs/bl01t/bl01t-ea-ioc-02/support/db")

    # load our hand crafted database
    dbLoadRecords("/iocs/bl01t/bl01t-ea-ioc-02/config/bl01t-ea-ioc-02.db")
    # also make Database records for DEVIOCSTATS
    dbLoadRecords(iocAdminSoft.db, "IOC=bl01t-ea-ioc-02")
    dbLoadRecords(iocAdminScanMon.db, "IOC=bl01t-ea-ioc-02")

    iocInit
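
As a quick sanity check, the new files should now be in place (output is a
sketch; your copy may contain other files carried over from
bl01t-ea-ioc-01):

.. code-block:: bash

    $ ls iocs/bl01t-ea-ioc-02/config
    bl01t-ea-ioc-02.db  st.cmd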
You now have a new helm chart in ``iocs/bl01t-ea-ioc-02`` that describes an
IOC instance for your RTEMS device. Recall that this is not literally where
the IOC runs; it deploys a kubernetes pod that manages the RTEMS IOC. It
does contain the IOC's configuration and the IOC's binary code, which it
will copy to the file-server on startup.

You are now ready to deploy the IOC instance to the cluster and test it out.
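
A minimal sketch of that deployment, assuming the same helm workflow used
for the file server in `rtems_setup` (your site may wrap this in its own
tooling):

.. code-block:: bash

    cd bl01t
    helm upgrade --install bl01t-ea-ioc-02 iocs/bl01t-ea-ioc-02 -n bl01t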
Deploying an RTEMS IOC Instance
-------------------------------

TODO:

Once you have the correct configuration in your RTEMS boot-loader and you
have deployed the kubernetes IOC instance, you can restart the IOC with the
``reset`` command. This will cause it to reboot, and it should pick up your
binary from the network and start the IOC. You should see the iocShell fire
up and run.
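
On the console you would then expect to see something like the following
(a heavily abridged sketch - the actual banner and messages depend on your
EPICS and RTEMS versions):

.. code-block::

    tftpGet: Reading bl01t/bl01t-ea-ioc-02/bin/RTEMS-beatnik/ioc.boot ...
    ...
    dbLoadRecords("/iocs/bl01t/bl01t-ea-ioc-02/config/bl01t-ea-ioc-02.db")
    iocInit
    Starting iocInit
    iocRun: All initialization complete
    epics>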

docs/user/tutorials/rtems_setup.rst

Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
RTEMS - Creating a File Server
==============================

Introduction
------------

RTEMS IOCs are an example of a 'hard' IOC. Each IOC is a physical crate that
contains a number of I/O cards and a processor card.

For these types of IOC, the Kubernetes cluster runs a pod that represents
the individual IOC. However, the IOC code itself runs on the processor card
instead of in the pod. The pod provides the following features:

- Sets up the files to serve to the RTEMS OS
- Provides a connection to the IOC console, just like a linux IOC
- Pauses, unpauses and restarts the IOC as necessary - thus the IOC is
  controlled by the Kubernetes cluster in the same way as a linux IOC
- Provides logging of the IOC console in the same way as linux IOCs
- Monitors the IOC and restarts it if it crashes, using the same mechanism
  as linux IOCs

At present epics-containers supports the MVME5500 processor card running
RTEMS 5. The same model as described above can be used for other 'hard' IOC
types in the future.
Create a File Server Service
----------------------------

When an RTEMS 5 IOC boots, the bootloader loads the IOC binary from a TFTP
address. This binary is then given access to a filesystem over NFS V2,
which is where the IOC startup script and other configuration are loaded
from.

Therefore we need a TFTP server and an NFS V2 server to serve the files to
the IOC. For each EPICS domain, a single service running in Kubernetes will
supply a TFTP and NFS V2 server for all the IOCs in that domain.

In the tutorial :doc:`create_beamline` we created a beamline repository that
defines the IOC instances in the beamline ``bl01t``. The template project
that we copied contains a folder called ``services/nfsv2-tftp``. This folder
is a helm chart that will deploy a TFTP and NFS V2 server to Kubernetes.
Before deploying the service we need to configure it. Make the following
changes (sketched below):

- Change the ``name`` value in ``Chart.yaml`` to ``bl01t-ioc-files``
- Change the ``loadBalancerIP`` value in ``values.yaml`` to a free IP
  address in your cluster's Static Load Balancer range. This IP address will
  be used to access the TFTP and NFS V2 servers from the IOC.
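
In terms of the chart files, the two edits might look like this (a sketch
using the DLS training address from the note below; only the changed lines
are shown):

.. code-block:: yaml

    # services/nfsv2-tftp/Chart.yaml
    name: bl01t-ioc-files

    # services/nfsv2-tftp/values.yaml
    loadBalancerIP: 172.23.168.203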
.. note::

    **DLS Users**: The load balancer IP range on Pollux is
    ``172.23.168.201-172.23.168.222``. Please use ``172.23.168.203``. The
    test RTEMS crate is likely to already be set up to point at this
    address. There are a limited number of addresses available, hence we
    have reserved a single address for training purposes.

Also note that ``bl01t`` is a shared resource, so if there is already a
``bl01t-ioc-files`` service running then you can just use the existing
service.
You can verify whether the service is already running using kubectl. The
command shown below will list all the services in the ``bl01t`` namespace,
and the example output shows that there is already a ``bl01t-ioc-files``
service using the IP address ``172.23.168.203``.

.. code-block:: bash

    $ kubectl get services -n bl01t
    NAME              TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                                     AGE
    bl01t-ioc-files   LoadBalancer   10.108.219.193   172.23.168.203   111:31491/UDP,2049:30944/UDP,20048:32277/UDP,69:32740/UDP   32d
Once you have made the changes to the helm chart, you can deploy it to the
cluster using the following command:

.. code-block:: bash

    cd bl01t
    helm upgrade --install bl01t-ioc-files services/nfsv2-tftp -n bl01t

Now if you run the ``kubectl get services`` command again you should see the
new service.
Once you have this service up and running you can leave it alone. It will
serve the files to the IOCs, using the IP address you configured, over both
TFTP and NFS V2. It uses a persistent volume to store the files, and this
persistent volume is shared with the hard IOC pods so that they can place
the files they need served to the IOC.

See the next tutorial for how to deploy a hard IOC pod to the cluster.
