Commit 3c63fb3 (1 parent: 76322d5)

added Creating an RTEMS IOC Instance

File tree

4 files changed (+171 −51 lines)


docs/user/tutorials/epics_devcontainer.rst

Lines changed: 7 additions & 4 deletions
@@ -5,10 +5,13 @@ The EPICS Devcontainer
 Introduction
 ------------
 
-You can setup a single devcontainer for managing all of your IOCs. In
-`devcontainer` we launched a container for a single project. Here we
-will create a workspace that will allow
-you to manage multiple projects in a single devcontainer.
+For working with epics-containers we provide a developer container with
+all the tools you need to build and deploy EPICS IOCs already installed.
+In this tutorial we will install and configure this devcontainer.
+
+In `devcontainer` we demonstrated launching a container for a single project.
+Here we will create a workspace that will allow
+you to manage multiple projects within a single devcontainer.
 
 The base container is defined in https://github.com/epics-containers/dev-e7
 but we will use a customizable image derived from that. The customizable

docs/user/tutorials/rtems_ioc.rst

Lines changed: 129 additions & 9 deletions
@@ -9,12 +9,18 @@ now will look at the differences for this architecture. Further
 architectures will be supported in future.
 
 Each beamline or accelerator domain will require a server for
-serving the IOC binaries and instance files. For details of how to set this
-up see `rtems_setup`.
+serving the IOC binaries and instance files to the RTEMS devices. This
+needs to be set up for your test beamline before proceeding;
+see `rtems_setup`.
 
 Once you have the file server set up, deploying an IOC instance that uses
 an RTEMS Generic IOC is very similar to `deploy_example`.
 
+We will be adding
+a new IOC instance to the ``bl01t`` beamline that we created in the previous
+tutorials. You will need to have worked through the previous tutorials in
+order to complete this one.
+
 Preparing the RTEMS Boot loader
 -------------------------------
 
@@ -33,6 +39,8 @@ using a terminal server or similar.
 :console: ts0001 7007
 :crate monitor: ts0001 7008
 
+It is likely already set up as per the example below.
+
 Use telnet to connect to the console of your target IOC. e.g.
 ``telnet ts0001 7007``. We want to get to the MOTLoad prompt which should look
 like ``MVME5500>``. If you see an IOC Shell prompt instead hit ``Ctrl-D`` to
@@ -81,14 +89,126 @@ Meaning of the parameters:
 Note that the IP parameters to the tftpGet command are respectively:
 net mask, gateway, server address, client address.
 
-Once you have the correct configuration you can restart the IOC with
-the ``reset`` command. But you need the kubernetes pod for this IOC to be
-up and running first so that it places the necessary files on the file server.
-See the next section for setting up the kubernetes pod.
 
+Creating an RTEMS IOC Instance
+------------------------------
+
+We will be adding a new IOC instance to the ``bl01t`` beamline that we created
+in :doc:`create_beamline`. The first step is to make a copy of our existing
+IOC instance and make some modifications to it. We will call this new IOC
+instance ``bl01t-ea-ioc-02``.
+
+.. code-block:: bash
+
+    cd bl01t
+    cp -r iocs/bl01t-ea-ioc-01 iocs/bl01t-ea-ioc-02
+    # we don't need this file for the new IOC
+    rm iocs/bl01t-ea-ioc-02/config/extra.db
+
+We are going to make a very basic IOC with a hand-coded database containing a
+couple of simple records. Therefore the generic IOC that we use can just be
+ioc-template.
+
+Generic IOCs have multiple targets. They always have a ``developer`` target,
+which is used for building and debugging the generic IOC, and a ``runtime``
+target, which is lightweight and usually used when running the IOC in the
+cluster. The matrix of targets also includes an architecture dimension; at
+present the ioc-template supports two architectures, ``linux`` and ``rtems``,
+so there are four targets in total:
+
+- ghcr.io/epics-containers/ioc-template-linux-runtime
+- ghcr.io/epics-containers/ioc-template-linux-developer
+- ghcr.io/epics-containers/ioc-template-rtems-runtime
+- ghcr.io/epics-containers/ioc-template-rtems-developer
+
+We want to run the RTEMS runtime target on the cluster, so this will appear
+at the top of the ``values.yaml`` file. In addition there are a number of
+environment variables required for the RTEMS target that we also specify in
+``values.yaml``. Edit the file ``iocs/bl01t-ea-ioc-02/values.yaml`` to look
+like this:
+
+.. code-block:: yaml
+
+If you are not at DLS you will need to change the above to match the
+parameters of your RTEMS IOC. The environment variables are:
+
+:K8S_IOC_ADDRESS: The IP address of the IOC (mot-/dev/enet0-cipa above)
+:RTEMS_VME_CONSOLE_ADDR: Address of the terminal server for console access
+:RTEMS_VME_CONSOLE_PORT: Port of the terminal server for console access
+:RTEMS_VME_AUTO_REBOOT: true to reboot the hard IOC when the IOC container changes
+:RTEMS_VME_AUTO_PAUSE: true to pause/unpause when the IOC container stops/starts
+
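The body of ``values.yaml`` was not captured in the extracted diff above. As a purely illustrative sketch only (the keys, image tag, and addresses below are hypothetical and do not come from the commit; copy the real layout from an existing IOC in your beamline repository), such a file would pair the RTEMS runtime image with the environment variables just listed:

```yaml
# Hypothetical sketch -- field names and values are illustrative only,
# not the actual ioc-instance chart schema.
image: ghcr.io/epics-containers/ioc-template-rtems-runtime:<version>

env:
  - name: K8S_IOC_ADDRESS
    value: "172.23.250.15"    # IP address of the hard IOC (example)
  - name: RTEMS_VME_CONSOLE_ADDR
    value: "ts0001"           # terminal server for console access
  - name: RTEMS_VME_CONSOLE_PORT
    value: "7007"
  - name: RTEMS_VME_AUTO_REBOOT
    value: "true"             # reboot the hard IOC when the container changes
  - name: RTEMS_VME_AUTO_PAUSE
    value: "true"             # pause/unpause when the container stops/starts
```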
+Edit the file ``iocs/bl01t-ea-ioc-02/Chart.yaml`` and change the first four
+lines to represent this new IOC (the rest of the file is boilerplate):
+
+.. code-block:: yaml
+
+    apiVersion: v2
+    name: bl01t-ea-ioc-02
+    description: |
+      example RTEMS IOC for bl01t
+
+For configuration we will create a simple database with a few records and
+a basic startup script. Add the following files to the
+``iocs/bl01t-ea-ioc-02/config`` directory.
+
+.. code-block::
+    :caption: bl01t-ea-ioc-02.db
+
+    record(calc, "bl01t-ea-ioc-02:SUM") {
+        field(DESC, "Sum A and B")
+        field(CALC, "A+B")
+        field(SCAN, ".1 second")
+        field(INPA, "bl01t-ea-ioc-02:A")
+        field(INPB, "bl01t-ea-ioc-02:B")
+    }
+
+    record(ao, "bl01t-ea-ioc-02:A") {
+        field(DESC, "A voltage")
+        field(EGU, "Volts")
+        field(VAL, "0.0")
+    }
+
+    record(ao, "bl01t-ea-ioc-02:B") {
+        field(DESC, "B voltage")
+        field(EGU, "Volts")
+        field(VAL, "0.0")
+    }
+
+.. code-block::
+    :caption: st.cmd
+
+    # RTEMS Test IOC bl01t-ea-ioc-02
+
+    dbLoadDatabase "/iocs/bl01t/bl01t-ea-ioc-02/dbd/ioc.dbd"
+    ioc_registerRecordDeviceDriver(pdbbase)
+
+    # db files from the support modules are all held in this folder
+    epicsEnvSet(EPICS_DB_INCLUDE_PATH, "/iocs/bl01t/bl01t-ea-ioc-02/support/db")
+
+    # load our hand-crafted database
+    dbLoadRecords("/iocs/bl01t/bl01t-ea-ioc-02/config/bl01t-ea-ioc-02.db")
+    # also make database records for DEVIOCSTATS
+    dbLoadRecords(iocAdminSoft.db, "IOC=bl01t-ea-ioc-02")
+    dbLoadRecords(iocAdminScanMon.db, "IOC=bl01t-ea-ioc-02")
+
+    iocInit
+
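For readers who have not met EPICS ``calc`` records before, the behaviour of the database in the diff above can be sketched in plain Python. This is a simplification for illustration only: in the real IOC the record is processed by the scan task every 0.1 s (per ``SCAN``), and you would read and write the PVs over Channel Access rather than calling methods.

```python
# Simplified stand-in for the bl01t-ea-ioc-02 database:
# SUM recomputes A + B each time it is processed.


class CalcSum:
    """Mimics a calc record with CALC='A+B', INPA=...:A, INPB=...:B."""

    def __init__(self) -> None:
        self.a = 0.0    # bl01t-ea-ioc-02:A (ao record)
        self.b = 0.0    # bl01t-ea-ioc-02:B (ao record)
        self.val = 0.0  # bl01t-ea-ioc-02:SUM

    def process(self) -> float:
        # The IOC would process this every 0.1 s because SCAN is ".1 second"
        self.val = self.a + self.b
        return self.val


sum_record = CalcSum()
sum_record.a = 1.5  # analogous to: caput bl01t-ea-ioc-02:A 1.5
sum_record.b = 2.5  # analogous to: caput bl01t-ea-ioc-02:B 2.5
print(sum_record.process())  # prints 4.0
```

Once the IOC is deployed, the equivalent check is ``caput`` on the A and B records followed by ``caget bl01t-ea-ioc-02:SUM``.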
+You now have a new helm chart in ``iocs/bl01t-ea-ioc-02`` that describes an
+IOC instance for your RTEMS device. Recall that this is not literally where
+the IOC runs; it deploys a kubernetes pod that manages the RTEMS IOC. It does
+contain the IOC's configuration and the IOC's binary code, which it will copy
+to the file-server on startup.
+
+You are now ready to deploy the IOC instance to the cluster and test it out.
 
-Creating an RTEMS Generic IOC
------------------------------
 
 Deploying an RTEMS IOC Instance
--------------------------------
+-------------------------------
+
+TODO:
+
+Once you have the correct configuration in your RTEMS boot-loader and you have
+deployed the kubernetes IOC instance, you can restart the IOC with
+the ``reset`` command. This will cause it to reboot and it should pick
+up your binary from the network and start the IOC. You should see the
+iocShell fire up and run.

docs/user/tutorials/rtems_setup.rst

Lines changed: 8 additions & 11 deletions
@@ -51,28 +51,25 @@ changes:
 .. note::
 
     **DLS Users** The load balancer IP range on Pollux is
-    ``172.23.168.201-172.23.168.222``. The IP address you choose must be free
-    and not already in use by another service. At the moment you have to try
-    deploying and see if it works. If it doesn't work then try another IP.
+    ``172.23.168.201-172.23.168.222``. Please use ``172.23.168.203``. The test
+    RTEMS crate is likely already set up to point at this address. There are a
+    limited number of addresses available, hence we have reserved a single
+    address for training purposes.
 
     Also note that ``bl01t`` is a shared resource so if there is already a
-    ``bl01t-ioc-files`` service running then you will have to choose a different
-    IP address (or just use the existing service and don't bother with this step)
-
-    **Recommendation**: use ``172.23.168.203`` and if it is already in use and the
-    service is already running then just use that one. There are a limited number
-    of fixed IPs available so we should only use one for training purposes.
+    ``bl01t-ioc-files`` service running then you could just use the existing
+    service.
 
 You can verify if the service is already running using kubectl. The command
 shown below will list all the services in the ``bl01t`` namespace, and the
 example output shows that there is already a ``bl01t-ioc-files`` service
-using the IP address ``172.23.168.220``:
+using the IP address ``172.23.168.203``:
 
 .. code-block:: bash
 
    $ kubectl get services -n bl01t
   NAME              TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                                     AGE
-  bl01t-ioc-files   LoadBalancer   10.108.219.193   172.23.168.220   111:31491/UDP,2049:30944/UDP,20048:32277/UDP,69:32740/UDP   32d
+  bl01t-ioc-files   LoadBalancer   10.108.219.193   172.23.168.203   111:31491/UDP,2049:30944/UDP,20048:32277/UDP,69:32740/UDP   32d
 
 Once you have made the changes to the helm chart you can deploy it to the
 cluster using the following command:

docs/user/tutorials/setup_k8s.rst

Lines changed: 27 additions & 27 deletions
@@ -3,6 +3,33 @@
 Setup a Kubernetes Server
 =========================
 
+.. Note::
+
+    **DLS Users**: DLS already has the test cluster Pollux, and further
+    beamline and machine clusters are coming soon.
+
+    To use the Pollux cluster, run ``module load pollux`` outside of the
+    devcontainer and then run the script ``.devcontainer/dls-copy-k8s-crt.sh``.
+
+    The Pollux cluster already has a beamline namespace ``bl01t``
+    for you to use as a training area. *You will need
+    to ask SciComp to add you as a user of this namespace.*
+    Please be aware that this is a shared resource, so others might be using
+    it at the same time.
+
+    The Pollux cluster already has the Kubernetes dashboard installed.
+    To access it, go to http://pollux.diamond.ac.uk and click
+    ``Pollux K8S Dashboard``.
+
+    Then select ``bl01t`` from the namespace drop-down menu in the top left
+    to see the training namespace.
+
+Introduction
+------------
+
+This is a very easy set of instructions for setting up an experimental
+single-node Kubernetes cluster, ready to test deployment of EPICS IOCs.
+
 .. note::
 
     From this point onward the tutorials assume that you are using
@@ -19,12 +46,6 @@ Setup a Kubernetes Server
     take on Kubernetes then there are some hints as to how to achieve this
     here: `../how-to/run_iocs`.
 
-Introduction
-------------
-This is a very easy set of instructions for setting up an experimental
-single-node Kubernetes cluster,
-ready to test deployment of EPICS IOCs.
-
 Bring Your Own Cluster
 ----------------------
 
@@ -41,27 +62,6 @@ namespace and service account as long as it has network=host capability.
 Cloud based K8S offerings may not be appropriate because of the Channel Access
 routing requirement.
 
-.. Note::
-
-    **DLS Users**: DLS already has the test cluster Pollux and further
-    beamline and machine clusters are coming soon.
-
-    To use the Pollux cluster, run ``module load pollux`` outside of the
-    devcontainer and then run the script ``.devcontainer/dls-copy-k8s-crt.sh``
-
-    The Pollux Cluster already has a beamline namespace ``bl01t``
-    for you to use as a training area. *You will need
-    to ask SciComp to add you as a user of this namespace.*
-    Please be aware that this is a shared resource so others might be using
-    it at the same time.
-
-    The Pollux Cluster already has the Kubernetes dashboard installed.
-    To access it go to http://pollux.diamond.ac.uk and click
-    ``Pollux K8S Dashboard``.
-
-    Then select ``bl01t`` from the namespace drop down menu in the top left,
-    to see the training namespace.
-
 Platform Choice
 ---------------
 
