Add an IOC instance to a beamline
=================================

Introduction
~~~~~~~~~~~~

In the tutorials section we created a beamline repository that contains a
single IOC called ``example``. Here we discuss how to create your own IOCs
using this example as a template.

Quick Start
~~~~~~~~~~~

At present this discussion covers copying and modifying the Helm Chart
definition for the IOC. In future the tool **ibek** will generate the Helm
Chart from a YAML description of the IOC.

Inside the beamline repo the folder ``iocs/example`` holds a Helm Chart. Much
of its contents is boilerplate, so creating a new IOC involves:

- making a copy of the example IOC folder, giving it the name of your new IOC
- then modifying a few files as follows:

:Chart.yaml:
    change the name and description fields

:values.yaml:
    - update the name of the beamline for this IOC instance
    - choose a base image for the generic IOC
    - for production deployment you may want to change other fields,
      discussed later

:config/ioc.boot:
    - replace this with your IOC startup script
    - you can also put any other files needed by the startup script in this
      folder, but the total size cannot exceed 1MB

:config/start.sh:
    - this is the script called on container startup
    - it is intended to be generic but can be altered if special startup is
      needed
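
For illustration, the edited Chart.yaml for a hypothetical new IOC named
bl01t-ea-ioc-01 might look like the sketch below. Only the name and
description fields need changing; the remaining fields shown are standard
Helm chart boilerplate and are assumed to be carried over from the example
unchanged:

.. code-block:: yaml

    # Chart.yaml - name and description are the fields to edit;
    # all values shown here are illustrative only
    apiVersion: v2
    name: bl01t-ea-ioc-01
    description: A simulated area detector IOC for beamline bl01t
    type: application
    version: 0.1.0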

The generic IOC base image needs to contain all of the support modules
required by your IOC. A selection of such images is available at
https://github.com/orgs/epics-containers/packages. If you need an additional
generic IOC image then see `create_ioc`.

For the moment you will have to implement your own approach to providing a
GUI; see `no_opi`.

Production Deployment
~~~~~~~~~~~~~~~~~~~~~

For a more complete deployment there are a few more fields in values.yaml
that may be useful. Below is the default template values.yaml from the
example IOC:

.. code-block:: yaml

    beamline: blxxi
    namespace: epics-iocs
    base_image: ghcr.io/epics-containers/ioc-adsimdetector:2.10r2.0

    # root folder of generic ioc source - not expected to change
    iocFolder: /epics/ioc

    # when autosave is true: create PVC and mount at /autosave
    autosave: false
    # when useAffinity is true: only run on nodes with label beamline:blxxi
    useAffinity: false
    # resource limits
    memory: 2048Mi
    cpu: 4

:autosave:

    If set to true then Kubernetes is instructed to create a Persistent
    Volume Claim and mount it at /autosave. You should configure autosave in
    your startup script to save its files in /autosave. The PVC will be
    persisted even through IOC upgrades. NOTE: this requires that your
    cluster has PVC dynamic provisioning. See `storage provisioning`_.
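
    As a sketch, the corresponding fragment of the IOC startup script could
    use the synApps autosave commands like this (the file names are
    hypothetical; check the calls against the autosave version built into
    your generic IOC):

    .. code-block:: text

        # point autosave at the PVC mounted by Kubernetes
        set_savefile_path("/autosave", "")
        # restore saved values when the IOC boots
        set_pass0_restoreFile("auto_settings.sav")
        iocInit()
        # save the PVs listed in auto_settings.req every 30 seconds
        create_monitor_set("auto_settings.req", 30)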

:useAffinity:

    This allows us to target the worker nodes on which the IOC instance will
    run. At DLS we have beamline worker nodes, remote from the central
    cluster, that reside on the beamline's own subnet. This is a very useful
    way to solve network protocol issues.

    When useAffinity is true the IOC pod will only run on nodes with the
    label beamline:blxxi (where blxxi is the beamline name supplied at the
    top of values.yaml).

    The pod will also be given a toleration for the taint nodetype=blxxi,
    effect=NoSchedule. This means that you can create a taint on the
    beamline worker nodes so they only run beamline IOCs. For details of
    configuring your nodes like this see `taints and tolerations`_.
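
    In pod-spec terms the result is roughly equivalent to the following
    (an illustrative sketch, not the chart library's exact template output):

    .. code-block:: yaml

        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: beamline
                      operator: In
                      values: ["blxxi"]
        tolerations:
          - key: nodetype
            operator: Equal
            value: blxxi
            effect: NoSchedule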

:memory:

    This specifies a resource limit and helps Kubernetes balance resources
    across available nodes. The limit will be enforced on your IOC instance.

:cpu:

    This is also a resource limit, in units of whole cores. It may be
    specified in milli-CPU, e.g. 500m is half a CPU. The IOC instance will
    be throttled if it exceeds this limit.
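
Putting these options together, a production values.yaml for a hypothetical
beamline bl01t might look like this (the beamline name, base image and
resource limits are illustrative choices, not recommendations):

.. code-block:: yaml

    beamline: bl01t
    namespace: epics-iocs
    base_image: ghcr.io/epics-containers/ioc-adsimdetector:2.10r2.0

    iocFolder: /epics/ioc

    # persist autosave files across IOC upgrades
    autosave: true
    # pin the IOC to the beamline's own worker nodes
    useAffinity: true
    # resource limits for this instance
    memory: 4096Mi
    cpu: 500m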

.. _storage provisioning: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
.. _taints and tolerations: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

Future Improvements
~~~~~~~~~~~~~~~~~~~

This is the full set of options that the Helm library supports at present.
It is only a first-pass implementation and much finer control of the
Kubernetes deployment could be exposed in future.