@@ -31,8 +31,9 @@ In Cylc 8 "platforms" can be defined in the global configuration
(:cylc:conf:`global.cylc`) so that this configuration doesn't have to be
repeated for each task in each workflow.

- Cylc "platforms" may configure hostnames, job runners and more. Only the
- platform name needs to be specified in the task configuration.
+ There may be cases where sets of platforms (for example a group of
+ standalone compute servers, or a pair of mirrored HPCs) might be equally
+ suitable for a task. Such platforms can be set up as ``platform groups``.

.. seealso::
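As a sketch of the ``platform groups`` idea added above (the platform and
group names here are hypothetical), a group could be defined in
:cylc:conf:`global.cylc`, after which a task may set
``platform = parallel_boxes`` and Cylc will pick one of the listed
platforms at job submission::

    [platform groups]
        [[parallel_boxes]]
            platforms = linux_box_01, linux_box_02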
@@ -167,7 +168,7 @@ At Cylc 8 the equivalent might be:
    [[mytask_login_to_hpc_and_submit]]
        # Recommended:
-       platform = just_run_it
+       platform = slurm_supercomputer
        # ...but this is still legal:
        #platform = $(selector-script)
@@ -183,13 +184,25 @@ And the platform settings for these examples might be:
        # A computer with PBS that takes local job submissions
        job runner = pbs
        hosts = localhost
+       install target = localhost

    [[slurm_supercomputer]]
        # This computer with Slurm requires you to use a login node.
        hosts = login_node01, login_node02  # Cylc will pick a host.
        job runner = slurm

+ Note that in these examples, it is assumed that ``linuxboxNN``, ``pbs_local``
+ and ``slurm_supercomputer`` have distinct file systems.
+ Sets of platforms which share a file system must specify
+ a single :ref:`install target <Install Targets>`.
+
+ .. note::
+    If an install target is not set, a platform will use its own platform name
+    as the install target name. If multiple platforms share a file system
+    but have separate :ref:`install targets <Install Targets>`, task
+    initialization will fail.
+
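For instance (hypothetical names), two platforms mounting the same
filesystem could share a single install target like this::

    [platforms]
        [[cluster_a]]
            hosts = login_a
            install target = shared_fs
        [[cluster_b]]
            hosts = login_b
            install target = shared_fs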
.. _host-to-platform-logic:

How Cylc 8 handles host-to-platform upgrades
@@ -216,6 +229,7 @@ platforms section:
    [[supercomputer_A]]
        hosts = localhost
        job runner = slurm
+       install target = localhost
    [[supercomputer_B]]
        hosts = tigger, wol, eeyore
        job runner = pbs
@@ -230,7 +244,7 @@ And you have a workflow runtime configuration:
            batch system = slurm
    [[task2]]
        [[[remote]]]
-           hosts = eeyore
+           host = eeyore
        [[[job]]]
            batch system = pbs
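Given the platform definitions above, Cylc 8 matches each task's Cylc 7
``host`` and ``batch system`` settings against the defined platforms.
Roughly, the tasks behave as if they had been written as follows (a sketch
of the expected matching, not actual Cylc output)::

    [[task1]]
        platform = supercomputer_A  # localhost + slurm
    [[task2]]
        platform = supercomputer_B  # eeyore + pbs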