@@ -154,6 +154,7 @@ mpiexec tclsh program.tic
 On more complex, scheduled systems, users do not invoke +mpiexec+
 directly; Turbine run scripts are provided by Swift/T.
 
+[[turbine-pilot]]
 === Turbine Pilot
 
 The +turbine-pilot+ program can be used to run Swift/T interactively
@@ -202,11 +203,12 @@ SLURM:: +turbine-slurm-run.zsh+
 Theta:: +turbine-theta-run.zsh+ (Cobalt with Cray's +aprun+)
 ThetaGPU:: +turbine-theta-run.zsh+ (Cobalt with +mpirun+)
 LSF:: +turbine-lsf-run.zsh+
+PSI/J:: +turbine-psij-run.zsh+ (See the <<psij,PSI/J notes>>.)
 
 Each script accepts input via environment variables and command-line options.
 
 The +swift-t+ and +turbine+ programs have a +-m+ (machine) option that accepts
-+pbs+, +cobalt+, +cray+, +lsf+, +theta+, or +slurm+.
++pbs+, +cobalt+, +cray+, +slurm+, +lsf+, or +psij+.
 
 A typical invocation is (one step compile-and-run):
 ----
@@ -260,7 +262,8 @@ Number of processes to use
 Number of processes per node
 
 +PROJECT+::
-The project name to use with the system scheduler
+The project name to use with the system scheduler. Some systems call
+this the "account" to charge.
 
 +QUEUE+::
 Name of queue in which to run
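For example, these variables could be exported in the shell before invoking a Turbine run script. The values below are hypothetical and site-specific; +PROCS+ and +PPN+ are the usual Turbine names for the process-count settings described above.

```shell
# Hypothetical scheduler settings; all values are site-specific examples.
export PROJECT=MyAllocation   # project/account to charge
export QUEUE=debug            # scheduler queue to submit to
export PROCS=8                # total number of MPI processes
export PPN=4                  # processes per node
echo "project=$PROJECT queue=$QUEUE procs=$PROCS"
```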
@@ -440,19 +443,23 @@ shell code in this script. This script is run before the
 initialization script (+turbine -i+). This is an alternative to
 placing the environment variables in a wrapper script.
 
-+-t <time>+:: Set scheduler walltime. The argument format is passed
++-t <time>+::
+Set scheduler walltime. The argument format is passed
 through to the scheduler
 
-+-V+:: Make script verbose. This typically just applies +set -x+,
++-V+::
+Make script verbose. This typically just applies +set -x+,
 allowing you to inspect variables and arguments as passed to the
 system scheduler (e.g., +qsub+).
 
-+-x+:: Use +turbine_sh+ launcher with compiled-in libraries instead of +tclsh+
-(reduces number of files that must be read from file system).
++-x+::
+Use +turbine_sh+ launcher with compiled-in libraries instead of +tclsh+
+(reduces number of files that must be read from file system).
 
-+-X+:: Run standalone Turbine executable
-(created by link:guide.html#mkstatic[+mkstatic.tcl+]) instead of
-+program.tic+.
++-X+::
+Run standalone Turbine executable
+(created by link:guide.html#mkstatic[+mkstatic.tcl+])
+instead of +program.tic+.
 
 ++++
 <a name="dry_run"></a>
@@ -848,6 +855,40 @@ module load openmpi/4.1.1-gcc
 
 Simply use the +dev/build/build-swift-t.sh+ method.
 
+[[psij]]
+== PSI/J
+
+https://exaworks.org/psij-python[PSI/J] is a Python abstraction layer
+over cluster schedulers. If you are already familiar with PSI/J, it
+may be a convenient way to get Swift/T running on your system.
+
+Simply run:
+
+----
+$ swift-t -m psij workflow.swift
+----
+
+The normal Turbine settings are passed into a small Python program
+bundled with Swift/T called +turbine2psij.py+, which loads PSI/J
+and submits the job. This uses the PSI/J
+https://exaworks.org/psij-python/docs/v/0.9.10/.generated/tree.html#mpirun[MPI launcher],
+which launches <<turbine-pilot,turbine-pilot>>.
+
+=== Environment Settings
+
+These may be set in the shell or in a <<settings_file,settings file>>.
+They are recognized by +turbine2psij+ and passed into the PSI/J API.
+
++PSIJ_EXECUTOR+::
+The name of the PSI/J executor to use. See the
+https://exaworks.org/psij-python/docs/v/0.9.10/user_guide.html#executors-and-launchers[PSI/J
+executor list].
+
++PSIJ_DEBUG+::
+Enable additional debug messages showing what is happening.
+
+Other <<variables,Turbine scheduler variables>> are recognized automatically.
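A hypothetical session might look like the following; the executor name +slurm+ and the process count are placeholders for your site's values:

```shell
# Hypothetical PSI/J settings; "slurm" and the counts are example values.
export PSIJ_EXECUTOR=slurm   # PSI/J executor matching your scheduler
export PSIJ_DEBUG=1          # extra debug output from turbine2psij
export PROCS=4               # ordinary Turbine setting, picked up automatically
echo "PSIJ_EXECUTOR=$PSIJ_EXECUTOR PROCS=$PROCS"
```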
+
 == Cray
 
 === Polaris