@@ -42,7 +42,7 @@ compute nodes:

```r
> library(future.batchtools)
- > plan(batchtools_torque)
+ > plan(batchtools_slurm)
>
> x %<-% { Sys.sleep(5); 3.14 }
> y %<-% { Sys.sleep(5); 2.71 }
@@ -71,7 +71,7 @@ library(listenv)
## should be using multisession, where the number of
## parallel processes is automatically decided based on
## what the cluster grants to each compute node.
- plan(list(batchtools_torque, multisession))
+ plan(list(batchtools_slurm, multisession))

## Find all samples (one FASTQ file per sample)
fqs <- dir(pattern = "[.]fastq$")
@@ -127,72 +127,73 @@ batchtools backends.

| Backend                  | Description                                                               | Alternative in future package
|:-------------------------|:--------------------------------------------------------------------------|:------------------------------------
- | `batchtools_torque`      | Futures are evaluated via a [TORQUE] / PBS job scheduler                  | N/A
- | `batchtools_slurm`       | Futures are evaluated via a [Slurm] job scheduler                         | N/A
- | `batchtools_sge`         | Futures are evaluated via a [Sun/Oracle Grid Engine (SGE)] job scheduler  | N/A
| `batchtools_lsf`         | Futures are evaluated via a [Load Sharing Facility (LSF)] job scheduler   | N/A
| `batchtools_openlava`    | Futures are evaluated via an [OpenLava] job scheduler                     | N/A
+ | `batchtools_sge`         | Futures are evaluated via a [Sun/Oracle Grid Engine (SGE)] job scheduler  | N/A
+ | `batchtools_slurm`       | Futures are evaluated via a [Slurm] job scheduler                         | N/A
+ | `batchtools_torque`      | Futures are evaluated via a [TORQUE] / PBS job scheduler                  | N/A
<%-- | `batchtools_docker` | Futures are evaluated via a [Docker Swarm] cluster                        | N/A --%>
| `batchtools_custom`      | Futures are evaluated via a custom batchtools configuration R script or via a set of cluster functions | N/A
| `batchtools_multicore`   | Futures are evaluated in parallel by forking the current R process        | `plan(multicore)`
- | `batchtools_local`       | Futures are evaluated sequentially in a separate R process (on the current machine) | `plan(cluster, workers = "localhost")`
+ | `batchtools_local`       | Futures are evaluated sequentially in a separate R process (on the current machine) | `plan(cluster, workers = I(1))`
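+
+ For quick testing without a scheduler, the `batchtools_local` backend
+ listed above runs each future in a separate R process on the local
+ machine. A minimal sketch (any expression works in place of
+ `Sys.getpid()`):
+
+ ```r
+ plan(batchtools_local)
+ pid %<-% Sys.getpid()
+ pid  ## differs from Sys.getpid() in the current R session
+ ```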


### Examples

- Below is an example illustrating how to use `batchtools_torque` to
+ Below is an example illustrating how to use `batchtools_slurm` to
configure the batchtools backend. For further details and examples on
how to configure batchtools, see the [batchtools configuration] wiki
page.

To configure **batchtools** for job schedulers we need to set up a
`*.tmpl` template file that is used to generate the script used by the
- scheduler. This is what a template file for TORQUE / PBS may look
- like:
+ scheduler. This is what a template file for Slurm may look like:

```sh
#!/bin/bash

- ## Job name:
- #PBS -N <%%= if (exists("job.name", mode = "character")) job.name else job.hash %%>
+ <%%
+ defaults <- list(
+   nodes = 1,          # single-host processing
+   time = "00:05:00",  # 5-min runtime
+   mem = "100M"        # 100 MiB memory
+ )
+ ## Fill in defaults for any resources not specified by the user
+ resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
+ ## Turn the resources list into '--<name>=<value>' options
+ opts <- unlist(resources, use.names = TRUE)
+ opts <- sprintf("--%s=%s", names(opts), opts)
+ opts <- paste(opts, collapse = " ")
+ %%>
+
+ #SBATCH --job-name=<%%= job.name %%>
+ #SBATCH --output=<%%= log.file %%>
+ #SBATCH <%%= opts %%>

- ## Direct streams to logfile:
- #PBS -o <%%= log.file %%>
-
- ## Merge standard error and output:
- #PBS -j oe
-
- ## Email on abort (a) and termination (e), but not when starting (b)
- #PBS -m ae
-
- ## Resources needed:
- <%% if (length(resources) > 0) {
- opts <- unlist(resources, use.names = TRUE)
- opts <- sprintf("%s=%s", names(opts), opts)
- opts <- paste(opts, collapse = ",") %%>
- #PBS -l <%%= opts %%>
- <%% } %%>
-
- ## Launch R and evaluated the batchtools R job
Rscript -e 'batchtools::doJobCollection("<%%= uri %%>")'
```
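+
+ For illustration, with no user-specified resources, the template above
+ renders to a job script along the following lines; the job name, log
+ file, and registry URI are filled in by **batchtools** and are shown
+ here as placeholders:
+
+ ```sh
+ #!/bin/bash
+
+ #SBATCH --job-name=<job name>
+ #SBATCH --output=<log file>
+ #SBATCH --nodes=1 --time=00:05:00 --mem=100M
+
+ Rscript -e 'batchtools::doJobCollection("<uri>")'
+ ```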

- If this template is saved to file `batchtools.torque.tmpl` (without
- period) in the working directory or as `.batchtools.torque.tmpl` (with
+ If this template is saved to file `batchtools.slurm.tmpl` (without
+ period) in the working directory or as `.batchtools.slurm.tmpl` (with
period) in the user's home directory, then it will be automatically
located by the **batchtools** framework and loaded when doing:

```r
- > plan(batchtools_torque)
+ plan(batchtools_slurm)
```

+ It is also possible to specify the template file explicitly, e.g.
+
+ ```r
+ plan(batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
+ ```
+
+
Resource parameters can be specified via the argument `resources`, which
should be a named list that is passed as is to the template file. For
example, to request that each job be allotted 12 cores (on a
single machine) and up to 5 GiB of memory, use:

```r
- > plan(batchtools_torque, resources = list(nodes = "1:ppn=12", vmem = "5gb"))
+ plan(batchtools_slurm, resources = list(ntasks = 12, mem = "5G"))
```
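+
+ As a sanity check, the option-building snippet from the template above
+ can be replayed at the R prompt; with these `resources` and the
+ template's defaults it yields the following `#SBATCH` options (a sketch
+ of the same logic):
+
+ ```r
+ resources <- list(ntasks = 12, mem = "5G")
+ defaults <- list(nodes = 1, time = "00:05:00", mem = "100M")
+ ## Defaults fill in only the resources the user did not specify
+ resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
+ opts <- sprintf("--%s=%s", names(resources), unlist(resources))
+ paste(opts, collapse = " ")
+ #> [1] "--ntasks=12 --mem=5G --nodes=1 --time=00:05:00"
+ ```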

To specify the `resources` argument at the same time as using nested
@@ -201,34 +202,19 @@ arguments. For instance,

```r
plan(list(
-   tweak(batchtools_torque, resources = list(nodes = "1:ppn=12", vmem = "5gb")),
+   tweak(batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
  multisession
))
```

- causes the first level of futures to be submitted via the TORQUE job
+ causes the first level of futures to be submitted via the Slurm job
scheduler, requesting 12 cores and 5 GiB of memory per job. The second
level of futures will be evaluated via multisession, using the 12
cores given to each job by the scheduler.
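+
+ Inside such a job, the number of cores that multisession will use can
+ be inspected with `availableCores()`, which takes the scheduler's
+ allotment into account. A sketch, assuming the nested setup above:
+
+ ```r
+ ncores %<-% availableCores()
+ ncores  ## should report the 12 cores granted by the scheduler
+ ```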

- A similar filename format is used for the other types of job
- schedulers supported. For instance, for Slurm the template file
- should be named `./batchtools.slurm.tmpl` or
- `~/.batchtools.slurm.tmpl` in order for
-
- ```r
- > plan(batchtools_slurm)
- ```
-
- to locate the file automatically. To specify this template file
- explicitly, use argument `template`, e.g.
-
- ```r
- > plan(batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
- ```
-
- For further details and examples on how to configure **batchtools** per
- se, see the [batchtools configuration] wiki page.
+ For further details and examples on how to configure **batchtools**
+ per se, see the [batchtools configuration] wiki page and the help
+ pages for `batchtools_slurm`, `batchtools_sge`, etc.


## Demos