Commit 3064d10 (parent: a161add)

README: Use Slurm as the main example

File tree: 2 files changed (+78, -106 lines)

README.md (39 additions, 53 deletions)
@@ -29,7 +29,7 @@ compute nodes:
 
 ```r
 > library(future.batchtools)
-> plan(batchtools_torque)
+> plan(batchtools_slurm)
 >
 > x %<-% { Sys.sleep(5); 3.14 }
 > y %<-% { Sys.sleep(5); 2.71 }
@@ -58,7 +58,7 @@ library(listenv)
 ## should be using multisession, where the number of
 ## parallel processes is automatically decided based on
 ## what the cluster grants to each compute node.
-plan(list(batchtools_torque, multisession))
+plan(list(batchtools_slurm, multisession))
 
 ## Find all samples (one FASTQ file per sample)
 fqs <- dir(pattern = "[.]fastq$")
@@ -114,71 +114,72 @@ batchtools backends.
 
 | Backend                 | Description                                                               | Alternative in future package
 |:------------------------|:--------------------------------------------------------------------------|:------------------------------------
-| `batchtools_torque`     | Futures are evaluated via a [TORQUE] / PBS job scheduler                  | N/A
-| `batchtools_slurm`      | Futures are evaluated via a [Slurm] job scheduler                         | N/A
-| `batchtools_sge`        | Futures are evaluated via a [Sun/Oracle Grid Engine (SGE)] job scheduler  | N/A
 | `batchtools_lsf`        | Futures are evaluated via a [Load Sharing Facility (LSF)] job scheduler   | N/A
 | `batchtools_openlava`   | Futures are evaluated via an [OpenLava] job scheduler                     | N/A
+| `batchtools_sge`        | Futures are evaluated via a [Sun/Oracle Grid Engine (SGE)] job scheduler  | N/A
+| `batchtools_slurm`      | Futures are evaluated via a [Slurm] job scheduler                         | N/A
+| `batchtools_torque`     | Futures are evaluated via a [TORQUE] / PBS job scheduler                  | N/A
 | `batchtools_custom`     | Futures are evaluated via a custom batchtools configuration R script or via a set of cluster functions | N/A
 | `batchtools_multicore`  | parallel evaluation by forking the current R process                      | `plan(multicore)`
-| `batchtools_local`      | sequential evaluation in a separate R process (on current machine)        | `plan(cluster, workers = "localhost")`
+| `batchtools_local`      | sequential evaluation in a separate R process (on current machine)        | `plan(cluster, workers = I(1))`
 
 
 ### Examples
 
-Below is an example illustrating how to use `batchtools_torque` to
+Below is an example illustrating how to use `batchtools_slurm` to
 configure the batchtools backend. For further details and examples on
 how to configure batchtools, see the [batchtools configuration] wiki
 page.
 
 To configure **batchtools** for job schedulers we need to setup a
 `*.tmpl` template file that is used to generate the script used by the
-scheduler. This is what a template file for TORQUE / PBS may look
-like:
+scheduler. This is what a template file for Slurm may look like:
 
 ```sh
 #!/bin/bash
 
-## Job name:
-#PBS -N <%= if (exists("job.name", mode = "character")) job.name else job.hash %>
+<%
+defaults <- list(
+  nodes = 1,          # single-host processing
+  time  = "00:05:00", # 5-min runtime
+  mem   = "100M"      # 100 MiB memory
+)
+resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
+opts <- unlist(resources, use.names = TRUE)
+opts <- sprintf("--%s=%s", names(opts), opts)
+opts <- paste(opts, collapse = " ")
+%>
+
+#SBATCH --job-name=<%= job.name %>
+#SBATCH --output=<%= log.file %>
+#SBATCH <%= opts %>
 
-## Direct streams to logfile:
-#PBS -o <%= log.file %>
-
-## Merge standard error and output:
-#PBS -j oe
-
-## Email on abort (a) and termination (e), but not when starting (b)
-#PBS -m ae
-
-## Resources needed:
-<% if (length(resources) > 0) {
-  opts <- unlist(resources, use.names = TRUE)
-  opts <- sprintf("%s=%s", names(opts), opts)
-  opts <- paste(opts, collapse = ",") %>
-#PBS -l <%= opts %>
-<% } %>
-
-## Launch R and evaluated the batchtools R job
 Rscript -e 'batchtools::doJobCollection("<%= uri %>")'
 ```
 
-If this template is saved to file `batchtools.torque.tmpl` (without
-period) in the working directory or as `.batchtools.torque.tmpl` (with
+If this template is saved to file `batchtools.slurm.tmpl` (without
+period) in the working directory or as `.batchtools.slurm.tmpl` (with
 period) in the user's home directory, then it will be automatically
 located by the **batchtools** framework and loaded when doing:
 
 ```r
-> plan(batchtools_torque)
+plan(batchtools_slurm)
 ```
 
+It is also possible to specify the template file explicitly, e.g.
+
+```r
+plan(batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
+```
+
 Resource parameters can be specified via argument `resources`, which
 should be a named list and is passed as is to the template file. For
 example, to request that each job would get allotted 12 cores (on a
 single machine) and up to 5 GiB of memory, use:
 
 ```r
-> plan(batchtools_torque, resources = list(nodes = "1:ppn=12", vmem = "5gb"))
+plan(batchtools_slurm, resources = list(ntasks = 12, mem = "5G"))
 ```
 
 To specify the `resources` argument at the same time as using nested
@@ -187,34 +188,19 @@ arguments. For instance,
 
 ```r
 plan(list(
-  tweak(batchtools_torque, resources = list(nodes = "1:ppn=12", vmem = "5gb")),
+  tweak(batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
   multisession
 ))
 ```
 
-causes the first level of futures to be submitted via the TORQUE job
+causes the first level of futures to be submitted via the Slurm job
 scheduler requesting 12 cores and 5 GiB of memory per job. The second
 level of futures will be evaluated using multisession using the 12
 cores given to each job by the scheduler.
 
-A similar filename format is used for the other types of job
-schedulers supported. For instance, for Slurm the template file
-should be named `./batchtools.slurm.tmpl` or
-`~/.batchtools.slurm.tmpl` in order for
-
-```r
-> plan(batchtools_slurm)
-```
-
-to locate the file automatically. To specify this template file
-explicitly, use argument `template`, e.g.
-
-```r
-> plan(batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
-```
-
-For further details and examples on how to configure **batchtools** per
-se, see the [batchtools configuration] wiki page.
+For further details and examples on how to configure **batchtools**
+per se, see the [batchtools configuration] wiki page and the help
+pages for `batchtools_slurm`, `batchtools_sge`, etc.
 
 
 ## Demos
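For reference, the R chunk added to the new Slurm template above can be exercised on its own. The sketch below is not part of the commit; it simply replays that logic with a hypothetical `resources` list, such as one passed via `plan(batchtools_slurm, resources = ...)`, to show how user-supplied values override the template defaults and end up as a single `#SBATCH` options line:

```r
## Standalone sketch (not part of this commit) of the resource-merging
## chunk in the new Slurm template.  'resources' stands in for the named
## list supplied via plan(batchtools_slurm, resources = ...).
resources <- list(ntasks = 12, mem = "5G")

defaults <- list(
  nodes = 1,          # single-host processing
  time  = "00:05:00", # 5-min runtime
  mem   = "100M"      # 100 MiB memory
)

## Keep user-specified values; add defaults only for fields not specified
resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])

## Flatten into one "--key=value ..." string for the #SBATCH line
opts <- unlist(resources, use.names = TRUE)
opts <- sprintf("--%s=%s", names(opts), opts)
opts <- paste(opts, collapse = " ")

cat("#SBATCH", opts, "\n")
#> #SBATCH --ntasks=12 --mem=5G --nodes=1 --time=00:05:00
```

Note how `mem = "5G"` overrides the template default of `"100M"`, while `nodes` and `time` fall back to their defaults.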

vignettes/future.batchtools.md.rsp (39 additions, 53 deletions)
@@ -42,7 +42,7 @@ compute nodes:
 
 ```r
 > library(future.batchtools)
-> plan(batchtools_torque)
+> plan(batchtools_slurm)
 >
 > x %<-% { Sys.sleep(5); 3.14 }
 > y %<-% { Sys.sleep(5); 2.71 }
@@ -71,7 +71,7 @@ library(listenv)
 ## should be using multisession, where the number of
 ## parallel processes is automatically decided based on
 ## what the cluster grants to each compute node.
-plan(list(batchtools_torque, multisession))
+plan(list(batchtools_slurm, multisession))
 
 ## Find all samples (one FASTQ file per sample)
 fqs <- dir(pattern = "[.]fastq$")
@@ -127,72 +127,73 @@ batchtools backends.
 
 | Backend                 | Description                                                               | Alternative in future package
 |:------------------------|:--------------------------------------------------------------------------|:------------------------------------
-| `batchtools_torque`     | Futures are evaluated via a [TORQUE] / PBS job scheduler                  | N/A
-| `batchtools_slurm`      | Futures are evaluated via a [Slurm] job scheduler                         | N/A
-| `batchtools_sge`        | Futures are evaluated via a [Sun/Oracle Grid Engine (SGE)] job scheduler  | N/A
 | `batchtools_lsf`        | Futures are evaluated via a [Load Sharing Facility (LSF)] job scheduler   | N/A
 | `batchtools_openlava`   | Futures are evaluated via an [OpenLava] job scheduler                     | N/A
+| `batchtools_sge`        | Futures are evaluated via a [Sun/Oracle Grid Engine (SGE)] job scheduler  | N/A
+| `batchtools_slurm`      | Futures are evaluated via a [Slurm] job scheduler                         | N/A
+| `batchtools_torque`     | Futures are evaluated via a [TORQUE] / PBS job scheduler                  | N/A
 <%-- | `batchtools_docker` | Futures are evaluated via a [Docker Swarm] cluster | N/A --%>
 | `batchtools_custom`     | Futures are evaluated via a custom batchtools configuration R script or via a set of cluster functions | N/A
 | `batchtools_multicore`  | parallel evaluation by forking the current R process                      | `plan(multicore)`
-| `batchtools_local`      | sequential evaluation in a separate R process (on current machine)        | `plan(cluster, workers = "localhost")`
+| `batchtools_local`      | sequential evaluation in a separate R process (on current machine)        | `plan(cluster, workers = I(1))`
 
 
 ### Examples
 
-Below is an example illustrating how to use `batchtools_torque` to
+Below is an example illustrating how to use `batchtools_slurm` to
 configure the batchtools backend. For further details and examples on
 how to configure batchtools, see the [batchtools configuration] wiki
 page.
 
 To configure **batchtools** for job schedulers we need to setup a
 `*.tmpl` template file that is used to generate the script used by the
-scheduler. This is what a template file for TORQUE / PBS may look
-like:
+scheduler. This is what a template file for Slurm may look like:
 
 ```sh
 #!/bin/bash
 
-## Job name:
-#PBS -N <%%= if (exists("job.name", mode = "character")) job.name else job.hash %%>
+<%%
+defaults <- list(
+  nodes = 1,          # single-host processing
+  time  = "00:05:00", # 5-min runtime
+  mem   = "100M"      # 100 MiB memory
+)
+resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
+opts <- unlist(resources, use.names = TRUE)
+opts <- sprintf("--%s=%s", names(opts), opts)
+opts <- paste(opts, collapse = " ")
+%%>
+
+#SBATCH --job-name=<%%= job.name %%>
+#SBATCH --output=<%%= log.file %%>
+#SBATCH <%%= opts %%>
 
-## Direct streams to logfile:
-#PBS -o <%%= log.file %%>
-
-## Merge standard error and output:
-#PBS -j oe
-
-## Email on abort (a) and termination (e), but not when starting (b)
-#PBS -m ae
-
-## Resources needed:
-<%% if (length(resources) > 0) {
-  opts <- unlist(resources, use.names = TRUE)
-  opts <- sprintf("%s=%s", names(opts), opts)
-  opts <- paste(opts, collapse = ",") %%>
-#PBS -l <%%= opts %%>
-<%% } %%>
-
-## Launch R and evaluated the batchtools R job
 Rscript -e 'batchtools::doJobCollection("<%%= uri %%>")'
 ```
 
-If this template is saved to file `batchtools.torque.tmpl` (without
-period) in the working directory or as `.batchtools.torque.tmpl` (with
+If this template is saved to file `batchtools.slurm.tmpl` (without
+period) in the working directory or as `.batchtools.slurm.tmpl` (with
 period) in the user's home directory, then it will be automatically
 located by the **batchtools** framework and loaded when doing:
 
 ```r
-> plan(batchtools_torque)
+plan(batchtools_slurm)
 ```
 
+It is also possible to specify the template file explicitly, e.g.
+
+```r
+plan(batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
+```
+
 Resource parameters can be specified via argument `resources`, which
 should be a named list and is passed as is to the template file. For
 example, to request that each job would get allotted 12 cores (on a
 single machine) and up to 5 GiB of memory, use:
 
 ```r
-> plan(batchtools_torque, resources = list(nodes = "1:ppn=12", vmem = "5gb"))
+plan(batchtools_slurm, resources = list(ntasks = 12, mem = "5G"))
 ```
 
 To specify the `resources` argument at the same time as using nested
@@ -201,34 +202,19 @@ arguments. For instance,
 
 ```r
 plan(list(
-  tweak(batchtools_torque, resources = list(nodes = "1:ppn=12", vmem = "5gb")),
+  tweak(batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
   multisession
 ))
 ```
 
-causes the first level of futures to be submitted via the TORQUE job
+causes the first level of futures to be submitted via the Slurm job
 scheduler requesting 12 cores and 5 GiB of memory per job. The second
 level of futures will be evaluated using multisession using the 12
 cores given to each job by the scheduler.
 
-A similar filename format is used for the other types of job
-schedulers supported. For instance, for Slurm the template file
-should be named `./batchtools.slurm.tmpl` or
-`~/.batchtools.slurm.tmpl` in order for
-
-```r
-> plan(batchtools_slurm)
-```
-
-to locate the file automatically. To specify this template file
-explicitly, use argument `template`, e.g.
-
-```r
-> plan(batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
-```
-
-For further details and examples on how to configure **batchtools** per
-se, see the [batchtools configuration] wiki page.
+For further details and examples on how to configure **batchtools**
+per se, see the [batchtools configuration] wiki page and the help
+pages for `batchtools_slurm`, `batchtools_sge`, etc.
 
 
 ## Demos
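As a usage note for the nested-plan example appearing in both files, the new configuration can be exercised roughly as follows. This is a sketch only, not part of the commit: it assumes access to a working Slurm cluster and a `batchtools.slurm.tmpl` template that **batchtools** can locate.

```r
## Sketch only: assumes a Slurm cluster and a batchtools.slurm.tmpl
## template findable by batchtools (working directory or home directory).
library(future.batchtools)

plan(list(
  ## First level: one Slurm job per future (12 tasks, 5 GiB of memory each)
  tweak(batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
  ## Second level: multisession workers within each job's allocation
  multisession
))

## The future below is submitted as a Slurm job; its value reports how
## many cores the nested multisession level would see inside that job.
cores %<-% { future::availableCores() }
cores
```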
