
Commit 5054392

README: Simplify Slurm example [ci skip]
Parent: a66ade5

3 files changed: 52 additions, 144 deletions


DESCRIPTION

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 Package: future.batchtools
-Version: 0.12.2-9966
+Version: 0.12.2-9967
 Depends:
     R (>= 3.2.0),
     parallelly,

README.md

Lines changed: 26 additions & 72 deletions
@@ -23,7 +23,7 @@ users of your package to leverage the compute power of
 high-performance computing (HPC) clusters via a simple switch in
 settings - without having to change any code at all.
 
-For instance, if **batchtools** is properly configures, the below two
+For instance, if **batchtools** is properly configured, the below two
 expressions for futures `x` and `y` will be processed on two different
 compute nodes:
 
@@ -127,82 +127,36 @@ batchtools backends.
 
 ### Examples
 
-Below is an examples illustrating how to use `batchtools_slurm` to
-configure the batchtools backend. For further details and examples on
-how to configure batchtools, see the [batchtools configuration] wiki
-page.
-
-To configure **batchtools** for job schedulers we need to setup a
-`*.tmpl` template file that is used to generate the script used by the
-scheduler. This is what a template file for Slurm may look like:
-
-```sh
-#!/bin/bash
-
-<%
-defaults <- list(
-  nodes = 1,          # single-host processing
-  time = "00:05:00",  # 5-min runtime
-  mem = "100M"        # 100 MiB memory
-)
-resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
-opts <- unlist(resources, use.names = TRUE)
-opts <- sprintf("--%s=%s", names(opts), opts)
-opts <- paste(opts, collapse = " ")
-%>
-
-#SBATCH --job-name=<%= job.name %>
-#SBATCH --output=<%= log.file %>
-#SBATCH <%= opts %>
-
-Rscript -e 'batchtools::doJobCollection("<%= uri %>")'
-```
-
-If this template is saved to file `batchtools.slurm.tmpl` (without
-period) in the working directory or as `.batchtools.slurm.tmpl` (with
-period) the user's home directory, then it will be automatically
-located by the **batchtools** framework and loaded when doing:
-
-```r
-plan(future.batchtools::batchtools_slurm)
-```
-
-It is also possible to specify the template file explicitly, e.g.
-
-```r
-plan(future.batchtools::batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
-```
-
-
-Resource parameters can be specified via argument `resources` which
-should be a named list and is passed as is to the template file. For
-example, to request that each job would get alloted 12 cores (one a
-single machine) and up to 5 GiB of memory, use:
+Below is an example of how to resolve futures via a Slurm scheduler.
 
 ```r
-plan(future.batchtools::batchtools_slurm, resources = list(ntasks = 12, mem = "5G"))
-```
-
-To specify the `resources` argument at the same time as using nested
-future strategies, one can use `tweak()` to tweak the default
-arguments. For instance,
+library(future)
 
-```r
-plan(list(
-  tweak(future.batchtools::batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
-  multisession
+# Limit runtime to 10 minutes and memory to 400 MiB per future,
+# request a parallel environment with four slots on a single host.
+# On this system, R is available via environment module 'r'. By
+# specifying 'r/4.5.1', 'module load r/4.5.1' will be added to
+# the submitted job script.
+plan(future.batchtools::batchtools_slurm, resources = list(
+  time = "00:10:00", mem = "400M",
+  asis = c("--nodes=1", "--ntasks=4"),
+  modules = c("r/4.5.1")
 ))
-```
-
-causes the first level of futures to be submitted via the Slurm job
-scheduler requesting 12 cores and 5 GiB of memory per job. The second
-level of futures will be evaluated using multisession using the 12
-cores given to each job by the scheduler.
-
-For further details and examples on how to configure **batchtools**
-per se, see the [batchtools configuration] wiki page and the help
-pages for `batchtools_slurm`, `batchtools_sge`, etc.
 
+# Give it a spin
+f <- future({
+  data.frame(
+    hostname = Sys.info()[["nodename"]],
+    os = Sys.info()[["sysname"]],
+    cores = unname(parallelly::availableCores()),
+    modules = Sys.getenv("LOADEDMODULES")
+  )
+})
+info <- value(f)
+print(info)
+#>   hostname    os cores modules
+#> 1      n12 Linux     4 r/4.5.1
+```
 
 ## Demos
 
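The README context above notes that futures `x` and `y` are processed on two different compute nodes. As a reading aid only, and not part of this commit, here is a minimal sketch of what such future assignments look like under the simplified Slurm plan; the `time` and `mem` values are taken from the new example, everything else is illustrative:

```r
library(future)
library(future.batchtools)

# Assumes a working Slurm cluster and a batchtools Slurm template, as in
# the new README example above; the resource values are copied from it.
plan(batchtools_slurm, resources = list(time = "00:10:00", mem = "400M"))

# Each future assignment is submitted as its own Slurm job, so the two
# expressions may be evaluated on two different compute nodes.
x %<-% Sys.info()[["nodename"]]
y %<-% Sys.info()[["nodename"]]

c(x = x, y = y)  # blocks until both jobs have finished
```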
vignettes/future.batchtools.md.rsp

Lines changed: 25 additions & 71 deletions
@@ -141,82 +141,36 @@ batchtools backends.
 
 ### Examples
 
-Below is an examples illustrating how to use `batchtools_slurm` to
-configure the batchtools backend. For further details and examples on
-how to configure batchtools, see the [batchtools configuration] wiki
-page.
-
-To configure **batchtools** for job schedulers we need to setup a
-`*.tmpl` template file that is used to generate the script used by the
-scheduler. This is what a template file for Slurm may look like:
-
-```sh
-#!/bin/bash
-
-<%%
-defaults <- list(
-  nodes = 1,          # single-host processing
-  time = "00:05:00",  # 5-min runtime
-  mem = "100M"        # 100 MiB memory
-)
-resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
-opts <- unlist(resources, use.names = TRUE)
-opts <- sprintf("--%s=%s", names(opts), opts)
-opts <- paste(opts, collapse = " ")
-%%>
-
-#SBATCH --job-name=<%%= job.name %%>
-#SBATCH --output=<%%= log.file %%>
-#SBATCH <%%= opts %%>
-
-Rscript -e 'batchtools::doJobCollection("<%%= uri %%>")'
-```
-
-If this template is saved to file `batchtools.slurm.tmpl` (without
-period) in the working directory or as `.batchtools.slurm.tmpl` (with
-period) the user's home directory, then it will be automatically
-located by the **batchtools** framework and loaded when doing:
-
-```r
-plan(future.batchtools::batchtools_slurm)
-```
-
-It is also possible to specify the template file explicitly, e.g.
-
-```r
-plan(future.batchtools::batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
-```
-
-
-Resource parameters can be specified via argument `resources` which
-should be a named list and is passed as is to the template file. For
-example, to request that each job would get alloted 12 cores (one a
-single machine) and up to 5 GiB of memory, use:
+Below is an example of how to resolve futures via a Slurm scheduler.
 
 ```r
-plan(future.batchtools::batchtools_slurm, resources = list(ntasks = 12, mem = "5G"))
-```
-
-To specify the `resources` argument at the same time as using nested
-future strategies, one can use `tweak()` to tweak the default
-arguments. For instance,
+library(future)
 
-```r
-plan(list(
-  tweak(future.batchtools::batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
-  multisession
+# Limit runtime to 10 minutes and memory to 400 MiB per future,
+# request a parallel environment with four slots on a single host.
+# On this system, R is available via environment module 'r'. By
+# specifying 'r/4.5.1', 'module load r/4.5.1' will be added to
+# the submitted job script.
+plan(future.batchtools::batchtools_slurm, resources = list(
+  time = "00:10:00", mem = "400M",
+  asis = c("--nodes=1", "--ntasks=4"),
+  modules = c("r/4.5.1")
 ))
-```
-
-causes the first level of futures to be submitted via the Slurm job
-scheduler requesting 12 cores and 5 GiB of memory per job. The second
-level of futures will be evaluated using multisession using the 12
-cores given to each job by the scheduler.
-
-For further details and examples on how to configure **batchtools**
-per se, see the [batchtools configuration] wiki page and the help
-pages for `batchtools_slurm`, `batchtools_sge`, etc.
 
+# Give it a spin
+f <- future({
+  data.frame(
+    hostname = Sys.info()[["nodename"]],
+    os = Sys.info()[["sysname"]],
+    cores = unname(parallelly::availableCores()),
+    modules = Sys.getenv("LOADEDMODULES")
+  )
+})
+info <- value(f)
+print(info)
+#>   hostname    os cores modules
+#> 1      n12 Linux     4 r/4.5.1
+```
 
 ## Demos
 
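Both files now make the same point as the package introduction: switching between local and HPC processing is a change of `plan()` only, with no changes to the downstream code. An illustrative sketch of that switch, not part of this commit:

```r
library(future)

# The same downstream code runs unchanged; only the plan() call differs.
run <- function() {
  f <- future(Sys.info()[["nodename"]])
  value(f)
}

plan(multisession)                         # local background R sessions
run()

plan(future.batchtools::batchtools_slurm)  # each future becomes a Slurm job
run()
```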