@@ -23,7 +23,7 @@ users of your package to leverage the compute power of
 high-performance computing (HPC) clusters via a simple switch in
 settings - without having to change any code at all.
 
-For instance, if **batchtools** is properly configures, the below two
+For instance, if **batchtools** is properly configured, the below two
 expressions for futures `x` and `y` will be processed on two different
 compute nodes:
 
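The two expressions referenced above fall outside this hunk's context. A
minimal sketch of the pattern, assuming a configured Slurm backend, might
look like:

```r
library(future)
plan(future.batchtools::batchtools_slurm)

# Each future assignment is submitted as its own Slurm job, so x and y
# may be evaluated on two different compute nodes
x %<-% { Sys.info()[["nodename"]] }
y %<-% { Sys.info()[["nodename"]] }
```
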
@@ -127,82 +127,36 @@ batchtools backends.
 
 ### Examples
 
-Below is an examples illustrating how to use `batchtools_slurm` to
-configure the batchtools backend. For further details and examples on
-how to configure batchtools, see the [batchtools configuration] wiki
-page.
-
-To configure **batchtools** for job schedulers we need to setup a
-`*.tmpl` template file that is used to generate the script used by the
-scheduler. This is what a template file for Slurm may look like:
-
-```sh
-#!/bin/bash
-
-<%
-defaults <- list(
-  nodes = 1,          # single-host processing
-  time = "00:05:00",  # 5-min runtime
-  mem = "100M"        # 100 MiB memory
-)
-resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
-opts <- unlist(resources, use.names = TRUE)
-opts <- sprintf("--%s=%s", names(opts), opts)
-opts <- paste(opts, collapse = " ")
-%>
-
-#SBATCH --job-name=<%= job.name %>
-#SBATCH --output=<%= log.file %>
-#SBATCH <%= opts %>
-
-Rscript -e 'batchtools::doJobCollection("<%= uri %>")'
-```
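
To see what the brew chunk in this template computes, one can run its
merging logic in plain R. With `resources = list(ntasks = 12, mem = "5G")`,
the same values used in the example further below, the unspecified
defaults are filled in and `opts` becomes:

```r
resources <- list(ntasks = 12, mem = "5G")
defaults <- list(nodes = 1, time = "00:05:00", mem = "100M")

# Fill in defaults for resources the user did not specify
resources <- c(resources, defaults[setdiff(names(defaults), names(resources))])
opts <- unlist(resources, use.names = TRUE)
opts <- sprintf("--%s=%s", names(opts), opts)
paste(opts, collapse = " ")
#> [1] "--ntasks=12 --mem=5G --nodes=1 --time=00:05:00"
```

The generated job script would then contain the line
`#SBATCH --ntasks=12 --mem=5G --nodes=1 --time=00:05:00`.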
-
-If this template is saved to file `batchtools.slurm.tmpl` (without
-period) in the working directory or as `.batchtools.slurm.tmpl` (with
-period) in the user's home directory, then it will be automatically
-located by the **batchtools** framework and loaded when doing:
-
-```r
-plan(future.batchtools::batchtools_slurm)
-```
-
-It is also possible to specify the template file explicitly, e.g.
-
-```r
-plan(future.batchtools::batchtools_slurm, template = "/path/to/batchtools.slurm.tmpl")
-```
-
-
-Resource parameters can be specified via argument `resources` which
-should be a named list and is passed as is to the template file. For
-example, to request that each job would get alloted 12 cores (one a
-single machine) and up to 5 GiB of memory, use:
+Below is an example of how to resolve futures via a Slurm scheduler.
 
 ```r
-plan(future.batchtools::batchtools_slurm, resources = list(ntasks = 12, mem = "5G"))
-```
-
-To specify the `resources` argument at the same time as using nested
-future strategies, one can use `tweak()` to tweak the default
-arguments. For instance,
+library(future)
 
-```r
-plan(list(
-  tweak(future.batchtools::batchtools_slurm, resources = list(ntasks = 12, mem = "5G")),
-  multisession
+# Limit runtime to 10 minutes and memory to 400 MiB per future, and
+# request four parallel tasks on a single node. On this system, R is
+# available via environment module 'r'; by specifying 'r/4.5.1',
+# 'module load r/4.5.1' will be added to the submitted job script.
+plan(future.batchtools::batchtools_slurm, resources = list(
+  time = "00:10:00", mem = "400M",
+  asis = c("--nodes=1", "--ntasks=4"),
+  modules = c("r/4.5.1")
 ))
-```
-
-causes the first level of futures to be submitted via the Slurm job
-scheduler requesting 12 cores and 5 GiB of memory per job. The second
-level of futures will be evaluated using multisession using the 12
-cores given to each job by the scheduler.
-
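As a usage sketch of that two-level topology (hypothetical workload,
assuming the tweaked plan above is in place), the outer future becomes one
Slurm job and the inner futures run as multisession workers on that job's
12 cores:

```r
library(future)

# Outer future: submitted to Slurm as a single 12-core job
f <- future({
  # Inner futures: resolved by the second-level multisession plan,
  # i.e. as parallel R sessions on the cores of this Slurm job
  fs <- lapply(1:12, function(i) future(sqrt(i)))
  vapply(fs, value, numeric(1))
})
value(f)
```
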
-For further details and examples on how to configure **batchtools**
-per se, see the [batchtools configuration] wiki page and the help
-pages for `batchtools_slurm`, `batchtools_sge`, etc.
 
+# Give it a spin
+f <- future({
+  data.frame(
+    hostname = Sys.info()[["nodename"]],
+    os = Sys.info()[["sysname"]],
+    cores = unname(parallelly::availableCores()),
+    modules = Sys.getenv("LOADEDMODULES")
+  )
+})
+info <- value(f)
+print(info)
+#>   hostname    os cores modules
+#> 1      n12 Linux     4 r/4.5.1
+```
 
 ## Demos
 