README.md (6 additions, 6 deletions)
```diff
@@ -45,10 +45,10 @@ library("future")
 library("listenv")
 ## The first level of futures should be submitted to the
 ## cluster using batchtools. The second level of futures
-## should be using multiprocessing, where the number of
+## should be using multisession, where the number of
 ## parallel processes is automatically decided based on
 ## what the cluster grants to each compute node.
-plan(list(batchtools_torque, multiprocess))
+plan(list(batchtools_torque, multisession))
 
 ## Find all samples (one FASTQ file per sample)
 fqs <- dir(pattern = "[.]fastq$")
```
````diff
@@ -77,11 +77,11 @@ bams <- as.list(bams)
 ```
 Note that a user who do not have access to a cluster could use the same script processing samples sequentially and chromosomes in parallel on a single machine using:
 ```r
-plan(list(sequential, multiprocess))
+plan(list(sequential, multisession))
 ```
 or samples in parallel and chromosomes sequentially using:
 ```r
-plan(list(multiprocess, sequential))
+plan(list(multisession, sequential))
 ```
 
 For an introduction as well as full details on how to use futures,
````
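The single-machine alternatives in the hunk above can be tried end to end. Below is a minimal, runnable sketch of the same nested topology, with mock sample and chromosome loops standing in for the FASTQ processing; the loop variables `ss` and `cc` and the mock data are illustrative, not from the changed files (`bams` follows the hunk's context line):

```r
library("future")
library("listenv")

## Outer layer: samples sequentially; inner layer: chromosomes in
## parallel via multisession (worker count chosen from local cores).
plan(list(sequential, multisession))

bams <- listenv()
for (ss in 1:3) {              ## three mock samples (illustrative)
  bams[[ss]] %<-% {
    chrs <- listenv()
    for (cc in 1:2) {          ## two mock chromosomes (illustrative)
      chrs[[cc]] %<-% sprintf("sample %d, chr %d", ss, cc)
    }
    unlist(as.list(chrs))
  }
}
str(as.list(bams))             ## blocks until all futures resolve
```

Swapping the first element to `multisession` and the second to `sequential` gives the reverse topology from the second changed line, with no other edits to the loop body.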
```diff
@@ -157,10 +157,10 @@ To specify the `resources` argument at the same time as using nested future stra
-causes the first level of futures to be submitted via the TORQUE job scheduler requesting 12 cores and 5 GiB of memory per job. The second level of futures will be evaluated using multiprocessing using the 12 cores given to each job by the scheduler.
+causes the first level of futures to be submitted via the TORQUE job scheduler requesting 12 cores and 5 GiB of memory per job. The second level of futures will be evaluated using multisession using the 12 cores given to each job by the scheduler.
 
 A similar filename format is used for the other types of job schedulers supported. For instance, for Slurm the template file should be named `./batchtools.slurm.tmpl` or `~/.batchtools.slurm.tmpl` in order for
```
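The `resources` paragraph in the hunk above can be sketched concretely using `future::tweak()`. The exact field names inside `resources` (here `nodes` and `vmem`) depend on the placeholders in the TORQUE template file and are illustrative assumptions, not values from the changed files:

```r
library("future")
library("future.batchtools")

## Outer layer: each TORQUE job requests 12 cores and 5 GiB of memory;
## the names in 'resources' must match the placeholders used by the
## batchtools TORQUE template file (illustrative here).
## Inner layer: multisession on the cores granted to the job.
plan(list(
  tweak(batchtools_torque,
        resources = list(nodes = "1:ppn=12", vmem = "5gb")),
  multisession
))
```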
vignettes/future.batchtools.md.rsp (6 additions, 6 deletions)
```diff
@@ -64,10 +64,10 @@ library("future")
 library("listenv")
 ## The first level of futures should be submitted to the
 ## cluster using batchtools. The second level of futures
-## should be using multiprocessing, where the number of
+## should be using multisession, where the number of
 ## parallel processes is automatically decided based on
 ## what the cluster grants to each compute node.
-plan(list(batchtools_torque, multiprocess))
+plan(list(batchtools_torque, multisession))
 
 ## Find all samples (one FASTQ file per sample)
 fqs <- dir(pattern = "[.]fastq$")
```
````diff
@@ -96,11 +96,11 @@ bams <- as.list(bams)
 ```
 Note that a user who do not have access to a cluster could use the same script processing samples sequentially and chromosomes in parallel on a single machine using:
 ```r
-plan(list(sequential, multiprocess))
+plan(list(sequential, multisession))
 ```
 or samples in parallel and chromosomes sequentially using:
 ```r
-plan(list(multiprocess, sequential))
+plan(list(multisession, sequential))
 ```
 
 For an introduction as well as full details on how to use futures,
````
```diff
@@ -177,10 +177,10 @@ To specify the `resources` argument at the same time as using nested future stra
-causes the first level of futures to be submitted via the TORQUE job scheduler requesting 12 cores and 5 GiB of memory per job. The second level of futures will be evaluated using multiprocessing using the 12 cores given to each job by the scheduler.
+causes the first level of futures to be submitted via the TORQUE job scheduler requesting 12 cores and 5 GiB of memory per job. The second level of futures will be evaluated using multisession using the 12 cores given to each job by the scheduler.
 
 A similar filename format is used for the other types of job schedulers supported. For instance, for Slurm the template file should be named `./batchtools.slurm.tmpl` or `~/.batchtools.slurm.tmpl` in order for
```
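Since the closing context line points at template-file naming (`./batchtools.slurm.tmpl`), a minimal sketch of such a brew-style template may help. This is a config fragment, not runnable on its own; the placeholder names (`job.name`, `log.file`, `resources`, `uri`) follow the usual batchtools template conventions but should be checked against the batchtools documentation for your scheduler:

```sh
#!/bin/bash
## Hypothetical ./batchtools.slurm.tmpl sketch; the <%= ... %>
## expressions are brew placeholders that batchtools fills in
## when it submits each job to Slurm.
#SBATCH --job-name=<%= job.name %>
#SBATCH --output=<%= log.file %>
#SBATCH --cpus-per-task=<%= resources$ncpus %>
#SBATCH --mem=<%= resources$memory %>

Rscript -e 'batchtools::doJobCollection("<%= uri %>")'
```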