
Commit aa3b060

Replace "Channel" with the lowercase "channel" (#686)
* Replace "Channel" with the lowercase "channel". The uppercase version no longer provides autocomplete for the Nextflow language server. Also fix markdown heading ordering and heading levels.
1 parent: 7e594a6
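
For context: in Nextflow, `channel` is the lowercase alias of the `Channel` factory, so both forms behave identically at runtime; the change only affects editor tooling. A minimal sketch of the preferred style (a hypothetical snippet, not a file from this commit):

```groovy
// example.nf -- hypothetical illustration, not part of this commit.
// `channel` and `Channel` resolve to the same factory at runtime, but only
// the lowercase form is autocompleted by the Nextflow language server.
workflow {
    channel.of(1, 2, 3)
        .map { n -> n * n } // square each value
        .view()             // emits 1, 4, 9
}
```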

115 files changed: +507 additions, −507 deletions


docs/archive/advanced/groovy.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -15,7 +15,7 @@ params.input = "https://raw.githubusercontent.com/nf-core/test-datasets/rnaseq/s
 
 workflow {
 
-    Channel.fromPath(params.input)
+    channel.fromPath(params.input)
         .splitCsv(header: true)
         .view()
 }
@@ -26,7 +26,7 @@ Let's write a small closure to parse each row into the now-familiar map + files
 ```groovy linenums="5" hl_lines="5-8"
 workflow {
 
-    samples = Channel.fromPath(params.input)
+    samples = channel.fromPath(params.input)
         .splitCsv(header: true)
         .map { row ->
             def meta = row.subMap('sample', 'strandedness')
@@ -103,7 +103,7 @@ This is now able to be passed through to our FASTP process:
 ```groovy linenums="5" hl_lines="15 17"
 workflow {
 
-    samples = Channel.fromPath(params.input)
+    samples = channel.fromPath(params.input)
         .splitCsv(header: true)
         .map { row ->
             def (readKeys, metaKeys) = row.keySet().split { key -> key =~ /^fastq/ }
@@ -166,7 +166,7 @@ Now let's create a second entrypoint to quickly pass these JSON files through so
 
 ```groovy linenums="5"
 workflow Jsontest {
-    Channel.fromPath("results/fastp/json/*.json")
+    channel.fromPath("results/fastp/json/*.json")
         .view()
 }
 ```
@@ -185,7 +185,7 @@ def getFilteringResult(json_file) {
 }
 
 workflow Jsontest {
-    Channel.fromPath("results/fastp/json/*.json")
+    channel.fromPath("results/fastp/json/*.json")
         .view()
 }
 ```
````

docs/archive/advanced/grouping.md

Lines changed: 9 additions & 9 deletions
````diff
@@ -12,7 +12,7 @@ cd grouping
 
 ```groovy linenums="1"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header:true )
         .map { row ->
             def meta = [id:row.id, repeat:row.repeat, type:row.type]
@@ -32,7 +32,7 @@ The first change we're going to make is to correct some repetitive code that we'
 
 ```groovy linenums="1" hl_lines="5"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header:true )
         .map { row ->
             def meta = row.subMap('id', 'repeat', 'type')
@@ -64,7 +64,7 @@ workflow {
 
 ```groovy linenums="1"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header:true )
         .map { row ->
             def meta = row.subMap('id', 'repeat', 'type')
@@ -100,7 +100,7 @@ workflow {
 
 ```groovy linenums="1"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header:true )
         .map { row ->
             def meta = row.subMap('id', 'repeat', 'type')
@@ -146,9 +146,9 @@ process MapReads {
 }
 
 workflow {
-    reference = Channel.fromPath("data/genome.fasta").first()
+    reference = channel.fromPath("data/genome.fasta").first()
 
-    samples = Channel.fromPath("data/samplesheet.csv")
+    samples = channel.fromPath("data/samplesheet.csv")
         .splitCsv( header:true )
         .map { row ->
             def meta = row.subMap('id', 'repeat', 'type')
@@ -197,9 +197,9 @@ mapped_reads.view()
 
 ```groovy linenums="13" hl_lines="16-20"
 workflow {
-    reference = Channel.fromPath("data/genome.fasta").first()
+    reference = channel.fromPath("data/genome.fasta").first()
 
-    samples = Channel.fromPath("data/samplesheet.csv")
+    samples = channel.fromPath("data/samplesheet.csv")
         .splitCsv( header:true )
         .map { row ->
             def meta = row.subMap('id', 'repeat', 'type')
@@ -263,7 +263,7 @@ The previous exercise demonstrated the fan-in approach using `groupTuple` and `g
 We can take an existing bed file, for example and turn it into a channel of Maps.
 
 ```groovy linenums="26"
-intervals = Channel.fromPath("data/intervals.bed")
+intervals = channel.fromPath("data/intervals.bed")
     .splitCsv(header: ['chr', 'start', 'stop', 'name'], sep: '\t')
     .collectFile { entry -> ["${entry.name}.bed", entry*.value.join("\t")] }
     .view()
````

docs/archive/advanced/metadata.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -52,7 +52,7 @@ A first pass attempt at pulling these files into Nextflow might use the `fromFil
 
 ```groovy linenums="1" hl_lines="2"
 workflow {
-    Channel.fromFilePairs("data/reads/*/*_R{1,2}.fastq.gz")
+    channel.fromFilePairs("data/reads/*/*_R{1,2}.fastq.gz")
         .view()
 }
 ```
@@ -67,7 +67,7 @@ We can use the `tokenize` method to split our id. To sanity-check, I just pipe t
 
 ```groovy linenums="1" hl_lines="3-5"
 workflow {
-    Channel.fromFilePairs("data/reads/*/*_R{1,2}.fastq.gz")
+    channel.fromFilePairs("data/reads/*/*_R{1,2}.fastq.gz")
         .map { id, reads ->
             id.tokenize("_")
         }
@@ -175,7 +175,7 @@ In this particular example, we know ahead of time that the treatments must be th
 
 ```groovy linenums="1" hl_lines="5"
 workflow {
-    Channel.fromFilePairs("data/reads/*/*_R{1,2}.fastq.gz")
+    channel.fromFilePairs("data/reads/*/*_R{1,2}.fastq.gz")
        .map { id, reads ->
            def (sample, replicate, type) = id.tokenize("_")
            def (treatmentFwd, treatmentRev) = reads*.parent*.name*.minus(~/treatment/)
````

docs/archive/advanced/operators.md

Lines changed: 24 additions & 24 deletions
````diff
@@ -10,7 +10,7 @@ Map is certainly the most commonly used of the operators covered here. It's a wa
 
 ```groovy linenums="1"
 workflow {
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map { num -> num * num }
         .view()
 }
@@ -31,7 +31,7 @@ Groovy is an optionally typed language, and it is possible to specify the type o
 
 ```groovy linenums="1" hl_lines="3"
 workflow {
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map { Integer num -> num * num }
         .view()
 }
@@ -45,7 +45,7 @@ If you find yourself re-using the same closure multiple times in your pipeline,
 workflow {
     def squareIt = { Integer num -> num * num }
 
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map( squareIt )
         .view()
 }
@@ -58,7 +58,7 @@ workflow {
     def squareIt = { num -> num * num }
     def addTwo = { num -> num + 2 }
 
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map( squareIt >> addTwo )
         .view()
 }
@@ -81,7 +81,7 @@ workflow {
     def squareIt = { num -> num * num }
     def addTwo = { num -> num + 2 }
 
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map( squareIt )
         .map( addTwo )
         .view()
@@ -95,7 +95,7 @@ workflow {
     def timesN = { multiplier, num -> num * multiplier }
     def timesTen = timesN.curry(10)
 
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map( timesTen )
         .view()
 }
@@ -110,7 +110,7 @@ workflow {
     def timesN = { multiplier, num -> num * multiplier }
     def timesTen = timesN.curry(10)
 
-    Channel.of( 1, 2, 3, 4, 5 )
+    channel.of( 1, 2, 3, 4, 5 )
         .map( timesTen )
         .view { value -> "Found '$value' (${value.getClass()})"}
 }
@@ -126,7 +126,7 @@ A common Nextflow pattern is for a simple samplesheet to be passed as primary in
 
 ```groovy linenums="1" hl_lines="2"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header: true )
         .view()
 }
@@ -148,7 +148,7 @@ workflow {
 
 ```groovy linenums="1" hl_lines="4-6"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header: true )
         .map { row ->
             [row.id, [file(row.fastq1), file(row.fastq2)]]
@@ -167,7 +167,7 @@ workflow {
 
 ```groovy linenums="1" hl_lines="4-7"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header: true )
         .map { row ->
             def metaMap = [id: row.id, type: row.type, repeat: row.repeat]
@@ -194,7 +194,7 @@ Using the `splitCsv` operator would give us one entry that would contain all fou
 
 ```groovy linenums="1" hl_lines="4-11"
 workflow {
-    Channel.fromPath("data/samplesheet.ugly.csv")
+    channel.fromPath("data/samplesheet.ugly.csv")
         .splitCsv( header: true )
         .multiMap { row ->
             tumor:
@@ -223,7 +223,7 @@ In the example above, the `multiMap` operator was necessary because we were supp
 
 ```groovy linenums="1" hl_lines="5-8"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv( header: true )
         .map { row -> [[id: row.id, repeat: row.repeat, type: row.type], [file(row.fastq1), file(row.fastq2)]] }
         .branch { meta, _reads ->
@@ -284,7 +284,7 @@ Certain Nextflow operators, such as `multiMap` and `branch`, return special obje
 
 ```groovy linenums="1" hl_lines="3-6"
 workflow {
-    numbers = Channel.of( 1, 2, 3, 4, 5 )
+    numbers = channel.of( 1, 2, 3, 4, 5 )
         .multiMap { num ->
             small: num
             large: num * 10
@@ -316,7 +316,7 @@ The following will be kept synchronous, allowing you to supply multiple channel
 
 ```groovy linenums="11" hl_lines="8"
 workflow {
-    numbers = Channel.of( 1, 2, 3, 4, 5 )
+    numbers = channel.of( 1, 2, 3, 4, 5 )
         .multiMap { num ->
             small: num
             large: num * 10
@@ -344,7 +344,7 @@ A common operation is to group elements from a _single_ channel where those elem
 
 ```groovy linenums="1" hl_lines="6"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv(header: true)
         .map { row ->
             def meta = [id: row.id, type: row.type]
@@ -360,7 +360,7 @@ The `groupTuple` operator allows us to combine elements that share a common key:
 
 ```groovy linenums="1" hl_lines="8"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv(header: true)
         .map { row ->
             def meta = [id: row.id, type: row.type]
@@ -379,7 +379,7 @@ Given a workflow that returns one element per sample, where we have grouped the
 
 ```groovy linenums="1"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv(header: true)
         .map { row ->
             def meta = [id: row.id, type: row.type]
@@ -405,7 +405,7 @@ If we add in a `transpose`, each repeat number is matched back to the appropriat
 
 ```groovy linenums="1" hl_lines="9"
 workflow {
-    Channel.fromPath("data/samplesheet.csv")
+    channel.fromPath("data/samplesheet.csv")
         .splitCsv(header: true)
         .map { row ->
             def meta = [id: row.id, type: row.type]
@@ -436,7 +436,7 @@ As the name suggests, the `flatMap` operator allows you to modify the elements i
 
 ```groovy linenums="1" hl_lines="5"
 workflow {
-    numbers = Channel.of(1, 2)
+    numbers = channel.of(1, 2)
 
     numbers
         .flatMap { n -> [ n, n*10, n*100 ] }
@@ -461,7 +461,7 @@ The input channel has two elements. For each element in the input channel, we re
 
 ```
 workflow {
-    numbers = Channel.of(1, 2)
+    numbers = channel.of(1, 2)
 
     numbers
         .flatMap { n -> [ n, [n*10, n*100] ] }
@@ -484,7 +484,7 @@ The input channel has two elements. For each element in the input channel, we re
 
 ```groovy linenums="1" hl_lines="3-4"
 workflow {
-    Channel.fromPath("data/datfiles/sample*/*.dat", checkIfExists: true)
+    channel.fromPath("data/datfiles/sample*/*.dat", checkIfExists: true)
         .map { myfile -> [myfile.getParent().name, myfile] }
         .groupTuple()
         .view()
@@ -519,7 +519,7 @@ The input channel has two elements. For each element in the input channel, we re
 
 ```groovy linenums="1" hl_lines="6-11"
 workflow {
-    Channel.fromPath("data/datfiles/sample*/*.dat", checkIfExists: true)
+    channel.fromPath("data/datfiles/sample*/*.dat", checkIfExists: true)
        .map { myfile -> [myfile.getParent().name, myfile] }
        .groupTuple()
        .flatMap { id, files ->
@@ -542,7 +542,7 @@ At its most basic, this operator writes the contents of the elements of a channe
 
 ```groovy linenums="1"
 workflow {
-    characters = Channel.of(
+    characters = channel.of(
         ['name': 'Jake', 'title': 'Detective'],
         ['name': 'Rosa', 'title': 'Detective'],
         ['name': 'Terry', 'title': 'Sergeant'],
@@ -632,7 +632,7 @@ process WriteBio {
 }
 
 workflow {
-    characters = Channel.of(
+    characters = channel.of(
        ['name': 'Jake', 'title': 'Detective'],
        ['name': 'Rosa', 'title': 'Detective'],
        ['name': 'Terry', 'title': 'Sergeant'],
````
