## Conda environment name change (since v2.2.0 or 6/13/2022)
The pipeline's Conda environment names have been shortened to work around the following error:

```bash
PaddingError: Placeholder of length '80' too short in package /XXXXXXXXXXX/miniconda3/envs/
```

You need to reinstall the pipeline's Conda environment. It is recommended to do this for every pipeline version update.

```bash
$ bash scripts/uninstall_conda_env.sh
$ bash scripts/install_conda_env.sh
```

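After reinstalling, a quick sanity check is to list your Conda environments and confirm that the pipeline's environments appear (the exact environment names are printed by the install script):

```bash
# list all Conda environments; the freshly installed pipeline environments should appear here
$ conda env list
```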
## Introduction
This ChIP-Seq pipeline is based on the ENCODE (phase-3) transcription factor and histone ChIP-seq pipeline specifications (by Anshul Kundaje) in [this Google Doc](https://docs.google.com/document/d/1lG_Rd7fnYgRpSIqrIfuVlAz2dW1VaSQThzk836Db99c/edit#).
### Features
1) Make sure that you have Python >= 3.6. Caper does not work with Python 2. Install Caper and check that its version is >= 2.0.
```bash
$ python --version
$ pip install caper

# use caper version >= 2.3.0 for a new HPC feature (caper hpc submit/list/abort)
$ caper -v
```
2) Read Caper's [README](https://github.com/ENCODE-DCC/caper/blob/master/README.md) carefully to choose a backend for your system. Follow the instructions in the configuration file.
```bash
# this will overwrite the existing conf file ~/.caper/default.conf
# make a backup of it first if needed
$ caper init [YOUR_BACKEND]

# edit the conf file
$ vi ~/.caper/default.conf
```
3) Git clone this pipeline.
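A minimal sketch of the clone step (the URL is this pipeline's public GitHub repository; the target directory defaults to the repository name):

```bash
# clone the pipeline into your home directory
$ cd
$ git clone https://github.com/ENCODE-DCC/chip-seq-pipeline2
```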
4) (Optional for Conda) **DO NOT USE A SHARED CONDA. INSTALL YOUR OWN [MINICONDA3](https://docs.conda.io/en/latest/miniconda.html) AND USE IT.** Install the pipeline's Conda environments if you don't have Singularity or Docker installed on your system. We recommend using Singularity instead of Conda.
```bash
# check if you have Singularity on your system; if so, Conda is not recommended
$ singularity --version

# check that you are not using a shared Conda; if you are, delete it or remove it from your PATH
$ which conda

# change directory to pipeline's git repo
$ cd chip-seq-pipeline2

# uninstall old environments
$ bash scripts/uninstall_conda_env.sh

# install new environments; you need to run this for every pipeline version update.
# the installer may be killed if you run it on a login node.
# it's recommended to get an interactive node and run it there.
$ bash scripts/install_conda_env.sh
```
According to your chosen platform for Caper, run Caper directly or submit a Caper command line to the cluster. You can choose other environments like `--singularity` or `--docker` instead of `--conda`, but you must define one of them.

PLEASE READ [CAPER'S README](https://github.com/ENCODE-DCC/caper) VERY CAREFULLY BEFORE RUNNING ANY PIPELINES. YOU WILL NEED TO CORRECTLY CONFIGURE CAPER FIRST. These are just example command lines.

```bash
# Run it locally with Conda (DO NOT ACTIVATE PIPELINE'S CONDA ENVIRONMENT)
$ caper run chip.wdl -i https://storage.googleapis.com/encode-pipeline-test-samples/encode-chip-seq-pipeline/ENCSR000DYI_subsampled_chr19_only.json --conda
```
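For HPC backends, Caper >= 2.3.0 can also submit the pipeline itself as a leader job instead of running it on a login node. A sketch, assuming a Singularity setup; the input JSON variable and the leader job name are placeholders:

```bash
# Or submit it as a leader job to your HPC's job engine (e.g. SLURM)
# do not run the pipeline directly on a login node
$ caper hpc submit chip.wdl -i "${INPUT_JSON}" --singularity --leader-job-name ANY_GOOD_LEADER_JOB_NAME

# monitor and abort leader jobs
$ caper hpc list
$ caper hpc abort [JOB_ID]
```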
# How to build genome database
2. Choose `GENOME` from `hg19`, `hg38`, `mm9` and `mm10` and specify a destination directory. This will take several hours. We recommend not running this installer on a login node of your cluster; it will take >8 GB of memory and >2 h of time.
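Assuming a builder script under `scripts/` (the script name and the destination path below are illustrative; check the repository for the exact name), the step above would look like:

```bash
# build the hg38 genome database in a destination directory with enough free space
$ bash scripts/build_genome_data.sh hg38 /your/destination/genome_data
```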