
Commit e1ec8f3

✨ Tutorial edits

- Provide environment file to ease reproducibility.
- Add content on adapting the tutorial to your own data.

1 parent d68c76b commit e1ec8f3

File tree: 3 files changed (+66 / -29 lines)

.github/workflows/test_tutorial.yaml

Lines changed: 18 additions & 4 deletions
@@ -17,10 +17,17 @@ jobs:
           python-version: "3.11.11"
           activate-environment: claster-env
 
-      - name: Create Conda environment from YAML
+      - name: Create or update Conda environment from YAML
         shell: bash -el {0}
         run: |
-          conda env create -f environment/claster-env.yml
+          # Check if environment exists and update it if it does
+          if conda env list | grep -q "claster-env"; then
+            echo "Environment already exists, updating..."
+            conda env update -f environment/claster-env.yml --prune
+          else
+            echo "Creating new environment..."
+            conda env create -f environment/claster-env.yml
+          fi
 
       - name: Install papermill
         shell: bash -el {0}
@@ -48,10 +55,17 @@ jobs:
           python-version: "3.11.11"
           activate-environment: claster-env
 
-      - name: Create Conda environment from YAML
+      - name: Create or update Conda environment from YAML
         shell: bash -el {0}
         run: |
-          conda env create -f environment/claster-env.yml
+          # Check if environment exists and update it if it does
+          if conda env list | grep -q "claster-env"; then
+            echo "Environment already exists, updating..."
+            conda env update -f environment/claster-env.yml --prune
+          else
+            echo "Creating new environment..."
+            conda env create -f environment/claster-env.yml
+          fi
 
       - name: Install website dependencies
         shell: bash -el {0}

.gitignore

Lines changed: 0 additions & 1 deletion
@@ -29,5 +29,4 @@ targets/test_targets_*.csv
 targets/*1kbp*
 targets/training_targets_Enformer.csv
 training_targets.csv
-HiC-Pro-3.1.0
 *.log

scripts/0_Tutorial.ipynb

Lines changed: 48 additions & 24 deletions
@@ -21,7 +21,7 @@
     "\n",
     "Hi! \n",
     "\n",
-    "Welcome to this small tutorial on how to build and run CLASTER using the EIR framework. Please clone this repository (CLASTER) on your computer to start. We will guide you through the main steps of the pipeline using a subset of our data, and finish the tutorial providing guidelines on how to extend the analyses to your own datasets.\n",
+    "Welcome to this small tutorial on how to build and run CLASTER using the EIR framework. Please clone this repository (CLASTER) to your computer to start. We will guide you through the main steps required to train and predict with CLASTER, and show how to adapt the pipeline to work with your own data.\n",
     "\n",
     "### About CLASTER\n",
     "\n",
@@ -58,13 +58,14 @@
     "\n",
     ">💻 **Create an environment for the project:**\n",
     ">\n",
-    ">The following steps will be performed from the terminal, and once the environment is set up we will run everything else from this notebook.\n",
-    ">First we need to create an environment for this project, where all the required dependencies will be installed. We provide an environment file ```../environment/claster-env.yml``` to ease reproducibility and avoid conflicts between versions of different packages.\n",
-    ">If you have anaconda, the environment can be created from the terminal by typing:\n",
+    ">The following steps will be performed from the terminal; once the environment is set up, we will run everything else from this notebook.\n",
+    ">We will first create an environment for this project, where we will install all the required dependencies. To ease the process and avoid version conflicts between dependencies, we provide a working environment configuration file, ```../environment/claster-env.yml```.\n",
+    ">If you have Anaconda, the environment can be created from the yml file by typing in the terminal:\n",
     ">\n",
     ">```bash\n",
-    ">conda env create -f ../environment/claster-env.yml # Create environment from predefined yml file\n",
-    ">conda activate claster-env # Activate it"
+    ">conda env create -f ../environment/claster-env.yml # Create environment\n",
+    ">conda activate claster-env # Activate it\n",
+    ">```"
    ]
   },
   {
@@ -111,7 +112,19 @@
     "- Input arrays can be found at the folders ```inputs/landscape_arrays/test/``` and ```inputs/microC_rotated/test/```. \n",
     "- The matching target profiles are given in a tabular format and can be found in ```targets/test_targets.csv```.\n",
     "\n",
-    "> *Note: As a standard data augmentation procedure, samples were provided in their natural orientation (SampleID_forward.npy) and flipped. (SampleID_forward.npy)*"
+    "> *Note: As a standard data augmentation procedure, samples were provided in their natural orientation (SampleID_forward.npy) and flipped (SampleID_forward.npy).*\n",
+    "\n",
+    ">**How do I extend this to my dataset?**\n",
+    ">\n",
+    ">*Inputs:* \n",
+    ">\n",
+    ">We need to store all input samples in a folder, e.g. ```/inputs/```. Each sample is a numpy array named {SAMPLE_ID}.npy, of shape (#tracks, sequence length). In our case, #tracks = 4 (ATAC, H3K4me3, H3K27ac, H3K27me3) and sequence length = 10001 (bins of 100bp).\n",
+    ">\n",
+    ">*Targets:*\n",
+    ">\n",
+    ">Targets are provided as a table, where:\n",
+    ">- Columns are ID + name_of_output_1, name_of_output_2, etc. In our case we called them ID, -200_ctrl, -199_ctrl, etc.\n",
+    ">- Rows correspond to the sample ID (without .npy) and the table is filled with target values. In our case these were 1kbp read averages for 401 output nodes."
    ]
   },
   {
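
For concreteness, here is a minimal sketch of how one input array and its matching targets row could be generated, following the shapes and naming described in the hunk above. The random values, the `inputs/` and `targets/targets.csv` paths, and the reuse of the tutorial's example sample ID are purely illustrative:

```python
from pathlib import Path

import numpy as np
import pandas as pd

# Illustrative shapes from the tutorial: 4 tracks x 10001 bins in, 401 targets out
n_tracks, seq_len = 4, 10001
sample_id = "ENSMUSG00000000085.16_forward"  # example ID; yours will differ

# One input sample: a (n_tracks, seq_len) array saved as {SAMPLE_ID}.npy
Path("inputs").mkdir(exist_ok=True)
landscape = np.random.rand(n_tracks, seq_len).astype(np.float32)  # placeholder signal
np.save(f"inputs/{sample_id}.npy", landscape)

# Matching targets table: an 'ID' column plus one column per output node
Path("targets").mkdir(exist_ok=True)
columns = ["ID"] + [f"{i}_ctrl" for i in range(-200, 201)]  # -200_ctrl ... 200_ctrl
row = [sample_id] + np.random.rand(len(columns) - 1).tolist()  # placeholder values
pd.DataFrame([row], columns=columns).to_csv("targets/targets.csv", index=False)
```
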
@@ -1125,30 +1138,41 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 3. How do I adapt the pipeline to my data?\n",
+    "## 3. How to adapt the pipeline to your own data\n",
+    "\n",
+    "Input and target files were already given in an eir-friendly format for this tutorial. Here we provide some guidelines on how to adapt this pipeline to train CLASTER with your own data. For more details, please have a look at the notebook ```I_Data_obtention.ipynb```. We also refer the reader to [eir.readthedocs.io](https://eir.readthedocs.io/en/latest/) for more information on how to extend the analyses to new data modalities.\n",
     "\n",
-    "Here we provide some guidelines on how to transform BigWig files into our desired, eir-friendly input and target data formats. More detailed information can be found in `I_Data_obtention.ipynb`. We refer the reader to [eir.readthedocs.io](https://eir.readthedocs.io/en/latest/) to find more extensive documentation on how to potentially extend this approach to new contexts and data modalities.\n",
+    "**You need:**\n",
+    "- BigWig files containing genome-wide chromatin mark enrichments. These can usually be found in [NCBI's GEO](https://www.ncbi.nlm.nih.gov/geo/).\n",
+    "- A gene annotations file. Gene annotations were obtained for hg19 from [gencode](https://www.gencodegenes.org/human/release_19.html). We used hg19 because the input data from the Danko lab was originally mapped to hg19.\n",
     "\n",
     "**Inputs:**\n",
+    "- Chromatin landscape inputs are provided as numpy arrays of shape (number of tracks, sequence length). \n",
+    "- In CLASTER, the number of tracks is 4 (ATAC-seq, H3K4me3, H3K27ac, H3K27me3) and the sequence length is 10001 (bins at 100bp resolution). \n",
+    "- Given a BigWig file with the enrichment of a chromatin mark, we can use the Python package `pyBigWig` to extract a numpy array with the signal enrichment inside predefined genomic boundaries, binned at the desired resolution, as follows:\n",
+    "\n",
+    "```python \n",
+    "bw = pyBigWig.open(str(data_path / bw_path), \"r\")\n",
+    "stats = bw.stats(chrom, start, end, type=\"mean\", nBins=n_input_bins)\n",
+    "bw.close()\n",
+    "stats = np.array([float(value) if value is not None else 0. for value in stats])\n",
+    "stats = np.clip(np.array(stats), 0, None) # ReLU\n",
+    "```\n",
+    "- We centered our samples at the TSS of protein-coding genes found in the gene annotations file, and used their Ensembl ID to name each input sample (e.g. `ENSMUSG00000000085.16_forward.npy`). All input samples were stored in the same folder, `inputs/landscape_arrays/`.\n",
+    "- We then just need to stack the 1D arrays for the different marks in the same region, and give the resulting array a name or ID. \n",
     "\n",
-    "- We need to store all input samples in a folder, e.g. ```/inputs/```. \n",
-    "- Each input sample is a numpy array with the name {SAMPLE_ID}.npy, of shape ($n_{tracks}$, $seqlen$). In our case, $n_{tracks}=4$ (ATAC, H3K4me3, H3K27ac, H3K27me3) and sequence length = 10001 (bins of 100bp). \n",
-    "  - We can obtain these samples using the package ```pyBigWig```. In particular, given a BigWig file with our chromatin mark:\n",
-    "  ```python\n",
-    "  bw = pyBigWig.open(str(data_path / bw_path), \"r\")\n",
-    "  stats = bw.stats(chromosome,start_coordinate,end_coordinate,type=\"mean\",nBins=n_input_bins)\n",
-    "  stats = np.array([float(value) if value is not None else 0. for value in stats])\n",
-    "  ```\n",
-    "  This would provide us a row in the input array, and we'd need to join the different marks (See `I_Data_obtention.ipynb` for details). \n",
+    "**Targets:**\n",
     "\n",
-    "- CLASTER can be extended to different numbers of tracks, but that requires editing the corresponding configuration file used to build the model. In our case, we would modify in ```input_cnn.yaml``` the value of ```first_kernel_expansion_height``` to the new value of $n_{channels}$.\n",
+    "- Targets are provided as a table, e.g. a csv file. \n",
+    "- The header contains an index as the first column ('ID') and as many columns as output targets. In CLASTER, we named them `-200_ctrl`,`-199_ctrl`,...,`199_ctrl`,`200_ctrl`, i.e. we had 401 output nodes.\n",
+    "- Rows are then named after the sample ID (without the .npy extension), and are filled with the target values matching our inputs. In CLASTER, we had EU-seq enrichment values binned at 1kbp resolution for the central 401 bins.\n",
     "\n",
-    "**Targets:**\n",
+    "- You can follow the same steps as above: \n",
+    "  - Download the BigWig file for the target genomic track.\n",
+    "  - Extract the signal inside the desired boundaries and at the desired resolution using `pyBigWig`.\n",
+    "  - Add the array as a row in the targets csv file, matching the ID of the corresponding input.\n",
     "\n",
-    "Targets are provided as table or dataframe, where:\n",
-    "- Columns are ID + name_of_output_1, name_of_output_2, etc. In our case we called them ID, -200_ctrl, -199_ctrl, etc.\n",
-    "- Rows correspond to the sample ID ({SAMPLE_ID} without .npy) and the table is filled with target values. In the manuscript these were 1kbp read averages for EU-seq provided in 401 output nodes.\n",
-    "- Hence, we can follow the same principles as above. First, load the BigWig file with the output track we want to predict, extract the values as a numpy array within the boundaries of interest, and update the values in the row corresponding to our sample (See `I_Data_obtention.ipynb` for details.)"
+    "For more advanced analyses like _in silico_ perturbations of the inputs, we refer the reader to `IV_Revisions.ipynb` (and to `III_Data_analysis.ipynb` for an earlier version of the perturbations)."
    ]
   },
   {
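
As a companion to the extraction snippet in the hunk above, here is a minimal end-to-end sketch of building one stacked input sample. The `pyBigWig` calls mirror the snippet in the diff; the `extract_track` helper, the BigWig file names, the chromosome, and the TSS coordinate are placeholders for illustration:

```python
from pathlib import Path

import numpy as np
import pyBigWig


def extract_track(bw_path, chrom, start, end, n_bins=10001):
    """Bin a BigWig signal into n_bins mean values, as in the snippet above."""
    bw = pyBigWig.open(bw_path, "r")
    stats = bw.stats(chrom, start, end, type="mean", nBins=n_bins)
    bw.close()
    stats = np.array([float(v) if v is not None else 0.0 for v in stats])
    return np.clip(stats, 0, None)  # ReLU: clamp negative values to zero


# Placeholder file names and coordinates; 10001 bins x 100bp = ~1 Mb window
marks = ["ATAC.bw", "H3K4me3.bw", "H3K27ac.bw", "H3K27me3.bw"]
chrom, tss = "chr1", 5_000_000
half_window = (10001 * 100) // 2  # 500050 bp to each side of the TSS

tracks = [extract_track(bw, chrom, tss - half_window, tss + half_window) for bw in marks]
sample = np.stack(tracks)  # shape (4, 10001): one row per chromatin mark

out_dir = Path("inputs/landscape_arrays")
out_dir.mkdir(parents=True, exist_ok=True)
np.save(out_dir / "ENSMUSG00000000085.16_forward.npy", sample)
```
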
