
Commit fdd45e6

Merge pull request #207 from daisybio/development
v1.3.0
2 parents: 7c18391 + d389914


62 files changed: +3958 additions, -3972 deletions

.github/workflows/labeler.yml

Lines changed: 1 addition & 1 deletion
@@ -13,6 +13,6 @@ jobs:
         uses: actions/checkout@v4

       - name: Run Labeler
-        uses: crazy-max/ghaction-github-labeler@v5.2.0
+        uses: crazy-max/ghaction-github-labeler@v5.3.0
         with:
           skip-delete: true

.github/workflows/publish_docs.yml

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ jobs:
       - name: Setup Python
         uses: actions/setup-python@v5
         with:
-          python-version: "3.11"
+          python-version: "3.12"

       - name: Install pip
         run: |

.github/workflows/run_tests.yml

Lines changed: 10 additions & 12 deletions
@@ -16,14 +16,12 @@ jobs:
       fail-fast: false
       matrix:
         include:
-          - { python-version: "3.11", os: ubuntu-latest, session: "pre-commit" }
-          - { python-version: "3.11", os: ubuntu-latest, session: "safety" }
-          - { python-version: "3.11", os: ubuntu-latest, session: "mypy" }
-          - { python-version: "3.11", os: ubuntu-latest, session: "tests" }
-          - { python-version: "3.11", os: windows-latest, session: "tests" }
-          - { python-version: "3.11", os: ubuntu-latest, session: "typeguard" }
-          - { python-version: "3.11", os: ubuntu-latest, session: "xdoctest" }
-          - { python-version: "3.11", os: ubuntu-latest, session: "docs-build" }
+          - { python-version: "3.12", os: ubuntu-latest, session: "pre-commit" }
+          - { python-version: "3.12", os: ubuntu-latest, session: "mypy" }
+          - { python-version: "3.12", os: ubuntu-latest, session: "tests" }
+          - { python-version: "3.12", os: windows-latest, session: "typeguard" }
+          - { python-version: "3.12", os: ubuntu-latest, session: "xdoctest" }
+          - { python-version: "3.12", os: ubuntu-latest, session: "docs-build" }

     env:
       NOXSESSION: ${{ matrix.session }}

@@ -66,7 +64,7 @@ jobs:
           print("::set-output name=result::{}".format(result))

       - name: Restore pre-commit cache
-        uses: actions/[email protected].2
+        uses: actions/[email protected].3
         if: matrix.session == 'pre-commit'
         with:
           path: ~/.cache/pre-commit

@@ -99,10 +97,10 @@ jobs:
       - name: Check out the repository
         uses: actions/checkout@v4

-      - name: Set up Python 3.11
+      - name: Set up Python 3.12
         uses: actions/setup-python@v5
         with:
-          python-version: 3.11
+          python-version: 3.12

       - name: Install Poetry
         run: |

@@ -129,6 +127,6 @@ jobs:
         run: nox --force-color --session=coverage -- xml -i

       - name: Upload coverage report
-        uses: codecov/[email protected].0
+        uses: codecov/[email protected].2
         with:
           token: ${{ secrets.CODECOV_TOKEN }}

.github/workflows/safety_scan.yml

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+name: Safety Action
+
+on:
+  push: # Run on every push to any branch
+  pull_request: # Run on new pull requests
+
+jobs:
+  security:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@main
+      - name: Run Safety CLI to check for vulnerabilities
+        uses: pyupio/safety-action@v1
+        with:
+          api-key: ${{ secrets.SAFETY_API_KEY }}

README.md

Lines changed: 8 additions & 4 deletions
@@ -16,7 +16,7 @@ Focus on Innovating Your Models — DrEval Handles the Rest!

 By contributing your model to the DrEval catalog, you can increase your work's exposure, reusability, and transferability.

-![DrEval](assets/dreval.png)
+![DrEval](docs/_static/img/overview.png)

 Use DrEval to Build Drug Response Models That Have an Impact

@@ -82,7 +82,7 @@ results/my_first_run/LCO
 You can visualize them using

 ```bash
-python create_report.py --run_id my_first_run
+python create_report.py --run_id my_first_run --dataset GDSC2
 ```

 This will create an index.html file which you can open in your webbrowser.

@@ -91,7 +91,7 @@ You can also run a drug response experiment using Python:

 ```python

-from drevalpy import drug_response_experiment
+from drevalpy.experiment import drug_response_experiment

 drug_response_experiment(
     models=["MultiOmicsNeuralNetwork"],

@@ -106,9 +106,13 @@ drug_response_experiment(

 We recommend the use of our nextflow pipeline for computational demanding runs and for improved reproducibility. No knowledge of nextflow is required to run it. The nextflow pipeline is available here: [nf-core-drugresponseeval](https://github.com/JudithBernett/nf-core-drugresponseeval).

+## Example Report
+
+[Browse our benchmark results here.](https://dilis-lab.github.io/drevalpy-report/)
+
 ## Contact

 Main developers:

 - [Judith Bernett](mailto:[email protected]), [Data Science in Systems Biology](https://www.mls.ls.tum.de/daisybio/startseite/), TUM
-- [Pascal Iversen](mailto:[email protected]), [Data Integration in the Life Sciences](https://www.mi.fu-berlin.de/inf/groups/ag-dilis/index.html), FU Berlin, Hasso Plattner Institute
+- [Pascal Iversen](mailto:[email protected]), [Data Integration in the Life Sciences](https://www.mi.fu-berlin.de/w/DILIS/WebHome), FU Berlin, Hasso Plattner Institute
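
For readers updating their own scripts, a minimal sketch of the renamed Python entry point touched by this README change. Only the `from drevalpy.experiment import drug_response_experiment` import and the `models=["MultiOmicsNeuralNetwork"]` argument appear in the diff; the `run_id` and `test_mode` keywords below are assumptions inferred from the `results/my_first_run/LCO` path mentioned in the README, not confirmed by this commit.

    # Sketch only, not part of this commit: the import now comes from the
    # drevalpy.experiment submodule instead of the package root.
    from drevalpy.experiment import drug_response_experiment

    drug_response_experiment(
        models=["MultiOmicsNeuralNetwork"],  # model name taken from the README diff
        run_id="my_first_run",               # assumed keyword, mirrors results/my_first_run/
        test_mode="LCO",                     # assumed keyword, mirrors the LCO results folder
    )

Check the drevalpy documentation for the full, authoritative signature before relying on the keyword names above.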

README.rst

Lines changed: 9 additions & 0 deletions
@@ -31,6 +31,15 @@ DrEvalPy: Python Cancer Cell Line Drug Response Prediction Suite
    :target: https://github.com/psf/black
    :alt: Black

+.. image:: _static/img/overview.png
+   :align: center
+   :width: 80%
+   :alt: Overview of the DrEval framework. Via input options, implemented state-of-the-art models can be compared against baselines of varying complexity. We address obstacles to progress in the field at each point in our pipeline: Our framework is available on PyPI and nf-core and we follow FAIReR standards for optimal reproducibility. DrEval is easily extendable as demonstrated here with an implementation of a proteomics-based random forest. Custom viability data can be preprocessed with CurveCurator, leading to more consistent data and metrics. DrEval supports five widely used datasets with application-aware train/test splits that enable detecting weak generalization. Models are free to use provided cell line- and drug features or custom ones. The pipeline supports randomization-based ablation studies and performs robust hyperparameter tuning for all models. Evaluation is conducted using meaningful, bias-resistant metrics to avoid inflated results from artifacts such as Simpson’s paradox. All results are compiled into an interactive HTML report.
+
+
+Overview
+========
+
 Focus on Innovating Your Models — DrEval Handles the Rest!
 - DrEval is a toolkit that ensures drug response prediction evaluations are statistically sound, biologically meaningful, and reproducible.
 - Focus on model innovation while using our automated standardized evaluation protocols and preprocessing workflows.
assets/dreval.png

-1.65 MB
Binary file not shown.
