
Commit 9881108

Renamed build to coderbuild. Hopefully I got all references, there were quite a few
1 parent 4630da8 commit 9881108

File tree

188 files changed: +1191 -2804 lines


README.md

Lines changed: 10 additions & 10 deletions

@@ -36,22 +36,22 @@ please see the [schema description](schema/README.md).
 
 ## Building a local version
 
-The build process can be found in our [build
-directory](build/README.md). Here you can follow the instructions to
+The build process can be found in our [coderbuild
+directory](coderbuild/README.md). Here you can follow the instructions to
 build your own local copy of the data on your machine.
 
 ## Adding a new dataset
 
-We have standardized the build process so an additional dataset can be
+We have standardized the build (coderbuild) process so an additional dataset can be
 built locally or as part of the next version of coder. Here are the
 steps to follow:
 
-1. First visit the [build
-directory](build/README.md) and ensure you can build a local copy of
+1. First visit the [coderbuild
+directory](coderbuild/README.md) and ensure you can build a local copy of
 CoderData.
 
 2. Checkout this repository and create a subdirectory of the
-[build directory](build) with your own build files.
+[coderbuild directory](coderbuild) with your own build files.
 
 3. Develop your scripts to build the data files according to our
 [LinkML Schema](schema/coderdata.yaml]). This will require collecting

@@ -66,10 +66,10 @@ validator](https://linkml.io/linkml/data/validating-data) together
 with our schema file.
 
 You can use the following scripts as part of your build process:
-- [build/utils/fit_curve.py](build/utils/fit_curve.py): This script
+- [coderbuild/utils/fit_curve.py](coderbuild/utils/fit_curve.py): This script
 takes dose-response data and generates the dose-response statistics
 required by CoderData/
-- [build/utils/pubchem_retrieval.py](build/utils/pubchem_retreival.py):
+- [coderbuild/utils/pubchem_retrieval.py](coderbuild/utils/pubchem_retreival.py):
 This script retreives structure and drug synonym information
 required to populate the `Drug` table.

@@ -78,13 +78,13 @@ and arguments:
 
 | shell script | arguments | description |
 |------------------|--------------------------|---------------------|
-| `build_samples.sh` | [latest_samples] | Latest version of samples generated by coderdata build |
+| `build_samples.sh` | [latest_samples] | Latest version of samples generated by coderbuild |
 | `build_omics.sh` | [gene file] [samplefile] | This includes the `genes.csv` that was generated in the original build as well as the sample file generated above. |
 | `build_drugs.sh` | [drugfile1,drugfile2,...] | This includes a comma-delimited list of all drugs files generated from previous build |
 | `build_exp.sh`| [samplfile ] [drugfile] | sample file and drug file generated by previous scripts |
 
 5. Put the Docker container file inside the [Docker
-directory](./build/docker) with the name
+directory](./coderbuild/docker) with the name
 `Dockerfile.[datasetname]`.
 
 6. Run `build_all.py` from the root directory, which should now add in
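For a new dataset module, the four scripts from the table above run in order, each consuming files produced by the previous step. A minimal dry-run sketch of that sequence; the argument file names here are illustrative placeholders, not files shipped with the repository:

```shell
#!/bin/sh
# Hypothetical run order for the per-dataset shell scripts in the table
# above. The argument file names are illustrative placeholders only.
run() { printf '+ %s\n' "$*"; }   # dry-run wrapper: prints instead of executing

run bash build_samples.sh latest_samples.csv
run bash build_omics.sh genes.csv samples.csv
run bash build_drugs.sh drugs_1.tsv,drugs_2.tsv
run bash build_exp.sh samples.csv drugs.tsv
```

Removing the `run` wrapper executes the scripts for real; the ordering matters because each script reads output files from the one before it.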

build/build_test/test_drugs.tsv

Lines changed: 0 additions & 2495 deletions
This file was deleted.

build/docker/Dockerfile.beataml

Lines changed: 0 additions & 22 deletions
This file was deleted.

build/docker/Dockerfile.hcmi

Lines changed: 0 additions & 17 deletions
This file was deleted.

build/docker/Dockerfile.pancpdo

Lines changed: 0 additions & 27 deletions
This file was deleted.

build/docker/Dockerfile.sarcpdo

Lines changed: 0 additions & 22 deletions
This file was deleted.

build/README.md renamed to coderbuild/README.md

Lines changed: 7 additions & 7 deletions

@@ -6,7 +6,7 @@ figure below shows a brief description of the process, which is
 designed to be run serially, as new identifiers are generated as data
 are added.
 
-![Build process](coderDataBuild.jpg?raw=true "Build process")
+![CoderBuild process](coderDataBuild.jpg?raw=true "CoderBuild process")
 
 ## build_all.py script
 

@@ -37,13 +37,13 @@ It requires the following authorization tokens to be set in the local environmen
 - Build all datasets and upload to Figshare and GitHub.
 Required tokens for the following command: `SYNAPSE_AUTH_TOKEN`, `FIGSHARE_TOKEN`, `GITHUB_TOKEN`.
 ```bash
-python build/build_all.py --all --high_mem --validate --figshare --version 0.1.41 --github-username jjacobson95 --github-email [email protected]
+python coderbuild/build_all.py --all --high_mem --validate --figshare --version 0.1.41 --github-username jjacobson95 --github-email [email protected]
 ```
 
 - Build only the experiment files.
 **Note**: Preceding steps will not automatically be run. This assumes that docker images, samples, omics, and drugs were all previously built. Ensure all required tokens are set.
 ```bash
-python build/build_all.py --exp
+python coderbuild/build_all.py --exp
 ```
 
 ## build_dataset.py script

@@ -63,19 +63,19 @@ Example usage:
 
 Build the broad_sanger dataset:
 ```bash
-python build/build_dataset.py --build --dataset broad_sanger
+python coderbuild/build_dataset.py --build --dataset broad_sanger
 ```
 Build the mpnst dataset continuing from broad_sanger sample and drug IDs:
 ```bash
-python build/build_dataset.py --build --dataset mpnst --use_prev_dataset broad_sanger
+python coderbuild/build_dataset.py --build --dataset mpnst --use_prev_dataset broad_sanger
 ```
 Build run schema validation on hcmi dataset:
 ```bash
-python build/build_dataset.py --dataset hcmi --validate
+python coderbuild/build_dataset.py --dataset hcmi --validate
 ```
 Build the broad_sanger dataset but skip previously built files in "local" directory:
 ```bash
-python build/build_dataset.py --dataset broad_sanger --continue
+python coderbuild/build_dataset.py --dataset broad_sanger --continue
 ```
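Taken together, the `build_dataset.py` examples in this diff imply a chained workflow, where a later dataset reuses the sample and drug IDs of an earlier one via `--use_prev_dataset`. A dry-run sketch of that chain (the `build_ds` wrapper is illustrative, not part of the repository):

```shell
#!/bin/sh
# Sketch of chaining dataset builds so mpnst reuses broad_sanger sample
# and drug IDs, per the examples above. `build_ds` echoes each command;
# drop the `echo` to execute for real.
build_ds() { echo python coderbuild/build_dataset.py "$@"; }

build_ds --build --dataset broad_sanger
build_ds --build --dataset mpnst --use_prev_dataset broad_sanger
```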

Lines changed: 3 additions & 3 deletions

@@ -4,7 +4,7 @@ This directory builds the data for the BeatAML samples. To build and
 test this module, run the following commands from the root directory.
 
 ## Build with test data
-Build commands should be similar to every other coderdata build
+Build commands should be similar to every other coderbuild
 module.
 

@@ -13,7 +13,7 @@ First we need to build the gene table
 
 1. Build genes docker
 ```
-docker build -f build/docker/Dockerfile.genes -t genes . --build-arg HTTPS_PROXY=$HTTPS_PROXY
+docker build -f coderbuild/docker/Dockerfile.genes -t genes . --build-arg HTTPS_PROXY=$HTTPS_PROXY
 ```
 
 2. Build gene file

@@ -24,7 +24,7 @@ First we need to build the gene table
 ### Build AML data
 1. Build the Docker image:
 ```
-docker build -f build/docker/Dockerfile.beataml -t beataml . --build-arg HTTPS_PROXY=$HTTPS_PROXY
+docker build -f coderbuild/docker/Dockerfile.beataml -t beataml . --build-arg HTTPS_PROXY=$HTTPS_PROXY
 ```
 
 2. Generate new identifiers for these samples to create a
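The two image builds above (genes first, then beataml) can be sketched as a single dry-run loop under the renamed `coderbuild/` tree; the loop and the `DRY_RUN` switch are illustrative conveniences, not part of the repository:

```shell
#!/bin/sh
# Dry-run of the BeatAML image builds under the renamed coderbuild/ tree,
# in the order given above. Set DRY_RUN= (empty) to invoke docker for real.
DRY_RUN=echo
for name in genes beataml; do
    $DRY_RUN docker build -f "coderbuild/docker/Dockerfile.$name" -t "$name" . \
        --build-arg HTTPS_PROXY="$HTTPS_PROXY"
done
```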

0 commit comments
