Parallelize prediction #22
Merged
15 commits
85ae0e3 Convert tif to n5 file format. (schilling40)
51cc1b9 Resize wrongly scaled cochleas (schilling40)
236f7d0 Prediction distance unet with multiple GPUs (schilling40)
8f2dfb1 Fixed argument parsing (schilling40)
40f0813 Fixed requirement of SLURM_ARRAY_TASK_ID (schilling40)
41067ef Fix error in cochlea rescaling (constantinpape)
230b806 Fixed missing import (schilling40)
12799b1 Script for counting cells in segmentation (schilling40)
e185ee1 Calculation of mean and standard deviation as preprocessing (schilling40)
5473462 Extract sub-volume of n5 file (schilling40)
d26f2c6 Small changes and documentation (schilling40)
9e4588a Support of S3 bucket for block extraction (schilling40)
a6ff2d2 Distance U-Net prediction with CPU (schilling40)
9d3a61f Fixed issue with chunk reference (schilling40)
5a5207d Improved style (schilling40)
New file (+40 lines): tif to n5 conversion script.

```python
import os
import sys
import argparse

import imageio.v3 as imageio
import pybdv


def main(input_path, output_path):
    """Convert a tif file to n5 format.

    If no output_path is supplied, the output file is created in the same
    directory as the input.

    :param str input_path: Input tif
    :param str output_path: Output path for n5 format
    """
    if not os.path.isfile(input_path):
        sys.exit("Input file does not exist.")

    if input_path.split(".")[-1] not in ["TIFF", "TIF", "tiff", "tif"]:
        sys.exit("Input file must be in tif format.")

    basename = "".join(input_path.split("/")[-1].split(".")[:-1])
    input_dir = os.path.abspath(input_path.split(basename)[0])

    if output_path == "":
        output_path = os.path.join(input_dir, basename + ".n5")
    img = imageio.imread(input_path)
    pybdv.make_bdv(img, output_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Script to transform file from tif into n5 format.")

    parser.add_argument("input", type=str, help="Input file")
    parser.add_argument("-o", "--output", type=str, default="",
                        help="Output file. Default: <basename>.n5")

    args = parser.parse_args()
    main(args.input, args.output)
```
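The script above derives the default output path by splitting the input path as a string. A minimal sketch of the same defaulting behavior using `os.path` helpers instead (the function name `derive_output_path` is hypothetical, not part of the PR):

```python
import os


def derive_output_path(input_path, output_path=""):
    # Hypothetical helper: reproduce the script's defaulting logic
    # (<input_dir>/<basename>.n5) with os.path functions instead of
    # manual "/" and "." splitting.
    if output_path == "":
        basename, _ = os.path.splitext(os.path.basename(input_path))
        input_dir = os.path.abspath(os.path.dirname(input_path))
        output_path = os.path.join(input_dir, basename + ".n5")
    return output_path


print(derive_output_path("/data/cochlea.tif"))  # /data/cochlea.n5 (on POSIX paths)
```

An explicit output path is returned unchanged, matching the behavior of the `-o` flag.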
New file (+110 lines): ROI block extraction script.

```python
import os
import sys
import argparse

import numpy as np
import s3fs
import z5py
import zarr

"""
This script extracts data around an input center coordinate in a given ROI halo.

Support for using an S3 bucket is currently limited to the cochlea-lightsheet
bucket with the endpoint url https://s3.fs.gwdg.de.
If more use cases appear, the script will be generalized.
Usage requires exporting the access key and the secret access key in the
environment before executing the script. Run the following commands in the
shell of your choice, or add them to your ~/.bashrc:
export AWS_ACCESS_KEY_ID=<access key>
export AWS_SECRET_ACCESS_KEY=<secret access key>
"""


def main(input_file, output_dir, input_key, resolution, coords, roi_halo, s3):
    """Extract a 3D block around a center coordinate.

    :param str input_file: File path to input volume in n5 format
    :param str output_dir: Output directory for saving the cropped n5 file as <basename>_crop_<coords>.n5
    :param str input_key: Key for accessing the volume in n5 format, e.g. 'setup0/s0'
    :param float resolution: Resolution of the input data in micrometer
    :param str coords: Center coordinates of the extracted 3D volume in format 'x,y,z'
    :param str roi_halo: ROI halo of the extracted 3D volume in format 'x,y,z'
    :param bool s3: Flag for using an S3 bucket
    """
    coords = [int(r) for r in coords.split(",")]
    roi_halo = [int(r) for r in roi_halo.split(",")]

    coord_string = "-".join(str(c) for c in coords)

    # Dimensions are reversed relative to the MoBIE view: (x, y, z) -> (z, y, x).
    coords.reverse()
    roi_halo.reverse()

    input_content = list(filter(None, input_file.split("/")))

    if s3:
        basename = input_content[0] + "_" + input_content[-1].split(".")[0]
    else:
        basename = "".join(input_content[-1].split(".")[:-1])

    input_dir = os.path.abspath(input_file.split(basename)[0])

    if output_dir == "":
        output_dir = input_dir

    output_file = os.path.join(output_dir, basename + "_crop_" + coord_string + ".n5")

    # Convert the center from micrometer to voxel coordinates.
    coords = np.round(np.array(coords) / resolution).astype(np.int32)

    roi = tuple(slice(co - rh, co + rh) for co, rh in zip(coords, roi_halo))

    if s3:
        # Define the S3 bucket and the OME-Zarr dataset path.
        bucket_name = "cochlea-lightsheet"
        zarr_path = f"{bucket_name}/{input_file}"

        # Create an S3 filesystem.
        fs = s3fs.S3FileSystem(
            client_kwargs={"endpoint_url": "https://s3.fs.gwdg.de"},
            anon=False
        )

        if not fs.exists(zarr_path):
            sys.exit(f"Error: {zarr_path} does not exist!")

        # Open the OME-Zarr dataset.
        store = zarr.storage.FSStore(zarr_path, fs=fs)
        print(f"Opening file {zarr_path} from the S3 bucket.")

        f = zarr.open(store, mode="r")
        raw = f[input_key][roi]
    else:
        with z5py.File(input_file, "r") as f:
            raw = f[input_key][roi]

    with z5py.File(output_file, "w") as f_out:
        f_out.create_dataset("raw", data=raw, compression="gzip")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Script to extract a region of interest (ROI) block around a center coordinate.")

    parser.add_argument("input", type=str, help="Input file in n5 format.")
    parser.add_argument("-o", "--output", type=str, default="", help="Output directory")
    parser.add_argument("-c", "--coord", type=str, required=True,
                        help="3D coordinate in format 'x,y,z' as center of the extracted block.")
    parser.add_argument("-k", "--input_key", type=str, default="setup0/timepoint0/s0",
                        help="Input key for data in the input file")
    parser.add_argument("-r", "--resolution", type=float, default=0.38,
                        help="Resolution of the input in micrometer")
    parser.add_argument("--roi_halo", type=str, default="128,128,64",
                        help="ROI halo around the center coordinate in format 'x,y,z'")
    parser.add_argument("--s3", action="store_true", help="Use S3 bucket")

    args = parser.parse_args()
    main(args.input, args.output, args.input_key, args.resolution, args.coord, args.roi_halo, args.s3)
```
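The micrometer-to-voxel conversion and halo slicing in the script above can be checked in isolation. A minimal sketch with hypothetical values (the center and halo are made up; 0.38 micrometer is the script's default resolution):

```python
import numpy as np

resolution = 0.38          # micrometer per voxel (script default)
coords = [100, 200, 300]   # hypothetical center in (x, y, z), micrometer
roi_halo = [128, 128, 64]  # hypothetical halo in voxels, (x, y, z)

# Reverse to (z, y, x) axis order, as the script does for MoBIE data.
coords = coords[::-1]
roi_halo = roi_halo[::-1]

# Convert the center to voxel indices, then build slices with a halo
# of roi_halo voxels on each side per axis.
voxel_coords = np.round(np.array(coords) / resolution).astype(np.int32)
roi = tuple(slice(int(c - h), int(c + h)) for c, h in zip(voxel_coords, roi_halo))
print(roi)  # (slice(725, 853), slice(398, 654), slice(135, 391))
```

Each extracted block therefore spans 2 * roi_halo voxels per axis around the rounded voxel center.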
New file (+32 lines): cell counting script.

```python
import argparse
import os
import sys

from elf.parallel import unique
from elf.io import open_file

sys.path.append("../..")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-o", "--output_folder", type=str, required=True,
                        help="Output directory containing segmentation.zarr")
    parser.add_argument("-m", "--min_size", type=int, default=1000,
                        help="Minimal number of voxels for an object to be counted")
    args = parser.parse_args()

    seg_path = os.path.join(args.output_folder, "segmentation.zarr")
    seg_key = "segmentation"

    file = open_file(seg_path, mode="r")
    dataset = file[seg_key]

    ids, counts = unique(dataset, return_counts=True)

    # Only objects with more than min_size voxels are counted.
    counts = counts[counts > args.min_size]
    print("Number of objects:", len(counts))


if __name__ == "__main__":
    main()
```
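The counting logic above can be illustrated on a toy label array. A minimal sketch using `np.unique` in place of `elf.parallel.unique` (the toy volume and `min_size` value are made up); note that in this sketch the background label 0 also passes the size filter when it is large enough:

```python
import numpy as np

# Toy segmentation: a 4x4 label image instead of segmentation.zarr.
seg = np.zeros((4, 4), dtype=np.uint32)
seg[0, :3] = 1   # object 1: 3 voxels
seg[1:3, :] = 2  # object 2: 8 voxels
# remaining background (label 0): 5 voxels

ids, counts = np.unique(seg, return_counts=True)

# Keep only labels with more than min_size voxels, mirroring the script.
min_size = 4
counts = counts[counts > min_size]
print("Number of objects:", len(counts))  # 2 (background and object 2)
```

In a real segmentation one would typically exclude label 0 before counting, otherwise the large background region is reported as an object.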