DynaCLR_V2 Submission #240
Merged
Conversation
Squashed commit messages:

* caching dataloader
* caching data module
* black
* ruff
* Bump torch to 2.4.1 (#174)
* update torch >2.4.1
* black
* ruff
* adding timeout to ram_dataloader
* bandaid to cached dataloader
* fixing the dataloader using torch collate_fn
* replacing dictionary with single array
* loading prior to epoch 0
* Revert "replacing dictionary with single array" (reverts commit 8c13f49)
* using multiprocessing manager
* add sharded distributed sampler
* add example script for ddp caching
* format and lint
* adding the custom distributed sampler to hcs_ram.py
* adding sampler to val/train dataloader
* fix divisibility of the last shard
* hcs_ram format and lint
* data module that only crops and does not collate
* wip: execute transforms on the GPU
* path for if not ddp
* fix randomness in inversion transform
* add option to pop the normalization metadata
* move gpu transform definition back to data module
* add tiled crop transform for validation
* add stack channel transform for gpu augmentation
* fix typing
* collate before sending to gpu
* inherit gpu transforms for livecell dataset
* update fcmae engine to apply per-dataset augmentations
* format and lint hcs_ram
* fix abc type hint
* update docstring style
* disable grad for validation transforms
* improve sample image logging in fcmae
* fix dataset length when batch size is larger than the dataset
* fix docstring
* add option to disable normalization metadata
* inherit gpu transform for ctmc
* remove duplicate method override
* update docstring for ctmc
* allow skipping caching for large datasets
* make the fcmae module compatible with image translation
* remove prototype implementation
* fix import path
* Arbitrary prediction time transforms (#209)
* fix spelling in docstring and comment
* add batched zoom transform for tta
* add standalone lightning module for arbitrary TTA
* fix composition of different zoom factors
* add docstrings
* wip: segmentation module
* avoid casting
* update import path from iohub
* make integer array in fixture
* labels fixture
* test segmentation metrics modules
* less strings
* test non-empty
* select which wells to include in fit (#205)
* make well selection a mixin
* wip: mmap cache data module
* support exclusion of FOVs
* wip: precompute normalization
* add augmentations benchmark
* fix cpu threads default
* fix probability (affects cpu results)
* disable metadata tracking
* fix non-distributed initialization
* refactor transforms into submodules
* wip: bootstrap and distillation
* wip: balance distillation loss
* re-define cropping transforms
* wip: joint only
* redefine random flip dict transform
* cell classification data module
* supervised cell classifier
* do not import type hints at runtime
* update docstring
* backwards compatible import path
* fix annotations
* fix style
* fix dice score import
* fix dice score parameters
* apply formatting to exercise
* fix labels data type
* fix labels input shape

Co-authored-by: Eduardo Hirata-Miyasaki <[email protected]>
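The "add sharded distributed sampler" and "fix divisibility of the last shard" commits above suggest the following idea: in DDP caching, each rank iterates a contiguous shard of dataset indices, so a per-process RAM cache only has to hold its own shard. This is a minimal, framework-free sketch of that idea; the class and attribute names are illustrative assumptions, not the actual implementation in this repository.

```python
import math


class ShardedSampler:
    """Sketch of a sharded sampler: each rank (replica) yields a contiguous
    shard of dataset indices. The last shard is clipped when the dataset
    size is not divisible by the number of replicas, so every index is
    yielded exactly once across all ranks. Names are hypothetical.
    """

    def __init__(self, dataset_len: int, num_replicas: int, rank: int):
        if not 0 <= rank < num_replicas:
            raise ValueError("rank must be in [0, num_replicas)")
        shard_size = math.ceil(dataset_len / num_replicas)
        self.start = rank * shard_size
        # Clip the last shard instead of padding past the dataset end.
        self.stop = min(self.start + shard_size, dataset_len)

    def __iter__(self):
        return iter(range(self.start, self.stop))

    def __len__(self):
        return self.stop - self.start
```

For example, with a dataset of length 10 and 3 replicas, the ranks receive 4, 4, and 2 indices respectively, covering all 10 samples without overlap.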
edyoshikun commented May 16, 2025
ziw-liu reviewed Jun 16, 2025 (×8)
This was referenced Jun 16, 2025
Also need #260.
* plot teacher model accuracy
* use a fixed ylimit
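The two commits above (plotting teacher model accuracy with a fixed y-limit) amount to something like the following matplotlib sketch. The function name and axis labels are assumptions for illustration, not the actual code; fixing the y-limit instead of autoscaling keeps curves from different runs visually comparable.

```python
import matplotlib

matplotlib.use("Agg")  # headless backend for scripts/CI
import matplotlib.pyplot as plt


def plot_teacher_accuracy(epochs, accuracy):
    """Plot accuracy over epochs with a fixed y-limit (hypothetical helper)."""
    fig, ax = plt.subplots()
    ax.plot(epochs, accuracy, marker="o")
    ax.set_xlabel("epoch")
    ax.set_ylabel("teacher model accuracy")
    ax.set_ylim(0.0, 1.0)  # fixed ylimit instead of autoscaling
    return fig, ax
```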
ziw-liu approved these changes Jun 23, 2025
This PR contains the code of our revised DynaCLR pipeline.
As discussed, the goal is to merge this to main so that the DynaCell branch can depend on some of these features. Additionally, we want these features to be ready prior to the preprint update.
TODOs:
* setup.sh
* CellFeatures)