Project Roadblock in Jupyter #7138
Unanswered
nguyen-peter
asked this question in
Q&A
Replies: 1 comment 2 replies
-
Hi @nguyen-peter, I don't see anything unusual in the code you posted. Could you please try splitting the code across several cells to see which line crashes, or start with just one data item first?
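One way to follow that suggestion before involving `CacheDataset` at all is to read the datalist JSON directly, confirm every referenced file actually exists on disk, and then try caching a single item. This is a stdlib-only sketch, not MONAI's own validation; the `"training"`/`"image"`/`"label"` keys mirror the Decathlon datalist format, and the usage paths are the ones from your snippet:

```python
import json
import os

def check_datalist(json_path, base_dir, section="training"):
    """Return the image/label paths in `section` that are missing on
    disk (an empty list means every referenced file resolves)."""
    with open(json_path) as f:
        data = json.load(f)
    missing = []
    for entry in data.get(section, []):
        for key in ("image", "label"):
            path = os.path.join(base_dir, entry[key])
            if not os.path.exists(path):
                missing.append(path)
    return missing

# Hypothetical usage with the variables from your snippet:
# missing = check_datalist(datasets, data_dir)
# print(missing)  # any path listed here would fail during caching
#
# Then try caching a single item before the full dataset, e.g.:
# one_item = load_decathlon_datalist(datasets, True, "training", data_dir)[:1]
# tiny_ds = CacheDataset(data=one_item, transform=train_transforms,
#                        cache_num=1, num_workers=0)
```

If a single item caches fine, the crash is more likely resource-related (e.g. the amount of data cached at once) than a problem with any one file.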
-
Hello,
I am trying to use MONAI's UNETR model to create an image segmentation model for CT scans, and I have been following along with this guide in JupyterHub. Currently, the block of code I am trying to run does not produce any output and causes my kernel to restart/crash, and I don't really understand why. Here's the code for reference:
from monai.data import CacheDataset, DataLoader, load_decathlon_datalist

data_dir = "/panfs/jay/groups/25/barkerfk/nguy4214/"
split_json = "dataset_0.json"
datasets = data_dir + split_json

datalist = load_decathlon_datalist(datasets, True, "training", "/panfs/jay/groups/25/barkerfk/nguy4214/")
val_files = load_decathlon_datalist(datasets, True, "validation")

train_ds = CacheDataset(
    data=datalist,
    transform=train_transforms,
    cache_num=24,
    cache_rate=1.0,
    num_workers=8,
)
train_loader = DataLoader(train_ds, batch_size=1, shuffle=True, num_workers=8, pin_memory=True)

val_ds = CacheDataset(data=val_files, transform=val_transforms, cache_num=6, cache_rate=1.0, num_workers=4)
val_loader = DataLoader(val_ds, batch_size=1, shuffle=False, num_workers=4, pin_memory=True)
The dataset/JSON that I am loading isn't very large (just over 200 lines), and the format of the JSON should be correct (I just replaced the example data with my own files). I'm not quite sure why the code isn't running.