-
Describe the bug

Since the MONAI workflow only accepts a single item for image, label, and prediction, I combined the two target labels and the predictions in a dictionary with a custom transform. Unfortunately, when attempting to calculate the AUC using `monai.handlers.ROCAUC`, I encountered an error. The error message is as follows: [...]
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/handlers/utils.py", line 196, in _wrapper
ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/handlers/utils.py", line 196, in <listcomp>
ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/handlers/utils.py", line 196, in <listcomp>
ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
KeyError: 'label_clf'
(see below for the full stack trace)

Based on the error message, I suspect that the issue lies with the output transform of the ROCAUC handler. I see three possible reasons for this error:
To Reproduce
```shell
python -m monai.bundle run training \
    --meta_file configs/metadata.yaml \
    --config_file configs/train.yaml \
    --logging_file configs/logging.conf
```

Expected behavior

Environment

================================
Printing MONAI config...
================================
MONAI version: 1.1.0
Numpy version: 1.23.5
Pytorch version: 1.13.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: a2ec3752f54bfc3b40e7952234fbeb5452ed63e3
MONAI __file__: /usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/__init__.py
Optional dependencies:
Pytorch Ignite version: 0.4.11
Nibabel version: 5.0.0
scikit-image version: 0.19.3
Pillow version: 9.4.0
Tensorboard version: 2.12.0
gdown version: 4.6.4
TorchVision version: 0.14.1
tqdm version: 4.64.1
lmdb version: 1.4.0
psutil version: 5.9.4
pandas version: 1.5.3
einops version: 0.6.0
transformers version: 4.21.3
mlflow version: 2.2.1
pynrrd version: 1.0.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
================================
Printing system config...
================================
System: Darwin
Mac version: 10.16
Platform: macOS-10.16-x86_64-i386-64bit
Processor: i386
Machine: x86_64
Python version: 3.10.9
Process name: python3.10
Command: ['python', '-c', 'import monai; monai.config.print_debug_info()\n']
Open files: []
Num physical CPUs: 4
Num logical CPUs: 8
Num usable CPUs: UNKNOWN for given OS
CPU usage (%): [62.7, 1.0, 20.4, 1.8, 14.9, 1.0, 12.3, 1.3]
CPU freq. (MHz): 2400
Load avg. in last 1, 5, 15 mins (%): [31.2, 43.6, 41.8]
Disk usage (%): 82.4
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 16.0
Available memory (GB): 5.0
Used memory (GB): 8.4
================================
Printing GPU config...
================================
Num GPUs: 0
Has CUDA: False
cuDNN enabled: False

Additional context

```shell
python -m monai.bundle run training \
    --meta_file configs/metadata.yaml \
    --config_file configs/train.yaml \
    --logging_file configs/logging.conf
```
2023-03-12 16:38:30,461 - INFO - --- input summary of monai.bundle.scripts.run ---
2023-03-12 16:38:30,461 - INFO - > runner_id: 'training'
2023-03-12 16:38:30,462 - INFO - > meta_file: 'configs/metadata.yaml'
2023-03-12 16:38:30,462 - INFO - > config_file: 'configs/train.yaml'
2023-03-12 16:38:30,462 - INFO - > logging_file: 'configs/logging.conf'
2023-03-12 16:38:30,462 - INFO - ---
2023-03-12 16:38:30,462 - INFO - set logging properties based on config: configs/logging.conf.
2023-03-12 16:38:30,579 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event IterationEvents.MODEL_COMPLETED
2023-03-12 16:38:30,579 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event IterationEvents.MODEL_COMPLETED
2023-03-12 16:38:30,579 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event Events.EPOCH_STARTED
2023-03-12 16:38:30,580 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event Events.ITERATION_COMPLETED
2023-03-12 16:38:30,580 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,580 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,580 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,580 - ignite.engine.engine.SupervisedEvaluator - DEBUG - Added handler for event Events.EXCEPTION_RAISED
2023-03-12 16:38:30,582 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event IterationEvents.MODEL_COMPLETED
2023-03-12 16:38:30,582 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event IterationEvents.MODEL_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.EPOCH_STARTED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.ITERATION_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.ITERATION_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.EPOCH_COMPLETED
2023-03-12 16:38:30,583 - ignite.engine.engine.SupervisedTrainer - DEBUG - Added handler for event Events.EXCEPTION_RAISED
torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
2023-03-12 16:38:30,584 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run resuming from iteration 0, epoch 0 until 2 epochs
2023-03-12 16:38:30,584 - ignite.engine.engine.SupervisedTrainer - DEBUG - 0 | 0, Firing handlers for event Events.STARTED
2023-03-12 16:38:30,584 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 0, Firing handlers for event Events.EPOCH_STARTED
2023-03-12 16:38:30,610 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 0, Firing handlers for event Events.GET_BATCH_STARTED
Setting affine, but the applied meta contains an affine. This will be overwritten.
2023-03-12 16:38:30,655 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 0, Firing handlers for event Events.GET_BATCH_COMPLETED
2023-03-12 16:38:30,656 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event Events.ITERATION_STARTED
User provided device_type of 'cuda', but CUDA is not available. Disabling
2023-03-12 16:38:30,699 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event IterationEvents.FORWARD_COMPLETED
2023-03-12 16:38:30,760 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event IterationEvents.LOSS_COMPLETED
2023-03-12 16:38:30,874 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event IterationEvents.BACKWARD_COMPLETED
2023-03-12 16:38:30,883 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event IterationEvents.MODEL_COMPLETED
2023-03-12 16:38:30,885 - PrintKeys - INFO - :: Key: image :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,886 - PrintKeys - INFO - :: Key: loss :: Item: <class 'float'>
2023-03-12 16:38:30,886 - PrintKeys - INFO - :: Key: pred_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,886 - PrintKeys - INFO - :: Key: pred_clf :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,886 - PrintKeys - INFO - :: Key: label_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,886 - PrintKeys - INFO - :: Key: label_clf :: Item: <class 'torch.Tensor'>
2023-03-12 16:38:30,894 - PrintKeys - INFO - :: Key: image :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,894 - PrintKeys - INFO - :: Key: loss :: Item: <class 'float'>
2023-03-12 16:38:30,894 - PrintKeys - INFO - :: Key: pred_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,894 - PrintKeys - INFO - :: Key: pred_clf :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,894 - PrintKeys - INFO - :: Key: label_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,894 - PrintKeys - INFO - :: Key: label_clf :: Item: <class 'torch.Tensor'>
2023-03-12 16:38:30,900 - PrintKeys - INFO - :: Key: image :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,900 - PrintKeys - INFO - :: Key: loss :: Item: <class 'float'>
2023-03-12 16:38:30,900 - PrintKeys - INFO - :: Key: pred_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,900 - PrintKeys - INFO - :: Key: pred_clf :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,900 - PrintKeys - INFO - :: Key: label_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,900 - PrintKeys - INFO - :: Key: label_clf :: Item: <class 'torch.Tensor'>
2023-03-12 16:38:30,906 - PrintKeys - INFO - :: Key: image :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,906 - PrintKeys - INFO - :: Key: loss :: Item: <class 'float'>
2023-03-12 16:38:30,906 - PrintKeys - INFO - :: Key: pred_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,908 - PrintKeys - INFO - :: Key: pred_clf :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,908 - PrintKeys - INFO - :: Key: label_segm :: Item: <class 'monai.data.meta_tensor.MetaTensor'>
2023-03-12 16:38:30,908 - PrintKeys - INFO - :: Key: label_clf :: Item: <class 'torch.Tensor'>
2023-03-12 16:38:30,914 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event Events.ITERATION_COMPLETED
2023-03-12 16:38:30,914 - ignite.engine.engine.SupervisedTrainer - ERROR - Current run is terminating due to exception: 'label_clf'
2023-03-12 16:38:30,914 - ignite.engine.engine.SupervisedTrainer - DEBUG - 1 | 1, Firing handlers for event Events.EXCEPTION_RAISED
2023-03-12 16:38:30,914 - ignite.engine.engine.SupervisedTrainer - ERROR - Exception: 'label_clf'
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/ignite/engine/engine.py", line 1069, in _run_once_on_dataset_as_gen
self._fire_event(Events.ITERATION_COMPLETED)
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/ignite/engine/engine.py", line 425, in _fire_event
func(*first, *(event_args + others), **kwargs)
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/ignite/metrics/metric.py", line 284, in iteration_completed
output = self._output_transform(engine.state.output)
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/handlers/utils.py", line 196, in _wrapper
ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/handlers/utils.py", line 196, in <listcomp>
ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
File "/usr/local/anaconda3/envs/monai/lib/python3.10/site-packages/monai/handlers/utils.py", line 196, in <listcomp>
ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
KeyError: 'label_clf'
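For illustration, the failing extraction can be reproduced outside of MONAI. The sketch below is a simplified stand-in for the per-key lookup in `monai/handlers/utils.py` (the function and variable names here are illustrative assumptions, not the actual API): once `'label_clf'` has been routed into `engine.state.batch` instead of `engine.state.output`, the key lookup over the decollated output raises exactly this `KeyError`.

```python
# Simplified stand-in for the key extraction done in monai/handlers/utils.py
# (line 196 in the stack trace); names are illustrative, not the real API.
def extract_keys(data, keys, first=False):
    # Mirrors: ret = [data[0][k] if first else [i[k] for i in data] for k in _keys]
    return [data[0][k] if first else [i[k] for i in data] for k in keys]


# After decollating, the engine output holds one dict per batch item.
# 'label_clf' was routed into engine.state.batch, so it is missing here.
decollated_output = [
    {"image": "...", "loss": 0.5, "pred_clf": [0.9, 0.1], "label_segm": "..."},
]

try:
    extract_keys(decollated_output, ["pred_clf", "label_clf"])
except KeyError as e:
    print(f"KeyError: {e}")  # KeyError: 'label_clf'
```

Extracting only keys that are present works as expected, which matches the observation that the error depends on where the post-transform routes each key.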
-
Update

Below is an annotated version of the function that highlights the changes made to the dictionary keys:

```python
def engine_apply_transform(batch: Any, output: Any, transform: Callable[..., dict]) -> tuple[Any, Any]:
    """
    Apply transform on `batch` and `output`.
    If `batch` and `output` are dictionaries, temporarily combine them for the transform,
    otherwise, apply the transform for `output` data only.
    """
    if isinstance(batch, dict) and isinstance(output, dict):
        # print(batch.keys())
        # >>> dict_keys(['image', 'label_segm', 'label_clf'])
        # print(output.keys())
        # >>> dict_keys(['image', 'label', 'pred', 'loss'])
        data = dict(batch)
        # print(data.keys())
        # >>> dict_keys(['image', 'label_segm', 'label_clf'])
        data.update(output)
        # print(data.keys())
        # >>> dict_keys(['image', 'label_segm', 'label_clf', 'label', 'pred', 'loss'])
        transformed_data = apply_transform(transform, data)
        # print(transformed_data.keys())
        # >>> dict_keys(['image', 'loss', 'pred_segm', 'pred_clf', 'label_segm', 'label_clf'])
        if not isinstance(transformed_data, dict):
            raise AssertionError("With a dict supplied to apply_transform a single dict return is expected.")
        for k, v in transformed_data.items():
            # --- original comments from monai start ---
            # split the output data of post transforms into `output` and `batch`,
            # `batch` should be read-only, so save the generated key-value into `output`
            # --- original comments from monai end ---
            if k in output or k not in batch:
                output[k] = v
            else:
                # 'label_segm' and 'label_clf' are found in `batch`, so they are updated here and not in `output`.
                # This leads to the unexpected behavior in the bug report, where the flattened keys are not
                # updated in the output, causing the subsequent errors.
                # Note that values in `batch` are also updated, making it not read-only.
                batch[k] = v
    else:
        output = apply_transform(transform, output)
    return batch, output
```

A potential fix would be changing this to:

```python
def engine_apply_transform(batch: Any, output: Any, transform: Callable[..., dict]) -> tuple[Any, Any]:
    """
    Apply transform on `batch` and `output`.
    If `batch` and `output` are dictionaries, temporarily combine them for the transform,
    otherwise, apply the transform for `output` data only.
    """
    if isinstance(batch, dict) and isinstance(output, dict):
        data = dict(batch)
        data.update(output)
        transformed_data = apply_transform(transform, data)
        if not isinstance(transformed_data, dict):
            raise AssertionError("With a dict supplied to apply_transform a single dict return is expected.")
        for k, v in transformed_data.items():
            # split the output data of post transforms into `output` and `batch`,
            # `batch` should be read-only, so save the generated key-value into `output`
            output[k] = v
            if k in batch:
                batch[k] = v
    else:
        output = apply_transform(transform, output)
    return batch, output
```

However, this would still not make `batch` read-only. So maybe `batch` should not be updated at all? Another fix could be updating the
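The difference between the two key-routing strategies can be demonstrated with plain dictionaries, without MONAI or an engine. This is a minimal sketch, assuming a post-transform that has already produced the flattened keys; the helper names are illustrative, not MONAI API:

```python
def route_keys_original(batch, output, transformed):
    # Current behavior: keys already present in `batch` are written back to
    # `batch`, so a post-transform key like 'label_clf' never reaches `output`.
    for k, v in transformed.items():
        if k in output or k not in batch:
            output[k] = v
        else:
            batch[k] = v
    return batch, output


def route_keys_fixed(batch, output, transformed):
    # Proposed fix: always copy into `output`; mirror into `batch` when present.
    for k, v in transformed.items():
        output[k] = v
        if k in batch:
            batch[k] = v
    return batch, output


batch = {"image": 1, "label_segm": 2, "label_clf": 3}
output = {"image": 1, "pred": 4, "loss": 0.1}
transformed = {"image": 1, "pred_clf": 4, "label_clf": 3}

_, out_orig = route_keys_original(dict(batch), dict(output), transformed)
print("label_clf" in out_orig)   # False: the key stayed in batch

_, out_fixed = route_keys_fixed(dict(batch), dict(output), transformed)
print("label_clf" in out_fixed)  # True: a metric handler could now find it
```

With the original routing, any metric whose output transform looks up `label_clf` in the engine output fails with the `KeyError` from the bug report.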
-
Hi @kbressem, I didn't look deeply into this, but your case looks similar to the HoverNet one, with one input and three output branches in a dictionary. Could you please take a look at these two utilities and find out whether they can solve your problem? Thanks!
-
Hi @KumoLiu, thank you for the suggestion. Like in HoverNet, I also implemented a custom transform. I think an additional workaround would be adding a prefix during the unnesting of the labels in the output, so the names differ from the items in `batch`.
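That prefixing workaround can be sketched with plain dictionaries (no MONAI needed; the function and key names below are illustrative assumptions): if the unnested keys receive a prefix, they no longer collide with keys already present in the batch, so the routing in `engine_apply_transform` would store them in the output.

```python
def flatten_with_prefix(nested, parent_key, prefix="out_"):
    # Unnest e.g. output["label"] = {"segm": ..., "clf": ...} into
    # prefixed top-level keys that cannot collide with batch keys.
    flat = {k: v for k, v in nested.items() if k != parent_key}
    for sub_key, sub_val in nested[parent_key].items():
        flat[f"{prefix}{parent_key}_{sub_key}"] = sub_val
    return flat


output = {"pred": 0.9, "label": {"segm": "mask", "clf": 1}}
flat = flatten_with_prefix(output, "label")
print(sorted(flat))  # ['out_label_clf', 'out_label_segm', 'pred']
```

A metric's output transform would then read `out_label_clf` instead of `label_clf`, avoiding the collision entirely.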
-
Hi @kbressem, I took a look at your case; it seems you can't get the correct label. I noticed that in [...] Thanks!
-
I do not apply `FlattenSubKeysd` but only use it during postprocessing. I believe the reason why I have two labels in `engine.state.batch` is that the decollate function does not support nested keys; it therefore takes both label items and puts them into the engine's state. However, they are then in `engine.state.batch`, and because of this line in `engine_apply_transform` (MONAI/monai/engines/utils.py, line 269 in a8302ec), they will not be transferred to the output and thus are not usable for metric calculation. That said, the approach you linked, using a Lambda to apply transforms to the nested items, would probably work together with a