Replies: 3 comments
-
Thanks for reporting @Narc17. We'll investigate this.
-
@Narc17, I'm unable to reproduce this issue. The following script works on my side:

from torchmetrics.classification import BinaryAccuracy

from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.metrics import Evaluator, create_anomalib_metric
from anomalib.models import Patchcore

# Create a datamodule
datamodule = MVTec()

# Create an Anomalib metric from a torchmetrics metric
Accuracy = create_anomalib_metric(BinaryAccuracy)
evaluator = Evaluator(
    test_metrics=[
        Accuracy(fields=["pred_score", "gt_label"], prefix="image_"),
    ],
)

# Create a model with the evaluator
model = Patchcore(evaluator=evaluator)

# Create an engine to train the model
engine = Engine()
engine.train(model=model, datamodule=datamodule)

Here are the terminal results when you run this:

❯ python debug_evaluator.py
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
/home/sa/Projects/anomalib/debug_evaluator.py:9: DeprecationWarning: MVTec is deprecated and will be removed in a future version. Please use MVTecAD instead.
datamodule = MVTec()
INFO:anomalib.models.components.base.anomalib_module:Initializing Patchcore model.
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/lightning/pytorch/utilities/parsing.py:209: Attribute 'evaluator' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['evaluator'])`.
INFO:timm.models._builder:Loading pretrained weights from Hugging Face hub (timm/wide_resnet50_2.racm_in1k)
INFO:timm.models._hub:[timm/wide_resnet50_2.racm_in1k] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
INFO:timm.models._builder:Missing keys (fc.weight, fc.bias) discovered while loading pretrained weights. This is expected if model is being adapted.
INFO:lightning_fabric.utilities.rank_zero:GPU available: True (cuda), used: True
INFO:lightning_fabric.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:lightning_fabric.utilities.rank_zero:HPU available: False, using: 0 HPUs
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/lightning/pytorch/utilities/parsing.py:45: Attribute 'evaluator' removed from hparams because it cannot be pickled. You can suppress this warning by setting `self.save_hyperparameters(ignore=['evaluator'])`.
INFO:lightning_fabric.utilities.rank_zero:You are using a CUDA device ('NVIDIA GeForce RTX 3090') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
/home/sa/.cursor-server/extensions/ms-python.debugpy-2025.8.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/_vendored/force_pydevd.py:18: UserWarning: incompatible copy of pydevd already imported:
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/pydevd_plugins/extensions/pydevd_plugin_omegaconf.py
warnings.warn(msg + ':\n {}'.format('\n '.join(_unvendored)))
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
/home/sa/Projects/anomalib/debug_evaluator.py:9: DeprecationWarning: MVTec is deprecated and will be removed in a future version. Please use MVTecAD instead.
datamodule = MVTec()
INFO:anomalib.models.components.base.anomalib_module:Initializing Patchcore model.
INFO:timm.models._builder:Loading pretrained weights from Hugging Face hub (timm/wide_resnet50_2.racm_in1k)
INFO:timm.models._hub:[timm/wide_resnet50_2.racm_in1k] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
INFO:timm.models._builder:Missing keys (fc.weight, fc.bias) discovered while loading pretrained weights. This is expected if model is being adapted.
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
INFO:lightning_fabric.utilities.rank_zero:----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
INFO:anomalib.data.datamodules.image.mvtecad:Found the dataset.
WARNING:anomalib.metrics.evaluator:Number of devices is greater than 1, setting compute_on_cpu to False.
WARNING:anomalib.metrics.evaluator:Number of devices is greater than 1, setting compute_on_cpu to False.
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/lightning/pytorch/core/optimizer.py:183: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
| Name | Type | Params | Mode
----------------------------------------------------------
0 | pre_processor | PreProcessor | 0 | train
1 | post_processor | PostProcessor | 0 | train
2 | evaluator | Evaluator | 0 | train
3 | model | PatchcoreModel | 24.9 M | train
----------------------------------------------------------
24.9 M Trainable params
0 Non-trainable params
24.9 M Total params
99.450 Total estimated model params size (MB)
16 Modules in train mode
174 Modules in eval mode
Epoch 0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:05<00:00, 0.72it/s]INFO:anomalib.models.image.patchcore.lightning_model:Aggregating the embedding extracted from the training set.
INFO:anomalib.models.image.patchcore.lightning_model:Applying core-set subsampling to get the embedding.
INFO:anomalib.models.image.patchcore.lightning_model:Aggregating the embedding extracted from the training set. | 0/? [00:00<?, ?it/s]
INFO:anomalib.models.image.patchcore.lightning_model:Applying core-set subsampling to get the embedding.
Selecting Coreset Indices.: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 10752/10752 [00:09<00:00, 1099.51it/s]
Selecting Coreset Indices.: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 10752/10752 [00:10<00:00, 1070.63it/s]
Epoch 0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:28<00:00, 0.14it/s]INFO:lightning_fabric.utilities.rank_zero:`Trainer.fit` stopped: `max_epochs=1` reached.
Epoch 0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:29<00:00, 0.14it/s]
INFO:anomalib.callbacks.timer:Training took 31.32 seconds
INFO:lightning_fabric.utilities.rank_zero:The following callbacks returned in `LightningModule.configure_callbacks` will override existing callbacks passed to Trainer: Evaluator, ImageVisualizer, PostProcessor, PreProcessor
INFO:anomalib.callbacks.timer:Training took 31.33 seconds
INFO:anomalib.data.datamodules.image.mvtecad:Found the dataset.
WARNING:anomalib.metrics.evaluator:Number of devices is greater than 1, setting compute_on_cpu to False.
WARNING:anomalib.metrics.evaluator:Number of devices is greater than 1, setting compute_on_cpu to False.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
/home/sa/Projects/anomalib/.venv/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:216: Using `DistributedSampler` with the dataloaders. During `trainer.test()`, it is recommended to use `Trainer(devices=1, num_nodes=1)` to ensure each sample/batch gets evaluated exactly once. Otherwise, multi-device settings use `DistributedSampler` that replicates some samples to make sure all devices have same batch size in case of uneven inputs.
Testing DataLoader 0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00, 0.86it/s]INFO:anomalib.callbacks.timer:Testing took 6.876540184020996 seconds
Throughput (batch_size=32) : 12.070023264441463 FPS
INFO:anomalib.callbacks.timer:Testing took 6.854757308959961 seconds
Throughput (batch_size=32) : 12.10837908024977 FPS
Testing DataLoader 0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 0.67it/s]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Test metric ┃ DataLoader 0 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ image_BinaryAccuracy │ 0.988095223903656 │
└───────────────────────────┴───────────────────────────┘

Here are the package versions on which I tested this:
torch==2.6.0
torch-tb-profiler==0.4.3
torchmetrics==1.7.0
torchvision==0.21.0
-
I'm moving this to a Q&A. Feel free to convert it back to a bug report if you still think there is something wrong. Cheers!
-
Describe the bug
Greetings again!
I am trying to use BinaryAccuracy from torchmetrics in the test evaluator. I used anomalib.metrics.create_anomalib_metric to wrap the Metric class into an AnomalibMetric class. When Engine.trainer tries to create a checkpoint, it raises an error saying the wrapped class cannot be pickled. What should I do to avoid this error?
Thank you!
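For context, the wrapping step described above looks roughly like this (a minimal sketch; the field names and prefix are taken from the reproduction script in the reply above, not from anything beyond this thread):

from torchmetrics.classification import BinaryAccuracy
from anomalib.metrics import create_anomalib_metric

# create_anomalib_metric returns a dynamically created AnomalibMetric subclass
Accuracy = create_anomalib_metric(BinaryAccuracy)
metric = Accuracy(fields=["pred_score", "gt_label"], prefix="image_")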
Dataset
MVTecAD
Model
PatchCore
Steps to reproduce the behavior
1. Create an Evaluator by: Evaluator(test_metrics=[anomalib.metrics.create_anomalib_metric(torchmetrics.classification.BinaryAccuracy)(fields=["pred_score", "gt_label"], prefix="image_")]) (a readable sketch of these steps follows this list)
2. Create a Patchcore model with the evaluator created in step 1
3. Call Engine.fit()
4. Get _pickle.PicklingError.
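Spelled out, the steps above correspond roughly to the following sketch (assembled from this report and the reply above; Engine.fit() and the MVTecAD datamodule are used here because the report names them, whereas the reply above uses engine.train() and MVTec):

from torchmetrics.classification import BinaryAccuracy
from anomalib.data import MVTecAD
from anomalib.engine import Engine
from anomalib.metrics import Evaluator, create_anomalib_metric
from anomalib.models import Patchcore

# Step 1: wrap the torchmetrics metric and build the evaluator
Accuracy = create_anomalib_metric(BinaryAccuracy)
evaluator = Evaluator(
    test_metrics=[Accuracy(fields=["pred_score", "gt_label"], prefix="image_")],
)

# Step 2: create the model with the evaluator
model = Patchcore(evaluator=evaluator)

# Steps 3-4: fit the model; the reporter hits _pickle.PicklingError during checkpointing here
engine = Engine()
engine.fit(model=model, datamodule=MVTecAD())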
OS information
OS information:
Evaluator(test_metrics=[anomalib.metrics.create_anomalib_metric(torchmetrics.classification.BinaryAccuracy)(fields=["pred_score", "gt_label"], prefix="image_")])
Expected behavior
Evaluate the model with BinaryAccuracy successfully.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
I did not use YAML configs.
Logs
Code of Conduct