Replies: 1 comment
You seem to pass a test dataloader to test the model here: trainer.test(model, test_loader). Based on the error message, PyTorch Lightning does not seem to like this.

    File c:\Users\benedict\anaconda3\envs\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:638, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
        633 raise MisconfigurationException(
        634     "You cannot pass `train_dataloader` or `val_dataloaders` to `trainer.fit(datamodule=...)`"
        635 )

This is an interesting application; however, the error stack you are getting is not an issue in anomalib. I will therefore move this to a Q&A in Discussions. Feel free to continue the discussion there.
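For context, Lightning expects either explicit dataloaders or a datamodule for a given trainer call, not both at once. Here is a minimal sketch of the two valid patterns (the model, datamodule, and loader objects are placeholders, not code from this issue):

```python
from pytorch_lightning import Trainer


def run(model, datamodule=None, train_loader=None, val_loader=None, test_loader=None):
    """Minimal sketch: give the trainer either a datamodule or explicit loaders, never both."""
    trainer = Trainer(max_epochs=1)

    if datamodule is not None:
        # All loaders (train/val/test) are resolved from the datamodule.
        trainer.fit(model, datamodule=datamodule)
        trainer.test(model, datamodule=datamodule)
    else:
        # Explicit loaders only; combining them with datamodule=... is what
        # triggers the MisconfigurationException shown in the traceback above.
        trainer.fit(model, train_dataloaders=train_loader, val_dataloaders=val_loader)
        trainer.test(model, dataloaders=test_loader)
```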
Describe the bug
Hi,
while trying to implement k-fold cross validation, I'm facing the issue that I can't get checkpointing to work with custom dataloaders. The same config works fine, with checkpoints, when the custom loaders are not added.
I have tried passing the loaders directly to the fit() method, and I have also tried appending the dataloaders to the datamodule and passing that to fit(), but neither works.
Below is the function I used:
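In outline it looks like the sketch below (a simplified sketch rather than the full function; it assumes sklearn's KFold for the splits, torch.utils.data.Subset and DataLoader for the per-fold loaders, and a pytorch_lightning ModelCheckpoint callback; MyAnomalyModule and full_dataset are placeholder names for my actual model and dataset):

```python
from pathlib import Path

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, Subset


def run_kfold(full_dataset, n_splits=5, max_epochs=10):
    """Train one model per fold, checkpointing every epoch (placeholder names)."""
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=42)

    for fold, (train_idx, val_idx) in enumerate(kfold.split(range(len(full_dataset)))):
        train_loader = DataLoader(Subset(full_dataset, train_idx), batch_size=32, shuffle=True)
        val_loader = DataLoader(Subset(full_dataset, val_idx), batch_size=32)

        # Save a checkpoint at the end of every epoch for this fold.
        checkpoint_cb = ModelCheckpoint(
            dirpath=Path("results") / f"fold_{fold}",
            every_n_epochs=1,
            save_top_k=-1,
        )

        model = MyAnomalyModule()  # placeholder for the actual anomalib model
        trainer = Trainer(max_epochs=max_epochs, callbacks=[checkpoint_cb])

        # Pass the per-fold loaders directly; no datamodule is passed alongside them.
        trainer.fit(model, train_dataloaders=train_loader, val_dataloaders=val_loader)
```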
Dataset
Folder
Model
Reverse Distillation
Steps to reproduce the behavior
OS information
Expected behavior
The checkpoint callback should create checkpoints every epoch.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
0.4.0
Configuration YAML
Logs
Code of Conduct