Replies: 1 comment
-
This happens if, for some reason, the dataset is divided into batches in a way that leaves a single image in the last batch. Instead of [1, 224, 224], it gets a shape of [3, 224, 224], and that causes this problem. So a quick fix is to use a different batch size, but we'll need to investigate this further.
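To illustrate the quick fix: the sketch below picks an evaluation batch size that cannot leave a lone image in the last batch. The directory names, image count, and `Folder` arguments are assumptions for illustration only (mirror whatever is in your configs/data/folder.yaml), and `pick_batch_size` is a hypothetical helper, not part of anomalib.

```python
from anomalib.data import Folder


def pick_batch_size(num_images: int, preferred: int) -> int:
    """Largest batch size <= preferred that leaves no single-image last batch."""
    batch_size = preferred
    while batch_size > 1 and num_images % batch_size == 1:
        batch_size -= 1
    return batch_size


# Example: 61 validation images with batch size 30 leave a final batch of one image
# (61 % 30 == 1); dropping to 29 avoids it (61 % 29 == 3).
eval_bs = pick_batch_size(num_images=61, preferred=30)

# Assumed folder layout; adjust to your dataset.
datamodule = Folder(
    name="my_dataset",
    root="datasets/my_dataset",
    normal_dir="good",
    abnormal_dir="defect",
    train_batch_size=32,
    eval_batch_size=eval_bs,
)
```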
-
anomalib fit -c configs/model/patchcore.yaml --data configs/data/folder.yaml
2024-03-15 15:21:31,091 - anomalib.utils.config - WARNING - Anomalib currently does not support multi-gpu training. Setting devices to 1.
C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torchmetrics\utilities\prints.py:36: UserWarning: Metric `PrecisionRecallCurve` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
2024-03-15 15:21:31,123 - anomalib.models.components.base.anomaly_module - INFO - Initializing Patchcore model.
2024-03-15 15:21:32,785 - timm.models.helpers - INFO - Loading pretrained weights from url (https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/wide_resnet50_racm-8234f177.pth)
2024-03-15 15:21:32,998 - anomalib.callbacks - INFO - Loading the callbacks
2024-03-15 15:21:33,003 - anomalib.engine.engine - INFO - Overriding gradient_clip_val from None with 0 for Patchcore
2024-03-15 15:21:33,004 - anomalib.engine.engine - INFO - Overriding max_epochs from None with 1 for Patchcore
2024-03-15 15:21:33,006 - anomalib.engine.engine - INFO - Overriding num_sanity_val_steps from None with 0 for Patchcore
2024-03-15 15:21:33,276 - lightning.pytorch.utilities.rank_zero - INFO - GPU available: True (cuda), used: True
2024-03-15 15:21:33,279 - lightning.pytorch.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores
2024-03-15 15:21:33,281 - lightning.pytorch.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs
2024-03-15 15:21:33,284 - lightning.pytorch.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs
2024-03-15 15:21:33,288 - lightning.pytorch.utilities.rank_zero - INFO - You are using a CUDA device ('NVIDIA GeForce RTX 3090') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torchmetrics\utilities\prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
2024-03-15 15:21:33,725 - lightning.pytorch.accelerators.cuda - INFO - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\core\optimizer.py:180: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
2024-03-15 15:21:33,733 - lightning.pytorch.callbacks.model_summary - INFO -
  | Name                  | Type                     | Params
-------------------------------------------------------------------
0 | model                 | PatchcoreModel           | 24.9 M
1 | _transform            | Compose                  | 0
2 | normalization_metrics | MinMax                   | 0
3 | image_threshold       | F1AdaptiveThreshold      | 0
4 | pixel_threshold       | F1AdaptiveThreshold      | 0
5 | image_metrics         | AnomalibMetricCollection | 0
6 | pixel_metrics         | AnomalibMetricCollection | 0
-------------------------------------------------------------------
24.9 M    Trainable params
0         Non-trainable params
24.9 M    Total params
99.450    Total estimated model params size (MB)
C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:436: Consider setting `persistent_workers=True` in 'train_dataloader' to speed up the dataloader worker initialization.
C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:436: Consider setting `persistent_workers=True` in 'val_dataloader' to speed up the dataloader worker initialization.
Epoch 0:   0%|          | 0/2 [00:00<?, ?it/s]
C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\optimization\automatic.py:129: `training_step` returned `None`. If this was on purpose, ignore this warning...
Epoch 0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00, 0.29it/s]
2024-03-15 15:21:42,192 - anomalib.models.image.patchcore.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2024-03-15 15:21:42,199 - anomalib.models.image.patchcore.lightning_model - INFO - Applying core-set subsampling to get the embedding.
Selecting Coreset Indices. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:04
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in _run_module_as_main:198 │
│ in _run_code:88 │
│ │
│ in <module>:7 │
│ │
│ 4 from anomalib.cli.cli import main │
│ 5 if __name__ == '__main__': │
│ 6 │ sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) │
│ ❱ 7 │ sys.exit(main()) │
│ 8 │
│ │
│ C:\Users\Administrator\Desktop\anomalib\src\anomalib\cli\cli.py:493 in main │
│ │
│ 490 def main() -> None: │
│ 491 │ """Trainer via Anomalib CLI.""" │
│ 492 │ configure_logger() │
│ ❱ 493 │ AnomalibCLI() │
│ 494 │
│ 495 │
│ 496 if __name__ == "__main__": │
│ │
│ C:\Users\Administrator\Desktop\anomalib\src\anomalib\cli\cli.py:65 in __init__ │
│ │
│ 62 │ │ if _LIGHTNING_AVAILABLE: │
│ 63 │ │ │ self.before_instantiate_classes() │
│ 64 │ │ │ self.instantiate_classes() │
│ ❱ 65 │ │ self._run_subcommand() │
│ 66 │ │
│ 67 │ def init_parser(self, **kwargs) -> ArgumentParser: │
│ 68 │ │ """Method that instantiates the argument parser.""" │
│ │
│ C:\Users\Administrator\Desktop\anomalib\src\anomalib\cli\cli.py:355 in _run_subcommand │
│ │
│ 352 │ │ elif self.config["subcommand"] in (*self.subcommands(), "train", "export", "pred │
│ 353 │ │ │ fn = getattr(self.engine, self.subcommand) │
│ 354 │ │ │ fn_kwargs = self._prepare_subcommand_kwargs(self.subcommand) │
│ ❱ 355 │ │ │ fn(**fn_kwargs) │
│ 356 │ │ else: │
│ 357 │ │ │ self.config_init = self.parser.instantiate_classes(self.config) │
│ 358 │ │ │ getattr(self, f"{self.subcommand}")() │
│ │
│ C:\Users\Administrator\Desktop\anomalib\src\anomalib\engine\engine.py:515 in fit │
│ │
│ 512 │ │ │ # if the model is zero-shot or few-shot, we only need to run validate for no │
│ 513 │ │ │ self.trainer.validate(model, val_dataloaders, datamodule=datamodule, ckpt_pa │
│ 514 │ │ else: │
│ ❱ 515 │ │ │ self.trainer.fit(model, train_dataloaders, val_dataloaders, datamodule, ckpt │
│ 516 │ │
│ 517 │ def validate( │
│ 518 │ │ self, │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\traine │
│ r.py:544 in fit │
│ │
│ 541 │ │ self.state.fn = TrainerFn.FITTING │
│ 542 │ │ self.state.status = TrainerStatus.RUNNING │
│ 543 │ │ self.training = True │
│ ❱ 544 │ │ call._call_and_handle_interrupt( │
│ 545 │ │ │ self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, │
│ 546 │ │ ) │
│ 547 │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\call.p │
│ y:44 in _call_and_handle_interrupt │
│ │
│ 41 │ try: │
│ 42 │ │ if trainer.strategy.launcher is not None: │
│ 43 │ │ │ return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, │
│ ❱ 44 │ │ return trainer_fn(*args, **kwargs) │
│ 45 │ │
│ 46 │ except _TunerExitException: │
│ 47 │ │ _call_teardown_hook(trainer) │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\traine │
│ r.py:580 in _fit_impl │
│ │
│ 577 │ │ │ model_provided=True, │
│ 578 │ │ │ model_connected=self.lightning_module is not None, │
│ 579 │ │ ) │
│ ❱ 580 │ │ self._run(model, ckpt_path=ckpt_path) │
│ 581 │ │ │
│ 582 │ │ assert self.state.stopped │
│ 583 │ │ self.training = False │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\traine │
│ r.py:989 in _run │
│ │
│ 986 │ │ # ---------------------------- │
│ 987 │ │ # RUN THE TRAINER │
│ 988 │ │ # ---------------------------- │
│ ❱ 989 │ │ results = self._run_stage() │
│ 990 │ │ │
│ 991 │ │ # ---------------------------- │
│ 992 │ │ # POST-Training CLEAN UP │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\trainer\traine │
│ r.py:1035 in _run_stage │
│ │
│ 1032 │ │ │ with isolate_rng(): │
│ 1033 │ │ │ │ self._run_sanity_check() │
│ 1034 │ │ │ with torch.autograd.set_detect_anomaly(self._detect_anomaly): │
│ ❱ 1035 │ │ │ │ self.fit_loop.run() │
│ 1036 │ │ │ return None │
│ 1037 │ │ raise RuntimeError(f"Unexpected state {self.state}") │
│ 1038 │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\fit_loop │
│ .py:202 in run │
│ │
│ 199 │ │ while not self.done: │
│ 200 │ │ │ try: │
│ 201 │ │ │ │ self.on_advance_start() │
│ ❱ 202 │ │ │ │ self.advance() │
│ 203 │ │ │ │ self.on_advance_end() │
│ 204 │ │ │ │ self._restarting = False │
│ 205 │ │ │ except StopIteration: │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\fit_loop │
│ .py:359 in advance │
│ │
│ 356 │ │ │ ) │
│ 357 │ │ with self.trainer.profiler.profile("run_training_epoch"): │
│ 358 │ │ │ assert self._data_fetcher is not None │
│ ❱ 359 │ │ │ self.epoch_loop.run(self._data_fetcher) │
│ 360 │ │
│ 361 │ def on_advance_end(self) -> None: │
│ 362 │ │ trainer = self.trainer │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\training │
│ _epoch_loop.py:137 in run │
│ │
│ 134 │ │ while not self.done: │
│ 135 │ │ │ try: │
│ 136 │ │ │ │ self.advance(data_fetcher) │
│ ❱ 137 │ │ │ │ self.on_advance_end(data_fetcher) │
│ 138 │ │ │ │ self._restarting = False │
│ 139 │ │ │ except StopIteration: │
│ 140 │ │ │ │ break │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\training │
│ _epoch_loop.py:285 in on_advance_end │
│ │
│ 282 │ │ │ │ # clear gradients to not leave any unused memory during validation │
│ 283 │ │ │ │ call._call_lightning_module_hook(self.trainer, "on_validation_model_zero │
│ 284 │ │ │ │
│ ❱ 285 │ │ │ self.val_loop.run() │
│ 286 │ │ │ self.trainer.training = True │
│ 287 │ │ │ self.trainer._logger_connector._first_loop_iter = first_loop_iter │
│ 288 │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\utilitie │
│ s.py:182 in _decorator │
│ │
│ 179 │ │ else: │
│ 180 │ │ │ context_manager = torch.no_grad │
│ 181 │ │ with context_manager(): │
│ ❱ 182 │ │ │ return loop_run(self, *args, **kwargs) │
│ 183 │ │
│ 184 │ return _decorator │
│ 185 │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\evaluati │
│ on_loop.py:127 in run │
│ │
│ 124 │ │ │ │ │ dataloader_idx = data_fetcher._dataloader_idx │
│ 125 │ │ │ │ else: │
│ 126 │ │ │ │ │ dataloader_iter = None │
│ ❱ 127 │ │ │ │ │ batch, batch_idx, dataloader_idx = next(data_fetcher) │
│ 128 │ │ │ │ if previous_dataloader_idx != dataloader_idx: │
│ 129 │ │ │ │ │ # the dataloader has changed, notify the logger connector │
│ 130 │ │ │ │ │ self._store_dataloader_outputs() │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\fetchers │
│ .py:127 in __next__ │
│ │
│ 124 │ │ │ │ self.done = not self.batches │
│ 125 │ │ elif not self.done: │
│ 126 │ │ │ # this will run only when no pre-fetching was done. │
│ ❱ 127 │ │ │ batch = super().__next__() │
│ 128 │ │ else: │
│ 129 │ │ │ # the iterator is empty │
│ 130 │ │ │ raise StopIteration │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\loops\fetchers │
│ .py:56 in __next__ │
│ │
│ 53 │ │ assert self.iterator is not None │
│ 54 │ │ self._start_profiler() │
│ 55 │ │ try: │
│ ❱ 56 │ │ │ batch = next(self.iterator) │
│ 57 │ │ except StopIteration: │
│ 58 │ │ │ self.done = True │
│ 59 │ │ │ raise │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\utilities\comb │
│ ined_loader.py:326 in __next__ │
│ │
│ 323 │ │
│ 324 │ def __next__(self) -> _ITERATOR_RETURN: │
│ 325 │ │ assert self._iterator is not None │
│ ❱ 326 │ │ out = next(self._iterator) │
│ 327 │ │ if isinstance(self._iterator, _Sequential): │
│ 328 │ │ │ return out │
│ 329 │ │ out, batch_idx, dataloader_idx = out │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\lightning\pytorch\utilities\comb │
│ ined_loader.py:132 in __next__ │
│ │
│ 129 │ │ │ │ │ raise StopIteration │
│ 130 │ │ │
│ 131 │ │ try: │
│ ❱ 132 │ │ │ out = next(self.iterators[0]) │
│ 133 │ │ except StopIteration: │
│ 134 │ │ │ # try the next iterator │
│ 135 │ │ │ self._use_next_iterator() │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\dataloader.py:6 │
│ 30 in __next__ │
│ │
│ 627 │ │ │ if self._sampler_iter is None: │
│ 628 │ │ │ │ # TODO(pytorch/pytorch#76750) │
│ 629 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 630 │ │ │ data = self._next_data() │
│ 631 │ │ │ self._num_yielded += 1 │
│ 632 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 633 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\dataloader.py:1 │
│ 345 in _next_data │
│ │
│ 1342 │ │ │ │ self._task_info[idx] += (data,) │
│ 1343 │ │ │ else: │
│ 1344 │ │ │ │ del self._task_info[idx] │
│ ❱ 1345 │ │ │ │ return self._process_data(data) │
│ 1346 │ │
│ 1347 │ def _try_put_index(self): │
│ 1348 │ │ assert self._tasks_outstanding < self._prefetch_factor * self._num_workers │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\dataloader.py:1 │
│ 371 in _process_data │
│ │
│ 1368 │ │ self._rcvd_idx += 1 │
│ 1369 │ │ self._try_put_index() │
│ 1370 │ │ if isinstance(data, ExceptionWrapper): │
│ ❱ 1371 │ │ │ data.reraise() │
│ 1372 │ │ return data │
│ 1373 │ │
│ 1374 │ def _mark_worker_as_unavailable(self, worker_id, shutdown=False): │
│ │
│ C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\_utils.py:694 in reraise │
│ │
│ 691 │ │ │ # If the exception takes multiple arguments, don't try to │
│ 692 │ │ │ # instantiate since we don't know how to │
│ 693 │ │ │ raise RuntimeError(msg) from None │
│ ❱ 694 │ │ raise exception │
│ 695 │
│ 696 │
│ 697 def _get_available_device_type(): │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\fetch.py", line 54, in fetch
    return self.collate_fn(data)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\src\anomalib\data\base\datamodule.py", line 46, in collate_fn
    out_dict.update({key: default_collate([item[key] for item in batch]) for key in elem})
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\src\anomalib\data\base\datamodule.py", line 46, in <dictcomp>
    out_dict.update({key: default_collate([item[key] for item in batch]) for key in elem})
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\collate.py", line 265, in default_collate
    return collate(batch, collate_fn_map=default_collate_fn_map)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\collate.py", line 123, in collate
    return collate_fn_map[collate_type](batch, collate_fn_map=collate_fn_map)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\collate.py", line 162, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torchvision\tv_tensors\_tv_tensor.py", line 77, in __torch_function__
    output = func(*args, **kwargs or dict())
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\collate.py", line 123, in collate
    return collate_fn_map[collate_type](batch, collate_fn_map=collate_fn_map)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torch\utils\data\_utils\collate.py", line 162, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\Desktop\anomalib\.venv\Lib\site-packages\torchvision\tv_tensors\_tv_tensor.py", line 77, in __torch_function__
    output = func(*args, **kwargs or dict())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: stack expects each tensor to be equal size, but got [224, 224] at entry 0 and [3, 224, 224] at entry 29
Epoch 0: 100%|██████████| 2/2 [00:17<00:00, 0.12it/s]
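For reference, the error at the bottom of this log can be reproduced outside anomalib with the default collate function alone; the shapes below are assumed from the error message, not taken from the actual dataset:

```python
import torch
from torch.utils.data import default_collate

# One sample is missing its channel dimension ([224, 224] instead of [3, 224, 224]);
# default_collate tries to torch.stack the mismatched tensors and raises.
batch = [{"image": torch.zeros(224, 224)}] + [
    {"image": torch.zeros(3, 224, 224)} for _ in range(29)
]

try:
    default_collate(batch)
except RuntimeError as err:
    print(err)  # stack expects each tensor to be equal size ...
```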