But I can't start distributed training. It always uses only 1 GPU for multiprocess training. Here is part of the training log:
2023/08/25 11:50:17 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
CUDA available: True
numpy_random_seed: 0
GPU 0,1: NVIDIA A100-PCIE-40GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (GCC) 4.8.5
PyTorch: 2.0.1+cu117
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.2+cu117
OpenCV: 4.8.0
MMEngine: 0.8.4
Runtime environment:
dist_cfg: {'backend': 'nccl'}
seed: 0
Distributed launcher: none
Distributed training: False
GPU number: 1
------------------------------------------------------------
2023/08/25 11:50:17 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
CUDA available: True
numpy_random_seed: 0
GPU 0,1: NVIDIA A100-PCIE-40GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (GCC) 4.8.5
PyTorch: 2.0.1+cu117
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.2+cu117
OpenCV: 4.8.0
MMEngine: 0.8.4
Runtime environment:
dist_cfg: {'backend': 'nccl'}
seed: 0
Distributed launcher: none
Distributed training: False
GPU number: 1
------------------------------------------------------------
2023/08/25 11:50:18 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2023/08/25 11:50:18 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2023/08/25 11:50:18 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook
--------------------
before_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
2023/08/25 11:50:18 - mmengine - WARNING - Dataset Dataset has no metainfo. ``dataset_meta`` in visualizer will be None.
2023/08/25 11:50:18 - mmengine - WARNING - The prefix is not set in metric class MeanDice.
2023/08/25 11:50:18 - mmengine - WARNING - Dataset Dataset has no metainfo. ``dataset_meta`` in evaluator, metric and visualizer will be None.
2023/08/25 11:50:18 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook
--------------------
before_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
Name of parameter - Initialization information
model.swinViT.patch_embed.proj.weight - torch.Size([48, 1, 2, 2, 2]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.patch_embed.proj.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.norm1.weight - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.norm1.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.attn.relative_position_bias_table - torch.Size([2197, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.attn.qkv.weight - torch.Size([144, 48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.attn.qkv.bias - torch.Size([144]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.attn.proj.weight - torch.Size([48, 48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.attn.proj.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.norm2.weight - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.norm2.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.mlp.linear1.weight - torch.Size([192, 48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.mlp.linear1.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.mlp.linear2.weight - torch.Size([48, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.0.mlp.linear2.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.norm1.weight - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.norm1.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.attn.relative_position_bias_table - torch.Size([2197, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.attn.qkv.weight - torch.Size([144, 48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.attn.qkv.bias - torch.Size([144]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.attn.proj.weight - torch.Size([48, 48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.attn.proj.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.norm2.weight - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.norm2.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.mlp.linear1.weight - torch.Size([192, 48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.mlp.linear1.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.mlp.linear2.weight - torch.Size([48, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.blocks.1.mlp.linear2.bias - torch.Size([48]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.downsample.reduction.weight - torch.Size([96, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.downsample.norm.weight - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers1.0.downsample.norm.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.norm1.weight - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.norm1.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.attn.relative_position_bias_table - torch.Size([2197, 6]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.attn.qkv.weight - torch.Size([288, 96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.attn.qkv.bias - torch.Size([288]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.attn.proj.weight - torch.Size([96, 96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.attn.proj.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.norm2.weight - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.norm2.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.mlp.linear1.weight - torch.Size([384, 96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.mlp.linear1.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.mlp.linear2.weight - torch.Size([96, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.0.mlp.linear2.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.norm1.weight - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.norm1.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.attn.relative_position_bias_table - torch.Size([2197, 6]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.attn.qkv.weight - torch.Size([288, 96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.attn.qkv.bias - torch.Size([288]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.attn.proj.weight - torch.Size([96, 96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.attn.proj.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.norm2.weight - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.norm2.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.mlp.linear1.weight - torch.Size([384, 96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.mlp.linear1.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.mlp.linear2.weight - torch.Size([96, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.blocks.1.mlp.linear2.bias - torch.Size([96]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.downsample.reduction.weight - torch.Size([192, 768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.downsample.norm.weight - torch.Size([768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers2.0.downsample.norm.bias - torch.Size([768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.norm1.weight - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.norm1.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.attn.relative_position_bias_table - torch.Size([2197, 12]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.attn.qkv.weight - torch.Size([576, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.attn.qkv.bias - torch.Size([576]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.attn.proj.weight - torch.Size([192, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.attn.proj.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.norm2.weight - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.norm2.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.mlp.linear1.weight - torch.Size([768, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.mlp.linear1.bias - torch.Size([768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.mlp.linear2.weight - torch.Size([192, 768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.0.mlp.linear2.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.norm1.weight - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.norm1.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.attn.relative_position_bias_table - torch.Size([2197, 12]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.attn.qkv.weight - torch.Size([576, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.attn.qkv.bias - torch.Size([576]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.attn.proj.weight - torch.Size([192, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.attn.proj.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.norm2.weight - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.norm2.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.mlp.linear1.weight - torch.Size([768, 192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.mlp.linear1.bias - torch.Size([768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.mlp.linear2.weight - torch.Size([192, 768]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.blocks.1.mlp.linear2.bias - torch.Size([192]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.downsample.reduction.weight - torch.Size([384, 1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.downsample.norm.weight - torch.Size([1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers3.0.downsample.norm.bias - torch.Size([1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.norm1.weight - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.norm1.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.attn.relative_position_bias_table - torch.Size([2197, 24]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.attn.qkv.weight - torch.Size([1152, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.attn.qkv.bias - torch.Size([1152]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.attn.proj.weight - torch.Size([384, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.attn.proj.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.norm2.weight - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.norm2.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.mlp.linear1.weight - torch.Size([1536, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.mlp.linear1.bias - torch.Size([1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.mlp.linear2.weight - torch.Size([384, 1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.0.mlp.linear2.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.norm1.weight - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.norm1.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.attn.relative_position_bias_table - torch.Size([2197, 24]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.attn.qkv.weight - torch.Size([1152, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.attn.qkv.bias - torch.Size([1152]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.attn.proj.weight - torch.Size([384, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.attn.proj.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.norm2.weight - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.norm2.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.mlp.linear1.weight - torch.Size([1536, 384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.mlp.linear1.bias - torch.Size([1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.mlp.linear2.weight - torch.Size([384, 1536]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.blocks.1.mlp.linear2.bias - torch.Size([384]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.downsample.reduction.weight - torch.Size([768, 3072]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.downsample.norm.weight - torch.Size([3072]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.swinViT.layers4.0.downsample.norm.bias - torch.Size([3072]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder1.layer.conv1.conv.weight - torch.Size([48, 1, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder1.layer.conv2.conv.weight - torch.Size([48, 48, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder1.layer.conv3.conv.weight - torch.Size([48, 1, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder2.layer.conv1.conv.weight - torch.Size([48, 48, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder2.layer.conv2.conv.weight - torch.Size([48, 48, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder3.layer.conv1.conv.weight - torch.Size([96, 96, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder3.layer.conv2.conv.weight - torch.Size([96, 96, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder4.layer.conv1.conv.weight - torch.Size([192, 192, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder4.layer.conv2.conv.weight - torch.Size([192, 192, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder10.layer.conv1.conv.weight - torch.Size([768, 768, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.encoder10.layer.conv2.conv.weight - torch.Size([768, 768, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder5.transp_conv.conv.weight - torch.Size([768, 384, 2, 2, 2]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder5.conv_block.conv1.conv.weight - torch.Size([384, 768, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder5.conv_block.conv2.conv.weight - torch.Size([384, 384, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder5.conv_block.conv3.conv.weight - torch.Size([384, 768, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder4.transp_conv.conv.weight - torch.Size([384, 192, 2, 2, 2]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder4.conv_block.conv1.conv.weight - torch.Size([192, 384, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder4.conv_block.conv2.conv.weight - torch.Size([192, 192, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder4.conv_block.conv3.conv.weight - torch.Size([192, 384, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder3.transp_conv.conv.weight - torch.Size([192, 96, 2, 2, 2]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder3.conv_block.conv1.conv.weight - torch.Size([96, 192, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder3.conv_block.conv2.conv.weight - torch.Size([96, 96, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder3.conv_block.conv3.conv.weight - torch.Size([96, 192, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder2.transp_conv.conv.weight - torch.Size([96, 48, 2, 2, 2]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder2.conv_block.conv1.conv.weight - torch.Size([48, 96, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder2.conv_block.conv2.conv.weight - torch.Size([48, 48, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder2.conv_block.conv3.conv.weight - torch.Size([48, 96, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder1.transp_conv.conv.weight - torch.Size([48, 48, 2, 2, 2]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder1.conv_block.conv1.conv.weight - torch.Size([48, 96, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder1.conv_block.conv2.conv.weight - torch.Size([48, 48, 3, 3, 3]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.decoder1.conv_block.conv3.conv.weight - torch.Size([48, 96, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.out.conv.conv.weight - torch.Size([9, 48, 1, 1, 1]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
model.out.conv.conv.bias - torch.Size([9]):
The value is the same before and after calling `init_weights` of MMSwinUNETR
2023/08/25 11:50:18 - mmengine - WARNING - Dataset Dataset has no metainfo. ``dataset_meta`` in visualizer will be None.
2023/08/25 11:50:18 - mmengine - WARNING - The prefix is not set in metric class MeanDice.
2023/08/25 11:50:18 - mmengine - WARNING - Dataset Dataset has no metainfo. ``dataset_meta`` in evaluator, metric and visualizer will be None.
2023/08/25 11:50:18 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2023/08/25 11:50:18 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2023/08/25 11:50:18 - mmengine - INFO - Checkpoints will be saved to /public/home/three_whyz123/yhfu/yfliu/run/02high_dimensional_phenotyping/CT/self_supervised_learning/Swin_UNETR/output/training_logs/ssl_model_performance_test/reduce_manual_labeling_efforts/percent_0.1_pretrained.
Name of parameter - Initialization information
(... the second worker process then prints the same parameter initialization log, the same FileClient/HardDiskBackend deprecation warnings, and the same checkpoint-directory message again, identical to the output above ...)
2023/08/25 11:50:30 - mmengine - INFO - Exp name: percent_0.1_pretrained_20230825_115016
2023/08/25 11:50:30 - mmengine - INFO - Epoch(train) [1][8/8] lr: 4.3877e-07 eta: 6:32:39 time: 1.4732 data_time: 0.8846 memory: 9227 loss: 3.0262 global_loss: 3.0262
2023/08/25 11:50:30 - mmengine - INFO - Saving checkpoint at 1 epochs
2023/08/25 11:50:31 - mmengine - INFO - Exp name: percent_0.1_pretrained_20230825_115016
2023/08/25 11:50:31 - mmengine - INFO - Epoch(train) [1][8/8] lr: 4.3877e-07 eta: 6:58:41 time: 1.5709 data_time: 0.8989 memory: 9230 loss: 3.2593 global_loss: 3.2593
2023/08/25 11:50:31 - mmengine - INFO - Saving checkpoint at 1 epochs
2023/08/25 11:50:42 - mmengine - INFO - Epoch(val) [1][10/24] eta: 0:00:14 time: 1.0446 data_time: 0.9265 memory: 3786
2023/08/25 11:50:44 - mmengine - INFO - Epoch(val) [1][10/24] eta: 0:00:15 time: 1.0759 data_time: 0.9410 memory: 3789
2023/08/25 11:50:48 - mmengine - INFO - Epoch(val) [1][20/24] eta: 0:00:03 time: 0.5986 data_time: 0.4830 memory: 4461
2023/08/25 11:50:49 - mmengine - INFO - Epoch(val) [1][20/24] eta: 0:00:03 time: 0.5713 data_time: 0.4533 memory: 4464
2023/08/25 11:50:59 - mmengine - INFO - Epoch(val) [1][24/24] mean_dice: 0.1435 data_time: 0.5874 time: 0.7059
2023/08/25 11:51:00 - mmengine - INFO - The best checkpoint with 0.1435 mean_dice at 1 epoch is saved to best_mean_dice_epoch_1.pth.
2023/08/25 11:51:00 - mmengine - INFO - Epoch(val) [1][24/24] mean_dice: 0.0878 data_time: 0.5810 time: 0.7022
2023/08/25 11:51:01 - mmengine - INFO - The best checkpoint with 0.0878 mean_dice at 1 epoch is saved to best_mean_dice_epoch_1.pth.
2023/08/25 11:51:11 - mmengine - INFO - Exp name: percent_0.1_pretrained_20230825_115016
2023/08/25 11:51:11 - mmengine - INFO - Epoch(train) [2][8/8] lr: 9.3907e-07 eta: 5:40:15 time: 0.9588 data_time: 0.6123 memory: 9227 loss: 2.9758 global_loss: 2.9969
I can successfully run the distributed training example:
python -m torch.distributed.launch --nproc_per_node=8 examples/distributed_training.py --launcher pytorch
Looking forward to your reply~
Yangfan Liu
2023.8.25
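Editor's note: the interleaved log above (two processes each printing their own init log, epoch summaries, and best-checkpoint saves) suggests both workers started, but are binding to the same device. A common cause is a training script that always picks cuda:0 instead of the LOCAL_RANK that torch.distributed.launch exports per worker. The sketch below is hypothetical diagnostic code, not part of the original post; `resolve_device` is an illustrative helper, and the torch calls are shown only in comments so the snippet stays self-contained.

```python
import os

def resolve_device(env: dict) -> str:
    """Derive the CUDA device a worker should bind to from the
    environment that torch.distributed.launch sets per process.

    Each worker gets its own LOCAL_RANK (plus RANK / WORLD_SIZE).
    If a script ignores LOCAL_RANK and hard-codes cuda:0, every
    worker lands on the same GPU, which matches the symptom above.
    """
    local_rank = int(env.get("LOCAL_RANK", 0))
    return f"cuda:{local_rank}"

if __name__ == "__main__":
    # In a real worker you would read os.environ and call, e.g.:
    #   torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    # before building the model or initializing the process group.
    print(resolve_device({"LOCAL_RANK": "1", "RANK": "1", "WORLD_SIZE": "2"}))
```

Printing `resolve_device(os.environ)` at the top of each worker (or logging `LOCAL_RANK` itself) quickly shows whether the launcher is exporting distinct ranks and whether the script is honoring them.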
Hi,
I want to integrate the MONAI package and MMEngine to do distributed training of a medical image segmentation model. Here is the code:
utils.py
main.py
Here is the command: