PG-SWIQA: A Physics-Guided Spatial-Wavelet Interaction Network for No-Reference Underwater Image Quality Assessment
- python=3.8
- torch==2.0.0+cu117
- torchvision==0.15.1+cu117
- torchaudio==2.0.1
- scikit-learn
- pandas
- tensorboardX
- tensorboard
- opencv-python
- imgaug
- timm
- openpyxl
- PyWavelets
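Before running the code, it can help to verify that the dependencies above are importable. The sketch below is a hypothetical helper, not part of the repository; note that some import names differ from the pip package names (opencv-python imports as `cv2`, PyWavelets as `pywt`, scikit-learn as `sklearn`).

```python
# Hypothetical dependency check (not part of the repo): report which of the
# required modules fail to import in the current environment.
import importlib

# Import names corresponding to the package list above (names are assumptions).
REQUIRED = ["torch", "torchvision", "sklearn", "pandas", "cv2", "pywt", "timm"]

def missing_packages(names=REQUIRED):
    """Return the subset of module names that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing
```

If `missing_packages()` returns a non-empty list, install the corresponding packages before running "main.py".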
- Place the dataset images in the corresponding folder; the labels are already provided in "mos.xlsx". The folder structure is as follows:
```
PIGUIQA/
│ ...
├───Data/
│ ├───SAUD2.0/
│ │ ├───mos_result/
│ │ │ ├───mos.xlsx
│ │ │ ├───record.txt
│ │ │ └───results.xlsx
│ │ ├───train/
│ │ │ ├───train_dataset.pth
│ │ │ └───...
│ │ ├───test/
│ │ │ ├───test_dataset.pth
│ │ │ └───...
│ │ ├───001_BL-TM.png
│ │ ├───001_GL-net.png
│ │ └───...
│ ├───SOTA20000/
│ │ └───...
│ ├───UID2021/
│ │ └───...
│ └───UWIQA/
│ └───...
│ ...
```
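A quick sanity check that a dataset folder matches this layout can save a failed run. The helper below is a minimal sketch (not part of the repo); it only checks the label file, since the "train"/"test" folders are generated later.

```python
# Hypothetical sanity check (paths assumed from the tree above): confirm a
# dataset root, e.g. Data/SAUD2.0/, contains the label spreadsheet before
# running main.py.
import os

def check_dataset(root):
    """Return a list of required paths that are missing under the dataset root."""
    required = [
        os.path.join(root, "mos_result", "mos.xlsx"),
    ]
    return [p for p in required if not os.path.exists(p)]
```

An empty return value means the label file is in place; anything else lists what still needs to be copied in.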
- The "train" folder, "test" folder, "record.txt", and "results.xlsx" are created automatically when "main.py" is run.
- Please run "main.py".
- For training, set "train = True" and set your "data_path". The folder structures of SAUD2.0, SOTA20000, UID2021, and UWIQA are given above; you can also use your own dataset.
- For testing, set "train = False" and set your "data_path" and "pretrained_model_path".
- The training log and the testing results can be found in "record.txt" and "results.xlsx", respectively.
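The settings above might look roughly like the following inside "main.py". This is a hedged sketch: the variable names ("train", "data_path", "pretrained_model_path") come from this README, but the actual structure of the script may differ.

```python
# Sketch of the run configuration described above (names per the README;
# the concrete paths below are placeholders, not repository defaults).
train = True                                        # True: training, False: testing
data_path = "./Data/SAUD2.0/"                       # root of the chosen dataset
pretrained_model_path = "./checkpoints/model.pth"   # used only when train is False
```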