# Fork of [GuoLanqing/ShadowFormer](https://github.com/GuoLanqing/ShadowFormer)

Differences between original repository and fork:
* Compatibility with PyTorch >=2.5. (🔥)
* Original pretrained models and converted ONNX models from the GitHub [releases page](https://github.com/clibdev/ShadowFormer/releases). (🔥)
* Model conversion to ONNX format using the [export.py](export.py) file. (🔥)
* Sample script [inference.py](inference.py) for inference on a single image.
* The following deprecation warnings have been fixed (a sketch of the corresponding changes follows this list):
  * UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument.
  * FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers.
  * FutureWarning: You are using 'torch.load' with 'weights_only=False'.
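
These warnings correspond to small API migrations in recent PyTorch and timm releases. A minimal, illustrative sketch of the kind of changes involved (not the repository's exact diffs):

```python
import torch

# torch.meshgrid now expects an explicit indexing argument.
ys, xs = torch.meshgrid(torch.arange(4), torch.arange(8), indexing='ij')

# timm layers moved from timm.models.layers to the timm.layers namespace.
from timm.layers import DropPath, trunc_normal_

# weights_only=True avoids the torch.load FutureWarning; it is safe for
# checkpoints that contain only tensors and plain Python containers.
state = torch.load('shadowformer-istd.pt', map_location='cpu', weights_only=True)
```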
# Installation

```shell
pip install -r requirements.txt
```
# Pretrained models

* Download links:

| Name                 | Model Size (MB) | Link                                                                                                                                                                                                          | SHA-256                                                                                                                              |
|----------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|
| ShadowFormer (ISTD)  | 130.9<br>83.0   | [PyTorch](https://github.com/clibdev/ShadowFormer/releases/latest/download/shadowformer-istd.pt)<br>[ONNX](https://github.com/clibdev/ShadowFormer/releases/latest/download/shadowformer-istd.onnx)           | 4700ae374b965253734dbcac0b63c9cac9af5895ff19655710042a988751fc98<br>96b90f5f1d11b67e3c7835cae3ccacaaa78ac4fadbf03a04fd36769e21f619a6 |
| ShadowFormer (ISTD+) | 130.9<br>83.0   | [PyTorch](https://github.com/clibdev/ShadowFormer/releases/latest/download/shadowformer-istd-plus.pt)<br>[ONNX](https://github.com/clibdev/ShadowFormer/releases/latest/download/shadowformer-istd-plus.onnx) | 2748060149908df37cc65f0695ef61d64cd25847aba0c35af36823f9b780f5b2<br>077128017e7400c0e7c22210d6afb83748bfb068a6e02037156ea4ab8a8592a9 |
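
After downloading, the SHA-256 of a file can be checked against the table above, for example (assuming the file keeps its release name):

```python
import hashlib
from pathlib import Path

# Compare against the checksum listed in the table above.
digest = hashlib.sha256(Path('shadowformer-istd.pt').read_bytes()).hexdigest()
print(digest == '4700ae374b965253734dbcac0b63c9cac9af5895ff19655710042a988751fc98')
```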
# Inference

```shell
python inference.py --weights shadowformer-istd.pt --input_path img/noisy_image.png --mask_path img/mask.png
python inference.py --weights shadowformer-istd-plus.pt --input_path img/noisy_image.png --mask_path img/mask.png
```
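
Both commands take a shadow image (`--input_path`) together with its binary shadow mask (`--mask_path`), which the model consumes as an additional input alongside the image.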
# Export to ONNX format

```shell
pip install onnx
```
```shell
python export.py --weights shadowformer-istd.pt
python export.py --weights shadowformer-istd-plus.pt
```
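
Once exported, the model can be run without PyTorch via onnxruntime. A minimal sketch follows; the input names, shapes, and dummy data below are assumptions, so inspect `session.get_inputs()` for the real signature:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('shadowformer-istd.onnx', providers=['CPUExecutionProvider'])
print([(i.name, i.shape) for i in session.get_inputs()])  # discover the real inputs

# Assumed inputs: an NCHW float32 shadow image and a single-channel shadow mask.
image = np.random.rand(1, 3, 480, 640).astype(np.float32)
mask = np.random.rand(1, 1, 480, 640).astype(np.float32)
feeds = dict(zip([i.name for i in session.get_inputs()], [image, mask]))
restored = session.run(None, feeds)[0]  # shadow-free output
print(restored.shape)
```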