This is the official repository for the paper "SolarFusionNet: Enhanced Solar Irradiance Forecasting via Automated Multi-Modal Feature Selection and Cross-Modal Fusion". This work introduces SolarFusionNet, a novel deep learning architecture that effectively integrates automatic multi-modal feature selection and cross-modal data fusion. SolarFusionNet utilizes two distinct types of automatic variable feature selection units to extract relevant features from multichannel satellite images and multivariate meteorological data, respectively. Long-term dependencies are then captured using three types of recurrent layers, each tailored to its corresponding data modality. In particular, a novel Gaussian kernel-injected convolutional long short-term memory network is specifically designed to isolate the sparse features present in the cloud motion field derived from optical flow. Subsequently, a hierarchical multi-head cross-modal self-attention mechanism, built on the physical-logical dependencies among the three modalities, investigates their coupling correlations. The experimental results indicate that SolarFusionNet exhibits robust performance in predicting regional solar irradiance, achieving higher accuracy than other state-of-the-art models and a forecast skill ranging from 37.4% to 47.6% against the smart persistence model for the 4-hour-ahead forecast.
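As a rough illustration of the cross-modal attention idea, here is a minimal single-head-loop sketch in NumPy. This is not the authors' implementation: it omits the learned query/key/value projections, the hierarchical stacking, and the physics-informed modality ordering described in the paper, and all function and variable names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, context_feats, n_heads=4):
    """Scaled dot-product cross-attention: one modality (query_feats)
    attends to another (context_feats).

    Both inputs have shape (T, d), with d divisible by n_heads.
    Each head works on its own d // n_heads slice of the features.
    """
    T, d = query_feats.shape
    dh = d // n_heads
    out = np.empty_like(query_feats)
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        q, k, v = query_feats[:, sl], context_feats[:, sl], context_feats[:, sl]
        attn = softmax(q @ k.T / np.sqrt(dh), axis=-1)  # (T, T) attention weights
        out[:, sl] = attn @ v                            # weighted sum of context
    return out
```

Because each attention row is a convex combination of the context features, every output value stays within the range of the corresponding context column.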
The satellite data can be downloaded from EUMETSAT (select the RSS dataset). Then use `reproject.py` in `scripts` to crop the region of interest; the YAML file in `configs` must be set up before cropping.
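The cropping step amounts to selecting a latitude/longitude bounding box from the full-disk image. The sketch below is illustrative only; `crop_region` and its arguments are hypothetical and do not reflect the actual interface of `reproject.py` or the keys of the YAML config.

```python
import numpy as np

def crop_region(image, lat, lon, bbox):
    """Crop a satellite image array to a lat/lon bounding box.

    image: (H, W) array of pixel values
    lat:   (H,) latitude of each row
    lon:   (W,) longitude of each column
    bbox:  (lat_min, lat_max, lon_min, lon_max)
    """
    lat_min, lat_max, lon_min, lon_max = bbox
    rows = np.where((lat >= lat_min) & (lat <= lat_max))[0]
    cols = np.where((lon >= lon_min) & (lon <= lon_max))[0]
    # Keep the contiguous block spanned by the matching rows/columns
    return image[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```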
BSRN data can be downloaded via the solar data collection provided by Prof. Yang, or directly from the official BSRN website. Don't forget to perform quality control.
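As one example of a quality-control step, here is a minimal sketch of the BSRN "physically possible limits" test for GHI (Long & Dutton): a sample passes if -4 W/m² ≤ GHI ≤ S0 · 1.5 · μ0^1.2 + 100 W/m², where S0 is the solar constant adjusted for Sun-Earth distance and μ0 is the cosine of the solar zenith angle. Full BSRN QC also includes extremely-rare limits and cross-component comparisons, which are omitted here.

```python
import numpy as np

def ghi_physical_limit_flag(ghi, cos_zenith, s0=1361.0):
    """Return True where GHI passes the BSRN physically-possible-limits test.

    ghi:        array of global horizontal irradiance values [W/m^2]
    cos_zenith: array of cos(solar zenith angle) for the same timestamps
    s0:         solar constant [W/m^2]; adjust for Sun-Earth distance upstream
    """
    mu0 = np.clip(cos_zenith, 0.0, 1.0)       # night-time values clip to 0
    upper = s0 * 1.5 * mu0 ** 1.2 + 100.0     # upper physical limit
    return (ghi >= -4.0) & (ghi <= upper)
```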
The PyTorch packages required by the model must be installed before training; exact versions are listed in `requirements.txt`. Once the requirements for model training have been met, all the files in `configs` need to be configured.
```bash
# clone project
git clone https://github.com/JOTYtao/Solar_Fusionformer.git

# create conda environment
conda create -n Solar_Fusionformer python=3.9
conda activate Solar_Fusionformer

# install requirements
pip install -r requirements.txt
```

The following baseline models are included:
- Autoformer - Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting [NeurIPS 2021]
- FEDformer - FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting [ICML 2022]
- TFT - Temporal Fusion Transformers for interpretable multi-horizon time series forecasting [International Journal of Forecasting]
- CrossViViT - Improving day-ahead Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context [NeurIPS 2023]
If you find this work useful in your research, please cite:
```bibtex
@ARTICLE{10723760,
  author={Jing, Tao and Chen, Shanlin and Navarro-Alarcon, David and Chu, Yinghao and Li, Mengying},
  journal={IEEE Transactions on Sustainable Energy},
  title={SolarFusionNet: Enhanced Solar Irradiance Forecasting via Automated Multi-Modal Feature Selection and Cross-Modal Fusion},
  year={2025},
  volume={16},
  number={2},
  pages={761-773},
  keywords={Feature extraction;Satellite images;Forecasting;Optical flow;Solar irradiance;Predictive models;Data models;Deep learning;Accuracy;Correlation;Solar irradiance forecasting;multi-modal deep learning;attention mechanism;optical flow},
  doi={10.1109/TSTE.2024.3482360}}
```
This codebase is built on CrossViViT.