
Learnable Activation Functions in Physics-Informed Neural Networks for Solving Partial Differential Equations

About

This repository contains the source code, data, and checkpoints for our paper Learnable Activation Functions in Physics-Informed Neural Networks for Solving Partial Differential Equations, available on arXiv.

Animation Demo

The paper discusses five PDEs: Helmholtz, Wave, Klein-Gordon, Convection-Diffusion, and Cavity. To analyze spectral bias for these PDEs, refer to the provided notebooks; instructions for running them are below.

The animations below demonstrate how different activation functions manage spectral bias and convergence in a simple example.

Tanh

2025-06-16_15-34-53-200596_animation.mp4

SiLU

2025-06-16_18-40-53-199457_animation.mp4

Parameterized-Tanh

2025-06-16_15-35-47-393683_animation.mp4

B-spline

2025-06-16_15-33-30-611314_animation.mp4

Fourier

2025-06-16_16-09-33-188557_animation.mp4

Jacobi

2025-06-16_16-12-53-067559_animation.mp4

Chebyshev

2025-06-16_16-13-54-042695_animation.mp4

B-spline + SiLU

2025-06-16_15-23-56-550055_animation.mp4

GRBF

2025-06-16_15-37-13-361307_animation.mp4

GRBF + SiLU

2025-06-16_16-34-20-361763_animation.mp4
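To give a concrete sense of what "learnable" means for an activation such as Parameterized-Tanh, here is a minimal pure-Python sketch of the form f(x) = a · tanh(b · x), where the scale a and slope b would be trained alongside the network weights. The class name and plain-float parameters are illustrative only; the repository's actual implementations live in src/nn.

```python
import math

class ParameterizedTanh:
    """Sketch of a learnable activation f(x) = a * tanh(b * x).

    In a real PINN, a and b would be registered as trainable tensors
    and updated by the optimizer; here they are plain floats purely
    for illustration.
    """

    def __init__(self, a: float = 1.0, b: float = 1.0):
        self.a = a  # learnable output scale
        self.b = b  # learnable input slope

    def __call__(self, x: float) -> float:
        return self.a * math.tanh(self.b * x)

# With a = b = 1 this reduces to the standard tanh.
act = ParameterizedTanh()
print(act(0.0))  # 0.0
```

The extra parameters let each layer adapt the slope of its nonlinearity, which is one way to mitigate spectral bias toward low-frequency solution components.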

Project structure

├── checkpoint     /* Logs and checkpoints, not committed to git */
├── data           /* PDE data */
├── model          /* Final trained models - copied from checkpoints */
├── result         /* Final training logs/figures */
└── src 

   ├── data        /* PyTorch data loaders */
   ├── nn          /* PINN code, e.g., Cavity, Wave, etc.*/
   ├── notebooks   /* Test models, generate plots, various other notebooks */
   ├── trainer     /* PyTorch trainer code, that runs the nn code */
   └── utils       /* Additional utility code */

Setup environment

The code was tested on Ubuntu 20.04 LTS with an NVIDIA A100 GPU.

conda env create -f environment.yml
conda activate pinn_learnable_activation

# Check that PyTorch and CUDA are available
python -m src.utils.check_torch
    Version 2.4.0
    CUDA: True
    CUDA Version: 12.4
    NCCL Version: (2, 20, 5)
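The repository's actual check is src.utils.check_torch; as a rough sketch of what such a check does, the hypothetical helper below reports PyTorch and CUDA availability without crashing when torch is not installed (the function name report_torch is illustrative, not the repository's code).

```python
def report_torch() -> dict:
    """Report PyTorch/CUDA availability.

    Hypothetical sketch of an environment check; the repository's
    real check lives in src.utils.check_torch. Returns a dict so the
    caller can decide how to react to a missing GPU or library.
    """
    info = {"torch_available": False, "cuda_available": False, "version": None}
    try:
        import torch  # treated as an optional dependency in this sketch
    except ImportError:
        return info
    info["torch_available"] = True
    info["version"] = torch.__version__
    info["cuda_available"] = torch.cuda.is_available()
    return info

if __name__ == "__main__":
    print(report_torch())
```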

Training

To train models, run the following commands:

# Cavity
python -m src.trainer.main_trainer --total_epochs 60000 --save_every 1000 --print_every 1000 --batch_size 128 --log_path ./checkpoints --solver tanh --problem cavity --weights "[2, 2, 2, 2, 4, 0.1]" --network "[3, 300, 300, 300, 3]" --dataset_path ./data/cavity.mat

# Wave
python -m src.trainer.main_trainer --total_epochs 60000 --save_every 1000 --print_every 1000 --batch_size 128 --log_path ./checkpoints --solver tanh --problem wave --weights "[100.0, 100.0, 1.0]" --network "[2, 300, 300, 300, 300, 1]"

# Helmholtz
python -m src.trainer.main_trainer --total_epochs 60000 --save_every 1000 --print_every 1000 --batch_size 128 --log_path ./checkpoints --solver tanh --problem helmholtz --weights "[10.0, 1.0]" --network "[2, 30, 30, 30, 1]"

# KleinGordon
python -m src.trainer.main_trainer --total_epochs 60000 --save_every 1000 --print_every 1000 --batch_size 128 --log_path ./checkpoints --solver tanh --problem klein_gordon --weights "[50.0, 50.0, 1.0]" --network "[2, 30, 30, 30, 1]"

# Diffusion
python -m src.trainer.main_trainer --total_epochs 60000 --save_every 1000 --print_every 1000 --batch_size 128 --log_path ./checkpoints --solver tanh --problem diffusion --weights "[10.0, 10.0, 1.0]" --network "[3, 300, 300, 300, 1]"
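The --weights and --network flags above take bracketed lists passed as quoted strings. One simple way such values can be parsed is with ast.literal_eval, sketched below; this illustrates the flag format, and the helper name parse_list_arg is an assumption, not necessarily the trainer's actual parsing code.

```python
import ast

def parse_list_arg(raw: str) -> list:
    """Parse a bracketed list string such as "[10.0, 1.0]" into a Python list.

    Illustrative sketch of how --weights / --network values could be
    parsed; the trainer's real implementation may differ.
    """
    value = ast.literal_eval(raw)
    if not isinstance(value, list):
        raise ValueError(f"expected a list, got {type(value).__name__}")
    return value

# e.g. the Helmholtz settings above:
weights = parse_list_arg("[10.0, 1.0]")
network = parse_list_arg("[2, 30, 30, 30, 1]")
print(weights, network)  # [10.0, 1.0] [2, 30, 30, 30, 1]
```

Note that --network lists the layer widths (input, hidden layers, output), so "[2, 30, 30, 30, 1]" describes a network mapping 2 inputs through three hidden layers of width 30 to 1 output.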

Notebooks

We provide all pre-trained models and their training loss histories, so the notebooks can be run independently of training the models.

Test models:

  • Cavity: cavity_test_model.ipynb
  • Helmholtz: helmholtz_test_model.ipynb
  • Klein-Gordon: klein_gordon_test_model.ipynb
  • Wave: wave_test_model.ipynb
  • Diffusion: diffusion_test_model.ipynb

Plot loss history and test results:

  • Cavity training loss history: cavity_plot_training_loss_history.ipynb
  • Cavity contour plot of test and error: cavity_plot_contour.ipynb
  • Helmholtz training loss history: helmholtz_plot_training_loss_history.ipynb
  • Helmholtz contour plot of test and error: helmholtz_plot_contour.ipynb

Plot convergence analysis:

  • Cavity convergence analysis: cavity_spectral_analysis.ipynb
  • Helmholtz convergence analysis: helmholtz_spectral_analysis.ipynb
  • Klein-Gordon convergence analysis: klein_gordon_spectral_analysis.ipynb
  • Wave convergence analysis: wave_spectral_analysis.ipynb
  • Diffusion convergence analysis: diffusion_spectral_analysis.ipynb

Support & Contribution

If you have a question or suggestion, or would like to contribute to the repository, please open a GitHub issue or email me.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you find this work useful, please consider citing it:

@article{FAREA2025109753,
  title   = {{Learnable activation functions in physics-informed neural networks for solving partial differential equations}},
  author  = {Afrah Farea and Mustafa Serdar Celebi},
  journal = {Computer Physics Communications},
  volume  = {315},
  year    = {2025},
  url     = {https://www.sciencedirect.com/science/article/pii/S0010465525002553},
}

Thank you for taking the time to browse our work!