
# WoS-NO: Walk-on-Spheres Neural Operator

**Mesh-free, Data-free Training of Neural Operators via Monte Carlo Weak Supervision**



*The WoS-NO pipeline, which leverages stochastic random walks for weak supervision.*


## 📖 Overview

Training neural PDE solvers is often bottlenecked by expensive data generation or by unstable physics-informed neural network (PINN) training, whose optimization landscapes are challenging due to higher-order derivatives. To tackle this, we propose an alternative approach: Monte Carlo methods estimate the PDE solution as a stochastic process, providing weak supervision during training.

Recently, an efficient discretization-free Monte Carlo algorithm called Walk-on-Spheres (WoS) has been popularized for solving PDEs via random walks. Leveraging this, we introduce the Walk-on-Spheres Neural Operator (WoS-NO), a learning scheme that uses weak supervision from WoS to train any given neural operator.
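To make the random-walk idea concrete, here is a minimal NumPy sketch of a Walk-on-Spheres estimator for Laplace's equation on the unit disk. This is an illustration only, not the repository's compiled Zombie solver; the function name, tolerance, and walk count are assumptions.

```python
import numpy as np

def wos_laplace_disk(x0, boundary_g, n_walks=5000, eps=1e-4, seed=0):
    """Monte Carlo estimate of u(x0) for Laplace's equation on the unit
    disk with Dirichlet boundary data boundary_g, via Walk-on-Spheres.

    Each walk jumps to a uniform point on the largest circle centered at
    the current position that fits inside the domain, and stops once it
    is within eps of the boundary; the boundary value at the (projected)
    exit point is an unbiased sample of u(x0)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.asarray(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)  # distance to the unit circle
            if r < eps:
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
        total += boundary_g(x / np.linalg.norm(x))  # project onto the boundary
    return total / n_walks

# u(x, y) = x is harmonic, so WoS should recover it at any interior point.
estimate = wos_laplace_disk((0.3, 0.2), boundary_g=lambda p: p[0])
```

Averaging many walks gives an estimate of u(0.3, 0.2) ≈ 0.3 with error decaying at the usual O(1/√n) Monte Carlo rate; samples of this kind serve as cheap, unbiased regression targets rather than being averaged to high precision.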

The central principle of our method is to amortize the cost of Monte Carlo walks across the distribution of PDE instances. Our method leverages stochastic representations via the WoS algorithm to generate cheap, noisy, yet unbiased estimates of the PDE solution during training. This is formulated as a data-free physics-informed objective in which a neural operator is trained to regress against these weak supervision signals. Because the estimates are unbiased, the operator learns a generalized solution map for an entire family of PDEs.
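Why do noisy targets suffice? A mean-squared error against unbiased estimates is minimized, in expectation, by their mean, i.e., the true solution, so the regression averages the Monte Carlo noise away. The toy NumPy sketch below illustrates this with a hypothetical sinusoidal "solution", Gaussian noise, and a small Fourier basis standing in for the neural operator; all three are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth "solution" at 200 query points (stand-in for u(x)).
x = np.linspace(0.0, 1.0, 200)
u_true = np.sin(2.0 * np.pi * x)

# 500 noisy but unbiased estimates per point (stand-in for WoS samples).
targets = u_true + rng.normal(scale=0.5, size=(500, x.size))

# MSE regression of a small Fourier basis onto every noisy sample
# (stand-in for training a model against weak supervision).
basis = np.stack([np.ones_like(x),
                  np.sin(2.0 * np.pi * x),
                  np.cos(2.0 * np.pi * x)], axis=1)   # shape (200, 3)
A = np.tile(basis, (targets.shape[0], 1))             # one copy per sample
coeffs, *_ = np.linalg.lstsq(A, targets.reshape(-1), rcond=None)

u_fit = basis @ coeffs
max_err = np.max(np.abs(u_fit - u_true))
```

Although each individual target carries noise of standard deviation 0.5, the fit recovers the true coefficients (0, 1, 0) to within a few thousandths, because the unbiased noise cancels across samples.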

This strategy yields a mesh-free framework that operates without expensive pre-computed datasets, avoids memory-intensive and unstable loss functions built on higher-order derivatives, and demonstrates zero-shot generalization to novel PDE parameters and domains. Experiments show that, for the same number of training steps, our method achieves up to 8.75× lower $L_2$-error than standard physics-informed training schemes, up to 6.31× faster training, and up to 2.97× lower GPU memory consumption.


## 🛠️ Tech Stack & Requirements

This project relies on a specific stack of scientific computing libraries. Key dependencies include:

| Component     | Libraries                            |
| ------------- | ------------------------------------ |
| Deep Learning | PyTorch, Torch-Scatter, NeuralOperator |
| PDE Solving   | FEniCS, mshr, SciPy                  |
| Config/Logs   | Hydra, WandB                         |
| Acceleration  | CUDA, JAX, Triton                    |

## 💾 Data

The datasets required for training and evaluation are available via Google Drive.


## ⚙️ Installation

The installation process involves setting up a Conda environment for FEniCS and compiling custom C++ bindings for the Walk-on-Spheres solver.

### 1. System Dependencies (Linux)

```bash
sudo apt-get update
sudo apt-get install -y gcc-10 g++-10 libboost-iostreams-dev libtbb-dev libblosc-dev
```

### 2. Environment Setup

Create the environment using Conda (the libmamba solver is recommended for speed):

```bash
conda create -n MCGINO -c conda-forge python=3.10 fenics=2019.1.0 mshr=2019.1.0 --solver=libmamba
conda activate MCGINO
```

### 3. Build Custom Bindings (OpenVDB & Zombie)

Set your compiler variables:

```bash
export CC=gcc-10
export CXX=g++-10
```

Build OpenVDB:

```bash
cd lib/openvdb
mkdir -p build && cd build
cmake .. -D OPENVDB_BUILD_PYTHON_MODULE=ON -D USE_NUMPY=ON
sudo make install
cd ../../..
```

Build the solver bindings:

```bash
cd bindings/zombie
mkdir -p build && cd build
cmake ..
make -j4
# Move the compiled library to the source folder
cp zombie_bindings.cpython-310-x86_64-linux-gnu.so ../../../src/solvers/wos/
cd ../../..
```

### 4. Python Dependencies

Install NeuralOperator from source:

```bash
git clone https://github.com/NeuralOperator/neuraloperator
cd neuraloperator
pip install -e .
cd ..
```

Install PyTorch and Torch-Scatter (the commands below target CUDA 12.6; adjust the URLs to your CUDA version if needed):

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.6.0+cu126.html
```

Install the remaining Python requirements:

```bash
pip install wandb hydra-core==1.3.2 hydra-submitit-launcher==1.2.0 torch_harmonics==0.8.0 jax pyparsing
```

## 🚀 Usage

Before running, ensure you have set up your CUDA environment variables:

```bash
export CUDA_HOME=/usr/local/cuda-12.6
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
```

> [!WARNING]
> **Important config note:** Search the codebase for the string `hong` and replace the corresponding directory paths with your local absolute paths before starting training.

### Training in 2D

To train the model on the 2D Linear Poisson dataset using the Walk-on-Spheres loss:

```bash
python -m scripts.main \
    loss=wos \
    model=gino \
    dataset=linear_poisson2d \
    train=wos \
    solver=wos2d
```

### Training in 3D (Experimental)

To train on the 3D dataset:

```bash
python -m scripts.main \
    loss=wos3d \
    model=gino3d \
    dataset=linear_poisson3d \
    train=wos3d \
    solver=wos3d
```

> [!NOTE]
> **Regarding `train=wos`:** Always pass `train=wos` (or `train=wos3d`) if you do not want to compute gradients with respect to the inputs (BVC). Omitting it results in significantly higher memory usage.

## 🐛 Known Issues

- **3D stability:** The 3D implementation has not been extensively tested and may require hyperparameter tuning.
- **2D BVC:** Boundary Value Correction (BVC) in 2D is currently unstable.

## 📜 Citation

If you use this code for your research, please cite our work:

```bibtex
@article{viswanath2026operator,
  title={Operator Learning Using Weak Supervision from Walk-on-Spheres},
  author={Viswanath, Hrishikesh and Nam, Hong Chul and Deng, Xi and Berner, Julius and Anandkumar, Anima and Bera, Aniket},
  journal={arXiv preprint arXiv:2603.01193},
  year={2026}
}
```
