# Mesh-free, Data-free Training of Neural Operators via Monte Carlo Weak Supervision
*The WoS-NO pipeline: stochastic random walks provide weak supervision.*
Training neural PDE solvers is often bottlenecked by expensive data generation or by unstable physics-informed neural network (PINN) training, whose optimization landscapes are made challenging by higher-order derivatives. To tackle this, we propose an alternative: Monte Carlo estimates of the PDE solution, viewed as a stochastic process, provide weak supervision during training.
Recently, an efficient, discretization-free Monte Carlo algorithm called Walk-on-Spheres (WoS) has been popularized for solving PDEs via random walks. Building on this, we introduce the Walk-on-Spheres Neural Operator (WoS-NO), a learning scheme that uses weak supervision from WoS to train any given neural operator.
The central principle of our method is to amortize the cost of Monte Carlo walks across a distribution of PDE instances. We use the stochastic representation given by the WoS algorithm to generate cheap, noisy, yet unbiased estimates of the PDE solution during training. These estimates feed a data-free, physics-informed objective in which a neural operator is trained to regress against the weak supervision. Because the estimates are unbiased, the operator learns a generalized solution map for an entire family of PDEs.
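To make the weak-supervision signal concrete, below is a minimal pure-Python sketch of a Walk-on-Spheres estimator for Laplace's equation on the unit disk. This is an illustration only, not the project's solver (which uses the compiled Zombie C++ bindings); the function names, the disk domain, and the parameter defaults here are assumptions for the sketch.

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, max_steps=1000):
    """One WoS walk for Laplace's equation on the unit disk.

    Repeatedly jumps to a uniform point on the largest circle
    centered at the current point that fits inside the domain,
    stopping once within eps of the boundary, then evaluates the
    boundary condition g at the nearest boundary point.
    """
    for _ in range(max_steps):
        # Distance from (x, y) to the boundary of the unit disk.
        r = 1.0 - math.hypot(x, y)
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    # Project onto the boundary and evaluate g there.
    n = math.hypot(x, y)
    return g(x / n, y / n)

def wos_estimate(x, y, g, n_walks=2000):
    """Unbiased Monte Carlo estimate of u(x, y): the mean over walks."""
    return sum(walk_on_spheres(x, y, g) for _ in range(n_walks)) / n_walks
```

Each walk is cheap and unbiased but noisy; during training, the neural operator regresses against averages of such walks, which is what lets their cost be amortized across many PDE instances.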
This strategy results in a mesh-free framework that operates without expensive pre-computed datasets, avoids memory-intensive and unstable loss functions built on higher-order derivatives, and demonstrates zero-shot generalization to novel PDE parameters and domains. Experiments show that, for the same number of training steps, our method achieves up to an 8.75× improvement.
## 📦 Dependencies

This project relies on a specific stack of scientific computing libraries. Key dependencies include:
| Component | Library |
|---|---|
| Deep Learning | PyTorch, `neuraloperator` |
| PDE Solving | FEniCS, `mshr`, Zombie (WoS bindings) |
| Config/Logs | Hydra, Weights & Biases |
| Acceleration | CUDA 12.6 |
The datasets required for training and evaluation are available via Google Drive.
## ⚙️ Installation

The installation process involves setting up a Conda environment for FEniCS and compiling custom C++ bindings for the Walk-on-Spheres solver.

Install the system packages:
```bash
sudo apt-get update
sudo apt-get install -y gcc-10 g++-10 libboost-iostreams-dev libtbb-dev libblosc-dev
```

Create the environment using Conda (the libmamba solver is recommended for speed):
```bash
conda create -n MCGINO -c conda-forge python=3.10 fenics=2019.1.0 mshr=2019.1.0 --solver=libmamba
conda activate MCGINO
```

Set your compiler variables:

```bash
export CC=gcc-10
export CXX=g++-10
```
Build OpenVDB:

```bash
cd lib/openvdb
mkdir -p build && cd build
cmake .. -D OPENVDB_BUILD_PYTHON_MODULE=ON -D USE_NUMPY=ON
sudo make install
cd ../../..
```
Build the solver bindings:

```bash
cd bindings/zombie
mkdir -p build && cd build
cmake ..
make -j4
# Move the compiled library to the source folder
cp zombie_bindings.cpython-310-x86_64-linux-gnu.so ../../../src/solvers/wos/
cd ../../..
```

Install NeuralOperator and the remaining requirements:
```bash
git clone https://github.com/NeuralOperator/neuraloperator
cd neuraloperator
pip install -e .
cd ..
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.6.0+cu126.html
pip install wandb hydra-core==1.3.2 hydra-submitit-launcher==1.2.0 torch_harmonics==0.8.0 jax pyparsing
```

## 🚀 Usage

Before running, ensure you have set up your CUDA environment variables:
```bash
export CUDA_HOME=/usr/local/cuda-12.6
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
```

> [!WARNING]
> **Important config note:** Search the codebase for the string `hong` and replace the corresponding directory paths with your local absolute paths before starting training.
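The placeholder paths mentioned above can be located with a standard `grep`; a minimal sketch (searching from the repository root — narrow the path or add `--include` filters as needed):

```shell
# List every file under the current directory that still
# contains the placeholder string "hong".
grep -rln "hong" .
```

Each listed file can then be opened and its paths edited to point at your local directories.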
### Training in 2D

To train the model on the 2D Linear Poisson dataset using the Walk-on-Spheres loss:

```bash
python -m scripts.main \
    loss=wos \
    model=gino \
    dataset=linear_poisson2d \
    train=wos \
    solver=wos2d
```

### Training in 3D (Experimental)

To train on the 3D dataset:
```bash
python -m scripts.main \
    loss=wos3d \
    model=gino3d \
    dataset=linear_poisson3d \
    train=wos3d \
    solver=wos3d
```

> [!NOTE]
> **Regarding `train=wos`:** Always pass the `train=wos` (or `train=wos3d`) flag if you do not want to compute the gradient with respect to inputs (BVC). Omitting it will result in significantly higher memory usage.
## 🐛 Known Issues

- **3D stability:** The 3D implementation has not been extensively tested and may require hyperparameter tuning.
- **2D BVC:** Boundary Value Correction (BVC) in 2D is currently unstable.
## 📜 Citation

If you use this code for your research, please cite our work:

```bibtex
@article{viswanath2026operator,
  title={Operator Learning Using Weak Supervision from Walk-on-Spheres},
  author={Viswanath, Hrishikesh and Nam, Hong Chul and Deng, Xi and Berner, Julius and Anandkumar, Anima and Bera, Aniket},
  journal={arXiv preprint arXiv:2603.01193},
  year={2026}
}
```