This repository contains code for training a neural network whose loss function is built from an Ordinary Differential Equation (ODE) residual, i.e. a physics-informed neural network. The project uses JAX for efficient computation and Hydra for configuration management.
To learn more about the specific problem being solved, read the problem definition.
For a guided walkthrough, open the walkthrough notebook (`walkthrough.ipynb`).
- Clone the repository:

  ```bash
  git clone https://github.com/AndreKev/physics-informed-neural-solver.git
  cd physics-informed-neural-solver
  ```

- Create a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- To solve the ODE with the numerical methods (`euler` and `runge_kutta`), run:

  ```bash
  python data_generator.py
  ```

- To train the model with `mse_loss` on training data, run:

  ```bash
  python train.py
  ```

- To train the model with `ode_loss` and initial conditions, run:

  ```bash
  python ode_train.py
  ```

All three scripts use Hydra for configuration management.
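Based on the defaults in `conf/config.yaml`, the problem appears to be a damped pendulum. The sketch below shows the assumed dynamics and the two solver update rules; this is an illustration under that assumption, not the repository's exact code (the real solvers live in `data_generator.py`):

```python
import jax.numpy as jnp

def pendulum_rhs(t, y, b=0.3, m=1.0, l=1.0, g=9.81):
    """Damped pendulum as a first-order system; state y = [theta, omega].

    Assumes m*l^2*theta'' + b*theta' + m*g*l*sin(theta) = 0; parameter
    names mirror `solver.params` in config.yaml (an assumption).
    """
    theta, omega = y
    return jnp.array([omega, -(b / (m * l**2)) * omega - (g / l) * jnp.sin(theta)])

def euler_step(f, t, y, dt):
    """First-order Euler update: y_{n+1} = y_n + dt * f(t_n, y_n)."""
    return y + dt * f(t, y)

def rk4_step(f, t, y, dt):
    """Fourth-order Runge-Kutta update (local error O(dt^5))."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# One step from the default initial state y0 = [2*pi/3, 0]:
y0 = jnp.array([2.0943951023931953, 0.0])
y_euler = euler_step(pendulum_rhs, 0.0, y0, 0.01)
y_rk4 = rk4_step(pendulum_rhs, 0.0, y0, 0.01)
```

RK4 evaluates the right-hand side four times per step, which is why it is markedly more accurate than Euler at the same `time_step`.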
The configuration is managed using Hydra. The main configuration file is located at `conf/config.yaml`. You can modify this file to change the parameters for the model, solver, and training process:

```yaml
defaults:
  - solver: runge_kutta  # Default solver
  - _self_

solver:
  name: [euler, runge_kutta]  # Single options: euler, runge_kutta
  use_lax: True  # Use the lax solver function by default
  params:
    b: 0.3
    m: 1.0
    l: 1.0
    g: 9.81
    y0: [2.0943951023931953, 0.0]

model:
  features: [16, 16, 16]
  output_shape: 1
  optim:
    learning_rate: 1e-3
    batch_size: 200
    epochs: 100000
  use_lax: False  # If true, the model will not print the training step

imPath:
  solver: 'images/solvers'
  model: 'images/neuralnet/'

time_step: 0.01
time_range: [0, 20]

# Random
random_state: 0
```

You can override the default configuration from the command line. For example, to use the Euler solver and change the pendulum bob mass, run:
```bash
python data_generator.py solver.name=euler solver.params.m=2
```

To modify the learning rate, batch size, and number of epochs (whether training with `mse_loss` on data or with `ode_loss` and initial conditions), run:
```bash
python train.py model.optim.learning_rate=1e-2 model.optim.batch_size=150 model.optim.epochs=1000
```

Using the `lax` training function suppresses per-step loss printing:
```bash
python ode_train.py model.use_lax=True
```

Project structure:

```
├── conf/
│   ├── config.yaml
│   └── solver/
│       ├── euler.yaml
│       └── runge_kutta.yaml
├── data_generator.py
├── helpers.py
├── images/
│   ├── neuralnet/
│   └── solvers/
├── ode_train.py
├── README.md
├── train.py
└── walkthrough.ipynb
```
- `conf/`: Configuration files for Hydra.
- `data_generator.py`: Functions for generating data and solving the ODE numerically.
- `helpers.py`: Helper functions.
- `images/`: Directory for saving output images.
- `ode_train.py`: Main script for training the model with the `ode_loss` function.
- `train.py`: Training functions and model definitions.
- `walkthrough.ipynb`: Jupyter notebook for interactive exploration.
To train the model with the default configuration, simply run:
```bash
python ode_train.py
```

To use the Euler solver instead of the default comparison of both solvers, and to change the time step, run:

```bash
python data_generator.py solver.name=euler time_step=0.1
```

To change the learning rate, run:

```bash
python ode_train.py model.optim.learning_rate=0.01
```

Visualising the outputs from the scripts:
- Solving with the Euler method:

  ```bash
  python data_generator.py solver.name=euler
  ```

  Output image: visualising the first-order Euler estimate.
- Solving with the fourth-order Runge-Kutta method:

  ```bash
  python data_generator.py solver.name=runge_kutta
  ```

  Output image: visualising the fourth-order Runge-Kutta estimate.
- Comparing the Euler and Runge-Kutta methods:

  ```bash
  python data_generator.py
  ```

  Output image: the legend shows that the Runge-Kutta estimate is more precise than the Euler estimate.
- Training the model with `mse_loss` on observed data (we use `lax` for faster computation):

  ```bash
  python train.py model.use_lax=True model.optim.epochs=200
  ```

  Output image: although the model fits the training and validation data, it fails to capture the real patterns of the phenomenon. This is where physics-informed neural networks come into play.
- Training the model with `ode_loss` and initial conditions:

  ```bash
  python ode_train.py model.use_lax=True model.optim.batch_size=500 model.optim.learning_rate=1e-4 model.optim.epochs=9e5
  ```

  Output image: using only the initial condition as data and the ODE residual in the loss function, the model generalises well.
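The idea behind `ode_loss` can be sketched as follows: a network maps time `t` to a predicted angle, its derivatives come from JAX automatic differentiation, and the loss penalises the ODE residual at collocation points plus the mismatch with the initial condition. The function below is an illustrative stand-in with hypothetical names (the real loss lives in `ode_train.py`), assuming the damped-pendulum equation `m*l^2*θ'' + b*θ' + m*g*l*sin(θ) = 0`:

```python
import jax
import jax.numpy as jnp

def ode_loss(theta_fn, ts, y0, b=0.3, m=1.0, l=1.0, g=9.81):
    """Physics-informed loss: ODE residual over collocation times + initial condition.

    theta_fn: callable t -> predicted angle theta(t) (e.g. a neural network).
    Assumes m*l^2*theta'' + b*theta' + m*g*l*sin(theta) = 0 (an assumption
    based on the default solver parameters).
    """
    dtheta = jax.grad(theta_fn)   # theta'(t) via autodiff
    ddtheta = jax.grad(dtheta)    # theta''(t)

    def residual(t):
        return m * l**2 * ddtheta(t) + b * dtheta(t) + m * g * l * jnp.sin(theta_fn(t))

    res = jax.vmap(residual)(ts)
    # Initial-condition mismatch: angle and angular velocity at t = 0.
    ic = (theta_fn(0.0) - y0[0]) ** 2 + (dtheta(0.0) - y0[1]) ** 2
    return jnp.mean(res**2) + ic

# Demo with a simple trial function (not a trained network):
ts = jnp.linspace(0.0, 1.0, 32)
loss = ode_loss(lambda t: 0.1 * jnp.cos(t), ts, jnp.array([0.1, 0.0]))
```

No solution data beyond `y0` enters this loss; the physics constraint alone supervises the network away from `t = 0`.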
Including some observed data points in the `ode_loss` would help the model converge better, but this approach was removed.
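That hybrid idea can be sketched as a weighted sum of a residual term and a data-fit term (hypothetical, not the repository's implementation; coefficients assume the default damped-pendulum parameters):

```python
import jax
import jax.numpy as jnp

def hybrid_loss(theta_fn, ts_collocation, ts_data, theta_data, w_data=1.0):
    """ODE residual term plus a data-fit term on observed angles.

    Assumes theta'' + 0.3*theta' + 9.81*sin(theta) = 0 (default params,
    m = l = 1), purely for illustration.
    """
    dtheta = jax.grad(theta_fn)
    ddtheta = jax.grad(dtheta)
    residual = jax.vmap(
        lambda t: ddtheta(t) + 0.3 * dtheta(t) + 9.81 * jnp.sin(theta_fn(t))
    )
    data_fit = jax.vmap(theta_fn)(ts_data) - theta_data
    return jnp.mean(residual(ts_collocation) ** 2) + w_data * jnp.mean(data_fit**2)

# Demo: a zero trial function has zero residual, so only the data term remains.
ts_c = jnp.linspace(0.0, 1.0, 16)
loss = hybrid_loss(lambda t: 0.0 * t, ts_c, jnp.array([0.0, 1.0]), jnp.array([0.1, 0.2]))
```

The weight `w_data` trades off physics consistency against fitting the observations.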
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.
We will not talk about this for this project :)
@Author NYEMB NDJEM EONE ANDRE KEVIN




