NeuralMPCX is a Python library for building and deploying Model Predictive Controllers with classic and neural dynamical models. You write constrained MPC with RNN/LSTM models in a CasADi/IPOPT workflow. The library handles CasADi RNN integration, warm-starting, constraint management, real-time feasibility, and both LTI state-space and neural dynamics in one framework. You can run neural and classical MPC controllers side by side.
- Neural MPC (RNN dynamic models)
- Recurrent Neural Networks for system dynamics modelling
- Classic MPC with CasADi-based optimization
- Constraint handling (state, input, terminal, soft constraints)
- Warm-starting & real-time iteration
- Differentiable cost terms & custom regularization
- Simulation utilities + logging
Note: NeuralMPCX is not yet available on PyPI. Install it locally from the downloaded repository.
```shell
git clone https://github.com/hzdr/neural-mpcx.git
cd neural-mpcx
```

Basic install (core dependencies only):

```shell
pip install -e .
```

With PyTorch support (for neural network training and deployment):

```shell
pip install -e ".[torch]"
```

This installs:
- torch >= 2.0.0
- torchvision >= 0.15.1
- torchaudio >= 2.0.1
For development (testing, linting, type checking):

```shell
pip install -e ".[dev]"
```

Core Dependencies (installed automatically):
- numpy >= 1.26.4
- casadi >= 3.6.6
- joblib >= 1.4.2
- gymnasium >= 0.29.1
- scipy >= 1.10.0
- matplotlib >= 3.5.0
- pandas >= 1.5.0
If you prefer to install PyTorch separately (e.g., to choose a specific CUDA version):
CPU only:

```shell
pip install "torch>=2.0" "torchvision>=0.15" "torchaudio>=2.0" --index-url https://download.pytorch.org/whl/cpu
```

NVIDIA GPU (CUDA 12.4, recommended for recent GPUs):

```shell
pip install "torch>=2.0" "torchvision>=0.15" "torchaudio>=2.0" --index-url https://download.pytorch.org/whl/cu124
```

WSL2 users: GPU support works out of the box. Install the NVIDIA driver on Windows only (not inside WSL), then use the CUDA command above.
Requirements:
- Python >= 3.9
- CasADi >= 3.6.6
Tested on Python 3.9, 3.10, 3.11, and 3.12.
See examples/Cascaded_Two_Tank_System/neural_mpc_cts.py for a Neural MPC deployment. It reproduces an adapted version of the controller from [1], tested on the Cascaded Two-Tank System (CTS) benchmark. [2] describes the CTS in detail, and [3] hosts the LSTM RNN training and test datasets.
See examples/MPC_Grinding_Circuit/mpc_grinding_circuit.py for a classic MPC deployment. It reproduces an adapted version of the controller from [4]: a constrained MPC for the 4x4 grinding circuit.
The plant model is a discrete-time LTI state-space system.
The original paper uses a step-response DMC. Here, the same problem is formulated as an NLP over a state-space model using CasADi/IPOPT (multi-shooting), with unified soft constraints via slacks and a large penalty weight.
The paper's 16 transfer functions (the 4x4 MIMO plant) are converted to a state-space realization via mimo_tf2ss.
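As a sketch, the unified soft-constraint idea is the standard slack relaxation (the exact weighting used in the example may differ):

```latex
\min_{u,\,s}\; \sum_{k=0}^{N-1} \left( \|y_k - y^{\mathrm{ref}}\|_Q^2 + \|\Delta u_k\|_R^2 \right) + \rho\, \mathbf{1}^\top s
\quad \text{s.t.} \quad
y_{\mathrm{lb}} - s \le y_k \le y_{\mathrm{ub}} + s, \qquad s \ge 0
```

With a large penalty $\rho$, the slacks $s$ stay at zero whenever the hard-constrained problem is feasible, so constraints are only relaxed when they would otherwise make the NLP infeasible.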
See examples/CSTR/nmpc_cstr.py for a Nonlinear MPC (NMPC) controller on the Continuous Stirred Tank Reactor (CSTR) benchmark. Based on the do-mpc CSTR benchmark from [6] (Fiedler et al., 2023), the reactor has two parallel reactions (A->B and B->C) and one side reaction (2A->D), controlled via feed flow rate and heat removal rate.
The plant model uses symbolic CasADi expressions with Arrhenius kinetics and 4th-order Runge-Kutta (RK4) integration, so the optimizer gets exact first-order derivatives. The NLP is formulated with CasADi/IPOPT (multi-shooting), soft state constraints via slack variables, and a quadratic stage and terminal cost.
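The RK4 discretization mentioned above can be sketched in plain NumPy, independent of CasADi; `rk4_step` is an illustrative helper, not a library function:

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One 4th-order Runge-Kutta step for dx/dt = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on dx/dt = -x, whose exact solution is x(t) = exp(-t)
f = lambda x, u: -x
x = np.array([1.0])
for _ in range(100):          # integrate to t = 1 with dt = 0.01
    x = rk4_step(f, x, None, 0.01)
print(x)  # close to exp(-1) ≈ 0.3679
```

Writing the dynamics symbolically (as the CSTR example does with CasADi) means the same RK4 recursion is differentiated exactly by the optimizer instead of by finite differences.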
A Neural MPC version of the same process is available at examples/CSTR/neural_mpc_cstr.py, where a trained LSTM replaces the explicit dynamics. You can compare physics-based NMPC against data-driven Neural MPC on the same benchmark.
NeuralMPCX provides Kalman filter implementations for state and bias estimation in MPC applications:
```python
from neuralmpcx.util.estimators import AugmentedKalmanFilter
from neuralmpcx.util.control import mimo_tf2ss
import numpy as np

# Create state-space model from transfer functions
ss = mimo_tf2ss(G, ny=4, nu=4, Ts=30.0)

# Create augmented Kalman filter for bias estimation
kf = AugmentedKalmanFilter(
    Ad=ss.Ad, Bd=ss.Bd, Cd=ss.Cd, Dd=ss.Dd,
    Q_x=np.eye(ss.nx) * 0.1,    # Process noise for states
    Q_du=np.eye(ss.nu) * 0.01,  # Process noise for input bias
    Q_dy=np.eye(ss.ny) * 0.01,  # Process noise for output bias
    R=np.eye(ss.ny) * 1.0,      # Measurement noise
)

# In MPC loop
for t in range(T):
    kf.predict(u=dev_u)
    kf.update(y=y_measured - y_offset)
    # Pass bias estimates directly to MPC
    u_opt = mpc.solve_mpc(..., dynamic_pars=kf.get_mpc_biases())
```

The AugmentedKalmanFilter estimates plant state, input bias, and output bias at the same time, so you get offset-free MPC tracking even with plant-model mismatch.
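For reference, the predict/update steps above follow the textbook linear Kalman recursion; the NumPy sketch below shows that recursion on a small example system (these matrices and helper functions are illustrative, not the library's internals, which additionally stack bias states into the filter):

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # example dynamics
C = np.array([[1.0, 0.0]])               # measure first state only
Q = np.eye(2) * 0.01                     # process noise covariance
R = np.array([[0.1]])                    # measurement noise covariance

x = np.zeros(2)   # state estimate
P = np.eye(2)     # estimate covariance

def predict(x, P):
    return A @ x, A @ P @ A.T + Q

def update(x, P, y):
    S = C @ P @ C.T + R              # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (y - C @ x)          # correct with measurement
    P = (np.eye(2) - K @ C) @ P
    return x, P

for y in [1.0, 1.05, 1.1]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array([y]))
print(x)  # estimate pulled toward the measurements near 1.0-1.1
```

The augmented version runs exactly this recursion on a stacked vector of plant states plus input/output bias states, which is what makes the bias estimates available to the MPC for offset-free tracking.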
NeuralMPCX uses recurrent neural networks as internal dynamics models inside the MPC. Making this work required several adaptations:
- Converting PyTorch RNNs into CasADi symbolic expressions
- Providing arrays of context actions and states to warm up the neural model
  - `x0` is an array with `n_context` timesteps
  - `u0` is also required with `n_context` timesteps
The initial hidden state comes from a context window of past observations, following [5]. Only LSTM networks are supported so far.
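The warm-up idea can be illustrated with a generic NumPy recurrent cell (a stand-in for the trained LSTM; `cell`, the weights, and the toy readout are all hypothetical): the hidden state is rolled over the `n_context` past steps before the prediction rollout begins, instead of starting from zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_context, n_hidden, n_pred = 10, 8, 5

# A generic tanh RNN cell with random weights (illustrative only)
Wx = rng.normal(scale=0.3, size=(n_hidden, 2))        # input: [x_k, u_k]
Wh = rng.normal(scale=0.3, size=(n_hidden, n_hidden))

def cell(h, x, u):
    return np.tanh(Wx @ np.array([x, u]) + Wh @ h)

# Warm-up: roll the cell over the context window (x0 and u0 each
# hold n_context past timesteps) instead of starting from h = 0
x0 = rng.normal(size=n_context)   # past states
u0 = rng.normal(size=n_context)   # past inputs
h = np.zeros(n_hidden)
for x, u in zip(x0, u0):
    h = cell(h, x, u)

# Prediction rollout starts from the warmed-up hidden state
x_k = x0[-1]
for u in np.zeros(n_pred):        # candidate future inputs
    h = cell(h, x_k, u)
    x_k = float(h.sum())          # toy readout, not the real output map
print(h.shape)  # (8,)
```

In the MPC loop, the context arrays are shifted forward one step per iteration with the latest measurement and applied input, so the hidden state always reflects the most recent plant behaviour.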
An RNN’s initial state, often set to zero, shapes its first predictions. In MPC, this initial state determines how the network reads the system’s dynamics and how well it predicts future states.
NeuralMPCX implements the approach from [5]: the initial hidden state is estimated from a window of past input/output data, and predictions are then rolled out from that estimated state.
The model acts as its own state estimator, so training runs a joint backward pass over both estimation and forecasting. Sequences are split as shown below.
Each MPC iteration initializes the RNN's state from an array of past data. You update this context array at every step. Adapted from [5].
```
src/neuralmpcx/
  core/           # Cache, solutions, warmstart, debug
  multistart/     # Start point generation for warm-starting
  neural/         # LSTM/RNN integration with CasADi
  nlps/           # NLP building blocks (parameters, variables, constraints)
  util/           # Utilities: control, estimators, math, io
    control.py    # Transfer functions, state-space, LQR
    estimators.py # Kalman filters for state estimation
  wrappers/       # MPC wrappers (Mpc)
  __init__.py     # __version__ here
examples/
```
```shell
pip install -e ".[dev]"
ruff check src tests
mypy src
pytest -q
```

Black formats code. Ruff lints it.
```shell
# Check what would be reformatted
black --check src tests

# Format all code
black src tests

# Format specific files
black src/neuralmpcx/neural/casadi_lstm.py
```

```shell
# Check all linting issues
ruff check src tests

# Auto-fix safe issues
ruff check --fix src tests

# Show what can be fixed
ruff check --fix --show-fixes src tests
```

```shell
# Run type checker
mypy src
```

Run all checks before committing:
```shell
# 1. Format with black
black src tests

# 2. Auto-fix with ruff
ruff check --fix src tests

# 3. Check remaining issues
ruff check src tests

# 4. Run type checking
mypy src

# 5. Run tests
pytest -q
```

Pre-commit hooks run these checks on every commit:

```shell
pre-commit install
pre-commit run --all-files
```

All public APIs use NumPy-style docstrings. Example:
```python
def my_function(param1, param2):
    """Brief description of the function.

    Extended description if needed.

    Parameters
    ----------
    param1 : type
        Description of param1.
    param2 : type
        Description of param2.

    Returns
    -------
    type
        Description of return value.
    """
```

Contributions are welcome. Follow these guidelines:
- Use `pre-commit` hooks (ruff/black/mypy/end-of-file-fixer)
- Follow NumPy-style docstrings for all public APIs (see Development Workflow section)
- Follow Conventional Commits (`feat:`, `fix:`, `docs:`, etc.)
- Open issues with minimal reproducible examples
- Run all tests and linting checks before submitting PRs

```shell
pre-commit install
```

For academic work, cite:
```bibtex
@software{neuralmpcx2026,
  title  = {NeuralMPCX: A Model Predictive Control library that supports classic MPC and neural MPC with CasADi},
  author = {Lopes Júnior, Ênio and Reinecke, Sebastian Felix},
  year   = {2026},
  url    = {https://github.com/hzdr/neural-mpcx}
}
```
Apache License 2.0. See LICENSE.txt.
Portions derive from casadi-nlp by Filippo Airaldi, under the MIT License. See LICENSE-MIT. Original project: https://github.com/FilippoAiraldi/casadi-nlp
- Ênio Lopes Júnior
- Sebastian Felix Reinecke
- Issues: https://github.com/hzdr/neural-mpcx/issues
See CHANGELOG.md.
[1] Adhau, S., Gros, S. and Skogestad, S. (2024). "Reinforcement learning based MPC with neural dynamical models". European Journal of Control, 80(A), 101048.
[2] Schoukens, M. and Noël, J. P. (2017). Three Benchmarks Addressing Open Challenges in Nonlinear System Identification. 20th World Congress of the International Federation of Automatic Control, Toulouse, France, July 9–14, 2017, pp. 448–453. (preprint)
[3] Schoukens, M., Mattsson, P., Wigren, T. and Noël, J. P. Cascaded tanks benchmark combining soft and hard nonlinearities. 4TU.ResearchData, Dataset.
[4] Chen, X. S., Zhai, J. Y., Li, S. H. and Li, Q. (2007). "Application of model predictive control in ball mill grinding circuit". Minerals Engineering, 20(11), 1099–1108.
[5] Forgione, M., Muni, A., Piga, D. and Gallieri, M. (2023). "On the adaptation of recurrent neural networks for system identification". Automatica, 155, 111092.
[6] Fiedler, F., Karg, B., Lüken, L., Brandner, D., Heinlein, M., Brabender, F. and Lucia, S. (2023). "do-mpc: Towards FAIR nonlinear and robust model predictive control". Control Engineering Practice, 140, 105676.

![Each MPC iteration initializes the RNN's state from an array of past data. You update this context array at every step. Adapted from [5].](fig/warmed_up_neural_mpc.png)