Copyright Georgia Tech 2024
ML4OPF is a Python package for developing machine learning proxy models for optimal power flow. It is built on top of PyTorch and PyTorch Lightning, and is designed to be modular and extensible. The main components are:
- `formulations`: The main interface to the OPF formulations ACOPF, DCOPF, and Economic Dispatch. Each OPF formulation has three main component classes (see the sketch after this list):
  - `OPFProblem`: loads and parses data from disk.
  - `OPFViolation`: calculates constraint residuals, incidence matrices, objective values, etc.
  - `OPFModel`: an abstract base class for proxy models.
- `loss_functions`: Various loss functions, including `LDFLoss` and the self-supervised `ObjectiveLoss`.
- `layers`: Various feasibility recovery layers, including `BoundRepair` and `HyperSimplexRepair`.
- `models`: Various proxy models, including `BasicNeuralNet`, `LDFNeuralNet`, and `E2ELR`.
- `parsers`: Parsers for data generated by AI4OPT/PGLearn.jl.
- `viz`: Visualization helpers for plots and tables.
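The examples later in this README exercise most of these modules. As a quick orientation, here is a minimal sketch of where the main entry points used below live; the import paths are the ones shown in those examples, so nothing beyond them is assumed.

```python
# Orientation: entry points used in the examples later in this README.
from ml4opf import ACProblem                         # formulations: ACOPF problem/data interface
from ml4opf.models.basic_nn import ACBasicNeuralNet  # models: basic neural-network proxy (needs pytorch-lightning)
from ml4opf.parsers import PGLearnParser             # parsers: PGLearn.jl HDF5/JSON data
from ml4opf.viz import make_stats_df                 # viz: summary tables

# loss_functions (e.g. ObjectiveLoss) and layers (e.g. BoundRepair) live under
# ml4opf.loss_functions and ml4opf.layers (see the list above).
```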
Documentation based on docstrings is live here.
To install `ml4opf` on macOS (CPU/MPS) and Windows (CPU), run:
```bash
pip install git+ssh://git@github.com/AI4OPT/ML4OPF.git

# or, to install with optional dependencies (options: "all", "dev", "viz"):
pip install "ml4opf[all] @ git+ssh://git@github.com/AI4OPT/ML4OPF.git"
```
If you don't already have PyTorch on Linux (CPU/CUDA/ROCm) or Windows (CUDA), make sure to provide the correct `--index-url`, which you can find here. For example, to install from scratch with CUDA 12.6 and all optional dependencies:
pip install "ml4opf[all] @ git+ssh://[email protected]/AI4OPT/ML4OPF.git" \
--index-url https://download.pytorch.org/whl/cu126 \
--extra-index-url https://pypi.python.org/simple/
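After a CUDA install, it's worth confirming that the GPU-enabled PyTorch wheel was actually picked up. The check below uses only standard PyTorch calls, nothing ML4OPF-specific.

```python
# Sanity check that the CUDA-enabled PyTorch build is installed and a GPU is visible.
import torch

print(torch.__version__)          # wheels from the cu126 index typically carry a "+cu126" suffix
print(torch.cuda.is_available())  # True if a CUDA device can be used
```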
For development, the recommended installation method is to use the Conda environment files provided in `environment.yml` and `environment_cuda.yml`:
```bash
git clone git@github.com:AI4OPT/ML4OPF.git  # clone this repo
cd ML4OPF                                   # cd into the repo
conda env create -f environment.yml         # create the environment
conda activate ml4opf                       # activate the environment
pip install -e ".[all]"                     # install ML4OPF in editable mode
```
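Once the environment is active, a minimal check that the editable install is importable (only the package name is assumed here):

```python
# Minimal post-install check: import the package from the activated environment.
import ml4opf

print(ml4opf.__file__)  # should point into the cloned ML4OPF source tree for an editable install
```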
A minimal end-to-end example: train a basic neural network proxy for ACOPF, evaluate it, and save a checkpoint.

```python
import torch

# load data
from ml4opf import ACProblem

data_path = ...
problem = ACProblem(data_path)

# make a basic neural network model
from ml4opf.models.basic_nn import ACBasicNeuralNet  # requires pytorch-lightning

config = {
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "loss": "mse",
    "hidden_sizes": [500, 300, 500],
    "activation": "sigmoid",
    "boundrepair": "none",  # optionally clamp outputs to bounds (choices: "sigmoid", "relu", "clamp")
}

model = ACBasicNeuralNet(config, problem)

model.train(trainer_kwargs={"max_epochs": 100, "accelerator": "auto"})  # pass args to the PyTorch Lightning Trainer

evals = model.evaluate_model()

from ml4opf.viz import make_stats_df
print(make_stats_df(evals))

model.save_checkpoint("./basic_300bus")  # creates a folder "basic_300bus" containing "trainer.ckpt"
```
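The `boundrepair` entry above is set to `"none"`; per the choices listed in the config comment, the outputs can instead be clamped to their bounds. A minimal variant, reusing only the keys shown above:

```python
# Same configuration as above, but with output bound clamping enabled.
# "sigmoid", "relu", and "clamp" are the choices listed in the config comment.
config_clamped = {**config, "boundrepair": "clamp"}
model_clamped = ACBasicNeuralNet(config_clamped, problem)
```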
The parsed data can also be used directly, without training a model:

```python
import torch

from ml4opf import ACProblem

data_path = ...

# parse HDF5/JSON
problem = ACProblem(data_path)

# get train/test set:
train_data = problem.train_data
test_data = problem.test_data

train_data['input/pd'].shape  # torch.Size([52863, 201])
test_data['input/pd'].shape   # torch.Size([5000, 201])

# if needed, convert the HDF5 data to a tree dictionary instead of a flat dictionary:
from ml4opf.parsers import PGLearnParser

h5_tree = PGLearnParser.make_tree(train_data)  # this tree structure should exactly mimic
                                               # the structure of the HDF5 file

h5_tree['input']['pd'].shape  # torch.Size([52863, 201])
```
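Both views expose the same `torch.Tensor`s, so mini-batches can be sliced with ordinary tensor indexing. The equality check below assumes `make_tree` re-nests the original tensors rather than copying them; the values match either way.

```python
# Take the same mini-batch through the flat dictionary and the tree view.
batch_flat = train_data['input/pd'][:32]    # first 32 load profiles, shape (32, 201)
batch_tree = h5_tree['input']['pd'][:32]    # same slice via the nested view
assert torch.equal(batch_flat, batch_tree)  # both views index the same underlying data
```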
This material is based upon work supported by the National Science Foundation AI Institute for Advances in Optimization (AI4OPT) under Grant No. 2112533 and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.