ONNX2FMU: Encapsulate ONNX models in Functional Mock-up Units (FMUs)

What does ONNX2FMU do? It wraps ONNX models into co-simulation FMUs.

🚀 Get started

Prerequisites:

  • Python 3.10+
  • CMake 3.22+
  • A C compiler for the host platform (Linux, Windows, or macOS)

The default CMake generator on Windows is Visual Studio 2022.

To install ONNX2FMU use

pip install onnx2fmu

in your shell. You do not need to install CMake separately because it ships with the Python package, but a C compiler must be available (e.g., Visual Studio on Windows, gcc on Linux).
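
As a quick check that the installation succeeded, you can print the CLI help (assuming the onnx2fmu entry point is on your PATH):

onnx2fmu --help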

📝 ONNX model declaration

ONNX2FMU can handle models with multiple inputs, outputs, and local variables. These entries must be listed in the model description JSON file, and their names must match the name of a node in the ONNX model graph.

Model description file

A model description is declared in a JSON file and its schema includes the following global items:

  • "name" is the model name, which will also be the FMU archive name;
  • "description" provides a generic description of the model;
  • "FMIVersion" is the FMI standard version for generatign the FMU code and the FMU binaries, which can be either 2.0 or 3.0;
  • "inputs" and "outputs" are the lists of inputs and output nodes in the ONNX model;
  • "locals" are mapping between an input and an output node. Their behavior is explained in A model with local variables.

Each entry of the inputs and outputs lists follows this schema:

  • "name" must match the name of one of the nodes in the model graph;
  • "labels" is a list of user-provided names for the elements of the node. The number of names in the "labels" list must match the number of elements of the entry;
  • "description" allows the user to attach a description to the array.

The following is an example of a model description for a model with three input nodes and one output node.

{
    "name": "example1",
    "description": "The model defines a simple example model with a scalar input and two vector inputs, one with 'local' variability and one with 'continuous' variability.",
    "FMIVersion": "2.0",
    "inputs": [
        {
            "name": "scalar_input",
            "description": "A scalar input to the model."
        },
        {
            "name": "vector_input",
            "description": "A vector of input variables with variability discrete."
        },
        {
            "name": "vector_input_discrete",
            "description": "Inputs have variability discrete by default."
        }
    ],
    "outputs": [
        {
            "name": "output",
            "description": "The output array.",
            "labels": [
                "Class1",
                "Class2",
                ...
            ]
        }
    ]
}

Variability of model variables

Allowed variable types are input, output, and local. Admissible variabilities are continuous and discrete; the default is continuous if nothing is specified in the model description.
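
As a sketch, assuming the variability is set per entry with a "variability" key (the key name is an assumption; check the repository examples for the exact schema), a discrete input could be declared as:

{
    "name": "vector_input",
    "description": "A vector of discrete input variables.",
    "variability": "discrete"
}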

Model declaration: A PyTorch example

ONNX2FMU works with any ONNX model, which can be generated by all the major ML/DL frameworks, e.g., PyTorch, TensorFlow, Scikit-Learn, etc. However, we chose PyTorch to show how ONNX2FMU works.

To run the examples, clone the repository and install development dependencies. If you are using uv, you can install them with

uv sync --all-extras

If you are using pip, use

pip install ".[extra]"

The following model is used in tests/example1 to perform some basic vector operations.

import torch
import torch.nn as nn


class ExampleModel(nn.Module):

    def __init__(self):
        super(ExampleModel, self).__init__()

    def forward(self, x1, x2, x3):
        # Input x1 is a scalar
        # Input x2 is a vector with causality 'local' and 4 elements
        # Input x3 is a vector with causality 'continuous' and 5 elements
        x4 = x2 + x3[:4]
        x5 = x2 - x3[:4]
        x6 = x1 * x3[-1]
        x7 = x1 / x3[-1]
        x = torch.cat([x4, x5, x6, x7])
        return x

All the basic array operations are allowed, which makes it possible to define not only deep learning models but also generic, graph-based models.
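
To make the ONNX node names match the model description above, the input and output names can be assigned when exporting the model to ONNX. The following is a minimal sketch (the dummy shapes and file name are illustrative and may differ from the actual export script in tests/example1):

model = ExampleModel()
dummy_x1 = torch.ones(1)  # scalar_input
dummy_x2 = torch.ones(4)  # vector_input
dummy_x3 = torch.ones(5)  # vector_input_discrete

torch.onnx.export(
    model,
    (dummy_x1, dummy_x2, dummy_x3),
    "example1.onnx",
    input_names=["scalar_input", "vector_input", "vector_input_discrete"],
    output_names=["output"],
)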

A more complex example is provided in tests/example3, where an RNN model is used to predict the temperature of a point on a metallic plate. The model is declared as follows:

class HeatRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers, norm_params):
        super(HeatRNN, self).__init__()
        x_min, x_max, y_min, y_max = norm_params
        self.register_buffer("x_min", torch.tensor(x_min))
        self.register_buffer("x_max", torch.tensor(x_max))
        self.register_buffer("y_min", torch.tensor(y_min))
        self.register_buffer("y_max", torch.tensor(y_max))
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x, h=None):
        x = (x - self.x_min) / (self.x_max - self.x_min)
        out, _ = self.rnn(x, h)
        out = self.fc(out)  # Predict next time step
        out = out * (self.y_max - self.y_min) + self.y_min
        return out

In this example, the normalization parameters, which do not need to be optimized, are stored in the model using the register_buffer method, so they are saved as part of the model state but excluded from gradient updates.

A model with local variables

Recurrent neural network architectures might require feeding the model's own output back as an input, in a feedback-loop fashion. In ONNX2FMU, this functionality is realized through FMI local variables, which requires mapping a model input to a model output. In example4, we show how FMUs with local variables are declared. The model description file must contain a section named locals like the following:

{
    ...
    "locals": [
        {
            "nameIn": "X",
            "nameOut": "X1",
            "description": "The history of states from t-N to t."
        },
        {
            "nameIn": "U",
            "nameOut": "U1",
            "description": "The history of control variables frmo t-N to t-1."
        }
    ]
}

Each local variable requires two names, nameIn and nameOut, which define an input-output relationship: the nameIn input node is fed with the output of the nameOut output node. The user must take care to return the right output in the forward method. In example4, the relationship between input U and output U1 is defined as follows:

class ExampleModel(nn.Module):

    def __init__(self):
        super(ExampleModel, self).__init__()

    def forward(self, u, U, X):
        U1 = torch.concat((U[1:, :], u))
        x = torch.stack([U1[-3, 0], U1[-2, 1], U1[-1, 2]]).unsqueeze(0)
        X1 = torch.concat((X[1:, :], x))
        return x, X1, U1

In the example above, the input U is updated with the content of u and returned by the function under the name U1 (remember that node names cannot be repeated in ONNX). At the next time step, the FMU passes this output back to the model as the new, updated input.

🔨 FMU generation from ONNX models

ONNX2FMU provides two ways to build an FMU from an ONNX model.

CLI

ONNX2FMU is designed as a command-line application first. The build command requires the ONNX model path and the model description path:

onnx2fmu build <model.onnx> <modelDescription.json> [OPTIONS]

Built-in functions

FMUs can also be built from a Python script by calling the build function from app.py. This is particularly useful when training a model and generating the FMU in the same script.
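
A minimal sketch, assuming build is importable as onnx2fmu.app.build and accepts the ONNX model path and the model description path (the import path and arguments are illustrative; check the signature in app.py):

from onnx2fmu.app import build

# Paths are placeholders; the exact arguments may differ from the real signature.
build("example1.onnx", "modelDescription.json")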

Generation and compilation

ONNX2FMU allows separating the generation of the FMU source code from its compilation. This can be achieved using the generate and compile commands, respectively. To see the documentation of these commands, use

onnx2fmu [generate|compile] --help

Separating FMU code generation and compilation allows a user to customize the FMU source code before building it.
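
An illustrative two-step workflow (the argument layout is assumed to mirror the build command; consult --help for the exact options):

onnx2fmu generate <model.onnx> <modelDescription.json>
# ... customize the generated FMU sources ...
onnx2fmu compile <path to the generated sources>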

Acknowledgements

The code in this repository is inspired by the Reference FMUs project.

References

If you find this library useful in your research, please consider citing:

@inproceedings{urbani2025tool,
  title={A Tool for the Implementation of Open Neural Network Exchange Models in Functional Mockup Units},
  author={Urbani, Michele and Bolognese, Michele and Prattic{\`o}, Luca and Testi, Matteo},
  booktitle={Modelica Conferences},
  pages={645--651},
  year={2025},
  doi={10.3384/ecp218645}
}
