Commit 8a70a47

feat: Add project on hls4ml integration in SOFIE

1 parent 1613bf6

3 files changed: +45 −4 lines changed

_gsocproposals/2025/proposal_TMVA-SOFIE-GPU.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -6,11 +6,11 @@ year: 2025
 organization: CERN
 difficulty: medium
 duration: 350
-mentor_avail: June-October
+mentor_avail: Flexible
 ---
 
 # Description
-SOFIE (System for Optimized Fast Inference code Emit) is a Machine Learning Inference Engine within TMVA (Toolkit for Multivariate Data Analysis) in ROOT. SOFIE offers a parser capable of converting ML models trained in Keras, PyTorch, or ONNX format into its own Intermediate Representation, and generates C++ functions that can be easily invoked for fast inference of trained neural networks. The parsed models produce C++ header files that can be seamlessly included and used in a 'plug-and-go' style.
+SOFIE (System for Optimized Fast Inference code Emit) is a Machine Learning Inference Engine within TMVA (Toolkit for Multivariate Data Analysis) in ROOT. SOFIE offers a parser capable of converting ML models trained in Keras, PyTorch, or ONNX format into its own Intermediate Representation, and generates C++ functions that can be easily invoked for fast inference of trained neural networks. Using the IR, SOFIE can produce C++ header files that can be seamlessly included and used in a 'plug-and-go' style.
 
 SOFIE currently supports various Machine Learning operators defined by the ONNX standards, as well as a Graph Neural Network (GNN) implementation. It supports the parsing and inference of Graph Neural Networks trained using DeepMind Graph Nets.
 
@@ -27,7 +27,7 @@ In this project, the contributor will gain experience with GPU programming and i
 
 ## Requirements
 * Proficiency in C++ and Python.
-* Basic knowledge of GPU programming (e.g., CUDA).
+* Knowledge of GPU programming (e.g., CUDA).
 * Familiarity with version control systems like Git/GitHub.
 
 ## Mentors
```
Lines changed: 41 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,41 @@
+---
+title: TMVA SOFIE - HLS4ML Integration for Machine Learning Inference
+layout: gsoc_proposal
+project: ROOT
+year: 2025
+organization: CERN
+difficulty: medium
+duration: 350
+mentor_avail: Flexible
+---
+
+# Description
+SOFIE (System for Optimized Fast Inference code Emit) is a Machine Learning Inference Engine within TMVA (Toolkit for Multivariate Data Analysis) in ROOT. SOFIE offers a parser capable of converting ML models trained in Keras, PyTorch, or ONNX format into its own Intermediate Representation, and generates C++ functions that can be easily invoked for fast inference of trained neural networks. Using the IR, SOFIE can produce C++ header files that can be seamlessly included and used in a 'plug-and-go' style.
+
+Currently, SOFIE supports various machine learning operators defined by ONNX standards, as well as a Graph Neural Network implementation. It supports parsing and inference of Graph Neural Networks trained using DeepMind Graph Nets.
+
+As SOFIE evolves, there is a growing need for inference capabilities on models trained across a variety of frameworks. This project will focus on integrating hls4ml in SOFIE, thereby enabling generation of C++ inference functions on models parsed by hls4ml.
+
+## Task ideas
+In this project, the contributor will gain experience with C++ and Python programming, hls4ml, and their role in machine learning inference. The contributor will start by familiarizing themselves with SOFIE and running inference on CPUs. After researching the possibilities for integration with hls4ml, they will implement functionalities that ensure efficient inference of ML models parsed by hls4ml, which were previously trained in external frameworks like TensorFlow and PyTorch.
+
+## Expected results and milestones
+* **Familiarization with TMVA SOFIE**: Understanding the SOFIE architecture, working with its internals, and running inference on CPUs.
+* **Research and Evaluation**: Exploring hls4ml, its support for Keras and PyTorch, and possible integration with SOFIE.
+* **Integration with hls4ml**: Developing functionalities for running inference on models parsed by hls4ml.
+
+## Requirements
+* Proficiency in C++ and Python.
+* Knowledge of hls4ml
+* Familiarity with version control systems like Git/GitHub.
+
+## Mentors
+* **[Lorenzo Moneta](mailto:[email protected])**
+* [Sanjiban Sengupta](mailto:[email protected])
+
+## Links
+* [ROOT Project homepage](https://root.cern/)
+* [ROOT Project repository](https://github.com/root-project/root)
+* [SOFIE Repository](https://github.com/root-project/root/tree/master/tmva/sofie)
+* [hls4ml documentation](https://fastmachinelearning.org/hls4ml/)
+* [hls4ml Repository](https://github.com/fastmachinelearning/hls4ml)
```

gsoc/2025/mentors.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -23,7 +23,7 @@ layout: plain
 * Felice Pantaleo [[email protected]](mailto:[email protected]) CERN
 * Giacomo Parolini [[email protected]](mailto:[email protected]) CERN
 * Alexander Penev [[email protected]](mailto:[email protected]) CompRes/University of Plovdiv, BG
-* Sanjiban Sengupta [[email protected]](mailto:[email protected]) CERN
+* Sanjiban Sengupta [[email protected]](mailto:[email protected]) CERN/UofManchester
 * Mayank Sharma [[email protected]](mailto:[email protected]) UMich
 * Simon Spannagel [[email protected]](mailto:[email protected]) DESY
 * Graeme Stewart [[email protected]](mailto:[email protected]) CERN
```

0 commit comments
