---
project: ML4EP - TMVA SOFIE
title: Enhancing Keras Parser and JAX/FLAX Integration
author: Prasanna Kasar
photo: blog_authors/PrasannaKasar.jpeg
date: 07.09.2025
year: 2025
layout: blog_post
logo: "![TMVA - SOFIE](https://www.google.com/imgres?q=root%20tmva%20sofie%20logo&imgurl=x-raw-image%3A%2F%2F%2F476d34aa4cdf4014ea93a65d187760a7576a158f8785051edf10d5e4c7968fa6&imgrefurl=https%3A%2F%2Findico.jlab.org%2Fevent%2F459%2Fcontributions%2F11746%2Fattachments%2F9716%2F14215%2FTMVA_SOFIE_%2520CHEP23.pdf&docid=ERFh1aCeXyKOeM&tbnid=FEDfTKo0bUcNfM&vet=12ahUKEwjI0dGK1MaPAxUYzjgGHfFeGxsQM3oECBYQAA..i&w=1282&h=364&hcb=2&ved=2ahUKEwjI0dGK1MaPAxUYzjgGHfFeGxsQM3oECBYQAA)"
intro: |
  Developed a parser within SOFIE to parse Machine Learning models trained with Keras. Rewrote the existing parser, which was written in C++, in Python, added support for parsing missing layers such as Pooling and LayerNormalization, and wrote unit tests for the parser.
---

# Final Evaluation Report for GSoC 2025
<img width="1434" height="413" alt="image" src="https://gist.github.com/user-attachments/assets/6b8528de-aeb7-465b-9720-0b8d9d94d9a4" />

## Details

| | |
| --- | --- |
| Name | [Prasanna Kasar](https://github.com/prasannakasar) |
| Organisation | [CERN HSF (Root Project)](https://github.com/root-project/root) |
| Mentors | [Sanjiban Sengupta](https://github.com/sanjibansg), [Dr. Lorenzo Moneta](https://github.com/lmoneta) |
| Project | [TMVA SOFIE - Enhancing Keras Parser and JAX/FLAX Integration](https://summerofcode.withgoogle.com/programs/2025/projects/uAjGYhgX) |

## Project Description

The SOFIE (System for Optimized Fast Inference Code Emit) project is an initiative within the TMVA (Toolkit for Multivariate Data Analysis) framework in ROOT that aims to enhance the efficiency and speed of inference for machine learning models. SOFIE converts ML models from different frameworks and formats, such as ONNX, PyTorch, and TensorFlow, into an Intermediate Representation (IR). From this IR, SOFIE generates optimized C++ functions for fast and effective inference of neural networks and emits them as C++ header files, which can be used in a plug-and-play style for inference.

## SOFIE's workflow

To reduce the overhead of using multiple frameworks for inference, SOFIE generates unified inference code for models trained with different frameworks.

<img width="512" height="235" alt="image" src="https://gist.github.com/user-attachments/assets/bf1f9c4d-28e4-46c0-bdff-ab5653960512" />

SOFIE has two main components: the parser and the inference code generator.

<img width="867" height="274" alt="image" src="https://gist.github.com/user-attachments/assets/096f2af8-72d4-4551-8fd5-4208f3ed1894" />

SOFIE currently supports parsing mechanisms for ML models built with frameworks such as ONNX, PyTorch, and TensorFlow.

## About SOFIE's Keras Parser

SOFIE's existing Keras parser is written in C++ and is quite old. Although its entry point is C++, the actual parsing logic is implemented in Python. Additionally, it lacks support for parsing layers such as Pooling and LayerNormalization.

## Project Objectives

- Rewrite the Keras model parser in Python, replacing the earlier C++ logic, to improve modular design and flexibility and to simplify future extensions
- Extend parser functionality to support Pooling and LayerNormalization layers
- Enable support for Keras 3 while preserving support for Keras 2.x models, ensuring full backward compatibility
- Add support for both types of models, i.e. models built using Keras' Functional as well as Sequential API
- Design comprehensive unit tests for the parser to guarantee robustness and correctness

## Work Accomplished

Since SOFIE's operators are written entirely in C++, we had to leverage ROOT's `Pythonization` functionality, which essentially lets us use SOFIE's C++ objects through a Pythonic interface.
The overall structure of the parser is very similar to the previous one. The sequence of operations is as follows:
### 1. Load the Keras model

### 2. Instantiate the `RModel` class

### 3. Iterate over the individual layers and extract required information
To create the `RModel` object, we had to extract layer-specific information such as the layer name, its type (Dense, Convolution, etc.), and the names of its input and output layers.
For Keras 2.x and models built using the Functional API, the output name of the current layer is the same as the input name of the next layer. But in Keras 3, particularly with models built using the Sequential API, this changes, and the input and output names are no longer consistent (this is explained in more detail in this [issue](https://github.com/keras-team/keras/issues/21599)). So, we used a custom iterator that walks over each of the layers and rewrites the suffix of the input and output names so that they are perfectly consistent.
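
The idea of such a name-normalizing iterator can be sketched in plain Python. This is a toy model, not the parser's actual implementation: it assumes a simple sequential chain where each layer has one input and one output, and the `tensor_N` naming scheme is illustrative.

```python
def iter_consistent_layers(layers):
    """Yield copies of toy layer records with tensor names rewritten so
    that layer i's output name always equals layer i+1's input name."""
    for i, layer in enumerate(layers):
        yield {
            "name": layer["name"],
            "inputs": [f"tensor_{i}"],       # consistent suffix by position
            "outputs": [f"tensor_{i + 1}"],
        }

# In Keras 3 Sequential models, adjacent layers can disagree on names:
raw = [
    {"name": "dense", "inputs": ["keras_tensor"], "outputs": ["keras_tensor_1"]},
    {"name": "relu", "inputs": ["keras_tensor_1_1"], "outputs": ["keras_tensor_2"]},
]
fixed = list(iter_consistent_layers(raw))
assert fixed[0]["outputs"] == fixed[1]["inputs"]  # now consistent
```

After this pass, every layer's output name matches the consumer's input name, so the operator graph connects correctly regardless of the Keras version or model API.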

Then we had to extract each weight's name and some more operator-specific information; for example, in the case of Convolutional and Pooling layers, ONNX only supports the `channels_first` data format, whereas Keras supports both `channels_first` and `channels_last`.
After extracting the layer information for a particular layer, we add it to the `rmodel` object.

### 4. Adding the layer to the `rmodel` object
For most operators, the process of adding layer operators to the `rmodel` object is quite easy. But for Convolutional and Pooling layers, it's a bit different: if the data format is `channels_last`, we have to perform a transpose before and after adding the layer to the `rmodel` object.
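
For intuition, the `channels_last` case corresponds to an NHWC-to-NCHW transpose on the way in and the inverse transpose on the way out. A numpy sketch of the axis permutation (not SOFIE's generated code):

```python
import numpy as np

# Keras `channels_last` tensor layout: (batch, height, width, channels)
x_nhwc = np.zeros((1, 28, 28, 3))

# Transpose to `channels_first` (batch, channels, height, width),
# the only layout the ONNX-style Conv/Pool operators accept.
x_nchw = x_nhwc.transpose(0, 3, 1, 2)
print(x_nchw.shape)  # (1, 3, 28, 28)

# After the operator runs, the inverse permutation restores NHWC.
x_back = x_nchw.transpose(0, 2, 3, 1)
assert x_back.shape == x_nhwc.shape
```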

### 5. Operator-specific functions
To add the layer operators, we create each layer operator in a layer-specific function and return it. For this, we make use of the extracted layer information from step [3](#3-iterate-over-the-individual-layers-and-extract-required-information).
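
One common way to organize such per-operator factory functions is a dispatch table keyed on the layer type. The sketch below is hypothetical (the factory names, the tuple stand-ins for SOFIE's `ROperator` objects, and the Dense-to-Gemm mapping are illustrative, not the parser's real code):

```python
# Hypothetical dispatch from Keras layer type to an operator factory;
# each factory consumes the layer info extracted in step 3.
def make_gemm(info):
    return ("Gemm", info["name"])   # stand-in for a SOFIE ROperator object

def make_relu(info):
    return ("Relu", info["name"])

OPERATOR_FACTORIES = {
    "Dense": make_gemm,
    "ReLU": make_relu,
}

def make_operator(layer_type, info):
    try:
        return OPERATOR_FACTORIES[layer_type](info)
    except KeyError:
        raise NotImplementedError(f"Layer type {layer_type!r} is not supported")

op = make_operator("Dense", {"name": "dense"})
```

A table like this keeps the per-layer logic isolated, so supporting a new layer only means writing one new factory and registering it.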

### 6. Extract the model's weights

### 7. Adding the input and output names of the Keras model to the `rmodel` object
While adding the input and output names of the Keras model, we need to make sure that we use the new layer iterator; otherwise, the layer names would become inconsistent again.

## How to make sure the parser is backward compatible with Keras 2.x?
Along with parsing support for models trained with Keras 3, we also needed backward compatibility with Keras 2.x. Since Keras 3 introduced significant changes in attribute names, layer names, and storage formats, we researched the updated versions. For example, weight names in Keras 3 are no longer unique. Assume a model has two Dense layers. Previously, with Keras 2.x, the layer weight names would have been
```
dense/kernel:0
dense/bias:0
dense_1/kernel:0
dense_1/bias:0
```
but with Keras 3, the layer weight names look like this:
```
kernel
bias
kernel
bias
```
To remove the ambiguity, we used weight paths instead of weight names.
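
The disambiguation can be illustrated with plain strings. The path values below are illustrative of the kind of per-variable paths Keras 3 exposes, not taken verbatim from a real model:

```python
# Bare weight names collide across layers in Keras 3, but each weight's
# path is unique, so keying on the path recovers a one-to-one mapping.
names = ["kernel", "bias", "kernel", "bias"]
paths = ["dense/kernel", "dense/bias", "dense_1/kernel", "dense_1/bias"]

assert len(set(names)) < len(names)    # names are ambiguous
assert len(set(paths)) == len(paths)   # paths are unique

# e.g. build a lookup of path -> weight tensor (values elided here)
weights_by_path = {p: None for p in paths}
```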

After these steps, the parser was in good shape and could be used to parse these layers:

- Add
- AveragePool2D channels first
- AveragePool2D channels last
- BatchNormalization
- Concat
- Conv2D channels first
- Conv2D channels last
- Conv2D padding same
- Conv2D padding valid
- Dense
- Elu
- Flatten
- GlobalAveragePool2D channels first
- GlobalAveragePool2D channels last
- LayerNormalization
- LeakyReLU
- MaxPool2D channels first
- MaxPool2D channels last
- Multiply
- Permute
- Relu
- Reshape
- Selu
- Sigmoid
- Softmax
- Subtract
- Swish
- Tanh

Along the way, we also fixed a few minor bugs in SOFIE's ROperator header files.

## About the JAX/FLAX parser
Initially, we aimed for JAX/FLAX integration within SOFIE by researching models built using its `nnx` and `linen` APIs, but after a careful discussion with the project mentors we concluded that we should focus on the Keras parser itself, adding support for parsing more layers and writing unit tests for it.

## Writing Unit Tests
Along with verifying that the parser can parse all the supported layers, we also needed to verify the correctness of the generated code. For this we created two functions:

### 1. Generate and test the inference code
This function takes the file path of a model built with Keras and passes it to the parser. After the parser returns the `rmodel` object, we generate the inference code. Next, we need to verify the correctness of the generated header file. For this, we pass a sample input to both the generated header file and the Keras model. To avoid hardcoding the input shape for each and every model, we extract the input shapes from the Keras model and use them to create the sample input. We then pass this sample input through both and obtain the resulting output tensors. Since SOFIE always flattens the output tensor before returning it, we also check the output tensor shape from both Keras and SOFIE.
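
The sample-input step can be sketched with numpy. Here the shape tuples stand in for what would be read from the Keras model's inputs, with `None` marking the unknown batch dimension; the helper name is hypothetical:

```python
import numpy as np

def make_sample_inputs(input_shapes, batch=1, seed=0):
    """Build one random sample array per model input, replacing the
    unknown batch dimension (None) with a concrete size."""
    rng = np.random.default_rng(seed)
    samples = []
    for shape in input_shapes:
        concrete = tuple(batch if d is None else d for d in shape)
        samples.append(rng.random(concrete, dtype=np.float32))
    return samples

# Shapes as they might be read from a Keras model (illustrative):
xs = make_sample_inputs([(None, 28, 28, 3), (None, 10)])
print([x.shape for x in xs])  # [(1, 28, 28, 3), (1, 10)]
```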

### 2. Validate the accuracy of the result
To validate the inference result from SOFIE, we compare the element-wise values of the Keras and SOFIE output tensors and make sure that the difference between the results is within a specified tolerance.
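
A sketch of the comparison, assuming both outputs have been flattened to 1-D float arrays; the tolerance values here are placeholders, not the ones used in the actual tests:

```python
import numpy as np

def outputs_match(keras_out, sofie_out, rtol=1e-4, atol=1e-6):
    """Element-wise comparison within a tolerance; both inputs are
    flattened first, since SOFIE returns a flattened output tensor."""
    keras_flat = np.asarray(keras_out, dtype=np.float32).ravel()
    sofie_flat = np.asarray(sofie_out, dtype=np.float32).ravel()
    if keras_flat.shape != sofie_flat.shape:
        return False
    return bool(np.allclose(keras_flat, sofie_flat, rtol=rtol, atol=atol))

assert outputs_match([[1.0, 2.0]], [1.0, 2.0 + 1e-7])   # within tolerance
assert not outputs_match([1.0, 2.0], [1.0, 2.5])        # clearly different
```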

## Unit tests
To write the unit tests, we used Python's `unittest` module, as it allows parametrizing tests with minimal code repetition. There are two different sets of tests: one for models built using Keras' Functional API and one for models built using Keras' Sequential API. Within these, there are operator-specific tests which are invoked whenever a sub-test is called. While running the unit tests for both types of models, i.e. Functional and Sequential, temporary directories are created and torn down as soon as both finish running.
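
The parametrization pattern looks roughly like this; the model table and the `parse_and_check` helper are hypothetical stand-ins for the real saved-model files and the parse-generate-compare pipeline:

```python
import tempfile
import unittest

# Hypothetical per-operator inputs; in the real tests these would be
# paths to saved Keras model files.
FUNCTIONAL_MODELS = {"dense": [1.0], "relu": [2.0]}

def parse_and_check(name, data):
    """Stand-in for: parse the model, generate code, compare outputs."""
    return len(data) == 1

class TestFunctionalParser(unittest.TestCase):
    def setUp(self):
        # Temporary directory for generated headers, removed on teardown.
        self.tmpdir = tempfile.TemporaryDirectory()

    def tearDown(self):
        self.tmpdir.cleanup()

    def test_supported_layers(self):
        # One sub-test per operator keeps failures independent and readable.
        for name, data in FUNCTIONAL_MODELS.items():
            with self.subTest(layer=name):
                self.assertTrue(parse_and_check(name, data))
```

Run with `python -m unittest`; with `subTest`, a failure in one layer's check is reported individually without aborting the remaining layers.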

## Pull request status

| Pull Request | PR Number | Status |
|--------------------------|-----------------------------------------|-----------|
| New Keras Parser | [#19692](https://github.com/root-project/root/pull/19692) | <img src="https://img.shields.io/badge/PR-Yet_To_Be_Merged-orange?style=for-the-badge&logo=appveyor"> |

## Challenges faced and Learning Outcomes
- Faced difficulty while setting up the ROOT project with SOFIE enabled, due to missing dependencies and incompatible package versions
- Navigated SOFIE's complex code base
- Got hands-on experience with Keras, its Functional and Sequential APIs, and the overall structure of its models
- Improved skills in reading documentation and solving bugs independently
- Learned how to write concise and modular unit tests

## Future work
In the future, I would love to continue contributing to the SOFIE codebase beyond the GSoC period. My current focus is on adding support for parsing the `Conv2DTranspose`, `Dropout`, and Recurrent layers.

## Conclusion
I am thankful to my project mentors, Sanjiban Sengupta and Dr. Lorenzo Moneta, for their kind guidance, which made my learning experience enriching and rewarding. They guided me whenever I faced any difficulty; I could ask them the silliest doubt, and they would still answer it happily. I am fortunate to have been part of such a wonderful project and to have contributed to CERN-HSF this summer. I look forward to contributing to CERN-HSF beyond my GSoC project's completion.
Lastly, I would like to thank my seniors from Project-X, the open-source community at my university, for introducing me to GSoC and helping me in the pre-GSoC period.

#### Thanks and Regards

#### Prasanna Kasar
