Automatic type inference for param_t in Parametrised Activations #1139

JanFSchulte merged 19 commits into main

Conversation
I see that some tests related to oneAPI fail; it's hard for me to understand why they fail, so how should I proceed?
If you have a Linux setup it should be pretty straightforward to install oneAPI, and then you can run the pytest. But we can wait and look at the other issues first. Maybe it will clear itself.
Note the utility we have in hls4ml already: |
I wanted to try to install oneAPI myself, so I played with this PR a bit. The issue seems to be that the precision for the parameter of the leaky ReLU is reduced significantly, so we need to make sure to take this into account when inferring the precision for the parameters.
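To make the failure mode concrete, here is a minimal sketch of inferring the fixed-point width W and integer bits I for a constant such as LeakyReLU's alpha. The `infer_precision` helper is hypothetical, not the PR's actual code; the point is that a non-terminating binary fraction like 0.3 saturates whatever width cap is used, so an over-aggressive cap visibly degrades the activation.

```python
import math

# A sketch, not hls4ml's implementation: infer (W, I, signed) for a
# fixed-point type that can hold `value`, capping the total width at max_bits.
def infer_precision(value, max_bits=16):
    signed = value < 0
    _, exponent = math.frexp(abs(value))  # abs(value) = m * 2**exponent, 0.5 <= m < 1
    integer_bits = max(exponent, 0) + (1 if signed else 0)
    fractional_bits = 0
    scaled = abs(value)
    # Add fractional bits until the constant is exact or the cap is reached.
    while scaled != int(scaled) and integer_bits + fractional_bits < max_bits:
        scaled *= 2
        fractional_bits += 1
    return integer_bits + fractional_bits, integer_bits, signed

print(infer_precision(0.25))  # (2, 0, False): a power of two is exact with few bits
print(infer_precision(0.3))   # (16, 0, False): 0.3 saturates the width cap
```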
Hey @nghielme, any news on this one?
I'll take a look soon |
Please check the logic of the precision setting. |
I added a unit test to cover the various options, so I am more confident. It also uncovered an error in the max setting for unsigned FixedPrecisionType, which I fixed and am including here, though it's logically independent.
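For reference, the bug class is easy to state. The snippet below is an illustration of the representable range of a fixed-point type with W total bits and I integer bits, not hls4ml's actual code: an unsigned type has no sign bit, so its maximum is 2**I - 2**(I - W); reusing the signed formula 2**(I - 1) - 2**(I - W) would halve it.

```python
# Illustrative only: representable range of signed/unsigned fixed<W, I>.
def fixed_point_bounds(width, integer, signed=True):
    lsb = 2.0 ** (integer - width)  # value of the least-significant bit
    if signed:
        return -(2.0 ** (integer - 1)), 2.0 ** (integer - 1) - lsb
    return 0.0, 2.0 ** integer - lsb  # no sign bit: the full 2**I range is usable

print(fixed_point_bounds(8, 4, signed=True))   # (-8.0, 7.9375)
print(fixed_point_bounds(8, 4, signed=False))  # (0.0, 15.9375)
```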
There were some weird pytest failures that I'm rerunning, but otherwise I think this can be merged now. |
Looks good to me. One small note: I think the test could be rewritten in a more compact way.
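A parametrized pytest is one compact option. The sketch below reuses the hypothetical `infer_precision` helper from the earlier comment and is illustrative; it is not the suggestion that was actually integrated.

```python
import pytest

# Assumes the infer_precision sketch from the earlier comment is in scope.
@pytest.mark.parametrize(
    'value, expected_width, expected_integer',
    [
        (1.0, 1, 1),
        (0.25, 2, 0),  # power of two: exact with few bits
        (0.3, 16, 0),  # non-terminating binary fraction: hits the width cap
    ],
)
def test_precision_from_constant(value, expected_width, expected_integer):
    width, integer, _ = infer_precision(value)
    assert (width, integer) == (expected_width, expected_integer)
```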
Tests have only the "expected" failures now, so I think this is ok. I agree with Nicolo's comment on the pytest though, so if you could integrate that before merging, that would be great, Jovan.
I just put Nicolo's test in the pytests instead of the one I had (with minor pre-commit changes). I also updated to the latest main so ideally the test failures should be gone. |
Looks good now. @jmitrevs, since you have the changes requested, you'll need to merge it.
Automatic type inference for param_t in Parametrised Activations (fastmachinelearning#1139)

* Added automatic inference of `param_t` constant for parametrised activations
* pre-commit fixes
* Fix the case the param is a power of 2
* Fix for a specific case related to no bits in the mantissa
* Update subproject commit reference in example-models
* first, untested version of constant precision
* try using Fxp for precision setting
* fix bug in max attribute of unsigned FixedPrecisionType
* add unit test for precision from constant
* integrate suggested test_precision_from_constant_unit change

Co-authored-by: Jan-Frederik Schulte <jschulte@cern.ch>
Co-authored-by: Jovan Mitrevski <jmitrevs@fnal.gov>
Co-authored-by: Jovan Mitrevski <j.p.mitrevski@gmail.com>
This small PR implements the inference of the W and I parameters for a given floating-point constant. It is exploited in parametrised activations.
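As a usage sketch: after conversion, the activation layer's `param_t` precision should reflect the constant itself rather than a fixed default. Layer and attribute names below follow hls4ml conventions, but the exact lookup is an assumption, not guaranteed API.

```python
import hls4ml
from tensorflow import keras

# Hypothetical end-to-end check; exact attribute access may differ.
model = keras.Sequential([keras.layers.LeakyReLU(alpha=0.3, input_shape=(4,))])
hls_model = hls4ml.converters.convert_from_keras_model(model, output_dir='prj_leaky')

for layer in hls_model.get_layers():
    param_t = layer.attributes.get('param_t')
    if param_t is not None:
        print(layer.name, param_t.precision)  # inferred from alpha = 0.3
```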
Type of change
Tests
I ran some tests related to Parametrised Activations, already present in the pytests of hls4ml.
Checklist
I have run `pre-commit` on the files I edited or added.