
Commit bc27599

Merge branch 'main' into libero_backend

2 parents 608c136 + 3d4c8f3


62 files changed (+1567 −168 lines)

.pre-commit-config.yaml (2 additions, 2 deletions)

@@ -38,10 +38,10 @@ repos:
     rev: v3.19.1
     hooks:
       - id: pyupgrade
-        args: ["--py36-plus"]
+        args: ["--py310-plus"]

   - repo: https://github.com/pycqa/flake8
-    rev: 7.1.2
+    rev: 7.2.0
     hooks:
       - id: flake8
         exclude: docs/conf.py

docs/advanced/extension.rst (2 additions, 5 deletions)

@@ -5,9 +5,9 @@ Extension API
 ``hls4ml`` natively supports a large number of neural network layers.
 But what if a desired layer is not supported?
 If it is standard enough and its implementation would benefit the community as a whole, we would welcome a contribution to add it to the standard set of supported layers.
-However, if it is a somewhat niche custom layer, there is another approach we can take to extend hls4ml through the *extension API*.
+However, if it is a somewhat niche custom layer, there is another approach we can take to extend hls4ml through the *extension API*. This feature supports both Keras and PyTorch layers.

-This documentation will walk through a complete `end-to-end example <https://github.com/fastmachinelearning/hls4ml/blob/main/test/pytest/test_extensions.py>`_, which is part of our testing suite.
+Complete end-to-end examples are available for both `Keras <https://github.com/fastmachinelearning/hls4ml/blob/main/test/pytest/test_extensions.py>`_ and `PyTorch <https://github.com/fastmachinelearning/hls4ml/blob/main/test/pytest/test_extensions_pytorch.py>`_, which are part of our testing suite. The description here uses the Keras example.
 To implement a custom layer in ``hls4ml`` with the extension API, the required components are:

 * Your custom layer class
@@ -18,9 +18,6 @@ To implement a custom layer in ``hls4ml`` with the extension API, the required c
 * Function config template
 * Registration of layer, source code, and templates

-.. note::
-   currently, then extension API supports keras models. Support for pytorch models is in development.
-
 Complete example
 ================

docs/intro/setup.rst (37 additions, 11 deletions)

@@ -72,29 +72,55 @@ Here we give line-by-line instructions to demonstrate the general workflow.
 .. code-block:: python

     import hls4ml
+    import tensorflow as tf
+    from tensorflow.keras.layers import Activation, Dense

-    # Fetch a keras model from our example repository
-    # This will download our example model to your working directory and return an example configuration file
-    config = hls4ml.utils.fetch_example_model('KERAS_3layer.json')
+    # Construct a basic keras model
+    model = tf.keras.models.Sequential()
+    model.add(Dense(64, input_shape=(16,), name='Dense', kernel_initializer='lecun_uniform', kernel_regularizer=None))
+    model.add(Activation(activation='elu', name='Activation'))
+    model.add(Dense(32, name='Dense2', kernel_initializer='lecun_uniform', kernel_regularizer=None))
+    model.add(Activation(activation='elu', name='Activation2'))

-    # You can print it to see some default parameters
+    # This is where you would train the model in a real-world scenario
+
+    # Generate an hls configuration from the keras model
+    config = hls4ml.utils.config_from_keras_model(model)
+
+    # You can print the config to see some default parameters
     print(config)

-    # Convert it to a hls project
-    hls_model = hls4ml.converters.keras_to_hls(config)
+    # Convert the model to an hls project using the config
+    hls_model = hls4ml.converters.convert_from_keras_model(
+        model=model,
+        hls_config=config,
+        backend='Vitis'
+    )
+
+Once converted to an HLS project, you can link the project into the Python runtime and use it to run predictions on a numpy array:
+
+.. code-block:: python
+
+    import numpy as np
+
+    # Compile the hls project and link it into the Python runtime
+    hls_model.compile()
+
+    # Generate random input data
+    X_input = np.random.rand(100, 16)

-    # Print the full list of example models if you want to explore more
-    hls4ml.utils.fetch_example_list()
+    # Run the model on the input data
+    hls_prediction = hls_model.predict(X_input)

-After that, you can use :code:`Vivado HLS` to synthesize the model:
+After that, you can use :code:`Vitis HLS` to synthesize the model:

 .. code-block:: python

-    # Use Vivado HLS to synthesize the model
+    # Use Vitis HLS to synthesize the model
     # This might take several minutes
     hls_model.build()

-    # Print out the report if you want
+    # Optional: print out the report
     hls4ml.report.read_vivado_report('my-hls-test')

 Done! You've built your first project using ``hls4ml``! To learn more about our various API functionalities, check out our tutorials `here <https://github.com/fastmachinelearning/hls4ml-tutorial>`__.
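As a sanity check on the tutorial snippet above: the model shown maps 16 input features through Dense(64) and Dense(32), so predicting on a (100, 16) array yields a (100, 32) result. A NumPy-only sketch of that shape flow (placeholder zero weights, not real hls4ml output):

```python
import numpy as np

# Shape-only sketch of the setup-example model: 16 -> 64 -> 32.
# Weights are placeholder zeros; only the array shapes are meaningful here.
X_input = np.random.rand(100, 16)
W1 = np.zeros((16, 64))
W2 = np.zeros((64, 32))

hidden = X_input @ W1          # (100, 64), stands in for Dense(64) + elu
hls_prediction = hidden @ W2   # (100, 32), stands in for Dense(32) + elu

print(hls_prediction.shape)  # (100, 32)
```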

hls4ml/backends/catapult/passes/bn_quant.py (1 addition, 1 deletion)

@@ -96,7 +96,7 @@ def transform(self, model, node):
             bn_layer.get_weights('scale').data, bn_layer.get_weights('bias').data, node.get_attr('threshold', 0.5)
         )
         # Remove the BatchNormalization layer
-        model.remove_node(bn_layer, rewire=True)
+        model.remove_node(bn_layer)
         # Replace the old Activation layer with this one
         model.replace_node(node, bnbt_layer)
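Several files in this commit drop the explicit ``rewire=True`` argument from ``model.remove_node``. A toy sketch (hypothetical class, not the actual hls4ml graph implementation) of an API where removal always rewires, which makes such a flag redundant:

```python
# Toy graph where remove_node always reconnects neighbors, so a separate
# rewire flag is unnecessary. Hypothetical sketch, not hls4ml's code.
class ToyGraph:
    def __init__(self, nodes):
        self.nodes = list(nodes)  # a simple linear chain of layer names

    def remove_node(self, name):
        # Dropping the node from the chain implicitly rewires its
        # predecessor to its successor; callers never pass a flag.
        self.nodes.remove(name)


g = ToyGraph(['input', 'batch_norm', 'activation', 'output'])
g.remove_node('batch_norm')
print(g.nodes)  # ['input', 'activation', 'output']
```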

hls4ml/backends/fpga/passes/clone.py (5 additions, 5 deletions)

@@ -61,8 +61,8 @@ def match(self, node):

         # Check if the output is used more than once
         output_map = node.get_output_use_map()
-        in_output = node.name in node.model.outputs
         for output in node.outputs:
+            in_output = output in node.model.outputs
             if len(output_map[output]) + in_output > 1:
                 # model output also need a stream
                 return True

@@ -72,10 +72,10 @@ def match(self, node):
     def transform(self, model, node):

         output_map = node.get_output_use_map()
-        in_output = node.name in node.model.outputs

         transformed = False
         for output in node.outputs:
+            in_output = output in node.model.outputs
             n_outputs = len(output_map[output]) + in_output
             if n_outputs == 1:
                 continue

@@ -90,8 +90,8 @@ def transform(self, model, node):
             init_stream_idx = 1
             if in_output:
                 # If the value is used as output, add one extra stream
-                idx = node.model.outputs.index(node.name)
-                node.model.outputs[idx] = node.name + '_cpy1'
+                idx = node.model.outputs.index(output)
+                node.model.outputs[idx] = output + '_cpy1'
                 init_stream_idx = 2
             for i, layer in enumerate(output_map[output], init_stream_idx):
                 idx = layer.inputs.index(output)

@@ -102,7 +102,7 @@ def transform(self, model, node):
                 'clone_' + node.name,
                 attrs,
                 [output],
-                [output + '_cpy' + str(i + 1) for i in range(n_outputs)],
+                [f'{output}_cpy{i + 1}' for i in range(n_outputs)],
             )
             for i in range(n_outputs):
                 key = output + '_cpy' + str(i + 1)
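The clone.py change moves the model-output check inside the per-output loop, comparing each output variable rather than the node name. A toy illustration (made-up names) of why the node-name comparison could miss multi-output nodes whose individual output variables appear in ``model.outputs``:

```python
# Made-up names: a node 'split' produces two output variables, one of
# which is a model output. The node *name* never appears in that list.
model_outputs = ['split_out1']
node_name = 'split'
node_outputs = ['split_out1', 'split_out2']

# Old check, done once per node: always False here, so the pass would
# never see that 'split_out1' also needs an extra stream.
old_in_output = node_name in model_outputs

# New check, done per output variable inside the loop.
new_in_output = {out: out in model_outputs for out in node_outputs}

print(old_in_output)   # False
print(new_in_output)   # {'split_out1': True, 'split_out2': False}
```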

hls4ml/backends/fpga/passes/final_reshape.py (1 addition, 2 deletions)

@@ -12,8 +12,7 @@ def match(self, node):
     def transform(self, model, node):
         if model.config.get_config_value('IOType') == 'io_parallel':
             print('WARNING: Final layer is a Reshape, which does not affect the output for io_parallel; removing it')
-            # remove, but don't rewire because it's the output layer
-            model.remove_node(node, rewire=False)
+            model.remove_node(node)
             return True
         elif model.config.get_config_value('IOType') == 'io_stream':
             print(

hls4ml/backends/fpga/passes/hgq_proxy_model.py (1 addition, 1 deletion)

@@ -53,7 +53,7 @@ def match(self, node: Layer):

     def transform(self, model, node: FixedPointQuantizer):
         if node.fusible:
-            model.remove_node(node, rewire=True)
+            model.remove_node(node)
             return True

         if model.config.config['IOType'] != 'io_parallel':

hls4ml/backends/fpga/passes/remove_softmax.py (1 addition, 1 deletion)

@@ -9,5 +9,5 @@ def match(self, node):
         return is_softmax and remove_softmax

     def transform(self, model, node):
-        model.remove_node(node, rewire=True)
+        model.remove_node(node)
         return True

hls4ml/backends/oneapi/oneapi_backend.py (3 additions, 0 deletions)

@@ -10,6 +10,7 @@
 from hls4ml.model.layers import GRU, LSTM, Activation, Conv1D, Conv2D, Dense, Embedding, Layer, SimpleRNN, Softmax
 from hls4ml.model.optimizer import get_backend_passes, layer_optimizer
 from hls4ml.model.types import FixedPrecisionType, IntegerPrecisionType, NamedType
+from hls4ml.report import parse_oneapi_report
 from hls4ml.utils import attribute_descriptions as descriptions

 # from hls4ml.report import parse_oneapi_report

@@ -207,6 +208,8 @@ def build(self, model, build_type='fpga_emu', run=False):
         executable = builddir / f'{model.config.get_project_name()}.{build_type}'
         subprocess.run(f'{str(executable)}', shell=True, cwd=builddir, check=True)

+        return parse_oneapi_report(model.config.get_output_dir())
+
     @layer_optimizer(Layer)
     def init_base_layer(self, layer):
         reuse_factor = layer.model.config.get_reuse_factor(layer)
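With this change, the oneAPI backend's ``build()`` returns the parsed report instead of implicitly returning ``None``. A minimal stand-in sketch (stub functions, not the real oneAPI toolchain or hls4ml API) of the new control flow:

```python
# Stub sketch of the new build() return path: hypothetical stand-ins for
# the real compilation steps and hls4ml.report.parse_oneapi_report call.
def parse_oneapi_report_stub(output_dir):
    # Stand-in parser; the real function reads report files on disk.
    return {'output_dir': output_dir, 'status': 'parsed'}


def build_stub(output_dir):
    # ... the real method runs cmake/make and the executable here ...
    # New behavior: hand the parsed report back to the caller.
    return parse_oneapi_report_stub(output_dir)


report = build_stub('my-oneapi-test')
print(report['status'])  # parsed
```

Callers that previously ignored the return value are unaffected; callers that want the report no longer need a separate parsing step.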

hls4ml/backends/oneapi/passes/bn_quant.py (1 addition, 1 deletion)

@@ -149,7 +149,7 @@ def transform(self, model, node):
             bn_layer.get_weights('scale').data, bn_layer.get_weights('bias').data, node.get_attr('threshold', 0.5)
         )
         # Remove the BatchNormalization layer
-        model.remove_node(bn_layer, rewire=True)
+        model.remove_node(bn_layer)
         # Replace the old Activation layer with this one
         model.replace_node(node, bnbt_layer)
