Commit 7cd25a0: Add docs for hls4ml Optimization API (1 parent a778e39)

========================
hls4ml Optimization API
========================

Pruning and weight sharing are effective techniques to reduce model footprint and computational requirements. The hls4ml Optimization API introduces hardware-aware pruning and weight sharing.
By defining custom objectives, the algorithm solves a knapsack optimization problem aimed at maximizing model performance while keeping the target resource(s) at a minimum. Out-of-the-box objectives include network sparsity, GPU FLOPs, Vivado DSPs, memory utilization, etc.
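
For reference, the out-of-the-box objectives used throughout this page live in the following modules (a minimal sketch; the same imports appear in the full examples below):

.. code-block:: Python

    # Built-in objectives, as used in the examples on this page
    from hls4ml.optimization.objectives import ParameterEstimator  # network sparsity / parameter count
    from hls4ml.optimization.objectives.gpu_objectives import GPUFLOPEstimator  # GPU FLOPs
    from hls4ml.optimization.objectives.vivado_objectives import VivadoDSPEstimator  # Vivado DSP utilization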

The code blocks below showcase three use cases of the hls4ml Optimization API: network sparsity (unstructured pruning), GPU FLOPs (structured pruning) and Vivado DSP utilization (pattern pruning). First, we start with unstructured pruning:

.. code-block:: Python

    import numpy as np
    from sklearn.metrics import accuracy_score
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.metrics import CategoricalAccuracy
    from tensorflow.keras.losses import CategoricalCrossentropy
    from hls4ml.optimization.keras import optimize_model
    from hls4ml.optimization.keras.utils import get_model_sparsity
    from hls4ml.optimization.attributes import get_attributes_from_keras_model
    from hls4ml.optimization.objectives import ParameterEstimator
    from hls4ml.optimization.scheduler import PolynomialScheduler

    # Define the baseline model and load the data
    # X_train, y_train = ...
    # X_val, y_val = ...
    # X_test, y_test = ...
    # baseline_model = ...

    # Evaluate the baseline model
    y_baseline = baseline_model.predict(X_test)
    acc_base = accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_baseline, axis=1))
    sparsity, layers = get_model_sparsity(baseline_model)
    print(f'Baseline Keras accuracy: {acc_base}')
    print(f'Baseline Keras sparsity, overall: {sparsity}')
    print(f'Baseline Keras sparsity, per-layer: {layers}')

    # Define training parameters
    # Epochs refers to the maximum number of epochs to train the model after imposing some sparsity
    # If the model is pre-trained, a good rule of thumb is to use between 1/3 and 1/2 of the number of epochs used to train the baseline model
    epochs = 10
    batch_size = 128
    optimizer = Adam()
    loss_fn = CategoricalCrossentropy(from_logits=True)

    # Define the metric to monitor, as well as whether it is increasing or decreasing
    # This distinction allows us to optimize both regression and classification models:
    # e.g. minimize validation MSE for regression, or maximize accuracy for classification
    metric, increasing = CategoricalAccuracy(), True
    # Relative tolerance (rtol) controls how much of the baseline metric the optimized model must retain
    rtol = 0.975

    # A scheduler defines how the sparsity is incremented at each step
    # In this case, the maximum sparsity is 50% and it will be applied at a polynomially decreasing rate, for 10 steps
    # If the final sparsity is unspecified, it is set to 100%
    # The optimization algorithm stops either when (i) the validation metric drops below the relative tolerance threshold or (ii) the final sparsity is reached
    scheduler = PolynomialScheduler(5, final_sparsity=0.5)

    # Get model attributes
    model_attributes = get_attributes_from_keras_model(baseline_model)

    # Optimize the model
    # ParameterEstimator is the objective; in this case, the objective is to minimize the total number of parameters
    optimized_model = optimize_model(
        baseline_model, model_attributes, ParameterEstimator, scheduler,
        X_train, y_train, X_val, y_val, batch_size, epochs, optimizer, loss_fn, metric, increasing, rtol
    )

    # Evaluate the optimized model
    y_optimized = optimized_model.predict(X_test)
    acc_optimized = accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_optimized, axis=1))
    sparsity, layers = get_model_sparsity(optimized_model)
    print(f'Optimized Keras accuracy: {acc_optimized}')
    print(f'Optimized Keras sparsity, overall: {sparsity}')
    print(f'Optimized Keras sparsity, per-layer: {layers}')

In a similar manner, it is possible to target GPU FLOPs or Vivado DSPs. However, in that case, the sparsity is not equivalent to model sparsity; instead, it is the sparsity of the target resource. As an example: starting with a network utilizing 512 DSPs and a final sparsity of 50%, the optimized network will use 256 DSPs.

To optimize GPU FLOPs, the code is similar to the above:

.. code-block:: Python

    from hls4ml.optimization.objectives.gpu_objectives import GPUFLOPEstimator

    # Optimize the model
    # Note the change from ParameterEstimator to GPUFLOPEstimator
    optimized_model = optimize_model(
        baseline_model, model_attributes, GPUFLOPEstimator, scheduler,
        X_train, y_train, X_val, y_val, batch_size, epochs, optimizer, loss_fn, metric, increasing, rtol
    )

    # Evaluate the optimized model
    y_optimized = optimized_model.predict(X_test)
    acc_optimized = accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_optimized, axis=1))
    print(f'Optimized Keras accuracy: {acc_optimized}')

    # Note the difference in the total number of parameters
    # Optimizing GPU FLOPs is equivalent to removing entire structures (filters, neurons) from the network
    baseline_model.summary()
    optimized_model.summary()

Finally, optimizing Vivado DSPs is possible, given an hls4ml config:

.. code-block:: Python

    from hls4ml.utils.config import config_from_keras_model
    from hls4ml.optimization.objectives.vivado_objectives import VivadoDSPEstimator

    # Note the change from optimize_model to optimize_keras_for_hls4ml
    # optimize_keras_for_hls4ml acts as a wrapper around optimize_model, parsing the hls4ml config into model attributes
    from hls4ml.optimization import optimize_keras_for_hls4ml

    # Create the hls4ml config
    default_reuse_factor = 4
    default_precision = 'ac_fixed<16, 6>'
    hls_config = config_from_keras_model(baseline_model, granularity='name', default_precision=default_precision, default_reuse_factor=default_reuse_factor)
    hls_config['IOType'] = 'io_parallel'
    hls_config['Model']['Strategy'] = 'Resource'  # Strategy must be present for optimisation

    # Optimize the model
    # Note the change from ParameterEstimator to VivadoDSPEstimator, and that the hls4ml config is passed instead of the Keras model attributes
    optimized_model = optimize_keras_for_hls4ml(
        baseline_model, hls_config, VivadoDSPEstimator, scheduler,
        X_train, y_train, X_val, y_val, batch_size, epochs, optimizer, loss_fn, metric, increasing, rtol
    )

There are two more Vivado "optimizers": VivadoFFEstimator, aimed at reducing register utilisation, and VivadoMultiObjectiveEstimator, aimed at optimising BRAM and DSP utilisation.
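
They are used in the same way as VivadoDSPEstimator; the sketch below is illustrative, assuming VivadoFFEstimator is importable from the same module as VivadoDSPEstimator and accepts the same arguments:

.. code-block:: Python

    # Assumption: VivadoFFEstimator lives alongside VivadoDSPEstimator in vivado_objectives
    from hls4ml.optimization.objectives.vivado_objectives import VivadoFFEstimator

    # Same wrapper call as before; only the objective changes, now targeting register (FF) utilisation
    optimized_model = optimize_keras_for_hls4ml(
        baseline_model, hls_config, VivadoFFEstimator, scheduler,
        X_train, y_train, X_val, y_val, batch_size, epochs, optimizer, loss_fn, metric, increasing, rtol
    )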
Note: to ensure DSPs are optimized, "unrolled" Dense multiplication must be used before synthesising the HLS, by modifying the config:

.. code-block:: Python

    hls_config = config_from_keras_model(optimized_model)
    hls_config['Model']['DenseResourceImplementation'] = 'Unrolled'
    # Any additional hls4ml config, such as strategy, reuse factor, etc.
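
From here, the optimized model and the modified config can be passed to the converter as usual. The snippet below is an illustrative sketch using the standard hls4ml conversion flow; the output directory and backend are placeholder choices:

.. code-block:: Python

    import hls4ml

    # Convert the optimized Keras model with the modified config, then compile the HLS model
    hls_model = hls4ml.converters.convert_from_keras_model(
        optimized_model, hls_config=hls_config, output_dir='hls4ml_prj', backend='Vivado'
    )
    hls_model.compile()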
