Creation of a qkeras zoo. #66
Conversation
…uicknet, quicknet_small, quicknet_large, alexnet and birealnet. Each network is built in its own class and tested with the methods in the utils.py file.
alexnet_name = "alexNet"

class alexnet():
Remove the "()", and also follow https://google.github.io/styleguide/pyguide.html#3162-naming-conventions.
zhuangh left a comment
@FrancescoLoro thank you so much for the license modification.
I added some comments to help us better understand the benchmark models.
FYI, we will discuss the larq part internally this week to see whether we want to include it in this PR.
BTW, please also try to apply the Google Python style guide in the code for readability. Thanks!
| """ | ||
| Class to create and load weights of: alexnet | ||
| Attributes: | ||
| network_name: Name of the network |
One sentence in the first line of the docstring, followed by one empty line.
ref: https://google.github.io/styleguide/pyguide.html#384-classes
Could you use 2 or 4 spaces for the indentation? Thanks.
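The requested docstring shape, sketched on this class (the summary sentence and attribute list are taken from the diff above; everything else is illustrative):

```python
class Alexnet:
  """Creates and loads weights of alexnet.

  Attributes:
    network_name: Name of the network.
  """

  def __init__(self):
    # hypothetical attribute initialization, matching the diff above
    self.network_name = "alexNet"
```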
@staticmethod
def add_qkeras_conv_block(model, filters_num, kernel_size, pool,
                          qnt, strides=1):
    """
Move one sentence to the first line, followed by an empty line.
ref: https://google.github.io/styleguide/pyguide.html#383-functions-and-methods
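For example, the requested shape applied to this method (the class name and the parameter descriptions are placeholders, not taken from the PR):

```python
class BenchmarkModel:
  """Hypothetical host class, only here to make the sketch runnable."""

  @staticmethod
  def add_qkeras_conv_block(model, filters_num, kernel_size, pool,
                            qnt, strides=1):
    """Adds a quantized convolution block to the model.

    Args:
      model: Model where to add the sequence.
      filters_num: Number of filters for Conv2D.
      kernel_size: Kernel size of the quantized Conv2D.
      pool: Whether to append an AveragePooling2D layer.
      qnt: Quantizer used for the activations.
      strides: Strides of the convolution.
    """
    del model, filters_num, kernel_size, pool, qnt, strides  # sketch only
```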
Add a sequence of: Activation quantization, Quantized Conv2D, reshape,
Average Pooling, Conv2D, 2x BatchNormalization
:param model: model where to add the sequence
:param filters_num: number of filters for Conv2D
for _ in range(0, 3):
    self.add_qkeras_residual_block(qkeras_biRealNet, 512)
qkeras_biRealNet.add(tf.keras.layers.AveragePooling2D(pool_size=(7, 7)))
Any reason for not using power-of-two numbers for pool_size? For example, (2, 2) or (4, 4).
We need to be careful here regarding precision: if the pool_size is not a power of two (49 = 7x7 here), then
avg = sum(pool_element) / pool_size = sum(pool_element) / 49, which introduces precision loss if we use fixed point or integers to hold avg, right?
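A small numeric sketch of this precision concern (the Q8 format and the sample sum are chosen arbitrarily for illustration): with a power-of-two window the average lands exactly on the fixed-point grid, while dividing by 49 generally does not.

```python
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # Q8 fixed point: representable values are multiples of 1/256

def fixed_point_avg(pool_sum, pool_size):
  """Returns (average rounded to the Q8 grid, exact float average)."""
  exact = pool_sum / pool_size
  quantized = round(exact * SCALE) / SCALE
  return quantized, exact

# 8x8 window (64 elements): 100/64 = 1.5625 = 400/256, exactly representable
q64, e64 = fixed_point_avg(100, 64)
# 7x7 window (49 elements): 100/49 = 2.0408... falls between Q8 grid points
q49, e49 = fixed_point_avg(100, 49)
print(q64 == e64, q49 == e49)  # True False
```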
model.add(tf.keras.layers.Conv2D(filters_num, (1, 1), padding="same",
                                 use_bias=False))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.BatchNormalization())
Why consecutive BatchNormalization layers?
In the next commit I've done a complete refactor of that class, also fixing that mistake.
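For reference, why back-to-back BatchNormalization is redundant at inference time: each layer reduces to a per-channel affine map, and the composition of two affine maps is itself affine, so the second layer adds parameters without expressive power. A quick numpy check (all BN parameters below are made up for the demo):

```python
import numpy as np

def bn_inference(x, gamma, beta, mean, var, eps=1e-3):
  # inference-mode BatchNormalization: a fixed affine map of its input
  return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# two stacked BN layers with arbitrary parameters
y = bn_inference(bn_inference(x, 1.5, 0.2, 0.1, 1.0), 0.8, -0.3, 0.0, 2.0)

# fit a single affine map y = a*x + b; the residual is numerically zero,
# so a single BatchNormalization could express the same transform
a, b = np.polyfit(x, y, 1)
print(np.max(np.abs((a * x + b) - y)))
```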
BIREALNET_NAME = "biRealNet"

class BirealNet:
Is it from this paper: https://arxiv.org/pdf/1808.00278.pdf?
…efactor of biRealNet fixing consecutive batch norm layers
Results on 100 random samples:

Alexnet:
Mean MSE for quickNet -> 0.0
Why is it called quickNet under Alexnet?
Absolute errors for quickNet -> 0

Binary DenseNet e28
Mean MSE for quickNet -> 0.0
This folder contains a collection of networks written using two different frameworks: qkeras and larq.
Each network can be built and tested on a randomly generated dataset; the output consists of two measurements: the mean MSE between the outputs of the two networks, and the number of predictions that do not coincide with the class predicted by the other network.
The folder is divided into:
Link to the folder with all weights: https://drive.google.com/drive/folders/1pGZ6dGWvJyc9aH-TOQohm0PhORihQZ5I?usp=sharing
Already added networks: quicknet, quicknet_small, quicknet_large, alexnet, and birealnet.
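The two measurements described above could be sketched as follows (the function name, shapes, and sample count are assumptions, not taken from the repository; `model_a` / `model_b` stand for the qkeras and larq builds of the same network, or any callable returning logits):

```python
import numpy as np

def compare_models(model_a, model_b, x):
  """Returns (mean MSE between outputs, #samples whose argmax differs)."""
  out_a = np.asarray(model_a(x))
  out_b = np.asarray(model_b(x))
  mean_mse = float(np.mean((out_a - out_b) ** 2))
  mismatches = int(np.sum(out_a.argmax(axis=-1) != out_b.argmax(axis=-1)))
  return mean_mse, mismatches

# usage: 100 random samples with a hypothetical 10-class output
x = np.random.default_rng(0).random((100, 10), dtype=np.float32)
```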