Commit 766e9fb

Merge pull request Xilinx#1023 from Xilinx/fix/minor_fixes
Minor fixes to documentation and examples for dev branch
2 parents 82faae7 + b318693 commit 766e9fb

File tree

14 files changed (+28, -41 lines)

docs/finn/developers.rst

Lines changed: 0 additions & 2 deletions
@@ -2,8 +2,6 @@
 Developer documentation
 ***********************
 
-.. note:: **This page is under construction.**
-
 This page is intended to serve as a starting point for new FINN developers.
 Power users may also find this information useful.

docs/finn/getting_started.rst

Lines changed: 1 addition & 1 deletion
@@ -125,7 +125,7 @@ General FINN Docker tips
 
 Supported FPGA Hardware
 =======================
-**Vivado IPI support for any Xilinx FPGA:** FINN generates a Vivado IP Integrator (IPI) design from the neural network with AXI stream (FIFO) in-o>
+**Vivado IPI support for any Xilinx FPGA:** FINN generates a Vivado IP Integrator (IPI) design from the neural network with AXI stream (FIFO) in-out interfaces, which can be integrated onto any Xilinx-AMD FPGA as part of a larger system. It’s up to you to take the FINN-generated accelerator (what we call “stitched IP” in the tutorials), wire it up to your FPGA design and send/receive neural network data to/from the accelerator.
 
 **Shell-integrated accelerator + driver:** For quick deployment, we target boards supported by `PYNQ <http://www.pynq.io/>`_ . For these platforms, we can build a full bitfile including DMAs to move data into and out of the FINN-generated accelerator, as well as a Python driver to launch the accelerator. We support the Pynq-Z1, Pynq-Z2, Kria SOM, Ultra96, ZCU102 and ZCU104 boards, as well as Alveo cards.
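The two integration paths described above map onto different builder outputs. A hedged sketch in Python: the key names follow FINN's DataflowBuildConfig JSON (as used in this repo's build_dataflow example), but treat the concrete values, and especially the FPGA part string, as illustrative assumptions rather than a tested build recipe.

```python
import json

# Path 1: "stitched IP" for manual integration into a larger Vivado design.
# (fpga_part value is an illustrative placeholder, not a recommendation.)
stitched_ip_cfg = {
    "synth_clk_period_ns": 10.0,
    "fpga_part": "xc7z020clg400-1",
    "generate_outputs": ["stitched_ip"],
}

# Path 2: shell-integrated accelerator + Python driver for a PYNQ board.
shell_cfg = {
    "synth_clk_period_ns": 10.0,
    "board": "Pynq-Z1",
    "shell_flow_type": "vivado_zynq",
    "generate_outputs": ["bitfile", "pynq_driver"],
}

print(json.dumps(shell_cfg, indent=2))
```

Either dict would be serialized to JSON and passed to the builder; the difference is only in which outputs you ask for and whether a board shell is targeted.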

docs/finn/hw_build.rst

Lines changed: 0 additions & 4 deletions
@@ -87,8 +87,4 @@ transformation for Zynq, and the `VitisLink` transformation for Alveo.
 Deployment
 ==========
 
-
-Deployment
------------
-
 The bitfile and the driver file(s) can be copied to the PYNQ board and be executed there. For more information see the description in the `end2end_example <https://github.com/Xilinx/finn/tree/main/notebooks/end2end_example>`_ Jupyter notebooks.

docs/finn/img/mem_mode.png (18.2 KB, binary file changed)

docs/finn/internals.rst

Lines changed: 1 addition & 1 deletion
@@ -181,7 +181,7 @@ Disadvantages:
 
 Internal_decoupled mode
 ------------------------
-In *internal_decoupled* mode a different variant of the MVAU with three ports is used. Besides the input and output streams, which are fed into the circuit via Verilog FIFOs, there is another input, which is used to stream the weights. For this the `streaming MVAU <https://github.com/Xilinx/finn-hlslib/blob/master/mvau.hpp#L214>`_ from the finn-hls library is used. To make the streaming possible a Verilog weight streamer component accesses the weight memory and sends the values via another FIFO to the MVAU. This component can be found in the `finn-rtllib <https://github.com/Xilinx/finn/tree/dev/finn-rtllib>`_ under the name *memstream.v*. For the IP block generation this component, the IP block resulting from the synthesis of the HLS code of the streaming MVAU and a FIFO for the weight stream are combined in a verilog wrapper. The weight values are saved in .dat files and stored in the weight memory from which the weight streamer reads. The resulting verilog component, which is named after the name of the node and has the suffix "_memstream.v", exposes only two ports to the outside, the data input and output. It therefore behaves externally in the same way as the MVAU in *internal_embedded* mode.
+In *internal_decoupled* mode a different variant of the MVAU with three ports is used. Besides the input and output streams, which are fed into the circuit via Verilog FIFOs, there is another input, which is used to stream the weights. For this the `streaming MVAU <https://github.com/Xilinx/finn-hlslib/blob/master/mvau.hpp#L214>`_ from the finn-hls library is used. To make the streaming possible a Verilog weight streamer component accesses the weight memory and sends the values via another FIFO to the MVAU. This component can be found in the `finn-rtllib <https://github.com/Xilinx/finn/tree/dev/finn-rtllib>`_ under the name *memstream.v*. For the IP block generation this component, the IP block resulting from the synthesis of the HLS code of the streaming MVAU and a FIFO for the weight stream are combined. The weight values are saved in .dat files and stored in the weight memory from which the weight streamer reads. The resulting verilog component, which is named after the name of the node and has the suffix "_memstream.v", exposes only two ports to the outside, the data input and output. It therefore behaves externally in the same way as the MVAU in *internal_embedded* mode.
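As a back-of-the-envelope companion to the description above, the depth and width of the streamed weight memory can be estimated from the layer's folding. This is a hedged sketch following FINN's usual MVAU folding conventions (MW/MH for the matrix dimensions, SIMD/PE for input/output parallelism); the formulas are illustrative, not extracted from memstream.v.

```python
# Rough sketch of weight-stream geometry for a streamed MVAU, assuming
# FINN-style folding: MW input channels, MH output channels, SIMD
# input-parallel lanes, PE output-parallel lanes, wbits bits per weight.
def weight_stream_geometry(MW, MH, SIMD, PE, wbits):
    assert MW % SIMD == 0 and MH % PE == 0, "folding must divide the layer"
    depth = (MW // SIMD) * (MH // PE)   # words the streamer replays per image
    width = PE * SIMD * wbits           # bits pushed into the FIFO per cycle
    return depth, width

# Example: a 784x64 fully-connected layer with 1-bit weights.
depth, width = weight_stream_geometry(MW=784, MH=64, SIMD=49, PE=16, wbits=1)
print(depth, width)  # 64 words of 784 bits each
```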
 
 Advantages:

docs/requirements.txt

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 brevitas@git+https://github.com/Xilinx/brevitas@master#egg=brevitas_examples
 dataclasses-json==0.5.7
-docutils==0.17.1
+docutils==0.19
 gspread==3.6.0
 importlib_resources
 IPython
@@ -9,7 +9,7 @@ netron
 pytest
 pyverilator@git+https://github.com/maltanar/pyverilator@master#egg=pyverilator
 qonnx@git+https://github.com/fastmachinelearning/qonnx@main#egg=qonnx
-sphinx_rtd_theme==0.5.0
+sphinx_rtd_theme==2.0.0
 torch
 torchvision
 tqdm
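Version pins like the ones bumped above can be sanity-checked programmatically. A small stdlib-only sketch; the parsing here is a deliberate simplification, not pip's full requirements grammar (no extras, markers, or ranges).

```python
import re

# Simplified "name==version" pin parser; URL/VCS requirements are skipped.
def parse_pin(line):
    m = re.fullmatch(r"([A-Za-z0-9_.\-]+)==([\w.]+)", line.strip())
    return (m.group(1), m.group(2)) if m else None

lines = [
    "docutils==0.19",
    "sphinx_rtd_theme==2.0.0",
    "qonnx@git+https://github.com/fastmachinelearning/qonnx@main#egg=qonnx",
]
pins = {name: ver for name, ver in filter(None, map(parse_pin, lines))}
print(pins)  # {'docutils': '0.19', 'sphinx_rtd_theme': '2.0.0'}
```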

notebooks/advanced/4_advanced_builder_settings.ipynb

Lines changed: 11 additions & 11 deletions
@@ -46,7 +46,7 @@
 "id": "5dbed63f",
 "metadata": {},
 "source": [
-"## Introduction to the CNV-w2a2 network <a id=\"intro_cnv\"></a>\n",
+"## Introduction to the CNV-w2a2 network <a id='intro_cnv'></a>\n",
 "\n",
 "The particular quantized neural network (QNN) we will be targeting in this notebook is referred to as CNV-w2a2 and it classifies 32x32 RGB images into one of ten CIFAR-10 classes. All weights and activations in this network are quantized to two bit, with the exception of the input (which is RGB with 8 bits per channel) and the final output (which is 32-bit numbers). It is similar to the convolutional neural network used in the [cnv_end2end_example](../end2end_example/bnn-pynq/cnv_end2end_example.ipynb) Jupyter notebook.\n",
 "\n",
@@ -116,7 +116,7 @@
 "id": "c764ed76",
 "metadata": {},
 "source": [
-"## Quick recap, how to setup up default builder flow for resource estimations <a id=\"recap_builder\"></a>"
+"## Quick recap, how to setup up default builder flow for resource estimations <a id='recap_builder'></a>"
 ]
 },
 {
@@ -305,7 +305,7 @@
 "id": "7e561a91",
 "metadata": {},
 "source": [
-"## Build steps <a id=\"build_step\"></a>"
+"## Build steps <a id='build_step'></a>"
 ]
 },
 {
@@ -369,7 +369,7 @@
 "id": "e9c2c97f",
 "metadata": {},
 "source": [
-"### How to create a custom build step <a id=\"custom_step\"></a>"
+"### How to create a custom build step <a id='custom_step'></a>"
 ]
 },
 {
@@ -643,7 +643,7 @@
 "id": "a6edf5c4-9213-45cd-834f-615c12685d9e",
 "metadata": {},
 "source": [
-"## Specialize layers configuration json <a id=\"specialize_layers\"></a>"
+"## Specialize layers configuration json <a id='specialize_layers'></a>"
 ]
 },
 {
@@ -675,7 +675,7 @@
 "id": "bc90b589-7a92-4996-9704-02736ac4e60e",
 "metadata": {},
 "source": [
-"The builder flow step before `step_specialize_layers` generates a template json file to set the preferred implementation style per layer. We can copy it from one of the previous runs to this folder and manipulate it to pass it to a new build."
+"The builder flow step before `step_create_dataflow_partition` generates a template json file to set the preferred implementation style per layer. We can copy it from one of the previous runs to this folder and manipulate it to pass it to a new build."
 ]
 },
 {
@@ -934,7 +934,7 @@
 "id": "5ffbadd1",
 "metadata": {},
 "source": [
-"## Folding configuration json <a id=\"folding_config\"></a>"
+"## Folding configuration json <a id='folding_config'></a>"
 ]
 },
 {
@@ -1270,7 +1270,7 @@
 "id": "4a675834",
 "metadata": {},
 "source": [
-"## Additional builder arguments <a id=\"builder_arg\"></a>"
+"## Additional builder arguments <a id='builder_arg'></a>"
 ]
 },
 {
@@ -1294,7 +1294,7 @@
 "id": "e0c167f4",
 "metadata": {},
 "source": [
-"### Verification steps <a id=\"verify\"></a>"
+"### Verification steps <a id='verify'></a>"
 ]
 },
 {
@@ -1505,7 +1505,7 @@
 "id": "4609f94d",
 "metadata": {},
 "source": [
-"### Other builder arguments <a id=\"other_args\"></a>"
+"### Other builder arguments <a id='other_args'></a>"
 ]
 },
 {
@@ -1610,7 +1610,7 @@
 "id": "3b98eb65",
 "metadata": {},
 "source": [
-"### Example for additional builder arguments & bitfile generation <a id=\"example_args\"></a>"
+"### Example for additional builder arguments & bitfile generation <a id='example_args'></a>"
 ]
 },
 {
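The notebook's specialize-layers configuration mentioned in one of the hunks above is a small JSON file mapping node names to a preferred implementation style. A hedged sketch of such a file: the node names (Thresholding_0, MVAU_0) are hypothetical placeholders, and the `preferred_impl_style` key follows the HLS-vs-RTL convention the notebook describes.

```python
import json

# Hedged sketch of a specialize-layers configuration. Each entry picks
# either the HLS or the RTL variant of that layer; node names here are
# hypothetical placeholders.
specialize_cfg = {
    "Defaults": {},
    "Thresholding_0": {"preferred_impl_style": "rtl"},
    "MVAU_0": {"preferred_impl_style": "hls"},
}

# Round-trip through JSON, as the builder would read it from disk.
reloaded = json.loads(json.dumps(specialize_cfg, indent=2))
print(reloaded["Thresholding_0"]["preferred_impl_style"])  # rtl
```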

src/finn/qnn-data/build_dataflow/build.py

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
-# Copyright (c) 2020 Xilinx, Inc.
+# Copyright (C) 2020-2022 Xilinx, Inc.
+# Copyright (C) 2022-2024, Advanced Micro Devices, Inc.
 # All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without

src/finn/qnn-data/build_dataflow/dataflow_build_config.json

Lines changed: 2 additions & 1 deletion
@@ -4,7 +4,8 @@
 "mvau_wwidth_max": 10000,
 "synth_clk_period_ns": 10.0,
 "board": "Pynq-Z1",
-"standalone_thresholds": true,
+"standalone_thresholds": false,
+"folding_config_file": "folding_config.json",
 "shell_flow_type": "vivado_zynq",
 "verify_save_rtlsim_waveforms": true,
 "force_python_rtlsim": true,
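Since the build configuration is plain JSON, the change above can be checked programmatically. A small sketch that reconstructs the fields shown in this diff and round-trips them, as the builder would read them from disk:

```python
import json

# The dataflow build configuration after this commit's change:
# standalone_thresholds flipped to false, folding_config_file added.
cfg = {
    "mvau_wwidth_max": 10000,
    "synth_clk_period_ns": 10.0,
    "board": "Pynq-Z1",
    "standalone_thresholds": False,
    "folding_config_file": "folding_config.json",
    "shell_flow_type": "vivado_zynq",
    "verify_save_rtlsim_waveforms": True,
    "force_python_rtlsim": True,
}

reloaded = json.loads(json.dumps(cfg, indent=2))
print(reloaded["folding_config_file"])  # folding_config.json
```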

src/finn/qnn-data/build_dataflow/folding_config.json

Lines changed: 2 additions & 3 deletions
@@ -1,8 +1,7 @@
 {
 "Defaults": {},
-"Thresholding_hls_0": {
-"PE": 49,
-"ram_style": "distributed"
+"Thresholding_rtl_0": {
+"PE": 49
 },
 "MVAU_hls_0": {
 "PE": 16,
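As a sanity check on the PE value kept above: in FINN, a layer's PE setting must evenly divide the number of elements it parallelizes over. A hedged sketch of the arithmetic, assuming this config targets the MNIST example where the first Thresholding layer sees 28 x 28 = 784 values (an assumption about the workload, not stated in the diff):

```python
# PE must divide the element count; the quotient is the folding factor,
# i.e. how many cycles the layer needs per input.
def folding_factor(num_elems, pe):
    assert num_elems % pe == 0, "PE must divide the element count"
    return num_elems // pe

print(folding_factor(28 * 28, 49))  # 16
```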
