
Commit e04b825

JP-Amboage (Juan P Garcia Amboage) and nickfraser authored

Fix (Notebooks): Corrected typos/small errors in the text cells with explanations (#1399)

Signed-off-by: Juan P Garcia Amboage <jgarciaa@XIRJGARCIAA01.amd.com>
Co-authored-by: Juan P Garcia Amboage <jgarciaa@XIRJGARCIAA01.amd.com>
Co-authored-by: nickfraser <icanlosh@gmail.com>

1 parent 40d0983 · commit e04b825

File tree: 5 files changed (+19 −19 lines changed)

notebooks/01_quant_tensor_quant_conv2d_overview.ipynb

Lines changed: 8 additions & 8 deletions
@@ -353,7 +353,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As expected, we have that the quantized value (in dequantized format) can be computer from its integer representation, together with zero-point and scale:"
+"As expected, we have that the quantized value (in dequantized format) can be computed from its integer representation, together with zero-point and scale:"
 ]
 },
 {
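As a side note on the relation the corrected cell describes, here is a minimal plain-PyTorch sketch of the affine mapping between the integer representation and the dequantized value; the scale, zero-point and integer values below are made up for illustration and are not taken from the notebook.

```python
import torch

# Hypothetical QuantTensor-style fields (illustrative values only).
scale = torch.tensor(0.05)
zero_point = torch.tensor(0.)
int_repr = torch.tensor([-12., 0., 7., 100.])  # signed 8-bit values stored as floats

# Affine dequantization: dequantized value = (integer representation - zero_point) * scale
dequant = (int_repr - zero_point) * scale
print(dequant)  # tensor([-0.6000, 0.0000, 0.3500, 5.0000])
```
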
@@ -422,7 +422,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Calling `is_valid` is relative expensive, so it should be using sparingly, but there are a few cases where a non-valid QuantTensor might be generated that is important to be aware of. Say we have two QuantTensor as output of the same quantized activation, and we want to sum them together:"
+"Calling `is_valid` is relatively expensive, so it should be used sparingly, but there are a few cases where a non-valid QuantTensor might be generated that are important to be aware of. Say we have two QuantTensor as output of the same quantized activation, and we want to sum them together:"
 ]
 },
 {
@@ -540,7 +540,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"`QuantTensor` implements `__torch_function__` to handle being called from torch functional operators (e.g. ops under `torch.nn.functional`). Passing a QuantTensor to supported ops that are invariant to quantization, e.g. max-pooling, preserve the the validity of a QuantTensor. Example:"
+"`QuantTensor` implements `__torch_function__` to handle being called from torch functional operators (e.g. ops under `torch.nn.functional`). Passing a QuantTensor to supported ops that are invariant to quantization, e.g. max-pooling, preserves the the validity of a QuantTensor. Example:"
 ]
 },
 {
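For context on the cell above, a small sketch of the quantization-invariant case, assuming the default `QuantIdentity` quantizer behaves as in the tutorial; the input shape is arbitrary.

```python
import torch
import torch.nn.functional as F
from brevitas.nn import QuantIdentity

quant_id = QuantIdentity(return_quant_tensor=True)
qt = quant_id(torch.randn(1, 3, 8, 8))

# Max-pooling only selects among already-quantized values, so scale, zero-point and
# bit-width carry over and the result should still be a valid QuantTensor.
pooled = F.max_pool2d(qt, kernel_size=2)
print(type(pooled), pooled.scale, pooled.bit_width)
```
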
@@ -634,7 +634,7 @@
 "source": [
 "## Input Quantization\n",
 "\n",
-"We can obtain a valid output `QuantTensor` by making sure that both input and weight of `QuantConv2d` are quantized. To do so, we can set a quantizer for `input_quant`. In this example we pick a *signed 8-bit* quantizer with *per-tensor floating-point scale factor*:"
+"We can obtain a valid output `QuantTensor` by making sure that both the inputs and weights of `QuantConv2d` are quantized. To do so, we can set a quantizer for `input_quant`. In this example we pick a *signed 8-bit* quantizer with *per-tensor floating-point scale factor*:"
 ]
 },
 {
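A sketch of the input-quantized `QuantConv2d` the corrected cell refers to; the layer sizes and the `bias=False` choice are assumptions for illustration, not read from the notebook.

```python
import torch
from brevitas.nn import QuantConv2d
from brevitas.quant import Int8ActPerTensorFloat

input_quant_conv = QuantConv2d(
    in_channels=2, out_channels=3, kernel_size=(3, 3), bias=False,
    input_quant=Int8ActPerTensorFloat,   # signed 8-bit, per-tensor float scale
    return_quant_tensor=True)

out_tensor = input_quant_conv(torch.randn(1, 2, 5, 5))
print(out_tensor.is_valid, out_tensor.bit_width)
```
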
@@ -708,7 +708,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"What happens internally is that the input tensor passed to `input_quant_conv` is being quantized before being passed to the convolution operator. That means we are now computing a convolution between two quantized tensors, which mimplies that the output of the operation is also quantized. As expected then `out_tensor` is marked as valid. \n",
+"What happens internally is that the input tensor passed to `input_quant_conv` is being quantized before being passed to the convolution operator. That means we are now computing a convolution between two quantized tensors, which implies that the output of the operation is also quantized. As expected, `out_tensor` is then marked as valid. \n",
 "\n",
 "Another important thing to notice is how the `bit_width` field of `out_tensor` is relatively high at *21 bits*. In Brevitas, the assumption is always that the output bit-width of an operator reflects the worst-case size of the *accumulator* required by that operation. In other terms, given the *size* of the input and weight tensors and their *bit-widths*, 21 is the bit-width that would be required to represent the largest possible output value that could be generated. This makes sure that the affine quantization invariant is always respected."
 ]
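A back-of-the-envelope version of the worst-case accumulator bound this cell describes. The sketch below counts both operands at their full unsigned 8-bit range and assumes a 3x3 kernel over 2 input channels; these sizes are assumptions chosen to land on 21 bits, and this is not necessarily Brevitas' exact internal formula.

```python
import math

def worst_case_acc_bit_width(input_bits, weight_bits, kernel_elems):
    max_input = 2 ** input_bits - 1    # worst-case unsigned magnitude of an input value
    max_weight = 2 ** weight_bits - 1  # worst-case unsigned magnitude of a weight value
    return math.ceil(math.log2(max_input * max_weight * kernel_elems))

print(worst_case_acc_bit_width(8, 8, kernel_elems=3 * 3 * 2))  # -> 21
```
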
@@ -799,7 +799,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"**Note**: how we are explicitly forcing `value`, `scale`, `zero_point` and `bit_width` to be floating-point `torch.Tensor`, as this is expected by Brevitas but it's currently not enforced automatically at initialization time.\n",
+"**Note** how we are explicitly forcing `value`, `scale`, `zero_point` and `bit_width` to be floating-point `torch.Tensor`, as this is expected by Brevitas but it's currently not enforced automatically at initialization time.\n",
 "\n",
 "If we now pass in `quant_tensor_input` to `return_quant_conv`, we will see that indeed the output is a valid 21-bit `QuantTensor`:"
 ]
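For reference, a heavily hedged sketch of building such a `QuantTensor` by hand, following the fields named in the cell; the constructor signature may differ across Brevitas versions, and all values below are made up.

```python
import torch
from brevitas.quant_tensor import QuantTensor

scale = torch.tensor(0.01)
zero_point = torch.tensor(0.)
bit_width = torch.tensor(8.)
# Dequantized values on the 8-bit signed grid defined by scale/zero_point.
value = scale * torch.randint(low=-128, high=128, size=(1, 2, 5, 5)).float()

quant_tensor_input = QuantTensor(
    value, scale=scale, zero_point=zero_point, bit_width=bit_width,
    signed=True, training=False)
```
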
@@ -918,7 +918,7 @@
 "source": [
 "## Output Quantization\n",
 "\n",
-"Let's now look at would have happened if we instead enabled output quantization:"
+"Let's now look at what would have happened if we had instead enabled output quantization:"
 ]
 },
 {
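A sketch of the output-quantized variant the cell hints at, again with illustrative layer sizes; here the accumulator result is re-quantized by the output quantizer rather than left at its worst-case width.

```python
from brevitas.nn import QuantConv2d
from brevitas.quant import Int8ActPerTensorFloat

output_quant_conv = QuantConv2d(
    in_channels=2, out_channels=3, kernel_size=(3, 3), bias=False,
    output_quant=Int8ActPerTensorFloat,  # re-quantize the accumulator to signed 8 bits
    return_quant_tensor=True)
```
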
@@ -1235,7 +1235,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Not all scenarios require bias quantization to depend on the scale factor of the input. In those cases, biases can be quantized the same way weights are quantized, and have their own scale factor. In Brevitas, a predefined quantizer that reflects this other scenario is `Int8BiasPerTensorFloatInternalScaling`. In this case then a valid quantized input is not required:"
+"Not all scenarios require bias quantization to depend on the scale factor of the input. In those cases, biases can be quantized in the same way weights are quantized, and have their own scale factor. In Brevitas, a predefined quantizer that reflects this other scenario is `Int8BiasPerTensorFloatInternalScaling`. In this case then a valid quantized input is not required:"
 ]
 },
 {
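A sketch of the internally scaled bias quantizer named in the cell; since the bias carries its own scale factor, a plain float input is accepted. Layer sizes are illustrative.

```python
import torch
from brevitas.nn import QuantConv2d
from brevitas.quant import Int8BiasPerTensorFloatInternalScaling

bias_quant_conv = QuantConv2d(
    in_channels=2, out_channels=3, kernel_size=(3, 3), bias=True,
    bias_quant=Int8BiasPerTensorFloatInternalScaling)

out = bias_quant_conv(torch.randn(1, 2, 5, 5))  # no quantized input required
```
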

notebooks/02_quant_activation_overview.ipynb

Lines changed: 6 additions & 6 deletions
@@ -141,7 +141,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"From an algorithmic point of view then the two different implementation are doing the same thing. However, as it will become clearer in later tutorials, there are currently some scenarios where picking one style over the other can make a difference when it comes to exporting to a format such as standard ONNX. In the meantime, we can just keep in mind that both alternatives exist."
+"From an algorithmic point of view the two different implementation are doing the same thing. However, as it will become clearer in later tutorials, there are currently some scenarios where picking one style over the other can make a difference when it comes to exporting to a format such as standard ONNX. In the meantime, we can just keep in mind that both alternatives exist."
 ]
 },
 {
@@ -251,7 +251,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As expected, a `QuantIdentity` with quantization disabled behaves like an identity function also when a `QuantTensor` is passed in. However, depending on whather `return_quant_tensor` is set to `False` or not, quantization metadata might be stripped out, i.e. the input `QuantTensor` is going to be returned as an implicitly quantized `torch.Tensor`:"
+"As expected, a `QuantIdentity` with quantization disabled behaves like an identity function also when a `QuantTensor` is passed in. However, depending on whether `return_quant_tensor` is set to `False` or not, quantization metadata might be stripped out, i.e. the input `QuantTensor` is going to be returned as an implicitly quantized `torch.Tensor`:"
 ]
 },
 {
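A sketch of the behaviour described in the corrected cell, assuming `act_quant=None` is how quantization is disabled; the tensor size is arbitrary.

```python
import torch
from brevitas.nn import QuantIdentity

make_qt = QuantIdentity(return_quant_tensor=True)                # produces a QuantTensor
strip = QuantIdentity(act_quant=None, return_quant_tensor=False)
keep = QuantIdentity(act_quant=None, return_quant_tensor=True)

qt = make_qt(torch.randn(4))
print(type(strip(qt)))  # quantization metadata stripped: plain torch.Tensor
print(type(keep(qt)))   # metadata propagated: still a QuantTensor
```
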
@@ -625,7 +625,7 @@
 "source": [
 "Regarding some premade activation quantizers, such as `Uint8ActPerTensorFloat`, `ShiftedUint8ActPerTensorFloat`, and `Int8ActPerTensorFloat`, a word of caution that anticipates some of the themes of the next tutorial.\n",
 "To minimize user interaction, Brevitas initializes scale and zero-point by collecting statistics for a number of training steps (by default 30). This can be seen as a sort of very basic calibration step, although it typically happens during training and with quantization already enabled. These statistics are accumulated in an exponential moving average that at end of the collection phase is used to initialize a learned *parameter*.\n",
-"During the collection phase then, the quantizer behaves differently between `train()` and `eval()` mode. In `train()` mode, the statistics for that particular batch are returned. In `eval()` mode, the exponential moving average is returned. After the collection phase is over the learned parameter is returned in both execution modes.\n",
+"During the collection phase then, the quantizer behaves differently between `train()` and `eval()` mode. In `train()` mode, the statistics for that particular batch are returned. In `eval()` mode, the exponential moving average is returned. After the collection phase is over, the learned parameter is returned in both execution modes.\n",
 "We can easily observe this behaviour with an example. Let's first define a quantized activation and two random input tensors:"
 ]
 },
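A sketch of the collection-phase behaviour described above, using a `QuantReLU` and random inputs; exact scale values will differ run to run.

```python
import torch
from brevitas.nn import QuantReLU

act = QuantReLU(return_quant_tensor=True)
x1, x2 = torch.randn(8, 16), torch.randn(8, 16)

act.train()
print(act(x1).scale)  # scale from the statistics of this batch
print(act(x2).scale)  # different batch, typically a different scale

act.eval()
print(act(x1).scale)  # the exponential moving average collected so far
```
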
@@ -818,7 +818,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In all of the examples that have currently been looked at in this tutorial, we have used per-tensor quantization. I.e., the output tensor of the activation, if quantized, was always quantized on a per-tensor level, with a single scale and zero-point quantization parameter per output tensor. However, one can also do per-channel quantization, where each output channel of the tensor has its own quantization parameters. In the example below, we look at per-tensor quantization of an input tensor that has 3 channels and 256 elements in the height and width dimensions. We purposely mutate the 1st channel to have its dynamic range be 3 times larger than the other 2 channels. We then feed it through a `QuantReLU`, whose default behavior is to quantize at a per-tensor granularity."
+"In all of the examples that have looked at so far in this tutorial, we have used per-tensor quantization. I.e., the output tensor of the activation, if quantized, was always quantized on a per-tensor level, with a single scale and zero-point quantization parameter per output tensor. However, one can also do per-channel quantization, where each output channel of the tensor has its own quantization parameters. In the example below, we look at per-tensor quantization of an input tensor that has 3 channels and 256 elements in the height and width dimensions. We purposely mutate the 1st channel to have its dynamic range be 3 times larger than the other 2 channels. We then feed it through a `QuantReLU`, whose default behavior is to quantize at a per-tensor granularity."
 ]
 },
 {
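A sketch of the per-tensor setup the cell describes: one channel is given roughly three times the dynamic range of the others and fed through a default `QuantReLU`, which produces a single shared scale.

```python
import torch
from brevitas.nn import QuantReLU

x = torch.rand(1, 3, 256, 256)
x[:, 0, :, :] *= 3.0  # inflate the dynamic range of the first channel

act = QuantReLU(return_quant_tensor=True)
y = act(x)
print(y.scale)  # a single per-tensor scale shared by all three channels
```
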
@@ -1069,15 +1069,15 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can see that the number of elements in the quantization scale of the outputted tensor is now 3, matching those of the 3-channel tensor! Furthermore, we see that each channel has an 8-bit quantization range that matches its data distribution, which is much more ideal in terms of reducing quantization mismatch. However, it's important to note that some hardware providers don't efficiently support per-channel quantization in production, so it's best to check if your targetted hardware will allow per-channel quantization."
+"We can see that the number of elements in the quantization scale of the output tensor is now 3, matching those of the 3-channel tensor! Furthermore, we see that each channel has an 8-bit quantization range that matches its data distribution, which is much more ideal in terms of reducing quantization mismatch. However, it's important to note that some hardware providers don't efficiently support per-channel quantization in production, so it's best to check if your targetted hardware will allow per-channel quantization."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Finally, a reminder that mixing things up is perfectly legal and encouraged in Brevitas.\n",
-"For example, a `QuantIdentity` with `act_quant=Int8ActPerTensorFloatMinMaxInit` is equivalent to a default `QuantHardTanh`, or conversely a `QuantHardTanh` with `act_quant=Int8ActPerTensorFloat` is equivalent to a default `QuantIdentity`. This is allowed by the fact that - as it will be explained in the next tutorial - the same layer can accept different keyword arguments when different quantizers are set. So a QuantIdentity with `act_quant=Int8ActPerTensorFloatMinMaxInit` is going to expect arguments `min_val` and `max_val` the same way a default `QuantHardTanh` would."
+"For example, a `QuantIdentity` with `act_quant=Int8ActPerTensorFloatMinMaxInit` is equivalent to a default `QuantHardTanh`, or conversely a `QuantHardTanh` with `act_quant=Int8ActPerTensorFloat` is equivalent to a default `QuantIdentity`. This is allowed by the fact that - as it will be explained in the next tutorial - the same layer can accept different keyword arguments when different quantizers are set. So a `QuantIdentity` with `act_quant=Int8ActPerTensorFloatMinMaxInit` is going to expect arguments `min_val` and `max_val` the same way a default `QuantHardTanh` would."
 ]
 }
 ],
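A sketch of the equivalence mentioned in the cell above; once the min/max-init quantizer is set, `QuantIdentity` takes the same `min_val`/`max_val` arguments a `QuantHardTanh` does. The range below is arbitrary.

```python
from brevitas.nn import QuantIdentity, QuantHardTanh
from brevitas.quant import Int8ActPerTensorFloatMinMaxInit

a = QuantIdentity(act_quant=Int8ActPerTensorFloatMinMaxInit, min_val=-1.0, max_val=1.0)
b = QuantHardTanh(min_val=-1.0, max_val=1.0)  # expected to quantize equivalently
```
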

notebooks/ONNX_export_tutorial.ipynb

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@
 "\n",
 "QCDQ is a style of representation introduced by Brevitas that extends the standard QDQ representation for quantization in ONNX. In Q(C)DQ export, before each operation, two (or three, in case of clipping) extra ONNX nodes are added:\n",
 "- `QuantizeLinear`: Takes as input a FP tensor, and quantizes it with a given zero-point and scale factor. It returns an (U)Int8 tensor.\n",
-"- `Clip` (Optional): Takes as input an INT8 tensor, and, given ntenger min/max values, restricts its range.\n",
+"- `Clip` (Optional): Takes as input an INT8 tensor, and, given integer min/max values, restricts its range.\n",
 "- `DeQuantizeLinear`: Takes as input an INT8 tensor, and converts it to its FP equivalent with a given zero-point and scale factor.\n",
 "\n",
 "There are several implications associated with this set of operations:\n",

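A plain-PyTorch sketch of what the Q / (optional) C / DQ node chain computes, with made-up scale, zero-point and clipping bounds; it mirrors the description above rather than any exact ONNX runtime kernel.

```python
import torch

x = torch.randn(4)
scale, zero_point = 0.02, 0
qmin, qmax = -100, 100  # hypothetical clipping bounds tighter than the int8 range

q = torch.clamp(torch.round(x / scale) + zero_point, -128, 127)  # QuantizeLinear -> int8 range
q = torch.clamp(q, qmin, qmax)                                   # Clip (optional)
dq = (q - zero_point) * scale                                    # DequantizeLinear -> FP
```
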
notebooks/minifloat_mx_tutorial.ipynb

Lines changed: 3 additions & 3 deletions
@@ -133,16 +133,16 @@
 "\n",
 "The reason for this is shaping. When quantizing a tensor with shapes [O, I], where O is output channel and I is input channel, with groupsize k, groupwise quantization is normally represented as follow:\n",
 "\n",
-"- Tensor with shapes [O, k, I/k]\n",
-"- Scales with shapes [O, k, 1]\n",
+"- Tensor with shapes [O, I/k, k]\n",
+"- Scales with shapes [O, I/k, 1]\n",
 "- Zero point same as scale\n",
 "\n",
 "The alternative to this representation is to have all three tensors with shapes [O,I], with a massive increase in memory utilization, especially with QAT + gradients.\n",
 "\n",
 "The underscored attributes will have the compressed shapes, while the properties (non-underscored naming) will dynamically compute the expanded version of the property. This means:\n",
 "```python\n",
 "quant_tensor.scale_.shape\n",
-"# This will print [O, k, 1]\n",
+"# This will print [O, I/k, 1]\n",
 "quant_tensor.scale.shape\n",
 "# This will print [O, I]\n",
 "```\n",

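A plain-PyTorch sketch of the compressed vs. expanded shapes described above; `O`, `I` and the group size `k` are arbitrary, and the `/127` scale rule is just a stand-in for a real quantizer.

```python
import torch

O, I, k = 4, 16, 8  # hypothetical output channels, input channels, group size

weight = torch.randn(O, I)
grouped = weight.view(O, I // k, k)                       # compressed tensor: [O, I/k, k]
scale_ = grouped.abs().amax(dim=-1, keepdim=True) / 127.  # per-group scale:   [O, I/k, 1]

# Expanded view, as the non-underscored property is described to provide:
scale = scale_.expand(O, I // k, k).reshape(O, I)         # [O, I]
print(scale_.shape, scale.shape)
```
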
notebooks/quantized_recurrent.ipynb

Lines changed: 1 addition & 1 deletion
@@ -396,7 +396,7 @@
 "`QuantRNN` follows the same `forward` interface of `torch.nn.RNN`, with a couple of exceptions. Packed variable length inputs are currently not supported, and unbatched inputs are not supported. \n",
 "Other than that, everything else is the same. \n",
 "\n",
-"Inputs are expected to have shape `(batch, sequence, features)` for `batch_first=False`, or `(sequence, batch, features)` for `batch_first=True`. The layer returns a tuple with `(outputs, hidden_states)`, where `outputs` has shape `(sequence, batch, hidden_size * num_directions)` with `num_directions=2` when `bidirectional=True`, for `batch_first=False`, or `(batch, sequence, hidden_size * num_directions)` for `batch_first=True`, while `hidden_states` has shape `(num_directions * num_layers, batch, hidden_size)`."
+"Inputs are expected to have shape `(sequence, batch, features)` for `batch_first=False`, or `(batch, sequence, features)` for `batch_first=True`. The layer returns a tuple with `(outputs, hidden_states)`, where `outputs` has shape `(sequence, batch, hidden_size * num_directions)` with `num_directions=2` when `bidirectional=True`, for `batch_first=False`, or `(batch, sequence, hidden_size * num_directions)` for `batch_first=True`, while `hidden_states` has shape `(num_directions * num_layers, batch, hidden_size)`."
 ]
 },
 {
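A shape-check sketch matching the corrected description; the constructor arguments are assumed to mirror `torch.nn.RNN`, and the sizes are arbitrary.

```python
import torch
from brevitas.nn import QuantRNN

rnn = QuantRNN(input_size=10, hidden_size=20, batch_first=False)
inp = torch.randn(5, 3, 10)   # (sequence, batch, features) for batch_first=False
outputs, hidden_states = rnn(inp)
print(outputs.shape)          # (5, 3, 20): (sequence, batch, hidden_size * num_directions)
print(hidden_states.shape)    # (1, 3, 20): (num_directions * num_layers, batch, hidden_size)
```
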
