
Commit 4141730

Update reduced_precision.md (#402)
1 parent 3efb89d commit 4141730

File tree

1 file changed (+7 −7 lines)


docs/usage/reduced_precision.md

Lines changed: 7 additions & 7 deletions
@@ -69,7 +69,7 @@ You could wrap this dataset for calibration, by defining a new dataset which ret
 
 ```python
 from torchvision.datasets import ImageFolder
-from torchvision.transforms import ToTensor, Compose, Normalize
+from torchvision.transforms import ToTensor, Compose, Normalize, Resize
 
 
 class ImageFolderCalibDataset():
@@ -78,9 +78,9 @@ class ImageFolderCalibDataset():
         self.dataset = ImageFolder(
             root=root,
             transform=Compose([
-                transforms.Resize((224, 224)),
-                transforms.ToTensor(),
-                transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+                Resize((224, 224)),
+                ToTensor(),
+                Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
             ])
         )
 
@@ -105,7 +105,7 @@ model_trt = torch2trt(model, [data], int8_calib_dataset=dataset)
 
 To override the default calibration algorithm that torch2trt uses, you can set the ``int8_calib_algoirthm``
 to the [``tensorrt.CalibrationAlgoType``](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Int8/Calibrator.html#iint8calibrator)
-that you wish to use. For example, to use the minmax calibration algoirthm you would do
+that you wish to use. For example, to use the minmax calibration algorithm you would do
 
 ```python
 import tensorrt as trt
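The code block above is truncated by the diff; a minimal sketch of how the call presumably continues (not runnable without TensorRT and a CUDA device; the keyword spelling `int8_calib_algorithm` is an assumption, `model`, `data`, and `dataset` are the names defined earlier in the doc):

```python
import tensorrt as trt
from torch2trt import torch2trt

# Sketch: request the minmax calibrator rather than the default
# entropy-based algorithm when building the int8 engine.
model_trt = torch2trt(
    model, [data],
    int8_mode=True,
    int8_calib_dataset=dataset,
    int8_calib_algorithm=trt.CalibrationAlgoType.MINMAX_CALIBRATION,
)
```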
@@ -129,7 +129,7 @@ The data type of input and output bindings in TensorRT are determined by the ori
 PyTorch module input and output data types.
 This does not directly impact whether the TensorRT optimizer will internally use fp16 or int8 precision.
 
-For example, to create a model with half precision bindings, you would do the following
+For example, to create a model with fp32 precision bindings, you would do the following
 
 ```python
 model = model.float()
@@ -149,4 +149,4 @@ model_trt = torch2trt(model, [data], fp16_mode=True)
 ```
 
 Now, the input and output bindings of the model are half precision, and internally the optimizer may
-choose to select fp16 layers as well.
+choose to select fp16 layers as well.
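The binding-dtype distinction these hunks clarify comes down to the module's parameter (and input) dtype before conversion. A torch-only sketch of the `.half()`/`.float()` effect, using a hypothetical toy module in place of the model from the docs:

```python
import torch

# Toy module standing in for the model in the documentation (hypothetical).
model = torch.nn.Linear(4, 2)

# Default parameters are fp32, so torch2trt would create fp32 bindings.
print(next(model.parameters()).dtype)  # torch.float32

# After .half(), parameters are fp16 -> half precision bindings.
model = model.half()
print(next(model.parameters()).dtype)  # torch.float16

# .float() restores fp32 parameters -> fp32 bindings.
model = model.float()
print(next(model.parameters()).dtype)  # torch.float32
```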

0 commit comments