* Add ResNet preprocessing model
Signed-off-by: Joaquin Anton <[email protected]>
* Support sequence in tests
Signed-off-by: Joaquin Anton <[email protected]>
> Compared with the fp32 ResNet50, int8 ResNet50's Top-1 accuracy drop ratio is 0.27%, Top-5 accuracy drop ratio is 0.01% and performance improvement is 1.82x.
>
> Note that performance depends on the test hardware.
>
> Performance data here is collected with Intel® Xeon® Platinum 8280 Processor, 1 socket, 4 cores per instance, CentOS Linux 8.3, data batch size is 1.
|Model |Download |Download (with sample test data)| ONNX version |Opset version|
|------|---------|--------------------------------|--------------|-------------|
The inference was done using a JPEG image.
### Preprocessing

The image needs to be preprocessed before it is fed to the network.

The first step is to extract a 224x224 crop from the center of the image. To do this, the image is first scaled to a minimum size of 256x256 while keeping the aspect ratio: the shortest side of the image is resized to 256 and the other side is scaled accordingly. After that, the image is normalized with mean = 255*[0.485, 0.456, 0.406] and std = 255*[0.229, 0.224, 0.225]. The last step is to transpose it from HWC to CHW layout.

The described preprocessing steps can be represented with an ONNX model.
Check [imagenet_preprocess.py](../imagenet_preprocess.py) for some reference Python and MXNet implementations.
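As a rough illustration, the resize / center-crop / normalize / transpose steps described above can be sketched in plain NumPy. This is not the reference implementation: nearest-neighbor resizing is used here only to keep the sketch self-contained, whereas a real pipeline would use proper image resampling (e.g. bilinear).

```python
import numpy as np

def preprocess(img):
    """Sketch of ResNet50 preprocessing for an HWC uint8 RGB image.

    Steps: resize shortest side to 256 (nearest-neighbor here, for
    illustration only), center-crop 224x224, normalize with ImageNet
    statistics scaled to [0, 255], transpose HWC -> CHW.
    """
    h, w, _ = img.shape
    scale = 256.0 / min(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resize via index mapping (illustrative only).
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[rows][:, cols]
    # Extract a 224x224 crop from the center.
    top = (new_h - 224) // 2
    left = (new_w - 224) // 2
    crop = resized[top:top + 224, left:left + 224].astype(np.float32)
    # Normalize with mean = 255*[0.485, 0.456, 0.406],
    # std = 255*[0.229, 0.224, 0.225].
    mean = 255 * np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = 255 * np.array([0.229, 0.224, 0.225], dtype=np.float32)
    crop = (crop - mean) / std
    # Transpose HWC -> CHW.
    return crop.transpose(2, 0, 1)

img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
out = preprocess(img)
print(out.shape)  # (3, 224, 224)
```

The network then expects a batch dimension, i.e. an NCHW tensor of shape (N, 3, 224, 224).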
### Output
The model outputs image scores for each of the [1000 classes of ImageNet](../synset.txt).
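The raw scores can be mapped to class probabilities with a softmax, from which the top-5 predictions are read off. A minimal NumPy sketch (the `top5` helper is illustrative, not part of the model package):

```python
import numpy as np

def top5(scores):
    """Return the top-5 class indices and their softmax probabilities
    for a vector of raw class scores (illustrative helper)."""
    scores = scores - scores.max()  # subtract max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    top = np.argsort(probs)[::-1][:5]
    return top, probs[top]

scores = np.random.randn(1000).astype(np.float32)
indices, probs = top5(scores)
print(indices.shape)  # (5,)
```

The indices map to the labels in [synset.txt](../synset.txt).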
We used MXNet as the framework, with Gluon APIs, to perform validation.
ResNet50-int8 and ResNet50-qdq are obtained by quantizing ResNet50-fp32 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with onnxruntime backend to perform quantization. View the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/resnet50/quantization/ptq/README.md) to understand how to use Intel® Neural Compressor for quantization.
### Environment
onnx: 1.7.0
onnxruntime: 1.6.0+
### Prepare model