Note that I use the simplest method to parse the command line args, so please do **not** change the order of the args in the above command.
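To illustrate why the order matters, here is a minimal sketch (not the repo's actual code, which is C++) of strictly positional argument parsing like the one described above; the dict keys are illustrative names:

```python
import sys

def parse_args(argv):
    # Arguments are consumed purely by position: the first token picks the
    # sub-command, and the rest are interpreted by index, so reordering them
    # silently changes their meaning.
    if argv[0] == "run":
        return {"mode": "run", "model": argv[1], "input": argv[2], "output": argv[3]}
    if argv[0] == "test":
        return {"mode": "test", "model": argv[1]}
    raise ValueError("unknown mode: " + argv[0])

if __name__ == "__main__":
    print(parse_args(sys.argv[1:] or ["run", "model.trt", "in.jpg", "out.jpg"]))
```

Because there are no flags, swapping the input and output paths would make the program overwrite your input image without any warning.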
#### 4. Infer with one single image
Run inference like this:
```
$ ./segment run /path/to/saved_model.trt /path/to/input/image.jpg /path/to/saved_img.jpg
```
#### 5. Test speed
The speed depends on the specific GPU platform you are working on; you can test the fps on your GPU like this:
```
$ ./segment test /path/to/saved_model.trt
```
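The fps figure such a speed test reports boils down to timing repeated forward passes and dividing; a minimal sketch, with a dummy callable standing in for the real TensorRT inference call:

```python
import time

def measure_fps(infer, n_iters=100):
    # Time n_iters back-to-back calls and return iterations per second.
    # `infer` is a stand-in here; in practice it would execute the engine.
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

if __name__ == "__main__":
    fps = measure_fps(lambda: sum(range(10000)))  # dummy workload
    print("fps: %.1f" % fps)
```

In a real benchmark you would also run a few warm-up iterations first, since the first passes on a GPU are typically slower.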
#### 6. Tips:
1. ~Since tensorrt 7.0.0 cannot parse the `bilinear interpolation` op exported from pytorch well, I replaced it with pytorch `nn.PixelShuffle`, which brings some performance overhead (more flops and parameters) and makes inference a bit slower. Also, due to the `nn.PixelShuffle` op, you **must** export the onnx model with an input size that is a multiple of 32.~
If you are using 7.2.3.4 or a newer version, you should not have problems with `interpolate` anymore.
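For the older versions where the multiple-of-32 constraint applies, a small illustrative helper (not part of the repo) for rounding a desired export size up to the nearest valid one:

```python
def round_up_to_32(x, base=32):
    # Round x up to the nearest multiple of base (32 here, matching the
    # nn.PixelShuffle constraint mentioned in the tip above).
    return ((x + base - 1) // base) * base

print(round_up_to_32(720))   # 736
print(round_up_to_32(1024))  # 1024, already a multiple of 32
```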
Likewise, you do not need to worry about this anymore with versions newer than 7.
You can also use the python script to compile your model and run inference with it.