@@ -16,13 +16,13 @@ Note that ELU converter is now supported in our library. If you want to get above
 error and run the example in this document, you can either:
 1. get the source code, go to the root directory, then run: <br />
 `git apply ./examples/custom_converters/elu_converter/disable_core_elu.patch`
-2. If you are using a pre-downloaded release of TRTorch, you need to make sure that
-it doesn't support elu operator in default. (TRTorch <= v0.1.0)
+2. If you are using a pre-downloaded release of Torch-TensorRT, you need to make sure that
+it doesn't support the elu operator by default. (Torch-TensorRT <= v0.1.0)

 ## Writing Converter in C++
 We can register a converter for this operator in our application. You can find more
 information on all the details of writing converters in the contributors documentation
-([Writing Converters](https://nvidia.github.io/TRTorch/contributors/writing_converters.html)).
+([Writing Converters](https://nvidia.github.io/Torch-TensorRT/contributors/writing_converters.html)).
 Once we are clear about these rules and writing patterns, we can create a separate new C++ source file as:

 ```c++
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension


 # library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that includes the headers
-# 1) download the latest package from https://github.com/NVIDIA/TRTorch/releases/
+# 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
 # 2) Extract the files from the downloaded package; we will get the "torch_tensorrt" directory
 # 3) Set torch_tensorrt_path to that directory
 torch_tensorrt_path = <PATH TO TRTORCH>
@@ -87,7 +87,7 @@
 ```
 Make sure to include the path for header files in `include_dirs` and the path
 for dependent libraries in `library_dirs`. Generally speaking, you should download
-the latest package from [here](https://github.com/NVIDIA/TRTorch/releases), extract
+the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
 the files, and then set `torch_tensorrt_path` to that directory. You could also add other compilation
 flags in cpp_extension if you need. Then, run the above Python script as:
 ```shell
@@ -99,7 +99,7 @@ by the command above. In build folder, you can find the generated `.so` library,
 which could be loaded in our Python application.

 ## Load `.so` in Python Application
-With the new generated library, TRTorch now support the new developed converter.
+With the newly generated library, Torch-TensorRT now supports the newly developed converter.
 We use `torch.ops.load_library` to load the `.so`. For example, we could load the ELU
 converter and use it in our application:
 ```python
@@ -124,7 +124,7 @@ def cal_max_diff(pytorch_out, torch_tensorrt_out):
     diff = torch.sub(pytorch_out, torch_tensorrt_out)
     abs_diff = torch.abs(diff)
     max_diff = torch.max(abs_diff)
-    print("Maximum differnce between TRTorch and PyTorch: \n", max_diff)
+    print("Maximum difference between Torch-TensorRT and PyTorch: \n", max_diff)


 def main():
@@ -146,12 +146,12 @@ def main():

     torch_tensorrt_out = trt_ts_module(input_data)
     print('PyTorch output: \n', pytorch_out[0, :, :, 0])
-    print('TRTorch output: \n', torch_tensorrt_out[0, :, :, 0])
+    print('Torch-TensorRT output: \n', torch_tensorrt_out[0, :, :, 0])
     cal_max_diff(pytorch_out, torch_tensorrt_out)


 if __name__ == "__main__":
     main()

 ```
-Run this script, we can get the different outputs from PyTorch and TRTorch.
+Run this script, and we can compare the outputs from PyTorch and Torch-TensorRT.
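The comparison shown in the diff fragments above reduces the two outputs to a single number: the elementwise maximum absolute difference. Since the full script is only shown in pieces here, the following is a minimal self-contained sketch of that helper, using small stand-in tensors in place of the real PyTorch and Torch-TensorRT model outputs:

```python
import torch


def cal_max_diff(pytorch_out, torch_tensorrt_out):
    # Element-wise difference -> absolute value -> largest entry
    diff = torch.sub(pytorch_out, torch_tensorrt_out)
    abs_diff = torch.abs(diff)
    max_diff = torch.max(abs_diff)
    print("Maximum difference between Torch-TensorRT and PyTorch:\n", max_diff)
    return max_diff


# Stand-in tensors instead of real model outputs, just to exercise the helper
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([[1.0, 2.5], [2.0, 4.0]])
assert float(cal_max_diff(a, b)) == 1.0  # |3.0 - 2.0| is the largest gap
```

In practice a near-zero maximum difference (up to floating-point tolerance) indicates the custom converter produces results consistent with eager-mode PyTorch.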