diff --git a/intermediate_source/rpc_async_execution.rst b/intermediate_source/rpc_async_execution.rst
index 562d93bbbf8..4c7739104cc 100644
--- a/intermediate_source/rpc_async_execution.rst
+++ b/intermediate_source/rpc_async_execution.rst
@@ -15,8 +15,7 @@ Prerequisites:
 This tutorial demonstrates how to build batch-processing RPC applications with
 the `@rpc.functions.async_execution `__
 decorator, which helps to speed up training by reducing the number of blocked
-RPC threads and consolidating CUDA operations on the callee. This shares the
-same idea as `Batch Inference with TorchServe `__.
+RPC threads and consolidating CUDA operations on the callee.
 
 .. note:: This tutorial requires PyTorch v1.6.0 or above.