.. currentmodule:: torchrl.modules.inference_server

Inference Server
================

The inference server provides auto-batching model serving for RL actors. Multiple actors submit individual TensorDicts; the server transparently collects them into a batch, runs a single model forward pass, and routes each result back to the actor that submitted it.
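To illustrate the mechanism (not the torchrl API itself, which is documented in the autosummary below), here is a minimal, self-contained sketch of auto-batching: a background thread drains a request queue, calls one batched function, and hands each output back to the caller that submitted the corresponding input. All names here (``TinyBatchServer``, ``submit``) are hypothetical.

.. code-block:: python

    import queue
    import threading

    class TinyBatchServer:
        """Illustrative sketch: collect requests, run one batched call, route replies."""

        def __init__(self, batched_fn, max_batch=8, timeout=0.01):
            self.batched_fn = batched_fn  # fn: list of inputs -> list of outputs
            self.max_batch = max_batch
            self.timeout = timeout        # seconds to wait for extra requests
            self.requests = queue.Queue()
            threading.Thread(target=self._loop, daemon=True).start()

        def submit(self, item):
            """Called by an actor: blocks until its individual result is ready."""
            done = threading.Event()
            box = {}
            self.requests.put((item, done, box))
            done.wait()
            return box["out"]

        def _loop(self):
            while True:
                batch = [self.requests.get()]           # block for the first request
                while len(batch) < self.max_batch:
                    try:                                 # opportunistically add more
                        batch.append(self.requests.get(timeout=self.timeout))
                    except queue.Empty:
                        break
                outs = self.batched_fn([item for item, _, _ in batch])
                for (_, done, box), out in zip(batch, outs):
                    box["out"] = out                     # route result to its caller
                    done.set()

    server = TinyBatchServer(lambda xs: [x * 2 for x in xs])
    results = [server.submit(i) for i in range(4)]
    print(results)  # [0, 2, 4, 6]

In the real server the batched function would be a model forward pass over stacked TensorDicts, and the transport layer replaces the in-process queue.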

.. autosummary::
    :toctree: generated/
    :template: rl_template_noinherit.rst

    InferenceServer
    InferenceClient
    InferenceTransport