Add AsyncBatchedCollector that pairs AsyncEnvPool with InferenceServer
for pipelined RL data collection. Users supply env factories and a policy;
the collector handles all internal wiring (transport, server, env pool).
Co-authored-by: Cursor <cursoragent@cursor.com>
ghstack-source-id: 525120c
Pull-Request: #3498
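The pattern the commit describes — async environments feeding a shared server that batches observations before each policy call — can be sketched with stdlib threads and queues. This is an illustrative sketch of the pipelining idea only, not the TorchRL API; all names (`inference_server`, `env_worker`, the stub `policy`) are invented for this example, and the real collector wires transport, server, and env pool internally.

```python
# Sketch of pipelined collection: several async "envs" submit observations
# to one inference server, which drains up to max_batch pending requests
# and runs the (stub) policy once per batch. Illustrative only; these
# names are not the TorchRL API.
import queue
import threading

def policy(batch):
    # Stub policy: action = observation + 1, applied to the whole batch.
    return [obs + 1 for obs in batch]

def inference_server(requests, stop, max_batch=4):
    """Collect up to max_batch pending requests, answer them in one policy call."""
    while not stop.is_set():
        batch = []
        try:
            batch.append(requests.get(timeout=0.05))
        except queue.Empty:
            continue
        while len(batch) < max_batch:
            try:
                batch.append(requests.get_nowait())
            except queue.Empty:
                break
        actions = policy([obs for obs, _ in batch])
        for (_, reply), action in zip(batch, actions):
            reply.put(action)

def env_worker(env_id, requests, out, steps=3):
    """Each 'env' sends its observation and blocks for the batched action."""
    obs, reply = env_id, queue.Queue()
    for _ in range(steps):
        requests.put((obs, reply))
        action = reply.get()
        out.append((env_id, obs, action))
        obs = action  # next observation under the stub dynamics

requests, out, stop = queue.Queue(), [], threading.Event()
server = threading.Thread(target=inference_server, args=(requests, stop))
server.start()
workers = [threading.Thread(target=env_worker, args=(i, requests, out))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
stop.set()
server.join()
print(len(out))  # 4 envs x 3 steps = 12 transitions
```

The batching loop is the point: while the policy runs on one batch, the other env workers keep stepping and queueing their next observations, so inference and environment stepping overlap instead of alternating.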
docs/source/reference/collectors.rst (1 addition, 0 deletions)
@@ -12,6 +12,7 @@ making it easy to collect high-quality training data efficiently.
 
 TorchRL provides several collector implementations optimized for different scenarios:
 
 - :class:`Collector`: Single-process collection on the training worker
+- :class:`AsyncBatchedCollector`: Async environments + auto-batching inference server (see :class:`AsyncBatchedCollector`)
 - :class:`MultiCollector`: Parallel collection across multiple workers (see below)
 - **Distributed collectors**: For multi-node setups using Ray, RPC, or distributed backends (see :class:`DistributedCollector` / :class:`RPCCollector`)