2 files changed: +12 −3 lines changed

@@ -30,7 +30,7 @@ pytorch-lightning
torchx
# TODO: use stable 0.5 when released
-e git+https://github.com/pytorch/rl.git#egg=torchrl
- -e git+https://github.com/pytorch/tensordict.git@del-futures #egg=tensordict
+ -e git+https://github.com/pytorch/tensordict.git#egg=tensordict
ax-platform
nbformat >== 5.9.2
datasets
@@ -598,7 +598,8 @@ def pin_copy_to_device_nonblocking(*tensors):
#

######################################################################
- # ## Conclusion
+ # Conclusion
+ # ----------
#
# .. _pinmem_conclusion:
#
@@ -610,7 +611,15 @@ def pin_copy_to_device_nonblocking(*tensors):
# Remember that profiling your code and keeping an eye on the memory consumption are essential to optimize resource
# usage and achieve the best possible performance.
#
- # ## Additional resources
+ # Additional resources
+ # --------------------
#
# .. _pinmem_resources:
#
+ # If you run into issues with memory copies when using CUDA devices, or want to learn more about
+ # what was discussed in this tutorial, check the following references:
+ #
+ # - `CUDA toolkit memory management doc <https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html>`_
+ # - `CUDA pin-memory note <https://forums.developer.nvidia.com/t/pinned-memory/268474>`_
+ # - the tensordict :meth:`~tensordict.TensorDict.to` method
+ #
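
For context, here is a minimal sketch (not part of the diff above) of the pinned-memory, non-blocking copy pattern that the tutorial's `pin_copy_to_device_nonblocking` helper and the added references discuss. It assumes PyTorch with a CUDA device available; the helper name `pin_copy_sketch` is illustrative, not the tutorial's actual implementation.

```python
import torch

def pin_copy_sketch(*tensors, device="cuda"):
    # Stage each CPU tensor in page-locked (pinned) memory so the
    # host-to-device transfer can overlap with host execution.
    pinned = [t.pin_memory() for t in tensors]
    # Queue the copies on the current CUDA stream without blocking Python.
    on_device = [t.to(device, non_blocking=True) for t in pinned]
    # Wait for the queued copies to finish before the results are used.
    torch.cuda.synchronize()
    return on_device

if torch.cuda.is_available():
    x, y = torch.randn(1024, 1024), torch.randn(1024, 1024)
    x_gpu, y_gpu = pin_copy_sketch(x, y)
    print(x_gpu.device, y_gpu.device)
```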