diff --git a/_static/css/custom.css b/_static/css/custom.css
index 7b7055fff78..a467a088159 100755
--- a/_static/css/custom.css
+++ b/_static/css/custom.css
@@ -71,3 +71,23 @@
 .sd-card:hover:after {
   transform: scaleX(1);
 }
+
+.card-prerequisites:hover {
+  transition: none;
+  border: none;
+}
+
+.card-prerequisites:hover:after {
+  transition: none;
+  transform: none;
+}
+
+.card-prerequisites:after {
+  display: block;
+  content: '';
+  border-bottom: none;
+  background-color: #fff;
+  transform: none;
+  transition: none;
+  transform-origin: none;
+}
diff --git a/advanced_source/cpp_custom_ops.rst b/advanced_source/cpp_custom_ops.rst
index 66df7344522..fa56a0cc219 100644
--- a/advanced_source/cpp_custom_ops.rst
+++ b/advanced_source/cpp_custom_ops.rst
@@ -8,14 +8,16 @@ Custom C++ and CUDA Operators
 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

-     * How to integrate custom operators written in C++/CUDA with PyTorch
-     * How to test custom operators using ``torch.library.opcheck``
+      * How to integrate custom operators written in C++/CUDA with PyTorch
+      * How to test custom operators using ``torch.library.opcheck``

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

-     * PyTorch 2.4 or later
-     * Basic understanding of C++ and CUDA programming
+      * PyTorch 2.4 or later
+      * Basic understanding of C++ and CUDA programming

 PyTorch offers a large library of operators that work on Tensors (e.g. torch.add, torch.sum,
 etc). However, you may wish to bring a new custom operator to PyTorch. This tutorial demonstrates the
diff --git a/advanced_source/python_custom_ops.py b/advanced_source/python_custom_ops.py
index 36045cb9e48..9111e1f43f4 100644
--- a/advanced_source/python_custom_ops.py
+++ b/advanced_source/python_custom_ops.py
@@ -9,13 +9,15 @@
 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

-     * How to integrate custom operators written in Python with PyTorch
-     * How to test custom operators using ``torch.library.opcheck``
+      * How to integrate custom operators written in Python with PyTorch
+      * How to test custom operators using ``torch.library.opcheck``

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

-     * PyTorch 2.4 or later
+      * PyTorch 2.4 or later

 PyTorch offers a large library of operators that work on Tensors (e.g. ``torch.add``,
 ``torch.sum``, etc). However, you might wish to use a new customized
diff --git a/beginner_source/ddp_series_fault_tolerance.rst b/beginner_source/ddp_series_fault_tolerance.rst
index 7a4e3cc8c80..2bb0d528d1b 100644
--- a/beginner_source/ddp_series_fault_tolerance.rst
+++ b/beginner_source/ddp_series_fault_tolerance.rst
@@ -14,11 +14,12 @@ Authors: `Suraj Subramanian `__

 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites
       :margin: 0

-     - Launching multi-GPU training jobs with ``torchrun``
-     - Saving and loading snapshots of your training job
-     - Structuring your training script for graceful restarts
+      - Launching multi-GPU training jobs with ``torchrun``
+      - Saving and loading snapshots of your training job
+      - Structuring your training script for graceful restarts

 .. grid:: 1
@@ -27,6 +28,7 @@ Authors: `Suraj Subramanian `__
       :octicon:`code-square;1.0em;` View the code used in this tutorial on `GitHub `__

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites
       :margin: 0

       * High-level `overview `__ of DDP
diff --git a/beginner_source/ddp_series_multigpu.rst b/beginner_source/ddp_series_multigpu.rst
index 4a735af56ed..f8335ba8cf4 100644
--- a/beginner_source/ddp_series_multigpu.rst
+++ b/beginner_source/ddp_series_multigpu.rst
@@ -14,6 +14,7 @@ Authors: `Suraj Subramanian `__
 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

       - How to migrate a single-GPU training script to multi-GPU via DDP
       - Setting up the distributed process group
@@ -26,6 +27,7 @@ Authors: `Suraj Subramanian `__
       :octicon:`code-square;1.0em;` View the code used in this tutorial on `GitHub `__

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

       * High-level overview of `how DDP works `__
       * A machine with multiple GPUs (this tutorial uses an AWS p3.8xlarge instance)
diff --git a/beginner_source/ddp_series_theory.rst b/beginner_source/ddp_series_theory.rst
index 76083b2e343..8957ab6ec4b 100644
--- a/beginner_source/ddp_series_theory.rst
+++ b/beginner_source/ddp_series_theory.rst
@@ -12,6 +12,7 @@ Authors: `Suraj Subramanian `__
 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

       * How DDP works under the hood
       * What is ``DistributedSampler``
@@ -19,6 +20,7 @@ Authors: `Suraj Subramanian `__

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

       * Familiarity with `basic non-distributed training `__ in PyTorch

diff --git a/beginner_source/template_tutorial.py b/beginner_source/template_tutorial.py
index 520bd40eb03..d7fae7c4c5e 100644
--- a/beginner_source/template_tutorial.py
+++ b/beginner_source/template_tutorial.py
@@ -9,16 +9,18 @@

 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

-     * Item 1
-     * Item 2
-     * Item 3
+      * Item 1
+      * Item 2
+      * Item 3

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

-     * PyTorch v2.0.0
-     * GPU ???
-     * Other items 3
+      * PyTorch v2.0.0
+      * GPU ???
+      * Other items 3

 If you have a video, add it here like this:
diff --git a/intermediate_source/ddp_series_minGPT.rst b/intermediate_source/ddp_series_minGPT.rst
index 1d1f809e434..259db3623c6 100644
--- a/intermediate_source/ddp_series_minGPT.rst
+++ b/intermediate_source/ddp_series_minGPT.rst
@@ -11,6 +11,7 @@ Authors: `Suraj Subramanian `__
 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

       - Best practices when writing a distributed training script
       - Increased flexibility with saving/loading artifacts in the cloud
@@ -23,6 +24,7 @@ Authors: `Suraj Subramanian `__
       :octicon:`code-square;1.0em;` View the code used in this tutorial on `GitHub `__

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

       - Familiarity with `multi-GPU training <../beginner/ddp_series_multigpu.html>`__ and `torchrun <../beginner/ddp_series_fault_tolerance.html>`__
       - [Optional] Familiarity with `multinode training `__
diff --git a/intermediate_source/ddp_series_multinode.rst b/intermediate_source/ddp_series_multinode.rst
index 721c5580f6c..5717589bdaa 100644
--- a/intermediate_source/ddp_series_multinode.rst
+++ b/intermediate_source/ddp_series_multinode.rst
@@ -11,6 +11,7 @@ Authors: `Suraj Subramanian `__
 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

       - Launching multinode training jobs with ``torchrun``
       - Code changes (and things to keep in mind) when moving from single-node to multinode training.
@@ -22,6 +23,7 @@ Authors: `Suraj Subramanian `__
       :octicon:`code-square;1.0em;` View the code used in this tutorial on `GitHub `__

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

       - Familiarity with `multi-GPU training <../beginner/ddp_series_multigpu.html>`__ and `torchrun <../beginner/ddp_series_fault_tolerance.html>`__
       - 2 or more TCP-reachable GPU machines (this tutorial uses AWS p3.2xlarge instances)
diff --git a/intermediate_source/dqn_with_rnn_tutorial.py b/intermediate_source/dqn_with_rnn_tutorial.py
index 8135f07cd3f..991a0ff8bd6 100644
--- a/intermediate_source/dqn_with_rnn_tutorial.py
+++ b/intermediate_source/dqn_with_rnn_tutorial.py
@@ -9,15 +9,17 @@

 .. grid:: 2

    .. grid-item-card:: :octicon:`mortar-board;1em;` What you will learn
+      :class-card: card-prerequisites

-     * How to incorporating an RNN in an actor in TorchRL
-     * How to use that memory-based policy with a replay buffer and a loss module
+      * How to incorporate an RNN in an actor in TorchRL
+      * How to use that memory-based policy with a replay buffer and a loss module

    .. grid-item-card:: :octicon:`list-unordered;1em;` Prerequisites
+      :class-card: card-prerequisites

-     * PyTorch v2.0.0
-     * gym[mujoco]
-     * tqdm
+      * PyTorch v2.0.0
+      * gym[mujoco]
+      * tqdm
 """
 #########################################################################