1 file changed: +8 −1 lines changed

@@ -39,6 +39,12 @@
# !pip3 install fbgemm_gpu --index-url https://download.pytorch.org/whl/cu121
# !pip3 install torchmetrics==1.0.3
# !pip3 install torchrec --index-url https://download.pytorch.org/whl/cu121
+ #
+ # .. note::
+ #    If you are running this in Google Colab, make sure to switch to a GPU runtime type.
+ #    For more information,
+ #    see `Enabling CUDA <https://pytorch.org/tutorials/beginner/colab#enabling-cuda>`__
+ #
@@ -217,7 +223,7 @@
######################################################################
# TorchRec Modules and Data Types
- # ------------------------------
+ # ----------------------------------
#
# This section goes over TorchRec Modules and data types including such
# entities as ``EmbeddingCollection`` and ``EmbeddingBagCollection``,
@@ -919,6 +925,7 @@ def _wait_impl(self) -> torch.Tensor:
# the trained model in a Python environment is incredibly inefficient.
# There are two key differences between inference and training
# environments:
+ #
# * **Quantization**: Inference models are typically
#   quantized, where model parameters lose precision for lower latency in
#   predictions and reduced model size. For example, FP32 (4 bytes) in