
Commit b56a06b

Merge branch 'main' into remove-files-gh-pages

2 parents e8ba9e1 + d3ac156

2 files changed: +2 -3 lines

beginner_source/hyperparameter_tuning_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ def forward(self, x):
 # inputs, labels = inputs.to(device), labels.to(device)
 #
 # The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray
-# also supports `fractional GPUs <https://docs.ray.io/en/master/using-ray-with-gpus.html#fractional-gpus>`_
+# also supports `fractional GPUs <https://docs.ray.io/en/latest/ray-core/scheduling/accelerators.html#fractional-accelerators>`_
 # so we can share GPUs among trials, as long as the model still fits on the GPU memory. We'll come back
 # to that later.
 #
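The tutorial text touched here describes sharing one GPU among several Ray Tune trials via fractional GPU requests. As a minimal sketch of that idea, assuming the tune.run / resources_per_trial API this tutorial uses (train_fn and the config below are placeholders, not code from this commit):

from ray import tune

def train_fn(config):
    # Placeholder training loop: each trial may use CUDA, but is
    # scheduled as owning only half a GPU.
    ...

# Requesting gpu=0.5 lets two trials share one physical GPU. Ray only
# schedules by these fractions; it does not partition GPU memory, so
# each trial's model must still fit alongside its neighbor's.
analysis = tune.run(
    train_fn,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    resources_per_trial={"cpu": 2, "gpu": 0.5},
)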

intermediate_source/rpc_async_execution.rst

Lines changed: 1 addition & 2 deletions
@@ -15,8 +15,7 @@ Prerequisites:
 This tutorial demonstrates how to build batch-processing RPC applications with
 the `@rpc.functions.async_execution <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.functions.async_execution>`__
 decorator, which helps to speed up training by reducing the number of blocked
-RPC threads and consolidating CUDA operations on the callee. This shares the
-same idea as `Batch Inference with TorchServe <https://pytorch.org/serve/batch_inference_with_ts.html>`__.
+RPC threads and consolidating CUDA operations on the callee.

 .. note:: This tutorial requires PyTorch v1.6.0 or above.
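For context on the decorator this paragraph names: a function marked with @rpc.functions.async_execution returns a Future rather than a value, so the callee's RPC thread is released while the work completes. A minimal sketch, following the chained-future pattern from the PyTorch RPC docs rather than code from this commit, and assuming an already-initialized RPC group with a peer named ``to``:

import torch
import torch.distributed.rpc as rpc

@rpc.functions.async_execution
def async_add_chained(to, x, y, z):
    # Return a Future immediately; the RPC thread serving this call is
    # freed until the chained computation fulfills the Future.
    return rpc.rpc_async(to, torch.add, args=(x, y)).then(
        lambda fut: fut.wait() + z
    )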
