Commit b52628f

Merge branch 'main' into add-new-theme
2 parents 31454aa + d781eeb commit b52628f

3 files changed: +2 -17 lines changed


beginner_source/hyperparameter_tuning_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ def forward(self, x):
 # inputs, labels = inputs.to(device), labels.to(device)
 #
 # The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray
-# also supports `fractional GPUs <https://docs.ray.io/en/master/using-ray-with-gpus.html#fractional-gpus>`_
+# also supports `fractional GPUs <https://docs.ray.io/en/latest/ray-core/scheduling/accelerators.html#fractional-accelerators>`_
 # so we can share GPUs among trials, as long as the model still fits on the GPU memory. We'll come back
 # to that later.
 #
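
For reference, the fractional-GPU feature the updated link points to lets several Tune trials share one physical GPU. A minimal sketch, assuming Ray 2.x's tune.with_resources API and an illustrative train_fn; none of this is part of the commit:

    from ray import tune

    def train_fn(config):
        # A real trainable would build the model, move it to the GPU slice Ray
        # assigned, and train; here we just return a dummy metric.
        return {"loss": config["lr"]}

    # Ask for half a GPU per trial so two trials can share one device,
    # as long as both models fit in GPU memory at once.
    trainable = tune.with_resources(train_fn, {"cpu": 2, "gpu": 0.5})
    tuner = tune.Tuner(trainable, param_space={"lr": tune.loguniform(1e-4, 1e-1)})
    results = tuner.fit()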

index.rst

Lines changed: 0 additions & 14 deletions
@@ -613,20 +613,6 @@ Welcome to PyTorch Tutorials
    :link: advanced/static_quantization_tutorial.html
    :tags: Quantization
 
-.. customcarditem::
-   :header: Grokking PyTorch Intel CPU Performance from First Principles
-   :card_description: A case study on the TorchServe inference framework optimized with Intel® Extension for PyTorch.
-   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
-   :link: intermediate/torchserve_with_ipex
-   :tags: Model-Optimization,Production
-
-.. customcarditem::
-   :header: Grokking PyTorch Intel CPU Performance from First Principles (Part 2)
-   :card_description: A case study on the TorchServe inference framework optimized with Intel® Extension for PyTorch (Part 2).
-   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
-   :link: intermediate/torchserve_with_ipex_2
-   :tags: Model-Optimization,Production
-
 .. customcarditem::
    :header: Multi-Objective Neural Architecture Search with Ax
    :card_description: Learn how to use Ax to search over architectures find optimal tradeoffs between accuracy and latency.

intermediate_source/rpc_async_execution.rst

Lines changed: 1 addition & 2 deletions
@@ -15,8 +15,7 @@ Prerequisites:
 This tutorial demonstrates how to build batch-processing RPC applications with
 the `@rpc.functions.async_execution <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.functions.async_execution>`__
 decorator, which helps to speed up training by reducing the number of blocked
-RPC threads and consolidating CUDA operations on the callee. This shares the
-same idea as `Batch Inference with TorchServe <https://pytorch.org/serve/batch_inference_with_ts.html>`__.
+RPC threads and consolidating CUDA operations on the callee.
 
 .. note:: This tutorial requires PyTorch v1.6.0 or above.
 
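
For context, the decorator named above lets the callee return a torch.futures.Future instead of a plain value, so no RPC thread is tied up while the result is produced. A minimal sketch, assuming worker names "w1"/"w2" and an already-initialized RPC group; not part of this commit:

    import torch
    import torch.distributed.rpc as rpc

    @rpc.functions.async_execution
    def async_add(to, x, y):
        # Return a Future; the framework replies to the caller once it
        # completes, without blocking an RPC thread in the meantime.
        return rpc.rpc_async(to, torch.add, args=(x, y))

    # On the caller, after rpc.init_rpc(...) has run on every worker:
    # result = rpc.rpc_sync("w1", async_add, args=("w2", torch.ones(2), torch.ones(2)))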
