
Commit 5a26ac1

Merge branch 'main' into tutorial-audit-T228334528
2 parents: a4ce09e + aef2510

File tree

4 files changed: +2, -20 lines


beginner_source/hyperparameter_tuning_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ def forward(self, x):
 # inputs, labels = inputs.to(device), labels.to(device)
 #
 # The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray
-# also supports `fractional GPUs <https://docs.ray.io/en/master/using-ray-with-gpus.html#fractional-gpus>`_
+# also supports `fractional GPUs <https://docs.ray.io/en/latest/ray-core/scheduling/accelerators.html#fractional-accelerators>`_
 # so we can share GPUs among trials, as long as the model still fits on the GPU memory. We'll come back
 # to that later.
 #
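For context on the line updated above: fractional GPUs let several Ray Tune trials share one physical device. As a minimal plain-Python sketch of the packing arithmetic only (this is not the Ray API; `max_concurrent_trials` is a hypothetical helper, and in the actual tutorial the fraction is requested through Tune's per-trial resource spec):

```python
# Hedged sketch of the arithmetic behind fractional GPU sharing.
# `max_concurrent_trials` is a hypothetical helper, NOT part of Ray;
# Ray Tune itself takes the fraction via the trial's resource request.
import math


def max_concurrent_trials(num_gpus: int, gpus_per_trial: float) -> int:
    """How many trials can run at once when each requests a GPU fraction."""
    if gpus_per_trial <= 0:
        raise ValueError("gpus_per_trial must be positive")
    return math.floor(num_gpus / gpus_per_trial)


# With 2 GPUs and 0.5 GPU per trial, four trials run concurrently --
# provided four model replicas still fit in GPU memory, as the
# tutorial text cautions.
```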

index.rst

Lines changed: 0 additions & 8 deletions
@@ -612,13 +612,6 @@ Welcome to PyTorch Tutorials
    :link: advanced/static_quantization_tutorial.html
    :tags: Quantization
 
-.. customcarditem::
-   :header: Grokking PyTorch Intel CPU Performance from First Principles (Part 2)
-   :card_description: A case study on the TorchServe inference framework optimized with Intel® Extension for PyTorch (Part 2).
-   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
-   :link: intermediate/torchserve_with_ipex_2
-   :tags: Model-Optimization,Production
-
 .. customcarditem::
    :header: Multi-Objective Neural Architecture Search with Ax
    :card_description: Learn how to use Ax to search over architectures find optimal tradeoffs between accuracy and latency.

@@ -1064,7 +1057,6 @@ Additional Resources
    intermediate/dynamic_quantization_bert_tutorial
    intermediate/quantized_transfer_learning_tutorial
    advanced/static_quantization_tutorial
-   intermediate/torchserve_with_ipex_2
    intermediate/nvfuser_intro_tutorial
    intermediate/ax_multiobjective_nas_tutorial
    intermediate/torch_compile_tutorial

intermediate_source/rpc_async_execution.rst

Lines changed: 1 addition & 2 deletions
@@ -15,8 +15,7 @@ Prerequisites:
 This tutorial demonstrates how to build batch-processing RPC applications with
 the `@rpc.functions.async_execution <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.functions.async_execution>`__
 decorator, which helps to speed up training by reducing the number of blocked
-RPC threads and consolidating CUDA operations on the callee. This shares the
-same idea as `Batch Inference with TorchServe <https://pytorch.org/serve/batch_inference_with_ts.html>`__.
+RPC threads and consolidating CUDA operations on the callee.
 
 .. note:: This tutorial requires PyTorch v1.6.0 or above.
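For context on the decorator this hunk describes: `@rpc.functions.async_execution` lets the RPC callee hand back a Future immediately rather than tying up an RPC thread until the result is ready, so many requests can be consolidated into one computation. The sketch below is a plain-Python analogy using stdlib `concurrent.futures` only, not the `torch.distributed.rpc` API, and `batched_add` is a hypothetical function:

```python
# Plain-Python analogy for the async_execution idea: return Futures
# right away, then fulfill them all with one consolidated computation.
# This is NOT the torch.distributed.rpc API; `batched_add` is hypothetical.
from concurrent.futures import Future


def batched_add(requests):
    """requests: list of (x, y) pairs arriving at the 'callee'."""
    futures = [Future() for _ in requests]  # nothing blocks here
    # One consolidated pass over the whole batch (a stand-in for a
    # single batched CUDA operation on the callee):
    results = [x + y for x, y in requests]
    for fut, res in zip(futures, results):
        fut.set_result(res)
    return futures
```

Callers wait on their own Future, so each logical request still sees an individual result even though the work was done in one pass.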

recipes_source/recipes_index.rst

Lines changed: 0 additions & 9 deletions
Original file line numberDiff line numberDiff line change
@@ -450,15 +450,6 @@ Recipes are bite-sized, actionable examples of how to use specific PyTorch featu
450450
:link: ../recipes/distributed_comm_debug_mode.html
451451
:tags: Distributed-Training
452452

453-
.. TorchServe
454-
455-
.. customcarditem::
456-
:header: Deploying a PyTorch Stable Diffusion model as a Vertex AI Endpoint
457-
:card_description: Learn how to deploy model in Vertex AI with TorchServe
458-
:image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
459-
:link: ../recipes/torchserve_vertexai_tutorial.html
460-
:tags: Production
461-
462453
.. End of tutorial card section
463454
464455
.. raw:: html
