
Commit 93b109a

Merge branch 'main' into whatsnew2-7
2 parents: 81c6470 + 35c68ea

File tree

3 files changed (+23, -23 lines)


_templates/layout.html

Lines changed: 0 additions & 9 deletions
@@ -211,14 +211,5 @@
 
 <img height="1" width="1" style="border-style:none;" alt="" src="https://www.googleadservices.com/pagead/conversion/795629140/?label=txkmCPmdtosBENSssfsC&amp;guid=ON&amp;script=0"/>
 
-<script>
-  //temporarily add a link to survey
-  var survey = '<div class="survey-banner"><p><i class="fas fa-poll" aria-hidden="true">&nbsp </i> Take the <a href="https://forms.gle/KZ4xGL65VRMYNbbG6">PyTorch Docs/Tutorials survey</a>.</p></div>'
-  if ($(".pytorch-call-to-action-links").length) {
-    $(".pytorch-call-to-action-links").before(survey);
-  } else {
-    $("#pytorch-article").prepend(survey);
-  }
-</script>
 
 {% endblock %}

prototype_source/inductor_windows.rst

Lines changed: 13 additions & 10 deletions
@@ -44,18 +44,21 @@ Next, let's configure our environment.
 
    "C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Auxiliary/Build/vcvars64.bat"
 #. Create and activate a virtual environment: ::
+
 #. Install `PyTorch 2.5 <https://pytorch.org/get-started/locally/>`_ or later for CPU Usage. Install PyTorch 2.7 or later refer to `Getting Started on Intel GPU <https://pytorch.org/docs/main/notes/get_start_xpu.html>`_ for XPU usage.
+
 #. Here is an example of how to use TorchInductor on Windows:
-.. code-block:: python
-
-    import torch
-    device="cpu" # or "xpu" for XPU
-    def foo(x, y):
-        a = torch.sin(x)
-        b = torch.cos(x)
-        return a + b
-    opt_foo1 = torch.compile(foo)
-    print(opt_foo1(torch.randn(10, 10).to(device), torch.randn(10, 10).to(device)))
+
+   .. code-block:: python
+
+      import torch
+      device="cpu" # or "xpu" for XPU
+      def foo(x, y):
+          a = torch.sin(x)
+          b = torch.cos(x)
+          return a + b
+      opt_foo1 = torch.compile(foo)
+      print(opt_foo1(torch.randn(10, 10).to(device), torch.randn(10, 10).to(device)))
 
 #. Below is the output of the above example::
 
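For readers trying the updated tutorial outside this diff, here is a minimal sketch of the same example with runtime device selection. The torch.xpu.is_available() check is an assumption about the Intel-GPU-enabled builds the tutorial targets; it is not part of the change above.

import torch

# Pick XPU when an Intel GPU build is available, otherwise fall back to CPU.
# Assumption: torch.xpu is present in the PyTorch 2.7+ builds the tutorial targets.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

def foo(x, y):
    a = torch.sin(x)
    b = torch.cos(x)
    return a + b

# torch.compile uses TorchInductor as its default backend.
opt_foo1 = torch.compile(foo)
print(opt_foo1(torch.randn(10, 10).to(device), torch.randn(10, 10).to(device)))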

recipes_source/foreach_map.py

Lines changed: 10 additions & 4 deletions
@@ -1,6 +1,6 @@
 """
-(beta) Explicit horizontal fusion with foreach_map and torch.compile
-============================================================
+Explicit horizontal fusion with foreach_map and torch.compile
+===============================================================
 
 **Author:** `Michael Lazos <https://github.com/mlazos>`_
 """
@@ -13,11 +13,17 @@
 # allows conversion of any pointwise op in ``torch`` to a horiztonally fused foreach
 # variant. In this tutorial, we will demonstrate how to implement the Adam optimizer
 # with ``foreach_map`` to generate a fully fused kernel.
-#
 #
 # .. note::
 #
-#    This tutorial requires PyTorch 2.7.0 or later.
+#    This recipe describes a prototype feature. Prototype features are typically
+#    at an early stage for feedback and testing and are subject to change.
+#
+# Prerequisites
+# -------------
+#
+# * PyTorch v2.7.0 or later
+#
 
 #####################################################################
 # Model Setup
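The recipe's premise is that one fused kernel can cover a whole list of tensors instead of one kernel launch per tensor. As a rough illustration of that horizontal-fusion idea, here is a minimal sketch using the existing torch._foreach_* ops rather than foreach_map itself, which the recipe introduces and generalizes to arbitrary pointwise ops.

import torch

# Eight parameter tensors and matching gradients.
params = [torch.randn(1024) for _ in range(8)]
grads = [torch.randn(1024) for _ in range(8)]

# One horizontally fused, in-place update across all eight tensors
# (an SGD-style step); foreach_map extends this pattern to arbitrary
# pointwise ops, per the recipe text above.
torch._foreach_add_(params, grads, alpha=-0.01)

print(params[0].sum())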
