Commit b700d30

Switch docs to 0.6 branch (#10212)
Parent: e42c504

File tree

31 files changed: +65 −65 lines
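Most of this commit is a mechanical rewrite of versioned documentation links, pinning `https://pytorch.org/executorch/main/...` URLs to the `0.6` release branch. As a rough sketch, such a rewrite could be scripted with a small helper like the one below; the function name is illustrative and not part of the repository:

```python
import re

# Matches the branch segment of a versioned ExecuTorch docs URL,
# e.g. https://pytorch.org/executorch/main/concepts.html
_DOCS_BRANCH = re.compile(r"(https://pytorch\.org/executorch/)main/")

def switch_docs_branch(text: str, version: str = "0.6") -> str:
    """Pin ExecuTorch docs links from `main` to a release branch
    (illustrative helper, not part of this commit)."""
    return _DOCS_BRANCH.sub(rf"\g<1>{version}/", text)

print(switch_docs_branch(
    "See https://pytorch.org/executorch/main/concepts.html#memory-planning"
))
```

Note that a pure find-and-replace would not cover everything here: a few pages were also renamed between branches (e.g. `build-run-coreml.html` became `backends-coreml`), so those links needed manual updates as well.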

Package.swift

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 //
 // For details on building frameworks locally or using prebuilt binaries,
 // see the documentation:
-// https://pytorch.org/executorch/main/using-executorch-ios.html
+// https://pytorch.org/executorch/0.6/using-executorch-ios.html

 import PackageDescription

README-wheel.md

Lines changed: 4 additions & 4 deletions

@@ -14,10 +14,10 @@ to run ExecuTorch `.pte` files, with some restrictions:
   operators](https://pytorch.org/executorch/stable/ir-ops-set-definition.html)
   are linked into the prebuilt module
 * Only the [XNNPACK backend
-  delegate](https://pytorch.org/executorch/main/native-delegates-executorch-xnnpack-delegate.html)
+  delegate](https://pytorch.org/executorch/0.6/backends-xnnpack)
   is linked into the prebuilt module.
-* \[macOS only] [Core ML](https://pytorch.org/executorch/main/build-run-coreml.html)
-  and [MPS](https://pytorch.org/executorch/main/build-run-mps.html) backend
+* \[macOS only] [Core ML](https://pytorch.org/executorch/0.6/backends-coreml)
+  and [MPS](https://pytorch.org/executorch/0.6/backends-mps) backend
   delegates are also linked into the prebuilt module.

 Please visit the [ExecuTorch website](https://pytorch.org/executorch/) for

@@ -30,7 +30,7 @@ tutorials and documentation. Here are some starting points:
 * Learn how to use ExecuTorch to export and accelerate a large-language model
   from scratch.
 * [Exporting to
-  ExecuTorch](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial.html)
+  ExecuTorch](https://pytorch.org/executorch/0.6/tutorials/export-to-executorch-tutorial.html)
 * Learn the fundamentals of exporting a PyTorch `nn.Module` to ExecuTorch, and
   optimizing its performance using quantization and hardware delegation.
 * Running LLaMA on

backends/cadence/README.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@

 ## Tutorial

-Please follow the [tutorial](https://pytorch.org/executorch/main/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.
+Please follow the [tutorial](https://pytorch.org/executorch/0.6/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.

 ## Directory Structure

backends/qualcomm/README.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ This backend is implemented on the top of
 [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
 Please follow [tutorial](../../docs/source/backends-qualcomm.md) to setup environment, build, and run executorch models by this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).

-A website version of the tutorial is [here](https://pytorch.org/executorch/main/backends-qualcomm).
+A website version of the tutorial is [here](https://pytorch.org/executorch/0.6/backends-qualcomm).

 ## Delegate Options

backends/xnnpack/README.md

Lines changed: 2 additions & 2 deletions

@@ -132,5 +132,5 @@ create an issue on [github](https://www.github.com/pytorch/executorch/issues).

 ## See Also
 For more information about the XNNPACK Backend, please check out the following resources:
-- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)
+- [XNNPACK Backend](https://pytorch.org/executorch/0.6/backends-xnnpack)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/0.6/backend-delegates-xnnpack-reference)

docs/source/index.md

Lines changed: 2 additions & 2 deletions

@@ -79,7 +79,7 @@ ExecuTorch provides support for:
 - [Executorch Runtime API Reference](executorch-runtime-api-reference)
 - [Runtime Python API Reference](runtime-python-api-reference)
 - [API Life Cycle](api-life-cycle)
-- [Javadoc](https://pytorch.org/executorch/main/javadoc/)
+- [Javadoc](https://pytorch.org/executorch/0.6/javadoc/)
 #### Quantization
 - [Overview](quantization-overview)
 #### Kernel Library

@@ -208,7 +208,7 @@ export-to-executorch-api-reference
 executorch-runtime-api-reference
 runtime-python-api-reference
 api-life-cycle
-Javadoc <https://pytorch.org/executorch/main/javadoc/>
+Javadoc <https://pytorch.org/executorch/0.6/javadoc/>
 ```

 ```{toctree}

docs/source/llm/getting-started.md

Lines changed: 2 additions & 2 deletions

@@ -159,7 +159,7 @@ example_inputs = (torch.randint(0, 100, (1, model.config.block_size), dtype=torc
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts#dynamic-shapes
+# See https://pytorch.org/executorch/0.6/concepts#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size)},

@@ -478,7 +478,7 @@ example_inputs = (
 # long as they adhere to the rules specified in the dynamic shape configuration.
 # Here we set the range of 0th model input's 1st dimension as
 # [0, model.config.block_size].
-# See https://pytorch.org/executorch/main/concepts.html#dynamic-shapes
+# See https://pytorch.org/executorch/0.6/concepts.html#dynamic-shapes
 # for details about creating dynamic shapes.
 dynamic_shape = (
     {1: torch.export.Dim("token_dim", max=model.config.block_size - 1)},

docs/source/memory-planning-inspection.md

Lines changed: 4 additions & 4 deletions

@@ -1,9 +1,9 @@
 # Memory Planning Inspection in ExecuTorch

-After the [Memory Planning](https://pytorch.org/executorch/main/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/main/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.
+After the [Memory Planning](https://pytorch.org/executorch/0.6/concepts.html#memory-planning) pass of ExecuTorch, memory allocation information is stored on the nodes of the [`ExportedProgram`](https://pytorch.org/executorch/0.6/concepts.html#exportedprogram). Here, we present a tool designed to inspect memory allocation and visualize all active tensor objects.

 ## Usage
-User should add this code after they call [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.
+User should add this code after they call [to_executorch()](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch), and it will write memory allocation information stored on the nodes to the file path "memory_profile.json". The file is compatible with the Chrome trace viewer; see below for more information about interpreting the results.

 ```python
 from executorch.util.activation_memory_profiler import generate_memory_trace

@@ -13,7 +13,7 @@ generate_memory_trace(
     enable_memory_offsets=True,
 )
 ```
-* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/main/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
+* `prog` is an instance of [`ExecuTorchProgramManager`](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.ExecutorchProgramManager), returned by [to_executorch()](https://pytorch.org/executorch/0.6/export-to-executorch-api-reference.html#executorch.exir.EdgeProgramManager.to_executorch).
 * Set `enable_memory_offsets` to `True` to show the location of each tensor on the memory space.

 ## Chrome Trace

@@ -27,4 +27,4 @@ Note that, since we are repurposing the Chrome trace tool, the axes in this cont
 * The vertical axis has a 2-level hierarchy. The first level, "pid", represents memory space. For CPU, everything is allocated on one "space"; other backends may have multiple. In the second level, each row represents one time step. Since nodes will be executed sequentially, each node represents one time step, thus you will have as many nodes as there are rows.

 ## Further Reading
-* [Memory Planning](https://pytorch.org/executorch/main/compiler-memory-planning.html)
+* [Memory Planning](https://pytorch.org/executorch/0.6/compiler-memory-planning.html)

docs/source/new-contributor-guide.md

Lines changed: 1 addition & 1 deletion

@@ -129,7 +129,7 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
    git push # push updated local main to your GitHub fork
    ```

-6. [Build the project](https://pytorch.org/executorch/main/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).
+6. [Build the project](https://pytorch.org/executorch/0.6/using-executorch-building-from-source.html) and [run the tests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#testing).

    Unfortunately, this step is too long to detail here. If you get stuck at any point, please feel free to ask for help on our [Discord server](https://discord.com/invite/Dh43CKSAdc) — we're always eager to help newcomers get onboarded.


docs/source/using-executorch-android.md

Lines changed: 7 additions & 7 deletions

@@ -2,7 +2,7 @@

 To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file.

-Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/main/using-executorch-building-from-source.html#cross-compilation).
+Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](https://pytorch.org/executorch/0.6/using-executorch-building-from-source.html#cross-compilation).

 ## Installation

@@ -41,8 +41,8 @@ dependencies {
 Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`.

 Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio.
-<a href="https://pytorch.org/executorch/main/_static/img/android_studio.mp4">
-<img src="https://pytorch.org/executorch/main/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
+<a href="https://pytorch.org/executorch/0.6/_static/img/android_studio.mp4">
+<img src="https://pytorch.org/executorch/0.6/_static/img/android_studio.jpeg" width="800" alt="Integrating and Running ExecuTorch on Android">
 </a>

 ## Using AAR file directly

@@ -130,17 +130,17 @@ Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` b

 #### Using MediaTek backend

-To use [MediaTek backend](https://pytorch.org/executorch/main/backends-mediatek.html),
+To use [MediaTek backend](https://pytorch.org/executorch/0.6/backends-mediatek.html),
 after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding path.

 #### Using Qualcomm AI Engine Backend

-To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/main/backends-qualcomm.html#qualcomm-ai-engine-backend),
+To use [Qualcomm AI Engine Backend](https://pytorch.org/executorch/0.6/backends-qualcomm.html#qualcomm-ai-engine-backend),
 after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path.

 #### Using Vulkan Backend

-To use [Vulkan Backend](https://pytorch.org/executorch/main/backends-vulkan.html#vulkan-backend),
+To use [Vulkan Backend](https://pytorch.org/executorch/0.6/backends-vulkan.html#vulkan-backend),
 set `EXECUTORCH_BUILD_VULKAN` to `ON`.

 ## Android Backends

@@ -195,4 +195,4 @@ using ExecuTorch AAR package.

 ## Java API reference

-Please see [Java API reference](https://pytorch.org/executorch/main/javadoc/).
+Please see [Java API reference](https://pytorch.org/executorch/0.6/javadoc/).
