
Commit 8040581

shoumikhin authored and keyprocedure committed

Fix URLs (pytorch#10316)

1 parent: 915c1be

File tree: 20 files changed, +43 / -43 lines

backends/vulkan/README.md (1 addition, 1 deletion)

@@ -133,7 +133,7 @@ will be executed on the GPU.
 ::::{note}
-The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/partitioner/supported_ops.py)
+The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/op_registry.py#L194)
 Vulkan partitioner code can be inspected to examine which ops are currently
 implemented in the Vulkan delegate.
 ::::

docs/source/Doxyfile (6 additions, 5 deletions)

@@ -399,9 +399,9 @@ BUILTIN_STL_SUPPORT = NO
 CPP_CLI_SUPPORT = NO

 # Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
-# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
-# will parse them like normal C++ but will assume all classes use public instead
-# of private inheritance when no explicit protection keyword is present.
+# https://python-sip.readthedocs.io/en/stable/introduction.html) sources only.
+# Doxygen will parse them like normal C++ but will assume all classes use public
+# instead of private inheritance when no explicit protection keyword is present.
 # The default value is: NO.

 SIP_SUPPORT = NO

@@ -1483,8 +1483,9 @@ HTML_INDEX_NUM_ENTRIES = 100
 # output directory. Running make will produce the docset in that directory and
 # running make install will install the docset in
 # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
-# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
-# genXcode/_index.html for more information.
+# startup. See
+# https://developer.apple.com/library/archive/featuredarticles/DoxygenXcode/_index.html
+# for more information.
 # The default value is: NO.
 # This tag requires that the tag GENERATE_HTML is set to YES.
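The docset comments above belong to Doxygen's Xcode docset integration, which is controlled by the standard `GENERATE_DOCSET` family of tags. As a hedged sketch (the tag names are standard Doxygen; the values here are illustrative, not taken from this commit), enabling it looks like:

```
GENERATE_DOCSET   = YES
DOCSET_FEEDNAME   = "ExecuTorch Documentation"
DOCSET_BUNDLE_ID  = org.pytorch.executorch
```

As the comments note, running `make` in the HTML output directory then produces the docset, and `make install` places it where Xcode finds it at startup.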

docs/source/backends-cadence.md (3 additions, 3 deletions)

@@ -89,7 +89,7 @@ executorch

 ***AoT (Ahead-of-Time) Components***:

-The AoT folder contains all of the Python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py) and have corresponding implementations in the other folders.
+The AoT folder contains all of the Python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py) and have corresponding implementations in the other folders.

 ***Operators***:

@@ -115,8 +115,8 @@ python3 -m examples.portable.scripts.export --model_name="add"
 ***Quantized Operators***:

 The other, more complex models are custom operators, including:
-- a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/quantized_linear_op.py#L28). Linear is the backbone of most Automatic Speech Recognition (ASR) models.
-- a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/quantized_conv1d_op.py#L36). Convolutions are important in wake word and many denoising models.
+- a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_linear_op.py#L30). Linear is the backbone of most Automatic Speech Recognition (ASR) models.
+- a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_conv1d_op.py#L40). Convolutions are important in wake word and many denoising models.

 In both cases the generated file is called `CadenceDemoModel.pte`.

docs/source/backends-vulkan.md (1 addition, 1 deletion)

@@ -133,7 +133,7 @@ will be executed on the GPU.
 ::::{note}
-The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/partitioner/supported_ops.py)
+The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/op_registry.py#L194)
 Vulkan partitioner code can be inspected to examine which ops are currently
 implemented in the Vulkan delegate.
 ::::

docs/source/conf.py (1 addition, 1 deletion)

@@ -192,7 +192,7 @@
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {
     "python": ("https://docs.python.org/", None),
-    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
+    "numpy": ("https://numpy.org/doc/stable/", None),
     "torch": ("https://pytorch.org/docs/stable/", None),
 }

docs/source/new-contributor-guide.md (2 additions, 2 deletions)

@@ -92,8 +92,8 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
 Depending on how you cloned your repo (HTTP, SSH, etc.), this should print something like:

 ```bash
-origin https://github.com/YOUR_GITHUB_USERNAME/executorch.git (fetch)
-origin https://github.com/YOUR_GITHUB_USERNAME/executorch.git (push)
+origin https://github.com/{YOUR_GITHUB_USERNAME}/executorch.git (fetch)
+origin https://github.com/{YOUR_GITHUB_USERNAME}/executorch.git (push)
 upstream https://github.com/pytorch/executorch.git (fetch)
 upstream https://github.com/pytorch/executorch.git (push)
 ```
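For context, the remote layout shown in that listing can be produced with standard git commands; a minimal sketch, assuming you have already cloned your fork:

```shell
# Inside your clone of the fork, register the main repo as "upstream".
git remote add upstream https://github.com/pytorch/executorch.git

# List all remotes; the output should resemble the listing above.
git remote -v
```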

docs/source/runtime-profiling.md (1 addition, 1 deletion)

@@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model
 - Through the Inspector API, users can perform a wide range of analyses, from printing out performance details to doing finer-grained calculations at the module level.

-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.rst) for a step-by-step walkthrough of the above process on a sample model.
+Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.

docs/source/tutorials_source/template_tutorial.py (1 addition, 1 deletion)

@@ -9,7 +9,7 @@
 Template Tutorial
 =================

-**Author:** `FirstName LastName <https://github.com/username>`_
+**Author:** `FirstName LastName <https://github.com/{username}>`_

 .. grid:: 2
docs/source/using-executorch-android.md (2 additions, 2 deletions)

@@ -59,8 +59,8 @@ You can also directly specify an AAR file in the app. We upload pre-built AAR to
 ### Snapshots from main branch

 Starting from 2025-04-12, you can download nightly `main` branch snapshots:
-* `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-YYYYMMDD/executorch.aar`
-* `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-YYYYMMDD/executorch.aar.sha256sums`
+* `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar`
+* `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar.sha256sums`
 * Replace `YYYYMMDD` with the actual date you want to use.
 * The AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/c66b37d010c88a113560693b14dc6bd112593c11/.github/workflows/android-release-artifacts.yml#L14-L15).
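The `{YYYYMMDD}` substitution in those URLs can be scripted; a sketch (the date value below is only an example, and a snapshot must actually exist for the chosen day):

```shell
# Build the snapshot URLs for a given date (example value).
SNAPSHOT_DATE="20250412"
BASE_URL="https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-${SNAPSHOT_DATE}"
echo "${BASE_URL}/executorch.aar"
echo "${BASE_URL}/executorch.aar.sha256sums"

# To download and verify the archive (requires network access):
#   curl -LO "${BASE_URL}/executorch.aar"
#   curl -LO "${BASE_URL}/executorch.aar.sha256sums"
#   shasum -a 256 -c executorch.aar.sha256sums
```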

examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md (2 additions, 2 deletions)

@@ -73,7 +73,7 @@ python -m examples.models.llama.export_llama --model "llama3_2" --checkpoint <pa
 ```
 For convenience, an [exported ExecuTorch bf16 model](https://huggingface.co/executorch-community/Llama-3.2-1B-ET/blob/main/llama3_2-1B.pte) is available on Hugging Face. The export was created using [this detailed recipe notebook](https://huggingface.co/executorch-community/Llama-3.2-1B-ET/blob/main/ExportRecipe_1B.ipynb).

-For more details on using Llama 3.2 lightweight models, including the prompt template, please go to our official [website](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2#-llama-3.2-lightweight-models-(1b/3b)-).
+For more details on using Llama 3.2 lightweight models, including the prompt template, please go to our official [website](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/#-llama-3.2-lightweight-models-(1b/3b)-).

 ### For Llama 3.1 and Llama 2 models

@@ -134,7 +134,7 @@ BUCK2_RELEASE_DATE="2024-12-16"
 BUCK2_ARCHIVE="buck2-aarch64-apple-darwin.zst"
 BUCK2=".venv/bin/buck2"
-curl -LO "https://github.com/facebook/buck2/releases/download/$BUCK2_RELEASE_DATE/$BUCK2_ARCHIVE"
+curl -LO "https://github.com/facebook/buck2/releases/download/${BUCK2_RELEASE_DATE}/${BUCK2_ARCHIVE}"
 zstd -cdq "$BUCK2_ARCHIVE" > "$BUCK2" && chmod +x "$BUCK2"
 rm "$BUCK2_ARCHIVE"
