
Commit 3ca3738

Merge pull request #1521 from madeline-underwood/python_IG_update
Editorial changes following PyTorch IG update
2 parents: 20354fb + c7920cd

File tree

1 file changed: +18 −18 lines

content/install-guides/pytorch.md

Lines changed: 18 additions & 18 deletions
@@ -21,17 +21,17 @@ tool_install: true
 weight: 1
 ---
 
-[PyTorch](https://pytorch.org/) is a popular end-to-end machine learning framework for Python. It's used to build and deploy neural networks, especially around tasks such as computer vision and natural language processing (NLP).
+[PyTorch](https://pytorch.org/) is a popular end-to-end machine learning framework for Python. It is used to build and deploy neural networks, especially around tasks such as computer vision and natural language processing (NLP).
 
 Follow the instructions below to install and use PyTorch on Arm Linux.
 
 {{% notice Note %}}
-Anaconda provides another way to install PyTorch. Refer to the [Anaconda install guide](/install-guides/anaconda/) to find out how to use PyTorch from Anaconda. The Anaconda version of PyTorch may be older than the version available using `pip`.
+Anaconda provides another way to install PyTorch. See the [Anaconda install guide](/install-guides/anaconda/) to find out how to use PyTorch from Anaconda. The Anaconda version of PyTorch might be older than the version available using `pip`.
 {{% /notice %}}
 
 ## Before you begin
 
-Confirm you are using an Arm Linux system by running:
+Confirm that you are using an Arm Linux system by running:
 
 ```bash
 uname -m
@@ -43,17 +43,17 @@ The output should be:
 aarch64
 ```
 
-If you see a different result, you are not using an Arm computer running 64-bit Linux.
+If you see a different result, then you are not using an Arm computer running 64-bit Linux.
 
-PyTorch requires Python 3 and can be installed with `pip`.
+PyTorch requires Python 3, and this can be installed with `pip`.
 
-For Ubuntu run:
+For Ubuntu, run:
 
 ```console
 sudo apt install python-is-python3 python3-pip python3-venv -y
 ```
 
-For Amazon Linux run:
+For Amazon Linux, run:
 
 ```console
 sudo dnf install python-pip -y
@@ -62,9 +62,9 @@ alias python=python3
 
 ## Download and install PyTorch
 
-It's recommended that you install PyTorch in your own Python virtual environment. Setup your virtual environment:
+It is recommended that you install PyTorch in your own Python virtual environment. Set up your virtual environment:
 
-```bash
+```bash
 python -m venv venv
 source venv/bin/activate
 ```
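A later hunk references a `pytorch.py` test file whose body is elided from this diff. As a hedged sketch of such a smoke test (assuming the common `torch.rand` example, which is consistent with the random-tensor output shown further down), it could be:

```python
import torch

# Smoke test: create a random tensor to confirm PyTorch imports and runs.
# The values differ on every run; only the shape is predictable.
x = torch.rand(5, 3)
print(x)
```

Run it inside the activated virtual environment with `python pytorch.py`.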
@@ -79,7 +79,7 @@ pip install torch torchvision torchaudio
 
 Test PyTorch:
 
-Use a text editor to copy and paste the code below into a text file named `pytorch.py`
+Use a text editor to copy and paste the code below into a text file named `pytorch.py`:
 
 ```console
 import torch
@@ -106,7 +106,7 @@ tensor([[0.1334, 0.7932, 0.4396],
 [0.8832, 0.5077, 0.6830]])
 ```
 
-To get more details about the build options for PyTorch run:
+To get more information about the build options for PyTorch, run:
 
 ```console
 python -c "import torch; print(*torch.__config__.show().split(\"\n\"), sep=\"\n\")"
@@ -145,7 +145,7 @@ Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asi
 
 If the result is blank, you do not have a processor with BFloat16.
 
-BFloat16 provides improved performance and smaller memory footprint with the same dynamic range. You may see a slight drop in model inference accuracy with BFloat16, but the impact is acceptable for the majority of applications.
+BFloat16 provides improved performance and smaller memory footprint with the same dynamic range. You might experience a drop in model inference accuracy with BFloat16, but the impact is acceptable for the majority of applications.
 
 You can use an environment variable to enable BFloat16:
 
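The command that produces the `Flags:` output referenced in the hunk above is elided from the diff. A small, hypothetical Python helper (not from the guide) that performs an equivalent check by parsing `/proc/cpuinfo` might look like this:

```python
def has_cpu_flag(cpuinfo_text: str, flag: str) -> bool:
    """Return True if a 'Flags'/'Features' line lists the given CPU flag.

    Arm Linux kernels report features on 'Features' lines; some tools
    show the same information under 'Flags'.
    """
    for line in cpuinfo_text.splitlines():
        key = line.split(":")[0].strip().lower()
        if key in ("flags", "features"):
            if flag in line.split(":", 1)[-1].split():
                return True
    return False


if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("bf16 supported:", has_cpu_flag(f.read(), "bf16"))
    except FileNotFoundError:
        print("/proc/cpuinfo is not available on this system")
```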
@@ -169,7 +169,7 @@ export LRU_CACHE_CAPACITY=1024
 
 ## Transparent huge pages
 
-Transparent huge pages (THP) provide an alternative method of utilizing huge pages for virtual memory. Enabling THP may result in improved performance because it reduces the overhead of Translation Lookaside Buffer (TLB) lookups by using a larger virtual memory page size.
+Transparent huge pages (THP) provide an alternative method of utilizing huge pages for virtual memory. Enabling THP might result in improved performance because it reduces the overhead of Translation Lookaside Buffer (TLB) lookups by using a larger virtual memory page size.
 
 To check if THP is available on your system, run:
 
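The command the guide uses for this check is elided from the diff. On Linux, the active THP mode appears in square brackets in `/sys/kernel/mm/transparent_hugepage/enabled` (for example, `always [madvise] never`); a hypothetical helper (not from the guide) to read it:

```python
def thp_mode(enabled_text: str) -> str:
    """Extract the active THP mode, which the kernel marks with brackets,
    e.g. 'always [madvise] never' -> 'madvise'."""
    for token in enabled_text.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return "unknown"


if __name__ == "__main__":
    try:
        with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
            print("THP mode:", thp_mode(f.read()))
    except FileNotFoundError:
        print("THP is not available on this kernel")
```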
@@ -201,7 +201,7 @@ export THP_MEM_ALLOC_ENABLE=1
 
 ## Profiling example
 
-To profile an [Vision Transformer (ViT) model](https://huggingface.co/google/vit-base-patch16-224), first download the transformers and datasets libraries:
+To profile a [Vision Transformer (ViT) model](https://huggingface.co/google/vit-base-patch16-224), first download the transformers and datasets libraries:
 
 ```
 pip install transformers datasets
@@ -295,13 +295,13 @@ Predicted class: Egyptian cat
 Self CPU time total: 786.880ms
 ```
 
-Experiment with the 2 environment variables for BFloat16 and THP and observe the performance differences.
+Experiment with the two environment variables for BFloat16 and THP and observe the performance differences.
 
 You can set each variable and run the test again and observe the new profile data and run time.
 
 ## Profiling example with dynamic quantization
 
-You can improve the performance of model inference with the `torch.nn.Linear` layer using dynamic quantization. This technique converts weights to 8-bit integers before inference and dynamically quantizes activations during inference, without needing fine-tuning. However, it may impact accuracy of your model.
+You can improve the performance of model inference with the `torch.nn.Linear` layer using dynamic quantization. This technique converts weights to 8-bit integers before inference and dynamically quantizes activations during inference, without the requirement for fine-tuning. However, it might impact the accuracy of your model.
 
 Use a text editor to save the code below as `profile-vit-dq.py`:
 ```python
@@ -396,6 +396,6 @@ Self CPU time total: 633.541ms
 
 You should see the `quantized::linear_dynamic` layer being profiled. You can see the improvement in the model inference performance using dynamic quantization.
 
-You are ready to use PyTorch on Arm Linux.
+You are now ready to use PyTorch on Arm Linux.
 
-Now explore the many [machine learning articles and examples using PyTorch](https://pytorch.org/tutorials/).
+Continue learning by exploring the many [machine learning articles and examples using PyTorch](https://pytorch.org/tutorials/).
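The full `profile-vit-dq.py` script is elided between hunks in this diff. As a hedged, self-contained sketch of the same technique on a toy model (not the guide's ViT code), dynamic quantization of `torch.nn.Linear` layers combined with the PyTorch profiler looks roughly like:

```python
import torch
import torch.nn as nn
from torch.profiler import ProfilerActivity, profile

# Toy model standing in for the ViT used by the guide.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Dynamic quantization: Linear weights become int8 ahead of time, and
# activations are quantized on the fly at inference; no fine-tuning needed.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.rand(32, 128)
with profile(activities=[ProfilerActivity.CPU]) as prof:
    with torch.no_grad():
        qmodel(x)

# The profile table should list the quantized::linear_dynamic operator.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```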
