tool_install: true
weight: 1
---

[PyTorch](https://pytorch.org/) is a popular end-to-end machine learning framework for Python. It is used to build and deploy neural networks, especially for tasks such as computer vision and natural language processing (NLP).
Follow the instructions below to install and use PyTorch on Arm Linux.

{{% notice Note %}}
Anaconda provides another way to install PyTorch. See the [Anaconda install guide](/install-guides/anaconda/) to find out how to use PyTorch from Anaconda. The Anaconda version of PyTorch might be older than the version available using `pip`.
{{% /notice %}}
## Before you begin

Confirm that you are using an Arm Linux system by running:

```bash
uname -m
```
The output should be:

```output
aarch64
```

If you see a different result, then you are not using an Arm computer running 64-bit Linux.

PyTorch requires Python 3, and it can be installed with `pip`.
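For example, assuming a system where `pip3` is already available, the CPU build can be fetched from PyPI (the package name is `torch`):

```shell
# Install the PyTorch package from PyPI; on aarch64 this pulls a CPU wheel.
pip3 install torch
```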
If the result is blank, you do not have a processor with BFloat16 support.

BFloat16 provides improved performance and a smaller memory footprint with the same dynamic range. You might experience a drop in model inference accuracy with BFloat16, but the impact is acceptable for the majority of applications.

You can use an environment variable to enable BFloat16:
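The enabling command itself is not shown in this excerpt. On Arm builds of PyTorch, BFloat16 fast math is typically requested through oneDNN's fpmath mode; the variable name below is an assumption based on that convention, so check your PyTorch release notes if it has no effect:

```shell
# Assumption: oneDNN's fpmath-mode control enables BFloat16 fast math
# in PyTorch on Arm; adjust if your build documents a different variable.
export DNNL_DEFAULT_FPMATH_MODE=BF16
```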
Transparent huge pages (THP) provide an alternative method of using huge pages for virtual memory. Enabling THP might result in improved performance because it reduces the overhead of Translation Lookaside Buffer (TLB) lookups by using a larger virtual memory page size.

To check if THP is available on your system, run:
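A common way to check is through sysfs (the path below is standard on Linux kernels built with THP support):

```shell
# Prints something like "always [madvise] never";
# the bracketed value is the currently active THP mode.
cat /sys/kernel/mm/transparent_hugepage/enabled
```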
Set the following environment variable to enable THP for PyTorch memory allocation:

```bash
export THP_MEM_ALLOC_ENABLE=1
```

## Profiling example

To profile a [Vision Transformer (ViT) model](https://huggingface.co/google/vit-base-patch16-224), first download the transformers and datasets libraries:
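The exact command is not shown in this excerpt, but installing the two Hugging Face libraries with `pip` is the usual route:

```shell
# transformers provides the ViT model class; datasets provides sample images.
pip3 install transformers datasets
```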
Experiment with the two environment variables for BFloat16 and THP, and observe the performance differences.

You can set each variable, run the test again, and observe the new profile data and run time.

## Profiling example with dynamic quantization

You can improve the performance of model inference with the `torch.nn.Linear` layer using dynamic quantization. This technique converts weights to 8-bit integers before inference and dynamically quantizes activations during inference, without the need for fine-tuning. However, it might impact the accuracy of your model.

Use a text editor to save the code below as `profile-vit-dq.py`:
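The guide's full `profile-vit-dq.py` script is not included in this excerpt. The following is a minimal self-contained sketch of the same technique; a small `torch.nn.Linear` stack stands in for the ViT model (which the real script loads from `google/vit-base-patch16-224`) so the example runs without downloading anything:

```python
import torch
from torch.ao.quantization import quantize_dynamic
from torch.profiler import ProfilerActivity, profile

# A small Linear stack stands in for the ViT model in this sketch.
model = torch.nn.Sequential(
    torch.nn.Linear(768, 3072),
    torch.nn.GELU(),
    torch.nn.Linear(3072, 768),
).eval()

# Convert Linear weights to int8 ahead of time;
# activations are quantized dynamically at inference time.
qmodel = quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.randn(8, 768)
with profile(activities=[ProfilerActivity.CPU]) as prof:
    with torch.no_grad():
        out = qmodel(x)

# The quantized::linear_dynamic operator should appear in the table.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```

The real script profiles the full ViT forward pass the same way; only the model construction differs.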
```output
Self CPU time total: 633.541ms
```

You should see the `quantized::linear_dynamic` layer being profiled, and you can see the improvement in model inference performance from using dynamic quantization.
You are now ready to use PyTorch on Arm Linux.

Continue learning by exploring the many [machine learning articles and examples using PyTorch](https://pytorch.org/tutorials/).