articles/azure-netapp-files/large-volumes-requirements-considerations.md (+1 -1)

@@ -73,7 +73,7 @@ The following requirements and considerations apply to large volumes. For perfor
 ## About 64-bit file IDs
 
-Whereas regular volume use 32-bit file IDs, large volumes employ 64-bit file IDs. File IDs are unique identifiers that allow Azure NetApp Files to keep track of files in the file system. 64-bit IDs are utilized to increase the number of files allowed in a single volume, enabling a large volume able to hold more files than a regular volume.
+Whereas regular volumes use 32-bit file IDs, large volumes employ 64-bit file IDs. File IDs are unique identifiers that allow Azure NetApp Files to keep track of files in the file system. 64-bit IDs are utilized to increase the number of files allowed in a single volume, enabling a large volume able to hold more files than a regular volume.
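As an aside to the hunk above: on an NFS client, the file ID a volume reports for a file surfaces as its inode number, so `stat` gives a quick way to see one. A minimal sketch, assuming a large volume is already mounted; the mount path and file name are placeholders, not values from the diff:

```shell
# Print the file ID (inode number) the volume reports for a file.
# On a large volume these 64-bit IDs can exceed the 32-bit range (4,294,967,295).
stat --format='%n: file ID %i' /mnt/largevol/example.txt
```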
articles/azure-netapp-files/manage-manual-qos-capacity-pool.md (+2 -2)

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 06/14/2021
+ms.date: 01/14/2025
 ms.author: anfdocs
 ---
 # Manage a manual QoS capacity pool

@@ -40,7 +40,7 @@ You can change a capacity pool that currently uses the auto QoS type to use the
 ## Monitor the throughput of a manual QoS capacity pool
 
-Metrics are available to help you monitor the read and write throughput of a volume. See [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md).
+Metrics are available to help you monitor the read and write throughput of a volume. See [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md).
 
 ## Modify the allotted throughput of a manual QoS volume
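For context on the throughput-monitoring line touched above, a rough sketch of pulling those per-volume read/write throughput numbers with the Azure CLI. The resource ID is a placeholder, and the metric names `ReadThroughput` and `WriteThroughput` are assumptions to confirm against the linked metrics article:

```shell
# List average read/write throughput for an Azure NetApp Files volume in 5-minute buckets.
# Replace every <placeholder> with your own subscription, resource group, account, pool, and volume.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>" \
  --metric ReadThroughput WriteThroughput \
  --interval PT5M \
  --aggregation Average
```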
articles/azure-netapp-files/performance-linux-concurrency-session-slots.md (+10 -10)

@@ -6,7 +6,7 @@ author: b-hchen
 ms.service: azure-netapp-files
 ms.custom: linux-related-content
 ms.topic: conceptual
-ms.date: 08/02/2021
+ms.date: 08/02/2024
 ms.author: anfdocs
 ---
 # Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries

@@ -36,15 +36,15 @@ A concurrency level as low as 155 is sufficient to achieve 155,000 Oracle DB NFS
 See [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) for details.
 
-The `sunrpc.tcp_max_slot_table_entries` tunable is a connection-level tuning parameter. *As a best practice, set this value to 128 or less per connection, not surpassing 10,000 slots environment wide.*
+The `sunrpc.tcp_max_slot_table_entries` tunable is a connection-level tuning parameter. *As a best practice, set this value to 128 or less per connection, not surpassing 10,000 slots environment wide.*
 
 ### Examples of slot count based on concurrency recommendation
 
 Examples in this section demonstrate the slot count based on concurrency recommendation.
 
 #### Example 1 – One NFS client, 65,536 `sunrpc.tcp_max_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128 based on the server-side limit of 128
 
-Example 1 is based on a single client workload with the default `sunrpc.tcp_max_slot_table_entry` value of 65,536 and a single network connection, that is, no `nconnect`. In this case, a concurrency of 128 is achievable.
+Example 1 is based on a single client workload with the default `sunrpc.tcp_max_slot_table_entry` value of 65,536 and a single network connection, that is, no `nconnect`. In this case, a concurrency of 128 is achievable.
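The hunk above restates the slot-table best practice, so a minimal sketch of applying the 128-per-connection ceiling on a Linux NFS client may help. The file name under `/etc/modprobe.d` is arbitrary, and connections established before the change may not pick it up until they are re-established:

```shell
# Persist the recommended ceiling so it survives reboots (file name is illustrative).
echo "options sunrpc tcp_max_slot_table_entries=128" | sudo tee /etc/modprobe.d/sunrpc.conf

# Apply it to the running system, then verify the value in use.
sudo sysctl -w sunrpc.tcp_max_slot_table_entries=128
sysctl sunrpc.tcp_max_slot_table_entries
```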
@@ -62,7 +62,7 @@ Example 2 is based on a single client workload with a `sunrpc.tcp_max_slot_table
 #### Example 3 – One NFS client, 100 `sunrpc.tcp_max_slot_table_entries`, and `nconnect=8` for a maximum concurrency of 800
 
-Example 3 is based on a single client workload, but with a lower `sunrpc.tcp_max_slot_table_entry` value of 100. This time, the `nconnect=8` mount option used spreading the workload across 8 connection. With this setting, a concurrency of 800 is achievable spread across the 8 connections. This amount is the concurrency needed to achieve 400,000 IOPS.
+Example 3 is based on a single client workload, but with a lower `sunrpc.tcp_max_slot_table_entry` value of 100. This time, the `nconnect=8` mount option used spreading the workload across 8 connection. With this setting, a concurrency of 800 is achievable spread across the 8 connections. This amount is the concurrency needed to achieve 400,000 IOPS.
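For readers unfamiliar with `nconnect` as used in Example 3, a minimal mount sketch follows. The server address, export path, mount point, and rsize/wsize values are placeholders rather than values from the article, and `nconnect` requires a reasonably recent Linux kernel (5.3 or later):

```shell
# Mount an NFS volume over 8 TCP connections so outstanding requests (and their
# slot tables) are spread across connections instead of queuing on a single one.
sudo mount -t nfs -o rw,hard,tcp,vers=3,nconnect=8,rsize=262144,wsize=262144 \
  10.0.0.4:/myvolume /mnt/myvolume
```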
@@ -145,7 +145,7 @@ In NFSv4.1, sessions define the relationship between the client and the server.
 |-|-|-|
 | 180 | 64 | 64 |
 
-Although Linux clients default to 64 maximum requests per session, the value of `max_session_slots` is tunable. A reboot is required for changes to take effect. Use the `systool -v -m nfs` command to see the current maximum in use by the client. For the command to work, at least one NFSv4.1 mount must be in place:
+Although Linux clients default to 64 maximum requests per session, the value of `max_session_slots` is tunable. A reboot is required for changes to take effect. Use the `systool -v -m nfs` command to see the current maximum in use by the client. For the command to work, at least one NFSv4.1 mount must be in place:
 
 ```shell
 $ systool -v -m nfs
@@ -164,7 +164,7 @@ To tune `max_session_slots`, create a configuration file under `/etc/modprobe.d`
-Azure NetApp Files limits each session to 180 max commands. As such, consider 180 the maximum value currently configurable. The client will be unable to achieve a concurrency greater than 128 unless the session is divided across more than one connection as Azure NetApp Files restricts each connection to 128 max NFS commands. To get more than one connection, the `nconnect` mount option is recommended, and a value of two or greater is required.
+Azure NetApp Files limits each session to 180 max commands. As such, consider 180 the maximum value currently configurable. The client will be unable to achieve a concurrency greater than 128 unless the session is divided across more than one connection as Azure NetApp Files restricts each connection to 128 max NFS commands. To get more than one connection, the `nconnect` mount option is recommended, and a value of two or greater is required.
 
 ### Examples of expected concurrency maximums
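The hunk header above points at tuning `max_session_slots` through `/etc/modprobe.d`. A minimal sketch under that assumption, using the 180 per-session maximum named in the changed text; the configuration file name is arbitrary:

```shell
# Raise the NFSv4.1 session slot ceiling to 180; per the article, a reboot is
# required for the nfs module parameter to take effect (file name is illustrative).
echo "options nfs max_session_slots=180" | sudo tee /etc/modprobe.d/nfsclient.conf
sudo reboot

# After reboot, confirm the value the client is using.
cat /sys/module/nfs/parameters/max_session_slots
```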
@@ -181,7 +181,7 @@ Example 1 is based on default setting of 64 `max_session_slots` and no `nconnect
 #### Example 2 – 64 `max_session_slots` and `nconnect=2`
 
-Example 2 is based on 64 max `session_slots` but with the added mount option of `nconnect=2`. A concurrency of 64 is achievable but divided across two connections. Although multiple connections bring no greater concurrency in this scenario, the decreased queue depth per connection has a positive impact on latency.
+Example 2 is based on 64 max `session_slots` but with the added mount option of `nconnect=2`. A concurrency of 64 is achievable but divided across two connections. Although multiple connections bring no greater concurrency in this scenario, the decreased queue depth per connection has a positive impact on latency.
 
 With the `max_session_slots` still at 64 but `nconnect=2`, notice that maximum number of requests get divided across the connections.
@@ -196,7 +196,7 @@ With the `max_session_slots` still at 64 but `nconnect=2`, notice that maximum n
 #### Example 3 – 180 `max_session_slots` and no `nconnect`
 
-Example 3 drops the `nconnect` mount option and sets the `max_session_slots` value to 180, matching the server’s maximum NFSv4.1 session concurrency. In this scenario, with only one connection and given the Azure NetApp Files 128 maximum outstanding operation per NFS connection, the session is limited to 128 operations in flight.
+Example 3 drops the `nconnect` mount option and sets the `max_session_slots` value to 180, matching the server’s maximum NFSv4.1 session concurrency. In this scenario, with only one connection and given the Azure NetApp Files 128 maximum outstanding operation per NFS connection, the session is limited to 128 operations in flight.
 
 Although `max_session_slots` has been set to 180, the single network connection is limited to 128 maximum requests as such:
@@ -226,7 +226,7 @@ With two connections in play, the session supports the full allotment of 180 out
 ### How to check for the maximum requests outstanding for the session
 
-To see the `session_slot` sizes supported by the client and server, capture the mount command in a packet trace. Look for the `CREATE_SESSION` call and `CREATE_SESSION` reply as shown in the following example. The call originated from the client, and the reply originated from the server.
+To see the `session_slot` sizes supported by the client and server, capture the mount command in a packet trace. Look for the `CREATE_SESSION` call and `CREATE_SESSION` reply as shown in the following example. The call originated from the client, and the reply originated from the server.
 
 Use the following `tcpdump` command to capture the mount command:
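The article's own `tcpdump` command falls outside this hunk and isn't reproduced here; as a rough sketch of the kind of capture the changed text describes, with the interface, server address, export, and output path all placeholders:

```shell
# Capture NFS traffic (port 2049) while the NFSv4.1 mount runs, then stop the capture.
sudo tcpdump -i eth0 -s 0 -w /tmp/nfs_mount.pcap port 2049 &
sudo mount -t nfs -o vers=4.1 10.0.0.4:/myvolume /mnt/myvolume
sudo pkill tcpdump
```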
@@ -244,7 +244,7 @@ Within these two packets, look at the `max_reqs` field within the middle section
 * `csa_fore_channel_attrs`
 * `max reqs`
 
-Packet 12 (client maximum requests) shows that the client had a `max_session_slots` value of 64. In the next section, notice that the server supports a concurrency of 180 for the session. The session ends up negotiating the lower of the two provided values.
+Packet 12 (client maximum requests) shows that the client had a `max_session_slots` value of 64. In the next section, notice that the server supports a concurrency of 180 for the session. The session ends up negotiating the lower of the two provided values.
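A sketch of pulling the negotiated values back out of such a capture with `tshark`: the capture path matches the illustrative `tcpdump` sketch earlier, and grepping the verbose dissection for the `CREATE_SESSION` operation is an assumption about the labels Wireshark prints:

```shell
# Dump the NFS packets verbosely and show the CREATE_SESSION call/reply sections,
# which carry csa_fore_channel_attrs and the max reqs values being negotiated.
tshark -r /tmp/nfs_mount.pcap -Y nfs -V | grep -i -A 12 "create_session"
```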
articles/azure-netapp-files/performance-virtual-machine-sku.md (+3 -3)

@@ -1,14 +1,14 @@
 ---
-title: Azure virtual machine stock-keeping units (SKUs) best practices for Azure NetApp Files | Microsoft Docs
+title: Azure virtual machine stock-keeping unit (SKUs) best practices for Azure NetApp Files | Microsoft Docs
 description: Describes Azure NetApp Files best practices about Azure virtual machine stocking-keeping units (SKUs), including differences within and between SKUs.
 services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 07/02/2021
+ms.date: 10/02/2024
 ms.author: anfdocs
 ---
-# Azure virtual machine stock-keeping unit best practices for Azure NetApp Files
+# Azure virtual machine stock-keeping unit (SKU) best practices for Azure NetApp Files
 
 This article describes Azure NetApp Files best practices about Azure virtual machine stock-keeping units (SKUs), including differences within and between SKUs.
0 commit comments