articles/storage/files/files-nfs-protocol.md (4 additions, 2 deletions)
@@ -4,7 +4,7 @@ description: Learn about file shares hosted in Azure Files using the Network Fil
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: concept-article
-ms.date: 05/20/2025
+ms.date: 07/11/2025
 ms.author: kendownie
 ms.custom: references_regions
 # Customer intent: "As a DevOps engineer, I want to deploy and manage NFS file shares in Azure Files, so that I can support Linux-based applications and workloads that require POSIX compliance and efficient data access."
@@ -22,6 +22,8 @@ This article covers NFS Azure file shares. For information about SMB Azure file
 ## Applies to
 | Management model | Billing model | Media tier | Redundancy | SMB | NFS |
@@ -116,7 +118,7 @@ NFS Azure file shares are supported in all regions that support SSD file shares.
 
 ## Performance
 
-NFS Azure file shares are only offered on SSD file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned v1 model](understanding-billing.md#provisioned-v1-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations.
+NFS Azure file shares are only offered on SSD file shares, which store data on solid-state drives (SSD). In the provisioned v2 model, you can provision capacity, IOPS, and throughput independently. See the [provisioned v2 model](understanding-billing.md#provisioned-v2-model) section of the **Understanding billing** article to learn more. In the provisioned v1 model, the IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned v1 model](understanding-billing.md#provisioned-v1-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. Average IO latencies are low single-digit milliseconds for small IO sizes, while average metadata latencies are high single-digit milliseconds. Metadata-heavy operations such as untar, and workloads like WordPress, might face additional latency due to the high number of open and close operations.
 
 > [!NOTE]
 > You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md).
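The provisioned v1 scaling described in the changed paragraph above can be sketched numerically. The constants and formulas below are illustrative assumptions based on previously published values for SSD (premium) shares, not authoritative figures; the linked **Understanding billing** article is the source of truth.

```python
import math

# Illustrative sketch of provisioned v1 scaling for SSD (premium) shares.
# These constants are assumptions for demonstration only; consult the
# "Understanding billing" article for the authoritative formulas.
MAX_IOPS = 102_400

def baseline_iops(provisioned_gib: int) -> int:
    # Baseline IOPS grow linearly with provisioned capacity, up to a cap.
    return min(3_000 + provisioned_gib, MAX_IOPS)

def burst_iops(provisioned_gib: int) -> int:
    # Shares can burst to a multiple of baseline, up to the same cap.
    return min(max(10_000, 3 * baseline_iops(provisioned_gib)), MAX_IOPS)

def throughput_mib_per_sec(provisioned_gib: int) -> int:
    # Throughput also scales with provisioned capacity (illustrative).
    return 100 + math.ceil(0.04 * provisioned_gib) + math.ceil(0.06 * provisioned_gib)

if __name__ == "__main__":
    for gib in (100, 1_024, 102_400):
        print(gib, baseline_iops(gib), burst_iops(gib), throughput_mib_per_sec(gib))
```

The point of the sketch is the shape of the v1 model: every performance dimension is a function of one knob, provisioned capacity, which is exactly what the v2 model changes.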
articles/storage/files/files-whats-new.md (7 additions, 4 deletions)
@@ -4,7 +4,7 @@ description: Learn about new features and enhancements in Azure Files and Azure
 author: khdownie
 ms.service: azure-file-storage
 ms.topic: concept-article
-ms.date: 06/30/2025
+ms.date: 07/11/2025
 ms.author: kendownie
 ms.custom:
 - build-2025
@@ -18,10 +18,13 @@ Azure Files and Azure File Sync are updated regularly to offer new features and
 ## What's new in 2025
 
 ### 2025 quarter 3 (July, August, September)
+#### Provisioned v2 for SSD file shares
+The provisioned v2 model for Azure Files SSD (premium) pairs predictability of total cost of ownership with flexibility, allowing you to create a file share that meets your exact storage and performance requirements. Provisioned v2 SSD shares enable independent provisioning of storage, IOPS, and throughput. In addition to predictable pricing and flexible provisioning, provisioned v2 SSD also enables an increased file share size range from 32 GiB up to 256 TiB.
 
-#### Azure File Sync Agent now available via Azure Arc extension
+To learn more, see [understanding the provisioned v2 model](./understanding-billing.md#provisioned-v2-model).
 
-Windows servers connected through Azure Arc can now install the Azure File Sync agent using a new extension called Azure File Sync Agent for Windows. The new extension is published by Microsoft and can be managed using the Azure Portal, PowerShell, or Azure CLI. To learn more, see the [Azure File Sync agent extension documentation](../file-sync/file-sync-extension.md).
+#### Azure File Sync Agent now available via Azure Arc extension
+Windows servers connected through Azure Arc can now install the Azure File Sync agent using a new extension called Azure File Sync Agent for Windows. The new extension is published by Microsoft and can be managed using the Azure portal, PowerShell, or Azure CLI. To learn more, see the [Azure File Sync agent extension documentation](../file-sync/file-sync-extension.md).
 
 ### 2025 quarter 2 (April, May, June)
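The 32 GiB to 256 TiB share size range stated for provisioned v2 SSD shares is the kind of bound worth validating before a deployment call. A minimal sketch, assuming only the range from the announcement (the function and constant names are hypothetical, not any Azure SDK API):

```python
# Minimal sketch: validate a provisioned v2 SSD share size against the
# 32 GiB - 256 TiB range stated in the announcement above. The range
# comes from the article; everything else here is illustrative.
MIN_GIB = 32
MAX_GIB = 256 * 1024  # 256 TiB expressed in GiB

def is_valid_v2_share_size(provisioned_gib: int) -> bool:
    # True when the requested size falls inside the published range.
    return MIN_GIB <= provisioned_gib <= MAX_GIB
```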
@@ -53,7 +56,7 @@ Data plane REST API access to NFS Azure file shares will enable further developm
 
 #### Support for customer initiated LRS-ZRS redundancy conversion for SSD file shares
 
-Azure Files now supports customer initiated LRS to ZRS (and vice versa) redundancy conversions for SSD file shares. NFS file shares supported if using private endpoints. You can easily manage the migration of your storage accounts through the Azure Portal, PowerShell, or CLI. To learn more, see [Azure Files data redundancy](files-redundancy.md).
+Azure Files now supports customer-initiated LRS to ZRS (and vice versa) redundancy conversions for SSD file shares. NFS file shares are supported if using private endpoints. You can easily manage the migration of your storage accounts through the Azure portal, PowerShell, or CLI. To learn more, see [Azure Files data redundancy](files-redundancy.md).
articles/storage/files/nfs-performance.md (3 additions, 1 deletion)
@@ -5,7 +5,7 @@ author: khdownie
 ms.service: azure-file-storage
 ms.custom: linux-related-content
 ms.topic: concept-article
-ms.date: 05/09/2024
+ms.date: 07/11/2024
 ms.author: kendownie
 # Customer intent: "As a system administrator managing NFS Azure file shares, I want to optimize performance using features like read-ahead and nconnect, so that I can enhance throughput and reduce costs while efficiently handling large-scale workloads."
 ---
@@ -17,6 +17,8 @@ This article explains how you can improve performance for network file system (N
 ## Applies to
 | Management model | Billing model | Media tier | Redundancy | SMB | NFS |
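Since this file is about NFS performance tuning, a concrete mount invocation helps ground the `nconnect` option mentioned in the related article. This is a hedged example, not copied from the source: `mystorageaccount` and `myshare` are placeholder names, and the mount options shown (`vers=4,minorversion=1,sec=sys`) reflect the commonly documented defaults for NFS Azure file shares.

```bash
# Hedged example: mount an NFS Azure file share with nconnect to open
# multiple TCP connections. "mystorageaccount" and "myshare" are
# placeholders, not real resources.
sudo mkdir -p /mnt/myshare
sudo mount -t nfs \
  -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  mystorageaccount.file.core.windows.net:/mystorageaccount/myshare /mnt/myshare
```

Requires root and a reachable storage account, so it is shown as a fragment rather than something to run as-is.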
@@ -40,20 +42,20 @@ Storage account scale targets apply at the storage account level. There are two
 
-**General purpose version 2 (GPv2) storage accounts**: GPv2 storage accounts allow you to deploy pay-as-you-go file shares on HDD-based hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables.
+| Maximum number of virtual network rules | 200 | 200 | 200 | 200 |
+| Maximum number of IP address rules | 200 | 200 | 200 | 200 |
+| Management read operations | 800 per 5 minutes | 800 per 5 minutes | 800 per 5 minutes | 800 per 5 minutes |
+| Management write operations | 10 per second/1200 per hour | 10 per second/1200 per hour | 10 per second/1200 per hour | 10 per second/1200 per hour |
+| Management list operations | 100 per 5 minutes | 100 per 5 minutes | 100 per 5 minutes | 100 per 5 minutes |
 
 #### Selected regions with increased maximum throughput for HDD pay-as-you-go
 The following regions have an increased maximum throughput for HDD pay-as-you-go storage accounts (StorageV2):
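Quota-style limits like the "800 management read operations per 5 minutes" row above are the kind a client can respect proactively with a sliding-window limiter. A generic sketch, not part of any Azure SDK; the class and method names are invented for illustration:

```python
import collections
import time

class SlidingWindowLimiter:
    """Client-side guard for quota-style limits such as the
    '800 management read operations per 5 minutes' target above.
    Generic sketch; not an Azure SDK feature."""

    def __init__(self, max_ops, window_seconds):
        self.max_ops = max_ops
        self.window = window_seconds
        self.timestamps = collections.deque()

    def try_acquire(self, now=None):
        # Returns True and records the operation if quota remains,
        # False otherwise. 'now' is injectable for testing.
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_ops:
            self.timestamps.append(now)
            return True
        return False
```

In practice you would still handle server-side throttling responses, but a limiter like this keeps a busy automation loop from burning the whole window in the first seconds.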
@@ -92,23 +94,23 @@ The following regions have an increased maximum throughput for HDD pay-as-you-go
 ### Azure file share scale targets
 Azure file share scale targets apply at the file share level.
-| Maximum number of files | Unlimited | Unlimited | Unlimited |
-| Maximum IOPS (Data) | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 20,000 IOPS |
-| Maximum IOPS (Metadata<sup>1</sup>) | Up to 35,000 IOPS<sup>2</sup> | Up to 12,000 IOPS | Up to 12,000 IOPS |
-| Maximum throughput | 10,340 MiB / sec (dependent on provisioning) | 5,120 MiB / sec (dependent on provisioning) | Up to storage account limits |
-| Maximum number of share snapshots | 200 snapshots | 200 snapshots | 200 snapshots |
-| Maximum filename length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
-| Maximum length of individual pathname component (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters |
-| Hard link limit (NFS only) | 178 | N/A | N/A |
-| Maximum number of SMB Multichannel channels | 4 | N/A | N/A |
-| Maximum number of stored access policies per file share | 5 | 5 | 5 |
+| Maximum number of files | Unlimited | Unlimited | Unlimited | Unlimited |
+| Maximum IOPS (Data) | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 102,400 IOPS (dependent on provisioning) | 20,000 IOPS |
+| Maximum IOPS (Metadata<sup>1</sup>) | Up to 35,000 IOPS | Up to 12,000 IOPS | Up to 35,000 IOPS | Up to 12,000 IOPS |
+| Maximum throughput | 10,340 MiB / sec (dependent on provisioning) | 5,120 MiB / sec (dependent on provisioning) | 10,340 MiB / sec (dependent on provisioning) | Up to storage account limits |
+| Maximum number of share snapshots | 200 snapshots | 200 snapshots | 200 snapshots | 200 snapshots |
+| Maximum filename length<sup>2</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters | 2,048 characters |
+| Maximum length of individual pathname component (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters | 255 characters |
+| Hard link limit (NFS only) | 178 | N/A | 178 | N/A |
+| Maximum number of SMB Multichannel channels | 4 | N/A | 4 | N/A |
+| Maximum number of stored access policies per file share | 5 | 5 | 5 | 5 |
 
 <sup>1</sup> Metadata IOPS (open/close/delete). See [Monitor Metadata IOPS](analyze-files-metrics.md#monitor-utilization-by-metadata-iops) for guidance.<br>
 <sup>2</sup> Scaling to 35,000 IOPS for SSD file shares requires [registering for the metadata caching feature](smb-performance.md#register-for-the-metadata-caching-feature).<br>
@@ -117,13 +119,13 @@ Azure file share scale targets apply at the file share level.
 ### File scale targets
 File scale targets apply to individual files stored in Azure file shares.
+| Maximum concurrent handles for root directory | 10,000 handles | 10,000 handles | 10,000 handles | 10,000 handles |
+| Maximum concurrent handles per file and directory | 2,000 handles | 2,000 handles | 2,000 handles | 2,000 handles |
 
 \* The maximum number of concurrent handles per file and directory is a soft limit for SSD SMB file shares. If you need to scale beyond this limit, you can [enable metadata caching](smb-performance.md#register-for-the-metadata-caching-feature), and register for [increased file handle limits (preview)](smb-performance.md#register-for-increased-file-handle-limits-preview).
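An application that fans out work against one file or directory can respect the 2,000-concurrent-handle soft limit above client-side with a semaphore. A generic sketch under stated assumptions: the limit value is taken from the table, while the class and names are invented for illustration and are not any Azure SDK API:

```python
import threading
from contextlib import contextmanager

# Generic sketch: cap concurrent open handles per file/directory at the
# 2,000-handle soft limit from the table above. The limit value comes
# from the table; the rest is illustrative.
HANDLE_LIMIT = 2_000

class HandleGate:
    def __init__(self, limit=HANDLE_LIMIT):
        self._sem = threading.BoundedSemaphore(limit)

    @contextmanager
    def open_handle(self):
        # Blocks when 'limit' handles are already open; releases on exit.
        self._sem.acquire()
        try:
            yield
        finally:
            self._sem.release()
```

Wrap each open in `with gate.open_handle(): ...` so worker threads queue instead of tripping the server-side limit.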