
Commit 93b78e6

Merge pull request #297375 from wmgries/consistent-media-tiers
Remove unneeded large file share for Geo doc.
2 parents: b1297e6 + d2fc6be

5 files changed: +29, -111 lines changed


.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
@@ -6738,6 +6738,11 @@
       "source_path": "articles/notification-hubs/xamarin-notification-hubs-push-notifications-android-gcm.md",
       "redirect_url": "/dotnet/maui/data-cloud/push-notifications",
       "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/storage/files/geo-redundant-storage-for-large-file-shares.md",
+      "redirect_url": "/azure/storage/files/files-redundancy",
+      "redirect_document_id": false
     }
   ]
 }

articles/storage/files/TOC.yml

Lines changed: 0 additions & 2 deletions
@@ -235,8 +235,6 @@
   href: files-redundancy.md
 - name: Disaster recovery and failover
   href: files-disaster-recovery.md
-- name: Geo-redundancy for large file shares
-  href: geo-redundant-storage-for-large-file-shares.md
 - name: Change the redundancy configuration
   href: files-change-redundancy-configuration.md
 - name: Initiate storage account failover

articles/storage/files/files-redundancy.md

Lines changed: 22 additions & 1 deletion
@@ -127,13 +127,34 @@ Only standard general-purpose v2 storage accounts support GZRS.
 
 To determine if a region supports GZRS, see the [Azure regions list](/azure/reliability/regions-list#azure-regions-list-1). To support GZRS, a region must support availability zones and have a paired region.
 
-### Disaster recovery and failover
+### Snapshot and sync frequency
 
+To ensure Geo and GeoZone redundant file shares are in a consistent state when a failover occurs, a system snapshot is created in the primary region every 15 minutes and is replicated to the secondary region. When a failover occurs to the secondary region, the share state is based on the latest system snapshot in the secondary region. Due to geo-lag or other issues, the latest system snapshot in the secondary region might be older than 15 minutes.
+
+The Last Sync Time (LST) property on the storage account indicates the last time that data from the primary region was written successfully to the secondary region. For Azure Files, the Last Sync Time is based on the latest system snapshot in the secondary region. You can use PowerShell or Azure CLI to [check the Last Sync Time](../common/last-sync-time-get.md#get-the-last-sync-time-property) for a storage account.
+
+It's important to understand the following about the Last Sync Time property:
+
+- The Last Sync Time property on the storage account is based on the service (Files, Blobs, Tables, Queues) in the storage account that's the furthest behind.
+- The Last Sync Time isn't updated if no changes have been made on the storage account.
+- The Last Sync Time calculation can time out if the number of file shares exceeds 100 per storage account. Keeping fewer than 100 file shares per storage account is recommended.
+
+### Failover considerations
 With GRS or GZRS, the file shares won't be accessible in the secondary region unless a failover occurs. If the primary region becomes unavailable, you can choose to fail over to the secondary region. The failover process updates the DNS entry provided by Azure Files so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, the secondary region becomes the new primary region, and you can again read and write data. For more information, see [Azure Files disaster recovery and failover](files-disaster-recovery.md).
 
 > [!IMPORTANT]
 > Azure Files doesn't support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). If a storage account is configured to use RA-GRS or RA-GZRS, the file shares will be configured and billed as GRS or GZRS.
 
+The following items might impact your ability to fail over to the secondary region:
+
+- Storage account failover is blocked if a system snapshot doesn't exist in the secondary region.
+- Storage account failover is blocked if the storage account contains more than 100,000 file shares. To fail over the storage account, open a support request.
+- File handles and leases aren't retained on failover, and clients must unmount and remount the file shares.
+- File share quota might change after failover. The file share quota in the secondary region will be based on the quota that was configured when the system snapshot was taken in the primary region.
+- Copy operations in progress are aborted when a failover occurs. When the failover to the secondary region completes, retry the copy operation.
+
+To fail over a storage account, see [initiate an account failover](../common/storage-initiate-account-failover.md).
+
 ### Geo-redundancy for SSD file shares
 
 As previously mentioned, geo-redundancy options (GRS and GZRS) aren't supported for SSD file shares. However, you can achieve geo-redundancy in other ways.
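As an illustrative aside on the workflow the new "Snapshot and sync frequency" and "Failover considerations" sections describe (check the Last Sync Time, then decide whether to initiate a failover), here is a minimal sketch using the Azure SDK for Python. This is not part of the commit above; the documented guidance uses PowerShell or Azure CLI. The subscription ID, resource group, and account name are placeholders, and it assumes recent azure-identity and azure-mgmt-storage packages.

# Minimal sketch: read geo-replication stats (including Last Sync Time) for a GRS/GZRS account.
# Assumption: azure-identity and a recent azure-mgmt-storage are installed and you are signed in.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholder values for illustration only.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<storage-account>"

client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Expanding geoReplicationStats returns replication status, Last Sync Time, and failover readiness.
account = client.storage_accounts.get_properties(
    resource_group, account_name, expand="geoReplicationStats"
)
stats = account.geo_replication_stats
print("Replication status:", stats.status)
print("Last Sync Time:", stats.last_sync_time)  # data written before this time is in the secondary region
print("Can fail over:", stats.can_failover)

# Initiating a failover is disruptive and asynchronous, so it's shown commented out.
# poller = client.storage_accounts.begin_failover(resource_group, account_name)
# poller.result()

The same information is returned by the PowerShell and Azure CLI steps linked from the new Last Sync Time paragraph; the SDK call here is just one way to script the check.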

articles/storage/files/files-whats-new.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ Azure Backup now enables you to perform a vaulted backup of Azure Files to prote
 ### 2024 quarter 1 (January, February, March)
 
 #### Generally available: Azure Files large file share support for Geo and GeoZone redundancy
-HDD file shares that are Geo (GRS) or GeoZone (GZRS) redundant can now scale up to 100 TiB capacity with significantly improved IOPS and throughput limits. For more information, see [blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-files-geo-redundancy-for-standard/ba-p/4097935) and [documentation](geo-redundant-storage-for-large-file-shares.md).
+HDD file shares that are Geo (GRS) or GeoZone (GZRS) redundant can now scale up to 100 TiB capacity with significantly improved IOPS and throughput limits. For more information, see [Geo and GeoZone redundancy](./files-redundancy.md#redundancy-in-a-secondary-region).
 
 #### Metadata caching for SSD SMB file shares is in public preview
 
@@ -125,7 +125,7 @@ Note: The number of active users supported per share is dependent on the applica
 The root directory handle limit has been increased in all regions and applies to all existing and new file shares. For more information about Azure Files scale targets, see: [Azure Files scalability and performance targets](storage-files-scale-targets.md).
 
 #### Preview: Azure Files large file share support for Geo and GeoZone redundancy
-Azure Files geo-redundancy for large file shares preview significantly improves capacity and performance for HDD file shares when using geo-redundant storage (GRS) and geo-zone redundant storage (GZRS) options. The preview is only available for HDD file shares. For more information, see [Azure Files geo-redundancy for large file shares preview](geo-redundant-storage-for-large-file-shares.md).
+Azure Files geo-redundancy for large file shares preview significantly improves capacity and performance for HDD file shares when using geo-redundant storage (GRS) and geo-zone redundant storage (GZRS) options. The preview is only available for HDD file shares. For more information, see [Geo and GeoZone redundancy](./files-redundancy.md#redundancy-in-a-secondary-region).
 
 #### New SLA of 99.99% uptime for SSD file shares

articles/storage/files/geo-redundant-storage-for-large-file-shares.md

Lines changed: 0 additions & 106 deletions
This file was deleted.
