
Commit 4a3df7c

update using product team feedback
1 parent 7f8befa commit 4a3df7c

2 files changed (+6, -4 lines, -31 KB)
articles/reliability/reliability-container-registry.md

Lines changed: 6 additions & 4 deletions
@@ -37,7 +37,7 @@ Azure Container Registry is built as a distributed service with distinct control

  **Key Architecture Components:**

- - **Control Plane**: Centralized management in the home region for registry configuration, authentication, and replication policies
+ - **Control Plane**: Centralized management in the home region for registry configuration, authentication configuration, and replication policies
  - **Data Plane**: Distributed service that handles container image push and pull operations across regions and availability zones
  - **Storage Layer**: Content-addressable Azure Storage with automatic deduplication, encryption at rest, and built-in replication
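As a point of reference for the control plane/data plane split described in this hunk, a minimal Azure CLI sketch; the registry name `myregistry` and resource group `my-rg` are placeholders, not values from the article:

```bash
# Sketch only: show where the home region (control plane) lives and the single
# login server name (data plane endpoint) that clients use.
az acr show \
  --name myregistry \
  --resource-group my-rg \
  --query "{loginServer:loginServer, homeRegion:location, sku:sku.name}" \
  --output table
```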

@@ -145,7 +145,7 @@ During normal multi-region operations, Azure Container Registry synchronizes dat

  ### Region failure operations

- When a region becomes unavailable, container operations can continue using alternative regional endpoints:
+ When a region becomes unavailable, container operations are automatically routed to another replica in a healthy region. Clients don't need to change the endpoint that they use to interact with the registry; routing, failover, and failback are handled automatically by Microsoft.

  :::image type="content" source="./media/reliability-acr/acr-multi-region-region-failure.png" alt-text="Diagram showing Azure Container Registry behavior during regional failure with automatic Traffic Manager failover routing clients to healthy regions while West Europe is marked as failed, and continued bidirectional replication between operational regions." lightbox="./media/reliability-acr/acr-multi-region-region-failure.png":::
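To illustrate the "no endpoint change" behavior in the revised paragraph, a minimal sketch, assuming a geo-replicated registry named `myregistry` and a sample image tag (both placeholders):

```bash
# Sketch only: the pull targets the single login server before, during, and after a
# regional failure; regional routing happens behind myregistry.azurecr.io.
az acr login --name myregistry
docker pull myregistry.azurecr.io/samples/hello-world:v1
```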

@@ -159,7 +159,7 @@ You must use the Premium tier to enable geo-replication. Geo-replication can be

  ### Considerations

- Each geo-replicated region functions as an independent registry endpoint that supports read and write operations. Container clients can connect to any regional endpoint for registry operations.
+ Each geo-replicated region functions as an independent registry endpoint that supports read and write operations. Container clients can be routed by Microsoft-managed Traffic Manager to any geo-replica for read and write operations.

  Geo-replication provides eventual consistency across regions using asynchronous replication. There's no SLA on data replication timing, and replication typically completes within minutes of changes. Large container images or high-frequency updates may take longer to replicate across all regions.
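For context on the Premium-tier geo-replication discussed in this hunk, a minimal Azure CLI sketch; the registry name and region are placeholders:

```bash
# Sketch only: geo-replication requires the Premium tier.
az acr update --name myregistry --sku Premium

# Add a geo-replica; each replica serves read and write operations in its region.
az acr replication create --registry myregistry --location westeurope

# List replicas and their status.
az acr replication list --registry myregistry --output table
```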

@@ -197,7 +197,9 @@ When a region recovers, data plane operations automatically resume for that regi

  ### Testing for region failures

- Regional failover for data plane operations is fully automated through Traffic Manager and can't be simulated by customers. The service is designed to automatically handle regional failures without impacting registry availability or data integrity for data plane operations.
+ Regional failover can be simulated by temporarily disabling geo-replicas, which removes them from Traffic Manager routing. This allows you to test failover scenarios without experiencing an actual regional outage. For details on this process, see [Temporarily disable routing to replication](/azure/container-registry/container-registry-geo-replication#temporarily-disable-routing-to-replication).
+
+ When you re-enable the replica, Traffic Manager routing to it resumes automatically, and metadata and images are synchronized to the re-enabled replica with eventual consistency to ensure data consistency across all regions.

  ## Backups
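A minimal Azure CLI sketch of the failover test described in this hunk, following the linked "Temporarily disable routing to replication" guidance; the registry and replica names are placeholders, and the current parameter names should be confirmed in that article:

```bash
# Sketch only: take a replica out of Traffic Manager routing to simulate a regional failure.
az acr replication update --name westeurope --registry myregistry --region-endpoint-enabled false

# Check the replica's endpoint state.
az acr replication show --name westeurope --registry myregistry --query regionEndpointEnabled

# Re-enable the replica; routing resumes and content re-synchronizes automatically.
az acr replication update --name westeurope --registry myregistry --region-endpoint-enabled true
```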
