
Commit 34ceb9e

Update durable-functions-disaster-recovery-geo-distribution.md
1 parent 56a998d commit 34ceb9e

File tree

1 file changed: +6 -6 lines changed

articles/azure-functions/durable/durable-functions-disaster-recovery-geo-distribution.md

Lines changed: 6 additions & 6 deletions
@@ -51,18 +51,18 @@ However, using this scenario consider:
>
> Prior to v2.3.0, function apps that are configured to use the same storage account will process messages and update storage artifacts concurrently, resulting in much higher overall latencies and egress costs. If the primary and replica apps ever have different code deployed to them, even temporarily, then orchestrations could also fail to execute correctly because of orchestrator function inconsistencies across the two apps. It is therefore recommended that all apps that require geo-distribution for disaster recovery purposes use v2.3.0 or higher of the Durable extension.

-## Scenario 2 - Load balanced compute with regional storage
+## Scenario 2 - Load balanced compute with regional storage or regional Durable Task Scheduler

-The preceding scenario covers only the case of failure in the compute infrastructure. If the storage service fails, it will result in an outage of the function app.
-To ensure continuous operation of the durable functions, this scenario uses a local storage account on each region to which the function apps are deployed.
+The preceding scenario covers only failures in the compute infrastructure and is the recommended solution for failovers. If either the storage service or the Durable Task Scheduler (DTS) fails, the result is an outage of the function app.
+To ensure continuous operation of durable functions, this scenario deploys a dedicated storage account or a Scheduler (DTS) instance in each region where function apps are hosted. ***Currently, this is the recommended disaster recovery approach when using Durable Task Scheduler.***
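Because the binding between a function app and its regional state store is purely configuration (for the Azure Storage provider, the connection named by the app settings, `AzureWebJobsStorage` by default), the same code can be deployed unchanged to both regions. A minimal Python sketch of such a region-agnostic app using the v2 programming model follows; the route, orchestrator, and activity names are illustrative placeholders, not anything from the article:

```python
import azure.functions as func
import azure.durable_functions as df

# Identical code is deployed to the primary and the secondary region.
# Each app's connection settings (for example AzureWebJobsStorage, or the
# DTS connection) point at the state store in its own region, so no
# region-specific logic appears in the functions themselves.
app = df.DFApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="orchestrators/{functionName}")
@app.durable_client_input(client_name="client")
async def http_start(req: func.HttpRequest, client) -> func.HttpResponse:
    # Work is enqueued into the task hub backed by this region's state store.
    instance_id = await client.start_new(req.route_params["functionName"])
    return client.create_check_status_response(req, instance_id)

@app.orchestration_trigger(context_name="context")
def hello_orchestrator(context: df.DurableOrchestrationContext):
    result = yield context.call_activity("say_hello", "Tokyo")
    return result

@app.activity_trigger(input_name="name")
def say_hello(name: str) -> str:
    return f"Hello, {name}!"
```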

![Diagram showing scenario 2.](./media/durable-functions-disaster-recovery-geo-distribution/durable-functions-geo-scenario02.png)

This approach adds improvements on the previous scenario:

-- If the function app fails, Traffic Manager takes care of failing over to the secondary region. However, because the function app relies on its own storage account, the durable functions continue to work.
-- During a failover, there is no additional latency in the failover region since the function app and the storage account are colocated.
-- Failure of the storage layer will cause failures in the durable functions, which in turn will trigger a redirection to the failover region. Again, since the function app and storage are isolated per region, the durable functions will continue to work.
+- **Regional State Isolation:** Each function app is linked to its own regional storage account or DTS instance. If the function app fails, Traffic Manager redirects traffic to the secondary region. Because the function app in each region uses its local storage or DTS, durable functions can continue processing using local state.
+- **No Added Latency on Failover:** During a failover, the function app and its state provider (storage or DTS) are colocated, so there is no additional latency in the failover region.
+- **Resilience to State Backing Failures:** If the storage account or DTS instance in one region fails, the durable functions in that region fail, which triggers redirection to the secondary region. Because compute and state are isolated per region, the failover region’s durable functions remain operational.
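Traffic Manager drives the redirection described in these bullets through endpoint health probes. The article doesn't show a probe implementation, but one possible pattern is an HTTP function in each region that reports unhealthy when that region's state store is unreachable, so a storage or DTS outage fails traffic over promptly rather than only after orchestrations start failing. A hypothetical Python sketch assuming the Azure Storage provider, the azure-storage-blob SDK, and a /healthz route configured as the Traffic Manager monitoring path (all assumptions, not from the article); a DTS-backed app would need a different reachability check:

```python
import os

import azure.functions as func
from azure.storage.blob import BlobServiceClient

app = func.FunctionApp()

@app.route(route="healthz", auth_level=func.AuthLevel.ANONYMOUS)
def healthz(req: func.HttpRequest) -> func.HttpResponse:
    """Health probe for Traffic Manager: report unhealthy when this region's
    state store (the storage account backing the task hub) is unreachable."""
    try:
        blob_service = BlobServiceClient.from_connection_string(
            os.environ["AzureWebJobsStorage"]
        )
        blob_service.get_service_properties()  # cheap round trip to storage
        return func.HttpResponse("healthy", status_code=200)
    except Exception:
        # A failing probe causes Traffic Manager to route new traffic to the
        # secondary region, where compute and state remain available.
        return func.HttpResponse("unhealthy", status_code=503)
```

Probing the state store itself, rather than returning a static 200, makes the "storage failure triggers redirection" behavior explicit instead of depending on orchestration failures alone.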

Important considerations for this scenario:
