articles/cosmos-db/high-availability.md (3 additions & 3 deletions)
@@ -140,9 +140,9 @@ Multi-region accounts experience different behaviors depending on the following
 | Configuration | Outage | Availability impact | Durability impact | What to do |
 | -- | -- | -- | -- | -- |
-| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency which loses write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, readjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, readjust provisioned RUs as appropriate. Accounts using API for NoSQLs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Read region outage | All clients automatically redirect reads to other regions. No loss of read or write availability for any configuration, except two-region accounts with strong consistency, which lose write availability until the service is restored or, if you enable **service-managed failover**, the service marks the region as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> When the outage is over, readjust provisioned RUs as appropriate. |
+| Single write region | Write region outage | Clients redirect reads to other regions. <br/> **Without service-managed failover**, clients experience write availability loss until write availability is restored automatically when the outage ends. <br/> **With service-managed failover**, clients experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If you haven't selected the strong consistency level, the service might not have replicated some data to the remaining active regions. This replication depends on the consistency level selected, as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you could lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, readjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Any regional outage | Possible temporary loss of write availability, analogous to a single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/> When the outage is over, you may readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers non-replicated data in the failed region. This automatic recovery uses the configured conflict resolution method for API for NoSQL accounts. For accounts using other APIs, this automatic recovery uses *Last Write Wins*. |
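The "What to do" guidance in the new rows amounts to two runbook steps: scale provisioned RUs in the surviving regions while the outage lasts, and, for API for NoSQL accounts, inspect the conflicts feed afterward for writes that never replicated out of the failed region. The sketch below is a minimal, hedged illustration of those steps using the `azure-cosmos` Python SDK; the article itself doesn't prescribe an SDK, and the endpoint, key, region, database, and container names are placeholders.

```python
# Hypothetical sketch only; account endpoint, key, regions, database, and
# container names below are placeholders, not values from the article.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-account-key>",
    # Prefer the remaining healthy regions for reads during the outage.
    preferred_locations=["East US 2", "West US 2"],
)
container = (
    client.get_database_client("<database>")
    .get_container_client("<container>")
)

# During the outage: raise provisioned RUs so the remaining regions can absorb
# the redirected traffic (assumes standard provisioned throughput, not autoscale).
current = container.get_throughput()
container.replace_throughput(current.offer_throughput * 2)

# After the outage (API for NoSQL accounts): read the conflicts feed to recover
# any non-replicated writes from the failed region.
for conflict in container.list_conflicts():
    print(conflict)
```

Per the table, the throughput change is temporary: readjust provisioned RUs back to normal once the outage is over.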