
Commit 92a64bd

edit pass: azure-cosmos-db-high-availability
1 parent 5f348ae commit 92a64bd

File tree

1 file changed: +5, -5 lines changed


articles/cosmos-db/high-availability.md

Lines changed: 5 additions & 5 deletions
@@ -154,8 +154,8 @@ Multiple-region accounts experience different behaviors depending on the followi

| Configuration | Outage | Availability impact | Durability impact| What to do |
| -- | -- | -- | -- | -- |
-| Single write region | Read region outage | All clients automatically redirect reads to other regions. There's no read or write availability loss for all configurations. The exception is a configuration of two regions with strong consistency, which loses write availability until restoration of the service. Or, *if you enable service-managed failover*, the service marks the region as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned request units (RUs) in the remaining regions to support read traffic. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients redirect reads to other regions. <br/><br/> *Without service-managed failover*, clients experience write availability loss. Restoration of write availability occurs automatically when the outage ends. <br/><br/> *With service-managed failover*, clients experience write availability loss until the services manages a failover to a new write region that you select. | If you don't select the strong consistency level, the service might not replicate some data to the remaining active regions. This replication depends on the [consistency level](consistency-levels.md#rto) that you select. If the affected region suffers permanent data loss, you could lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. Accounts that use the API for NoSQL might also recover the unreplicated data in the failed region from your [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Single write region | Read region outage | All clients automatically redirect reads to other regions. There's no read or write availability loss for all configurations. The exception is a configuration of two regions with strong consistency, which loses write availability until restoration of the service. Or, *if you enable service-managed failover*, the service marks the region as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned Request Units (RUs) in the remaining regions to support read traffic. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. |
+| Single write region | Write region outage | Clients redirect reads to other regions. <br/><br/> *Without service-managed failover*, clients experience write availability loss. Restoration of write availability occurs automatically when the outage ends. <br/><br/> *With service-managed failover*, clients experience write availability loss until the service manages a failover to a new write region that you select. | If you don't select the strong consistency level, the service might not replicate some data to the remaining active regions. This replication depends on the [consistency level](consistency-levels.md#rto) that you select. If the affected region suffers permanent data loss, you could lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. Accounts that use the API for NoSQL might also recover the unreplicated data in the failed region from your [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
| Multiple write regions | Any regional outage | There's a possibility of temporary loss of write availability, which is analogous to a single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) might also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region might be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/><br/> When the outage is over, you can readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers unreplicated data in the failed region. This automatic recovery uses the conflict resolution method that you configure for accounts that use the API for NoSQL. For accounts that use other APIs, this automatic recovery uses *last write wins*. |

### Additional information on read region outages
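
The "What to do" guidance in the table rows above amounts to temporarily raising provisioned throughput in the surviving regions and lowering it again once the outage is over. A minimal sketch of that adjustment with the Python SDK (`azure-cosmos`), assuming a container that uses manual (container-level) provisioned throughput; the endpoint, key, database, container, and RU figures are placeholders:

```python
# Minimal sketch: scale a container's provisioned RU/s up during a regional
# outage and back down afterward. Assumes the azure-cosmos Python SDK and a
# container that uses manual (non-autoscale), container-level throughput.
from azure.cosmos import CosmosClient

ENDPOINT = "https://<your-account>.documents.azure.com:443/"  # placeholder
KEY = "<your-account-key>"                                    # placeholder

client = CosmosClient(ENDPOINT, credential=KEY)
container = client.get_database_client("appdb").get_container_client("orders")

# Read the current throughput offer, then raise it while traffic is
# concentrated in the remaining regions.
current = container.get_throughput()
print("Current RU/s:", current.offer_throughput)

container.replace_throughput(current.offer_throughput * 2)  # during the outage

# ...when the outage is over, readjust back to the normal level:
# container.replace_throughput(current.offer_throughput)
```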
@@ -168,7 +168,7 @@ Multiple-region accounts experience different behaviors depending on the followi

* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, Azure Cosmos DB continues to honor read consistency guarantees.

-* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there's no data loss if your multiple-region Azure Cosmos DB account is configured with *strong* consistency. In the rare event of a permanently irrecoverable write region, a multiple-region Azure Cosmos DB account has the durability characteristics specified earlier in the [Durability](#durability) section.
+* Even in a rare and unfortunate event where an Azure write region is permanently irrecoverable, there's no data loss if your multiple-region Azure Cosmos DB account is configured with strong consistency. A multiple-region Azure Cosmos DB account has the durability characteristics specified earlier in the [Durability](#durability) section.

### Additional information on write region outages

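The read-redirect behavior described in the bullets above is driven by the regions the client is willing to read from. A minimal sketch of that configuration with the Python SDK (`azure-cosmos`), assuming the account is already replicated to the listed regions and that the `preferred_locations` keyword available in recent SDK versions is used; the endpoint, key, region names, and item identifiers are placeholders:

```python
# Minimal sketch: a client that prefers reads from West US 2 but falls back to
# East US if the preferred read region is unavailable. Assumes the azure-cosmos
# Python SDK and an account already replicated to both regions.
from azure.cosmos import CosmosClient

ENDPOINT = "https://<your-account>.documents.azure.com:443/"  # placeholder
KEY = "<your-account-key>"                                    # placeholder

client = CosmosClient(
    ENDPOINT,
    credential=KEY,
    preferred_locations=["West US 2", "East US"],  # read fallback order
)

container = client.get_database_client("appdb").get_container_client("orders")

# Reads go to the first available region in preferred_locations; no code
# changes are needed when a failed region recovers and rejoins.
item = container.read_item(item="<item-id>", partition_key="<partition-key>")
```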
@@ -192,7 +192,7 @@ The following table summarizes the high-availability capabilities of various acc
|Zone failures: availability | Availability loss | No availability loss | No availability loss | No availability loss | No availability loss |
|Regional outage: data loss | Data loss | Data loss | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md).
|Regional outage: availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No read availability loss, temporary write availability loss in the affected region |
-|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x *N* regions | Provisioned RU/s x 1.25 rate x *N* regions (***2***) | Multiple-region write rate x *N* regions |
+|Price (***1***) | Not applicable | Provisioned RU/s x 1.25 rate | Provisioned RU/s x *N* regions | Provisioned RU/s x 1.25 rate x *N* regions (***2***) | Multiple-region write rate x *N* regions |

***1*** For serverless accounts, RUs are multiplied by a factor of 1.25.

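The price row that this hunk touches is a simple multiplier over provisioned RU/s. A short worked example of that arithmetic, illustrative only: the 1.25 availability-zone rate and the *N*-regions multipliers come from the table, while the RU/s figure and region count below are made up.

```python
# Worked example of the price multipliers in the table above (illustrative).
# These are billed RU/s, not currency: multiply by the per-RU/s rate for
# your regions to get a price.
provisioned_rus = 10_000   # example provisioned RU/s (made up)
regions = 3                # example number of regions (made up)

single_region_single_az = provisioned_rus                    # base rate
single_region_with_az = provisioned_rus * 1.25               # AZ uplift
multi_region_single_write = provisioned_rus * regions        # x N regions
multi_region_with_az = provisioned_rus * 1.25 * regions      # AZ uplift x N
# Multiple write regions are billed at the multiple-region write rate x N regions.

print(single_region_with_az, multi_region_single_write, multi_region_with_az)
```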
@@ -205,7 +205,7 @@ The following table summarizes the high-availability capabilities of various acc

* Review the expected [behavior of the Azure Cosmos DB SDKs](troubleshoot-sdk-availability.md) during events and which configurations affect it.

-* To ensure high write and read availability, configure your Azure Cosmos DB account to span at least two regions (or three, if you're using strong consistency). Remember that the best configuration to achieve high availability for a region outage is single write region with service-managed failover. To learn more, see [Tutorial: Set up Azure Cosmos DB global distribution using the API for NoSQL](nosql/tutorial-global-distribution.md).
+* To ensure high write and read availability, configure your Azure Cosmos DB account to span at least two regions (or three, if you're using strong consistency). Remember that the best configuration to achieve high availability for a region outage is a single write region with service-managed failover. To learn more, see [Tutorial: Set up Azure Cosmos DB global distribution using the API for NoSQL](nosql/tutorial-global-distribution.md).

* For multiple-region Azure Cosmos DB accounts that are configured with a single write region, [enable service-managed failover by using the Azure CLI or the Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable service-managed failover, whenever there's a regional disaster, Azure Cosmos DB will fail over your account without any user input.

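Besides the Azure CLI and the Azure portal that the last bullet links to, the same setting can be applied from code through the management plane. A minimal sketch using the `azure-identity` and `azure-mgmt-cosmosdb` packages, under the assumption that the model and method names below (which mirror the ARM `enableAutomaticFailover` property) match your SDK version; the subscription ID, resource group, and account name are placeholders:

```python
# Minimal sketch: enable service-managed (automatic) failover for an existing
# account through the Azure management SDK. Assumes azure-identity and
# azure-mgmt-cosmosdb; names mirror the ARM enableAutomaticFailover property
# and may differ slightly across SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import DatabaseAccountUpdateParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
ACCOUNT_NAME = "<cosmos-account-name>"  # placeholder

client = CosmosDBManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.database_accounts.begin_update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    DatabaseAccountUpdateParameters(enable_automatic_failover=True),
)
poller.result()  # wait for the account update to complete
```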