articles/storage/blobs/data-lake-storage-best-practices.md (1 addition, 1 deletion)
@@ -44,7 +44,7 @@ When architecting a system with Data Lake Storage Gen2 or any cloud service, you
### High availability and disaster recovery
- High availability (HA) and disaster recovery (DR) can sometimes be combined together, although each has a slightly different strategy, especially when it comes to data. Data Lake Storage Gen2 already handles 3x replication under the hood to guard against localized hardware failures. Additionally, other replication options, such as ZRS or GZRS (preview), improve HA, while GRS & RA-GRS improve DR. When building a plan for HA, in the event of a service interruption the workload needs access to the latest data as quickly as possible by switching over to a separately replicated instance locally or in a new region.
+ High availability (HA) and disaster recovery (DR) can sometimes be combined, although each has a slightly different strategy, especially when it comes to data. Data Lake Storage Gen2 already handles 3x replication under the hood to guard against localized hardware failures. Additionally, other replication options, such as ZRS or GZRS, improve HA, while GRS & RA-GRS improve DR. When building a plan for HA, in the event of a service interruption the workload needs access to the latest data as quickly as possible by switching over to a separately replicated instance locally or in a new region.
In a DR strategy, to prepare for the unlikely event of a catastrophic failure of a region, it is also important to have data replicated to a different region using GRS or RA-GRS replication. You must also consider your requirements for edge cases such as data corruption where you may want to create periodic snapshots to fall back to. Depending on the importance and size of the data, consider rolling delta snapshots of 1-, 6-, and 24-hour periods, according to risk tolerances.
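The rolling delta snapshots mentioned above can be expressed as a tiered retention rule. The following Python is an illustrative sketch only: the 1-, 6-, and 24-hour cadences come from the paragraph, but the per-tier keep-counts are assumptions, and this is not an Azure Storage feature.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: cadence -> how many snapshots to keep
# at that cadence. The keep-counts are illustrative assumptions.
POLICY = {
    timedelta(hours=1): 6,    # keep the 6 most recent hourly snapshots
    timedelta(hours=6): 4,    # keep the 4 most recent 6-hour snapshots
    timedelta(hours=24): 7,   # keep the 7 most recent daily snapshots
}

def snapshots_to_keep(taken, policy=POLICY):
    """Given snapshot timestamps, return the set to retain.

    A snapshot counts toward a tier if it is the newest one in that
    tier's time bucket, so coarser tiers reuse finer-grained snapshots.
    """
    taken = sorted(taken, reverse=True)  # newest first
    keep = set()
    for cadence, count in policy.items():
        buckets_seen = set()
        for ts in taken:
            bucket = int(ts.timestamp() // cadence.total_seconds())
            if bucket not in buckets_seen:
                buckets_seen.add(bucket)
                keep.add(ts)
            if len(buckets_seen) >= count:
                break
    return keep
```

Everything not selected by any tier is a candidate for pruning, which keeps snapshot storage bounded while preserving recent fine-grained and older coarse-grained restore points.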
articles/storage/blobs/simulate-primary-region-failure.md (13 additions, 13 deletions)
@@ -1,23 +1,23 @@
---
title: Tutorial - Simulate a failure in reading data from the primary region
titleSuffix: Azure Storage
- description: Simulate an error in reading data from the primary region when read-access geo-redundant storage (RA-GRS) is enabled for the storage account.
+ description: Simulate an error in reading data from the primary region when the storage account is configured for read-access geo-zone-redundant storage (RA-GZRS). After the error occurs, read data from the secondary region.
services: storage
author: tamram
ms.service: storage
ms.subservice: blobs
ms.topic: tutorial
- ms.date: 12/04/2019
+ ms.date: 04/16/2020
ms.author: tamram
ms.reviewer: artek
---
# Tutorial: Simulate a failure in reading data from the primary region
- This tutorial is part two of a series. In it, you learn about the benefits of [read-access geo-redundant storage](../common/storage-redundancy.md) (RA-GRS) by simulating a failure.
+ This tutorial is part two of a series. In it, you learn about the benefits of [read-access geo-zone-redundant storage](../common/storage-redundancy.md) (RA-GZRS) by simulating a failure.
- In order to simulate a failure, you can use either [Static Routing](#simulate-a-failure-with-an-invalid-static-route) or [Fiddler](#simulate-a-failure-with-fiddler). Both methods will allow you to simulate failure for requests to the primary endpoint of your [read-access geo-redundant](../common/storage-redundancy.md) (RA-GRS) storage account, causing the application read from the secondary endpoint instead.
+ To simulate a failure, you can use either [static routing](#simulate-a-failure-with-an-invalid-static-route) or [Fiddler](#simulate-a-failure-with-fiddler). Both methods allow you to simulate failure for requests to the primary endpoint of your [read-access geo-zone-redundant](../common/storage-redundancy.md) (RA-GZRS) storage account, leading the application to read from the secondary endpoint instead.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
@@ -38,7 +38,7 @@ To simulate a failure using Fiddler, download and [install Fiddler](https://www.
## Simulate a failure with an invalid static route
- You can create an invalid static route for all requests to the primary endpoint of your [read-access geo-redundant](../common/storage-redundancy.md) (RA-GRS) storage account. In this tutorial, the local host is used as the gateway for routing requests to the storage account. Using the local host as the gateway causes all requests to your storage account primary endpoint to loop back inside the host, which subsequently leads to failure. Follow the following steps to simulate a failure, and primary endpoint restoration with an invalid static route.
+ You can create an invalid static route for all requests to the primary endpoint of your [read-access geo-zone-redundant](../common/storage-redundancy.md) (RA-GZRS) storage account. In this tutorial, the local host is used as the gateway for routing requests to the storage account. Using the local host as the gateway causes all requests to your storage account primary endpoint to loop back inside the host, which subsequently leads to failure. Follow these steps to simulate a failure and restore the primary endpoint with an invalid static route.
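Both simulation methods depend on the application falling back to the read-only secondary endpoint, which by documented convention appends `-secondary` to the account name in the hostname. The following Python is a minimal sketch of that fallback, not the tutorial's sample code; the account name and the `fetch` callable are hypothetical stand-ins.

```python
from urllib.parse import urlsplit, urlunsplit

def secondary_url(primary_url: str) -> str:
    """Rewrite https://<account>.blob.core.windows.net/... to the
    read-only secondary endpoint <account>-secondary.blob.core.windows.net."""
    parts = urlsplit(primary_url)
    account, rest = parts.netloc.split(".", 1)
    return urlunsplit(parts._replace(netloc=f"{account}-secondary.{rest}"))

def read_blob(primary_url: str, fetch) -> bytes:
    """Try the primary endpoint; on failure (such as the simulated
    invalid static route), retry the same request against the secondary.

    `fetch` is a hypothetical callable (e.g. wrapping urllib.request)
    that returns the blob bytes or raises OSError on network failure.
    """
    try:
        return fetch(primary_url)
    except OSError:
        return fetch(secondary_url(primary_url))
```

In the tutorial itself the Azure Storage client library performs this retry; the sketch only shows the endpoint relationship the simulation exercises.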
### Start and pause the application
@@ -80,29 +80,29 @@ To simulate the primary endpoint becoming functional again, delete the invalid s
#### Linux
- ```
+ ```bash
route del <destination_ip> gw <gateway_ip>
```
#### Windows
- ```
+ ```console
route delete <destination_ip>
```
You can then resume the application or press the appropriate key to download the sample file again, this time confirming that it once again comes from primary storage.
## Simulate a failure with Fiddler
- To simulate failure with Fiddler, you inject a failed response for requests to the primary endpoint of your RA-GRS storage account.
+ To simulate failure with Fiddler, you inject a failed response for requests to the primary endpoint of your RA-GZRS storage account.
The following sections show how to simulate a failure and restore the primary endpoint with Fiddler.
### Launch Fiddler
Open Fiddler, select **Rules** and **Customize Rules**.
Use the instructions in the [previous tutorial][previous-tutorial] to launch the sample and download the test file, confirming that it comes from primary storage. Depending on your target platform, you can then manually pause the sample or wait at a prompt.
### Simulate failure
- While the application is paused, switch back to Fiddler and uncomment the custom rule you saved in the `OnBeforeResponse` function. Be sure to select **File** and **Save** to save your changes so the rule will take effect. This code looks for requests to the RA-GRS storage account and, if the path contains the name of the sample file, returns a response code of `503 - Service Unavailable`.
+ While the application is paused, switch back to Fiddler and uncomment the custom rule you saved in the `OnBeforeResponse` function. Be sure to select **File** and **Save** to save your changes so the rule will take effect. This code looks for requests to the RA-GZRS storage account and, if the path contains the name of the sample file, returns a response code of `503 - Service Unavailable`.
In the window with the running sample, resume the application or press the appropriate key to download the sample file and confirm that it comes from secondary storage. You can then pause the sample again or wait at the prompt.
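The custom rule itself is written in FiddlerScript (JScript.NET) inside `OnBeforeResponse`. As a language-neutral sketch of the same decision logic, the Python below returns `503` for sample-file requests to the primary host and passes everything else through. The account and file names are hypothetical placeholders, not values from the tutorial.

```python
# Hypothetical placeholders standing in for your storage account and
# the sample file name used by the tutorial's application.
PRIMARY_HOST = "myaccount.blob.core.windows.net"
SAMPLE_FILE = "HelloWorld"

def response_code(host: str, path: str, actual_code: int = 200) -> int:
    """Mimic the Fiddler rule: force 503 (Service Unavailable) for
    sample-file requests to the primary endpoint; otherwise return
    the response code unchanged."""
    if host == PRIMARY_HOST and SAMPLE_FILE in path:
        return 503  # forces the client to retry against the secondary
    return actual_code
```

Note that requests to the `-secondary` host do not match `PRIMARY_HOST`, which is why the retried read succeeds while the rule is active.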
@@ -148,9 +148,9 @@ In the window with the running sample, resume the application or press the appro
In part two of the series, you learned about simulating a failure to test read-access geo-redundant storage.
- To learn more about how RA-GRS storage works, as well as its associated risks, read the following article:
+ To learn more about how RA-GZRS storage works, as well as its associated risks, read the following article:
> [!div class="nextstepaction"]
- > [Designing HA apps with RA-GRS](../common/storage-designing-ha-apps-with-ragrs.md)
+ > [Designing HA apps with RA-GZRS](../common/geo-redundant-design.md)
articles/storage/blobs/storage-blob-reserved-capacity.md (1 addition, 1 deletion)
@@ -80,7 +80,7 @@ Follow these steps to purchase reserved capacity:
|**Subscription**| The subscription that's used to pay for the Azure Storage reservation. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
|**Region**| The region where the reservation is in effect. |
|**Access tier**| The access tier for which the reservation is in effect. Options include *Hot*, *Cool*, or *Archive*. For more information about access tiers, see [Azure Blob storage: hot, cool, and archive access tiers](storage-blob-storage-tiers.md). |
- |**Redundancy**| The redundancy option for the reservation. Options include *LRS*, *ZRS*, *GRS*, and *RA-GZRS*. For more information about redundancy options, see [Azure Storage redundancy](../common/storage-redundancy.md). |
+ |**Redundancy**| The redundancy option for the reservation. Options include *LRS*, *ZRS*, *GRS*, *GZRS*, *RA-GRS*, and *RA-GZRS*. For more information about redundancy options, see [Azure Storage redundancy](../common/storage-redundancy.md). |
|**Billing frequency**| Indicates how often the account is billed for the reservation. Options include *Monthly* or *Upfront*. |
|**Size**| The amount of storage capacity that the reservation covers. |
articles/storage/blobs/storage-blobs-introduction.md (1 addition, 1 deletion)
@@ -61,7 +61,7 @@ For more information about the different types of blobs, see [Understanding Bloc
A number of solutions exist for migrating existing data to Blob storage:
- - **AzCopy** is an easy-to-use command-line tool for Windows and Linux that copies data to and from Blob storage, across containers, or across storage accounts. For more information about AzCopy, see [Transfer data with the AzCopy v10 (Preview)](../common/storage-use-azcopy-v10.md).
+ - **AzCopy** is an easy-to-use command-line tool for Windows and Linux that copies data to and from Blob storage, across containers, or across storage accounts. For more information about AzCopy, see [Transfer data with AzCopy v10](../common/storage-use-azcopy-v10.md).
- The **Azure Storage Data Movement library** is a .NET library for moving data between Azure Storage services. The AzCopy utility is built with the Data Movement library. For more information, see the [reference documentation](/dotnet/api/microsoft.azure.storage.datamovement) for the Data Movement library.
- **Azure Data Factory** supports copying data to and from Blob storage by using the account key, a shared access signature, a service principal, or managed identities for Azure resources. For more information, see [Copy data to or from Azure Blob storage by using Azure Data Factory](../../data-factory/connector-azure-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
- **Blobfuse** is a virtual file system driver for Azure Blob storage. You can use blobfuse to access your existing block blob data in your Storage account through the Linux file system. For more information, see [How to mount Blob storage as a file system with blobfuse](storage-how-to-mount-container-linux.md).