Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)
### Download the sample
[Download the sample project](https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs) and unzip the file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Node.js application.
To run this sample, you must add your storage account credentials to the `.env.example` file and then rename it to `.env`.
```
AZURE_STORAGE_ACCOUNT_NAME=<replace with your storage account name>
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<replace with your storage account access key>
```
You can find this information in the Azure portal by navigating to your storage account and selecting **Access keys** in the **Settings** section.
Install the required dependencies by opening a command prompt, navigating to the sample folder, then entering `npm install`.
### Run the console application
To run the sample, open a command prompt, navigate to the sample folder, then enter `node index.js`.
The sample creates a container in your Blob storage account, uploads **HelloWorld.png** into the container, then repeatedly checks whether the container and image have replicated to the secondary region. After replication, it prompts you to enter **D** or **Q** (followed by ENTER) to download or quit. Your output should look similar to the following example:
```
Created container successfully: newcontainer1550799840726
Uploaded blob: HelloWorld.png
Checking to see if container and blob have replicated to secondary region.
[0] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
[1] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
...
[31] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
[32] Container found, but blob has not replicated to secondary region yet.
...
[67] Container found, but blob has not replicated to secondary region yet.
[68] Blob has replicated to secondary region.
Ready for blob download. Enter (D) to download or (Q) to quit, followed by ENTER.
> D
Attempting to download blob...
Blob downloaded from primary endpoint.
> Q
Exiting...
Deleted container newcontainer1550799840726
```
### Understand the code sample
With the Node.js V10 SDK, callback handlers are unnecessary. Instead, the sample creates a pipeline configured with retry options and a secondary endpoint. This configuration allows the application to automatically switch to the secondary pipeline if it fails to reach your data through the primary pipeline.
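The failover behavior that the pipeline provides can be sketched independently of the SDK: try the primary endpoint a fixed number of times, then fall back to the read-only secondary. The following is a minimal Python sketch of that logic, not the SDK's implementation; the endpoint names and the `fetch` callback are illustrative assumptions.

```python
# Hypothetical sketch of the failover behavior the pipeline provides:
# retry the primary endpoint, then fall back to the secondary endpoint.
def download_with_failover(fetch, primary, secondary, max_tries=3):
    """Try fetch(endpoint) against the primary, then the secondary."""
    last_error = None
    for endpoint in [primary] * max_tries + [secondary]:
        try:
            return endpoint, fetch(endpoint)
        except ConnectionError as err:
            last_error = err
    raise last_error

# Simulated fetch: the primary is unreachable, the secondary serves the blob.
def fake_fetch(endpoint):
    if "secondary" in endpoint:
        return b"HelloWorld.png bytes"
    raise ConnectionError("primary unreachable")

endpoint, data = download_with_failover(
    fake_fetch,
    "account.blob.core.windows.net",
    "account-secondary.blob.core.windows.net")
```

In this simulation, the primary fails `max_tries` times before the request is served from the secondary endpoint.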
Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)
### Download the sample
[Download the sample project](https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip) and extract (unzip) the storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Python application.
In the application, you must provide your storage account credentials. You can store this information in environment variables on the local machine running the application. Follow one of the examples below, depending on your operating system, to create the environment variables.
In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Paste the **Storage account name** and **Key** values into the following commands, replacing the \<youraccountname\> and \<youraccountkey\> placeholders. These commands save the environment variables to the local machine. On Windows, the environment variables aren't available until you reload the **Command Prompt** or shell you're using.
#### Linux
```bash
export accountname=<youraccountname>
export accountkey=<youraccountkey>
```
#### Windows
```powershell
setx accountname "<youraccountname>"
setx accountkey "<youraccountkey>"
```
### Run the console application
To run the application on a terminal or command prompt, navigate to the folder that contains **circuitbreaker.py**, then enter `python circuitbreaker.py`. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GZRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**, where **P** indicates the primary endpoint and **S** indicates the secondary endpoint.
In the sample code, the `run_circuit_breaker` method in the `circuitbreaker.py` file is used to download an image from the storage account using the [get_blob_to_path](/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice#get-blob-to-path-container-name--blob-name--file-path--open-mode--wb---snapshot-none--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--lease-id-none--if-modified-since-none--if-unmodified-since-none--if-match-none--if-none-match-none--timeout-none-) method.
The Storage object's retry function is set to a linear retry policy. The retry function determines whether to retry a request, and specifies the number of seconds to wait before retrying. Set the **retry\_to\_secondary** value to `True` if a request should be retried against the secondary endpoint when the initial request to the primary fails. In the sample application, a custom retry policy is defined in the `retry_callback` function of the storage object.
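A linear retry policy waits a constant interval between attempts, in contrast to an exponential backoff whose wait grows with each attempt. The decision logic can be sketched as follows; the function name and parameters are illustrative, not the SDK's API.

```python
# Illustrative sketch of a linear retry decision, not the SDK's implementation.
def should_retry(attempt, max_attempts=5, backoff_seconds=2):
    """Return (retry?, seconds to wait) for a linear retry policy."""
    if attempt >= max_attempts:
        return False, 0
    # Linear policy: the wait is constant regardless of the attempt number.
    return True, backoff_seconds

decisions = [should_retry(n) for n in range(6)]
```

Every retriable attempt waits the same two seconds; once `max_attempts` is reached, the policy stops retrying.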
Before the download begins, the Service object's [retry_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) and [response_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) functions are defined. These event handlers fire when a download completes successfully, or when a download fails and is retried.
### Understand the code sample
#### Retry event handler
The `retry_callback` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application is reached, the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) of the request is changed to `SECONDARY`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image because the primary endpoint isn't retried indefinitely.
```python
def retry_callback(retry_context):
    global retry_count
    retry_count = retry_context.count
    sys.stdout.write(
        "\nRetrying event because of failure reading the primary. RetryCount= {0}".format(retry_count))
    sys.stdout.flush()

    # Check if we have more than n-retries in which case switch to secondary
    if retry_count >= retry_threshold:

        # Check to see if we can fail over to secondary.
        if blob_client.location_mode != LocationMode.SECONDARY:
            blob_client.location_mode = LocationMode.SECONDARY
            retry_count = 0
        else:
            raise Exception("Both primary and secondary are unreachable. "
                            "Check your application's network connection.")
```
#### Request completed event handler
The `response_callback` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) back to `PRIMARY` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.
```python
def response_callback(response):
    global secondary_read_count
    if blob_client.location_mode == LocationMode.SECONDARY:

        # You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
        # then switch back to the primary and see if it is available now.
        secondary_read_count += 1
        if secondary_read_count >= secondary_threshold:
            blob_client.location_mode = LocationMode.PRIMARY
            secondary_read_count = 0
```
Related article: [Managing concurrency in Blob storage](concurrency-manage.md)
To detect a conflicting update, construct an **If-Match** condition on the write request that checks the ETag value for the blob. Azure Storage evaluates whether the blob's current ETag is the same as the ETag provided on the request, and performs the write operation only if the two ETag values match. If another process has updated the blob in the interim, Azure Storage returns an HTTP 412 (Precondition Failed) status message.
Azure Storage also supports other conditional headers, including **If-Modified-Since**, **If-Unmodified-Since**, and **If-None-Match**. For more information, see [Specifying Conditional Headers for Blob Service Operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations).
## Pessimistic concurrency for blobs
To update a blob under pessimistic concurrency, a client acquires an exclusive lease on the blob, updates the content of the blob by providing the lease ID on the write request, and then releases the lease. If the lease is active and the lease ID isn't provided on a write request, the write operation fails with error code 412 (Precondition Failed).
## Pessimistic concurrency for containers
Leases on containers enable the same synchronization strategies that are supported for blobs, including exclusive write/shared read, exclusive write/exclusive read, and shared write/exclusive read. For containers, however, the exclusive lock is enforced only on delete operations. To delete a container with an active lease, a client must include the active lease ID with the delete request. All other container operations succeed on a leased container without the lease ID.
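The delete-only enforcement for container leases can be sketched the same way; the `container` dict, lease ID value, and function names below are illustrative assumptions, not Azure APIs.

```python
# Hypothetical sketch: a container lease blocks only delete operations;
# other container operations succeed without presenting the lease ID.
container = {"lease_id": "lease-123", "blobs": {"a.txt": b"data"}}

def delete_container(lease_id=None):
    """Deletes must present the active lease ID (else HTTP 412)."""
    if container["lease_id"] is not None and lease_id != container["lease_id"]:
        return 412  # active lease: delete requires the lease ID
    container["blobs"].clear()
    return 202

def list_blobs():
    # Read operations aren't gated by the container lease.
    return sorted(container["blobs"])

names = list_blobs()                     # succeeds without the lease
blocked = delete_container()             # fails: no lease ID supplied
deleted = delete_container("lease-123")  # succeeds with the lease ID
```

Listing succeeds with no lease ID, while delete is rejected until the active lease ID is supplied.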
- [Specifying conditional headers for Blob service operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations)
For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#optimistic-concurrency-for-blobs).