
Commit f166cc9

Merge pull request #233370 from pauljewellmsft/pauljewell-v11-samples
Add new article for .NET v11 code examples
2 parents 7305431 + 46f34e8, commit f166cc9

19 files changed: +1274 −1219 lines

articles/storage/blobs/TOC.yml

Lines changed: 13 additions & 5 deletions

```diff
@@ -488,11 +488,7 @@ items:
 - name: Change redundancy configuration
   href: ../common/redundancy-migration.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Design highly available applications
-  items:
-  - name: .NET (v12 SDK)
-    href: ../common/geo-redundant-design.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
-  - name: .NET (v11 SDK)
-    href: ../common/geo-redundant-design-legacy.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
+  href: ../common/geo-redundant-design.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Check the Last Sync Time property
   href: ../common/last-sync-time-get.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Initiate account failover
@@ -1135,6 +1131,18 @@ items:
   href: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage
 - name: Azure Storage client library version 2.1
   href: https://github.com/Azure/azure-storage-python
+- name: Code samples using deprecated SDKs
+  items:
+  - name: .NET
+    items:
+    - name: Version 11.x samples
+      href: blob-v11-samples-dotnet.md
+    - name: Design highly available applications (.NET version 11.x)
+      href: ../common/geo-redundant-design-legacy.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
+  - name: JavaScript version 11.x samples
+    href: blob-v11-samples-javascript.md
+  - name: Python version 2.1 samples
+    href: blob-v2-samples-python.md
 - name: Compliance offerings
   href: ../common/storage-compliance-offerings.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json
 - name: Data Lake Storage Gen2
```

articles/storage/blobs/blob-v11-samples-dotnet.md

Lines changed: 924 additions & 0 deletions
Large diffs are not rendered by default.
articles/storage/blobs/blob-v11-samples-javascript.md

Lines changed: 93 additions & 0 deletions
---
title: Azure Blob Storage code samples using JavaScript version 11.x client libraries
titleSuffix: Azure Storage
description: View code samples that use the Azure Blob Storage client library for JavaScript version 11.x.
services: storage
author: pauljewellmsft
ms.service: storage
ms.subservice: blobs
ms.topic: how-to
ms.date: 04/03/2023
ms.author: pauljewell
---

# Azure Blob Storage code samples using JavaScript version 11.x client libraries

This article shows code samples that use version 11.x of the Azure Blob Storage client library for JavaScript.

[!INCLUDE [storage-v11-sdk-support-retirement](../../../includes/storage-v11-sdk-support-retirement.md)]

## Build a highly available app with Blob Storage

Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)

### Download the sample

[Download the sample project](https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs) and unzip the file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Node.js application.

```bash
git clone https://github.com/Azure-Samples/storage-node-v10-ha-ra-grs.git
```

### Configure the sample

To run this sample, you must add your storage account credentials to the `.env.example` file and then rename it to `.env`.

```
AZURE_STORAGE_ACCOUNT_NAME=<replace with your storage account name>
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<replace with your storage account access key>
```

You can find this information in the Azure portal by navigating to your storage account and selecting **Access keys** in the **Settings** section.

Install the required dependencies by opening a command prompt, navigating to the sample folder, then entering `npm install`.

### Run the console application

To run the sample, open a command prompt, navigate to the sample folder, then enter `node index.js`.

The sample creates a container in your Blob storage account, uploads **HelloWorld.png** into the container, then repeatedly checks whether the container and image have replicated to the secondary region. After replication, it prompts you to enter **D** or **Q** (followed by ENTER) to download or quit. Your output should look similar to the following example:

```
Created container successfully: newcontainer1550799840726
Uploaded blob: HelloWorld.png
Checking to see if container and blob have replicated to secondary region.
[0] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
[1] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
...
[31] Container has not replicated to secondary region yet: newcontainer1550799840726 : ContainerNotFound
[32] Container found, but blob has not replicated to secondary region yet.
...
[67] Container found, but blob has not replicated to secondary region yet.
[68] Blob has replicated to secondary region.
Ready for blob download. Enter (D) to download or (Q) to quit, followed by ENTER.
> D
Attempting to download blob...
Blob downloaded from primary endpoint.
> Q
Exiting...
Deleted container newcontainer1550799840726
```

### Understand the code sample

With the Node.js V10 SDK, callback handlers are unnecessary. Instead, the sample creates a pipeline configured with retry options and a secondary endpoint. This configuration allows the application to automatically switch to the secondary pipeline if it fails to reach your data through the primary pipeline.

```javascript
const accountName = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const storageAccessKey = process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY;
const sharedKeyCredential = new SharedKeyCredential(accountName, storageAccessKey);

const primaryAccountURL = `https://${accountName}.blob.core.windows.net`;
const secondaryAccountURL = `https://${accountName}-secondary.blob.core.windows.net`;

const pipeline = StorageURL.newPipeline(sharedKeyCredential, {
  retryOptions: {
    maxTries: 3,
    tryTimeoutInMs: 10000,
    retryDelayInMs: 500,
    maxRetryDelayInMs: 1000,
    secondaryHost: secondaryAccountURL
  }
});
```
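The `retryOptions` above encode a simple failover policy: retry the primary endpoint a bounded number of times, then fall back to the secondary host. A minimal, SDK-agnostic sketch of that policy (the `fetch_primary`/`fetch_secondary` callables are hypothetical stand-ins for blob reads against the two endpoints, not Azure APIs):

```python
# Sketch of the failover policy the pipeline's retryOptions encode:
# try the primary up to max_tries times, then fall back to the
# read-only secondary endpoint. Hypothetical callables, not the SDK.
def read_with_failover(fetch_primary, fetch_secondary, max_tries=3):
    last_error = None
    # Try the primary endpoint up to max_tries times.
    for _ in range(max_tries):
        try:
            return fetch_primary()
        except ConnectionError as err:
            last_error = err
    # Primary exhausted: fall back to the secondary endpoint.
    try:
        return fetch_secondary()
    except ConnectionError:
        raise last_error


def flaky_primary():
    raise ConnectionError("primary unreachable")


print(read_with_failover(flaky_primary, lambda: b"blob-bytes"))
```

The real pipeline also applies per-try timeouts and backoff delays between attempts; this sketch shows only the primary-then-secondary ordering.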
articles/storage/blobs/blob-v2-samples-python.md

Lines changed: 105 additions & 0 deletions
---
title: Azure Blob Storage code samples using Python version 2.1 client libraries
titleSuffix: Azure Storage
description: View code samples that use the Azure Blob Storage client library for Python version 2.1.
services: storage
author: pauljewellmsft
ms.service: storage
ms.subservice: blobs
ms.topic: how-to
ms.date: 04/03/2023
ms.author: pauljewell
---

# Azure Blob Storage code samples using Python version 2.1 client libraries

This article shows code samples that use version 2.1 of the Azure Blob Storage client library for Python.

[!INCLUDE [storage-v11-sdk-support-retirement](../../../includes/storage-v11-sdk-support-retirement.md)]

## Build a highly available app with Blob Storage

Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)

### Download the sample

[Download the sample project](https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip) and extract (unzip) the storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Python application.

```bash
git clone https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.git
```

### Configure the sample

In the application, you must provide your storage account credentials. You can store this information in environment variables on the local machine running the application. Follow one of the examples below, depending on your operating system, to create the environment variables.

In the Azure portal, navigate to your storage account. Select **Access keys** under **Settings** in your storage account. Paste the **Storage account name** and **Key** values into the following commands, replacing the \<youraccountname\> and \<youraccountkey\> placeholders. This command saves the environment variables to the local machine. In Windows, the environment variable isn't available until you reload the **Command Prompt** or shell you're using.

#### Linux

```bash
export accountname=<youraccountname>
export accountkey=<youraccountkey>
```

#### Windows

```powershell
setx accountname "<youraccountname>"
setx accountkey "<youraccountkey>"
```

### Run the console application

To run the application, open a terminal or command prompt, navigate to the directory containing **circuitbreaker.py**, then enter `python circuitbreaker.py`. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GZRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**, where **P** represents the primary endpoint and **S** represents the secondary endpoint.

![Screenshot of console app running.](media/storage-create-geo-redundant-storage/figure3.png)

In the sample code, the `run_circuit_breaker` method in the `circuitbreaker.py` file is used to download an image from the storage account using the [get_blob_to_path](/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice#get-blob-to-path-container-name--blob-name--file-path--open-mode--wb---snapshot-none--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--lease-id-none--if-modified-since-none--if-unmodified-since-none--if-match-none--if-none-match-none--timeout-none-) method.

The Storage object retry function is set to a linear retry policy. The retry function determines whether to retry a request, and specifies the number of seconds to wait before retrying the request. Set the **retry\_to\_secondary** value to `true` if the request should be retried against the secondary endpoint when the initial request to the primary fails. In the sample application, a custom retry policy is defined in the `retry_callback` function of the storage object.

Before the download, the Service object's [retry_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) and [response_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) functions are defined. These functions define event handlers that fire when a download completes successfully, or when a download fails and is retried.

### Understand the code sample

#### Retry event handler

The `retry_callback` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application is reached, the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) of the request is changed to `SECONDARY`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image, because the primary endpoint isn't retried indefinitely.

```python
def retry_callback(retry_context):
    global retry_count
    retry_count = retry_context.count
    sys.stdout.write(
        "\nRetrying event because of failure reading the primary. RetryCount= {0}".format(retry_count))
    sys.stdout.flush()

    # Check if we have more than n retries, in which case switch to secondary
    if retry_count >= retry_threshold:

        # Check to see if we can fail over to secondary.
        if blob_client.location_mode != LocationMode.SECONDARY:
            blob_client.location_mode = LocationMode.SECONDARY
            retry_count = 0
        else:
            raise Exception("Both primary and secondary are unreachable. "
                            "Check your application's network connection.")
```

#### Request completed event handler

The `response_callback` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) back to `PRIMARY` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.

```python
def response_callback(response):
    global secondary_read_count
    if blob_client.location_mode == LocationMode.SECONDARY:

        # You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
        # then switch back to the primary and see if it is available now.
        secondary_read_count += 1
        if secondary_read_count >= secondary_threshold:
            blob_client.location_mode = LocationMode.PRIMARY
            secondary_read_count = 0
```
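Taken together, the two callbacks form a small circuit-breaker state machine: fail over to the secondary after `retry_threshold` consecutive failures, and probe the primary again after `secondary_threshold` successful secondary reads. A self-contained sketch of that state machine (names mirror the sample's variables, but this is plain Python, not the Storage SDK):

```python
class EndpointBreaker:
    """Circuit breaker switching between primary and secondary endpoints,
    mirroring the retry_callback/response_callback logic in the sample."""

    def __init__(self, retry_threshold=5, secondary_threshold=20):
        self.location_mode = "PRIMARY"
        self.retry_threshold = retry_threshold
        self.secondary_threshold = secondary_threshold
        self.retry_count = 0
        self.secondary_read_count = 0

    def on_retry(self):
        # Called when a read fails: after retry_threshold failures,
        # switch reads to the secondary endpoint; if the secondary is
        # already in use, both endpoints are unreachable.
        self.retry_count += 1
        if self.retry_count >= self.retry_threshold:
            if self.location_mode != "SECONDARY":
                self.location_mode = "SECONDARY"
                self.retry_count = 0
            else:
                raise RuntimeError("Both primary and secondary are unreachable.")

    def on_success(self):
        # Called when a read succeeds: after secondary_threshold reads
        # from the secondary, switch back and probe the primary again.
        if self.location_mode == "SECONDARY":
            self.secondary_read_count += 1
            if self.secondary_read_count >= self.secondary_threshold:
                self.location_mode = "PRIMARY"
                self.secondary_read_count = 0


breaker = EndpointBreaker(retry_threshold=2, secondary_threshold=3)
breaker.on_retry()
breaker.on_retry()            # second failure trips the breaker
print(breaker.location_mode)  # prints "SECONDARY"
```

The thresholds here (2 and 3) are chosen for the demo; the sample application uses larger values so that transient failures don't trip the breaker.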

articles/storage/blobs/concurrency-manage.md

Lines changed: 4 additions & 112 deletions

```diff
@@ -45,63 +45,8 @@ The outline of this process is as follows:
 
 The following code examples show how to construct an **If-Match** condition on the write request that checks the ETag value for a blob. Azure Storage evaluates whether the blob's current ETag is the same as the ETag provided on the request and performs the write operation only if the two ETag values match. If another process has updated the blob in the interim, then Azure Storage returns an HTTP 412 (Precondition Failed) status message.
 
-# [.NET v12 SDK](#tab/dotnet)
-
 :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Concurrency.cs" id="Snippet_DemonstrateOptimisticConcurrencyBlob":::
 
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-public void DemonstrateOptimisticConcurrencyBlob(string containerName, string blobName)
-{
-    Console.WriteLine("Demonstrate optimistic concurrency");
-
-    // Parse connection string and create container.
-    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
-    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
-    container.CreateIfNotExists();
-
-    // Create test blob. The default strategy is last writer wins, so
-    // write operation will overwrite existing blob if present.
-    CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
-    blockBlob.UploadText("Hello World!");
-
-    // Retrieve the ETag from the newly created blob.
-    string originalETag = blockBlob.Properties.ETag;
-    Console.WriteLine("Blob added. Original ETag = {0}", originalETag);
-
-    // This code simulates an update by another client.
-    string helloText = "Blob updated by another client.";
-    // No ETag was provided, so original blob is overwritten and ETag updated.
-    blockBlob.UploadText(helloText);
-    Console.WriteLine("Blob updated. Updated ETag = {0}", blockBlob.Properties.ETag);
-
-    // Now try to update the blob using the original ETag value.
-    try
-    {
-        Console.WriteLine(@"Attempt to update blob using original ETag
-                            to generate if-match access condition");
-        blockBlob.UploadText(helloText, accessCondition: AccessCondition.GenerateIfMatchCondition(originalETag));
-    }
-    catch (StorageException ex)
-    {
-        if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
-        {
-            Console.WriteLine(@"Precondition failure as expected.
-                                Blob's ETag does not match.");
-        }
-        else
-        {
-            throw;
-        }
-    }
-    Console.WriteLine();
-}
-```
-
----
-
 Azure Storage also supports other conditional headers, including **If-Modified-Since**, **If-Unmodified-Since**, and **If-None-Match**. For more information, see [Specifying Conditional Headers for Blob Service Operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations).
 
 ## Pessimistic concurrency for blobs
@@ -112,65 +57,8 @@ Leases enable different synchronization strategies to be supported, including ex
 
 The following code examples show how to acquire an exclusive lease on a blob, update the content of the blob by providing the lease ID, and then release the lease. If the lease is active and the lease ID isn't provided on a write request, then the write operation fails with error code 412 (Precondition Failed).
 
-# [.NET v12 SDK](#tab/dotnet)
-
 :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Concurrency.cs" id="Snippet_DemonstratePessimisticConcurrencyBlob":::
 
-# [.NET v11 SDK](#tab/dotnetv11)
-
-```csharp
-public void DemonstratePessimisticConcurrencyBlob(string containerName, string blobName)
-{
-    Console.WriteLine("Demonstrate pessimistic concurrency");
-
-    // Parse connection string and create container.
-    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
-    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
-    container.CreateIfNotExists();
-
-    CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
-    blockBlob.UploadText("Hello World!");
-    Console.WriteLine("Blob added.");
-
-    // Acquire lease for 15 seconds.
-    string lease = blockBlob.AcquireLease(TimeSpan.FromSeconds(15), null);
-    Console.WriteLine("Blob lease acquired. Lease = {0}", lease);
-
-    // Update blob using lease. This operation should succeed.
-    const string helloText = "Blob updated";
-    var accessCondition = AccessCondition.GenerateLeaseCondition(lease);
-    blockBlob.UploadText(helloText, accessCondition: accessCondition);
-    Console.WriteLine("Blob updated using an exclusive lease");
-
-    // Simulate another client attempting to update the blob without providing the lease.
-    try
-    {
-        // Operation will fail as no valid lease was provided.
-        Console.WriteLine("Now try to update blob without valid lease.");
-        blockBlob.UploadText("Update operation will fail without lease.");
-    }
-    catch (StorageException ex)
-    {
-        if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
-        {
-            Console.WriteLine(@"Precondition failure error as expected.
-                                Blob lease not provided.");
-        }
-        else
-        {
-            throw;
-        }
-    }
-
-    // Release lease proactively.
-    blockBlob.ReleaseLease(accessCondition);
-    Console.WriteLine();
-}
-```
-
----
-
 ## Pessimistic concurrency for containers
 
 Leases on containers enable the same synchronization strategies that are supported for blobs, including exclusive write/shared read, exclusive write/exclusive read, and shared write/exclusive read. For containers, however, the exclusive lock is enforced only on delete operations. To delete a container with an active lease, a client must include the active lease ID with the delete request. All other container operations succeed on a leased container without the lease ID.
@@ -180,3 +68,7 @@ Leases on containers enable the same synchronization strategies that are support
 - [Specifying conditional headers for Blob service operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations)
 - [Lease Container](/rest/api/storageservices/lease-container)
 - [Lease Blob](/rest/api/storageservices/lease-blob)
+
+## Resources
+
+For related code samples using deprecated .NET version 11.x SDKs, see [Code samples using .NET version 11.x](blob-v11-samples-dotnet.md#optimistic-concurrency-for-blobs).
```
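The **If-Match** pattern shown in the removed v11 sample generalizes beyond the Storage SDK: a write carries the ETag it last read, and the server rejects the write with HTTP 412 if the stored ETag has changed in the meantime. A minimal in-memory sketch of that compare-and-swap rule (a hypothetical `BlobStore`, not an Azure API):

```python
import uuid


class PreconditionFailed(Exception):
    """Stands in for an HTTP 412 (Precondition Failed) response."""


class BlobStore:
    # Hypothetical in-memory store demonstrating ETag-based
    # optimistic concurrency; each successful write mints a new ETag.
    def __init__(self):
        self._data = {}  # name -> (content, etag)

    def upload(self, name, content, if_match=None):
        if if_match is not None:
            _, current = self._data[name]
            if current != if_match:
                # The blob changed since the caller read it: reject.
                raise PreconditionFailed(f"ETag mismatch for {name}")
        etag = uuid.uuid4().hex
        self._data[name] = (content, etag)
        return etag


store = BlobStore()
etag1 = store.upload("hello.txt", "Hello World!")
# Another client updates the blob; the stored ETag changes.
store.upload("hello.txt", "Updated by another client.")
try:
    # A conditional write with the stale ETag fails, as in the sample.
    store.upload("hello.txt", "my update", if_match=etag1)
except PreconditionFailed:
    print("Precondition failure as expected.")
```

An unconditional write (no `if_match`) is last-writer-wins, which is exactly the default behavior the article warns about.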
