---
title: Azure Blob Storage code samples using Python version 2.1 client libraries
titleSuffix: Azure Storage
description: View code samples that use the Azure Blob Storage client library for Python version 2.1.
services: storage
author: pauljewellmsft
ms.service: storage
ms.subservice: blobs
ms.topic: how-to
ms.date: 04/03/2023
ms.author: pauljewell
---

# Azure Blob Storage code samples using Python version 2.1 client libraries

This article shows code samples that use version 2.1 of the Azure Blob Storage client library for Python.

[!INCLUDE [storage-v11-sdk-support-retirement](../../../includes/storage-v11-sdk-support-retirement.md)]

## Build a highly available app with Blob Storage

Related article: [Tutorial: Build a highly available application with Blob storage](storage-create-geo-redundant-storage.md)

### Download the sample

[Download the sample project](https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs/archive/master.zip) and extract (unzip) the storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.zip file. You can also use [git](https://git-scm.com/) to download a copy of the application to your development environment. The sample project contains a basic Python application.

```bash
git clone https://github.com/Azure-Samples/storage-python-circuit-breaker-pattern-ha-apps-using-ra-grs.git
```

### Configure the sample

In the application, you must provide your storage account credentials. You can store this information in environment variables on the local machine that runs the application. Follow one of the examples below, depending on your operating system, to create the environment variables.

In the Azure portal, navigate to your storage account. Under **Settings**, select **Access keys**. Paste the **Storage account name** and **Key** values into the following commands, replacing the \<youraccountname\> and \<youraccountkey\> placeholders. These commands save the environment variables to the local machine. On Windows, the environment variables aren't available until you reload the **Command Prompt** or shell you're using.

#### Linux

```bash
export accountname=<youraccountname>
export accountkey=<youraccountkey>
```

#### Windows

```powershell
setx accountname "<youraccountname>"
setx accountkey "<youraccountkey>"
```

### Run the console application

To run the application in a terminal or at a command prompt, go to the directory that contains **circuitbreaker.py**, then enter `python circuitbreaker.py`. The application uploads the **HelloWorld.png** image from the solution to the storage account. The application checks to ensure the image has replicated to the secondary RA-GRS endpoint. It then begins downloading the image up to 999 times. Each read is represented by a **P** or an **S**, where **P** represents the primary endpoint and **S** represents the secondary endpoint.

In the sample code, the `run_circuit_breaker` method in the **circuitbreaker.py** file is used to download an image from the storage account using the [get_blob_to_path](/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice#get-blob-to-path-container-name--blob-name--file-path--open-mode--wb---snapshot-none--start-range-none--end-range-none--validate-content-false--progress-callback-none--max-connections-2--lease-id-none--if-modified-since-none--if-unmodified-since-none--if-match-none--if-none-match-none--timeout-none-) method.

The Storage object retry function is set to a linear retry policy. The retry function determines whether to retry a request, and specifies the number of seconds to wait before retrying the request. Set the **retry\_to\_secondary** value to `True` if the request should be retried against the secondary endpoint when the initial request to the primary endpoint fails. In the sample application, a custom retry policy is defined in the `retry_callback` function of the storage object.

Before the download, the Service object [retry_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) and [response_callback](/python/api/azure-storage-common/azure.storage.common.storageclient.storageclient) functions are defined. These functions define event handlers that fire when a download completes successfully or when a download fails and is retried.
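
To illustrate how a linear retry policy behaves, the function below is a simplified standalone model, not the library's implementation: every retry waits the same fixed number of seconds until the attempt count exceeds the maximum.

```python
def linear_retry_interval(attempt, max_attempts=3, backoff=5):
    """Model of a linear retry policy.

    Returns the wait time in seconds for the given attempt (1-based),
    or None when the attempt count has exceeded max_attempts and the
    request should not be retried again.
    """
    if attempt > max_attempts:
        return None
    # Linear policy: the backoff is constant, unlike an exponential
    # policy where it grows with each attempt.
    return backoff
```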

### Understand the code sample

#### Retry event handler

The `retry_callback` event handler is called when the download of the image fails and is set to retry. If the maximum number of retries defined in the application is reached, the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) of the request is changed to `SECONDARY`. This setting forces the application to attempt to download the image from the secondary endpoint. This configuration reduces the time taken to request the image, because the primary endpoint isn't retried indefinitely.

```python
def retry_callback(retry_context):
    global retry_count
    retry_count = retry_context.count
    sys.stdout.write(
        "\nRetrying event because of failure reading the primary. RetryCount= {0}".format(retry_count))
    sys.stdout.flush()

    # Check if we have more than n retries, in which case switch to secondary
    if retry_count >= retry_threshold:

        # Check to see if we can fail over to secondary.
        if blob_client.location_mode != LocationMode.SECONDARY:
            blob_client.location_mode = LocationMode.SECONDARY
            retry_count = 0
        else:
            raise Exception("Both primary and secondary are unreachable. "
                            "Check your application's network connection.")
```

#### Request completed event handler

The `response_callback` event handler is called when the download of the image is successful. If the application is using the secondary endpoint, the application continues to use this endpoint up to 20 times. After 20 times, the application sets the [LocationMode](/python/api/azure-storage-common/azure.storage.common.models.locationmode) back to `PRIMARY` and retries the primary endpoint. If a request is successful, the application continues to read from the primary endpoint.

```python
def response_callback(response):
    global secondary_read_count
    if blob_client.location_mode == LocationMode.SECONDARY:

        # You're reading the secondary. Let it read the secondary [secondaryThreshold] times,
        # then switch back to the primary and see if it is available now.
        secondary_read_count += 1
        if secondary_read_count >= secondary_threshold:
            blob_client.location_mode = LocationMode.PRIMARY
            secondary_read_count = 0
```
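
Taken together, the two handlers form a small circuit-breaker state machine. The sketch below is a standalone model with hypothetical names, not code from the sample: it reproduces the failover logic so it can be followed without a storage account. One simplification: it increments its own failure counter, where the sample reads the count from the retry context.

```python
class CircuitBreaker:
    """Simplified model of the sample's failover logic: fail over to the
    secondary endpoint after `retry_threshold` consecutive failures, and
    probe the primary again after `secondary_threshold` secondary reads."""

    def __init__(self, retry_threshold=5, secondary_threshold=20):
        self.mode = "PRIMARY"
        self.retry_threshold = retry_threshold
        self.secondary_threshold = secondary_threshold
        self.retry_count = 0
        self.secondary_read_count = 0

    def on_retry(self):
        # Mirrors retry_callback: count failures, then switch endpoints.
        self.retry_count += 1
        if self.retry_count >= self.retry_threshold:
            if self.mode != "SECONDARY":
                self.mode = "SECONDARY"
                self.retry_count = 0
            else:
                raise Exception("Both primary and secondary are unreachable.")

    def on_success(self):
        # Mirrors response_callback: after enough secondary reads,
        # switch back to the primary to see if it has recovered.
        if self.mode == "SECONDARY":
            self.secondary_read_count += 1
            if self.secondary_read_count >= self.secondary_threshold:
                self.mode = "PRIMARY"
                self.secondary_read_count = 0
```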