articles/storage/blobs/blob-powershell.md (55 additions, 50 deletions)
@@ -6,7 +6,7 @@ author: StevenMatthew
ms.author: shaas
ms.service: storage
ms.topic: how-to
- ms.date: 01/03/2022
+ ms.date: 05/02/2023
ms.devlang: powershell
ms.custom: devx-track-azurepowershell
---
@@ -34,7 +34,7 @@ Connect-AzAccount

After the connection has been established, create the Azure context. Authenticating with Azure AD automatically creates an Azure context for your default subscription. In some cases, you may need to access resources in a different subscription after authenticating. You can change the subscription associated with your current Azure session by modifying the active session context.

- To use your default subscription, create the context by calling the `New-AzStorageContext` cmdlet. Include the `-UseConnectedAccount` parameter so that data operations will be performed using your Azure AD credentials.
+ To use your default subscription, create the context by calling the `New-AzStorageContext` cmdlet. Include the `-UseConnectedAccount` parameter so that data operations are performed using your Azure AD credentials.

```azurepowershell
#Create a context object using Azure AD credentials
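#Sketch of how this block likely continues; the hunk truncates it here, and the account name is a placeholder
$ctx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -UseConnectedAccount
```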
@@ -45,18 +45,18 @@ To change subscriptions, retrieve the context object with the [Get-AzSubscriptio

### Create a container

- All blob data is stored within containers, so you'll need at least one container resource before you can upload data. If needed, use the following example to create a storage container. For more information, see [Managing blob containers using PowerShell](blob-containers-powershell.md).
+ All blob data is stored within containers, so you need at least one container resource before you can upload data. If needed, use the following example to create a storage container. For more information, see [Managing blob containers using PowerShell](blob-containers-powershell.md).
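The container-creation snippet itself falls outside the lines shown in this hunk. A minimal sketch, assuming placeholder account and container names and the `New-AzStorageContainer` cmdlet with an Azure AD-based context:

```azurepowershell
#Sketch: create a container with an Azure AD-based context (placeholder names)
$ctx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -UseConnectedAccount
New-AzStorageContainer -Name "demo-container" -Context $ctx
```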
- When you use the following examples, you'll need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+ When you use the following examples, you need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).

## Upload a blob

- To upload a file to a block blob, pass the required parameter values to the `Set-AzStorageBlobContent` cmdlet. Supply the path and file name with the `-File` parameter, and the name of the container with the `-Container` parameter. You'll also need to provide a reference to the context object with the `-Context` parameter.
+ To upload a file to a block blob, pass the required parameter values to the `Set-AzStorageBlobContent` cmdlet. Supply the path and file name with the `-File` parameter, and the name of the container with the `-Container` parameter. You also need to provide a reference to the context object with the `-Context` parameter.

This command creates the blob if it doesn't exist, or prompts for overwrite confirmation if it exists. You can overwrite the file without confirmation if you pass the `-Force` parameter to the cmdlet.
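The upload sample isn't visible in this hunk. A minimal sketch, assuming the `$ctx` context object from the earlier sketch and placeholder file and container names:

```azurepowershell
#Sketch: upload a local file to a block blob; -Force skips the overwrite prompt
Set-AzStorageBlobContent -File "C:\temp\demo-file.txt" `
                         -Container "demo-container" `
                         -Context $ctx `
                         -Force
```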
@@ -182,7 +182,7 @@ Processed 5257 blobs in demo-container.

Depending on your use case, the `Get-AzStorageBlobContent` cmdlet can be used to download either single or multiple blobs. As with most operations, both approaches require a context object.

- To download a single named blob, you can call the cmdlet directly and supply values for the `-Blob` and `-Container` parameters. The blob will be downloaded to the working PowerShell directory by default, but an alternate location can be specified. To change the target location, a valid, existing path must be passed with the `-Destination` parameter. Because the operation can't create a destination, it will fail with an error if your specified path doesn't exist.
+ To download a single named blob, you can call the cmdlet directly and supply values for the `-Blob` and `-Container` parameters. The blob is downloaded to the working PowerShell directory by default, but an alternate location can be specified. To change the target location, a valid, existing path must be passed with the `-Destination` parameter. Because the operation can't create a destination, it fails with an error if your specified path doesn't exist.
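As a hedged sketch of the single-blob case described above, with placeholder names and a destination folder that must already exist:

```azurepowershell
#Sketch: download one named blob to an existing local folder
Get-AzStorageBlobContent -Blob "demo-file.txt" `
                         -Container "demo-container" `
                         -Destination "C:\temp\downloads\" `
                         -Context $ctx
```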

Multiple blobs can be downloaded by combining the `Get-AzStorageBlob` cmdlet and the PowerShell pipeline operator. First, create a list of blobs with the `Get-AzStorageBlob` cmdlet. Next, use the pipeline operator and the `Get-AzStorageBlobContent` cmdlet to retrieve the blobs from the container.
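A minimal sketch of the pipeline approach, assuming a wildcard blob name and the same placeholder values:

```azurepowershell
#Sketch: list blobs by prefix, then pipe them to the download cmdlet
Get-AzStorageBlob -Blob "log*" -Container "demo-container" -Context $ctx |
    Get-AzStorageBlobContent -Destination "C:\temp\downloads\"
```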
- The result displays a list of the blob's properties as shown below.
+ The result displays a list of the blob's properties as shown in the following example.

```Result
LastModified : 11/16/2021 3:42:07 PM +00:00
@@ -282,7 +282,7 @@ HasLegalHold : False

### Read and write blob metadata

- Blob metadata is an optional set of name/value pairs associated with a blob. As shown in the previous example, there's no metadata associated with a blob initially, though it can be added when necessary. To update blob metadata, you'll use the `BlobClient.UpdateMetadata` method. This method only accepts key-value pairs stored in a generic `IDictionary` object. For more information, see the [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) class definition.
+ Blob metadata is an optional set of name/value pairs associated with a blob. As shown in the previous example, there's no metadata associated with a blob initially, though it can be added when necessary. To update blob metadata, use the `BlobClient.UpdateMetadata` method. This method only accepts key-value pairs stored in a generic `IDictionary` object. For more information, see the [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) class definition.

The example below first updates and then commits a blob's metadata, and then retrieves it. The sample blob is flushed from memory to ensure the metadata isn't being read from the in-memory object.
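The metadata sample itself isn't shown in this hunk. The following is a rough sketch of the pattern the paragraph describes; the `SetMetadata` call and the placeholder names are assumptions based on the `Azure.Storage.Blobs` client surface rather than code taken from the article:

```azurepowershell
#Sketch: build an IDictionary of name/value pairs and commit it through the blob's client
$blob = Get-AzStorageBlob -Blob "demo-file.txt" -Container "demo-container" -Context $ctx

$metadata = New-Object 'System.Collections.Generic.Dictionary[string,string]'
$metadata.Add("DocType", "textExamples")
$metadata.Add("Category", "Reference")

#Assumed setter on the underlying BlobClient object
$blob.BlobClient.SetMetadata($metadata, $null)
```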
- The result returns the blob's newly updated metadata as shown below.
+ The result returns the blob's newly updated metadata as shown in the following example.

```Result
Key Value
@@ -357,7 +357,7 @@ You can use the `-Force` parameter to overwrite an existing blob with the same n

The resulting destination blob is a writeable blob and not a snapshot.

- The source blob for a copy operation may be a block blob, an append blob, a page blob, or a snapshot. If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob will be overwritten.
+ The source blob for a copy operation may be a block blob, an append blob, a page blob, or a snapshot. If the destination blob already exists, it must be of the same blob type as the source blob. An existing destination blob is overwritten.

The destination blob can't be modified while a copy operation is in progress. A destination blob can only have one outstanding copy operation. In other words, a blob can't be the destination for multiple pending copy operations.
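The copy sample and the cmdlet the article uses aren't visible in this hunk. One way to express the operation, sketched with `Start-AzStorageBlobCopy` and placeholder names:

```azurepowershell
#Sketch: copy a blob within the same storage account; -Force overwrites an existing destination blob
Start-AzStorageBlobCopy -SrcBlob "demo-file.txt" `
                        -SrcContainer "demo-container" `
                        -DestBlob "demo-file.txt" `
                        -DestContainer "archive" `
                        -Context $ctx `
                        -Force
```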
When you change a blob's tier, you move the blob and all of its data to the target tier. To make the change, retrieve a blob with the `Get-AzStorageBlob` cmdlet, and call the `BlobClient.SetAccessTier` method. This approach can be used to change the tier between **Hot**, **Cool**, and **Archive**.

- Changing tiers from **Cool** or **Hot** to **Archive** take place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline, and can't be read or modified. Before you can read or modify an archived blob's data, you'll need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+ Changing tiers from **Cool** or **Hot** to **Archive** takes place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline and can't be read or modified. Before you can read or modify an archived blob's data, you need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).

The following sample code sets the tier to **Hot** for all blobs within the `archive` container.
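The sample referenced above isn't included in this hunk. A minimal sketch of the loop it describes, assuming the `$ctx` context object and the `BlobClient.SetAccessTier` call named in the paragraph:

```azurepowershell
#Sketch: move every blob in the archive container to the Hot tier
$blobs = Get-AzStorageBlob -Container "archive" -Context $ctx

Foreach ($blob in $blobs) {
    #SetAccessTier accepts the target tier name: Hot, Cool, or Archive
    $blob.BlobClient.SetAccessTier("Hot")
}
```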
@@ -401,7 +401,7 @@ Foreach($blob in $blobs) {

Blob index tags make data management and discovery easier. Blob index tags are user-defined key-value index attributes that you can apply to your blobs. Once configured, you can categorize and find objects within an individual container or across all containers. Blob resources can be dynamically categorized by updating their index tags without requiring a change in container organization. Index tags offer a flexible way to cope with changing data requirements. You can use both metadata and index tags simultaneously. For more information on index tags, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).

- The following example illustrates how to add blob index tags to a series of blobs. The example reads data from an XML file and uses it to create index tags on several blobs. To use the sample code, create a local *blob-list.xml* file in your *C:\temp* directory. The XML data is provided below.
+ The following example illustrates how to add blob index tags to a series of blobs. The example reads data from an XML file and uses it to create index tags on several blobs. To use the sample code, create a local *blob-list.xml* file in your *C:\temp* directory. The XML data is provided in the following example.
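The XML-driven sample and the *blob-list.xml* contents aren't shown in this hunk, so they aren't reproduced here. As a simplified sketch that instead sets index tags directly on a single blob, with placeholder tag names and values and assuming the `Set-AzStorageBlobTag` cmdlet available in recent Az.Storage versions:

```azurepowershell
#Sketch: apply index tags directly to one blob
$tags = @{ "Division" = "Finance"; "Project" = "Q1-Review" }
Set-AzStorageBlobTag -Blob "demo-file.txt" -Container "demo-container" -Tag $tags -Context $ctx
```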
- You can delete either a single blob or series of blobs with the `Remove-AzStorageBlob` cmdlet. When deleting multiple blobs, you can utilize conditional operations, loops, or the PowerShell pipeline as shown in the examples below.
+ You can delete either a single blob or a series of blobs with the `Remove-AzStorageBlob` cmdlet. When deleting multiple blobs, you can use conditional operations, loops, or the PowerShell pipeline as shown in the following examples.

> [!WARNING]
> Running the following examples may permanently delete blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see [Soft delete for containers](soft-delete-blob-overview.md).
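A minimal sketch of the two deletion patterns, assuming the placeholder names used in the earlier sketches:

```azurepowershell
#Sketch: delete a single named blob
Remove-AzStorageBlob -Blob "demo-file.txt" -Container "demo-container" -Context $ctx

#Sketch: delete a group of blobs through the pipeline
Get-AzStorageBlob -Blob "log*" -Container "demo-container" -Context $ctx |
    Remove-AzStorageBlob
```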
- In some cases, it's possible to retrieve blobs that have been deleted. If your storage account's soft delete data protection option is enabled, the `-IncludeDeleted` parameter will return blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
+ In some cases, it's possible to retrieve blobs that have been deleted. If your storage account's soft delete data protection option is enabled, the `-IncludeDeleted` parameter returns blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.

Use the following example to retrieve a list of blobs deleted within the container's associated retention period. The result displays a list of recently deleted blobs.
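The listing sample isn't visible in this hunk; a minimal sketch using the `-IncludeDeleted` parameter described above:

```azurepowershell
#Sketch: list blobs, including soft-deleted blobs still inside the retention period
Get-AzStorageBlob -Container "demo-container" -IncludeDeleted -Context $ctx |
    Select-Object Name, IsDeleted, LastModified
```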
As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore blobs deleted within the associated retention period. You may also use versioning to maintain previous versions of your blobs for each recovery and restoration.

- If blob versioning and blob soft delete are both enabled, then modifying, overwriting, deleting, or restoring a blob automatically creates a new version. The method you'll use to restore a deleted blob will depend upon whether versioning is enabled on your storage account.
+ If blob versioning and blob soft delete are both enabled, then modifying, overwriting, deleting, or restoring a blob automatically creates a new version. The method you use to restore a deleted blob depends upon whether versioning is enabled on your storage account.

The following code sample restores all soft-deleted blobs or, if versioning is enabled, restores the latest version of a blob. It first determines whether versioning is enabled with the `Get-AzStorageBlobServiceProperty` cmdlet.

If versioning is enabled, the `Get-AzStorageBlob` cmdlet retrieves a list of all uniquely named blob versions. Next, the blob versions on the list are retrieved and ordered by date. If no versions are found with the `LatestVersion` attribute value, the `Copy-AzBlob` cmdlet is used to make an active copy of the latest version.

If versioning is disabled, the `BlobBaseClient.Undelete` method is used to restore each soft-deleted blob in the container.

- Before you can follow this example, you'll need to enable soft delete or versioning on at least one of your storage accounts.
+ Before you can follow this example, you need to enable soft delete or versioning on at least one of your storage accounts.
+
+ > [!IMPORTANT]
+ > The following example enumerates a group of blobs and stores them in memory before processing them. If versioning is enabled, the blobs are also sorted. The use of the `-ContinuationToken` parameter with the `$maxCount` variable limits the number of blobs within each group to conserve resources. You can adjust the value of the `$maxCount` variable, but if a container holds millions of blobs, enumerating them this way is expensive and the script will process the blobs slowly.

To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
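The restore script itself isn't part of this hunk. A rough sketch of the non-versioned path only, with the property and method names (`IsVersioningEnabled`, `Undelete`) treated as assumptions based on the cmdlet and class references above:

```azurepowershell
#Sketch: restore soft-deleted blobs when versioning is disabled
$serviceProps = Get-AzStorageBlobServiceProperty -Context $ctx

if (-not $serviceProps.IsVersioningEnabled) {
    #Undelete (defined on BlobBaseClient) is called through each deleted blob's client
    Get-AzStorageBlob -Container "demo-container" -IncludeDeleted -Context $ctx |
        Where-Object { $_.IsDeleted } |
        ForEach-Object { $_.BlobClient.Undelete() }
}
```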