
Commit c5dac7a

Merge pull request #90602 from tamram/tamram-1003
update perf checklist for block blob accounts
2 parents: 4ded5e8 + 3ce1955 · commit c5dac7a

12 files changed: +800, -509 lines

.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
@@ -24581,6 +24581,11 @@
       "redirect_url": "/azure/storage/common/storage-performance-checklist",
       "redirect_document_id": true
     },
+    {
+      "source_path": "articles/storage/common/storage-performance-checklist.md",
+      "redirect_url": "/azure/storage/blobs/storage-performance-checklist",
+      "redirect_document_id": true
+    },
     {
       "source_path": "articles/storage/storage-php-how-to-use-blobs.md",
       "redirect_url": "/azure/storage/blobs/storage-php-how-to-use-blobs",

articles/batch/batch-task-output-file-conventions.md

Lines changed: 2 additions & 2 deletions
@@ -59,12 +59,12 @@ To persist output data to Azure Storage using the File Conventions library, you
 
 ## Persist output data
 
-To persist job and task output data with the File Conventions library, create a container in Azure Storage, then save the output to the container. Use the [Azure Storage client library for .NET](https://www.nuget.org/packages/WindowsAzure.Storage) in your task code to upload the task output to the container.
+To persist job and task output data with the File Conventions library, create a container in Azure Storage, then save the output to the container. Use the [Azure Storage client library for .NET](https://www.nuget.org/packages/WindowsAzure.Storage) in your task code to upload the task output to the container.
 
 For more information about working with containers and blobs in Azure Storage, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-dotnet-how-to-use-blobs.md).
 
 > [!WARNING]
-> All job and task outputs persisted with the File Conventions library are stored in the same container. If a large number of tasks try to persist files at the same time, [storage throttling limits](../storage/common/storage-performance-checklist.md#blobs) may be enforced.
+> All job and task outputs persisted with the File Conventions library are stored in the same container. If a large number of tasks try to persist files at the same time, Azure Storage throttling limits may be enforced. For more information about throttling limits, see [Performance and scalability checklist for Blob storage](../storage/blobs/storage-performance-checklist.md).
 
 ### Create storage container
 
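The paragraph in this diff points task code at the .NET client library. As a rough illustration of the same flow in another language, the sketch below uses the Python azure-storage-blob package; the container name, blob prefix, file path, and environment variable are placeholders, not values from this commit or from the Batch File Conventions library.

```python
# Minimal sketch, assuming azure-storage-blob (v12) and a connection string in
# an environment variable. The article's own samples use the .NET library.
import os

from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

# All job and task outputs share one container, as the warning above notes.
container = service.get_container_client("job-outputs")
try:
    container.create_container()
except ResourceExistsError:
    pass  # The container was already created by another task.

# Upload one task's output file under a per-task prefix.
with open("stdout.txt", "rb") as data:
    container.upload_blob(name="task-001/stdout.txt", data=data, overwrite=True)
```

Because every task writes into that single container, many simultaneous uploads can run into the storage throttling limits described in the checklist that the rewritten warning now links to.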

articles/storage/blobs/TOC.yml

Lines changed: 1 addition & 1 deletion
@@ -150,7 +150,7 @@
   - name: Scalability and performance targets
     href: ../common/storage-scalability-targets.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json
   - name: Performance and scalability checklist
-    href: ../common/storage-performance-checklist.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json
+    href: storage-performance-checklist.md
   - name: Latency in Blob storage
     href: storage-blobs-latency.md
   - name: Concurrency

articles/storage/blobs/storage-blob-scalable-app-upload-files.md

Lines changed: 3 additions & 3 deletions
@@ -1,10 +1,10 @@
 ---
 title: Upload large amounts of random data in parallel to Azure Storage | Microsoft Docs
-description: Learn how to use the Azure SDK to upload large amounts of random data in parallel to an Azure Storage account
+description: Learn how to use the Azure Storage client library to upload large amounts of random data in parallel to an Azure Storage account
 author: roygara
 ms.service: storage
 ms.topic: tutorial
-ms.date: 02/20/2018
+ms.date: 10/08/2019
 ms.author: rogarana
 ms.subservice: blobs
 ---
@@ -23,7 +23,7 @@ In part two of the series, you learn how to:
 
 Azure blob storage provides a scalable service for storing your data. To ensure your application is as performant as possible, an understanding of how blob storage works is recommended. Knowledge of the limits for Azure blobs is important, to learn more about these limits visit: [blob storage scalability targets](../common/storage-scalability-targets.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#azure-blob-storage-scale-targets).
 
-[Partition naming](../common/storage-performance-checklist.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#subheading47) is another potentially important factor when designing a highly performing application using blobs. For block sizes greater than or equal to four MiB, [High-Throughput block blobs](https://azure.microsoft.com/blog/high-throughput-with-azure-blob-storage/) are used, and partition naming will not impact performance. For block sizes less than four MiB, Azure storage uses a range-based partitioning scheme to scale and load balance. This configuration means that files with similar naming conventions or prefixes go to the same partition. This logic includes the name of the container that the files are being uploaded to. In this tutorial, you use files that have GUIDs for names as well as randomly generated content. They are then uploaded to five different containers with random names.
+[Partition naming](../blobs/storage-performance-checklist.md#partitioning) is another potentially important factor when designing a high-performance application using blobs. For block sizes greater than or equal to 4 MiB, [High-Throughput block blobs](https://azure.microsoft.com/blog/high-throughput-with-azure-blob-storage/) are used, and partition naming will not impact performance. For block sizes less than 4 MiB, Azure storage uses a range-based partitioning scheme to scale and load balance. This configuration means that files with similar naming conventions or prefixes go to the same partition. This logic includes the name of the container that the files are being uploaded to. In this tutorial, you use files that have GUIDs for names as well as randomly generated content. They are then uploaded to five different containers with random names.
 
 ## Prerequisites
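As a rough sketch of the naming scheme this paragraph describes (GUID blob names, random content, several randomly named containers), the snippet below uses the Python azure-storage-blob package rather than the tutorial's .NET sample; the container count, blob size, and worker count are illustrative placeholders.

```python
# Minimal sketch, assuming azure-storage-blob (v12); not the tutorial's .NET code.
import os
import uuid
from concurrent.futures import ThreadPoolExecutor

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

# Five containers with random (GUID) names, mirroring the tutorial's setup.
containers = [service.create_container(str(uuid.uuid4())) for _ in range(5)]

def upload_random_blob(container):
    # GUID blob names and random content avoid shared prefixes, so writes are
    # spread across partitions (which matters mainly for block sizes < 4 MiB).
    container.upload_blob(name=str(uuid.uuid4()), data=os.urandom(4 * 1024 * 1024))

# Fan uploads out across the containers in parallel.
with ThreadPoolExecutor(max_workers=8) as pool:
    for i in range(50):
        pool.submit(upload_random_blob, containers[i % len(containers)])
```

Per the paragraph above, for block sizes at or above 4 MiB the high-throughput block blob path makes partition naming largely irrelevant, so the random names matter most for smaller block sizes.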

0 commit comments
