
Commit 21bffb6

Merge pull request #281722 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)

2 parents: ff1af5f + cbe7aeb

4 files changed: +7 -3 lines changed

articles/azure-functions/functions-bindings-storage-blob-trigger.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -553,7 +553,7 @@ If all 5 tries fail, Azure Functions adds a message to a Storage queue named *we
 ## Memory usage and concurrency

 ::: zone pivot="programming-language-csharp"
-When you bind to an [output type](#usage) that doesn't support steaming, such as `string`, or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types).
+When you bind to an [output type](#usage) that doesn't support streaming, such as `string`, or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types).
 ::: zone-end
 ::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-python,programming-language-powershell,programming-language-java"
 At this time, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than expected memory usage when processing blobs.
```
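For context on the guidance this hunk touches: a minimal sketch of a stream-supporting binding in the in-process C# model, which avoids buffering the whole blob. This is not part of the commit; the function name and the `samples-workitems/{name}` container path are illustrative assumptions.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobStreamExample
{
    // Binding to Stream (rather than string or Byte[]) lets the runtime
    // stream blob content instead of loading the entire blob into memory.
    // "samples-workitems/{name}" is an illustrative container path.
    [FunctionName("BlobStreamExample")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}")] Stream blobStream,
        string name,
        ILogger log)
    {
        log.LogInformation($"Processing blob: {name}, size: {blobStream.Length} bytes");

        // Read incrementally instead of materializing the full content.
        using var reader = new StreamReader(blobStream);
        while (!reader.EndOfStream)
        {
            var line = reader.ReadLine();
            // Process each line without buffering the whole blob.
        }
    }
}
```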

articles/azure-resource-manager/management/move-resource-group-and-subscription.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -19,6 +19,9 @@ If your move requires setting up new dependent resources, you'll experience an i

 Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource.

+> [!NOTE]
+> You can't move Azure resources to another resource group or another subscription if there's a read-only lock, whether in the source or in the destination.
+
 ## Changed resource ID

 When you move a resource, you change its resource ID. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When you move a resource to a new resource group or subscription, you change one or more values in that path.
```
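To make the ID change concrete, a small C# sketch (not from the commit; the subscription ID, group names, and storage account name are invented for illustration) that builds IDs from the documented template and shows which segment a resource-group move rewrites:

```csharp
using System;

class ResourceIdExample
{
    // Follows the documented template:
    // /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}
    //   /providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    static string BuildId(string subscriptionId, string resourceGroup,
                          string providerNamespace, string resourceType, string resourceName) =>
        $"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}" +
        $"/providers/{providerNamespace}/{resourceType}/{resourceName}";

    static void Main()
    {
        // Hypothetical values, purely illustrative.
        var sub = "00000000-0000-0000-0000-000000000000";

        var before = BuildId(sub, "old-rg", "Microsoft.Storage", "storageAccounts", "examplestore");
        var after  = BuildId(sub, "new-rg", "Microsoft.Storage", "storageAccounts", "examplestore");

        Console.WriteLine(before); // .../resourceGroups/old-rg/...
        Console.WriteLine(after);  // only the resourceGroups segment changed
    }
}
```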

articles/communication-services/quickstarts/email/handle-email-events.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -88,7 +88,8 @@ To generate and receive Email events, take the steps in the following sections.

 To view event triggers, we need to generate some events. To trigger an event, [send email](../email/send-email.md) using the Email domain resource attached to the Communication Services resource.

-- `Email Delivery Report Received` events are generated when the Email status is in terminal state, i.e. Delivered, Failed, FilteredSpam, Quarantined.
+- `Email Delivery Report Received` events are generated when the Email status is in terminal state, like Delivered, Failed, FilteredSpam, Quarantined.
+
 - `Email Engagement Tracking Report Received` events are generated when the email sent is either opened or a link within the email is clicked. To trigger an event, you need to turn on the `User Interaction Tracking` option on the Email domain resource

 Check out the full list of [events that Communication Services supports](../../../event-grid/event-schema-communication-services.md).
```
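As a hedged sketch of consuming the two events named in this hunk (not part of the commit; the function name is an assumption, and the event type strings are the Event Grid types for Communication Services email as I understand them), an Azure Function with an Event Grid trigger can branch on the event type:

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class EmailEventHandler
{
    [FunctionName("HandleEmailEvents")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        switch (eventGridEvent.EventType)
        {
            // Raised when the email reaches a terminal state
            // (Delivered, Failed, FilteredSpam, Quarantined).
            case "Microsoft.Communication.EmailDeliveryReportReceived":
                log.LogInformation("Delivery report: {data}", eventGridEvent.Data);
                break;

            // Raised on opens/clicks; requires the User Interaction Tracking
            // option to be enabled on the Email domain resource.
            case "Microsoft.Communication.EmailEngagementTrackingReportReceived":
                log.LogInformation("Engagement report: {data}", eventGridEvent.Data);
                break;
        }
    }
}
```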

articles/cosmos-db/hierarchical-partition-keys.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -31,7 +31,7 @@ When you choose each level of your hierarchical partition key, it's important to

 - **Have a high cardinality**. The first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values.

-- Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first level key is on `TenantId` and only have 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be colloacted on the same physical partition to avoid full-fanout queries.
+- Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first level key is on `TenantId` and only have 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be collocated on the same physical partition to avoid full-fanout queries.
 - While this may be okay for workloads where we do a one-time ingest of all our tenants' data and the following operations are primarily read-heavy afterwards, this can be unideal for workloads where your business requirements involve ingestion of data within a specific time. For example, if you have strict business requirements to avoid latencies, the maximum throughput your workload can theoretically achieve to ingest data is number of physical partitions * 10k. If your top-level key has low cardinality, your number of physical partitions will likely be 1, unless there is sufficient data for the level 1 key for it to be spread across multiple partitions after splits which can take between 4-6 hours to complete.

 - **Spread request unit (RU) consumption and data storage evenly across all logical partitions**. This spread ensures even RU consumption and storage distribution across your physical partitions.
```
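To ground the cardinality advice this hunk edits, a minimal C# sketch with the Cosmos DB .NET SDK (not part of the commit; the database/container names and the `/TenantId`, `/UserId` paths are illustrative assumptions). Adding a high-cardinality second level under `/TenantId` lets writes spread across physical partitions after splits:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class HierarchicalPartitionKeyExample
{
    static async Task Main()
    {
        // Connection string and names are placeholders.
        using var client = new CosmosClient("<connection-string>");
        Database database = await client.CreateDatabaseIfNotExistsAsync("appdb");

        // Up to three levels; documents sharing a first-level value are kept
        // together on one physical partition until it splits, which is why
        // low first-level cardinality throttles ingestion.
        var properties = new ContainerProperties(
            id: "events",
            partitionKeyPaths: new List<string> { "/TenantId", "/UserId" });

        Container container = await database.CreateContainerIfNotExistsAsync(properties);

        // Point operations supply the full hierarchical key.
        var key = new PartitionKeyBuilder()
            .Add("tenant-42")   // level 1
            .Add("user-7")      // level 2
            .Build();

        await container.CreateItemAsync(
            new { id = "1", TenantId = "tenant-42", UserId = "user-7" }, key);
    }
}
```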
