
Commit 5ad0100

Acrolinx spelling and grammar
1 parent 87e47a1 commit 5ad0100

File tree

1 file changed (+4, -4 lines)


articles/azure-functions/functions-concurrency.md

Lines changed: 4 additions & 4 deletions
@@ -35,7 +35,7 @@ For trigger types that support concurrency configuration, the settings you choos
 
 _Applies only to the Flex Consumption plan (preview)_
 
-The Flex Consumption plan scales all HTTP trigger functions together as a group. For more information, see [Per-function scaling](event-driven-scaling.md#per-function-scaling). The following table indicates the default concurrency setting for HTTP triggers on a given instances, based on the configured [instance memory size](./flex-consumption-plan.md#instance-memory).
+The Flex Consumption plan scales all HTTP trigger functions together as a group. For more information, see [Per-function scaling](event-driven-scaling.md#per-function-scaling). The following table indicates the default concurrency setting for HTTP triggers on a given instance, based on the configured [instance memory size](./flex-consumption-plan.md#instance-memory).
 
 | Instance size (MB) | Default concurrency<sup>*</sup> |
 | ---- | ---- |
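
As context for the defaults this table covers: on the Flex Consumption plan, the per-instance HTTP concurrency can also be set explicitly rather than taken from the memory-size default. The following fragment of a Flex Consumption site definition is only a rough sketch; the property names (`functionAppConfig`, `scaleAndConcurrency`, `triggers.http.perInstanceConcurrency`) and the values shown are assumptions for illustration, not taken from this commit.

```json
{
  "functionAppConfig": {
    "scaleAndConcurrency": {
      "instanceMemoryMB": 2048,
      "maximumInstanceCount": 100,
      "triggers": {
        "http": {
          "perInstanceConcurrency": 16
        }
      }
    }
  }
}
```

When no explicit value is set, the size-based defaults described in the table above apply.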
@@ -71,11 +71,11 @@ Using dynamic concurrency provides the following benefits:
 - **Simplified configuration**: You no longer have to manually determine per-trigger concurrency settings. The system learns the optimal values for your workload over time.
 - **Dynamic adjustments**: Concurrency is adjusted up or down dynamically in real time, which allows the system to adapt to changing load patterns over time.
 - **Instance health protection**: The runtime limits concurrency to levels a function app instance can comfortably handle. This protects the app from overloading itself by taking on more work than it should.
-- **Improved throughput**: Overall throughput is improved because individual instances aren't pulling more work than they can quickly process. This allows work to be load-balanced more effectively across instances. For functions that can handle higher loads, concurrency can be increased to values beyond the default config values, which yields higher throughput.
+- **Improved throughput**: Overall throughput is improved because individual instances aren't pulling more work than they can quickly process. This allows work to be load-balanced more effectively across instances. For functions that can handle higher loads, higher throughput can be obtained by increasing concurrency to values above the default configuration.
 
 ### Dynamic concurrency configuration
 
-Dynamic concurrency can be enabled at the host level in the host.json file. When, enabled any binding extensions used by your function app that support dynamic concurrency adjust concurrency dynamically as needed. Dynamic concurrency settings override any manually configured concurrency settings for triggers that support dynamic concurrency.
+Dynamic concurrency can be enabled at the host level in the host.json file. When enabled, the concurrency levels of any binding extensions that support this feature are adjusted automatically as needed. In these cases, dynamic concurrency settings override any manually configured concurrency settings.
 
 By default, dynamic concurrency is disabled. With dynamic concurrency enabled, concurrency starts at 1 for each function, and is adjusted up to an optimal value, which is determined by the host.
 
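
For reference, the host-level switch discussed in the changed line above is a small host.json addition. A minimal sketch, using the `concurrency` section of the host.json schema (the `snapshotPersistenceEnabled` value shown is illustrative):

```json
{
  "version": "2.0",
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```

With snapshot persistence enabled, learned concurrency values are periodically persisted, so new instances can start from those values rather than starting at 1 and relearning.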
@@ -109,7 +109,7 @@ Dynamic concurrency is enabled for a function app at the host level, and any ext
 | --- | --- | --- |
 | **Queue storage** | [version 5.x](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) (Storage extension) | The Azure Queue storage trigger has its own message polling loop. When using static config, concurrency is governed by the `BatchSize`/`NewBatchThreshold` config options. When using dynamic concurrency, those configuration values are ignored. Dynamic concurrency is integrated into the message loop, so the number of messages fetched per iteration are dynamically adjusted. When throttles are enabled (host is overloaded), message processing will be paused until throttles are disabled. When throttles are disabled, concurrency will increase. |
 | **Blob storage** | [version 5.x](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) (Storage extension) | Internally, the Azure Blob storage trigger uses the same infrastructure that the Azure Queue Trigger uses. When new/updated blobs need to be processed, messages are written to a platform managed control queue, and that queue is processed using the same logic used for QueueTrigger. When dynamic concurrency is enabled, concurrency for the processing of that control queue will be dynamically managed. |
-| **Service Bus** | [version 5.x](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus) | The Service Bus trigger currently supports three execution models. Dynamic concurrency affects these execution models as follows:<br/><br/>• **Single dispatch topic/queue processing**: Each invocation of your function processes a single message. When using static config, concurrency is governed by the `MaxConcurrentCalls` config option. When using dynamic concurrency, that config value is ignored, and concurrency is adjusted dynamically.<br/>• **Session based single dispatch topic/queue processing**: Each invocation of your function processes a single message. Depending on the number of active sessions for your topic/queue, each instance leases one or more sessions. Messages in each session are processed serially, to guarantee ordering in a session. When not using dynamic concurrency, concurrency is governed by the `MaxConcurrentSessions` setting. With dynamic concurrency enabled, `MaxConcurrentSessions` is ignored and the number of sessions each instance is processing is dynamically adjusted.<br/>• **Batch processing**: Each invocation of your function processes a batch of messages, governed by the `MaxMessageCount` setting. Because batch invocations are serial, concurrency for your batch-triggered function is always one and dynamic concurrency doesn't apply. |
+| **Service Bus** | [version 5.x](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus) | The Service Bus trigger currently supports three execution models. Dynamic concurrency affects these execution models as follows:<br/><br/>• **Single dispatch topic/queue processing**: Each invocation of your function processes a single message. When using static config, concurrency is governed by the `MaxConcurrentCalls` config option. When using dynamic concurrency, that config value is ignored, and concurrency is adjusted dynamically.<br/>• **Session based single dispatch topic/queue processing**: Each invocation of your function processes a single message. Depending on the number of active sessions for your topic/queue, each instance leases one or more sessions. Messages in each session are processed serially, to guarantee ordering in a session. When dynamic concurrency isn't used, concurrency is governed by the `MaxConcurrentSessions` setting. With dynamic concurrency enabled, `MaxConcurrentSessions` is ignored and the number of sessions each instance is processing is dynamically adjusted.<br/>• **Batch processing**: Each invocation of your function processes a batch of messages, governed by the `MaxMessageCount` setting. Because batch invocations are serial, concurrency for your batch-triggered function is always one and dynamic concurrency doesn't apply. |
 
 ## Next steps
 
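
As a companion to the extension table above: the static settings it names are ordinary host.json extension settings (written in camelCase there). A minimal sketch of the static configuration that dynamic concurrency overrides when enabled; the numeric values are illustrative only:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 16,
      "newBatchThreshold": 8
    },
    "serviceBus": {
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 8
    }
  }
}
```

With dynamic concurrency enabled, these values are ignored for the triggers listed in the table, and concurrency is instead adjusted by the host.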