Running concurrent activities on Azure durable functions #2214
Replies: 3 comments 5 replies
-
Based on your description, it sounds like explicitly configuring maximum concurrency settings would improve performance for your scenario. See here for details: https://docs.microsoft.com/azure/azure-functions/durable/durable-functions-perf-and-scale#concurrency-throttles
Since your activities are CPU intensive, you should focus on reducing the maxConcurrentActivityFunctions setting to a small number, like something between 1 and 4 (perhaps depending on how many cores each VM instance has). The platform will then detect a backlog in activities that need to be scheduled and start adding new VM instances for increased global concurrency. Just be aware that it may take between 15-30 seconds for each new VM instance to get allocated.
-
Many thanks, Chris, for your swift response.
We will try this immediately; currently our maxConcurrentActivityFunctions is set to 2 and maxConcurrentOrchestratorFunctions is set to 10.
Do you think we should reduce maxConcurrentOrchestratorFunctions further?
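For reference, the concurrency settings discussed in this thread are configured under the `durableTask` section of host.json. A minimal sketch, using the values mentioned above (2 and 10) — the right numbers depend on your workload and how many cores each VM instance has; this assumes the Functions v2+ host.json layout:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 2,
      "maxConcurrentOrchestratorFunctions": 10
    }
  }
}
```

Per the advice above, dropping maxConcurrentActivityFunctions toward the per-instance core count is what creates the backlog that triggers scale-out.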
-
I wonder if we (I work with @SteveLockley) are not setting our expectations appropriately. Perhaps this is us expecting consistency from a runtime where that isn't realistic? We are seeing deviations of ~50% between runs; would an average be more appropriate?
-
We have invested heavily in developing a functions app that executes our workflows, but we find the performance with concurrent activities very poor.
Our scenario is an orchestration with 10-100 activities, each of which takes about 2-30 seconds to execute.
We launch these and await Task.WhenAll to return.
IO in these tasks is all local to the instance apart from the initial download of a different blob to each task.
We have 2 instances running at cold start and find that the activities are dispatched onto these.
No further instances are fired up by Azure.
We have tried tweaking the settings to allow one activity per instance, changing the batch size to values between 1 and 32, and adjusting buffers. Nothing affects the outcome.
The 10-100 tasks all run on the two machines, causing excessive CPU usage (110%), thrashing that makes a 2-second task take 30 seconds, and indications of thread exhaustion.
Generally we would be better off just running the whole set of tasks sequentially; it is quicker.
What are we missing here? Is this not what Azure Durable Functions is for?
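The fan-out/fan-in pattern described here — scheduling 10-100 activities and awaiting Task.WhenAll — looks roughly like this in C#. The function names, the string[] of blob names, and the long return type are illustrative assumptions, not taken from the actual app:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class WorkflowOrchestration
{
    // Hypothetical fan-out/fan-in orchestrator; "ProcessBlob" and the
    // string[] input are stand-ins for the real workflow.
    [FunctionName("RunWorkflow")]
    public static async Task<long[]> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string[] blobNames = context.GetInput<string[]>();

        // Fan out: schedule every activity without awaiting individually,
        // so all of them are queued at once.
        var tasks = blobNames
            .Select(name => context.CallActivityAsync<long>("ProcessBlob", name));

        // Fan in: a single await for the whole batch, as described above.
        return await Task.WhenAll(tasks);
    }
}
```

Note that each queued activity competes for the same per-instance CPU once dispatched, which is why the per-instance maxConcurrentActivityFunctions throttle (rather than the fan-out code itself) governs how many run at once on each VM.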