
Commit faf58dd

addressing Thiago review comments
1 parent dd4414e commit faf58dd

6 files changed: +37, -53 lines changed

articles/azure-functions/analyze-telemetry-data.md

Lines changed: 2 additions & 2 deletions
@@ -84,7 +84,7 @@ The tables that are available are shown in the **Schema** tab on the left. You c
| Table | Description |
| ----- | ----------- |
- | **traces** | Logs created by the runtime, scale controller, and traces from your function code. For Flex Consumption, also includes logs created during code deployment. |
+ | **traces** | Logs created by the runtime, scale controller, and traces from your function code. For Flex Consumption plan hosting, `traces` also includes logs created during code deployment. |
| **requests** | One request for each function invocation. |
| **exceptions** | Any exceptions thrown by the runtime. |
| **customMetrics** | The count of successful and failing invocations, success rate, and duration. |
@@ -159,7 +159,7 @@ traces
## Query Flex Consumption code deployment logs

- _Flex Consumption is in preview._
+ [!INCLUDE [functions-flex-preview-note](../../includes/functions-flex-preview-note.md)]

The following query can be used to search for all code deployment logs for the current function app within the specified time period:
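Purely as a hedged illustration of querying the `traces` table from the Azure CLI, and not the article's own query, here's a minimal sketch. It assumes the `application-insights` CLI extension, and the `message contains 'deploy'` filter is only a placeholder for whatever filter actually identifies deployment entries:

```azurecli
# Illustrative sketch only: run a Kusto query against the traces table from the CLI.
# Requires the Application Insights extension: az extension add --name application-insights
az monitor app-insights query \
    --app <APP_INSIGHTS_RESOURCE_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --offset 24h \
    --analytics-query "traces | where message contains 'deploy' | order by timestamp desc"
```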

articles/azure-functions/container-concepts.md

Lines changed: 1 addition & 6 deletions
@@ -16,17 +16,12 @@ Functions also supports containerized function app deployments. In a containeriz
## Container hosting options

- <!----moved from the hosting article -->
- You can host Azure Functions instances running in Linux containers in both Premium and Dedicated hosting plans.
- However, a better hosting option might be to instead deploy containerized function apps to Kubernetes clusters or to Azure Container Apps. If you choose to host your functions in a Kubernetes cluster, consider using an [Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/overview.md). To learn more about deploying custom container apps, see [Azure Container Apps hosting of Azure Functions](./functions-container-apps-hosting.md).

There are several options for hosting your containerized function apps in Azure:

| Hosting option | Benefits |
| --- | --- |
| **[Azure Container Apps]** | Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container hosted programs. Container Apps hosting lets you run your functions in a managed Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and KEDA. Container Apps uses the power of the underlying Azure Kubernetes Service (AKS) while removing the complexity of having to work with Kubernetes APIs. |
- | **Azure Arc-enabled Kubernetes clusters** | You can host your function apps on Azure Arc-enabled Kubernetes clusters as either a [code-only deployment](./create-first-function-arc-cli.md) or in a [custom Linux container](./create-first-function-arc-custom-container.md). Azure Arc lets you to attach Kubernettes clusters so that you can manage and configure them in Azure. _Hosting Azure Functions containers on Azure Arc-enabled Kubernetes clusters is currently in preview._ |
+ | **Azure Arc-enabled Kubernetes clusters (preview)** | You can host your function apps on Azure Arc-enabled Kubernetes clusters as either a [code-only deployment](./create-first-function-arc-cli.md) or in a [custom Linux container](./create-first-function-arc-custom-container.md). Azure Arc lets you attach Kubernetes clusters so that you can manage and configure them in Azure. _Hosting Azure Functions containers on Azure Arc-enabled Kubernetes clusters is currently in preview._ |
| **[Azure Functions]** | You can deploy your containerized function apps to run in either an [Elastic Premium plan](./functions-premium-plan.md) or a [Dedicated plan](./dedicated-plan.md). Premium plan hosting provides you with the benefits of dynamic scaling. You might want to use Dedicated plan hosting to take advantage of existing unused App Service plan resources. |
| **[Kubernetes]** | Because the Azure Functions runtime provides flexibility in hosting where and how you want, you can host and manage your function app containers directly in Kubernetes clusters. [KEDA](https://keda.sh) (Kubernetes-based Event Driven Autoscaling) pairs seamlessly with the Azure Functions runtime and tooling to provide event-driven scale in Kubernetes. Just keep in mind that running your containerized function apps on Kubernetes, either by using KEDA or by direct deployment, is an open-source effort that you can use free of cost, with best-effort support provided by contributors and from the community. |
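To make the Container Apps row above concrete, here's a hedged sketch of creating a containerized function app in an existing Container Apps environment with the Azure CLI. The resource names are placeholders, and some parameter names vary by CLI version (for example, `--image` replaced the older `--deployment-container-image-name`), so treat this as a sketch rather than a canonical command:

```azurecli
# Sketch: create a function app hosted on Azure Container Apps from a container image.
# Assumes the resource group, storage account, and Container Apps environment already exist.
az functionapp create \
    --resource-group <RESOURCE_GROUP> \
    --name <APP_NAME> \
    --environment <CONTAINER_APPS_ENVIRONMENT> \
    --storage-account <STORAGE_ACCOUNT_NAME> \
    --functions-version 4 \
    --runtime <LANGUAGE_RUNTIME> \
    --image <REGISTRY_SERVER>/<IMAGE_NAME>:<TAG>
```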

articles/azure-functions/event-driven-scaling.md

Lines changed: 29 additions & 22 deletions
@@ -16,9 +16,9 @@ The way in which your function app scales depends on the hosting plan:
+ **Consumption plan:** Each instance of the Functions host in the Consumption plan is typically limited to 1.5 GB of memory and one CPU. An instance of the host supports the entire function app. As such, all functions within a function app share resources within an instance and are scaled at the same time. When function apps share the same Consumption plan, they're still scaled independently.

- + **Flex Consumption plan:** In the Flex Consumption plan there are [multiple choices for instance memory](flex-consumption-plan.md/#instance-memory). The Flex Consumption plan uses a per-function scaling strategy, where each function is scaled independently, except for HTTP, Blob, and Durable Functions triggered functions which scale in their own groups. For more information, see [Per-function scaling](#per-function-scaling). These instances are then scaled based on the concurrency of your requests.
+ + **Flex Consumption plan:** The plan uses a deterministic per-function scaling strategy, where each function is scaled independently, except for HTTP, Blob, and Durable Functions triggered functions, which scale in their own groups. For more information, see [Per-function scaling](#per-function-scaling). These instances are then scaled based on the concurrency of your requests.

- + **Premium plan:** The specific size of the Premium plan determines the available memory and CPU for all apps in that plan on that instance. The plan scales out its instances based on the scaling needs of the apps in the plan, and the apps will scale within the plan as needed.
+ + **Premium plan:** The specific size of the Premium plan determines the available memory and CPU for all apps in that plan on that instance. The plan scales out its instances based on the scaling needs of the apps in the plan, and the apps scale within the plan as needed.

Function code files are stored on Azure Files shares on the function's main storage account. When you delete the main storage account of the function app, the function code files are deleted and can't be recovered.

@@ -32,11 +32,11 @@ The unit of scale for Azure Functions is the function app. When the function app
## Cold Start

- After your function app has been idle for a number of minutes, the platform might decide to scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can affect the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider using a plan other than the Consumption. The other plans offer these strategies to mitigate or eliminate cold starts:
+ Should your function app become idle for a few minutes, the platform might decide to scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can affect the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider using a plan other than the Consumption plan. The other plans offer these strategies to mitigate or eliminate cold starts:

- + [Premium plan](functions-premium-plan.md#eliminate-cold-starts): supports both always ready and prewarmed instances.
+ + [Premium plan](functions-premium-plan.md#eliminate-cold-starts): supports both prewarmed instances and always ready instances, with a minimum of one instance.

- + [Flex Consumption plan](flex-consumption-plan.md#always-ready-instances): supports an optional number of always ready instances based on per instance scaling groups.
+ + [Flex Consumption plan](flex-consumption-plan.md#always-ready-instances): supports an optional number of always ready instances, which you can define for each per-instance scaling group.

+ [Dedicated plan](./dedicated-plan.md#always-on): the plan itself doesn't scale dynamically, but you can run your app continuously with the **Always on** setting enabled.
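As a hedged illustration of always ready instances on the Flex Consumption plan, the sketch below sets them with the Azure CLI. The `always-ready set` subcommand and its `--settings` group keys (`http`, `durable`, `blob`, or `function:<FUNCTION_NAME>`) are assumptions based on current CLI documentation, so verify them with `--help` for your CLI version:

```azurecli
# Sketch: keep two always ready instances for the HTTP trigger group and one for a
# specific function, which helps mitigate cold starts on a Flex Consumption app.
az functionapp scale config always-ready set \
    --resource-group <RESOURCE_GROUP> \
    --name <APP_NAME> \
    --settings http=2 function:<FUNCTION_NAME>=1
```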

@@ -47,19 +47,37 @@ Scaling can vary based on several factors, and apps scale differently based on t
* **Maximum instances:** A single function app only scales out to a [maximum allowed by the plan](functions-scale.md#scale). However, a single instance [can process more than one message or request at a time](functions-concurrency.md#concurrency-in-azure-functions). You can [specify a lower maximum](#limit-scale-out) to throttle scale as required.
* **New instance rate:** For HTTP triggers, new instances are allocated, at most, once per second. For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds. Scaling is faster when running in a [Premium plan](functions-premium-plan.md).
* **Target-based scaling:** Target-based scaling provides a fast and intuitive scaling model for customers and is currently supported for Service Bus queues and topics, Storage queues, Event Hubs, Apache Kafka, and Azure Cosmos DB extensions. Make sure to review [target-based scaling](./functions-target-based-scaling.md) to understand their scaling behavior.
- * **Per-function scaling:** The Flex Consumption plan scales all HTTP triggered and Durable functions together, and it scales all other types of function independently. For more information, see [per-function scaling](#per-function-scaling).
+ * **Per-function scaling:** With some notable exceptions, functions running in the Flex Consumption plan scale on independent instances. The exceptions include HTTP triggers and Blob storage (Event Grid) triggers. Each of these trigger types scales together as a group on the same instances. Likewise, all Durable Functions triggers share instances and scale together. For more information, see [per-function scaling](#per-function-scaling).

## Limit scale-out

- You might decide to restrict the maximum number of instances an app can use for scale-out. This is most common for cases where a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions can scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
+ You might decide to restrict the maximum number of instances an app can use for scale-out. This is most common for cases where a downstream component like a database has limited throughput. For the maximum scale limits when running the various hosting plans, see [Scale limits](functions-scale.md#scale).

- # [Azure CLI](#tab/azure-cli)
+ ### Flex Consumption plan
+
+ By default, apps running in a Flex Consumption plan have a limit of `100` overall instances. Currently, the lowest maximum instance count value is `40` and the highest supported maximum instance count value is `1000`. When you use the [`az functionapp create`] command to create a function app in the Flex Consumption plan, use the `--maximum-instance-count` parameter to set this maximum instance count for your app. This example creates an app with a maximum instance count of `200`:
+
+ ```azurecli
+ az functionapp create --resource-group <RESOURCE_GROUP> --name <APP_NAME> --storage <STORAGE_ACCOUNT_NAME> --runtime <LANGUAGE_RUNTIME> --runtime-version <RUNTIME_VERSION> --flexconsumption-location <REGION> --maximum-instance-count 200
+ ```
+
+ This example uses the [`az functionapp scale config set`](/cli/azure/functionapp/scale/config#az-functionapp-scale-config-set) command to change the maximum instance count for an existing app to `150`:
+
+ ```azurecli
+ az functionapp scale config set --resource-group <RESOURCE_GROUP> --name <APP_NAME> --maximum-instance-count 150
+ ```
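As a related sketch, you can confirm the resulting configuration afterward. The `show` subcommand is assumed to exist alongside `set` in the `az functionapp scale config` group, so verify it in your CLI version:

```azurecli
# Sketch: view the current scale configuration, including the maximum instance count,
# for a Flex Consumption app.
az functionapp scale config show --resource-group <RESOURCE_GROUP> --name <APP_NAME>
```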
+ ### Consumption/Premium plans
+
+ In a Consumption or Elastic Premium plan, you can specify a lower maximum limit for your app by modifying the value of the `functionAppScaleLimit` site configuration setting. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or to a valid value between `1` and the app maximum.
+
+ #### [Azure CLI](#tab/azure-cli)

```azurecli
az resource update --resource-type Microsoft.Web/sites -g <RESOURCE_GROUP> -n <FUNCTION_APP-NAME>/config/web --set properties.functionAppScaleLimit=<SCALE_LIMIT>
```

- # [Azure PowerShell](#tab/azure-powershell)
+ #### [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
$resource = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName <RESOURCE_GROUP> -Name <FUNCTION_APP-NAME>/config/web
@@ -80,7 +98,7 @@ The following considerations apply for scale-in behaviors:
## Per-function scaling

- _Applies only to the Flex Consumption plan (preview)_
+ _Applies only to the Flex Consumption plan (preview)_.

The [Flex Consumption plan] is unique in that it implements a _per-function scaling_ behavior. In per-function scaling, except for HTTP triggers, Blob (Event Grid) triggers, and Durable Functions, all other function trigger types in your app scale on independent instances. HTTP triggers in your app all scale together as a group on the same instances, as do all Blob (Event Grid) triggers and all Durable Functions triggers, which each have their own shared instances.
@@ -92,7 +110,7 @@ Consider a function app hosted in a Flex Consumption plan that has these functions:
In this example:

- + The two HTTP triggered functions (`function1` and `function2`) both run together on their own instances and scale together according to [HTTP concurrency settings]().
+ + The two HTTP triggered functions (`function1` and `function2`) both run together on their own instances and scale together according to [HTTP concurrency settings](flex-consumption-how-to.md#set-http-concurrency-limits).
+ The two Durable functions (`function3` and `function4`) both run together on their own instances and scale together based on [configured concurrency throttles](./durable/durable-functions-perf-and-scale.md#concurrency-throttles).
+ The Service Bus triggered function `function5` runs on its own instances and is scaled independently according to the [target-based scaling rules for Service Bus queues and topics](functions-target-based-scaling.md#service-bus-queues-and-topics).
+ The Service Bus triggered function `function6` runs on its own instances and is scaled independently according to the [target-based scaling rules for Service Bus queues and topics](functions-target-based-scaling.md#service-bus-queues-and-topics).
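As a hedged illustration of the HTTP concurrency settings linked above, the sketch below raises the per-instance concurrency for the HTTP trigger group, which in turn affects how that group scales. The `--trigger-type` and `--trigger-settings` parameters are assumptions based on the current `az functionapp scale config set` documentation, so verify them for your CLI version:

```azurecli
# Sketch: let each instance in the HTTP trigger group handle up to 16 concurrent
# requests before the group scales out further.
az functionapp scale config set \
    --resource-group <RESOURCE_GROUP> \
    --name <APP_NAME> \
    --trigger-type http \
    --trigger-settings perInstanceConcurrency=16
```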
@@ -104,17 +122,6 @@ There are many aspects of a function app that impact how it scales, including h
For more information on scaling in Python and Node.js, see [Azure Functions Python developer guide - Scaling and concurrency](functions-reference-python.md#scaling-and-performance) and [Azure Functions Node.js developer guide - Scaling and concurrency](functions-reference-node.md#scaling-and-concurrency).

- ## Billing model
-
- Billing for the different plans is described in detail on the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). Usage is aggregated at the function app level and counts only the time that function code is executed. The following are units for billing:
-
- * **Resource consumption in gigabyte-seconds (GB-s)**. Computed as a combination of memory size and execution time for all functions within a function app.
- * **Executions**. Counted each time a function is executed in response to an event trigger.
-
- Useful queries and information on how to understand your consumption bill can be found [on the billing FAQ](https://github.com/Azure/Azure-Functions/wiki/Consumption-Plan-Cost-Billing-FAQ).
-
- To learn more about Flex Consumption plan billing, see [Billing](flex-consumption-plan.md#billing) in the Flex Consumption plan documentation.

## Next steps

To learn more, see the following articles:
