articles/azure-functions/durable/durable-functions-azure-storage-provider.md (10 additions, 0 deletions)

@@ -280,6 +280,16 @@ If you are not seeing the throughput numbers you expect and your CPU and memory
> [!TIP]
> In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
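
For illustration, here's a minimal host.json sketch showing where `controlQueueBufferThreshold` lives, assuming Durable Functions extension v2.x (where Azure Storage settings sit under `storageProvider`); the value `512` is only an example, not a recommendation:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "controlQueueBufferThreshold": 512
      }
    }
  }
}
```
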
### Flex Consumption plan
The [Flex Consumption plan](../flex-consumption-plan.md) is an Azure Functions hosting plan that provides many of the benefits of the Consumption plan, including a serverless billing model, while also adding useful features, such as private networking, instance memory size selection, and full support for managed identity authentication.
Azure Storage is currently the only supported [storage provider](durable-functions-storage-providers.md) for Durable Functions when hosted in the Flex Consumption plan.
You should follow these performance recommendations when hosting Durable Functions in the Flex Consumption plan:
* Set the [always ready instance count](../flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. This setting keeps one instance warm to handle Durable Functions-related requests, reducing the application's cold start latency.
* Reduce the [queue polling interval](durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. Because this plan type is more sensitive to queue polling delays, a shorter interval lets messages be dequeued and handled sooner. However, more frequent polling operations also increase Azure Storage transaction costs. A sample host.json configuration is shown after this list.
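
As a minimal sketch (assuming the Azure Storage provider), the polling interval can be capped in host.json with `maxQueuePollingInterval`; the 10-second value below mirrors the recommendation above:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "maxQueuePollingInterval": "00:00:10"
      }
    }
  }
}
```
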
### High throughput processing
The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should consider instead using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).

articles/azure-functions/durable/quickstart-mssql.md (3 additions, 1 deletion)

@@ -16,9 +16,11 @@ Durable Functions supports several [storage providers](durable-functions-storage
> [!NOTE]
>
> - The MSSQL backend was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub data so that users get the benefits of a modern, enterprise-grade database management system (DBMS) infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers overview](durable-functions-storage-providers.md).
>
> - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the MSSQL back end. Similarly, the task hub contents that are created by using MSSQL can't be preserved if you switch to a different storage provider.
>
> - The MSSQL backend currently isn't supported by Durable Functions when running on the [Flex Consumption plan](../flex-consumption-plan.md).
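
As a rough sketch of how a function app opts into this provider, the MSSQL backend is selected in host.json by setting the storage provider `type` to `mssql`; the connection string name `SQLDB_Connection` below is an illustrative placeholder, not a required name:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection"
      }
    }
  }
}
```
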

articles/azure-functions/durable/quickstart-netherite.md (2 additions, 0 deletions)

@@ -19,6 +19,8 @@ Durable Functions offers several [storage providers](durable-functions-storage-p
> - Netherite was designed and developed by [Microsoft Research](https://www.microsoft.com/research) for [high throughput](https://microsoft.github.io/durabletask-netherite/#/scenarios) scenarios. In some [benchmarks](https://microsoft.github.io/durabletask-netherite/#/throughput?id=multi-node-throughput), throughput increased by more than an order of magnitude compared to the default Azure Storage provider. To learn more about when to use the Netherite storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation.
>
> - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the Netherite back end. Similarly, the task hub contents that are created by using Netherite can't be preserved if you switch to a different storage provider.
>
> - The Netherite backend currently isn't supported by Durable Functions when running on the [Flex Consumption plan](../flex-consumption-plan.md).

articles/azure-functions/flex-consumption-plan.md

description: Running your function code in the Azure Functions Flex Consumption plan provides virtual network integration, dynamic scale (to zero), and reduced cold starts.
ms.service: azure-functions
ms.topic: concept-article
ms.date: 11/01/2024
ms.custom: references_regions, build-2024
# Customer intent: As a developer, I want to understand the benefits of using the Flex Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
---
@@ -137,7 +137,7 @@ In Flex Consumption, many of the standard application settings and site configur
Keep these other considerations in mind when using the Flex Consumption plan during the current preview:
+ **Host**: There's a 30-second timeout for app initialization. If your function app takes longer than 30 seconds to start, you see gRPC-related `System.TimeoutException` entries. This timeout will be made configurable, and a clearer exception will be implemented as part of [this host work item](https://github.com/Azure/azure-functions-host/issues/10482).
+ **Durable Functions**: Azure Storage is currently the only supported [storage provider](./durable/durable-functions-storage-providers.md) for Durable Functions when hosted in the Flex Consumption plan. See the [recommendations](./durable/durable-functions-azure-storage-provider.md#flex-consumption-plan) for hosting Durable Functions in the Flex Consumption plan.
+ **VNet Integration**: Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`.
+ **Triggers**: All triggers are fully supported except for the Kafka and Azure SQL triggers. The Blob storage trigger supports only the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version; a sample host.json entry is shown after this list.
+ **Regions**: Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
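
As a minimal sketch of the extension bundle requirement above, a non-C# app's host.json would reference the bundle with a version range like the following (the rest of the file is omitted for brevity):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.0.0, 5.0.0)"
  }
}
```
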

articles/cost-management-billing/costs/tutorial-improved-exports.md (21 additions, 6 deletions)

@@ -40,10 +40,9 @@ For Azure Storage accounts:
- Your Azure storage account must be configured for blob or file storage.
- Don't configure exports to a storage container that is configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
- To export to storage accounts with configured firewalls, you need additional privileges on the storage account. These privileges are required only during export creation or modification:
  - **Owner** role or any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. A sample custom role definition is shown after this list.
- Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
:::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
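
For illustration, a hypothetical custom role granting the two required authorization actions could be defined like this (the role name and assignable scope are placeholders; combine it with whatever data-plane permissions your scenario already uses):

```json
{
  "Name": "Export Firewall Configurator (example)",
  "IsCustom": true,
  "Description": "Allows configuring Cost Management exports against a firewalled storage account.",
  "Actions": [
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Authorization/permissions/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```
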
@@ -211,9 +210,25 @@ You can retrieve up to 13 months of historical data through the portal UI for al
- All available prices:
  - MCA/MPA: Up to 13 months.
  - EA: Up to 25 months (starting from December 2022).
#### Why do I get the 'Unauthorized' error while trying to create an Export?
When attempting to create an Export to a storage account with a firewall, the user must have the **Owner** role or a custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. If these permissions are missing, you encounter an error similar to the following:
```json
{
  "error": {
    "code": "Unauthorized",
    "message": "The user does not have authorization to perform 'Microsoft.Authorization/roleAssignments/write' action on specified storage account, please use a storage account with sufficient permissions. If the permissions have changed recently then retry after some time."
  }
}
```
You can check the permissions on the storage account by following the steps in [Check access for a user to a single Azure resource](../../role-based-access-control/check-access.md).
## Next steps
- Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).

articles/databox/data-box-deploy-copy-data-via-rest.md (5 additions, 31 deletions)

@@ -15,13 +15,6 @@ ms.author: shaas
# Tutorial: Use REST APIs to Copy data to Azure Data Box Blob storage
> [!IMPORTANT]
> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
>
> For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps contained within the [Copy data to Data Box](#copy-data-to-data-box) section to copy your data to the appropriate access tier.
>
> The information contained within this section applies to orders placed after April 1, 2024.
> [!CAUTION]
> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
@@ -44,7 +37,7 @@ Before you begin, make sure that:
3. You review the [system requirements for Data Box Blob storage](data-box-system-requirements-rest.md) and are familiar with supported versions of APIs, SDKs, and tools.
4. You have access to a host computer that has the data that you want to copy over to Data Box. Your host computer must:
   * Run a [Supported operating system](data-box-system-requirements.md).
   * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. You can use a 1-GbE data link if a 10-GbE connection isn't available, though copy speeds are impacted.
5. [Download AzCopy V10](../storage/common/storage-use-azcopy-v10.md) on your host computer. AzCopy is used to copy data to Azure Data Box Blob storage from your host computer.
## Connect via http or https
@@ -98,7 +91,7 @@ Use the Azure portal to download certificate.
### Import certificate
Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after importing it into the system's certificate store, while other applications don't make use of that mechanism.
Specific information for some applications is mentioned in this section. For more information on other applications, see the documentation for the application and the operating system used.
@@ -152,27 +145,9 @@ Follow the same steps to [add device IP address and blob service endpoint when c
Follow the steps to [Configure partner software that you used while connecting over *http*](#verify-connection-and-configure-partner-software). The only difference is that you should leave the *Use http option* unchecked.
## Determine appropriate access tiers for block blobs
> [!IMPORTANT]
> The information contained within this section applies to orders placed after April 1<sup>st</sup>, 2024.
Azure Storage allows you to store block blob data in multiple access tiers within the same storage account, so data can be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.

| Tier | Recommendation | Best practice |
|---------|----------------|---------------|
| Hot | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
| Cool | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
| Cold | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier. | Data in this tier should be stored for a minimum of 90 days. |
| Archive | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |

For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box. This process is discussed in greater detail within the [Copy data to Azure Data Box](#copy-data-to-data-box) section.
## Copy data to Data Box
After one or more Data Box shares are connected, the next step is to copy data. Before you initiate data copy operations, consider the following limitations:
* While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box limits](data-box-limits.md).
* Simultaneous uploads by Data Box and another non-Data Box application could potentially result in upload job failures and data corruption.
@@ -200,8 +175,8 @@ The first step is to create a container, because blobs are always uploaded into
4. A text box appears below the **Blob Containers** folder. Enter the name for your blob container. See the [Create the container and set permissions](../storage/blobs/storage-quickstart-blobs-dotnet.md) for information on rules and restrictions on naming blob containers.
5. Press **Enter** when done to create the blob container, or **Esc** to cancel. After successful creation, the blob container is displayed under the selected storage account's **Blob Containers** folder.