
Commit a801101

Merge branch 'main' into release-pp-nsp
2 parents: 9e1eabb + 809bb9b

48 files changed: +458 −143 lines changed

articles/azure-functions/durable/durable-functions-azure-storage-provider.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -280,6 +280,16 @@ If you are not seeing the throughput numbers you expect and your CPU and memory
 > [!TIP]
 > In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
 
+### Flex Consumption plan
+
+The [Flex Consumption plan](../flex-consumption-plan.md) is an Azure Functions hosting plan that provides many of the benefits of the Consumption plan, including a serverless billing model, while also adding useful features, such as private networking, instance memory size selection, and full support for managed identity authentication.
+
+Azure Storage is currently the only supported [storage provider](durable-functions-storage-providers.md) for Durable Functions when hosted in the Flex Consumption plan.
+
+Follow these performance recommendations when hosting Durable Functions in the Flex Consumption plan:
+
+* Set the [always ready instance count](../flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. This ensures that one instance is always ready to handle Durable Functions-related requests, reducing the application's cold starts.
+* Reduce the [queue polling interval](durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. Because this plan type is more sensitive to queue polling delays, lowering the polling interval increases the frequency of polling operations so that requests are handled faster. However, more frequent polling operations lead to higher Azure Storage account costs.
+
 ### High throughput processing
 
 The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should consider instead using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).
```
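To make these recommendations concrete, here's a minimal **host.json** sketch (illustrative only, not part of this commit). `maxQueuePollingInterval` lowers the queue polling interval to 10 seconds; the `controlQueueBufferThreshold` value shown is an assumed example for illustration, not a prescribed setting:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "maxQueuePollingInterval": "00:00:10",
        "controlQueueBufferThreshold": 256
      }
    }
  }
}
```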

articles/azure-functions/durable/durable-functions-storage-providers.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -197,6 +197,7 @@ There are many significant tradeoffs between the various supported storage provi
 | Price-performance configurable? | ❌ No | ✅ Yes (Event Hubs TUs and CUs) | ✅ Yes (SQL vCPUs) |
 | Disconnected environment support | ❌ Azure connectivity required | ❌ Azure connectivity required | ✅ Fully supported |
 | Identity-based connections | ✅ Fully supported | ❌ Not supported | ⚠️ Requires runtime-driven scaling |
+| [Flex Consumption plan](../flex-consumption-plan.md) | ✅ Fully supported ([see notes](./durable-functions-azure-storage-provider.md#flex-consumption-plan)) | ❌ Not supported | ❌ Not supported |
 
 ## Next steps
 
```
articles/azure-functions/durable/quickstart-mssql.md

Lines changed: 3 additions & 1 deletion
```diff
@@ -16,9 +16,11 @@ Durable Functions supports several [storage providers](durable-functions-storage
 
 > [!NOTE]
 >
-> - The MSSQL back end was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub data so that users get the benefits of a modern, enterprise-grade database management system (DBMS) infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers overview](durable-functions-storage-providers.md).
+> - The MSSQL backend was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub data so that users get the benefits of a modern, enterprise-grade database management system (DBMS) infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers overview](durable-functions-storage-providers.md).
 >
 > - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the MSSQL back end. Similarly, the task hub contents that are created by using MSSQL can't be preserved if you switch to a different storage provider.
+>
+> - The MSSQL backend currently isn't supported by Durable Functions when running on the [Flex Consumption plan](../flex-consumption-plan.md).
 
 ## Prerequisites
 
```
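For context (not part of this diff), the MSSQL backend is selected in **host.json**. A minimal sketch, assuming the SQL connection string is stored in an app setting named `SQLDB_Connection` (a placeholder name):

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection"
      }
    }
  }
}
```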
articles/azure-functions/durable/quickstart-netherite.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -19,6 +19,8 @@ Durable Functions offers several [storage providers](durable-functions-storage-p
 > - Netherite was designed and developed by [Microsoft Research](https://www.microsoft.com/research) for [high throughput](https://microsoft.github.io/durabletask-netherite/#/scenarios) scenarios. In some [benchmarks](https://microsoft.github.io/durabletask-netherite/#/throughput?id=multi-node-throughput), throughput increased by more than an order of magnitude compared to the default Azure Storage provider. To learn more about when to use the Netherite storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation.
 >
 > - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the Netherite back end. Similarly, the task hub contents that are created by using Netherite can't be preserved if you switch to a different storage provider.
+>
+> - The Netherite backend currently isn't supported by Durable Functions when running on the [Flex Consumption plan](../flex-consumption-plan.md).
 
 ## Prerequisites
 
```
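Similarly, the Netherite backend is selected in **host.json**. A minimal sketch (not part of this commit); the task hub name and connection-setting names below follow the Netherite documentation's conventions but are assumptions here:

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHub",
      "storageProvider": {
        "type": "Netherite",
        "StorageConnectionName": "AzureWebJobsStorage",
        "EventHubsConnectionName": "EventHubsConnection"
      }
    }
  }
}
```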
articles/azure-functions/flex-consumption-plan.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -3,7 +3,7 @@ title: Azure Functions Flex Consumption plan hosting
 description: Running your function code in the Azure Functions Flex Consumption plan provides virtual network integration, dynamic scale (to zero), and reduced cold starts.
 ms.service: azure-functions
 ms.topic: concept-article
-ms.date: 08/22/2024
+ms.date: 11/01/2024
 ms.custom: references_regions, build-2024
 # Customer intent: As a developer, I want to understand the benefits of using the Flex Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
 ---
@@ -137,7 +137,7 @@ In Flex Consumption, many of the standard application settings and site configur
 Keep these other considerations in mind when using Flex Consumption plan during the current preview:
 
 + **Host**: There is a 30-second timeout for app initialization. If your function app takes longer than 30 seconds to start, you see gRPC-related `System.TimeoutException` entries. This timeout will be made configurable, and a clearer exception will be implemented as part of [this host work item](https://github.com/Azure/azure-functions-host/issues/10482).
-+ **Durable Functions**: Due to the per function scaling nature of Flex Consumption, to ensure the best performance for Durable Functions we recommend setting the [Always Ready instance count](./flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. Also, with the Azure Storage provider, consider reducing the [queue polling interval](./durable/durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. Only Azure Storage is supported as a backend storage providers for Flex Consumption hosted durable functions.
++ **Durable Functions**: Azure Storage is currently the only supported [storage provider](./durable/durable-functions-storage-providers.md) for Durable Functions when hosted in the Flex Consumption plan. See the [recommendations](./durable/durable-functions-azure-storage-provider.md#flex-consumption-plan) for hosting Durable Functions in the Flex Consumption plan.
 + **VNet Integration**: Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`.
 + **Triggers**: All triggers are fully supported except for Kafka and Azure SQL triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version.
 + **Regions**: Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
```
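For the VNet integration note above, the required delegation looks roughly like the following subnet fragment in an ARM template. This is a hedged sketch, not part of this commit; the subnet name, delegation name, and address prefix are placeholder assumptions, while the `serviceName` value comes from the text:

```json
{
  "name": "flex-consumption-subnet",
  "properties": {
    "addressPrefix": "10.0.1.0/24",
    "delegations": [
      {
        "name": "flex-delegation",
        "properties": {
          "serviceName": "Microsoft.App/environments"
        }
      }
    ]
  }
}
```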

articles/cost-management-billing/costs/tutorial-improved-exports.md

Lines changed: 21 additions & 6 deletions
````diff
@@ -40,10 +40,9 @@ For Azure Storage accounts:
 - Your Azure storage account must be configured for blob or file storage.
 - Don't configure exports to a storage container that is configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
 - To export to storage accounts with configured firewalls, you need other privileges on the storage account. The other privileges are only required during export creation or modification. They are:
-  - Owner role on the storage account.
-  Or
-  - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
-  Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
+  - **Owner** role or any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
+
+  - Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
 - The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
 :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
@@ -211,9 +210,25 @@ You can retrieve up to 13 months of historical data through the portal UI for al
 - All available prices:
 
   - MCA/MPA: Up to 13 months.
 
   - EA: Up to 25 months (starting from December 2022).
 
+#### Why do I get the 'Unauthorized' error while trying to create an Export?
+
+To create an export to a storage account with a firewall, the user must have the **Owner** role or a custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. If these permissions are missing, you encounter an error like the following:
+
+```json
+{
+  "error": {
+    "code": "Unauthorized",
+    "message": "The user does not have authorization to perform 'Microsoft.Authorization/roleAssignments/write' action on specified storage account, please use a storage account with sufficient permissions. If the permissions have changed recently then retry after some time."
+  }
+}
+```
+
+You can check a user's permissions on the storage account by following the steps in [Check access for a user to a single Azure resource](../../role-based-access-control/check-access.md).
+
 ## Next steps
 
 - Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).
````
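For reference (not part of this diff), a custom role carrying just the two permissions named above could be defined like this. The role name, description, and subscription scope are placeholders; only the two `Actions` entries come from the text:

```json
{
  "Name": "Cost Export Storage Writer (example)",
  "IsCustom": true,
  "Description": "Can create Cost Management exports to a firewalled storage account.",
  "Actions": [
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Authorization/permissions/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```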

articles/databox/data-box-deploy-copy-data-via-rest.md

Lines changed: 5 additions & 31 deletions
```diff
@@ -15,13 +15,6 @@ ms.author: shaas
 
 # Tutorial: Use REST APIs to Copy data to Azure Data Box Blob storage
 
-> [!IMPORTANT]
-> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
->
->For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps containined within the [Copy data to Data Box](#copy-data-to-data-box) section to copy your data to the appropriate access tier.
->
-> The information contained within this section applies to orders placed after April 1, 2024.
-
 > [!CAUTION]
 > This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
@@ -44,7 +37,7 @@ Before you begin, make sure that:
 3. You review the [system requirements for Data Box Blob storage](data-box-system-requirements-rest.md) and are familiar with supported versions of APIs, SDKs, and tools.
 4. You have access to a host computer that has the data that you want to copy over to Data Box. Your host computer must:
    * Run a [Supported operating system](data-box-system-requirements.md).
-   * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds are impacted.
+   * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. You can use a 1-GbE data link if a 10-GbE connection isn't available, though copy speeds are impacted.
 5. [Download AzCopy V10](../storage/common/storage-use-azcopy-v10.md) on your host computer. AzCopy is used to copy data to Azure Data Box Blob storage from your host computer.
 
 ## Connect via http or https
@@ -98,7 +91,7 @@ Use the Azure portal to download certificate.
 
 ### Import certificate
 
-Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after it's imported into the system's certificate store, while other applications don't make use of that mechanism.
+Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after importing it into the system's certificate store, while other applications don't make use of that mechanism.
 
 Specific information for some applications is mentioned in this section. For more information on other applications, see the documentation for the application and the operating system used.
 
@@ -152,27 +145,9 @@ Follow the same steps to [add device IP address and blob service endpoint when c
 
 Follow the steps to [Configure partner software that you used while connecting over *http*](#verify-connection-and-configure-partner-software). The only difference is that you should leave the *Use http option* unchecked.
 
-## Determine appropriate access tiers for block blobs
-
-> [!IMPORTANT]
-> The information contained within this section applies to orders placed after April 1<sup>st</sup>, 2024.
-
-Azure Storage allows you to store block blob data in multiple access tiers within the same storage account. This ability allows data to be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.
-
-| Tier | Recommendation | Best practice |
-|---------|----------------|---------------|
-| Hot | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
-| Cool | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
-| Cold | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier.| Data in this tier should be stored for a minimum of 90 days. |
-| Archive | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |
-
-For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
-
-You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box. This process is discussed in greater detail within the [Copy data to Azure Data Box](#copy-data-to-data-box) section.
-
 ## Copy data to Data Box
 
-After connecting to one or more Data Box shares, the next step is to copy data. Before you begin the data copy, consider the following limitations:
+After one or more Data Box shares are connected, the next step is to copy data. Before you initiate data copy operations, consider the following limitations:
 
 * While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box limits](data-box-limits.md).
 * Simultaneous uploads by Data Box and another non-Data Box application could potentially result in upload job failures and data corruption.
@@ -200,8 +175,8 @@ The first step is to create a container, because blobs are always uploaded into
 
 ![Blob Containers context menu, Create Blob Container](media/data-box-deploy-copy-data-via-rest/create-blob-container-1.png)
 
-4. A text box appears below the **Blob Containers** folder. Enter the name for your blob container. See the [Create the container and set permissions](../storage/blobs/storage-quickstart-blobs-dotnet.md) for information on rules and restrictions on naming blob containers.
-5. Press **Enter** when done to create the blob container, or **Esc** to cancel. After the blob container is successfully created, it's displayed under the **Blob Containers** folder for the selected storage account.
+4. A text box appears below the **Blob Containers** folder. Enter the name for your blob container. See the [Create the container and set permissions](../storage/blobs/storage-quickstart-blobs-dotnet.md) for information on rules and restrictions on naming blob containers.
+5. Press **Enter** when done to create the blob container, or **Esc** to cancel. After successful creation, the blob container is displayed under the selected storage account's **Blob Containers** folder.
 
 ![Blob container created](media/data-box-deploy-copy-data-via-rest/create-blob-container-2.png)
@@ -266,7 +241,6 @@ In this tutorial, you learned about Azure Data Box topics such as:
 >
 > * Prerequisites for copy data to Azure Data Box Blob storage using REST APIs
 > * Connecting to Data Box Blob storage via *http* or *https*
-> * Determining appropriate access tiers for block blobs
 > * Copy data to Data Box
 
 Advance to the next tutorial to learn how to ship your Data Box back to Microsoft.
```

articles/defender-for-iot/device-builders/release-notes.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ ms.topic: conceptual
 ms.date: 04/17/2024
 ---
 
-# What's new
+# What's new in Microsoft Defender for IoT
 
 [!INCLUDE [Banner for top of topics](../includes/banner.md)]
 
```