
Commit e6a40d3

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into app-insights

2 parents: 7c49980 + 22ed1f6

849 files changed: +15594 −7832 lines


.openpublishing.redirection.json

Lines changed: 10 additions & 0 deletions
@@ -1,5 +1,10 @@
 {
   "redirections": [
+    {
+      "source_path": "articles/cdn/cdn-http-variables.md",
+      "redirect_url": "/previous-versions/azure/cdn/cdn-http-variables",
+      "redirect_document_id": false
+    },
     {
       "source_path": "articles/cdn/cdn-advanced-http-reports.md",
       "redirect_url": "/previous-versions/azure/cdn/cdn-advanced-http-reports",
@@ -6804,5 +6809,10 @@
       "redirect_url": "/azure/azure-functions/migration/migrate-lambda-workloads-overview",
       "redirect_document_id": false
     },
+    {
+      "source_path": "articles/storage/files/storage-files-enable-soft-delete.md",
+      "redirect_url": "/azure/storage/files/storage-files-prevent-file-share-deletion",
+      "redirect_document_id": false
+    }
   ]
 }

articles/active-directory-b2c/api-connectors-overview.md

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ Using Azure AD B2C, you can add your own business logic to a user journey by cal
 ![Diagram of a RESTful service claims exchange](media/api-connectors-overview/restful-service-claims-exchange.png)

 > [!NOTE]
-> If there is slow or no response from the RESTful service to Azure AD B2C, the timeout is 30 seconds and the retry count is two times (meaning there are 3 tries in total). Currently, you can't configure the timeout and retry count settings.
+> HTTP requests may be cancelled if there is a slow or no response from the RESTful service to Azure AD B2C. The default timeout is 10 seconds and the default retry count is one (meaning there are 2 tries in total).

 ## Calling a RESTful service

articles/active-directory-b2c/manage-custom-policies-powershell.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ author: kengaderdus
 manager: CelesteDG

 ms.service: azure-active-directory
-ms.custom: has-azure-ad-ps-ref, azure-ad-ref-level-one-done
+ms.custom: no-azure-ad-ps-ref
 ms.topic: how-to
 ms.date: 01/11/2024
 ms.author: kengaderdus
@@ -16,7 +16,7 @@ ms.subservice: b2c

 # Manage Azure AD B2C custom policies with Microsoft Graph PowerShell

-Microsoft Graph PowerShell provides several cmdlets for command line- and script-based custom policy management in your Azure AD B2C tenant. Learn how to use the Azure AD PowerShell module to:
+Microsoft Graph PowerShell provides several cmdlets for command line- and script-based custom policy management in your Azure AD B2C tenant. Learn how to use the Microsoft Graph PowerShell SDK to:

 * List the custom policies in an Azure AD B2C tenant
 * Download a policy from a tenant

articles/api-center/includes/api-center-service-limits.md

Lines changed: 1 addition & 0 deletions
@@ -14,6 +14,7 @@ ms.custom: Include file

 | Resource | Free plan<sup>1</sup> | Standard plan<sup>2</sup> |
 | ---------------------------------------------------------------------- | -------------------------- |-------------|
+| Maximum number of APIs | 200 | 10,000 |
 | Maximum number of versions per API | 5 | 100 |
 | Maximum number of definitions per version | 5 | 5 |
 | Maximum number of deployments per API | 10 | 10 |

articles/api-center/synchronize-aws-gateway-apis.md

Lines changed: 3 additions & 2 deletions
@@ -8,8 +8,9 @@ ms.date: 02/10/2025
 ms.author: danlep
 ms.custom:
   - devx-track-azurecli
-  - migration
-  - aws-to-azure
+ms.collection:
+  - migration
+  - aws-to-azure
 # Customer intent: As an API program manager, I want to integrate my Azure API Management instance with my API center and synchronize API Management APIs to my inventory.
 ---


articles/api-management/TOC.yml

Lines changed: 2 additions & 2 deletions
@@ -91,8 +91,8 @@
   href: migrate-stv1-to-stv2-no-vnet.md
 - name: Migrate a VNet-injected instance
   href: migrate-stv1-to-stv2-vnet.md
-- name: Validate service updates
-  href: validate-service-updates.md
+- name: Configure update settings
+  href: configure-service-update-settings.md
 - name: Move instances between regions
   href: api-management-howto-migrate.md
 - name: Recover a deleted instance

articles/api-management/api-management-sample-flexible-throttling.md

Lines changed: 12 additions & 7 deletions
@@ -3,11 +3,9 @@ title: Advanced request throttling with Azure API Management
 description: Learn how to create and apply flexible quota and rate limiting policies with Azure API Management.
 services: api-management
 author: dlepow
-manager: erikre
-ms.assetid: fc813a65-7793-4c17-8bb9-e387838193ae
 ms.service: azure-api-management
 ms.topic: concept-article
-ms.date: 02/03/2018
+ms.date: 04/10/2025
 ms.author: danlep

 ---
@@ -58,7 +56,7 @@ The following policies restrict a single client IP address to only 10 calls ever
     counter-key="@(context.Request.IpAddress)" />
 ```

-If all clients on the Internet used a unique IP address, this might be an effective way of limiting usage by user. However, it is likely that multiple users are sharing a single public IP address due to them accessing the Internet via a NAT device. Despite this, for APIs that allow unauthenticated access the `IpAddress` might be the best option.
+If all clients on the internet used a unique IP address, this might be an effective way of limiting usage by user. However, it is likely that multiple users are sharing a single public IP address due to them accessing the internet via a NAT device. Despite this, for APIs that allow unauthenticated access the `IpAddress` might be the best option.

 ## User identity throttling
 If an end user is authenticated, then a throttling key can be generated based on information that uniquely identifies that user.
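(Editorial aside, not part of this commit.) A minimal sketch of such a user-identity key, assuming callers send a JWT bearer token in the `Authorization` header, could pair the article's `rate-limit-by-key` policy with the token's subject claim:

```xml
<!-- Sketch only: each authenticated user (identified by JWT subject) gets its own 10-calls-per-60-seconds counter. -->
<rate-limit-by-key calls="10"
    renewal-period="60"
    counter-key="@(context.Request.Headers.GetValueOrDefault("Authorization","").AsJwt()?.Subject)" />
```

Keying on the subject claim keeps a single NAT'd IP address from exhausting the limit for every user behind it.
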
@@ -85,8 +83,15 @@ When the throttling key is defined using a [policy expression](./api-management-

 This enables the developer's client application to choose how they want to create the rate limiting key. The client developers could create their own rate tiers by allocating sets of keys to users and rotating the key usage.

+## Considerations for multiple regions or gateways
+
+Rate limiting policies like `rate-limit`, `rate-limit-by-key`, `azure-openai-token-limit`, and `llm-token-limit` use counters at the level of the API Management gateway. This means that in [multi-region deployments](api-management-howto-deploy-multi-region.md) of API Management, each regional gateway has a separate counter, and rate limits are enforced separately for each region. Similarly, in API Management instances with [workspaces](workspaces-overview.md), limits are enforced separately for each workspace gateway.
+
+Quota policies such as `quota` and `quota-by-key` are global, meaning that a single counter is used at the level of the API Management instance.
+
 ## Summary
-Azure API Management provides rate and quota throttling to both protect and add value to your API service. The new throttling policies with custom scoping rules allow you finer grained control over those policies to enable your customers to build even better applications. The examples in this article demonstrate the use of these new policies by manufacturing rate limiting keys with client IP addresses, user identity, and client generated values. However, there are many other parts of the message that could be used such as user agent, URL path fragments, message size.
+Azure API Management provides rate and quota throttling to both protect and add value to your API service. These throttling policies with custom scoping rules allow you finer grained control over those policies to enable your customers to build even better applications. The examples in this article demonstrate the use of these new policies by manufacturing rate limiting keys with client IP addresses, user identity, and client generated values. However, there are many other parts of the message that could be used such as user agent, URL path fragments, and message size.
+
+## Related content

-## Next steps
-Please give us your feedback as a GitHub issue for this topic. It would be great to hear about other potential key values that have been a logical choice in your scenarios.
+* [Rate limit and quota policies](api-management-policies.md#rate-limiting-and-quotas)
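(Editorial aside, not part of this commit.) To illustrate the quota behavior described in the added section, a `quota-by-key` sketch with assumed values might look like the following; unlike the rate-limit policies above, its single counter applies across all regional and workspace gateways of the instance:

```xml
<!-- Illustrative values: 10,000 calls and 40,000 KB of bandwidth per key per hour (3,600 seconds). -->
<quota-by-key calls="10000"
    bandwidth="40000"
    renewal-period="3600"
    counter-key="@(context.Request.IpAddress)" />
```
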

articles/api-management/azure-openai-token-limit-policy.md

Lines changed: 1 addition & 0 deletions
@@ -74,6 +74,7 @@ By relying on token usage metrics returned from the OpenAI endpoint, the policy
 * Certain Azure OpenAI endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute. Completion tokens are also estimated when responses are streamed.
 * For models that accept image input, image tokens are generally counted by the backend language model and included in limit and quota calculations. However, when streaming is used or `estimate-prompt-tokens` is set to `true`, the policy currently over-counts each image as a maximum count of 1200 tokens.
 * [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
+* [!INCLUDE [api-management-token-limit-gateway-counts](../../includes/api-management-token-limit-gateway-counts.md)]

 ## Examples
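(Editorial aside, not part of this commit.) A minimal `azure-openai-token-limit` sketch, with an assumed per-minute limit, shows the counter the new include describes; that counter is tracked separately by each gateway:

```xml
<!-- Sketch: assumes a limit of 500 tokens per minute per caller IP address. -->
<!-- estimate-prompt-tokens="false" relies on token counts reported by the Azure OpenAI backend where available. -->
<azure-openai-token-limit counter-key="@(context.Request.IpAddress)"
    tokens-per-minute="500"
    estimate-prompt-tokens="false" />
```
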

articles/api-management/configure-service-update-settings.md

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
+---
+title: Configure API Management settings for service updates
+description: Learn how to configure settings for applying service updates to your Azure API Management instance. Settings include the update group and the maintenance window.
+author: dlepow
+ms.service: azure-api-management
+ms.topic: how-to
+ms.date: 04/08/2025
+ms.author: danlep
+---
+
+# Configure service update settings for your API Management instances
+
+[!INCLUDE [api-management-availability-premium-standard-basic](../../includes/api-management-availability-premium-standard-basic.md)]
+
+
+This article shows you how to configure *service update* settings (preview) in your API Management instance. Azure periodically applies service updates automatically to API Management instances, using a phased rollout approach. These updates include new features, security enhancements, and reliability improvements.
+
+You can't control exactly when Azure updates each API Management instance, but in select service tiers you can choose an *update group* for your instance so that it receives updates earlier or later than it usually would during an update rollout. You can also configure a *maintenance window* during the day when you want your instance to receive updates.
+
+* **Update group** - A set of instances that receive API Management service updates during a production rollout, which can take from several days to several weeks to complete.
+
+    Choose from:
+    * **Early** - Receive updates early in the rollout, for testing and early access to new features. This option is not recommended for production deployments.
+    * **Default** - Receive updates as part of the regular release rollout. This option is recommended for most services, including production deployments.
+    * **Late** - Receive updates later than the previous groups, typically weeks after the initial rollout. This option is recommended for mission-critical deployments only.
+    * **AI Gateway Early** (GenAI release) - Get early access to the latest [AI gateway features and updates](genai-gateway-capabilities.md) before they reach other update groups. Receive other service updates as part of the **Late** rollout group.
+
+    > [!NOTE]
+    > Azure deploys all updates using a [safe deployment practices (SDP) framework](https://azure.microsoft.com/blog/advancing-safe-deployment-practices/). Updates released early in a rollout might be less stable and replaced later by stable releases. All instances are eventually updated to the most stable release builds.
+
+    For example, you might want to add a test instance to the **Early** update group. This instance receives updates before your production instances, which you put in the **Default** or **Late** update group. You can monitor the test instance for any issues caused by the updates before they reach your production instances. [Learn more about canary deployments](#canary-deployment-strategies) with API Management.
+
+* **Maintenance window** - An 8-hour daily period when you want your instance to receive updates. By default, the maintenance window is 10 PM to 6 AM in the instance's timezone.
+
+    Service disruptions are rare during an update, but you might want to reduce risk by selecting times of low service use. For example, for production instances, set a maintenance window during weekday evenings and weekend mornings.
+
+## Configure service update settings
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Deployment + infrastructure** > **Service update settings**.
+1. Under **Update group**, review the current setting and select **Edit** to change it.
+1. Under **Maintenance window**, review the current settings and select **Edit** to change them. For each day you can select the default window, a different standard window, or a custom window by day.
+
+## Know when your instances are receiving updates
+
+Here's how to know about service updates that are expected or are in progress.
+
+* API Management updates are announced on the [API Management GitHub repo](https://github.com/Azure/API-Management/releases). Subscribe to receive notifications from this repository to know when update rollouts begin.
+
+* Monitor service updates that are taking place in your API Management instance by using the Azure [Activity log](/azure/azure-monitor/essentials/activity-log). The "Scheduled maintenance" event is emitted when an update begins.
+
+    :::image type="content" source="media/configure-service-update-settings/scheduled-maintenance.png" alt-text="Scheduled maintenance event in Activity log in the portal.":::
+
+    To receive notifications automatically, [set up an alert](/azure/azure-monitor/alerts/alerts-activity-log) on the Activity log.
+
+* By default, updates roll out to regions in the following phases: Azure EUAP regions, followed by West Central US, followed by remaining regions in several later phases. The sequence of regions updated in the later deployment phases differs from service to service. You can expect at least 24 hours between each phase of the production rollout.
+
+* Within a region, API Management instances in the Premium tier receive updates several hours later than those in other service tiers.
+
+> [!TIP]
+> If your API Management instance is deployed to multiple locations (regions), the timing of updates is determined by the instance's **Primary** location.
+
+## Canary deployment strategies
+
+You can use an API Management instance assigned to a specific update group (if that option is available) or deployed in a specific Azure region as a canary deployment that receives updates earlier than your production instances.
+
+* **Add instance to Early update group** - Use an API Management instance in the Early update group to validate updates early in a production rollout. This instance is effectively your canary deployment.
+
+* **Deploy in canary region** - If you have access to an Azure EUAP region, use an instance there to validate updates as soon as they're released to the production pipeline. Learn about the [Azure region access request process](/troubleshoot/azure/general/region-access-request-process).
+
+    > [!NOTE]
+    > Because of capacity constraints in EUAP regions, you might not be able to scale API Management instances as needed.
+
+* **Deploy in pilot region** - Use an instance in the West Central US region to simulate your production environment, or use it in production for noncritical API traffic. While this region receives updates after the EUAP regions, a deployment there is more likely to identify regressions that are specific to your service configuration.
+
+* **Deploy duplicate instances in a region** - If your production workload is a Premium tier instance in a specific region, consider deploying a similarly configured instance in a lower tier that receives updates earlier. For example, configure a preproduction instance in the Developer tier to validate updates.
+
+## Related content
+
+* Learn [how to monitor](api-management-howto-use-azure-monitor.md) your API Management instance.
+* Learn about other options to [observe](observability.md) your API Management instance.

articles/api-management/llm-token-limit-policy.md

Lines changed: 1 addition & 0 deletions
@@ -75,6 +75,7 @@ By relying on token usage metrics returned from the LLM endpoint, the policy can
 * Certain LLM endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute.
 * For models that accept image input, image tokens are generally counted by the backend language model and included in limit and quota calculations. However, when streaming is used or `estimate-prompt-tokens` is set to `true`, the policy currently over-counts each image as a maximum count of 1200 tokens.
 * [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
+* [!INCLUDE [api-management-token-limit-gateway-counts](../../includes/api-management-token-limit-gateway-counts.md)]

 ## Examples
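(Editorial aside, not part of this commit.) The `llm-token-limit` counterpart might be sketched the same way, with assumed values and a per-subscription counter key; it is subject to the same per-gateway counting noted in the new include:

```xml
<!-- Sketch: assumes a limit of 500 tokens per minute per API Management subscription. -->
<llm-token-limit counter-key="@(context.Subscription.Id)"
    tokens-per-minute="500"
    estimate-prompt-tokens="false" />
```
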
