
Commit b3254a4

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into azure-functions-triggers-and-bindings
2 parents 0a929c3 + 17af70a

137 files changed: +2488 -1025 lines changed

articles/active-directory-b2c/partner-trusona.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -75,7 +75,7 @@ In this scenario, Trusona acts as an Identity Provider (IdP) for Azure AD B2C to
 
 ## Step 1: Onboard with Trusona Authentication Cloud
 
-1. Sign in to the [Trusona Portal](https://portal.trusona.io).
+1. Sign in to the [Trusona Portal](https://portal.trusona.com/).
 2. From the left navigation panel, select **Settings**
 3. In the Settings menu, select the slider to **Enable OIDC**.
 4. Select the appropriate **Inputs** and provide the **Redirect URL** `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/oauth2/authresp`.
```
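For reference, both placeholders in the redirect URL from step 4 refer to the same Azure AD B2C tenant name; for a hypothetical tenant named `contoso`, the value would be `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.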

articles/api-management/TOC.yml

Lines changed: 0 additions & 2 deletions
```diff
@@ -254,8 +254,6 @@
   href: azure-openai-enable-semantic-caching.md
 - name: Authenticate and authorize to Azure OpenAI
   href: api-management-authenticate-authorize-azure-openai.md
-- name: Protect Azure OpenAI keys
-  href: /semantic-kernel/deploy/use-ai-apis-with-api-management?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json
 - name: Manage APIs with policies
   items:
   - name: API Management policies overview
```

articles/api-management/api-management-authenticate-authorize-azure-openai.md

Lines changed: 0 additions & 1 deletion
```diff
@@ -165,4 +165,3 @@ Following are high level steps to restrict API access to users or apps that are
 
 * Learn more about [Microsoft Entra ID and OAuth2.0](../active-directory/develop/authentication-vs-authorization.md).
 * [Authenticate requests to Azure AI services](/azure/ai-services/authentication)
-* [Protect Azure OpenAI keys with API Management](/semantic-kernel/deploy/use-ai-apis-with-api-management?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
```

articles/app-service/overview-hosting-plans.md

Lines changed: 9 additions & 31 deletions
```diff
@@ -23,7 +23,7 @@ Each App Service plan defines:
 - Region (West US, East US, and so on)
 - Number of virtual machine (VM) instances
 - Size of VM instances (small, medium, large)
-- Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, IsolatedV2)
+- Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, PremiumV4, IsolatedV2)
 
 ## Pricing tiers
 
```
```diff
@@ -32,7 +32,7 @@ The pricing tier of an App Service plan determines what App Service features you
 | Category | Tiers | Description |
 |:-|:-|:-|
 | Shared compute | Free, Shared | Free and Shared, the two base tiers, run an app on the same Azure VM as other App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that runs on the shared resources. The resources can't scale out. These tiers are intended for only development and testing purposes. |
-| Dedicated compute | Basic, Standard, Premium, PremiumV2, PremiumV3 | The Basic, Standard, Premium, PremiumV2, and PremiumV3 tiers run apps on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances that are available to you for scale-out. |
+| Dedicated compute | Basic, Standard, Premium, PremiumV2, PremiumV3, PremiumV4 | The Basic, Standard, Premium, PremiumV2, PremiumV3, and PremiumV4 tiers run apps on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances that are available to you for scale-out. |
 | Isolated | IsolatedV2 | The IsolatedV2 tier runs dedicated Azure VMs on dedicated Azure virtual networks. This tier provides network isolation on top of compute isolation to your apps. It provides the maximum scale-out capabilities. |
 
 Each tier also provides a specific subset of App Service features. These features include custom domains and TLS/SSL certificates, autoscaling, deployment slots, backups, Azure Traffic Manager integration, and more. The higher the tier, the more features that are available. To find out which features are supported in each pricing tier, see the [App Service plan details](https://azure.microsoft.com/pricing/details/app-service/windows/#pricing).
```
```diff
@@ -41,29 +41,7 @@ You can find more comparisons of plans in [App Service limits](../azure-resource
 
 <a name="new-pricing-tier-premiumv3"></a>
 
-### PremiumV3 pricing tier
-
-The PremiumV3 pricing tier provides machines with faster processors (minimum 195 [Azure Compute Units](/azure/virtual-machines/acu) per virtual CPU), SSD storage, memory-optimized options, and quadruple memory-to-core ratio compared to the Standard tier.
-
-PremiumV3 also supports higher scale by using increased instance count, while it still provides the advanced capabilities in the Standard tier. PremiumV3 includes all features available in the PremiumV2 tier.
-
-Multiple VM sizes are available for this tier, including 4-to-1 and 8-to-1 memory-to-core ratios:
-
-| App Service plan | Cores (vCPU) | Memory (GiB) |
-|:-|:-|:-|
-| P0v3 | 1 | 4 |
-| P1v3 | 2 | 8 |
-| P1mv3 | 2 | 16 |
-| P2v3 | 4 | 16 |
-| P2mv3 | 4 | 32 |
-| P3v3 | 8 | 32 |
-| P3mv3 | 8 | 64 |
-| P4mv3 | 16 | 128 |
-| P5mv3 | 32 | 256 |
-
-For PremiumV3 pricing information, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
-
-To get started with the PremiumV3 pricing tier, see [Configure PremiumV3 tier for Azure App Service](app-service-configure-premium-tier.md).
+For pricing information, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
 
 ## Considerations for running and scaling an app
 
```
```diff
@@ -88,7 +66,7 @@ This section describes how App Service apps are billed. For detailed, region-spe
 Except for the Free tier, an App Service plan carries a charge on the compute resources that it uses:
 
 - **Shared tier**: Each app receives a quota of CPU minutes, so *each app* is charged for the CPU quota.
-- **Dedicated compute tiers (Basic, Standard, Premium, PremiumV2, PremiumV3)**: The App Service plan defines the number of VM instances that the apps are scaled to, so *each VM instance* in the App Service plan is charged. These VM instances are charged the same, regardless of how many apps are running on them. To avoid unexpected charges, see [Delete an App Service plan](app-service-plan-manage.md#delete-an-app-service-plan).
+- **Dedicated compute tiers (Basic, Standard, Premium, PremiumV2, PremiumV3, PremiumV4)**: The App Service plan defines the number of VM instances that the apps are scaled to, so *each VM instance* in the App Service plan is charged. These VM instances are charged the same, regardless of how many apps are running on them. To avoid unexpected charges, see [Delete an App Service plan](app-service-plan-manage.md#delete-an-app-service-plan).
 - **IsolatedV2 tier**: The App Service Environment defines the number of isolated workers that run your apps, and *each worker* is charged.
 
 You aren't charged for using the App Service features that are available to you. These features include configuring custom domains, TLS/SSL certificates, deployment slots, and backups. The exceptions are:
```
```diff
@@ -129,12 +107,12 @@ Isolate your app in a new App Service plan when:
 | B1, S1, P1v2, I1v1 | 8 |
 | B2, S2, P2v2, I2v1 | 16 |
 | B3, S3, P3v2, I3v1 | 32 |
-| P0v3 | 8 |
-| P1v3, I1v2 | 16 |
-| P2v3, I2v2, P1mv3 | 32 |
-| P3v3, I3v2, P2mv3 | 64 |
+| P0v3, P0v4 | 8 |
+| P1v3, P1v4, I1v2 | 16 |
+| P2v3, P2v4, I2v2, P1mv3, P1mv4 | 32 |
+| P3v3, P3v4, I3v2, P2mv3 | 64 |
 | I4v2, I5v2, I6v2 | Maximum density bound by vCPU usage |
-| P3mv3, P4mv3, P5mv3 | Maximum density bound by vCPU usage |
+| P3mv3, P3mv4, P4mv3, P4mv4, P5mv3, P5mv4 | Maximum density bound by vCPU usage |
 - You want to scale the app independently from the other apps in the existing plan.
 - The app needs resources in a different geographical region. This way, you can allocate a new set of resources for your app and gain greater control of your apps.
 
```
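The plan settings this article enumerates — region, VM instance count, instance size, and pricing tier — are the same values you supply when creating a plan programmatically. The sketch below assumes the `@azure/arm-appservice` and `@azure/identity` packages; the subscription ID, resource group, plan name, and SKU values are placeholders, not part of the article.

```javascript
import { DefaultAzureCredential } from "@azure/identity";
import { WebSiteManagementClient } from "@azure/arm-appservice";

async function createPlan() {
  // Placeholder identifiers -- substitute your own values.
  const subscriptionId = "<subscription-id>";
  const resourceGroup = "<resource-group>";

  const client = new WebSiteManagementClient(new DefaultAzureCredential(), subscriptionId);

  // Region, pricing tier/size (sku), and VM instance count (capacity) map to the
  // plan settings described in the article.
  const plan = await client.appServicePlans.beginCreateOrUpdateAndWait(resourceGroup, "my-plan", {
    location: "westus",
    sku: { name: "P1v3", tier: "PremiumV3", capacity: 2 },
  });

  console.log(`Plan ${plan.name} provisioned with ${plan.sku?.capacity} instance(s).`);
}

createPlan().catch(console.error);
```

Because each VM instance in a dedicated-tier plan is billed, the `capacity` value above is also what drives the compute charge described in the billing section.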

articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -100,7 +100,7 @@ Although PHP doesn't support connection pooling, you can try using persistent da
 #### Python
 
 * [MySQL](https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html)
-* [MariaDB](https://mariadb.com/docs/ent/connect/programming-languages/python/connection-pools/)
+* [MariaDB](https://mariadb.com/docs/connectors/mariadb-connector-python/api/pool/)
 * [PostgreSQL](https://www.psycopg.org/docs/pool.html)
 * [Pyodbc](https://github.com/mkleehammer/pyodbc/wiki/The-pyodbc-Module#pooling)
 * [SQLAlchemy](https://docs.sqlalchemy.org/en/20/core/pooling.html)
```

articles/azure-app-configuration/TOC.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -210,6 +210,8 @@
   href: howto-targetingfilter.md
 - name: ASP.NET Core
   href: howto-targetingfilter-aspnet-core.md
+- name: JavaScript
+  href: howto-targetingfilter-javascript.md
 - name: Use variant feature flags
   items:
   - name: Overview
```

articles/azure-app-configuration/concept-ai-configuration.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -46,6 +46,8 @@ Chat completion is an AI capability that produces human-like dialogue responses
 | Anthropic | Claude 3.7 Sonnet |
 | Google | Gemini 2.5 Pro |
 | DeepSeek | DeepSeek-R1 |
+| xAI | Grok-3 |
+| xAI | Grok-3 Mini |
 
 Azure OpenAI Service supports a diverse set of models from OpenAI. For more information, see [Azure OpenAI Service models](/azure/ai-services/openai/concepts/models). To learn more about models from Anthropic, refer to the [Claude models documentation](https://docs.anthropic.com/docs/about-claude/models/overview).
 For more details about models provided by Google, see the [Gemini models documentation](https://ai.google.dev/gemini-api/docs/models).
```

articles/azure-app-configuration/feature-management-javascript-reference.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -132,7 +132,7 @@ const featureManager = new FeatureManager(featureProvider);
 
 #### Use Azure App Configuration to dynamically control the state of the feature flag
 
-Azure App Configuration is not only a solution to externalize storage and centralized management of your feature flags, but also it allows to dynamically turn on/off the feature flags.
+Azure App Configuration is not only a solution to externalize storage and centralized management of your feature flags, but also it allows you to dynamically turn on/off the feature flags.
 
 To enable the dynamic refresh for feature flags, you need to configure the `refresh` property of `featureFlagOptions` when loading feature flags from Azure App Configuration.
 
```
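As a minimal sketch of that setup — assuming the `load` API from `@azure/app-configuration-provider` and the `@microsoft/feature-management` package referenced in this file; the endpoint, key filter, refresh interval, and flag name are placeholders:

```javascript
import { DefaultAzureCredential } from "@azure/identity";
import { load } from "@azure/app-configuration-provider";
import { FeatureManager, ConfigurationMapFeatureFlagProvider } from "@microsoft/feature-management";

// Load feature flags from Azure App Configuration with dynamic refresh enabled.
const appConfig = await load("<your-app-configuration-endpoint>", new DefaultAzureCredential(), {
    featureFlagOptions: {
        enabled: true,
        selectors: [{ keyFilter: "*" }],
        refresh: {
            enabled: true,               // pick up feature flag changes without restarting the app
            refreshIntervalInMs: 30_000  // check for changes at most every 30 seconds
        }
    }
});

const featureProvider = new ConfigurationMapFeatureFlagProvider(appConfig);
const featureManager = new FeatureManager(featureProvider);

// Trigger a refresh check before evaluating so updated flag state is used.
await appConfig.refresh();
const betaEnabled = await featureManager.isEnabled("Beta");
console.log(`Beta feature enabled: ${betaEnabled}`);
```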

```diff
@@ -526,7 +526,7 @@ app.use((req, res, next) => {
 const targetingContextAccessor = {
     getTargetingContext: () => {
         // Get the current request from AsyncLocalStorage
-        const request = requestContext.getStore();
+        const request = requestAccesor.getStore();
         if (!request) {
             return undefined; // Return undefined if there's no current request
         }
```

articles/azure-app-configuration/feature-management-overview.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -41,7 +41,7 @@ Feature | .NET | Spring | Python | JavaScript
 ------- | ---- | ------ | ------ | ----------
 Targeting Filter | [GA](./feature-management-dotnet-reference.md#targeting) | GA | [GA](./feature-management-python-reference.md#targeting) | [GA](./feature-management-javascript-reference.md#targeting)
 Targeting Exclusion | [GA](./feature-management-dotnet-reference.md#targeting-exclusion) | GA | [GA](./feature-management-python-reference.md#targeting-exclusion) | [GA](./feature-management-javascript-reference.md#targeting-exclusion)
-Ambient Targeting | [GA](./feature-management-dotnet-reference.md#targeting-in-a-web-application) | WIP | WIP | [Preview](./feature-management-javascript-reference.md#targeting-in-a-web-application)
+Ambient Targeting | [GA](./feature-management-dotnet-reference.md#targeting-in-a-web-application) | WIP | WIP | [GA](./feature-management-javascript-reference.md#targeting-in-a-web-application)
 Time Window Filter | [GA](./feature-management-dotnet-reference.md#microsofttimewindow) | GA | [GA](./feature-management-python-reference.md#microsofttimewindow) | [GA](./feature-management-javascript-reference.md#microsofttimewindow)
 Recurring Time Window | [GA](./feature-management-dotnet-reference.md#microsofttimewindow) | GA | WIP | WIP
 Custom Feature Filter | [GA](./feature-management-dotnet-reference.md#implementing-a-feature-filter) | GA | [GA](./feature-management-python-reference.md#implementing-a-feature-filter) | [GA](./feature-management-javascript-reference.md#implementing-a-feature-filter)
```

articles/azure-app-configuration/howto-chat-completion-config.md

Lines changed: 3 additions & 6 deletions
```diff
@@ -26,12 +26,9 @@ In this section, you create a chat completion configuration in Azure portal usin
 1. In Azure portal, navigate to your App configuration store. From the **Operations** menu, select **Configuration explorer** > **Create**, and then select **AI configuration**.
 
 1. Specify the following values:
-    - **Key**: Type **ChatApp:Model**.
+    - **Key**: Type **ChatApp:ChatCompletion**.
     - **Label**: Leave this value blank.
     - **Model**: Select **gpt-4o**.
-    - **Message**: Add a new message.
-      - **Role**: Select **user**
-      - **Content**: Type "What is the capital of France?"
 
     > [!div class="mx-imgBorder"]
     > ![Screenshot shows the create new AI configuration form.](./media/create-ai-chat-completion-config.png)
```
```diff
@@ -40,13 +37,13 @@ In this section, you create a chat completion configuration in Azure portal usin
 
 ## Add model connection configuration
 
-You successfully added your chat completion configuration named **ChatApp:Model** in the previous section. In this section, you add the connection details for your model, including the endpoint and deployment name. If required by your authentication method, you can also specify an API key using a Key Vault reference.
+You successfully added your chat completion configuration named **ChatApp:ChatCompletion** in the previous section. In this section, you add the connection details for your model, including the endpoint and deployment name. If required by your authentication method, you can also specify an API key using a Key Vault reference.
 
 > [!NOTE]
 > This tutorial demonstrates how to use chat completion configuration with an Azure OpenAI model. However, the chat completion configuration demonstrated in the tutorial can be applied to any AI model you choose to work with in your application.
 >
 
-1. Follow the [Get started with Azure OpenAI Service](/azure/ai-services/openai/overview#get-started-with-azure-openai-service) to create and deploy an Azure OpenAI service resource with a **gpt-4o** model. Note down the deployment name for later use.
+1. Follow the [Get started with Azure OpenAI Service](/azure/ai-foundry/openai/how-to/create-resource) to create and deploy an Azure OpenAI service resource with a **gpt-4o** model. Note down the deployment name for later use.
 
 1. In your Azure OpenAI resource, from the **Resource Management** menu, select **Keys and Endpoint** and copy the Azure OpenAI resource endpoint. It should follow the format: `https://<open-ai-resource-name>.openai.azure.com`. If using the API key for authentication, copy the API key as well.
 
```
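With the endpoint, deployment name, and (if used) API key in hand, a quick connectivity check might look like the following sketch; it assumes the `openai` npm package's Azure client, and the API version and placeholder values are illustrative rather than prescribed by the tutorial:

```javascript
import { AzureOpenAI } from "openai";

// Placeholder connection details copied from the steps above.
const client = new AzureOpenAI({
    endpoint: "https://<open-ai-resource-name>.openai.azure.com",
    apiKey: "<your-api-key>",   // or configure Microsoft Entra ID authentication instead
    apiVersion: "2024-10-21"    // assumed API version; use one supported by your resource
});

// Send a simple chat completion request against the gpt-4o deployment noted earlier.
const completion = await client.chat.completions.create({
    model: "gpt-4o", // for Azure, this is the deployment name
    messages: [{ role: "user", content: "What is the capital of France?" }]
});

console.log(completion.choices[0].message.content);
```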

0 commit comments
