Commit 69fede6

Merge branch 'main' into release-ga-lbcd
2 parents: 8b88fb4 + f3db198

39 files changed: +201 -95 lines

articles/api-management/azure-ai-foundry-api.md

Lines changed: 5 additions & 5 deletions

@@ -27,16 +27,16 @@ API Management supports two client compatibility options for AI APIs. Choose the
 * **Azure AI** - Manage model endpoints in Azure AI Foundry that are exposed through the [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api).
 
-    Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. Deployment name is passed in the request body. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI Service.
+    Clients call the deployment at a `/models` endpoint such as `/my-model/models/chat/completions`. Deployment name is passed in the request body. Use this option if you want flexibility to switch between models exposed through the Azure AI Model Inference API and those deployed in Azure OpenAI in Foundry Models.
 
-* **Azure OpenAI Service** - Manage model endpoints deployed in Azure OpenAI Service.
+* **Azure OpenAI** - Manage model endpoints deployed in Azure OpenAI.
 
-    Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. Deployment name is passed in the request path. Use this option if your AI service only includes Azure OpenAI Service model deployments.
+    Clients call the deployment at an `/openai` endpoint such as `/openai/deployments/my-deployment/chat/completions`. Deployment name is passed in the request path. Use this option if your AI service only includes Azure OpenAI model deployments.
 
 ## Prerequisites
 
 - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
-- An Azure AI service in your subscription with one or more models deployed. Examples include models deployed in Azure AI Foundry or Azure OpenAI Service.
+- An Azure AI service in your subscription with one or more models deployed. Examples include Azure OpenAI or other models deployed in Azure AI Foundry.
 
 ## Import AI Foundry API using the portal

@@ -67,7 +67,7 @@ To import an AI Foundry API to API Management:
 1. In **Base path**, enter a path that your API Management instance uses to access the deployment endpoint.
 1. Optionally select one or more **Products** to associate with the API.
 1. In **Client compatibility**, select either of the following based on the types of client you intend to support. See [Client compatibility options](#client-compatibility-options) for more information.
-    * **Azure OpenAI** - Select this option if your clients only need to access Azure OpenAI Service model deployments.
+    * **Azure OpenAI** - Select this option if your clients only need to access Azure OpenAI model deployments.
     * **Azure AI** - Select this option if your clients need to access other models in Azure AI Foundry.
 1. Select **Next**.
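The two endpoint shapes this change documents can be illustrated with a small sketch. The gateway URL, base path, and deployment name below are hypothetical placeholders, not values from the source:

```python
def build_chat_request(gateway, base_path, deployment, messages, compatibility):
    """Return (url, body) for a chat completion call through API Management.

    "azure-ai": deployment name travels in the request body; the call goes
    to the /models endpoint.
    "azure-openai": deployment name travels in the request path under
    /openai/deployments/.
    """
    if compatibility == "azure-ai":
        url = f"{gateway}/{base_path}/models/chat/completions"
        body = {"model": deployment, "messages": messages}
    elif compatibility == "azure-openai":
        url = f"{gateway}/{base_path}/openai/deployments/{deployment}/chat/completions"
        body = {"messages": messages}
    else:
        raise ValueError(f"unknown compatibility mode: {compatibility}")
    return url, body


msgs = [{"role": "user", "content": "Hello"}]
url_ai, body_ai = build_chat_request(
    "https://contoso.azure-api.net", "my-model", "gpt-4o", msgs, "azure-ai")
url_aoai, body_aoai = build_chat_request(
    "https://contoso.azure-api.net", "my-model", "gpt-4o", msgs, "azure-openai")
```

Note how only the Azure AI mode carries the deployment name in the body; the Azure OpenAI mode encodes it in the path.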

articles/api-management/azure-openai-enable-semantic-caching.md

Lines changed: 5 additions & 5 deletions

@@ -22,8 +22,8 @@ Enable semantic caching of responses to Azure OpenAI API requests to reduce band
 ## Prerequisites
 
-* One or more Azure OpenAI Service APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI Service API to Azure API Management](azure-openai-api-from-specification.md).
-* The Azure OpenAI service must have deployments for the following:
+* One or more Azure OpenAI in Foundry Models APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI API to Azure API Management](azure-openai-api-from-specification.md).
+* The Azure OpenAI instance must have deployments for the following:
     * Chat Completion API - Deployment used for API consumer calls
     * Embeddings API - Deployment used for semantic caching
 * The API Management instance must be configured to use managed identity authentication to the Azure OpenAI APIs. For more information, see [Authenticate and authorize access to Azure OpenAI APIs using Azure API Management](api-management-authenticate-authorize-azure-openai.md#authenticate-with-managed-identity).

@@ -57,17 +57,17 @@ Configure a [backend](backends.md) resource for the embeddings API deployment wi
 * **Name** - A name of your choice, such as `embeddings-backend`. You use this name to reference the backend in policies.
 * **Type** - Select **Custom URL**.
-* **Runtime URL** - The URL of the embeddings API deployment in the Azure OpenAI Service, similar to:
+* **Runtime URL** - The URL of the embeddings API deployment in the Azure OpenAI instance, similar to:
   ```
  https://my-aoai.openai.azure.com/openai/deployments/embeddings-deployment/embeddings
  ```
 * **Authorization credentials** - Go to the **Managed Identity** tab.
     * **Client identity** - Select *System assigned identity* or type in a user-assigned managed identity client ID.
-    * **Resource ID** - Enter `https://cognitiveservices.azure.com/` for Azure OpenAI Service.
+    * **Resource ID** - Enter `https://cognitiveservices.azure.com/` for Azure OpenAI.
 
 ### Test backend
 
-To test the backend, create an API operation for your Azure OpenAI Service API:
+To test the backend, create an API operation for your Azure OpenAI API:
 
 1. On the **Design** tab of your API, select **+ Add operation**.
 1. Enter a **Display name** and optionally a **Name** for the operation.
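Semantic caching works by comparing the embedding of an incoming prompt against embeddings of previously cached prompts. Conceptually, the lookup is a nearest-neighbor search with a similarity threshold; the following is a sketch of the idea, not the API Management policy implementation:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cache_lookup(query_vec, cache, threshold=0.9):
    """Return the cached response whose prompt embedding is most similar
    to the query embedding, if the similarity clears the threshold."""
    best, best_score = None, 0.0
    for vec, response in cache:
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best, best_score = response, score
    return best if best_score >= threshold else None

cache = [([1.0, 0.0], "cached answer A"), ([0.0, 1.0], "cached answer B")]
hit = cache_lookup([0.98, 0.05], cache)   # very close to A's embedding
miss = cache_lookup([0.7, 0.7], cache)    # equidistant from both, below threshold
```

This is why the embeddings deployment is a prerequisite: every lookup needs the query prompt embedded with the same model that embedded the cached prompts.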

articles/azure-functions/durable/durable-functions-http-features.md

Lines changed: 56 additions & 2 deletions

@@ -350,13 +350,67 @@ By using the "call HTTP" action, you can do the following actions in your orches
 The ability to consume HTTP APIs directly from orchestrator functions is intended as a convenience for a certain set of common scenarios. You can implement all of these features yourself using activity functions. In many cases, activity functions might give you more flexibility.
 
-### <a name="http-202-handling"></a>HTTP 202 handling (.NET in-process only)
+### <a name="http-202-handling"></a>HTTP 202 handling (.NET only)
 
 The "call HTTP" API can automatically implement the client side of the polling consumer pattern. If a called API returns an HTTP 202 response with a Location header, the orchestrator function automatically polls the Location resource until receiving a response other than 202. This response will be the response returned to the orchestrator function code.
 
+# [C# (InProc)](#tab/csharp-inproc)
+
+```csharp
+[FunctionName(nameof(CheckSiteAvailabilityWithPolling))]
+public static async Task CheckSiteAvailabilityWithPolling(
+    [OrchestrationTrigger] IDurableOrchestrationContext context)
+{
+    Uri url = context.GetInput<Uri>();
+
+    // HTTP automatic polling on a 202 response is enabled by default in .NET in-process.
+    DurableHttpResponse response =
+        await context.CallHttpAsync(HttpMethod.Get, url);
+}
+```
+
+# [C# (Isolated)](#tab/csharp-isolated)
+
+```csharp
+[Function(nameof(CheckSiteAvailabilityWithPolling))]
+public static async Task CheckSiteAvailabilityWithPolling(
+    [OrchestrationTrigger] TaskOrchestrationContext context)
+{
+    Uri url = context.GetInput<Uri>();
+
+    // Enable HTTP automatic polling on a 202 response by setting asynchronousPatternEnabled to true.
+    DurableHttpResponse response = await context.CallHttpAsync(
+        HttpMethod.Get,
+        url!,
+        content: null,
+        retryOptions: null,
+        asynchronousPatternEnabled: true);
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+This feature is currently not supported in JavaScript.
+
+# [Python](#tab/python)
+
+This feature is currently not supported in Python.
+
+# [PowerShell](#tab/powershell)
+
+This feature is currently not supported in PowerShell.
+
+# [Java](#tab/java)
+
+This feature is currently not supported in Java.
+
+---
+
 > [!NOTE]
 > 1. Orchestrator functions also natively support the server-side polling consumer pattern, as described in [Async operation tracking](#async-operation-tracking). This support means that orchestrations in one function app can easily coordinate the orchestrator functions in other function apps. This is similar to the [sub-orchestration](durable-functions-sub-orchestrations.md) concept, but with support for cross-app communication. This support is particularly useful for microservice-style app development.
-> 2. The built-in HTTP polling pattern is currently available only in the .NET in-process host.
+> 2. The built-in HTTP polling pattern is currently available only in the .NET host.
+> 3. The polling pattern is enabled by default in .NET in-process but disabled by default in .NET Isolated. To enable it in .NET Isolated, set the `asynchronousPatternEnabled` argument to `true`, as shown in the sample code.
+> 4. The HTTP automatic polling pattern is supported in Durable Functions .NET Isolated starting from version [v1.5.0](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask) of the extension.
 
 ### Managed identities

articles/azure-functions/functions-how-to-azure-devops.md

Lines changed: 1 addition & 1 deletion

@@ -445,7 +445,7 @@ You'll deploy with the [Azure Function App Deploy v2](/azure/devops/pipelines/ta
 
 The v2 version of the task includes support for newer applications stacks for .NET, Python, and Node. The task includes networking predeployment checks. When there are predeployment issues, deployment stops.
 
-To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`. Deploying to a Flex Consumption app requires you to set both `appType: functionAppLinux` and `isFlexConsumption: true`.
+To deploy to Azure Functions, add the following snippet at the end of your `azure-pipelines.yml` file. The default `appType` is Windows. You can specify Linux by setting the `appType` to `functionAppLinux`. Deploying to a Flex Consumption app requires you to set both `appType: functionAppLinux` and `isFlexConsumption: true`. The `appType` must be `functionAppLinux` when you use Flex Consumption because [Flex Consumption](/azure/azure-functions/flex-consumption-plan) is a Linux-based hosting plan.
 
 ### [Windows App](#tab/windows)
 ```yaml
```yaml

articles/azure-resource-manager/bicep/bicep-core-diagnostics.md

Lines changed: 1 addition & 1 deletion

@@ -399,7 +399,7 @@ If you need more information about a particular diagnostic code, select the **Fe
 | <a id='BCP417' />BCP417 | Error | The spread operator `{ellipsis}` cannot be used inside objects with property for-expressions. |
 | <a id='BCP418' />BCP418 | Error | Extensions cannot be referenced here. Extensions can only be referenced by module extension configurations. |
 | <a id='BCP419' />BCP419 | Error | Namespace name `{name}` cannot be used as an extension name. |
-| <a id='BCP420' />BCP420 | Error | The scope could not be resolved at compile time because the supplied expression is ambiguous or too complex. Scoping expressions must be reducible to a specific kind of scope without knowledge of parameter values. |
+| <a id='BCP420' />[BCP420](./diagnostics/bcp420.md) | Error | The scope could not be resolved at compile time because the supplied expression is ambiguous or too complex. Scoping expressions must be reducible to a specific kind of scope without knowledge of parameter values. |
 | <a id='BCP421' />BCP421 | Error | Module `{moduleName}` contains one or more secure outputs, which are not supported with `{LanguageConstants.TargetScopeKeyword}` set to `{LanguageConstants.TargetScopeTypeLocal}`. |
 | <a id='BCP422' />BCP422 | Error | A resource of type `{baseType}` may or may not exist when this function is called, which could cause the deployment to fail. |

articles/azure-resource-manager/bicep/diagnostics/bcp420.md (new file)

Lines changed: 42 additions & 0 deletions

@@ -0,0 +1,42 @@
+---
+title: BCP420
+description: The scope couldn't be resolved at compile time because the supplied expression is ambiguous or too complex. Scoping expressions must be reducible to a specific kind of scope without knowledge of parameter values.
+ms.topic: reference
+ms.custom: devx-track-bicep
+ms.date: 06/25/2025
+---
+
+# Bicep diagnostic code - BCP420
+
+In Bicep, every resource or module must have a known deployment scope at compile time. The scope must be statically determinable. If the scope depends on a parameter, a variable, or an expression that can't be evaluated during compilation, Bicep throws BCP420.
+
+## Description
+
+The scope couldn't be resolved at compile time because the supplied expression is ambiguous or too complex. Scoping expressions must be reducible to a specific kind of scope without knowledge of parameter values.
+
+## Level
+
+Error
+
+## Examples
+
+The following code triggers BCP420 because the scope can't be determined at compile time.
+
+```bicep
+param targetResourceGroupName string = 'my-target-rg'
+param storageAccountName string = 'mystorageacct'
+param location string = 'eastus'
+
+module storageModule './module.bicep' = {
+  name: 'deployStorage'
+  scope: empty(targetResourceGroupName) ? resourceGroup() : resourceGroup(targetResourceGroupName)
+  params: {
+    storageAccountName: storageAccountName
+    location: location
+  }
+}
+```
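One way to resolve BCP420 in the example above is to make the scoping expression reduce to a single kind of scope. The following sketch, assuming the same parameters, passes the resource group name directly instead of branching on it:

```bicep
param targetResourceGroupName string = 'my-target-rg'
param storageAccountName string = 'mystorageacct'
param location string = 'eastus'

module storageModule './module.bicep' = {
  name: 'deployStorage'
  // resourceGroup(name) is one statically known kind of scope, even though
  // the name itself comes from a parameter; the ternary above mixed two kinds.
  scope: resourceGroup(targetResourceGroupName)
  params: {
    storageAccountName: storageAccountName
    location: location
  }
}
```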
+
+## Next steps
+
+For more information about Bicep diagnostics, see [Bicep core diagnostics](../bicep-core-diagnostics.md).

articles/azure-resource-manager/bicep/toc.yml

Lines changed: 2 additions & 0 deletions

@@ -740,5 +740,7 @@ items:
       href: diagnostics/bcp401.md
     - name: BCP414
       href: diagnostics/bcp414.md
+    - name: BCP420
+      href: diagnostics/bcp420.md
     - name: Azure CLI
       href: /cli/azure/resource

articles/communication-services/tutorials/includes/log-file-retrieval-android.md

Lines changed: 2 additions & 2 deletions

@@ -2,9 +2,9 @@
 // Call when a support request is raised
 private void onSupportRequest(String userMessage) {
     // Assuming the getSupportFiles method returns a List or similar collection.
-    List<SupportFile> supportFiles = callClient.getDebugInfo().getSupportFiles();
+    List<File> supportFiles = callClient.getDebugInfo().getSupportFiles();
 
     // Send the files and any user message to your ticket system
     dispatchSupportRequestToBackend(userMessage, supportFiles);
 }
 ```

articles/communication-services/tutorials/includes/log-file-retrieval-windows.md

Lines changed: 2 additions & 2 deletions

@@ -3,9 +3,9 @@
 private void OnSupportRequest(string userMessage)
 {
     // Assuming the SupportFiles property returns a list or similar collection.
-    IReadOnlyList<SupportFile> supportFiles = callClient.DebugDetails.SupportFiles;
+    IReadOnlyList<string> supportFiles = callClient.DebugDetails.SupportFiles;
 
     // Send the files and any user message to your ticket system
     DispatchSupportRequestToBackend(userMessage, supportFiles);
 }
 ```

articles/confidential-computing/guest-attestation-confidential-virtual-machines-design.md

Lines changed: 3 additions & 3 deletions

@@ -95,7 +95,6 @@ Refer to [Azure Confidential VMs attestation guidance & FAQ](https://github.com/
 | Report Payload | 32 | 1184 | The hardware report. |
 | Runtime Data | 1216 | variable length | The runtime data includes claims endorsed by the hardware report. |
 
-
 #### Header
 
 | Name | Offset (bytes) | Size (bytes) | Description |

@@ -115,11 +114,12 @@ The report generated by the hardware (AMD SEV-SNP or Intel TDX). The report_data
 | Name | Offset (bytes) | Size (bytes) | Description | Measured |
 | :--- | :--- | :--- | :--- | :--- |
-| Data Size | 0 | 4 | The size of Runtime Claims. | No |
+| Data Size | 0 | 4 | The size of Runtime Data. | No |
 | Version | 4 | 4 | Format version. Expected: 1. | No |
 | Report Type | 8 | 4 | The type of hardware report. Expected: 2 (AMD SEV-SNP), 4 (Intel TDX) | No |
 | Hash Type | 12 | 4 | The algorithm used to hash the runtime data. The hash value is captured in the report_data field of the hardware report. Expected: 1 (SHA-256), 2 (SHA-384), 3 (SHA-512) | No |
-| Runtime Claims | 16 | variable length | The runtime claims in JSON format. | Yes |
+| Claim Size | 16 | 4 | The size of Runtime Claims. | No |
+| Runtime Claims | 20 | variable length | The runtime claims in JSON format. | Yes |
 
 #### Runtime Claims
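The corrected runtime-data layout amounts to five fixed-offset header fields followed by the JSON claims. A minimal parsing sketch, assuming little-endian field encoding (the table doesn't state byte order):

```python
import json
import struct

def parse_runtime_data(buf: bytes) -> dict:
    # Five uint32 header fields at offsets 0, 4, 8, 12, 16 per the updated
    # table, followed by the JSON runtime claims starting at offset 20.
    data_size, version, report_type, hash_type, claim_size = struct.unpack_from("<5I", buf, 0)
    claims = json.loads(buf[20:20 + claim_size])
    return {
        "data_size": data_size,
        "version": version,
        "report_type": report_type,  # 2 = AMD SEV-SNP, 4 = Intel TDX
        "hash_type": hash_type,      # 1 = SHA-256, 2 = SHA-384, 3 = SHA-512
        "claims": claims,
    }

# Build a synthetic blob matching the layout for a quick round-trip check.
claims_json = json.dumps({"nonce": "abc"}).encode()
blob = struct.pack("<5I", 20 + len(claims_json), 1, 2, 2, len(claims_json)) + claims_json
parsed = parse_runtime_data(blob)
```

The separate Claim Size field is what lets a parser locate the end of the JSON claims without scanning, which the pre-change layout (claims at offset 16, no size) couldn't express.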
