articles/ai-foundry/how-to/develop/langchain.md (+6 −6)
@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: how-to
-ms.date: 06/24/2025
+ms.date: 06/26/2025
 ms.reviewer: fasantia
 ms.author: sgilley
 author: sdgilley
@@ -31,7 +31,7 @@ To run this tutorial, you need:

 * An [Azure subscription](https://azure.microsoft.com).

-* A model deployment supporting the [Model Inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-medium-2505` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
+* A model deployment supporting the [Model Inference API](https://aka.ms/azureai/modelinference) deployed. In this example, we use a `Mistral-Large-2411` deployment in the [Foundry Models](../../../ai-foundry/model-inference/overview.md).
 * Python 3.9 or later installed, including pip.
 * LangChain installed. You can do it with:
@@ -76,7 +76,7 @@ Once configured, create a client to connect with the chat model by using the `init_chat_model`
 Models deployed to Azure AI Foundry support the Model Inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.

-In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Medium` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
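A minimal sketch of the producer/verifier pattern that changed paragraph describes, assuming the `langchain-azure-ai` package and illustrative environment-variable names for the endpoint and key (neither is mandated by the library):

```python
import os

# Sketch only: assumes `pip install langchain-azure-ai`; the environment
# variable names below are illustrative.
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

endpoint = os.environ["AZURE_INFERENCE_ENDPOINT"]
credential = os.environ["AZURE_INFERENCE_CREDENTIAL"]

# Producer: generating content is the harder task, so it gets the larger model.
producer = AzureAIChatCompletionsModel(
    endpoint=endpoint, credential=credential, model="Mistral-Large-2411"
)

# Verifier: checking content is cheaper, so a smaller model is enough.
verifier = AzureAIChatCompletionsModel(
    endpoint=endpoint, credential=credential, model="Mistral-small"
)

draft = producer.invoke("Summarize the Model Inference API in one paragraph.")
review = verifier.invoke(
    f"Point out any factual or grammatical problems in this text:\n\n{draft.content}"
)
print(review.content)
```

Both clients share one multi-model endpoint; only the `model` parameter differs, which is what makes the right-size-per-task routing cheap to express.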
articles/ai-services/luis/includes/deprecation-notice.md (+2 −2)
@@ -5,8 +5,8 @@ ms.author: lajanuar
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: include
-ms.date: 06/12/2025
+ms.date: 06/26/2025
 ---

 > [!IMPORTANT]
-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
+> Language Understanding Intelligent Service (LUIS) will be fully retired on March 31, 2026. LUIS resource creation isn't available. Beginning on October 31, 2025, the LUIS portal will no longer be available. We recommend [migrating your LUIS applications](../../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
 Data zone deployments are available in the same Azure OpenAI resource as all other Azure OpenAI deployment types but allow you to leverage Azure Government infrastructure to dynamically route traffic to the data center within the USGov data zone with the best availability for each request.

 * USGov DataZone provides access to the model from both usgovarizona and usgovvirginia.
 * Data stored at rest remains in the designated Azure region of the resource.
-* Data may be processed for inferencing in either of the two Azure Government regions.
+* Data may be processed for inferencing in either of the two Azure Government regions.

-Data zone standard deployments are available in the same Azure OpenAI resource as all other Azure OpenAI deployment types but allow you to leverage Azure Government infrastructure to dynamically route traffic to the data center within the USGov data zone with the best availability for each request.

 To request quota increases for these models, submit a request at [https://aka.ms/AOAIGovQuota](https://aka.ms/AOAIGovQuota). Note the following maximum quota limits allowed via that form:

@@ -45,11 +49,12 @@ To request quota increases for these models, submit a request at [https://aka.ms/AOAIGovQuota]
 Realtime events are used to communicate between the client and server in real-time audio applications. The events are sent as JSON objects over various endpoints, such as WebSockets or WebRTC. The events are used to manage the conversation, audio buffers, and responses in real-time.

-The Realtime API is a WebSocket-based API that allows you to interact with the Azure OpenAI in real-time.
+You can use audio client and server events with these APIs:
+- [Azure AI Voice Live API](/azure/ai-services/speech-service/voice-live)

-The Realtime API (via `/realtime`) is built on [the WebSockets API](https://developer.mozilla.org/docs/Web/API/WebSockets_API) to facilitate fully asynchronous streaming communication between the end user and model. Device details like capturing and rendering audio data are outside the scope of the Realtime API. It should be used in the context of a trusted, intermediate service that manages both connections to end users and model endpoint connections. Don't use it directly from untrusted end user devices.
-
-> [!TIP]
-> To get started with the Realtime API, see the [quickstart](realtime-audio-quickstart.md) and [how-to guide](./how-to/realtime-audio.md).
+Unless otherwise specified, the events described in this document are applicable to both APIs.
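A minimal sketch of that JSON-over-WebSocket event exchange, assuming the `websockets` package; the URL shape, query parameters, deployment name, and auth header here are all illustrative and vary by service and API version:

```python
import asyncio
import json
import os

import websockets  # pip install websockets

# Illustrative endpoint: the real URL, api-version, deployment name, and auth
# header depend on the service and API version you target.
URL = (
    f"wss://{os.environ['AZURE_OPENAI_RESOURCE']}.openai.azure.com/openai/realtime"
    "?api-version=2024-10-01-preview&deployment=gpt-4o-realtime-preview"
)
HEADERS = {"api-key": os.environ["AZURE_OPENAI_API_KEY"]}

async def main() -> None:
    # Note: older releases of `websockets` call this parameter `extra_headers`.
    async with websockets.connect(URL, additional_headers=HEADERS) as ws:
        # Client event: configure the session before streaming any audio.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"instructions": "Answer briefly."},
        }))
        # Server events arrive as JSON text frames; print the first few types.
        for _ in range(3):
            event = json.loads(await ws.recv())
            print(event.get("type"))

asyncio.run(main())
```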
articles/ai-services/qnamaker/includes/new-version.md (+2 −2)
@@ -5,8 +5,8 @@ ms.topic: include
 ms.custom: include file
 ms.service: azure-ai-language
 ms.subservice: azure-ai-qna-maker
-ms.date: 06/12/2025
+ms.date: 06/26/2025
 ---

 > [!NOTE]
-> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question and answering capability is now available as part of [Azure AI Language](../../language-service/index.yml). For question answering capabilities within the Language Service, see [question answering](../../language-service/question-answering/overview.md). Starting 1st October, 2022 you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../../language-service/question-answering/how-to/migrate-qnamaker.md).
+> The QnA Maker service is being retired on October 31, 2025 (extended from March 31, 2025). A newer version of the question and answering capability is now available as part of [Azure AI Language](../../language-service/index.yml). For question answering capabilities within the Language Service, see [question answering](../../language-service/question-answering/overview.md). As of October 1, 2022, you're no longer able to create new QnA Maker resources. Beginning on March 31, 2025, the QnA Maker portal is no longer available. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../../language-service/question-answering/how-to/migrate-qnamaker.md).
articles/ai-services/speech-service/includes/quickstarts/voice-live-api/realtime-python.md (+4 −3)
@@ -4,7 +4,7 @@ author: eric-urban
 ms.author: eur
 ms.service: azure-ai-openai
 ms.topic: include
-ms.date: 5/19/2025
+ms.date: 6/27/2025
 ---

 ## Prerequisites
@@ -151,6 +151,7 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:
     session_update = {
         "type": "session.update",
         "session": {
+            "instructions": "You are a helpful AI assistant responding in natural, engaging language.",
             "turn_detection": {
                 "type": "azure_semantic_vad",
                 "threshold": 0.3,
@@ -170,7 +171,7 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:
                 "type": "server_echo_cancellation"
             },
             "voice": {
-                "name": "en-US-Aria:DragonHDLatestNeural",
+                "name": "en-US-Ava:DragonHDLatestNeural",
                 "type": "azure-standard",
                 "temperature": 0.8,
             },
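For context, a sketch of the merged payload after these two hunks and how it might be applied; `ws` stands in for whatever WebSocket connection object the quickstart creates, and fields outside the diff are omitted:

```python
import json

# Merged view of the settings changed above (fields outside the diff omitted).
session_update = {
    "type": "session.update",
    "session": {
        "instructions": "You are a helpful AI assistant responding in natural, engaging language.",
        "turn_detection": {"type": "azure_semantic_vad", "threshold": 0.3},
        "voice": {
            "name": "en-US-Ava:DragonHDLatestNeural",
            "type": "azure-standard",
            "temperature": 0.8,
        },
    },
}

def apply_session_update(ws) -> None:
    # `ws` is a stand-in for the quickstart's connection object; realtime-style
    # APIs accept client events as serialized JSON text frames.
    ws.send(json.dumps(session_update))
```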
@@ -417,7 +418,7 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:
 The output of the script is printed to the console. You see messages indicating the status of the connection, audio stream, and playback. The audio is played back through your speakers or headphones.