Commit 42a07e4
Merge pull request #3197 from MicrosoftDocs/main
2/26/2025 PM Publish
2 parents ba19a83 + e8a3b22

50 files changed (+442, -373 lines)

articles/ai-foundry/model-inference/includes/create-resources/bicep.md

Lines changed: 2 additions & 16 deletions
@@ -30,19 +30,9 @@ The files for this example are in:
 cd azureai-model-inference-bicep/infra
 ```
 
-## Understand the resources
-
-The tutorial helps you create:
-
-> [!div class="checklist"]
-> * An Azure AI Services resource.
-> * A model deployment in the Global standard SKU for each of the models supporting pay-as-you-go.
-> * (Optionally) An Azure AI project and hub.
-> * (Optionally) A connection between the hub and the models in Azure AI Services.
-
-Notice that **you have to deploy an Azure AI project and hub** if you plan to use the Azure AI Foundry portal for managing the resource, using playground, or any other feature from the portal.
+## Create the resources
 
-You are using the following assets to create those resources:
+Follow these steps:
 
 1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Services resource:
 

@@ -72,10 +62,6 @@
 
 :::code language="bicep" source="~/azureai-model-inference-bicep/infra/modules/ai-services-connection-template.bicep":::
 
-## Create the resources
-
-In your console, follow these steps:
-
 1. Define the main deployment:
 
 __deploy-with-project.bicep__
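For orientation, the template referenced in step 1 of the diff above describes a single Azure AI Services account. The following is a hedged sketch of what such a module might contain — the parameter names and defaults here are illustrative assumptions, not the repository's actual `modules/ai-services-template.bicep`:

```bicep
@description('Name of the Azure AI Services account (illustrative parameter).')
param accountName string
param location string = resourceGroup().location

// AIServices-kind account exposing the model inference endpoint.
resource account 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: accountName
  location: location
  kind: 'AIServices'
  sku: {
    name: 'S0'
  }
  properties: {
    customSubDomainName: accountName
  }
}

output endpoint string = account.properties.endpoint
```

The `customSubDomainName` property gives the account a stable, named endpoint, which connections from a hub typically require.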

articles/ai-foundry/model-inference/includes/create-resources/intro.md

Lines changed: 16 additions & 0 deletions
@@ -11,6 +11,22 @@ ms.topic: include
 
 In this article, you learn how to create the resources required to use Azure AI model inference and consume flagship models from Azure AI model catalog.
 
+## Understand the resources
+
+Azure AI model inference is a capability of Azure AI Services resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI hubs and projects in Azure AI Foundry to build intelligent applications if needed. The following picture shows the high-level architecture.
+
+:::image type="content" source="../../media/create-resources/resources-architecture.png" alt-text="A diagram showing the high level architecture of the resources created in the tutorial." lightbox="../../media/create-resources/resources-architecture.png":::
+
+Azure AI Services resources don't require AI projects or AI hubs to operate, and you can create them to consume flagship models from your applications. However, additional capabilities are available if you **deploy an Azure AI project and hub**, including the playground and agents.
+
+The tutorial helps you create:
+
+> [!div class="checklist"]
+> * An Azure AI Services resource.
+> * A model deployment for each of the models supported with pay-as-you-go.
+> * (Optionally) An Azure AI project and hub.
+> * (Optionally) A connection between the hub and the models in Azure AI Services.
+
 ## Prerequisites
 
 To complete this article, you need:

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 8 additions & 1 deletion
@@ -460,6 +460,13 @@
       "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/custom-speech-ai-foundry-portal.md",
+      "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-create-project",
+      "redirect_document_id": false
+    },
+
+
     {
       "source_path_from_root": "/articles/ai-services/anomaly-detector/how-to/postman.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",

@@ -777,7 +784,7 @@
     },
     {
       "source_path_from_root": "/articles/ai-services/speech-service/custom-speech-ai-studio.md",
-      "redirect_url": "/azure/ai-services/speech-service/custom-speech-ai-foundry-portal",
+      "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-create-project",
       "redirect_document_id": true
     },
     {

articles/ai-services/agents/includes/quickstart-foundry.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 manager: nitinme
 author: aahill
 ms.author: aahi
-ms.service: azure
+ms.service: azure-ai-agent-service
 ms.topic: include
 ms.date: 01/21/2025
 ---

articles/ai-services/openai/includes/assistants-csharp.md

Lines changed: 41 additions & 12 deletions
@@ -71,24 +71,35 @@ Passwordless authentication is more secure than key-based alternatives and is th
 
 ### Create the assistant
 
+> [!NOTE]
+> For this sample, the following libraries were used:
+> - Azure.AI.OpenAI (2.1.0-beta2)
+> - Azure.AI.OpenAI.Assistants (1.0.0-beta4)
+
 Update the `Program.cs` file with the following code to create an assistant:
 
 ```csharp
 using Azure;
-using Azure.AI.OpenAI.Assistants;
+using Azure.AI.OpenAI;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+using OpenAI.Assistants;
+using OpenAI.Files;
+using System.ClientModel;
 
 // Assistants is a beta API and subject to change
 // Acknowledge its experimental status by suppressing the matching warning.
 string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
 string key = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
+string deploymentName = "<Replace with Deployment Name>";
 
 var openAIClient = new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));
 
 // Use for passwordless auth
 //var openAIClient = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential());
 
-FileClient fileClient = openAIClient.GetFileClient();
-AssistantClient assistantClient = openAIClient.GetAssistantClient();
+OpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();
+AssistantClient assistantClient = openAIClient.GetAssistantClient();
 
 // First, let's contrive a document we'll use retrieval with and upload it.
 using Stream document = BinaryData.FromString("""

@@ -120,13 +131,13 @@ using Stream document = BinaryData.FromString("""
     }
     """).ToStream();
 
-OpenAIFileInfo salesFile = await fileClient.UploadFileAsync(
+OpenAI.Files.OpenAIFile salesFile = await fileClient.UploadFileAsync(
     document,
     "monthly_sales.json",
     FileUploadPurpose.Assistants);
 
 // Now, we'll create a client intended to help with that data
-AssistantCreationOptions assistantOptions = new()
+OpenAI.Assistants.AssistantCreationOptions assistantOptions = new()
 {
     Name = "Example: Contoso sales RAG",
     Instructions =

@@ -136,7 +147,7 @@ AssistantCreationOptions assistantOptions = new()
     Tools =
     {
         new FileSearchToolDefinition(),
-        new CodeInterpreterToolDefinition(),
+        new OpenAI.Assistants.CodeInterpreterToolDefinition(),
     },
     ToolResources = new()
     {

@@ -158,7 +169,9 @@ ThreadCreationOptions threadOptions = new()
     InitialMessages = { "How well did product 113045 sell in February? Graph its trend over time." }
 };
 
-ThreadRun threadRun = await assistantClient.CreateThreadAndRunAsync(assistant.Id, threadOptions);
+var initialMessage = new OpenAI.Assistants.ThreadInitializationMessage(OpenAI.Assistants.MessageRole.User, ["hi"]);
+
+ThreadRun threadRun = await assistantClient.CreateThreadAndRunAsync(assistant.Value.Id, threadOptions);
 
 // Check back to see when the run is done
 do

@@ -168,15 +181,15 @@ do
 } while (!threadRun.Status.IsTerminal);
 
 // Finally, we'll print out the full history for the thread that includes the augmented generation
-AsyncCollectionResult<ThreadMessage> messages
+AsyncCollectionResult<OpenAI.Assistants.ThreadMessage> messages
     = assistantClient.GetMessagesAsync(
         threadRun.ThreadId,
         new MessageCollectionOptions() { Order = MessageCollectionOrder.Ascending });
 
-await foreach (ThreadMessage message in messages)
+await foreach (OpenAI.Assistants.ThreadMessage message in messages)
 {
     Console.Write($"[{message.Role.ToString().ToUpper()}]: ");
-    foreach (MessageContent contentItem in message.Content)
+    foreach (OpenAI.Assistants.MessageContent contentItem in message.Content)
     {
         if (!string.IsNullOrEmpty(contentItem.Text))
         {

@@ -202,9 +215,9 @@ await foreach (ThreadMessage message in messages)
         }
         if (!string.IsNullOrEmpty(contentItem.ImageFileId))
         {
-            OpenAIFileInfo imageInfo = await fileClient.GetFileAsync(contentItem.ImageFileId);
+            OpenAI.Files.OpenAIFile imageFile = await fileClient.GetFileAsync(contentItem.ImageFileId);
             BinaryData imageBytes = await fileClient.DownloadFileAsync(contentItem.ImageFileId);
-            using FileStream stream = File.OpenWrite($"{imageInfo.Filename}.png");
+            using FileStream stream = File.OpenWrite($"{imageFile.Filename}.png");
             imageBytes.ToStream().CopyTo(stream);
 
             Console.WriteLine($"<image: {imageFile.Filename}.png>");

@@ -214,6 +227,22 @@ await foreach (ThreadMessage message in messages)
     }
 }
 ```
 
+It is recommended that you store the API key in a secure location, such as a key vault. The following code snippet can replace the `GetEnvironmentVariable` lines to retrieve the Azure OpenAI API key from your Key Vault instance:
+
+```csharp
+string keyVaultName = "<Replace with Key Vault Name>";
+var kvUri = $"https://{keyVaultName}.vault.azure.net/";
+
+var client = new SecretClient(new Uri(kvUri), new DefaultAzureCredential());
+
+KeyVaultSecret endpointSecret = await client.GetSecretAsync("AZURE-OPENAI-ENDPOINT");
+KeyVaultSecret apiKeySecret = await client.GetSecretAsync("AZURE-OPENAI-API-KEY");
+
+string endpoint = endpointSecret.Value;
+string key = apiKeySecret.Value;
+```
+
 Run the app using the [`dotnet run`](/dotnet/core/tools/dotnet-run) command:
 
 ```csharp

articles/ai-services/speech-service/batch-transcription-create.md

Lines changed: 1 addition & 1 deletion
@@ -246,7 +246,7 @@ To use a custom speech model for batch transcription, you need the model's URI.
 > [!TIP]
 > A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if you use the [custom speech model](how-to-custom-speech-train-model.md) only for batch transcription.
 
-Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that isn't expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that isn't expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](./custom-speech-overview.md#choose-your-model) and [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
 
 ## Use a Whisper model
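The retargeted paragraph in the diff above concerns the `model` property of a batch transcription request. For context, a request that pins a specific non-expired custom model passes the model as an entity reference in the request body. This is a hedged sketch assuming the Speech to text v3.2 REST API shape; the region, model ID, storage URL, and display name are placeholders:

```json
{
  "displayName": "My batch transcription",
  "locale": "en-US",
  "contentUrls": [ "https://<storage-account>.blob.core.windows.net/audio/sample.wav" ],
  "model": {
    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/<model-id>"
  },
  "properties": {
    "wordLevelTimestampsEnabled": true
  }
}
```

Omitting the `model` object entirely makes the service use the latest base model, which is the safe default when model expiration is a concern.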
