
Commit d942a6a

Merge pull request #4211 from MicrosoftDocs/main
4/17/2025 PM Publish
2 parents 16d84ed + f93852b commit d942a6a


44 files changed, +156 −73 lines

.acrolinx-config.edn

Lines changed: 1 addition & 1 deletion
@@ -1,2 +1,2 @@
 {:allowed-branchname-matches ["main" "release-.*"]
- :allowed-filename-matches ["(?i)articles/(?:(?!active-directory/saas-apps/toc.yml|role-based-access-control/resource-provider-operations.md|.*policy/samples/|.*resource-graph/samples/))" "(?i)includes/(?:(?!policy/reference/|policy/standards/|resource-graph/samples/))"]}
+ :allowed-filename-matches ["articles"]}
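The effect of this one-line config change is easier to see with the two patterns side by side. A minimal sketch using Python's `re` (whose syntax overlaps with the Java-style regex in the `.edn` file for the `(?i)` flag and negative lookahead; the sample paths are hypothetical illustrations, not taken from the commit):

```python
import re

# Pre-change pattern: matches article paths, but the negative lookahead
# excludes a handful of generated or special-cased files.
old = re.compile(
    r"(?i)articles/(?:(?!active-directory/saas-apps/toc.yml"
    r"|role-based-access-control/resource-provider-operations.md"
    r"|.*policy/samples/|.*resource-graph/samples/))"
)
# Post-change pattern: any path containing "articles".
new = re.compile(r"articles")

# An ordinary article path matches under both patterns.
path = "articles/ai-foundry/how-to/costs-plan-manage.md"
print(bool(old.search(path)), bool(new.search(path)))  # True True

# A policy-samples path was excluded by the old lookahead but is
# matched by the simplified pattern.
samples = "articles/governance/policy/samples/index.md"
print(bool(old.search(samples)), bool(new.search(samples)))  # False True
```

So the simplified pattern widens Acrolinx coverage: paths the old lookahead carved out are now checked too.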

articles/ai-foundry/how-to/costs-plan-manage.md

Lines changed: 1 addition & 1 deletion
@@ -111,7 +111,7 @@ Here's an example of how to monitor costs for a project. The costs are used as a
 1. Under the **Project** heading, select **Overview**.
 1. Select **View cost for resources** from the **Total cost** section. The [Azure portal](https://portal.azure.com) opens to the resource group for your project.

-:::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Foundry portal portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::
+:::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Foundry portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::

 1. Expand the **Resource** column to see the costs for each service that's underlying your [project](../concepts/ai-resources.md#organize-work-in-projects-for-customization). But this view doesn't include costs for all resources that you use in a project.

articles/ai-foundry/how-to/develop/evaluate-sdk.md

Lines changed: 2 additions & 2 deletions
@@ -233,7 +233,7 @@ import os
 from azure.identity import DefaultAzureCredential
 credential = DefaultAzureCredential()

-# Initialize Azure AI project and Azure OpenAI conncetion with your environment variables
+# Initialize Azure AI project and Azure OpenAI connection with your environment variables
 azure_ai_project = {
     "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
     "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP"),

@@ -250,7 +250,7 @@ model_config = {

 from azure.ai.evaluation import GroundednessProEvaluator, GroundednessEvaluator

-# Initialzing Groundedness and Groundedness Pro evaluators
+# Initializing Groundedness and Groundedness Pro evaluators
 groundedness_eval = GroundednessEvaluator(model_config)
 groundedness_pro_eval = GroundednessProEvaluator(azure_ai_project=azure_ai_project, credential=credential)

articles/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent.md

Lines changed: 1 addition & 1 deletion
@@ -224,7 +224,7 @@ More advanced users can specify the desired attack strategies instead of using d

 Each new attack strategy specified will be applied to the set of baseline adversarial queries used in addition to the baseline adversarial queries.

-This following example would generate one attack objective per each of the four risk categories specified. This will first, generate four baseline adversarial prompts which would be sent to your target. Then, each baseline query would get converted into each of the four attack strategies. This will result in a total of 20 attack-response pairs from your AI system. The last attack stratgy is an example of a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition would first encode the baseline adversarial query into Base64 then apply the ROT13 cipher on the Base64-encoded query. Compositions only support chaining two attack strategies together.
+This following example would generate one attack objective per each of the four risk categories specified. This will first, generate four baseline adversarial prompts which would be sent to your target. Then, each baseline query would get converted into each of the four attack strategies. This will result in a total of 20 attack-response pairs from your AI system. The last attack strategy is an example of a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition would first encode the baseline adversarial query into Base64 then apply the ROT13 cipher on the Base64-encoded query. Compositions only support chaining two attack strategies together.

 ```python
 red_team_agent = RedTeam(
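The Base64-then-ROT13 composition that the paragraph in this hunk describes can be sketched with standard-library Python. This is illustrative only; in the SDK the transformation is applied internally by `AttackStrategy.Compose()`, and the sample query below is hypothetical:

```python
import base64
import codecs

def base64_then_rot13(query: str) -> str:
    """Encode a query as Base64, then apply the ROT13 cipher to the result."""
    b64 = base64.b64encode(query.encode("utf-8")).decode("ascii")
    return codecs.encode(b64, "rot13")

# Pair count from the passage: 4 baseline prompts, plus 4 baselines
# converted through each of 4 strategies = 4 + 4 * 4 = 20 pairs.
print(4 + 4 * 4)  # 20
print(base64_then_rot13("example adversarial query"))
```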

articles/ai-foundry/how-to/develop/sdk-overview.md

Lines changed: 65 additions & 4 deletions
@@ -158,7 +158,10 @@ Create a project client in code:

 # [Sync](#tab/sync)

-:::code language="csharp" source="~/azureai-samples-csharp/scenarios/projects/basic-csharp/Program.cs" id="snippet_get_project":::
+```csharp
+var connectionString = "<your_connection_string>";
+var projectClient = new AIProjectClient(connectionString, new DefaultAzureCredential());
+```

 # [Async](#tab/async)

@@ -226,7 +229,33 @@ using Azure.AI.OpenAI;

 If you have existing code that uses the OpenAI SDK, you can use the project client to create an `AzureOpenAI` client that uses your project's Azure OpenAI connection:

-:::code language="csharp" source="~/azureai-samples-csharp/scenarios/projects/basic-csharp/Program.cs" id="azure_openai":::
+```csharp
+var connections = projectClient.GetConnectionsClient();
+ConnectionResponse connection = connections.GetDefaultConnection(ConnectionType.AzureOpenAI, withCredential: true);
+var properties = connection.Properties as ConnectionPropertiesApiKeyAuth;
+
+if (properties == null) {
+    throw new Exception("Invalid auth type, expected API key auth");
+}
+
+// Create and use an Azure OpenAI client
+AzureOpenAIClient azureOpenAIClient = new(
+    new Uri(properties.Target),
+    new AzureKeyCredential(properties.Credentials.Key));
+
+// This must match the custom deployment name you chose for your model
+ChatClient chatClient = azureOpenAIClient.GetChatClient("gpt-4o-mini");
+
+ChatCompletion completion = chatClient.CompleteChat(
+    [
+        new SystemChatMessage("You are a helpful assistant that talks like a pirate."),
+        new UserChatMessage("Does Azure OpenAI support customer managed keys?"),
+        new AssistantChatMessage("Yes, customer managed keys are supported by Azure OpenAI"),
+        new UserChatMessage("Do other Azure AI services support this too?")
+    ]);
+
+Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
+```

 ::: zone-end

@@ -280,7 +309,25 @@ using Azure.AI.Inference;

 You can use the project client to get a configured and authenticated `ChatCompletionsClient` or `EmbeddingsClient`:

-:::code language="csharp" source="~/azureai-samples-csharp/scenarios/projects/basic-csharp/Program.cs" id="snippet_inference":::
+```csharp
+var connectionString = Environment.GetEnvironmentVariable("AIPROJECT_CONNECTION_STRING");
+var projectClient = new AIProjectClient(connectionString, new DefaultAzureCredential());
+
+ChatCompletionsClient chatClient = projectClient.GetChatCompletionsClient();
+
+var requestOptions = new ChatCompletionsOptions()
+{
+    Messages =
+    {
+        new ChatRequestSystemMessage("You are a helpful assistant."),
+        new ChatRequestUserMessage("How many feet are in a mile?"),
+    },
+    Model = "gpt-4o-mini"
+};
+
+Response<ChatCompletions> response = chatClient.Complete(requestOptions);
+Console.WriteLine(response.Value.Content);
+```

 ::: zone-end

@@ -405,7 +452,21 @@

 Instantiate the search and/or search index client as desired:

-:::code language="csharp" source="~/azureai-samples-csharp/scenarios/projects/basic-csharp/Program.cs" id="azure_aisearch":::
+```csharp
+var connections = projectClient.GetConnectionsClient();
+var connection = connections.GetDefaultConnection(ConnectionType.AzureAISearch, withCredential: true).Value;
+
+var properties = connection.Properties as ConnectionPropertiesApiKeyAuth;
+if (properties == null) {
+    throw new Exception("Invalid auth type, expected API key auth");
+}
+
+SearchClient searchClient = new SearchClient(
+    new Uri(properties.Target),
+    "products",
+    new AzureKeyCredential(properties.Credentials.Key));
+```

 ::: zone-end

articles/ai-foundry/how-to/develop/visualize-traces.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ For more information on how to send Azure AI Inference traces to Azure Monitor a

 From Azure AI Foundry project, you can also open your custom dashboard that provides you with insights specifically to help you monitor your generative AI application.

-In this Azure Workbook, you can view your Gen AI spans and jump into the Azure Monitor **End-to-end transaction details view** view to deep dive and investigate.
+In this Azure Workbook, you can view your Gen AI spans and jump into the Azure Monitor **End-to-end transaction details view** to deep dive and investigate.

 Learn more about using this workbook to monitor your application, see [Azure Workbook documentation](/azure/azure-monitor/visualize/workbooks-create-workbook).

articles/ai-foundry/model-inference/concepts/content-filter.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ manager: nitinme
 # Content filtering for model inference in Azure AI services

 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-models-by-endpoint).
+> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-deployment-regional-models-by-endpoint).

 Azure AI model inference in Azure AI Services includes a content filtering system that works alongside core models and it's powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety). This system works by running both the prompt and completion through an ensemble of classification models designed to detect and prevent the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.

articles/ai-foundry/model-inference/concepts/endpoints.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ To learn more about how to create deployments see [Add and configure model deplo

 ## Azure AI inference endpoint

-The Azure AI inference endpoint allows customers to use a single endpoint with the same authentication and schema to generate inference for the deployed models in the resource. This endpoint follows the [Azure AI model inference API](.././reference/reference-model-inference-api.md) which all the models in Azure AI model inference support. It support the following modalidities:
+The Azure AI inference endpoint allows customers to use a single endpoint with the same authentication and schema to generate inference for the deployed models in the resource. This endpoint follows the [Azure AI model inference API](.././reference/reference-model-inference-api.md) which all the models in Azure AI model inference support. It support the following modalities:

 * Text embeddings
 * Image embeddings

articles/ai-foundry/model-inference/how-to/configure-deployment-policies.md

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@ Follow these steps to create and assign an example custom policy to control mode

 2. From the left side of the Azure Policy Dashboard, select **Authoring**, **Definitions**, and then select **+ Policy definition** from the top of the page.

-:::image type="content" source="../media/configure-deployment-policies/create-new-policy.png" alt-text="An screenshot showing how to create a new policy definition in Azure Policies." lightbox="../media/configure-deployment-policies/create-new-policy.png":::
+:::image type="content" source="../media/configure-deployment-policies/create-new-policy.png" alt-text="A screenshot showing how to create a new policy definition in Azure Policies." lightbox="../media/configure-deployment-policies/create-new-policy.png":::

 3. In the **Policy Definition** form, use the following values:

@@ -157,7 +157,7 @@ To monitor compliance with the policy, follow these steps:

 1. From the left side of the Azure Policy Dashboard, select **Compliance**. Each policy assignment is listed with the compliance status. To view more details, select the policy assignment. The following example shows the compliance report for a policy that blocks deployments of type *Global standard*.

-:::image type="content" source="../media/configure-deployment-policies/policy-compliance.png" alt-text="An screenshot showing an example of a policy compliance report for a policy that blocks Global standard deployment SKUs." lightbox="../media/configure-deployment-policies/policy-compliance.png":::
+:::image type="content" source="../media/configure-deployment-policies/policy-compliance.png" alt-text="A screenshot showing an example of a policy compliance report for a policy that blocks Global standard deployment SKUs." lightbox="../media/configure-deployment-policies/policy-compliance.png":::

 ## Update the policy assignment

articles/ai-foundry/model-inference/includes/use-chat-completions/csharp.md

Lines changed: 1 addition & 1 deletion
@@ -337,7 +337,7 @@ foreach (ChatCompletionsToolCall tool in toolsCall)
 Dictionary<string, object> toolArguments = JsonSerializer.Deserialize<Dictionary<string, object>>(toolArgumentsString);

 // Here you have to call the function defined. In this particular example we use
-// reflection to find the method we definied before in an static class called
+// reflection to find the method we definied before in a static class called
 // `ChatCompletionsExamples`. Using reflection allows us to call a function
 // by string name. Notice that this is just done for demonstration purposes as a
 // simple way to get the function callable from its string name. Then we can call
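The dispatch pattern this C# comment describes, looking up a tool function by its string name via reflection, has a direct Python analogue in `getattr`. A minimal sketch; `ChatCompletionsExamples` is the class named in the sample, but the method name and arguments below are hypothetical stand-ins:

```python
class ChatCompletionsExamples:
    # Hypothetical tool method; the real sample defines its own static methods.
    @staticmethod
    def get_flight_info(origin: str, destination: str) -> str:
        return f"Flights from {origin} to {destination}: none found."

def call_tool(name: str, arguments: dict) -> str:
    # Look up the method by its string name: the Python analogue of the
    # C# reflection lookup described in the diff above.
    method = getattr(ChatCompletionsExamples, name)
    return method(**arguments)

print(call_tool("get_flight_info", {"origin": "SEA", "destination": "NYC"}))
```

As in the C# sample, this is a demonstration shortcut; production code would normally use an explicit name-to-function mapping rather than open-ended attribute lookup.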
