Commit ab6fb05

Merge branch 'main' into release-openai-audio-models
2 parents c54f5f7 + dcacc45

31 files changed: +1625 −1286 lines changed

articles/ai-foundry/concepts/models-featured.md

Lines changed: 3 additions & 0 deletions
@@ -171,9 +171,12 @@ Meta Llama models and tools are a collection of pretrained and fine-tuned genera
 - Small language models (SLMs) like 1B and 3B Base and Instruct models for on-device and edge inferencing
 - Mid-size large language models (LLMs) like 7B, 8B, and 70B Base and Instruct models
 - High-performant models like Meta Llama 3.1-405B Instruct for synthetic data generation and distillation use cases.
+- High-performance, natively multimodal models, Llama 4 Scout and Llama 4 Maverick, which use a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.

 | Model | Type | Capabilities |
 | ------ | ---- | ------------ |
+| [Llama-4-Scout-17B-16E-Instruct](https://aka.ms/aifoundry/landing/llama-4-scout-17b-16e-instruct) | [chat-completion](../model-inference/how-to/use-chat-completions.md?context=/azure/ai-foundry/context/context) | - **Input:** text and image (128,000 tokens) <br /> - **Output:** text (8,192 tokens) <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text |
+| [Llama-4-Maverick-17B-128E-Instruct-FP8](https://aka.ms/aifoundry/landing/llama-4-maverick-17b-128e-instruct-fp8) | [chat-completion](../model-inference/how-to/use-chat-completions.md?context=/azure/ai-foundry/context/context) | - **Input:** text and image (128,000 tokens) <br /> - **Output:** text (8,192 tokens) <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text |
 | [Llama-3.3-70B-Instruct](https://ai.azure.com/explore/models/Llama-3.3-70B-Instruct/version/4/registry/azureml-meta) | [chat-completion](../model-inference/how-to/use-chat-completions.md?context=/azure/ai-foundry/context/context) | - **Input:** text (128,000 tokens) <br /> - **Output:** text (8,192 tokens) <br /> - **Tool calling:** No <br /> - **Response formats:** Text |
 | [Llama-3.2-90B-Vision-Instruct](https://ai.azure.com/explore/models/Llama-3.2-90B-Vision-Instruct/version/1/registry/azureml-meta) | [chat-completion (with images)](../model-inference/how-to/use-chat-multi-modal.md?context=/azure/ai-foundry/context/context) | - **Input:** text and image (128,000 tokens) <br /> - **Output:** text (8,192 tokens) <br /> - **Tool calling:** No <br /> - **Response formats:** Text |
 | [Llama-3.2-11B-Vision-Instruct](https://ai.azure.com/explore/models/Llama-3.2-11B-Vision-Instruct/version/1/registry/azureml-meta) | [chat-completion (with images)](../model-inference/how-to/use-chat-multi-modal.md?context=/azure/ai-foundry/context/context) | - **Input:** text and image (128,000 tokens) <br /> - **Output:** text (8,192 tokens) <br /> - **Tool calling:** No <br /> - **Response formats:** Text |
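The chat-completion rows in this table correspond to the usual chat request shape. A minimal sketch of a multimodal request body, assuming the common OpenAI-style message schema (the model name, URL, and field layout here are illustrative; your deployment determines the actual values):

```python
# Illustrative chat-completion payload builder for a multimodal model such as
# Llama-4-Scout-17B-16E-Instruct. The schema shown is the common OpenAI-style
# shape, used here as an assumption; consult your deployment's API reference.
def build_chat_request(model: str, prompt: str, image_url: str,
                       max_tokens: int = 1024) -> dict:
    """Build a chat-completion payload with combined text and image input."""
    # The table above lists an 8,192-token output limit for these models.
    assert max_tokens <= 8192, "requested output exceeds the documented limit"
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

req = build_chat_request(
    "Llama-4-Scout-17B-16E-Instruct",
    "Describe this image.",
    "https://example.com/sample.png",  # hypothetical image URL
)
print(req["model"])  # → Llama-4-Scout-17B-16E-Instruct
```

The payload is plain data, so the same dict can be posted to whichever chat-completions endpoint your deployment exposes.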

articles/ai-foundry/toc.yml

Lines changed: 0 additions & 2 deletions
@@ -557,8 +557,6 @@ items:
         href: ai-services/content-safety-overview.md
       - name: Content safety for models deployed with serverless APIs
         href: concepts/model-catalog-content-safety.md
-      - name: Use Azure AI Content Safety in AI Foundry portal
-        href: /azure/ai-services/content-safety/how-to/foundry?context=/azure/ai-foundry/context/context
       - name: Content filtering
         href: concepts/content-filtering.md
       - name: Use blocklists

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 10 additions & 0 deletions
@@ -160,6 +160,16 @@
       "redirect_url": "/azure/ai-services/content-safety/quickstart-custom-categories",
       "redirect_document_id": true
     },
+    {
+      "source_path_from_root": "/articles/ai-services/content-safety/how-to/foundry.md",
+      "redirect_url": "/azure/ai-foundry/ai-services/content-safety-overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/content-safety/studio-quickstart.md",
+      "redirect_url": "/azure/ai-foundry/ai-services/content-safety-overview?context=/azure/ai-services/content-safety/context/context",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/ai-services/speech-service/how-to-custom-voice-create-voice.md",
       "redirect_url": "/azure/ai-services/speech-service/professional-voice-train-voice",
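The redirection entries added above all follow one fixed shape. As a sketch, a hypothetical validator (not part of the repo's build tooling) could check each entry before merge:

```python
# Hypothetical sanity check for .openpublishing redirection entries;
# the keys below are taken from the entries shown in the diff above.
REQUIRED_KEYS = {"source_path_from_root", "redirect_url", "redirect_document_id"}

def validate_redirect(entry: dict) -> list[str]:
    """Return a list of problems found in one redirection entry."""
    problems = []
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if not str(entry.get("source_path_from_root", "")).startswith("/articles/"):
        problems.append("source_path_from_root must start with /articles/")
    if not str(entry.get("redirect_url", "")).startswith("/"):
        problems.append("redirect_url must be site-absolute (start with /)")
    if not isinstance(entry.get("redirect_document_id"), bool):
        problems.append("redirect_document_id must be a boolean")
    return problems

entry = {
    "source_path_from_root": "/articles/ai-services/content-safety/how-to/foundry.md",
    "redirect_url": "/azure/ai-foundry/ai-services/content-safety-overview",
    "redirect_document_id": False,
}
print(validate_redirect(entry))  # → []
```

An empty list means the entry matches the shape used by the existing redirects.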

articles/ai-services/agents/how-to/tools/fabric.md

Lines changed: 2 additions & 0 deletions
@@ -34,6 +34,8 @@ You need to first build and publish a Fabric data agent and then connect your Fa

 * Developers and end users have at least `READ` access to the Fabric data agent and the underlying data sources it connects with.

+* Your Fabric data agent and Azure AI agent need to be in the same tenant.
+
 ## Setup
 > [!NOTE]
 > * The model you selected in Azure AI Agent setup is only used for agent orchestration and response generation. It doesn't affect which model the Fabric data agent uses for NL2SQL operations.

articles/ai-services/content-safety/how-to/foundry.md

Lines changed: 0 additions & 115 deletions
This file was deleted.
Lines changed: 33 additions & 0 deletions

@@ -0,0 +1,33 @@
---
title: "Quickstart: Use a blocklist in the Foundry portal"
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.custom:
ms.topic: include
ms.date: 04/10/2025
ms.author: pafarley
---

## Prerequisites

- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).

## Setup

Follow these steps to use the Content Safety **Try it out** page:

1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project or hub. Then select the **Safety + Security** tab on the left navigation pane and select the **Try it out** tab.
1. On the **Try it out** page, you can experiment with various content safety features, such as text and image moderation, using adjustable thresholds to filter for inappropriate or harmful content.

   :::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::

### Use a blocklist

The **Use blocklist** tab lets you create, edit, and add a blocklist to the moderation workflow. If a blocklist is enabled when you run the test, a **Blocklist detection** panel appears under **Results** and reports any matches with the blocklist.

:::image type="content" source="/azure/ai-foundry/media/content-safety/blocklist-panel.png" alt-text="Screenshot of the Use blocklist panel.":::
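Conceptually, blocklist detection is term screening against the input text. A deliberately minimal sketch of that idea (toy logic only; the Content Safety service performs matching server-side through its blocklist APIs):

```python
# Toy sketch of blocklist screening, illustrating the concept behind the
# "Blocklist detection" panel; NOT the service's actual matching algorithm.
def screen_text(text: str, blocklist: set[str]) -> list[str]:
    """Return the blocklist terms found in the text (case-insensitive)."""
    lowered = text.lower()
    return sorted(term for term in blocklist if term.lower() in lowered)

blocklist = {"forbidden-word", "banned phrase"}  # hypothetical terms
hits = screen_text("This contains a Forbidden-Word in it.", blocklist)
print(hits)  # → ['forbidden-word']
```

In the portal, the equivalent of `hits` is what the **Blocklist detection** panel reports under **Results**.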
Lines changed: 39 additions & 0 deletions

@@ -0,0 +1,39 @@
---
title: "Quickstart: Use custom categories in the Foundry portal"
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.custom:
ms.topic: include
ms.date: 04/10/2025
ms.author: pafarley
---

## Prerequisites

- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).

## Setup

Follow these steps to use the Content Safety **Try it out** page:

1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project or hub. Then select the **Safety + Security** tab on the left navigation pane and select the **Try it out** tab.
1. On the **Try it out** page, you can experiment with various content safety features, such as text and image moderation, using adjustable thresholds to filter for inappropriate or harmful content.

   :::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::

## Use custom categories

This feature lets you create and train your own custom content categories and scan text for matches.

1. Select the **Custom categories** panel.
1. Select **Add a new category** to open a dialog box. Enter your category name and a text description, and connect a blob storage container with text training data. Select **Create and train**.
1. Select a category, enter your sample input text, and select **Run test**.

   The service returns the custom category result.

For more information, see the [Custom categories conceptual guide](/azure/ai-services/content-safety/concepts/custom-categories).
Lines changed: 42 additions & 0 deletions

@@ -0,0 +1,42 @@
---
title: "Quickstart: Use groundedness detection"
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.custom:
ms.topic: include
ms.date: 04/10/2025
ms.author: pafarley
---

## Prerequisites

- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).

## Setup

Follow these steps to use the Content Safety **Try it out** page:

1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project or hub. Then select the **Safety + Security** tab on the left navigation pane and select the **Try it out** tab.
1. On the **Try it out** page, you can experiment with various content safety features, such as text and image moderation, using adjustable thresholds to filter for inappropriate or harmful content.

   :::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::

## Use groundedness detection

The **Groundedness detection** panel lets you detect whether the text responses of large language models (LLMs) are grounded in the source materials that users provide.

1. Select the **Groundedness detection** panel.
1. Select a sample content set on the page, or input your own for testing.
1. Optionally, enable the reasoning feature and select your Azure OpenAI resource from the dropdown.
1. Select **Run test**.

   The service returns the groundedness detection result.

For more information, see the [Groundedness detection conceptual guide](/azure/ai-services/content-safety/concepts/groundedness).
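The idea behind groundedness detection is to flag response sentences that the source material doesn't support. A deliberately naive word-overlap sketch of that idea (a toy heuristic; the service uses a fine-tuned language model, not anything like this):

```python
# Naive word-overlap "groundedness" heuristic, for intuition only.
# The Azure AI Content Safety service does NOT work this way internally.
def ungrounded_sentences(response: str, source: str,
                         threshold: float = 0.5) -> list[str]:
    """Return response sentences whose words mostly don't appear in the source."""
    source_words = set(source.lower().split())
    flagged = []
    for sentence in filter(None, (s.strip() for s in response.split("."))):
        words = sentence.lower().split()
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < threshold:  # too little support in the source text
            flagged.append(sentence)
    return flagged

source = "The report was published in 2021 and covers renewable energy in Europe."
response = "The report was published in 2021. It predicts oil prices will triple."
print(ungrounded_sentences(response, source))
# → ['It predicts oil prices will triple']
```

Here the second sentence is flagged because almost none of its words occur in the source, which mirrors (very crudely) what the **Groundedness detection** panel reports.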
Lines changed: 40 additions & 0 deletions

@@ -0,0 +1,40 @@
---
title: "Quickstart: Analyze image content"
description: In this quickstart, get started using Azure AI Content Safety to analyze image content for objectionable material.
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-content-safety
ms.custom:
ms.topic: include
ms.date: 04/10/2025
ms.author: pafarley
---

## Prerequisites

- An Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/pricing/purchase-options/azure-account?icid=ai-services).
- An [Azure AI resource](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AIServices).

## Setup

Follow these steps to use the Content Safety **Try it out** page:

1. Go to [Azure AI Foundry](https://ai.azure.com/) and navigate to your project or hub. Then select the **Safety + Security** tab on the left navigation pane and select the **Try it out** tab.
1. On the **Try it out** page, you can experiment with various content safety features, such as text and image moderation, using adjustable thresholds to filter for inappropriate or harmful content.

   :::image type="content" source="/azure/ai-foundry/media/content-safety/try-it-out.png" alt-text="Screenshot of the try it out page for content safety.":::

## Analyze images

The **Moderate image content** page lets you quickly try out image moderation.

1. Select the **Moderate image content** panel.
1. Select a sample image from the panels on the page, or upload your own image.
1. Select **Run test**.

   The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed and prohibited severity levels for each category, then run the test again to see how the filter works.

## View and export code

You can use the **View Code** feature on either the **Analyze text content** or **Analyze image content** page to view and copy sample code that includes your configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.

:::image type="content" source="/azure/ai-foundry/media/content-safety/view-code-option.png" alt-text="Screenshot of the View code button.":::
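The **Accepted**/**Rejected** result described above reduces to a per-category threshold comparison. A minimal sketch of that filter logic, using the documented severity levels (a hypothetical helper; the portal and service compute this for you):

```python
# Sketch of the severity-filter decision from the quickstart above:
# content is rejected if any category's detected severity meets or
# exceeds the threshold configured in the filter matrix.
SEVERITY_LEVELS = (0, 2, 4, 6)  # 0-Safe, 2-Low, 4-Medium, 6-High

def moderation_decision(detected: dict[str, int],
                        thresholds: dict[str, int]) -> str:
    """Return 'Rejected' if any category crosses its configured threshold."""
    for category, severity in detected.items():
        # Unconfigured categories default to the strictest allowance (6).
        if severity >= thresholds.get(category, 6):
            return "Rejected"
    return "Accepted"

detected = {"Hate": 0, "Violence": 4, "Sexual": 0, "SelfHarm": 0}
thresholds = {"Hate": 2, "Violence": 4, "Sexual": 2, "SelfHarm": 2}
print(moderation_decision(detected, thresholds))  # → Rejected
```

Here Violence at severity 4 meets the configured threshold of 4, so the overall result is **Rejected**; loosening that one threshold to 6 would flip the decision.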
