
Commit 183411d

Merge pull request #7600 from MicrosoftDocs/main
Auto Publish – main to live - 2025-10-09 22:09 UTC
2 parents bff07be + 6b343b8 commit 183411d

19 files changed: +349 −287 lines changed

articles/ai-foundry/foundry-models/concepts/models-from-partners.md

Lines changed: 4 additions & 4 deletions
@@ -84,7 +84,7 @@ See [this model collection in Azure AI Foundry portal](https://ai.azure.com/expl
 
 ## Microsoft
 
-Microsoft models include various model groups such as MAI models, Phi models, healthcare AI models, and more. To see all the available Microsoft models, view [the Microsoft model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=phi/?cid=learnDocs).
+Microsoft models include various model groups such as MAI models, Phi models, healthcare AI models, and more.
 
 | Model | Type | Capabilities | Project type |
 | ------ | ---- | ------------ | ------------ |
@@ -146,9 +146,9 @@ The Stability AI collection of image generation models includes Stable Image Cor
 
 | Model | Type | Capabilities | Project type |
 | ------ | ---- | ------------ | ------------ |
-| [Stable Diffusion 3.5 Large](https://ai.azure.com/explore/models/Stable-Diffusion-3.5-Large/version/1/registry/azureml-stabilityai/?cid=learnDocs) | Image generation | - **Input:** text and image (1,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats**: Image (PNG and JPG) | Hub-based |
-| [Stable Image Core](https://ai.azure.com/explore/models/Stable-Image-Core/version/1/registry/azureml-stabilityai/?cid=learnDocs) | Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) | Hub-based |
-| [Stable Image Ultra](https://ai.azure.com/explore/models/Stable-Image-Ultra/version/1/registry/azureml-stabilityai/?cid=learnDocs) | Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) | Hub-based |
+| [Stable Diffusion 3.5 Large](https://ai.azure.com/explore/models/Stable-Diffusion-3.5-Large/version/1/registry/azureml-stabilityai/?cid=learnDocs) | Image generation | - **Input:** text and image (1,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats**: Image (PNG and JPG) | Foundry, Hub-based |
+| [Stable Image Core](https://ai.azure.com/explore/models/Stable-Image-Core/version/1/registry/azureml-stabilityai/?cid=learnDocs) | Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) | Foundry, Hub-based |
+| [Stable Image Ultra](https://ai.azure.com/explore/models/Stable-Image-Ultra/version/1/registry/azureml-stabilityai/?cid=learnDocs) | Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) | Foundry, Hub-based |
 
 See [this model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=Stability+AI/?cid=learnDocs).
 
articles/ai-foundry/foundry-models/includes/models-azure-direct-others.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ You can run these models through the BFL service provider API and through the [i
 | Model | Type & API endpoint| Capabilities | Deployment type (region availability) | Project type |
 | ------ | ------------------ | ------------ | ------------------------------------- | ------------ |
 | [FLUX.1-Kontext-pro](https://ai.azure.com/explore/models/FLUX.1-Kontext-pro/version/1/registry/azureml-blackforestlabs/?cid=learnDocs) | **Image generation** <br> - [Image API](../../openai/reference-preview.md): `https://<resource-name>/openai/deployments/{deployment-id}/images/generations` <br> and <br> `https://<resource-name>/openai/deployments/{deployment-id}/images/edits` <br> <br> - [BFL service provider API](https://docs.bfl.ai/kontext/kontext_text_to_image): ` <resource-name>/providers/blackforestlabs/v1/flux-kontext-pro?api-version=preview ` | - **Input:** text and image (5,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats**: Image (PNG and JPG) <br /> - **Key features:** Character consistency, advanced editing <br /> - **Additional parameters:** *(In provider-specific API only)* `seed`, `aspect ratio`, `input_image`, `prompt_unsampling`, `safety_tolerance`, `output_format`, `webhook_url`, `webhook_secret` |- Global standard (all regions) | Foundry, Hub-based |
-| [FLUX-1.1-pro](https://ai.azure.com/explore/models/FLUX-1.1-pro/version/1/registry/azureml-blackforestlabs/?cid=learnDocs) | **Image generation** <br> - [Image API](../../openai/reference-preview.md): `https://<resource-name>/openai/deployments/{deployment-id}/images/generations` <br> <br> - [BFL service provider API](https://docs.bfl.ai/flux_models/flux_1_1_pro): ` <resource-name>/providers/blackforestlabs/v1/flux-pro-1.1?api-version=preview ` | - **Input:** text (5,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) <br /> - **Key features:** Fast inference speed, strong prompt adherence, competitive pricing, scalable generation <br /> - **Additional parameters:** *(In provider-specific API only)* `width`, `height`, `prompt_unsampling`, `seed`, `safety_tolerance`, `output_format`, `webhook_url`, `webhook_secret` | - Global standard (all regions) | Hub-based |
+| [FLUX-1.1-pro](https://ai.azure.com/explore/models/FLUX-1.1-pro/version/1/registry/azureml-blackforestlabs/?cid=learnDocs) | **Image generation** <br> - [Image API](../../openai/reference-preview.md): `https://<resource-name>/openai/deployments/{deployment-id}/images/generations` <br> <br> - [BFL service provider API](https://docs.bfl.ai/flux_models/flux_1_1_pro): ` <resource-name>/providers/blackforestlabs/v1/flux-pro-1.1?api-version=preview ` | - **Input:** text (5,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) <br /> - **Key features:** Fast inference speed, strong prompt adherence, competitive pricing, scalable generation <br /> - **Additional parameters:** *(In provider-specific API only)* `width`, `height`, `prompt_unsampling`, `seed`, `safety_tolerance`, `output_format`, `webhook_url`, `webhook_secret` | - Global standard (all regions) | Foundry, Hub-based |
 
 
 See [this model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=black+forest+labs/?cid=learnDocs).
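For orientation, the table above lists two routes to the same models. The following is a minimal Python sketch of the first route, the Image API generations endpoint; the resource name, deployment ID, key, and `api-version` value are placeholders rather than values confirmed by this commit, so check the linked Image API reference for the current preview version string.

```python
# Minimal sketch: text-to-image call against the Image API route listed above.
# <resource-name>, <deployment-id>, <api-key>, and the api-version value are
# placeholders/assumptions -- substitute values from your own deployment.
import requests

RESOURCE = "https://<resource-name>"
DEPLOYMENT = "<deployment-id>"  # e.g., a FLUX-1.1-pro deployment
URL = f"{RESOURCE}/openai/deployments/{DEPLOYMENT}/images/generations"

response = requests.post(
    URL,
    params={"api-version": "<preview-api-version>"},
    headers={"api-key": "<api-key>"},
    json={"prompt": "A lighthouse on a cliff at dawn", "n": 1},
)
response.raise_for_status()
print(response.json())  # body carries the generated image payload
```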

articles/ai-services/content-understanding/quickstart/use-rest-api.md

Lines changed: 33 additions & 4 deletions
@@ -193,7 +193,15 @@ The 200 (`OK`) JSON response includes a `status` field indicating the status of
         ]
       }
     ]
-  }
+  },
+  "usage": {
+    "documentPages": 1,
+    "tokens": {
+      "contextualization": 1000,
+      "input": 1866,
+      "output": 87
+    }
+  }
 }
 ```
 
@@ -229,7 +237,14 @@ The 200 (`OK`) JSON response includes a `status` field indicating the status of
         ]
       }
     ]
-  }
+  },
+  "usage": {
+    "tokens": {
+      "contextualization": 1000,
+      "input": 1866,
+      "output": 87
+    }
+  }
 }
 ```
 
@@ -274,7 +289,14 @@ The 200 (`OK`) JSON response includes a `status` field indicating the status of
         ]
       }
     ]
-  }
+  },
+  "usage": {
+    "tokens": {
+      "contextualization": 1000,
+      "input": 1866,
+      "output": 87
+    }
+  }
 }
 ```
 
@@ -334,7 +356,14 @@ The 200 (`OK`) JSON response includes a `status` field indicating the status of
         ]
       }
     ]
-  }
+  },
+  "usage": {
+    "tokens": {
+      "contextualization": 1000,
+      "input": 1866,
+      "output": 87
+    }
+  }
 }
 ```
 
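The `usage` object added in these responses can be read straight off the polled result. A minimal sketch, assuming the GET-result endpoint shape from the quickstart and treating `usage` as a sibling of `result` as the diff suggests; the endpoint, operation ID, key header, and api-version are placeholders:

```python
# Minimal sketch: print the token usage reported in an analyzer result.
# Endpoint, operation ID, api-version, and key header are assumptions based
# on the quickstart; substitute values from your own resource.
import requests

ENDPOINT = "https://<resource-name>.services.ai.azure.com"
RESULT_URL = f"{ENDPOINT}/contentunderstanding/analyzerResults/<operation-id>"

result = requests.get(
    RESULT_URL,
    params={"api-version": "<preview-api-version>"},
    headers={"Ocp-Apim-Subscription-Key": "<api-key>"},
).json()

usage = result.get("usage", {})
tokens = usage.get("tokens", {})
print("document pages:", usage.get("documentPages"))  # present for document inputs
print("input tokens:", tokens.get("input"))
print("output tokens:", tokens.get("output"))
print("contextualization tokens:", tokens.get("contextualization"))
```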
articles/ai-services/content-understanding/service-limits.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ Content Understanding supports both basic field value types and nested structure
 
 | Property | Document | Text | Image | Audio | Video |
 | --- | --- | --- | --- | --- | --- |
-| Max fields | 100 | 100 | 100 | 100 | 100 |
+| Max fields | 1000 | 1000 | 1000 | 1000 | 1000 |
 | Max classify field categories | 300 | 300 | 300 | 300 | 300 |
 | Supported generation methods | extract<br>generate<br>classify | generate<br>classify | generate<br>classify | generate<br>classify | generate<br>classify |
 
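If you generate analyzer definitions programmatically, a quick client-side check against the raised limit can catch oversized schemas before you call the service. A minimal sketch, assuming a hypothetical definition whose fields live under `fieldSchema.fields`; adapt the lookup to your actual analyzer JSON:

```python
# Minimal sketch: sanity-check a field schema against the raised limit.
# The analyzer-definition shape (fieldSchema.fields) is an assumption for
# illustration, not a confirmed schema.
MAX_FIELDS = 1000  # per-modality "Max fields" limit from the table above

def count_fields(analyzer: dict) -> int:
    """Count the fields declared in a (hypothetical) analyzer definition."""
    return len(analyzer.get("fieldSchema", {}).get("fields", {}))

analyzer = {"fieldSchema": {"fields": {f"field_{i}": {"type": "string"} for i in range(1200)}}}
declared = count_fields(analyzer)
if declared > MAX_FIELDS:
    raise ValueError(f"{declared} fields declared; the service limit is {MAX_FIELDS}")
```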
articles/ai-services/content-understanding/whats-new.md

Lines changed: 7 additions & 0 deletions
@@ -17,6 +17,13 @@ ms.custom:
 
 Azure AI Content Understanding service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
 
+## October 2025
+
+The Azure AI Content Understanding preview introduces the following updates:
+
+* Azure AI Content Understanding now has increased field count support (1,000) for all modalities.
+* The API response body now includes the input, output, and contextualization tokens consumed as part of the `tokens` object. Check out the [quickstart](quickstart/use-rest-api.md) article for more information.
+
 ## May 2025
 
 The Azure AI Content Understanding [**`2025-05-01-preview`**](/rest/api/contentunderstanding/content-analyzers?view=rest-contentunderstanding-2025-05-01-preview&preserve-view=true) REST API is now available. This update introduces the following updates and enhanced capabilities:

articles/search/cognitive-search-how-to-debug-skillset.md

Lines changed: 6 additions & 6 deletions
@@ -30,12 +30,6 @@ For background on how a debug session works, see [Debug sessions in Azure AI Sea
 
 + An existing enrichment pipeline, including a data source, a skillset, an indexer, and an index.
 
-## Security and permissions
-
-+ To save a debug session to Azure storage, the search service identity must have **Storage Blob Data Contributor** permissions on Azure Storage. Otherwise, plan on choosing a full access connection string for the debug session connection to Azure Storage.
-
-+ If the Azure Storage account is behind a firewall, configure it to [allow search service access](search-indexer-howto-access-ip-restricted.md).
-
 ## Limitations
 
 Debug sessions work with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources, with the following exceptions:
@@ -50,6 +44,12 @@ Debug sessions work with all generally available [indexer data sources](search-d
 
 + For custom skills, a user-assigned managed identity isn't supported for a debug session connection to Azure Storage. As stated in the prerequisites, you can use a system managed identity, or specify a full access connection string that includes a key. For more information, see [Connect a search service to other Azure resources using a managed identity](search-how-to-managed-identities.md).
 
+## Security and permissions
+
++ To save a debug session to Azure storage, the search service identity must have **Storage Blob Data Contributor** permissions on Azure Storage. Otherwise, plan on choosing a full access connection string for the debug session connection to Azure Storage.
+
++ If the Azure Storage account is behind a firewall, configure it to [allow search service access](search-indexer-howto-access-ip-restricted.md).
+
 ## Create a debug session
 
 1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
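The **Storage Blob Data Contributor** grant described in the relocated section above can also be scripted. A minimal sketch with the `azure-identity` and `azure-mgmt-authorization` packages; the subscription, scope, and principal ID are placeholders, the role GUID is the built-in ID for Storage Blob Data Contributor, and the exact parameter model may vary by SDK version, so verify against the SDK docs:

```python
# Minimal sketch: assign Storage Blob Data Contributor to the search
# service's system-assigned identity over a storage account.
# Subscription, resource group, account name, and principal ID are
# placeholders; the role GUID is the built-in role's well-known ID.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUB = "<subscription-id>"
SCOPE = (
    f"/subscriptions/{SUB}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
ROLE_ID = f"{SCOPE}/providers/Microsoft.Authorization/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUB)
client.role_assignments.create(
    SCOPE,
    str(uuid.uuid4()),  # role assignment name must be a new GUID
    {
        "role_definition_id": ROLE_ID,
        "principal_id": "<search-service-principal-id>",
    },
)
```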

articles/search/search-get-started-portal-image-search.md

Lines changed: 25 additions & 15 deletions
@@ -6,7 +6,7 @@ author: haileytap
 ms.author: haileytapia
 ms.service: azure-ai-search
 ms.topic: quickstart
-ms.date: 09/12/2025
+ms.date: 10/08/2025
 ms.custom:
 - references_regions
 ---
@@ -46,20 +46,30 @@ For content extraction, you can choose either default extraction via Azure AI Se
 
 ### Supported embedding methods
 
-For content embedding, you can choose either image verbalization (followed by text vectorization) or multimodal embeddings. Deployment instructions for the models are provided in a [later section](#deploy-models). The following table describes both embedding methods.
+For content embedding, choose one of the following methods:
 
-| Method | Description | Supported models |
++ **Image verbalization:** Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.
+
++ **Multimodal embeddings:** Uses an embedding model to directly vectorize both text and images.
+
+The following table lists the supported providers and models for each method. Deployment instructions for the models are provided in a [later section](#deploy-models).
+
+| Provider | Models for image verbalization | Models for multimodal embeddings |
 |--|--|--|
-| Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
-| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |
+| [Azure OpenAI in Azure AI Foundry Models resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | LLMs:<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
+| [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects) | Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large | |
+| [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/hub-create-projects) | LLMs:<br>phi-4<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large<br>Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> | Cohere-embed-v3-english <sup>3</sup><br>Cohere-embed-v3-multilingual <sup>3</sup> |
+| [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>4</sup> | Embedding model: [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> | [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> |
 
 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
 
-<sup>2</sup> For billing purposes, you must [attach your Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
+<sup>2</sup> Azure OpenAI resources (with access to embedding models) that were created in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) aren't supported. You must create an Azure OpenAI resource in the Azure portal.
+
+<sup>3</sup> To use this model in the wizard, you must [deploy it as a serverless API deployment](/azure/ai-foundry/how-to/deploy-models-serverless).
 
-<sup>3</sup> `phi-4` is only available to Azure AI Foundry projects.
+<sup>4</sup> For billing purposes, you must [attach your Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
 
-<sup>4</sup> The Azure portal doesn't support `embed-v-4-0` for vectorization, so don't use it for this quickstart. Instead, use the [AML skill](cognitive-search-aml-skill.md) or [Azure AI Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) to programmatically specify this model. You can then use the portal to manage the skillset or vectorizer.
+<sup>5</sup> The Azure AI Vision multimodal embeddings APIs are available in [select regions](/azure/ai-services/computer-vision/overview-image-analysis#region-availability).
 
 ### Public endpoint requirements
 
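For a sense of how the embedding models in the table above are called once deployed, here's a minimal sketch using the `openai` Python package against an Azure OpenAI deployment; the endpoint, key, API version, and deployment name are placeholders:

```python
# Minimal sketch: request a text embedding from an Azure OpenAI deployment
# of one of the models listed above (e.g., text-embedding-3-large).
# Endpoint, key, api_version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-unique-name.openai.azure.com",  # custom subdomain (see footnote 1)
    api_key="<api-key>",
    api_version="<api-version>",
)

resp = client.embeddings.create(
    model="<text-embedding-3-large-deployment>",  # your deployment name
    input=["a verbalized image description to vectorize"],
)
print(len(resp.data[0].embedding))  # vector dimensionality
```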
@@ -91,7 +101,7 @@ On your Azure AI Search service:
 
 1. [Configure a system-assigned managed identity](search-how-to-managed-identities.md#create-a-system-managed-identity).
 
-1. [Assign the following roles](search-security-rbac.md) to yourself:
+1. [Assign the following roles](search-security-rbac.md) to yourself.
 
    + **Search Service Contributor**
 
@@ -101,7 +111,7 @@ On your Azure AI Search service:
 
 ### [**Azure Storage**](#tab/storage-perms)
 
-Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to these storage containers, which you create in the next section.
+Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to the storage containers you create in the next section.
 
 On your Azure Storage account:
 
@@ -126,9 +136,9 @@ On your Azure OpenAI resource:
 The Azure AI Foundry model catalog provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [AML skill](cognitive-search-aml-skill.md).
 
 > [!NOTE]
-> If you're using a hub-based project for multimodal embeddings, skip this step. The wizard requires key-based authentication in this scenario.
+> If you're using a hub-based project, skip this step. Hub-based projects support API keys instead of managed identities for authentication.
 
-On your Azure AI Foundry project:
+On your Azure AI Foundry resource:
 
 + Assign **Azure AI Project Manager** to your [search service identity](search-how-to-managed-identities.md#create-a-system-managed-identity).
 
@@ -269,7 +279,7 @@ To use the skills for image verbalization:
 
 1. Select your Azure subscription, resource, and LLM deployment.
 
-1. For the authentication type, select **System assigned identity**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using these resources.
 
@@ -281,7 +291,7 @@ To use the skills for image verbalization:
 
 1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
 
-1. For the authentication type, select **System assigned identity**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using these resources.
 
@@ -307,7 +317,7 @@ To use the skills for multimodal embeddings:
 
 1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
 
-1. If you're using Azure AI Vision, select **System assigned identity** for the authentication type. Otherwise, leave it as **API key**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using this resource.
 