articles/ai-foundry/foundry-models/concepts/models-from-partners.md (4 additions, 4 deletions)
@@ -84,7 +84,7 @@ See [this model collection in Azure AI Foundry portal](https://ai.azure.com/expl
 
 ## Microsoft
 
-Microsoft models include various model groups such as MAI models, Phi models, healthcare AI models, and more. To see all the available Microsoft models, view [the Microsoft model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=phi/?cid=learnDocs).
+Microsoft models include various model groups such as MAI models, Phi models, healthcare AI models, and more.
 
 | Model | Type | Capabilities | Project type |
 | ------ | ---- | ------------ | ------------ |
@@ -146,9 +146,9 @@ The Stability AI collection of image generation models includes Stable Image Cor
 
 | Model | Type | Capabilities | Project type |
 | ------ | ---- | ------------ | ------------ |
-|[Stable Diffusion 3.5 Large](https://ai.azure.com/explore/models/Stable-Diffusion-3.5-Large/version/1/registry/azureml-stabilityai/?cid=learnDocs)| Image generation | - **Input:** text and image (1,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats**: Image (PNG and JPG) | Hub-based |
-|[Stable Image Core](https://ai.azure.com/explore/models/Stable-Image-Core/version/1/registry/azureml-stabilityai/?cid=learnDocs)| Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) | Hub-based |
-|[Stable Image Ultra](https://ai.azure.com/explore/models/Stable-Image-Ultra/version/1/registry/azureml-stabilityai/?cid=learnDocs)| Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) | Hub-based |
+|[Stable Diffusion 3.5 Large](https://ai.azure.com/explore/models/Stable-Diffusion-3.5-Large/version/1/registry/azureml-stabilityai/?cid=learnDocs)| Image generation | - **Input:** text and image (1,000 tokens and 1 image) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats**: Image (PNG and JPG) |Foundry, Hub-based |
+|[Stable Image Core](https://ai.azure.com/explore/models/Stable-Image-Core/version/1/registry/azureml-stabilityai/?cid=learnDocs)| Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) |Foundry, Hub-based |
+|[Stable Image Ultra](https://ai.azure.com/explore/models/Stable-Image-Ultra/version/1/registry/azureml-stabilityai/?cid=learnDocs)| Image generation | - **Input:** text (1,000 tokens) <br /> - **Output:** One Image <br /> - **Tool calling:** No <br /> - **Response formats:** Image (PNG and JPG) |Foundry, Hub-based |
 
 See [this model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=Stability+AI/?cid=learnDocs).
articles/ai-services/content-understanding/whats-new.md (7 additions, 0 deletions)
@@ -17,6 +17,13 @@ ms.custom:
 
 Azure AI Content Understanding service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
 
+## October 2025
+
+Azure AI Content Understanding preview version introduces the following updates:
+
+* Azure AI Content Understanding now has increased field count support (1,000) for all modalities.
+* The API response body now includes input, output, and contextualization tokens consumed as part of the `tokens` object. Check out the [quickstart](quickstart/use-rest-api.md) article for more information.
+
 ## May 2025
 
 The Azure AI Content Understanding [**`2025-05-01-preview`**](/rest/api/contentunderstanding/content-analyzers?view=rest-contentunderstanding-2025-05-01-preview&preserve-view=true) REST API is now available. This update introduces the following updates and enhanced capabilities:
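As a sketch of the new `tokens` usage object described above: the response body reports input, output, and contextualization tokens, so total consumption can be summed from it. The field names and response shape below are illustrative assumptions, not the documented schema; see the quickstart for the real response.

```python
# Hypothetical Content Understanding analyze response. The nesting and the
# key names inside `tokens` are assumptions for illustration only.
result = {
    "status": "Succeeded",
    "result": {
        "tokens": {  # usage object added in the October 2025 preview
            "inputTokens": 1200,
            "outputTokens": 340,
            "contextualizationTokens": 75,
        },
    },
}

tokens = result["result"]["tokens"]
total = sum(tokens.values())  # one number for billing or quota tracking
print(f"total tokens consumed: {total}")
```

A caller tracking spend per request would accumulate these totals across analyze calls.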
articles/search/cognitive-search-how-to-debug-skillset.md (6 additions, 6 deletions)
@@ -30,12 +30,6 @@ For background on how a debug session works, see [Debug sessions in Azure AI Sea
 
 + An existing enrichment pipeline, including a data source, a skillset, an indexer, and an index.
 
-## Security and permissions
-
-+ To save a debug session to Azure storage, the search service identity must have **Storage Blob Data Contributor** permissions on Azure Storage. Otherwise, plan on choosing a full access connection string for the debug session connection to Azure Storage.
-
-+ If the Azure Storage account is behind a firewall, configure it to [allow search service access](search-indexer-howto-access-ip-restricted.md).
-
 ## Limitations
 
 Debug sessions work with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources, with the following exceptions:
@@ -50,6 +44,12 @@ Debug sessions work with all generally available [indexer data sources](search-d
 
 + For custom skills, a user-assigned managed identity isn't supported for a debug session connection to Azure Storage. As stated in the prerequisites, you can use a system managed identity, or specify a full access connection string that includes a key. For more information, see [Connect a search service to other Azure resources using a managed identity](search-how-to-managed-identities.md).
 
+## Security and permissions
+
++ To save a debug session to Azure storage, the search service identity must have **Storage Blob Data Contributor** permissions on Azure Storage. Otherwise, plan on choosing a full access connection string for the debug session connection to Azure Storage.
+
++ If the Azure Storage account is behind a firewall, configure it to [allow search service access](search-indexer-howto-access-ip-restricted.md).
+
 ## Create a debug session
 
 1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
articles/search/search-get-started-portal-image-search.md (25 additions, 15 deletions)
@@ -6,7 +6,7 @@ author: haileytap
 ms.author: haileytapia
 ms.service: azure-ai-search
 ms.topic: quickstart
-ms.date: 09/12/2025
+ms.date: 10/08/2025
 ms.custom:
   - references_regions
 ---
@@ -46,20 +46,30 @@ For content extraction, you can choose either default extraction via Azure AI Se
 
 ### Supported embedding methods
 
-For content embedding, you can choose either image verbalization (followed by text vectorization) or multimodal embeddings. Deployment instructions for the models are provided in a [later section](#deploy-models). The following table describes both embedding methods.
+For content embedding, choose one of the following methods:
 
-| Method | Description | Supported models |
+**Image verbalization:** Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.
+
+**Multimodal embeddings:** Uses an embedding model to directly vectorize both text and images.
+
+The following table lists the supported providers and models for each method. Deployment instructions for the models are provided in a [later section](#deploy-models).
+
+| Provider | Models for image verbalization | Models for multimodal embeddings |
 |--|--|--|
-| Image verbalization | Uses an LLM to generate natural-language descriptions of images, and then uses an embedding model to vectorize plain text and verbalized images.<br><br>Requires an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> or [Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects).<br><br>For text vectorization, you can also use an [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | LLMs:<br>GPT-4o<br>GPT-4o-mini<br>phi-4 <sup>4</sup><br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large |
-| Multimodal embeddings | Uses an embedding model to directly vectorize both text and images.<br><br>Requires an [Azure AI Foundry hub-based project](/azure/ai-foundry/how-to/create-projects) or [Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>3</sup> in a [supported region](cognitive-search-skill-vision-vectorize.md). | Cohere-embed-v3-english<br>Cohere-embed-v3-multilingual<br>Cohere-embed-v4 <sup>5</sup> |
+|[Azure OpenAI in Azure AI Foundry Models resource](/azure/ai-services/openai/how-to/create-resource) <sup>1, 2</sup> | LLMs:<br>gpt-4o<br>gpt-4o-mini<br>gpt-5<br>gpt-5-mini<br>gpt-5-nano<br><br>Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large ||
+|[Azure AI Foundry project](/azure/ai-foundry/how-to/create-projects)| Embedding models:<br>text-embedding-ada-002<br>text-embedding-3-small<br>text-embedding-3-large ||
+|[Azure AI services multi-service resource](/azure/ai-services/multi-service-resource#azure-ai-multi-services-resource-for-azure-ai-search-skills) <sup>4</sup> | Embedding model: [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> |[Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) <sup>5</sup> |
 
 <sup>1</sup> The endpoint of your Azure OpenAI resource must have a [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains), such as `https://my-unique-name.openai.azure.com`. If you created your resource in the [Azure portal](https://portal.azure.com/), this subdomain was automatically generated during resource setup.
 
-<sup>2</sup> For billing purposes, you must [attach your Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
+<sup>2</sup> Azure OpenAI resources (with access to embedding models) that were created in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) aren't supported. You must create an Azure OpenAI resource in the Azure portal.
+
+<sup>3</sup> To use this model in the wizard, you must [deploy it as a serverless API deployment](/azure/ai-foundry/how-to/deploy-models-serverless).
 
-<sup>3</sup> `phi-4` is only available to Azure AI Foundry projects.
+<sup>4</sup> For billing purposes, you must [attach your Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md) to the skillset in your Azure AI Search service. Unless you use a [keyless connection (preview)](cognitive-search-attach-cognitive-services.md#bill-through-a-keyless-connection) to create the skillset, both resources must be in the same region.
 
-<sup>4</sup> The Azure portal doesn't support `embed-v-4-0` for vectorization, so don't use it for this quickstart. Instead, use the [AML skill](cognitive-search-aml-skill.md) or [Azure AI Foundry model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) to programmatically specify this model. You can then use the portal to manage the skillset or vectorizer.
+<sup>5</sup> The Azure AI Vision multimodal embeddings APIs are available in [select regions](/azure/ai-services/computer-vision/overview-image-analysis#region-availability).
 
 ### Public endpoint requirements
@@ -91,7 +101,7 @@ On your Azure AI Search service:
 
 1. [Configure a system-assigned managed identity](search-how-to-managed-identities.md#create-a-system-managed-identity).
 
-1. [Assign the following roles](search-security-rbac.md) to yourself:
+1. [Assign the following roles](search-security-rbac.md) to yourself.
 
    + **Search Service Contributor**
@@ -101,7 +111,7 @@ On your Azure AI Search service:
 
 ### [**Azure Storage**](#tab/storage-perms)
 
-Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to these storage containers, which you create in the next section.
+Azure Storage is both the data source for your documents and the destination for extracted images. Your search service requires access to the storage containers you create in the next section.
 
 On your Azure Storage account:
@@ -126,9 +136,9 @@ On your Azure OpenAI resource:
 The Azure AI Foundry model catalog provides LLMs for image verbalization and embedding models for text and image vectorization. Your search service requires access to call the [GenAI Prompt skill](cognitive-search-skill-genai-prompt.md) and [AML skill](cognitive-search-aml-skill.md).
 
 > [!NOTE]
-> If you're using a hub-based project for multimodal embeddings, skip this step. The wizard requires key-based authentication in this scenario.
+> If you're using a hub-based project, skip this step. Hub-based projects support API keys instead of managed identities for authentication.
 
-On your Azure AI Foundry project:
+On your Azure AI Foundry resource:
 
 + Assign **Azure AI Project Manager** to your [search service identity](search-how-to-managed-identities.md#create-a-system-managed-identity).
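The portal role-assignment step above maps to an Azure Resource Manager role-assignment request. As a hedged sketch of that mapping only: the ARM `roleAssignments` PUT endpoint and `api-version` below are the standard Azure RBAC REST shape, but the subscription, resource names, role-definition GUID, and principal ID are all placeholders you would replace (look up the real **Azure AI Project Manager** role-definition ID before using anything like this).

```python
import uuid

# Placeholder identifiers; nothing here is a real subscription or principal.
subscription = "00000000-0000-0000-0000-000000000000"
scope = (
    f"/subscriptions/{subscription}/resourceGroups/my-rg"
    "/providers/Microsoft.CognitiveServices/accounts/my-foundry-resource"
)

# Role assignments are created by PUT against the target scope.
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}"
    "?api-version=2022-04-01"
)
body = {
    "properties": {
        # GUID of the "Azure AI Project Manager" role definition (placeholder).
        "roleDefinitionId": (
            f"/subscriptions/{subscription}"
            "/providers/Microsoft.Authorization/roleDefinitions/<role-guid>"
        ),
        # Object ID of the search service's system-assigned identity.
        "principalId": "<search-service-managed-identity-object-id>",
        "principalType": "ServicePrincipal",
    }
}
print(url)
```

The portal performs the equivalent call for you; the sketch only shows what the step grants and at which scope.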
@@ -269,7 +279,7 @@ To use the skills for image verbalization:
 
 1. Select your Azure subscription, resource, and LLM deployment.
 
-1. For the authentication type, select **System assigned identity**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using these resources.
@@ -281,7 +291,7 @@ To use the skills for image verbalization:
 
 1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
 
-1. For the authentication type, select **System assigned identity**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using these resources.
@@ -307,7 +317,7 @@ To use the skills for multimodal embeddings:
 
 1. Select your Azure subscription, resource, and embedding model deployment (if applicable).
 
-1. If you're using Azure AI Vision, select **System assigned identity** for the authentication type. Otherwise, leave it as **API key**.
+1. For the authentication type, select **System assigned identity** if you're not using a hub-based project. Otherwise, leave it as **API key**.
 
 1. Select the checkbox that acknowledges the billing effects of using this resource.