articles/ai-studio/how-to/model-catalog-overview.md (4 additions & 4 deletions)
@@ -165,7 +165,7 @@ Pay-per-token billing is available only to users whose Azure subscription belong
### Network isolation for models deployed via serverless APIs
- Managed computes for models deployed as serverless APIs follow the public network access flag setting of the Azure AI Foundry hub that has the project in which the deployment exists. To help secure your managed compute, disable the public network access flag on your Azure AI Foundry hub. You can help secure inbound communication from a client to your managed compute by using a private endpoint for the hub.
+ Endpoints for models deployed as serverless APIs follow the public network access flag setting of the Azure AI Foundry hub that has the project in which the deployment exists. To help secure your serverless API endpoint, disable the public network access flag on your Azure AI Foundry hub. You can help secure inbound communication from a client to your endpoint by using a private endpoint for the hub.
To set the public network access flag for the Azure AI Foundry hub:
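The portal steps for that procedure fall outside the hunk shown here. For illustration only, and not part of the changed article, the sketch below shows one way to disable the flag programmatically by patching the hub's underlying `Microsoft.MachineLearningServices/workspaces` resource through the ARM REST API. The subscription, resource group, and hub names are placeholders, and the `api-version` value is an assumption.

```python
# Hedged sketch: disable public network access on an Azure AI Foundry hub by patching
# the workspace resource that backs it. Requires `pip install azure-identity requests`
# and a signed-in identity with rights to update the hub.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
hub_name = "<ai-foundry-hub-name>"      # placeholder

# Acquire a management-plane (ARM) token.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.MachineLearningServices/workspaces/{hub_name}"
    "?api-version=2024-04-01"           # assumed API version
)

# PATCH only the publicNetworkAccess property; other hub settings stay as configured.
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"properties": {"publicNetworkAccess": "Disabled"}},
)
resp.raise_for_status()
print("Update accepted:", resp.status_code)
```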
@@ -177,11 +177,11 @@ To set the public network access flag for the Azure AI Foundry hub:
#### Limitations
- * If you have an Azure AI Foundry hub with a managed compute created before July 11, 2024, managed computes added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new managed compute for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.
+ * If you have an Azure AI Foundry hub with a private endpoint created before July 11, 2024, serverless API endpoints added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new private endpoint for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.
- * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a managed compute on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
+ * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing serverless API deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
- * Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for MaaS deployments in private hubs, because private hubs have the public network access flag disabled.
+ * Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for serverless API deployments in private hubs, because private hubs have the public network access flag disabled.
* Any network configuration change (for example, enabling or disabling the public network access flag) might take up to five minutes to propagate.
- Added `--provision-network-now` property to trigger provisioning of the managed network when creating a workspace with the managed network enabled; otherwise it has no effect.

articles/machine-learning/concept-model-lifecycle-retirement.md (2 additions & 2 deletions)
@@ -22,7 +22,7 @@ Models in the model catalog are continually refreshed with newer and more capabl
> This article describes deprecation and retirement only for models that can be deployed to __serverless APIs__, not managed compute. To learn more about the differences between deployment to serverless APIs and managed computes, see [Model Catalog and Collections](concept-model-catalog.md).
> [!NOTE]
- > Azure OpenAI models in the model catalog are provided through Azure OpenAI Service. For information about Azure Open AI model deprecation and retirement, see the [Azure OpenAI service product documentation](/azure/ai-services/openai/concepts/model-retirements).
+ > Azure OpenAI models in the model catalog are provided through Azure OpenAI Service. For information about Azure OpenAI model deprecation and retirement, see the [Azure OpenAI service product documentation](/azure/ai-services/openai/concepts/model-retirements).
## Model lifecycle stages
@@ -72,4 +72,4 @@ Models labeled _Retired_ are no longer available for use. You can't create new d
## Related content
-[Model Catalog and Collections](concept-model-catalog.md)
- -[Data, privacy, and security for use of models through the Model Catalog](concept-data-privacy.md)
+ -[Data, privacy, and security for use of models through the Model Catalog](concept-data-privacy.md)

articles/search/includes/quickstarts/java.md (1 addition & 1 deletion)
@@ -5,7 +5,7 @@ ms.service: azure-ai-search
ms.custom:
- ignite-2023
ms.topic: include
- ms.date: 11/01/2024
+ ms.date: 01/07/2025
---
Build a Java console application using the [Azure.Search.Documents](/java/api/overview/azure/search) library to create, load, and query a search index.

articles/search/search-get-started-portal-import-vectors.md (3 additions & 3 deletions)
@@ -193,12 +193,12 @@ The wizard supports Azure AI Vision image retrieval through multimodal embedding
1. Make sure your Azure AI Search service is in the same region.
- 1. After the service is deployed, go to the resource and select **Access control** to assign the **Cognitive Services OpenAI User** role to your search service's managed identity. Optionally, you can use key-based authentication for the connection.
+ 1. After the service is deployed, go to the resource and select **Access control** to assign the **Cognitive Services User** role to your search service's managed identity. Optionally, you can use key-based authentication for the connection.
After you finish these steps, you should be able to select the Azure AI Vision vectorizer in the **Import and vectorize data** wizard.
> [!NOTE]
- > If you can't select an Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region. Also make sure that your search service's managed identity has **Cognitive Services OpenAI User** permissions.
+ > If you can't select an Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region. Also make sure that your search service's managed identity has **Cognitive Services User** permissions.
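For illustration only (not part of the changed article): a hedged Python sketch of the role assignment the updated step describes, granting the search service's managed identity the **Cognitive Services User** role on the Azure AI services account. All resource names and the principal ID are placeholders, and looking up the role by name avoids hard-coding its GUID.

```python
# Hedged sketch: assign "Cognitive Services User" on an Azure AI services account to the
# search service's system-assigned managed identity.
# Requires `pip install azure-identity azure-mgmt-authorization`.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"                          # placeholder
resource_group = "<resource-group>"                            # placeholder
ai_account_name = "<ai-services-account-name>"                 # placeholder
search_principal_id = "<search-managed-identity-object-id>"    # placeholder

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to the AI services account the vectorizer will call.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.CognitiveServices/accounts/{ai_account_name}"
)

# Look up the built-in role by name rather than hard-coding its definition ID.
role_def = next(
    rd for rd in client.role_definitions.list(scope)
    if rd.role_name == "Cognitive Services User"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_def.id,
        principal_id=search_principal_id,
        principal_type="ServicePrincipal",  # managed identities are service principals
    ),
)
```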
### [Azure AI Foundry model catalog](#tab/model-catalog)
@@ -331,7 +331,7 @@ Chunking is built in and nonconfigurable. The effective settings are:
1. Specify whether you want your search service to authenticate using an API key or managed identity.
- The identity should have a **Cognitive Services OpenAI User** role on the Azure AI multi-services account.
+ The identity should have a **Cognitive Services User** role on the Azure AI multi-services account.
1. Select the checkbox that acknowledges the billing effects of using these resources.

articles/search/search-get-started-text.md (3 additions & 1 deletion)
@@ -14,13 +14,15 @@ ms.custom:
- devx-track-python
- ignite-2023
ms.topic: quickstart
- ms.date: 11/01/2024
+ ms.date: 01/07/2025
---
# Quickstart: Full text search using the Azure SDKs
Learn how to use the *Azure.Search.Documents* client library in an Azure SDK to create, load, and query a search index using sample data for [full text search](search-lucene-query-architecture.md). Full text search uses Apache Lucene for indexing and queries, and a BM25 ranking algorithm for scoring results.
+ This quickstart creates and queries a small hotels-quickstart index containing data about 4 hotels.
+
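For illustration only (not part of the changed article): a minimal sketch of that create, load, and query flow using the Python client library `azure-search-documents`, which parallels the `Azure.Search.Documents` libraries the quickstart covers. The endpoint, key, sample documents, and simplified schema are assumptions.

```python
# Hedged sketch: create a small hotels-quickstart index, load a couple of documents,
# and run a full text (BM25-ranked) query. Requires `pip install azure-search-documents`.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
)

endpoint = "https://<search-service-name>.search.windows.net"  # placeholder
credential = AzureKeyCredential("<admin-api-key>")              # placeholder

# 1. Create the index with a simplified schema.
index_client = SearchIndexClient(endpoint, credential)
index = SearchIndex(
    name="hotels-quickstart",
    fields=[
        SimpleField(name="HotelId", type=SearchFieldDataType.String, key=True),
        SearchableField(name="HotelName", type=SearchFieldDataType.String, sortable=True),
        SearchableField(name="Description", type=SearchFieldDataType.String),
    ],
)
index_client.create_or_update_index(index)

# 2. Load a few illustrative documents.
search_client = SearchClient(endpoint, "hotels-quickstart", credential)
search_client.upload_documents([
    {"HotelId": "1", "HotelName": "Stay-Kay City Hotel", "Description": "Close to the city center."},
    {"HotelId": "2", "HotelName": "Old Century Hotel", "Description": "Classic hotel with harbor views."},
])

# 3. Query with full text search.
for result in search_client.search(search_text="hotel near the center", include_total_count=True):
    print(result["HotelId"], result["HotelName"])
```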
This quickstart has steps for the following SDKs:
+[Azure SDK for .NET](?tabs=dotnet#create-load-and-query-an-index)

articles/search/search-limits-quotas-capacity.md (2 additions & 2 deletions)
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist
ms.service: azure-ai-search
ms.topic: conceptual
- ms.date: 12/09/2024
+ ms.date: 01/07/2025
ms.custom:
- references_regions
- build-2024
@@ -69,7 +69,7 @@ Maximum number of documents per index are:
+ 288 billion on L1
+ 576 billion on L2
- Each instance of a complex collection counts as a separate document in terms of these limits.
+ You can check the number of documents in the Azure portal and through REST calls that include `search=*` and `count=true`.
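For illustration only (not part of the changed article): a small Python sketch of the REST call the added line mentions. The article's `count=true` corresponds to the `$count=true` query parameter; the service name, index name, key, and API version are placeholders or assumptions.

```python
# Hedged sketch: return the document count for an index by combining search=* with
# $count=true. The count comes back in the @odata.count field; the first page of
# matching documents is returned as well.
import requests

service = "<search-service-name>"   # placeholder
index_name = "<index-name>"         # placeholder
api_key = "<query-api-key>"         # placeholder

resp = requests.get(
    f"https://{service}.search.windows.net/indexes/{index_name}/docs",
    params={
        "api-version": "2024-07-01",  # assumed stable API version
        "search": "*",
        "$count": "true",
    },
    headers={"api-key": api_key},
)
resp.raise_for_status()
print("Documents in index:", resp.json()["@odata.count"])
```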
Maximum size of each document is approximately 16 megabytes. Document size is actually a limit on the size of the indexing API request payload, which is 16 megabytes. That payload can be a single document, or a batch of documents. For a batch with a single document, the maximum document size is 16 MB of JSON.
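For illustration only (not part of the changed article): because the 16 MB limit applies to the indexing request payload rather than to individual documents, one way to stay under it is to size batches by their serialized JSON. The 15 MB target and the payload shape below are assumptions.

```python
# Hedged sketch: split documents into indexing batches whose serialized JSON stays
# under the ~16 MB request limit described above; 15 MB leaves headroom for envelope
# overhead. Each action mirrors the REST "upload" indexing action.
import json

MAX_PAYLOAD_BYTES = 15 * 1024 * 1024  # stay comfortably below the 16 MB request limit

def batches_under_limit(docs):
    """Yield lists of document actions whose JSON payload fits in one indexing request."""
    batch, size = [], 0
    for doc in docs:
        action = {"@search.action": "upload", **doc}
        doc_size = len(json.dumps(action).encode("utf-8"))
        if batch and size + doc_size > MAX_PAYLOAD_BYTES:
            yield batch
            batch, size = [], 0
        batch.append(action)
        size += doc_size
    if batch:
        yield batch

# Example: each yielded batch can be posted to /docs/index or passed to upload_documents.
docs = [{"HotelId": str(i), "Description": "x" * 1000} for i in range(10)]
for b in batches_under_limit(docs):
    print(len(b), "documents in this batch")
```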