Commit 3a09be2

committed
Updated vector-search-how-to-generate-embeddings.md
1 parent 88ab52b commit 3a09be2

File tree: 1 file changed

Lines changed: 173 additions & 43 deletions
@@ -1,5 +1,5 @@
 ---
-title: Generate embeddings
+title: Generate Embeddings
 titleSuffix: Azure AI Search
 description: Learn how to generate embeddings for downstream indexing into an Azure AI Search index.

@@ -10,7 +10,7 @@ ms.update-cycle: 180-days
 ms.custom:
   - ignite-2023
 ms.topic: how-to
-ms.date: 08/04/2025
+ms.date: 08/05/2025
 ---

 # Generate embeddings for search queries and documents
@@ -20,27 +20,43 @@ Azure AI Search doesn't host embedding models, so you're responsible for creatin
 | Approach | Description |
 | --- | --- |
 | [Integrated vectorization](vector-search-integrated-vectorization.md) | Use built-in data chunking and vectorization in Azure AI Search. This approach takes a dependency on indexers, skillsets, and built-in or custom skills that point to external embedding models, such as those in Azure AI Foundry. |
-| Manual vectorization | Manage data chunking and vectorization yourself. For indexing, you [push prevectorized documents](vector-search-how-to-create-index.md#load-vector-data-for-indexing) into vector fields in a search index. For queries, you provide precomputed vectors to the search engine. For demos of this approach, see the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples/tree/main) GitHub repository. |
+| Manual vectorization | Manage data chunking and vectorization yourself. For indexing, you [push prevectorized documents](vector-search-how-to-create-index.md#load-vector-data-for-indexing) into vector fields in a search index. For querying, you provide precomputed vectors to the search engine. For demos of this approach, see the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples/tree/main) GitHub repository. |

-We recommend integrated vectorization for most scenarios and use it for illustration in this article. Although you can use any supported embedding model, this article assumes Azure OpenAI embedding models.
+We recommend integrated vectorization for most scenarios. Although you can use any supported embedding model, this article uses Azure OpenAI embedding models for illustration.

 ## How embedding models are used in vector queries

-Embedding models are used to generate vectors for both [query inputs](#query-inputs) and [query outputs](#query-outputs).
+Embedding models generate vectors for both [query inputs](#query-inputs) and [query outputs](#query-outputs).

 ### Query inputs

-Query inputs are one of the following:
+Query inputs include the following:

-+ Text or images that are converted to vectors during query processing. With integrated vectorization, a [vectorizer](vector-search-how-to-configure-vectorizer.md) handles this task.
++ **Text or images that are converted to vectors during query processing**. As part of integrated vectorization, a [vectorizer](vector-search-how-to-configure-vectorizer.md) performs this task.

-+ Precomputed vectors. You can generate these vectors by passing the query input to an embedding model of your choice. To avoid [rate limiting](/azure/ai-services/openai/quotas-limits), implement retry logic in your workload. We use [tenacity](https://pypi.org/project/tenacity/) in our Python demo.
++ **Precomputed vectors**. You can generate these vectors by passing the query input to an embedding model of your choice. To avoid [rate limiting](/azure/ai-services/openai/quotas-limits), implement retry logic in your workload. Our [Python demo](https://github.com/Azure/azure-search-vector-samples/tree/93c839591bf92c2f10001d287871497b0f204a7c/demo-python) uses [tenacity](https://pypi.org/project/tenacity/).
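
To illustrate the retry guidance in that bullet, here's a minimal editorial sketch that isn't part of the committed file. It assumes the v1 `openai` Python package, a deployment named text-embedding-ada-002, and placeholder endpoint and key values:

```python
# Hypothetical retry wrapper for embedding calls; endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_random_exponential

client = AzureOpenAI(
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
    azure_endpoint="https://YOUR-OPENAI-RESOURCE.openai.azure.com",
)

@retry(
    retry=retry_if_exception_type(RateLimitError),  # retry only when the service returns 429
    wait=wait_random_exponential(min=1, max=20),    # exponential backoff with jitter
    stop=stop_after_attempt(6),                     # give up after six attempts
)
def generate_embedding(text: str) -> list[float]:
    response = client.embeddings.create(input=text, model="text-embedding-ada-002")
    return response.data[0].embedding

vector = generate_embedding("How do I use Python in VS Code?")
```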

 ### Query outputs

-Query outputs are the matching documents retrieved from a search index based on the query input.
+Based on the query input, the search engine retrieves matching documents from your search index. These documents are the query outputs.

-Your search index must have been previously loaded with documents containing one or more vector fields with embeddings. These embeddings can be generated using either integrated or manual vectorization. To ensure accurate results, use the same embedding model for both indexing and querying.
+Your search index must already contain documents with one or more vector fields populated by embeddings. You can create these embeddings through integrated or manual vectorization. To ensure accurate results, use the same embedding model for indexing and querying.
+
+## Tips for embedding model integration
+
++ **Identify use cases**. Evaluate specific use cases where embedding model integration for vector search features adds value to your search solution. Examples include [multimodal search](multimodal-search-overview.md) or matching image content with text content, multilingual search, and similarity search.
+
++ **Design a chunking strategy**. Embedding models have limits on the number of tokens they accept, so data chunking is necessary for large files. For more information, see [Chunk large documents for vector search solutions](vector-search-how-to-chunk-documents.md).
+
++ **Optimize cost and performance**. Vector search is resource intensive and subject to maximum limits, so vectorize only the fields that contain semantic meaning. [Reduce vector size](vector-search-how-to-configure-compression-storage.md) to store more vectors for the same price.
+
++ **Choose the right embedding model**. Select a model for your use case, such as word embeddings for text-based searches or image embeddings for visual searches. Consider pretrained models, such as text-embedding-ada-002 from OpenAI or the Image Retrieval REST API from [Azure AI Computer Vision](/azure/ai-services/computer-vision/how-to/image-retrieval).
+
++ **Normalize vector lengths**. To improve the accuracy and performance of similarity search, normalize vector lengths before you store them in a search index. Most pretrained models are already normalized.
+
++ **Fine-tune the model**. If needed, fine-tune the model on your domain-specific data to improve its performance and relevance to your search application.
+
++ **Test and iterate**. Continuously test and refine the embedding model integration to achieve your desired search performance and user satisfaction.
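
As a rough illustration of the **Normalize vector lengths** tip above (an editorial sketch, not part of the committed file), the following Python snippet scales an embedding to unit length before it's written to a vector field; the sample values and field handling are assumptions:

```python
import numpy as np

def normalize(vector: list[float]) -> list[float]:
    """Scale a vector to unit length (L2 norm) so cosine and dot-product similarity agree."""
    array = np.asarray(vector, dtype=np.float32)
    norm = np.linalg.norm(array)
    return (array / norm).tolist() if norm > 0 else array.tolist()

# Stand-in for a real 1,536-dimension embedding returned by the model.
raw_embedding = [0.12, -0.34, 0.56]
unit_embedding = normalize(raw_embedding)  # store this value in the index's vector field
```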

 ## Create resources in the same region

@@ -54,12 +70,142 @@ To use the same region for your resources:

 1. Create an Azure OpenAI resource and Azure AI Search service in the same region.

-> [!NOTE]
-> To use [semantic ranking](semantic-how-to-query-request.md) for hybrid queries or machine learning models for [AI enrichment](cognitive-search-concept-intro.md), choose an Azure AI Search region that provides those features.
+> [!TIP]
+> Want to use [semantic ranking](semantic-how-to-query-request.md) for [hybrid queries](hybrid-search-overview.md) or a machine learning model in a [custom skill](cognitive-search-custom-skill-interface.md) for [AI enrichment](cognitive-search-concept-intro.md)? Choose an Azure AI Search region that provides those features.
+
+## Choose an embedding model in Azure AI Foundry
+
+When you add knowledge to an agent workflow in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs), you have the option of creating a search index. A wizard guides you through the steps.
+
+One step involves selecting an embedding model to vectorize your plain text content. The following models are supported:
+
++ text-embedding-3-large
++ text-embedding-3-small
++ text-embedding-ada-002
++ Cohere-embed-v3-english
++ Cohere-embed-v3-multilingual
+
+Your model must already be deployed, and you must have permission to access it. For more information, see [Deployment overview for Azure AI Foundry Models](/azure/ai-foundry/concepts/deployments-overview).

 ## Generate an embedding for an improvised query

-The following Python code generates an embedding that you can paste into the "values" property of a vector query.
+If you don't want to use integrated vectorization, you can manually generate an embedding and paste it into the `vectorQueries.vector` property of a vector query. For more information, see [Create a vector query in Azure AI Search](vector-search-how-to-query.md).
+
+The following examples assume the text-embedding-ada-002 model. Replace `YOUR-API-KEY` and `YOUR-OPENAI-RESOURCE` with your Azure OpenAI resource details.
+
+### [.NET](#tab/dotnet)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json;
+
+class Program
+{
+    static async Task Main(string[] args)
+    {
+        var apiKey = "YOUR-API-KEY";
+        var apiBase = "https://YOUR-OPENAI-RESOURCE.openai.azure.com";
+        var apiVersion = "2024-02-01";
+        var engine = "text-embedding-ada-002";
+
+        var client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiKey}");
+
+        var requestBody = new
+        {
+            input = "How do I use C# in VS Code?"
+        };
+
+        var response = await client.PostAsync(
+            $"{apiBase}/openai/deployments/{engine}/embeddings?api-version={apiVersion}",
+            new StringContent(JsonConvert.SerializeObject(requestBody), Encoding.UTF8, "application/json")
+        );
+
+        var responseBody = await response.Content.ReadAsStringAsync();
+        Console.WriteLine(responseBody);
+    }
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.io.OutputStream;
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+
+public class Main {
+    public static void main(String[] args) {
+        String apiKey = "YOUR-API-KEY";
+        String apiBase = "https://YOUR-OPENAI-RESOURCE.openai.azure.com";
+        String engine = "text-embedding-ada-002";
+        String apiVersion = "2024-02-01";
+
+        try {
+            URL url = new URL(String.format("%s/openai/deployments/%s/embeddings?api-version=%s", apiBase, engine, apiVersion));
+            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
+            connection.setRequestMethod("POST");
+            connection.setRequestProperty("Authorization", "Bearer " + apiKey);
+            connection.setRequestProperty("Content-Type", "application/json");
+            connection.setDoOutput(true);
+
+            String requestBody = "{\"input\": \"How do I use Java in VS Code?\"}";
+
+            try (OutputStream os = connection.getOutputStream()) {
+                os.write(requestBody.getBytes());
+            }
+
+            try (BufferedReader br = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
+                StringBuilder response = new StringBuilder();
+                String line;
+                while ((line = br.readLine()) != null) {
+                    response.append(line);
+                }
+                System.out.println(response);
+            }
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+    }
+}
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const apiKey = "YOUR-API-KEY";
+const apiBase = "https://YOUR-OPENAI-RESOURCE.openai.azure.com";
+const engine = "text-embedding-ada-002";
+const apiVersion = "2024-02-01";
+
+async function generateEmbedding() {
+  const response = await fetch(
+    `${apiBase}/openai/deployments/${engine}/embeddings?api-version=${apiVersion}`,
+    {
+      method: "POST",
+      headers: {
+        "Authorization": `Bearer ${apiKey}`,
+        "Content-Type": "application/json",
+      },
+      body: JSON.stringify({
+        input: "How do I use JavaScript in VS Code?",
+      }),
+    }
+  );
+
+  const data = await response.json();
+  console.log(data.data[0].embedding);
+}
+
+generateEmbedding();
+```
+
+### [Python](#tab/python)

 ```python
 !pip install openai
@@ -79,39 +225,23 @@ embeddings = response['data'][0]['embedding']
 print(embeddings)
 ```

-Output is a vector array of 1,536 dimensions.
-
-## Choose an embedding model in Azure AI Foundry
+### [REST API](#tab/rest-api)

-In the Azure AI Foundry portal, you have the option of creating a search index when you add knowledge to your agent workflow. A wizard guides you through the steps. When asked to provide an embedding model that vectorizes your plain text content, you can use one of the following supported models:
-
-+ text-embedding-3-large
-+ text-embedding-3-small
-+ text-embedding-ada-002
-+ Cohere-embed-v3-english
-+ Cohere-embed-v3-multilingual
-
-Your model must already be deployed and you must have permission to access it. For more information, see [Deploy AI models in Azure AI Foundry portal](/azure/ai-foundry/concepts/deployments-overview).
-
-## Tips and recommendations for embedding model integration
-
-+ **Identify use cases**: Evaluate the specific use cases where embedding model integration for vector search features can add value to your search solution. This can include multimodal or matching image content with text content, multilingual search, or similarity search.
-
-+ **Design a chunking strategy**: Embedding models have limits on the number of tokens they can accept, which introduces a data chunking requirement for large files. For more information, see [Chunk large documents for vector search solutions](vector-search-how-to-chunk-documents.md).
-
-+ **Optimize cost and performance**: Vector search can be resource-intensive and is subject to maximum limits, so consider only vectorizing the fields that contain semantic meaning. [Reduce vector size](vector-search-how-to-configure-compression-storage.md) so that you can store more vectors for the same price.
-
-+ **Choose the right embedding model:** Select an appropriate model for your specific use case, such as word embeddings for text-based searches or image embeddings for visual searches. Consider using pretrained models like **text-embedding-ada-002** from OpenAI or **Image Retrieval** REST API from [Azure AI Computer Vision](/azure/ai-services/computer-vision/how-to/image-retrieval).
-
-+ **Normalize Vector lengths**: Ensure that the vector lengths are normalized before storing them in the search index to improve the accuracy and performance of similarity search. Most pretrained models already are normalized but not all.
-
-+ **Fine-tune the model**: If needed, fine-tune the selected model on your domain-specific data to improve its performance and relevance to your search application.
+```http
+POST https://YOUR-OPENAI-RESOURCE.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2024-02-01
+Authorization: Bearer YOUR-API-KEY
+Content-Type: application/json
+
+{
+    "input": "How do I use REST APIs in VS Code?"
+}
+```

-+ **Test and iterate**: Continuously test and refine your embedding model integration to achieve the desired search performance and user satisfaction.
+The output is a vector array of 1,536 dimensions.
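
As an editorial sketch of where that array goes (not part of the committed file), the following request passes a precomputed embedding in the `vectorQueries.vector` property of a search query; the index name, vector field name, select list, and API version are assumptions:

```python
import requests

search_endpoint = "https://YOUR-SEARCH-SERVICE.search.windows.net"  # placeholder service endpoint
index_name = "my-index"                                             # assumed index name
embedding = [0.0123, -0.0456]  # paste the full 1,536-dimension array from the examples above

query = {
    "count": True,
    "select": "title, chunk",            # assumed fields in the index
    "vectorQueries": [
        {
            "kind": "vector",
            "vector": embedding,         # the precomputed embedding goes in vectorQueries.vector
            "fields": "contentVector",   # assumed vector field name
            "k": 5
        }
    ]
}

response = requests.post(
    f"{search_endpoint}/indexes/{index_name}/docs/search?api-version=2024-07-01",
    headers={"api-key": "YOUR-SEARCH-ADMIN-KEY", "Content-Type": "application/json"},
    json=query,
)
print(response.json())
```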

-## Next steps
+## Related content

-+ [Understanding embeddings in Azure OpenAI in Azure AI Foundry Models](/azure/ai-services/openai/concepts/understand-embeddings)
-+ [Learn how to generate embeddings](/azure/ai-services/openai/how-to/embeddings?tabs=console)
++ [Understand embeddings in Azure OpenAI in Azure AI Foundry Models](/azure/ai-services/openai/concepts/understand-embeddings)
++ [Generate embeddings with Azure OpenAI](/azure/ai-services/openai/how-to/embeddings?tabs=console)
 + [Tutorial: Explore Azure OpenAI embeddings and document search](/azure/ai-services/openai/tutorials/embeddings?tabs=command-line)
 + [Tutorial: Choose a model (RAG solutions in Azure AI Search)](tutorial-rag-build-solution-models.md)
