Commit 5b86a16 (1 parent: df55873)

peer review edits

File tree

1 file changed: +8 -4 lines


articles/ai-services/openai/concepts/understand-embeddings.md

Lines changed: 8 additions & 4 deletions
@@ -16,21 +16,25 @@ ms.custom:
# Understand embeddings in Azure OpenAI Service

- An embedding is a special format of data representation that machine learning models and algorithms can easily utilize. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar.
+ An embedding is a special format of data representation that machine learning models and algorithms can easily use. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar.
## Embedding models

- Different Azure OpenAI embedding models are created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure whether long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
+ Different Azure OpenAI embedding models are created to be good at a particular task:
+
+ - **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text.
+ - **Text search embeddings** help measure whether long documents are relevant to a short query.
+ - **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.

Embeddings make it easier to do machine learning on large inputs representing words by capturing the semantic similarities in a vector space. Therefore, you can use embeddings to determine if two text chunks are semantically related or similar, and provide a score to assess similarity.
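The scoring idea above can be sketched in a few lines of Python. The four-dimensional vectors here are invented purely for illustration (real embedding vectors have many more dimensions and come from a model); only the cosine-similarity formula itself is taken as given.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- hypothetical values, not real model output.
dog = [0.9, 0.1, 0.05, 0.3]
puppy = [0.85, 0.15, 0.1, 0.35]
invoice = [0.05, 0.9, 0.8, 0.02]

print(cosine_similarity(dog, puppy))    # high score: semantically related
print(cosine_similarity(dog, invoice))  # much lower score: unrelated
```

A score near 1 indicates the two inputs are semantically close; lower scores indicate they are less related.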

## Cosine similarity

Azure OpenAI embeddings rely on cosine similarity to compute similarity between documents and a query.

- From a mathematic perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This measurement is beneficial, because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [this article on Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity).
+ From a mathematic perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multidimensional space. This measurement is beneficial, because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [Cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).

- An alternative method of identifying similar documents is to count the number of common words between documents. Unfortunately, this approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among disparate topics. For this reason, cosine similarity can offer a more effective alternative.
+ An alternative method of identifying similar documents is to count the number of common words between documents. This approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among disparate topics. For this reason, cosine similarity can offer a more effective alternative.
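The distance-versus-angle point in the paragraph on cosine similarity can be made concrete with a toy example: a vector and a scaled copy of it (as if one document were a much longer version of the other) are far apart by Euclidean distance, yet their cosine similarity is 1 because the angle between them is zero. The three-dimensional vectors are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def euclidean_distance(a, b):
    """Straight-line distance between vectors a and b."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

short_doc = [1.0, 2.0, 3.0]
long_doc = [10.0, 20.0, 30.0]  # same direction, 10x the magnitude

print(euclidean_distance(short_doc, long_doc))  # large: the vectors are far apart
print(cosine_similarity(short_doc, long_doc))   # 1.0: the angle between them is zero
```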

## Next steps