Commit 48265b4

docs: update semantic_similarity.md to use collections API
- Updated example to use SemanticSimilarity from collections API
- Changed title from 'Answer Similarity' to 'Semantic Similarity' for consistency
- Added synchronous usage note
- Preserved 'How It's Calculated' section
- Moved legacy API examples to bottom with deprecation warning
- Tested example and verified it produces expected output
1 parent a27f5cc commit 48265b4

File tree

1 file changed

+7
-7
lines changed


docs/concepts/metrics/available_metrics/semantic_similarity.md

Lines changed: 7 additions & 7 deletions
````diff
@@ -1,6 +1,6 @@
-## Answer Similarity
+## Semantic Similarity
 
-The **Answer Similarity** metric evaluates the semantic resemblance between a generated response and a reference (ground truth) answer. It ranges from 0 to 1, with higher scores indicating better alignment between the generated answer and the ground truth.
+The **Semantic Similarity** metric evaluates the semantic resemblance between a generated response and a reference (ground truth) answer. It ranges from 0 to 1, with higher scores indicating better alignment between the generated answer and the ground truth.
 
 This metric uses embeddings and cosine similarity to measure how semantically similar two answers are, which can offer valuable insights into the quality of the generated response.
 
@@ -10,27 +10,27 @@ This metric uses embeddings and cosine similarity to measure how semantically si
 ```python
 from openai import AsyncOpenAI
 from ragas.embeddings import OpenAIEmbeddings
-from ragas.metrics.collections import AnswerSimilarity
+from ragas.metrics.collections import SemanticSimilarity
 
 # Setup embeddings
 client = AsyncOpenAI()
 embeddings = OpenAIEmbeddings(model="text-embedding-3-small", client=client)
 
 # Create metric
-scorer = AnswerSimilarity(embeddings=embeddings)
+scorer = SemanticSimilarity(embeddings=embeddings)
 
 # Evaluate
 result = await scorer.ascore(
     reference="The Eiffel Tower is located in Paris. It has a height of 1000ft.",
     response="The Eiffel Tower is located in Paris."
 )
-print(f"Answer Similarity Score: {result.value}")
+print(f"Semantic Similarity Score: {result.value}")
 ```
 
 Output:
 
 ```
-Answer Similarity Score: 0.8151
+Semantic Similarity Score: 0.8151
 ```
 
 !!! note "Synchronous Usage"
@@ -54,7 +54,7 @@ Answer Similarity Score: 0.8151
 
 **Low similarity response**: Isaac Newton's laws of motion greatly influenced classical physics.
 
-Let's examine how answer similarity was calculated for the high similarity response:
+Let's examine how semantic similarity was calculated for the high similarity response:
 
 - **Step 1:** Vectorize the reference answer using the specified embedding model.
 - **Step 2:** Vectorize the generated response using the same embedding model.
````
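The scoring pipeline the docs describe (embed the reference, embed the response, take the cosine of the angle between the two vectors) can be sketched without any external library. This is a minimal illustration, not ragas's implementation; the toy three-dimensional vectors below stand in for real embedding-model outputs, which typically have hundreds or thousands of dimensions:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (||a|| * ||b||)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy vectors standing in for embeddings of the reference and the response.
reference_vec = [0.1, 0.9, 0.4]
response_vec = [0.2, 0.8, 0.3]

score = cosine_similarity(reference_vec, response_vec)
print(f"Semantic Similarity Score: {score:.4f}")
```

Because both vectors come from the same embedding model, semantically close texts point in similar directions and score near 1, while unrelated texts score closer to 0.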

0 commit comments

Comments
 (0)