docs: update semantic_similarity.md to use collections API
- Updated example to use SemanticSimilarity from collections API
- Changed title from 'Answer Similarity' to 'Semantic Similarity' for consistency
- Added synchronous usage note
- Preserved 'How It's Calculated' section
- Moved legacy API examples to bottom with deprecation warning
- Tested example and verified it produces expected output
File changed: docs/concepts/metrics/available_metrics/semantic_similarity.md (+7 −7)
````diff
@@ -1,6 +1,6 @@
-## Answer Similarity
+## Semantic Similarity
 
-The **Answer Similarity** metric evaluates the semantic resemblance between a generated response and a reference (ground truth) answer. It ranges from 0 to 1, with higher scores indicating better alignment between the generated answer and the ground truth.
+The **Semantic Similarity** metric evaluates the semantic resemblance between a generated response and a reference (ground truth) answer. It ranges from 0 to 1, with higher scores indicating better alignment between the generated answer and the ground truth.
 
 This metric uses embeddings and cosine similarity to measure how semantically similar two answers are, which can offer valuable insights into the quality of the generated response.
 
@@ -10,27 +10,27 @@ This metric uses embeddings and cosine similarity to measure how semantically si
 ```python
 from openai import AsyncOpenAI
 from ragas.embeddings import OpenAIEmbeddings
-from ragas.metrics.collections import AnswerSimilarity
+from ragas.metrics.collections import SemanticSimilarity
````
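As the updated page describes, the metric embeds both the generated answer and the reference, then scores them with cosine similarity. A minimal sketch of that core computation in plain Python (independent of the ragas API; the three-dimensional vectors below are hypothetical stand-ins for real embedding-model output):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy "embeddings" standing in for vectors from a real embedding model.
reference_embedding = [0.1, 0.9, 0.4]
response_embedding = [0.2, 0.8, 0.5]
score = cosine_similarity(reference_embedding, response_embedding)
```

Since embedding vectors for natural-language text typically have non-negative pairwise similarity in practice, the reported score usually lands in the 0-to-1 range the docs describe.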