
Commit 6a16bbe

bold text
1 parent f1c42e5 commit 6a16bbe


1 file changed: +6 -6 lines changed


articles/ai-foundry/concepts/observability.md

Lines changed: 6 additions & 6 deletions
@@ -29,7 +29,7 @@ This is where evaluators become essential. These specialized tools measure both
 
 Evaluators are specialized tools that measure the quality, safety, and reliability of AI responses. By implementing systematic evaluations throughout the AI development lifecycle, teams can identify and address potential issues before they impact users. The following supported evaluators provide comprehensive assessment capabilities across different AI application types and concerns:
 
-[RAG (Retrieval Augmented Generation):](./evaluation-evaluators/rag-evaluators.md)
+[**RAG (Retrieval Augmented Generation)**:](./evaluation-evaluators/rag-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
@@ -40,23 +40,23 @@ Evaluators are specialized tools that measure the quality, safety, and reliabili
 | Relevance | Measures how relevant the response is with respect to the query. |
 | Response Completeness | Measures to what extent the response is complete (not missing critical information) with respect to the ground truth. |
 
-[Agents:](./evaluation-evaluators/agent-evaluators.md)
+[**Agents:**](./evaluation-evaluators/agent-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
 | Intent Resolution | Measures how accurately the agent identifies and addresses user intentions.|
 | Task Adherence | Measures how well the agent follows through on identified tasks. |
 | Tool Call Accuracy | Measures how well the agent selects and calls the correct tools to.|
 
-[General Purpose:](./evaluation-evaluators/general-purpose-evaluators.md)
+[**General Purpose:**](./evaluation-evaluators/general-purpose-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
 | Fluency | Measures natural language quality and readability. |
 | Coherence | Measures logical consistency and flow of responses.|
 | QA | Measures comprehensively various quality aspects in question-answering.|
 
-[Safety and Security:](./evaluation-evaluators/risk-safety-evaluators.md)
+[**Safety and Security:**](./evaluation-evaluators/risk-safety-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
@@ -69,7 +69,7 @@ Evaluators are specialized tools that measure the quality, safety, and reliabili
 | Protected Materials | Detects unauthorized use of copyrighted or protected content. |
 | Content Safety | Comprehensive assessment of various safety concerns. |
 
-[Textual Similarity:](./evaluation-evaluators/textual-similarity-evaluators.md)
+[**Textual Similarity:**](./evaluation-evaluators/textual-similarity-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
@@ -80,7 +80,7 @@ Evaluators are specialized tools that measure the quality, safety, and reliabili
 | ROUGE | Recall-Oriented Understudy for Gisting Evaluation measures overlaps in n-grams between response and ground truth. |
 | METEOR | Metric for Evaluation of Translation with Explicit Ordering measures overlaps in n-grams between response and ground truth. |
 
-[Azure OpenAI Graders:](./evaluation-evaluators/azure-openai-graders.md)
+[**Azure OpenAI Graders:**](./evaluation-evaluators/azure-openai-graders.md)
 
 | Evaluator | Purpose |
 |--|--|
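The evaluators cataloged in the tables above are also available programmatically. As a minimal sketch, assuming the azure-ai-evaluation Python package (which this commit does not reference directly) and placeholder endpoint, key, and deployment values, invoking a couple of them might look like this:

```python
# Minimal sketch (not part of this commit): running two of the evaluators
# listed above via the azure-ai-evaluation Python package. Endpoint, key,
# and deployment values are placeholders; parameter names may vary by
# SDK version.
from azure.ai.evaluation import BleuScoreEvaluator, RelevanceEvaluator

# Model configuration used by AI-assisted evaluators such as Relevance.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",  # placeholder
    "api_key": "<your-api-key>",                                   # placeholder
    "azure_deployment": "<your-deployment-name>",                  # placeholder
}

# AI-assisted quality evaluator: scores how relevant the response is to the query.
relevance = RelevanceEvaluator(model_config=model_config)
print(relevance(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
))

# Textual-similarity evaluator: compares the response against a ground-truth answer.
bleu = BleuScoreEvaluator()
print(bleu(
    response="Paris is the capital of France.",
    ground_truth="The capital of France is Paris.",
))
```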
