
Commit ccf3da6

fix links

1 parent 2ddc3af commit ccf3da6

File tree

8 files changed: +11 −10 lines changed

articles/ai-foundry/concepts/evaluation-evaluators/agent-evaluators.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -186,5 +186,5 @@ If you're building agents outside of Azure AI Agent Service, this evaluator acce
 
 ## Related content
 
-- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets)
+- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-test-datasets-using-evaluate)
 - [How to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target)
```

articles/ai-foundry/concepts/evaluation-evaluators/azure-openai-graders.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -262,5 +262,5 @@ Aside from individual data evaluation results, the grader also returns a metric
 
 ## Related content
 
-- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets)
+- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md##local-evaluation-on-test-datasets-using-evaluate)
 - [How to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target)
```

articles/ai-foundry/concepts/evaluation-evaluators/custom-evaluators.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -151,4 +151,5 @@ friendliness_score = friendliness_eval(response="I will not apologize for my beh
 
 ## Related content
 
-- Learn [how to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets) and [how to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target).
+- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-test-datasets-using-evaluate)
+- [How to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target)
```

articles/ai-foundry/concepts/evaluation-evaluators/general-purpose-evaluators.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -160,5 +160,5 @@ While F1 score outputs a numerical score on 0-1 float scale, the other evaluator
 
 ## Related content
 
-- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets)
+- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-test-datasets-using-evaluate)
 - [How to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target)
```

articles/ai-foundry/concepts/evaluation-evaluators/rag-evaluators.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -316,5 +316,5 @@ The numerical score on a likert scale (integer 1 to 5) and a higher score is bet
 
 ## Related content
 
-- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets)
+- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md##local-evaluation-on-test-datasets-using-evaluate)
 - [How to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target)
```

articles/ai-foundry/concepts/evaluation-evaluators/risk-safety-evaluators.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -485,4 +485,4 @@ The label field returns a boolean true or false based on whether or not either o
 ## Related content
 
 - Read the [Transparency Note for Safety Evaluators](../safety-evaluations-transparency-note.md) to learn more about its limitations, use cases and how it was evaluated for quality and accuracy.
-- Learn [how to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets) and [how to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target).
+- Learn [how to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-test-datasets-using-evaluate) and [how to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target).
```

articles/ai-foundry/concepts/evaluation-evaluators/textual-similarity-evaluators.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -217,5 +217,5 @@ The numerical score is a 0-1 float and a higher score is better. Given a numeric
 
 ## Related content
 
-- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-datasets)
+- [How to run batch evaluation on a dataset](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-test-datasets-using-evaluate)
 - [How to run batch evaluation on a target](../../how-to/develop/evaluate-sdk.md#local-evaluation-on-a-target)
```

articles/ai-foundry/concepts/observability.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -42,7 +42,7 @@ Evaluators are specialized tools that measure the quality, safety, and reliabili
 | Response Completeness | Measures to what extent the response is complete (not missing critical information) with respect to the ground truth. |
 
 
-[**Agents (preview):**](./evaluation-evaluators/agent-evaluators.md)
+[**Agents:**](./evaluation-evaluators/agent-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
@@ -60,7 +60,7 @@ Evaluators are specialized tools that measure the quality, safety, and reliabili
 | QA | Measures comprehensively various quality aspects in question-answering.|
 
 
-[**Safety and Security (preview):**](./evaluation-evaluators/risk-safety-evaluators.md)
+[**Safety and Security:**](./evaluation-evaluators/risk-safety-evaluators.md)
 
 | Evaluator | Purpose |
 |--|--|
@@ -86,7 +86,7 @@ Evaluators are specialized tools that measure the quality, safety, and reliabili
 | METEOR | Metric for Evaluation of Translation with Explicit Ordering measures overlaps in n-grams between response and ground truth. |
 
 
-[**Azure OpenAI Graders (preview):**](./evaluation-evaluators/azure-openai-graders.md)
+[**Azure OpenAI Graders:**](./evaluation-evaluators/azure-openai-graders.md)
 
 | Evaluator | Purpose |
 |--|--|
```
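The commit above updates cross-file anchors (for example, `#local-evaluation-on-datasets` to `#local-evaluation-on-test-datasets-using-evaluate`) after headings were renamed in `evaluate-sdk.md`. Stale anchors like these can be caught mechanically. Below is a minimal sketch, assuming GitHub-style heading slugs (lowercase, punctuation stripped, spaces to hyphens); the function names and the slug rules are illustrative approximations, not the exact algorithm GitHub or the docs build uses:

```python
import re

def slugify(heading: str) -> str:
    # Approximate GitHub-style anchor slug: lowercase, drop punctuation,
    # collapse whitespace runs into single hyphens.
    text = heading.strip().lower()
    text = re.sub(r"[^\w\s-]", "", text)
    return re.sub(r"\s+", "-", text)

def check_anchors(source_md: str, target_md: str) -> list:
    # Return anchors referenced by links in source_md that do not match
    # any heading slug found in target_md.
    headings = re.findall(r"^#{1,6}\s+(.+)$", target_md, flags=re.MULTILINE)
    valid = {slugify(h) for h in headings}
    anchors = re.findall(r"\]\([^)#]*#([^)]+)\)", source_md)
    return [a for a in anchors if a not in valid]
```

Note that a malformed double-hash link such as `evaluate-sdk.md##local-evaluation-...` would also be flagged by this sketch, since the captured anchor would begin with a literal `#` and never match a heading slug.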