Commit a82eb79

heading changes and small fixes
1 parent e0947bb commit a82eb79

File tree

1 file changed (+4, -6 lines)


articles/ai-foundry/how-to/evaluation-github-action.md

Lines changed: 4 additions & 6 deletions
@@ -35,17 +35,15 @@ Two GitHub Actions are available for evaluating AI applications: **ai-agent-eval
 > [!NOTE]
 > The **ai-agent-evals** interface is more straightforward to configure. In contrast, **genai-evals** requires customers to prepare structured evaluation input data. Although code samples are provided to facilitate this process, the overall setup might involve additional complexity.
 
-## How to set up
-
-### AI agent evaluations
+## How to set up AI agent evaluations
 
 ### AI agent evaluations input
 
 The input of ai-agent-evals includes:
 
 **Required:**
 
-- `azure-aiproject-connection-string`: The connection string for the Azure AI Project. This is used to connect to Azure OpenAI to simulate conversations with each agent, and to connect to the Azure AI evaluation SDK to perform the evaluation.
+- `azure-aiproject-connection-string`: The connection string for the Azure AI project. This is used to connect to Azure OpenAI to simulate conversations with each agent, and to connect to the Azure AI evaluation SDK to perform the evaluation.
 - `deployment-name`: The name of the deployed model.
 - `data-path`: Path to the input data file containing the conversation starters. Each conversation starter is sent to each agent for a pairwise comparison of evaluation results.
 - `evaluators`: Built-in evaluator names.
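The required inputs listed in this hunk could be wired into a workflow along these lines. This is a minimal sketch, not the article's own example: the action reference and version tag, the secret name, the deployment name, the data file path, and the evaluator names are all illustrative assumptions.

```yaml
# Hypothetical workflow using the ai-agent-evals inputs documented above.
# Action reference/version, secret name, and all values are assumptions.
name: AI agent evaluation

on:
  pull_request:

jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run agent evaluation
        uses: microsoft/ai-agent-evals@v1        # assumed action reference
        with:
          azure-aiproject-connection-string: ${{ secrets.AZURE_AI_PROJECT_CONNECTION_STRING }}
          deployment-name: gpt-4o                # assumed deployment name
          data-path: ./evals/conversation-starters.json   # assumed input file
          evaluators: "IntentResolution,TaskAdherence"    # assumed evaluator names
```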
@@ -151,11 +149,11 @@ Single agent evaluation result:
 
 :::image type="content" source="../media/evaluations/github-action-single-agent-output.png" alt-text="Screenshot of single agent evaluation result in GitHub Action." lightbox="../media/evaluations/github-action-single-agent-output.png":::
 
-## GenAI evaluations
+## How to set up genAI evaluations
 
 ### GenAI evaluations input
 
-The input of genai-evals includes (some of them are optional depending on the evaluator used):
+The input of genai-evals includes (some of them are optional depending on the evaluator used):
 
 Evaluation configuration file:
