
Commit 08b86bf — Update PromptFlowEvaluation.md
Parent: 8a092eb

File tree: 1 file changed (+10 −10 lines)


ResearchAssistant/Deployment/PromptFlowEvaluation.md

@@ -17,34 +17,34 @@ In prompt flow, you can customize or create your own evaluation flow and metrics
2. Locate your AI project under recent projects.

- ![image1](/Deployment/images/evaluation/image1.png)
+ ![image1](/ResearchAssistant/Deployment/images/evaluation/image1.png)

3. Once inside your project, select Evaluation from the dropdown menu on the left.

- ![image2](/Deployment/images/evaluation/image2.png)
+ ![image2](/ResearchAssistant/Deployment/images/evaluation/image2.png)

4. From your Evaluation view, select New evaluation in the middle of the page.

- ![image3](/Deployment/images/evaluation/image3.png)
+ ![image3](/ResearchAssistant/Deployment/images/evaluation/image3.png)

5. From here you can create and name a new evaluation and select your scenario.

- ![image4](/Deployment/images/evaluation/image4.png)
+ ![image4](/ResearchAssistant/Deployment/images/evaluation/image4.png)
6. Select the flow you want to evaluate. (To evaluate the DraftFlow, select DraftFlow here.)

- ![image5](/Deployment/images/evaluation/image5.png)
+ ![image5](/ResearchAssistant/Deployment/images/evaluation/image5.png)
7. Select the metrics you would like to use. Also, be sure to select an active Connection and an active Deployment name/Model.

- ![image6](/Deployment/images/evaluation/image6.png)
+ ![image6](/ResearchAssistant/Deployment/images/evaluation/image6.png)
8. Use an existing dataset or upload a dataset to use in the evaluation. (Upload the provided dataset found in \Deployment\data\EvaluationDataset.csv.)

- ![image7](/Deployment/images/evaluation/image7.png)
+ ![image7](/ResearchAssistant/Deployment/images/evaluation/image7.png)
9. Lastly, map the inputs from your dataset and click Submit. (A scripted alternative to this portal workflow is sketched after this list.)

- ![image8](/Deployment/images/evaluation/image8.png)
+ ![image8](/ResearchAssistant/Deployment/images/evaluation/image8.png)
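The portal steps above can also be scripted. Below is a minimal, hypothetical sketch using the open-source promptflow Python SDK for a local run; the flow folder names, the `answer` output, and the `question`/`ground_truth` columns in EvaluationDataset.csv are assumptions, not confirmed by this repo. The portal path in the steps above remains the documented route.

```python
# Hypothetical local alternative to the portal workflow above.
# Assumed names: ./DraftFlow, ./evaluation_flow, the "answer" output,
# and the "question"/"ground_truth" columns in EvaluationDataset.csv.
from promptflow.client import PFClient

pf = PFClient()

# Batch run of the flow under evaluation (step 6).
base_run = pf.run(
    flow="./DraftFlow",
    data="./Deployment/data/EvaluationDataset.csv",
    column_mapping={"question": "${data.question}"},
)

# Evaluation run that scores the base run's outputs (steps 7-9);
# this column mapping plays the role of the input mapping in step 9.
eval_run = pf.run(
    flow="./evaluation_flow",
    data="./Deployment/data/EvaluationDataset.csv",
    run=base_run,
    column_mapping={
        "question": "${data.question}",
        "answer": "${run.outputs.answer}",
        "ground_truth": "${data.ground_truth}",
    },
)
```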
### Results
Once the flow has run successfully, the metrics will be displayed, showing a 1-5 score for each respective metric. From here, you can click into the evaluation flow to get a better understanding of the scores.

- ![image9](/Deployment/images/evaluation/image9.png)
- ![image10](/Deployment/images/evaluation/image10.png)
+ ![image9](/ResearchAssistant/Deployment/images/evaluation/image9.png)
+ ![image10](/ResearchAssistant/Deployment/images/evaluation/image10.png)
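If you went the scripted route sketched earlier, the same 1-5 scores can be inspected locally instead of read off the portal. Again a hedged sketch, reusing the hypothetical `pf` client and `eval_run` from that snippet:

```python
# Hypothetical follow-up to the earlier sketch: read the scores locally.
metrics = pf.get_metrics(eval_run)   # aggregated score per selected metric
details = pf.get_details(eval_run)   # per-row inputs/outputs (DataFrame)
print(metrics)

pf.visualize(eval_run)               # opens an HTML report of the run
```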