@@ -17,34 +17,34 @@ In prompt flow, you can customize or create your own evaluation flow and metrics
 
 
 2. Locate your AI project under recent projects.
-![image1](/Deployment/images/evaluation/image1.png)
+![image1](/ResearchAssistant/Deployment/images/evaluation/image1.png)
 
 
 3. Once inside your project, select Evaluation from the left dropdown menu.
-![image2](/Deployment/images/evaluation/image2.png)
+![image2](/ResearchAssistant/Deployment/images/evaluation/image2.png)
 
 
 4. From your Evaluation view, select New evaluation in the middle of the page.
-![image3](/Deployment/images/evaluation/image3.png)
+![image3](/ResearchAssistant/Deployment/images/evaluation/image3.png)
 
 
 5. From here you can create and name a new evaluation and select your scenario.
-![image4](/Deployment/images/evaluation/image4.png)
+![image4](/ResearchAssistant/Deployment/images/evaluation/image4.png)
 6. Select the flow you want to evaluate. (To evaluate the DraftFlow, select DraftFlow here.)
-![image5](/Deployment/images/evaluation/image5.png)
+![image5](/ResearchAssistant/Deployment/images/evaluation/image5.png)
 7. Select the metrics you would like to use, and be sure to select an active Connection and an active Deployment name/Model.
-![image6](/Deployment/images/evaluation/image6.png)
+![image6](/ResearchAssistant/Deployment/images/evaluation/image6.png)
 8. Use an existing dataset or upload a dataset to use in the evaluation. (Upload the provided dataset found in \Deployment\data\EvaluationDataset.csv.)
-![image7](/Deployment/images/evaluation/image7.png)
+![image7](/ResearchAssistant/Deployment/images/evaluation/image7.png)
 
 
 9. Lastly, map the inputs from your dataset and click Submit.
-![image8](/Deployment/images/evaluation/image8.png)
+![image8](/ResearchAssistant/Deployment/images/evaluation/image8.png)
 
 
 ### Results
 
 Once the flow has run successfully, the metrics are displayed with a 1-5 score for each respective metric. From here, you can click into the evaluation flow to get a better understanding of the scores.
-![image9](/Deployment/images/evaluation/image9.png)
-![image10](/Deployment/images/evaluation/image10.png)
+![image9](/ResearchAssistant/Deployment/images/evaluation/image9.png)
+![image10](/ResearchAssistant/Deployment/images/evaluation/image10.png)
 
 
 
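Before uploading the dataset in step 8, it can help to sanity-check locally that it has the columns you intend to map in step 9. Below is a minimal sketch using Python's `csv` module; the column names (`question`, `answer`, `context`, `ground_truth`) are assumptions based on typical prompt flow evaluation inputs, not confirmed by this repo — adjust them to match the header row of EvaluationDataset.csv.

```python
import csv

# Columns the evaluation's input mapping is assumed to expect. These names
# are a guess (common prompt flow evaluation inputs); verify them against
# the actual header row of Deployment/data/EvaluationDataset.csv.
EXPECTED_COLUMNS = {"question", "answer", "context", "ground_truth"}


def validate_dataset(path):
    """Return the rows of the CSV, raising if any expected column is missing."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = EXPECTED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"Dataset is missing columns: {sorted(missing)}")
        return list(reader)
```

Running this before upload surfaces a misnamed or missing column immediately, rather than at the input-mapping step of the evaluation wizard.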