app/backend/approaches/prompts/ask_answer_question.prompty (7 additions, 13 deletions)

@@ -14,26 +14,20 @@ system:
 {% if override_prompt %}
 {{ override_prompt }}
 {% else %}
-You are an intelligent assistant helping Contoso Inc employees with their healthcare plan questions and employee handbook questions.
-Use 'you' to refer to the individual asking the questions even if they ask with 'I'.
-Answer the following question using only the data provided in the sources below.
-Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response.
-If you cannot answer using the sources below, say you don't know. Use below example to answer.
+Assistant helps the company employees with their questions about internal documents. Be brief in your answers.
+Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below.
+You CANNOT ask clarifying questions to the user, since the user will have no way to reply.
+If the question is not in English, answer in the language used in the question.
+Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response. Use square brackets to reference the source, for example [info1.txt]. Don't combine sources, list each source separately, for example [info1.txt][info2.pdf].
 {% if image_sources %}
 Each image source has the document file name in the top left corner of the image with coordinates (10,10) pixels with format <filename.ext#page=N>,
 and the image figure name is right-aligned in the top right corner of the image.
 The filename of the actual image is in the top right corner of the image and is in the format <figureN_N.png>.
 Each text source starts in a new line and has the file name followed by colon and the actual information.
 Always include the source document filename for each fact you use in the response in the format: [document_name.ext#page=N].
 If you are referencing an image, add the image filename in the format: [document_name.ext#page=N(image_name.png)].
-Answer the following question using only the data provided in the sources below.
-If you cannot answer using the sources below, say you don't know.
-Return just the answer without any input texts.
 {% endif %}
-Possible citations for current question:
-{% for citation in citations %}
-[{{ citation }}]
-{% endfor %}
+Possible citations for current question: {% for citation in citations %} [{{ citation }}] {% endfor %}
 {{ injected_prompt }}
 {% endif %}
 

@@ -51,7 +45,7 @@ In-network deductibles are $500 for employee and $1000 for family [info1.txt] an
 
 user:
 {{ user_query }}
-{% if image_sources is defined %}{% for image_source in image_sources %}
+{% if image_sources %}{% for image_source in image_sources %}
 ![Image]({{image_source}})
 {% endfor %}{% endif %}
 {% if text_sources is defined %}Sources:{% for text_source in text_sources %}
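The citation hint here was collapsed from a four-line Jinja loop into a single template line. A minimal sketch (plain Python, not repo code) of the string the one-line form `{% for citation in citations %} [{{ citation }}] {% endfor %}` is intended to render:

```python
def citation_line(citations):
    """Mimic the single-line template: each citation bracketed, on one line."""
    return "Possible citations for current question:" + "".join(
        f" [{c}] " for c in citations
    )

line = citation_line(["info1.txt", "info2.pdf"])
print(line)
```

The multi-line loop in the old template emitted each citation on its own line with surrounding newlines; the one-line form keeps the whole hint on a single prompt line.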
app/backend/approaches/prompts/chat_answer_question.prompty (9 additions, 11 deletions)

@@ -20,22 +20,20 @@ system:
 {% if override_prompt %}
 {{ override_prompt }}
 {% else %}
-Assistant helps the company employees with their healthcare plan questions, and questions about the employee handbook. Be brief in your answers.
-Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below. If asking a clarifying question to the user would help, ask the question.
+Assistant helps the company employees with their questions about internal documents. Be brief in your answers.
+Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below.
+If asking a clarifying question to the user would help, ask the question.
 If the question is not in English, answer in the language used in the question.
 Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response. Use square brackets to reference the source, for example [info1.txt]. Don't combine sources, list each source separately, for example [info1.txt][info2.pdf].
-{% if include_images %}
+{% if image_sources %}
 Each image source has the document file name in the top left corner of the image with coordinates (10,10) pixels with format <filename.ext#page=N>,
 and the image figure name is right-aligned in the top right corner of the image.
 The filename of the actual image is in the top right corner of the image and is in the format <figureN_N.png>.
 Each text source starts in a new line and has the file name followed by colon and the actual information
-Always include the source name from the image or text for each fact you use in the response in the format: [filename]
-Answer the following question using only the data provided in the sources below.
-If asking a clarifying question to the user would help, ask the question.
-Be brief in your answers.
-The text and image source can be the same file name, don't use the image title when citing the image source, only use the file name as mentioned
-If you cannot answer using the sources below, say you don't know. Return just the answer without any input texts.
+Always include the source document filename for each fact you use in the response in the format: [document_name.ext#page=N].
+If you are referencing an image, add the image filename in the format: [document_name.ext#page=N(image_name.png)].
 {% endif %}
+Possible citations for current question: {% for citation in citations %} [{{ citation }}] {% endfor %}
 {{ injected_prompt }}
 {% endif %}
 

@@ -56,9 +54,9 @@ Make sure the last question ends with ">>".
 
 user:
 {{ user_query }}
-{% for image_source in image_sources %}
+{% if image_sources %}{% for image_source in image_sources %}
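Both prompty files switch their image guards to `{% if image_sources %}`, replacing a bare `{% for %}`, an `is defined` test, and the mismatched `include_images` flag. A small sketch (my own illustration, not repo code) of why truthiness is the safer guard: in Jinja, `is defined` is true even for an empty list, so the old guard rendered the image instructions when `image_sources == []`.

```python
def old_guard(ctx):
    # Behaves like Jinja's '{% if image_sources is defined %}':
    # true whenever the variable exists, even if it is an empty list.
    return "image_sources" in ctx

def new_guard(ctx):
    # Behaves like Jinja's '{% if image_sources %}':
    # false for both a missing variable and an empty list.
    return bool(ctx.get("image_sources"))

empty = {"image_sources": []}
print(old_guard(empty), new_guard(empty))  # True False
```

With the truthiness check, a request that carries no images gets the plain text-only prompt instead of image-citation instructions it cannot follow.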
docs/evaluation.md (10 additions, 4 deletions)

@@ -72,7 +72,7 @@ Review the generated data in `evals/ground_truth.jsonl` after running that scrip
 
 ## Run bulk evaluation
 
-Review the configuration in `evals/eval_config.json` to ensure that everything is correctly setup. You may want to adjust the metrics used. See [the ai-rag-chat-evaluator README](https://github.com/Azure-Samples/ai-rag-chat-evaluator) for more information on the available metrics.
+Review the configuration in `evals/evaluate_config.json` to ensure that everything is correctly setup. You may want to adjust the metrics used. See [the ai-rag-chat-evaluator README](https://github.com/Azure-Samples/ai-rag-chat-evaluator) for more information on the available metrics.
 
 By default, the evaluation script will evaluate every question in the ground truth data.
 Run the evaluation script by running the following command:

@@ -84,10 +84,10 @@ python evals/evaluate.py
 
 The options are:
 
 * `numquestions`: The number of questions to evaluate. By default, this is all questions in the ground truth data.
-* `resultsdir`: The directory to write the evaluation results. By default, this is a timestamped folder in `evals/results`. This option can also be specified in `eval_config.json`.
-* `targeturl`: The URL of the running application to evaluate. By default, this is `http://localhost:50505`. This option can also be specified in `eval_config.json`.
+* `resultsdir`: The directory to write the evaluation results. By default, this is a timestamped folder in `evals/results`. This option can also be specified in `evaluate_config.json`.
+* `targeturl`: The URL of the running application to evaluate. By default, this is `http://localhost:50505`. This option can also be specified in `evaluate_config.json`.
 
-🕰️ This may take a long time, possibly several hours, depending on the number of ground truth questions, and the TPM capacity of the evaluation model, and the number of GPT metrics requested.
+🕰️ This may take a long time, possibly several hours, depending on the number of ground truth questions, the TPM capacity of the evaluation model, and the number of LLM-based metrics requested.
 
 ## Review the evaluation results

@@ -118,3 +118,9 @@ This repository includes a GitHub Action workflow `evaluate.yaml` that can be us
 In order for the workflow to run successfully, you must first set up [continuous integration](./azd.md#github-actions) for the repository.
 
 To run the evaluation on the changes in a PR, a repository member can post a `/evaluate` comment to the PR. This will trigger the evaluation workflow to run the evaluation on the PR changes and will post the results to the PR.
+
+## Evaluate multimodal RAG answers
+
+The repository also includes an `evaluate_config_multimodal.json` file specifically for evaluating multimodal RAG answers. This configuration uses a different ground truth file, `ground_truth_multimodal.jsonl`, which includes questions based off the sample data that require both text and image sources to answer.
+
+Note that the "groundedness" evaluator is not reliable for multimodal RAG, since it does not currently incorporate the image sources. We still include it in the metrics, but the more reliable metrics are "relevance" and "citations matched".
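Per the docs above, `resultsdir` and `targeturl` can also be set in `evals/evaluate_config.json` instead of on the command line. A hedged sketch of such a fragment (the key names are an assumption based on the CLI option names; check the actual file in the repository for the exact schema):

```json
{
  "resultsdir": "evals/results/my_experiment",
  "targeturl": "http://localhost:50505"
}
```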
ground_truth_multimodal.jsonl (new file, 10 additions; referenced in the docs change above)

@@ -0,0 +1,10 @@
+{"question": "Which commodity—oil, gold, or wheat—was the most stable over the last decade?", "truth": "Over the last decade, gold was the most stable commodity compared to oil and wheat. The annual percentage changes for gold mostly stayed within a smaller range, while oil showed significant fluctuations including a large negative change in 2014 and a large positive peak in 2021. Wheat also varied but less than oil and more than gold [Financial Market Analysis Report 2023.pdf#page=6][Financial Market Analysis Report 2023.pdf#page=6(figure6_1.png)]."}
+{"question": "Do cryptocurrencies like Bitcoin or Ethereum show stronger ties to stocks or commodities?", "truth": "Cryptocurrencies like Bitcoin and Ethereum show stronger ties to stocks than to commodities. The correlation values between Bitcoin and stock indices are 0.3 with the S&P 500 and 0.4 with NASDAQ, while for Ethereum, the correlations are 0.35 with the S&P 500 and 0.45 with NASDAQ. In contrast, the correlations with commodities like Oil are lower (0.2 for Bitcoin and 0.25 for Ethereum), and correlations with Gold are slightly negative (-0.1 for Bitcoin and -0.05 for Ethereum) [Financial Market Analysis Report 2023.pdf#page=7]."}
+{"question": "Around what level did the S&P 500 reach its highest point before declining in 2021?", "truth": "The S&P 500 reached its highest point just above the 4500 level before declining in 2021 [Financial Market Analysis Report 2023.pdf#page=4][Financial Market Analysis Report 2023.pdf#page=4(figure4_1.png)]."}
+{"question": "In which month of 2023 did Bitcoin nearly hit 45,000?", "truth": "Bitcoin nearly hit 45,000 in December 2023, as shown by the blue line reaching close to 45,000 on the graph for that month [Financial Market Analysis Report 2023.pdf#page=5(figure5_1.png)]."}
+{"question": "Which year saw oil prices fall the most, and by roughly how much did they drop?", "truth": "The year that saw oil prices fall the most was 2020, with a drop of roughly 20% as shown by the blue bar extending to about -20% on the horizontal bar chart of annual percentage changes for Oil from 2014 to 2022 [Financial Market Analysis Report 2023.pdf#page=6(figure6_1.png)]."}
+{"question": "What was the approximate inflation rate in 2022?", "truth": "The approximate inflation rate in 2022 was near 3.4% according to the orange line in the inflation data on the graph showing trends from 2018 to 2023 [Financial Market Analysis Report 2023.pdf#page=8(figure8_1.png)]."}
+{"question": "By 2028, to what relative value are oil prices projected to move compared to their 2024 baseline of 100?", "truth": "Oil prices are projected to decline to about 90 by 2028, relative to their 2024 baseline of 100. [Financial Market Analysis Report 2023.pdf#page=9(figure9_1.png)]."}
+{"question": "What approximate value did the S&P 500 fall to at its lowest point between 2018 and 2022?", "truth": "The S&P 500 fell in 2018 to an approximate value of around 2600 at its lowest point between 2018 and 2022, as shown by the graph depicting the 5-Year Trend of the S&P 500 Index [Financial Market Analysis Report 2023.pdf#page=4(figure4_1.png)]."}
+{"question": "Around what value did Ethereum finish the year at in 2023?", "truth": "Ethereum finished the year 2023 at a value around 2200, as indicated by the orange line on the price fluctuations graph for the last 12 months [Financial Market Analysis Report 2023.pdf#page=5][Financial Market Analysis Report 2023.pdf#page=5(figure5_1.png)][Financial Market Analysis Report 2023.pdf#page=5(figure5_2.png)]."}
+{"question": "What was the approximate GDP growth rate in 2021?", "truth": "The approximate GDP growth rate in 2021 was about 4.5% according to the line graph showing trends from 2018 to 2023 [Financial Market Analysis Report 2023.pdf#page=8(figure8_1.png)]."}
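Each line of the ground truth file is a standalone JSON object with `question` and `truth` fields, with citations embedded inline in the truth text as `[filename#page=N(...)]` markers. A minimal sketch (my own, not the evaluator's loader) of reading such a file:

```python
import json

def load_ground_truth(lines):
    """Parse JSONL ground truth: one {"question", "truth"} object per line."""
    return [json.loads(line) for line in lines if line.strip()]

# One record in the same shape as the file above (sample, abbreviated truth).
sample = (
    '{"question": "What was the approximate inflation rate in 2022?", '
    '"truth": "Near 3.4% [Financial Market Analysis Report 2023.pdf'
    '#page=8(figure8_1.png)]."}'
)
records = load_ground_truth([sample])
print(records[0]["question"])
```

Blank lines are skipped so a trailing newline in the file does not raise a `JSONDecodeError`.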