3 changes: 2 additions & 1 deletion app/backend/approaches/approach.py
@@ -438,7 +438,8 @@ def nonewlines(s: str) -> str:
for doc in results:
# Get the citation for the source page
citation = self.get_citation(doc.sourcepage)
citations.append(citation)
if citation not in citations:
citations.append(citation)

# If semantic captions are used, extract captions; otherwise, use content
if use_semantic_captions and doc.captions:
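In effect, duplicate citations from multiple chunks of the same source page now collapse to a single entry. A minimal sketch of the new behavior, using a hypothetical `Doc` stand-in for the real search result type:

```python
from dataclasses import dataclass


@dataclass
class Doc:
    sourcepage: str  # hypothetical stand-in for the real search result type


def build_citations(results: list[Doc]) -> list[str]:
    citations: list[str] = []
    for doc in results:
        citation = doc.sourcepage  # the real code calls self.get_citation(doc.sourcepage)
        if citation not in citations:  # new guard: skip duplicates
            citations.append(citation)
    return citations


# Two chunks from the same page now yield a single citation:
print(build_citations([Doc("Benefit_Options.pdf#page=3"), Doc("Benefit_Options.pdf#page=3")]))
# ['Benefit_Options.pdf#page=3']
```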
20 changes: 7 additions & 13 deletions app/backend/approaches/prompts/ask_answer_question.prompty
@@ -14,26 +14,20 @@ system:
{% if override_prompt %}
{{ override_prompt }}
{% else %}
You are an intelligent assistant helping Contoso Inc employees with their healthcare plan questions and employee handbook questions.
Use 'you' to refer to the individual asking the questions even if they ask with 'I'.
Answer the following question using only the data provided in the sources below.
Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response.
If you cannot answer using the sources below, say you don't know. Use below example to answer.
Assistant helps the company employees with their questions about internal documents. Be brief in your answers.
Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below.
You CANNOT ask clarifying questions to the user, since the user will have no way to reply.
Collaborator: gpt-5 can do negations right

Collaborator (author): (Discussed offline as well). There's been some talk of avoiding negations in prompting LLMs, but I haven't seen any strong evidence that this is a big issue. I also checked the evaluation results for the "ask" approach and I don't see any follow-up questions in those results.

If the question is not in English, answer in the language used in the question.
Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response. Use square brackets to reference the source, for example [info1.txt]. Don't combine sources, list each source separately, for example [info1.txt][info2.pdf].
{% if image_sources %}
Each image source has the document file name in the top left corner of the image with coordinates (10,10) pixels with format <filename.ext#page=N>,
and the image figure name is right-aligned in the top right corner of the image.
The filename of the actual image is in the top right corner of the image and is in the format <figureN_N.png>.
Each text source starts in a new line and has the file name followed by colon and the actual information.
Always include the source document filename for each fact you use in the response in the format: [document_name.ext#page=N].
If you are referencing an image, add the image filename in the format: [document_name.ext#page=N(image_name.png)].
Answer the following question using only the data provided in the sources below.
If you cannot answer using the sources below, say you don't know.
Return just the answer without any input texts.
{% endif %}
Possible citations for current question:
{% for citation in citations %}
[{{ citation }}]
{% endfor %}
Possible citations for current question: {% for citation in citations %} [{{ citation }}] {% endfor %}
{{ injected_prompt }}
{% endif %}

@@ -51,7 +45,7 @@ In-network deductibles are $500 for employee and $1000 for family [info1.txt] an

user:
{{ user_query }}
{% if image_sources is defined %}{% for image_source in image_sources %}
{% if image_sources %}{% for image_source in image_sources %}
![Image]({{image_source}})
{% endfor %}{% endif %}
{% if text_sources is defined %}Sources:{% for text_source in text_sources %}
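The `Possible citations` block added to the system prompt is plain Jinja templating. A standalone sketch of how that single line renders, using the `jinja2` package directly outside the prompty runtime (the citation values are illustrative):

```python
from jinja2 import Template

# The same one-line loop as in the prompty file, rendered in isolation.
line = Template(
    "Possible citations for current question: "
    "{% for citation in citations %} [{{ citation }}] {% endfor %}"
)
print(line.render(citations=["Benefit_Options.pdf#page=3", "employee_handbook.pdf#page=12"]))
# Possible citations for current question:  [Benefit_Options.pdf#page=3]  [employee_handbook.pdf#page=12]
```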
20 changes: 9 additions & 11 deletions app/backend/approaches/prompts/chat_answer_question.prompty
@@ -20,22 +20,20 @@ system:
{% if override_prompt %}
{{ override_prompt }}
{% else %}
Assistant helps the company employees with their healthcare plan questions, and questions about the employee handbook. Be brief in your answers.
Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below. If asking a clarifying question to the user would help, ask the question.
Assistant helps the company employees with their questions about internal documents. Be brief in your answers.
Answer ONLY with the facts listed in the list of sources below. If there isn't enough information below, say you don't know. Do not generate answers that don't use the sources below.
If asking a clarifying question to the user would help, ask the question.
If the question is not in English, answer in the language used in the question.
Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response. Use square brackets to reference the source, for example [info1.txt]. Don't combine sources, list each source separately, for example [info1.txt][info2.pdf].
{% if include_images %}
{% if image_sources %}
Each image source has the document file name in the top left corner of the image with coordinates (10,10) pixels with format <filename.ext#page=N>,
and the image figure name is right-aligned in the top right corner of the image.
The filename of the actual image is in the top right corner of the image and is in the format <figureN_N.png>.
Each text source starts in a new line and has the file name followed by colon and the actual information
Always include the source name from the image or text for each fact you use in the response in the format: [filename]
Answer the following question using only the data provided in the sources below.
If asking a clarifying question to the user would help, ask the question.
Be brief in your answers.
The text and image source can be the same file name, don't use the image title when citing the image source, only use the file name as mentioned
If you cannot answer using the sources below, say you don't know. Return just the answer without any input texts.
Always include the source document filename for each fact you use in the response in the format: [document_name.ext#page=N].
If you are referencing an image, add the image filename in the format: [document_name.ext#page=N(image_name.png)].
{% endif %}
Possible citations for current question: {% for citation in citations %} [{{ citation }}] {% endfor %}
{{ injected_prompt }}
{% endif %}

@@ -56,9 +54,9 @@ Make sure the last question ends with ">>".

user:
{{ user_query }}
{% for image_source in image_sources %}
{% if image_sources %}{% for image_source in image_sources %}
![Image]({{image_source}})
{% endfor %}
{% endfor %}{% endif %}
{% if text_sources is defined %}
Sources:
{% for text_source in text_sources %}
14 changes: 10 additions & 4 deletions docs/evaluation.md
@@ -72,7 +72,7 @@ Review the generated data in `evals/ground_truth.jsonl` after running that scrip

## Run bulk evaluation

Review the configuration in `evals/eval_config.json` to ensure that everything is correctly setup. You may want to adjust the metrics used. See [the ai-rag-chat-evaluator README](https://github.com/Azure-Samples/ai-rag-chat-evaluator) for more information on the available metrics.
Review the configuration in `evals/evaluate_config.json` to ensure that everything is correctly setup. You may want to adjust the metrics used. See [the ai-rag-chat-evaluator README](https://github.com/Azure-Samples/ai-rag-chat-evaluator) for more information on the available metrics.

By default, the evaluation script will evaluate every question in the ground truth data.
Run the evaluation script by running the following command:
@@ -84,10 +84,10 @@ python evals/evaluate.py
The options are:

* `numquestions`: The number of questions to evaluate. By default, this is all questions in the ground truth data.
* `resultsdir`: The directory to write the evaluation results. By default, this is a timestamped folder in `evals/results`. This option can also be specified in `eval_config.json`.
* `targeturl`: The URL of the running application to evaluate. By default, this is `http://localhost:50505`. This option can also be specified in `eval_config.json`.
* `resultsdir`: The directory to write the evaluation results. By default, this is a timestamped folder in `evals/results`. This option can also be specified in `evaluate_config.json`.
* `targeturl`: The URL of the running application to evaluate. By default, this is `http://localhost:50505`. This option can also be specified in `evaluate_config.json`.

🕰️ This may take a long time, possibly several hours, depending on the number of ground truth questions, and the TPM capacity of the evaluation model, and the number of GPT metrics requested.
🕰️ This may take a long time, possibly several hours, depending on the number of ground truth questions, the TPM capacity of the evaluation model, and the number of LLM-based metrics requested.
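
For example, a run limited to ten questions against a local deployment might look like `python evals/evaluate.py --numquestions 10 --targeturl http://localhost:50505` (flag spellings here are assumed from the option names above; run `python evals/evaluate.py --help` to confirm the exact arguments).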

## Review the evaluation results

@@ -118,3 +118,9 @@ This repository includes a GitHub Action workflow `evaluate.yaml` that can be us
In order for the workflow to run successfully, you must first set up [continuous integration](./azd.md#github-actions) for the repository.

To run the evaluation on the changes in a PR, a repository member can post a `/evaluate` comment to the PR. This will trigger the evaluation workflow to run the evaluation on the PR changes and will post the results to the PR.

## Evaluate multimodal RAG answers

The repository also includes an `evaluate_config_multimodal.json` file specifically for evaluating multimodal RAG answers. This configuration uses a different ground truth file, `ground_truth_multimodal.jsonl`, which includes questions based off the sample data that require both text and image sources to answer.

Note that the "groundedness" evaluator is not reliable for multimodal RAG, since it does not currently incorporate the image sources. We still include it in the metrics, but the more reliable metrics are "relevance" and "citations matched".
30 changes: 26 additions & 4 deletions evals/evaluate.py
@@ -13,6 +13,28 @@

logger = logging.getLogger("ragapp")

# Regex pattern to match citations of the forms:
# [Document Name.pdf#page=7]
# [Document Name.pdf#page=4(figure4_1.png)]
# Supported document extensions: pdf, html/htm, doc/docx, ppt/pptx, xls/xlsx, csv, txt, json,
# and image extensions: jpg/jpeg, png, bmp, tif/tiff, heif/heiff.
# Pattern components:
# \[                  - opening bracket
# [^\]]+?\.           - non-greedy match of the filename up to the dot before the extension
# (?:pdf|html?|docx?|pptx?|xlsx?|csv|txt|json|jpe?g|png|bmp|tiff?|heiff?|heif)
#                     - allowed file extensions
# (?:#page=\d+)?      - optional page anchor (primarily for paged docs like PDFs)
# (?:\([^()\]]+\))?   - optional parenthetical figure/image reference, e.g. (figure4_1.png)
# \]                  - closing bracket
CITATION_REGEX = re.compile(
r"\[[^\]]+?\.(?:pdf|html?|docx?|pptx?|xlsx?|csv|txt|json|jpe?g|png|bmp|tiff?|heiff?|heif)(?:#page=\d+)?(?:\([^()\]]+\))?\]",
re.IGNORECASE,
)


class AnyCitationMetric(BaseMetric):
METRIC_NAME = "any_citation"
@@ -23,7 +45,7 @@ def any_citation(*, response, **kwargs):
if response is None:
logger.warning("Received response of None, can't compute any_citation metric. Setting to -1.")
return {cls.METRIC_NAME: -1}
return {cls.METRIC_NAME: bool(re.search(r"\[([^\]]+)\.\w{3,4}(#page=\d+)*\]", response))}
return {cls.METRIC_NAME: bool(CITATION_REGEX.search(response))}

return any_citation

@@ -45,9 +67,9 @@ def citations_matched(*, response, ground_truth, **kwargs):
if response is None:
logger.warning("Received response of None, can't compute citation_match metric. Setting to -1.")
return {cls.METRIC_NAME: -1}
# Return true if all citations in the truth are present in the response
truth_citations = set(re.findall(r"\[([^\]]+)\.\w{3,4}(#page=\d+)*\]", ground_truth))
response_citations = set(re.findall(r"\[([^\]]+)\.\w{3,4}(#page=\d+)*\]", response))
# Extract full citation tokens from ground truth and response
truth_citations = set(CITATION_REGEX.findall(ground_truth or ""))
response_citations = set(CITATION_REGEX.findall(response or ""))
# Count the percentage of citations that are present in the response
num_citations = len(truth_citations)
num_matched_citations = len(truth_citations.intersection(response_citations))
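
For reference, a short standalone check of what `CITATION_REGEX` does and does not match (the sample answer text is made up):

```python
import re

# Same pattern as CITATION_REGEX above, repeated here so the snippet runs on its own.
CITATION_REGEX = re.compile(
    r"\[[^\]]+?\.(?:pdf|html?|docx?|pptx?|xlsx?|csv|txt|json|jpe?g|png|bmp|tiff?|heiff?|heif)(?:#page=\d+)?(?:\([^()\]]+\))?\]",
    re.IGNORECASE,
)

answer = (
    "Deductibles are $500 [Benefit_Options.pdf#page=2] and the chart shows the trend "
    "[Financial Market Analysis Report 2023.pdf#page=6(figure6_1.png)], but [not a citation]."
)

# With no capture groups, findall returns the full citation tokens;
# the bracketed text without a file extension is ignored.
print(CITATION_REGEX.findall(answer))
# ['[Benefit_Options.pdf#page=2]', '[Financial Market Analysis Report 2023.pdf#page=6(figure6_1.png)]']
```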
11 changes: 6 additions & 5 deletions evals/evaluate_config.json
@@ -1,8 +1,8 @@
{
"testdata_path": "ground_truth.jsonl",
"results_dir": "results/experiment<TIMESTAMP>",
"results_dir": "results/baseline-ask",
"requested_metrics": ["gpt_groundedness", "gpt_relevance", "answer_length", "latency", "citations_matched", "any_citation"],
"target_url": "http://localhost:50505/chat",
"target_url": "http://localhost:50505/ask",
"target_parameters": {
"overrides": {
"top": 3,
@@ -19,9 +19,10 @@
"suggest_followup_questions": false,
"use_oid_security_filter": false,
"use_groups_security_filter": false,
"vector_fields": "textEmbeddingOnly",
"use_gpt4v": false,
"gpt4v_input": "textAndImages",
"search_text_embeddings": true,
"search_image_embeddings": true,
"send_text_sources": true,
"send_image_sources": true,
"language": "en",
"use_agentic_retrieval": false,
"seed": 1
33 changes: 33 additions & 0 deletions evals/evaluate_config_multimodal.json
@@ -0,0 +1,33 @@
{
"testdata_path": "ground_truth_multimodal.jsonl",
"results_dir": "results_multimodal/experiment<TIMESTAMP>",
"requested_metrics": ["gpt_relevance", "answer_length", "latency", "citations_matched", "any_citation"],
"target_url": "http://localhost:50505/chat",
"target_parameters": {
"overrides": {
"top": 3,
"max_subqueries": 10,
"results_merge_strategy": "interleaved",
"temperature": 0.3,
"minimum_reranker_score": 0,
"minimum_search_score": 0,
"retrieval_mode": "hybrid",
"semantic_ranker": true,
"semantic_captions": false,
"query_rewriting": false,
"reasoning_effort": "minimal",
"suggest_followup_questions": false,
"use_oid_security_filter": false,
"use_groups_security_filter": false,
"search_text_embeddings": true,
"search_image_embeddings": true,
"send_text_sources": true,
"send_image_sources": true,
"language": "en",
"use_agentic_retrieval": false,
"seed": 1
}
},
"target_response_answer_jmespath": "message.content",
"target_response_context_jmespath": "context.data_points.text"
}
10 changes: 10 additions & 0 deletions evals/ground_truth_multimodal.jsonl
@@ -0,0 +1,10 @@
{"question": "Which commodity—oil, gold, or wheat—was the most stable over the last decade?", "truth": "Over the last decade, gold was the most stable commodity compared to oil and wheat. The annual percentage changes for gold mostly stayed within a smaller range, while oil showed significant fluctuations including a large negative change in 2014 and a large positive peak in 2021. Wheat also varied but less than oil and more than gold [Financial Market Analysis Report 2023.pdf#page=6][Financial Market Analysis Report 2023.pdf#page=6(figure6_1.png)]."}
{"question": "Do cryptocurrencies like Bitcoin or Ethereum show stronger ties to stocks or commodities?", "truth": "Cryptocurrencies like Bitcoin and Ethereum show stronger ties to stocks than to commodities. The correlation values between Bitcoin and stock indices are 0.3 with the S&P 500 and 0.4 with NASDAQ, while for Ethereum, the correlations are 0.35 with the S&P 500 and 0.45 with NASDAQ. In contrast, the correlations with commodities like Oil are lower (0.2 for Bitcoin and 0.25 for Ethereum), and correlations with Gold are slightly negative (-0.1 for Bitcoin and -0.05 for Ethereum) [Financial Market Analysis Report 2023.pdf#page=7]."}
{"question": "Around what level did the S&P 500 reach its highest point before declining in 2021?", "truth": "The S&P 500 reached its highest point just above the 4500 level before declining in 2021 [Financial Market Analysis Report 2023.pdf#page=4][Financial Market Analysis Report 2023.pdf#page=4(figure4_1.png)]."}
{"question": "In which month of 2023 did Bitcoin nearly hit 45,000?", "truth": "Bitcoin nearly hit 45,000 in December 2023, as shown by the blue line reaching close to 45,000 on the graph for that month [Financial Market Analysis Report 2023.pdf#page=5(figure5_1.png)]."}
{"question": "Which year saw oil prices fall the most, and by roughly how much did they drop?", "truth": "The year that saw oil prices fall the most was 2020, with a drop of roughly 20% as shown by the blue bar extending to about -20% on the horizontal bar chart of annual percentage changes for Oil from 2014 to 2022 [Financial Market Analysis Report 2023.pdf#page=6(figure6_1.png)]."}
{"question": "What was the approximate inflation rate in 2022?", "truth": "The approximate inflation rate in 2022 was near 3.4% according to the orange line in the inflation data on the graph showing trends from 2018 to 2023 [Financial Market Analysis Report 2023.pdf#page=8(figure8_1.png)]."}
{"question": "By 2028, to what relative value are oil prices projected to move compared to their 2024 baseline of 100?", "truth": "Oil prices are projected to decline to about 90 by 2028, relative to their 2024 baseline of 100. [Financial Market Analysis Report 2023.pdf#page=9(figure9_1.png)]."}
{"question": "What approximate value did the S&P 500 fall to at its lowest point between 2018 and 2022?", "truth": "The S&P 500 fell in 2018 to an approximate value of around 2600 at its lowest point between 2018 and 2022, as shown by the graph depicting the 5-Year Trend of the S&P 500 Index [Financial Market Analysis Report 2023.pdf#page=4(figure4_1.png)]."}
{"question": "Around what value did Ethereum finish the year at in 2023?", "truth": "Ethereum finished the year 2023 at a value around 2200, as indicated by the orange line on the price fluctuations graph for the last 12 months [Financial Market Analysis Report 2023.pdf#page=5][Financial Market Analysis Report 2023.pdf#page=5(figure5_1.png)][Financial Market Analysis Report 2023.pdf#page=5(figure5_2.png)]."}
{"question": "What was the approximate GDP growth rate in 2021?", "truth": "The approximate GDP growth rate in 2021 was about 4.5% according to the line graph showing trends from 2018 to 2023 [Financial Market Analysis Report 2023.pdf#page=8(figure8_1.png)]."}
33 changes: 33 additions & 0 deletions evals/results/baseline-ask/config.json
@@ -0,0 +1,33 @@
{
"testdata_path": "ground_truth.jsonl",
"results_dir": "results/baseline-ask",
"requested_metrics": ["gpt_groundedness", "gpt_relevance", "answer_length", "latency", "citations_matched", "any_citation"],
"target_url": "http://localhost:50505/ask",
"target_parameters": {
"overrides": {
"top": 3,
"max_subqueries": 10,
"results_merge_strategy": "interleaved",
"temperature": 0.3,
"minimum_reranker_score": 0,
"minimum_search_score": 0,
"retrieval_mode": "hybrid",
"semantic_ranker": true,
"semantic_captions": false,
"query_rewriting": false,
"reasoning_effort": "minimal",
"suggest_followup_questions": false,
"use_oid_security_filter": false,
"use_groups_security_filter": false,
"search_text_embeddings": true,
"search_image_embeddings": true,
"send_text_sources": true,
"send_image_sources": true,
"language": "en",
"use_agentic_retrieval": false,
"seed": 1
}
},
"target_response_answer_jmespath": "message.content",
"target_response_context_jmespath": "context.data_points.text"
}