8 changes: 8 additions & 0 deletions evals/results/baseline/README.md
@@ -0,0 +1,8 @@
# Baseline Evaluation

This evaluation was run against the application using the following models:

* Chat completion: gpt-4o-mini
* Embedding: text-embedding-3-large (with binary quantization, dimension reduction to 1024, and oversampling)

These are the default models and settings as of May 8, 2025.
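
To make the embedding settings concrete, here is a rough sketch of requesting reduced-dimension embeddings and binary-quantizing them. This is illustrative only: in the application, quantization and oversampling are handled by Azure AI Search rather than client-side, and `embed_binary` is a hypothetical helper.

```python
# Illustrative sketch (not part of this PR): request 1024-dim embeddings
# from text-embedding-3-large, then binary-quantize them client-side.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed_binary(text: str) -> np.ndarray:
    # The dimensions parameter truncates the model's native 3072-dim
    # vector to 1024 dims (Matryoshka-style reduction).
    resp = client.embeddings.create(
        model="text-embedding-3-large", input=text, dimensions=1024
    )
    vector = np.array(resp.data[0].embedding, dtype=np.float32)
    # Binary quantization keeps only the sign of each component,
    # packing 1024 floats into 128 bytes (large storage savings);
    # oversampling at query time compensates for the lost precision.
    return np.packbits(vector > 0)
```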
2 changes: 1 addition & 1 deletion evals/results/baseline/config.json
@@ -1,6 +1,6 @@
{
"testdata_path": "ground_truth.jsonl",
"results_dir": "results/gpt-4o-mini",
"results_dir": "results/experiment<TIMESTAMP>",
Collaborator: does this get replaced automatically?

Collaborator (author): Oh yes, when I ran this eval I didn't bother changing the results_dir, which is why we see it changed in the config back to the default configuration; I renamed the folder after the fact instead. I could pretend that I had specified a folder name, but this config.json does actually reflect how I ran it, so it's true.

Collaborator (author): And yes, the evaluator replaces it with the current timestamp, so you don't accidentally overwrite a past evaluation if you forget to change the dir. I usually use that approach to be safe.
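
A minimal sketch of how such a placeholder substitution could work (illustrative, not the evaluator's actual code; `resolve_results_dir` is a hypothetical name):

```python
# Hypothetical sketch: expand <TIMESTAMP> so a re-run never overwrites a
# previous results directory. Not the evaluator's actual implementation.
import time
from pathlib import Path

def resolve_results_dir(configured: str) -> Path:
    # e.g. "results/experiment<TIMESTAMP>" -> "results/experiment1746818372"
    resolved = configured.replace("<TIMESTAMP>", str(int(time.time())))
    path = Path(resolved)
    path.mkdir(parents=True, exist_ok=True)  # each run gets a fresh directory
    return path
```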

"requested_metrics": ["gpt_groundedness", "gpt_relevance", "answer_length", "latency", "citations_matched", "any_citation"],
"target_url": "http://localhost:50505/chat",
"target_parameters": {
100 changes: 50 additions & 50 deletions evals/results/baseline/eval_results.jsonl

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion evals/results/baseline/evaluate_parameters.json
@@ -1,6 +1,6 @@
{
"evaluation_gpt_model": "gpt-4o",
"evaluation_timestamp": 1744920281,
"evaluation_timestamp": 1746818372,
"testdata_path": "/Users/pamelafox/azure-search-openai-demo/evals/ground_truth.jsonl",
"target_url": "http://localhost:50505/chat",
"target_parameters": {
22 changes: 11 additions & 11 deletions evals/results/baseline/summary.json
@@ -1,27 +1,27 @@
{
"gpt_groundedness": {
"pass_count": 44,
"pass_rate": 0.88,
"mean_rating": 4.62
"pass_count": 43,
"pass_rate": 0.86,
"mean_rating": 4.5
},
"gpt_relevance": {
"pass_count": 42,
"pass_rate": 0.84,
"mean_rating": 4.12
"mean_rating": 4.22
},
"answer_length": {
"mean": 922.42,
"max": 1616,
"mean": 919.26,
"max": 1647,
"min": 193
},
"latency": {
"mean": 3.14,
"max": 7.583068,
"min": 1.598833
"mean": 4.46,
"max": 15.129978,
"min": 2.465542
},
"citations_matched": {
"total": 25,
"rate": 0.5
"total": 24,
"rate": 0.49
},
"any_citation": {
"total": 50,
28 changes: 28 additions & 0 deletions evals/results/gpt4omini-ada002/config.json
@@ -0,0 +1,28 @@
{
"testdata_path": "ground_truth.jsonl",
"results_dir": "results/gpt-4o-mini",
"requested_metrics": ["gpt_groundedness", "gpt_relevance", "answer_length", "latency", "citations_matched", "any_citation"],
"target_url": "http://localhost:50505/chat",
"target_parameters": {
"overrides": {
"top": 3,
"temperature": 0.3,
"minimum_reranker_score": 0,
"minimum_search_score": 0,
"retrieval_mode": "hybrid",
"semantic_ranker": true,
"semantic_captions": false,
"suggest_followup_questions": false,
"use_oid_security_filter": false,
"use_groups_security_filter": false,
"vector_fields": [
"embedding"
],
"use_gpt4v": false,
"gpt4v_input": "textAndImages",
"seed": 1
}
},
"target_response_answer_jmespath": "message.content",
"target_response_context_jmespath": "context.data_points.text"
}
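
To make the two JMESPath settings concrete, here is a rough sketch of how a harness could use them to extract the answer and retrieved context from a /chat response (the request payload is an assumption based on the overrides in this config, not the harness's actual code):

```python
# Illustrative sketch of applying the two JMESPath expressions above.
import jmespath
import requests

def query_target(question: str) -> tuple[str, list[str]]:
    response = requests.post(
        "http://localhost:50505/chat",
        json={
            "messages": [{"role": "user", "content": question}],
            "context": {"overrides": {"top": 3, "temperature": 0.3, "seed": 1}},
        },
        timeout=60,
    ).json()
    # target_response_answer_jmespath -> the model's answer text
    answer = jmespath.search("message.content", response)
    # target_response_context_jmespath -> the retrieved source chunks
    context = jmespath.search("context.data_points.text", response)
    return answer, context
```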
50 changes: 50 additions & 0 deletions evals/results/gpt4omini-ada002/eval_results.jsonl

Large diffs are not rendered by default.

27 changes: 27 additions & 0 deletions evals/results/gpt4omini-ada002/evaluate_parameters.json
@@ -0,0 +1,27 @@
{
"evaluation_gpt_model": "gpt-4o",
"evaluation_timestamp": 1744920281,
"testdata_path": "/Users/pamelafox/azure-search-openai-demo/evals/ground_truth.jsonl",
"target_url": "http://localhost:50505/chat",
"target_parameters": {
"overrides": {
"top": 3,
"temperature": 0.3,
"minimum_reranker_score": 0,
"minimum_search_score": 0,
"retrieval_mode": "hybrid",
"semantic_ranker": true,
"semantic_captions": false,
"suggest_followup_questions": false,
"use_oid_security_filter": false,
"use_groups_security_filter": false,
"vector_fields": [
"embedding"
],
"use_gpt4v": false,
"gpt4v_input": "textAndImages",
"seed": 1
}
},
"num_questions": null
}
33 changes: 33 additions & 0 deletions evals/results/gpt4omini-ada002/summary.json
@@ -0,0 +1,33 @@
{
"gpt_groundedness": {
"pass_count": 44,
"pass_rate": 0.88,
"mean_rating": 4.62
},
"gpt_relevance": {
"pass_count": 42,
"pass_rate": 0.84,
"mean_rating": 4.12
},
"answer_length": {
"mean": 922.42,
"max": 1616,
"min": 193
},
"latency": {
"mean": 3.14,
"max": 7.583068,
"min": 1.598833
},
"citations_matched": {
"total": 25,
"rate": 0.5
},
"any_citation": {
"total": 50,
"rate": 1.0
},
"num_questions": {
"total": 50
}
}
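
For reference, aggregates like the gpt_groundedness block plausibly derive from the 50 rows of eval_results.jsonl along these lines (a sketch under assumptions: the per-row field name and the pass threshold of rating >= 4 are guesses, not confirmed by this PR):

```python
# Sketch: derive the gpt_groundedness summary from per-question results.
# Assumed: each JSONL row carries a numeric "gpt_groundedness" rating and
# "pass" means rating >= 4. Neither is confirmed by this PR.
import json

with open("eval_results.jsonl") as f:
    rows = [json.loads(line) for line in f]

ratings = [row["gpt_groundedness"] for row in rows]
pass_count = sum(1 for r in ratings if r >= 4)
print({
    "pass_count": pass_count,                              # 44
    "pass_rate": pass_count / len(ratings),                # 44 / 50 = 0.88
    "mean_rating": round(sum(ratings) / len(ratings), 2),  # 4.62
})
```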