Include report averages #3053
base: main
```diff
@@ -324,6 +324,8 @@ async def _handle_case(case: Case[InputsT, OutputT, MetadataT], report_case_name
                 trace_id=trace_id,
             )
             if (averages := report.averages()) is not None and averages.assertions is not None:
+                experiment_metadata = {'n_cases': len(self.cases), 'averages': averages}
+                eval_span.set_attribute('experiment.metadata', experiment_metadata)
                 eval_span.set_attribute('assertion_pass_rate', averages.assertions)
         return report
```

Review comment: Can you add a …

Review comment: I'd argue it's specific to pydantic evals, rather than … We use the …

Review comment: @cetra3 Yeah, I was suggesting (based on @alexmojaki's suggestion in Slack) to change it to … @alexmojaki What do you think about a …

Review comment: Everything here is specific to pydantic evals. This is particularly specific not just to logfire but to fusionfire popped attributes. `n_cases` is being repeated here so that it can be displayed cheaply in the frontend. We may add or remove things here freely depending on what the frontend needs; it's not intended to be queried. I think …
```diff
@@ -1496,10 +1496,36 @@ async def mock_async_task(inputs: TaskInput) -> TaskOutput:
                 'assertion_pass_rate': 1.0,
                 'logfire.msg_template': 'evaluate {name}',
                 'logfire.msg': 'evaluate mock_async_task',
+                'experiment.metadata': {
+                    'n_cases': 2,
+                    'averages': {
+                        'name': 'Averages',
+                        'scores': {'confidence': 1.0},
+                        'labels': {},
+                        'metrics': {},
+                        'assertions': 1.0,
+                        'task_duration': 1.0,
+                        'total_duration': 9.0,
+                    },
+                },
                 'logfire.span_type': 'span',
                 'logfire.json_schema': {
                     'type': 'object',
-                    'properties': {'name': {}, 'n_cases': {}, 'assertion_pass_rate': {}},
+                    'properties': {
+                        'name': {},
+                        'n_cases': {},
+                        'experiment.metadata': {
+                            'type': 'object',
+                            'properties': {
+                                'averages': {
+                                    'type': 'object',
+                                    'title': 'ReportCaseAggregate',
+                                    'x-python-datatype': 'PydanticModel',
+                                }
+                            },
+                        },
+                        'assertion_pass_rate': {},
+                    },
                 },
             ),
```

Review comment (on the `'averages'` dict): If we're going to be using this on the frontend, I don't know if it's wise to pass along the entire …

Review comment (on the `'averages'` schema entry): I'd expect …
Review comment: From the test below, it looks like we already had `n_cases` in here, do we need to repeat it? Or could we drop it there and have it just under the metadata?