Question
I am trying out the evals framework and want to see how it works with Logfire. After setting up a dummy experiment and running it, I can see the run in the general Live console on Logfire. However, when I go to the Evals tab and click the experiment, the info page never loads.

I'm not sure whether I'm configuring something in the experiment wrong or what is going on, since the experiment also doesn't include the name of the dataset that was used. But it seems odd that the experiment shows up yet its page never loads.

I am configuring logfire like this before running the experiment:
```python
import logfire

logfire.configure(environment='development', service_name='evals')
report = await dataset.evaluate(agent_run, max_concurrency=1)
```
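For context, a stripped-down version of the full script looks roughly like this. The task function, case contents, and names here are simplified placeholders rather than my real agent, and I've left out the evaluators:

```python
import asyncio

import logfire
from pydantic_evals import Case, Dataset


async def agent_run(inputs: str) -> str:
    # Placeholder for the real agent call in my experiment.
    return inputs.upper()


async def main() -> None:
    # Configure Logfire before running the experiment.
    logfire.configure(environment='development', service_name='evals')

    # Dummy dataset with a single case; my real dataset has more cases.
    dataset = Dataset(
        cases=[
            Case(name='dummy_case', inputs='hello', expected_output='HELLO'),
        ]
    )

    report = await dataset.evaluate(agent_run, max_concurrency=1)
    report.print()


if __name__ == '__main__':
    asyncio.run(main())
```

Running this produces the run in the Live view as expected; it's only the experiment page under the Evals tab that fails to load.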
Output of `logfire info`:

```
logfire="4.9.0"
platform="macOS-15.6.1-arm64-arm-64bit"
python="3.12.11 (main, Jul 11 2025, 22:26:01) [Clang 20.1.4 ]"
[related_packages]
requests="2.32.4"
pydantic="2.11.7"
fastapi="0.116.1"
openai="1.98.0"
protobuf="6.31.1"
rich="14.1.0"
executing="2.2.0"
opentelemetry-api="1.36.0"
opentelemetry-exporter-otlp-proto-common="1.36.0"
opentelemetry-exporter-otlp-proto-http="1.36.0"
opentelemetry-instrumentation="0.57b0"
opentelemetry-instrumentation-asgi="0.57b0"
opentelemetry-instrumentation-fastapi="0.57b0"
opentelemetry-proto="1.36.0"
opentelemetry-sdk="1.36.0"
opentelemetry-semantic-conventions="0.57b0"
opentelemetry-util-http="0.57b0"
```
Any ideas what is going on here?