Feature Request - Transpose view for dataset run comparison table #11803
Replies: 2 comments
Thanks for the detailed feature request! This is a UI enhancement suggestion for the dataset run comparison table in Langfuse. Based on the available documentation, the current experiment comparison view displays runs as rows with metrics as columns (1)(2); a transpose or alternate view option for the dataset run comparison table is not mentioned. Your use case makes sense: when working with multiple evaluators and metrics, horizontal scrolling makes it difficult to compare results effectively. A transposed view with scores as rows and runs as columns would improve readability for experiments with many evaluation metrics. Since this specific UI feature isn't covered in the current documentation, I'd recommend:
For immediate workarounds, you could:
This would be a valuable addition to Langfuse's experiment comparison capabilities, especially for teams running comprehensive evaluation suites.
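As one possible interim workaround (not an official Langfuse feature), run scores could be exported and pivoted client-side. The sketch below uses only plain Python; the run names and metric values are illustrative placeholders, and fetching the actual scores from Langfuse is left out since the exact SDK calls would need to be confirmed against the SDK documentation.

```python
# Hypothetical client-side transpose of an experiment comparison table.
# Input shape mirrors the current UI: runs as rows, metrics as columns.
runs = {
    "run-a": {"correctness": 0.91, "conciseness": 0.74, "hallucination": 0.05},
    "run-b": {"correctness": 0.88, "conciseness": 0.81, "hallucination": 0.09},
}

# Pivot so metrics become the outer keys (rows) and runs the inner keys (columns).
by_metric: dict[str, dict[str, float]] = {}
for run_name, metrics in runs.items():
    for metric_name, value in metrics.items():
        by_metric.setdefault(metric_name, {})[run_name] = value

# Print the transposed view: one row per metric, one column per run.
run_names = list(runs)
print("metric".ljust(16) + "".join(r.ljust(10) for r in run_names))
for metric_name, values in by_metric.items():
    row = "".join(f"{values[r]:<10}" for r in run_names)
    print(metric_name.ljust(16) + row)
```

With many evaluators this keeps the long metric names in the leftmost column, which is exactly the readability gain the transposed view would provide in the UI.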
I'll close this because I opened a GitHub Discussion under "Langfuse Ideas", as proposed:
Describe your question
Hi Langfuse team,
When viewing evaluation results for a dataset run in the UI, the current layout displays runs as rows and metrics/scores as columns. This works well for a small number of scores, but when there are many evaluation metrics, the score column names become difficult to read and the table becomes hard to navigate horizontally.
Feature Request:
Would it be possible to add an option to transpose this view? Ideally, I'd like to see:
Scores/metrics as rows
Dataset runs as columns
This would make it much easier to read score names and compare results across runs when working with multiple evaluation metrics.
Use Case:
I'm running experiments with multiple evaluators and need to quickly compare how different runs perform across various metrics. The current horizontal scrolling makes it challenging to see all score names clearly.
Thanks for considering this enhancement!
Langfuse Cloud or Self-Hosted?
Langfuse Cloud
If Self-Hosted
v3.147.0
If Langfuse Cloud
No response
SDK and integration versions
python-sdk 3.12.0
Pre-Submission Checklist