| title | Annotation Queues |
|---|---|
| description | Manage your annotation tasks with ease using our new workflow tooling. Create queues, add traces to them, and get a simple UI to review and label LLM application traces in Langfuse. |
Annotation Queues are a manual evaluation method built for domain experts to add scores and comments to traces, observations or sessions.
<Video src="https://static.langfuse.com/docs-videos/2025-12-19-annotation-queues.mp4" aspectRatio={16 / 9} gifStyle />
- Manually explore application results and add scores and comments to them
- Allow domain experts to add scores and comments to a subset of traces
- Add corrected outputs to capture what the model should have generated
- Align your LLM-as-a-Judge evaluation with human annotation
- Click on `New Queue` to create a new queue.
- Select the `Score Configs` you want to use for this queue.
- Set the `Queue name` and `Description` (optional).
- Assign users to the queue (optional).
Once you have created annotation queues, you can assign traces, observations or sessions to them.
<Tabs items={["Bulk Selection", "Single Item"]}>

To add multiple traces, observations or sessions to a queue:
- Select the traces, observations or sessions via the checkboxes.
- Click on the "Actions" dropdown menu.
- Click on `Add to queue`.
- Select the queue you want to add the selected items to.
To add a single trace, observation or session:

- Click on the `Annotate` dropdown.
- Select the queue you want to add the item to.
You will see an annotation task for each item in the queue.
- On the `Annotate` card, add scores on the defined dimensions.
- Click on `Complete + next` to move to the next annotation task or finish the queue.
You can also manage annotation queues via the API. This allows you to scale and automate your annotation workflows, or to use Langfuse as the backbone of a custom-built annotation tool.
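As a minimal sketch of what this could look like, the snippet below adds a trace to an existing queue via the public REST API. It assumes the `POST /api/public/annotation-queues/{queueId}/items` endpoint with an `objectId`/`objectType` request body and HTTP basic auth (public key as username, secret key as password); verify the exact paths and fields against the current Langfuse API reference before relying on them.

```python
import base64
import json
import os
import urllib.request

# Assumption: cloud URL; replace with your self-hosted Langfuse base URL.
BASE_URL = "https://cloud.langfuse.com"


def build_queue_item(object_id: str, object_type: str = "TRACE") -> dict:
    """Request body for one queue item.

    The accepted objectType values (TRACE / OBSERVATION / SESSION) mirror the
    item types described above, but are an assumption about the API schema.
    """
    if object_type not in {"TRACE", "OBSERVATION", "SESSION"}:
        raise ValueError(f"unsupported objectType: {object_type}")
    return {"objectId": object_id, "objectType": object_type}


def add_to_queue(queue_id: str, object_id: str, object_type: str = "TRACE") -> dict:
    # Langfuse's public API authenticates with HTTP basic auth:
    # public key as username, secret key as password.
    token = base64.b64encode(
        f"{os.environ['LANGFUSE_PUBLIC_KEY']}:{os.environ['LANGFUSE_SECRET_KEY']}".encode()
    ).decode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/public/annotation-queues/{queue_id}/items",
        data=json.dumps(build_queue_item(object_id, object_type)).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Pairing this with your existing tracing code (e.g. enqueueing every trace that received a low user-feedback score) is a common way to route only the interesting subset of traces to your domain experts.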