Describe the feature or potential improvement
First of all, thanks for building Langfuse — it’s a very promising project and we really hope it continues to grow into a more robust and production-ready observability platform for LLM systems.
While using it at scale, we’ve encountered several limitations that significantly affect usability and performance. We’d like to share them here in the hope of making the project even better.
1. Trace Deletion Limitations (UI & Python SDK)
Currently, trace deletion is quite limited and slow in both the UI and the Python SDK. This becomes a serious issue in long-running or high-throughput environments where traces accumulate rapidly.
Questions / Suggestions: could bulk trace deletion (for example, by time range or by filter) be supported in both the UI and the Python SDK?
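As a stopgap, deletion can at least be paced client-side. The sketch below is purely illustrative: `StubTraceClient` and `delete_in_batches` are hypothetical names standing in for whatever per-trace deletion call the SDK exposes, not actual Langfuse APIs.

```python
import time
from typing import Iterable, List

class StubTraceClient:
    """In-memory stand-in for a real client; the actual SDK's
    deletion endpoint and signature may differ."""
    def __init__(self, trace_ids: List[str]):
        self._traces = set(trace_ids)

    def delete_trace(self, trace_id: str) -> None:
        # In a real client this would be one HTTP call per trace.
        self._traces.discard(trace_id)

    def count(self) -> int:
        return len(self._traces)

def delete_in_batches(client: StubTraceClient,
                      trace_ids: Iterable[str],
                      batch_size: int = 100,
                      pause_s: float = 0.0) -> int:
    """Delete traces one batch at a time, optionally pausing between
    batches so a long purge does not hammer the API."""
    deleted = 0
    batch: List[str] = []
    for tid in trace_ids:
        batch.append(tid)
        if len(batch) >= batch_size:
            for t in batch:
                client.delete_trace(t)
            deleted += len(batch)
            batch.clear()
            if pause_s:
                time.sleep(pause_s)
    for t in batch:  # flush the final partial batch
        client.delete_trace(t)
    deleted += len(batch)
    return deleted
```

Even with batching, this is still one request per trace under the hood, which is why a server-side bulk-delete (by time range or filter) would be the real fix.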
2. Dataset Retrieval Is Not Streamed / Paginated
We noticed that `client.get_dataset()` fetches the entire dataset at once. This behavior makes it very difficult to manage or inspect large datasets programmatically.
Suggestion: support paginated or streamed retrieval of dataset items, so large datasets can be processed incrementally instead of being loaded in full.
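One possible shape for such an API, sketched client-side: a generator that pulls one page at a time from a paged fetch function and stops on the first short page. `iter_dataset_items` and `fetch_page` are hypothetical names for illustration, not existing SDK methods.

```python
from typing import Callable, Iterator, Sequence

def iter_dataset_items(fetch_page: Callable[[int, int], Sequence[dict]],
                       page_size: int = 50) -> Iterator[dict]:
    """Yield dataset items lazily, one page at a time, instead of
    loading the whole dataset into memory at once."""
    page = 1
    while True:
        items = fetch_page(page, page_size)
        if not items:
            return
        yield from items
        if len(items) < page_size:  # short page => last page
            return
        page += 1

def make_fetch(data):
    """Simulate a paged server endpoint by slicing a local list."""
    def fetch(page: int, size: int):
        start = (page - 1) * size
        return data[start:start + size]
    return fetch
```

Because it is a generator, callers can break out early (e.g. after finding one item) without ever fetching the remaining pages.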
3. Missing Metadata-Based Filtering in Python SDK
In the Python SDK, there is currently no way to filter traces by their metadata when fetching them.
Compared to the UI and common observability tooling expectations, this feels like a major functional gap.
Suggestion: expose metadata-based filter parameters in the SDK's trace retrieval methods, matching the filtering already available in the UI.
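Until then, the only option is to filter client-side after fetching, which is exactly the inefficiency server-side filters would remove. A minimal sketch of that workaround, assuming traces are plain dicts with an optional `metadata` dict (`filter_by_metadata` is a hypothetical helper, not an SDK function):

```python
from typing import Dict, Iterable, Iterator

def filter_by_metadata(traces: Iterable[dict],
                       criteria: Dict[str, object]) -> Iterator[dict]:
    """Keep only traces whose metadata contains every key/value pair
    in `criteria`. Purely client-side: every candidate trace still has
    to be fetched first, which server-side filtering would avoid."""
    for trace in traces:
        meta = trace.get("metadata") or {}
        if all(meta.get(k) == v for k, v in criteria.items()):
            yield trace
```

The subset-match semantics (all criteria must match, extra metadata keys are ignored) mirror what metadata filters in observability UIs typically do.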
Closing Thoughts
We genuinely believe Langfuse has great potential and is already very useful.
Addressing the issues above would significantly improve its scalability, developer experience, and suitability for real-world production workloads.
Thanks a lot for your work on the project — looking forward to future improvements! 🚀
Additional information
No response