Conversation
Thanks for the pull request, @Henrrypg! This repository is currently maintained by […]. Once you've gone through the following steps, feel free to tag them in a comment and let them know that your changes are ready for engineering review.

🔘 **Get product approval.** If you haven't already, check this list to see if your contribution needs to go through the product review process.
🔘 **Provide context.** To help your reviewers and other members of the community understand the purpose and larger context of your changes, feel free to add as much of the following information to the PR description as you can:
🔘 **Get a green build.** If one or more checks are failing, continue working on your changes until this is no longer the case and your build turns green.

**Where can I find more information?** If you'd like to get more details on all aspects of the review process for open source pull requests (OSPRs), check out the following resources:

**When can I expect my changes to be merged?** Our goal is to get community contributions seen and reviewed as efficiently as possible. However, the amount of time that it takes to review and merge a PR can vary significantly based on factors such as:
💡 As a result, it may take several weeks or months to complete a review and merge your PR.
**Codecov Report**

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #193      +/-   ##
==========================================
- Coverage   94.34%   94.33%   -0.01%
==========================================
  Files          60       60
  Lines        6858     6846      -12
  Branches      370      370
==========================================
- Hits         6470     6458      -12
  Misses        296      296
  Partials       92       92
```

Flags with carried forward coverage won't be shown.
Force-pushed from 5b56ced to 9b3d2ac.
This pull request standardizes how LLM (Large Language Model) usage data, such as token counts, is captured, serialized, and emitted in workflow events and API responses throughout the codebase. It replaces the previous `tokens_used` field with a more comprehensive `usage` dictionary/object, ensures this data is JSON-serializable for xAPI tracking, and updates orchestrators and tests accordingly.

**LLM Usage Data Handling and Serialization:**

- Replaces the `tokens_used` field with a `usage` dictionary/object in all LLM processor responses and orchestrators, enabling richer and more flexible usage tracking.
- Adds a `_convert_usage_to_json_serializable` method to `BaseOrchestrator` to handle serialization of usage data, including support for Pydantic models, so that all emitted event payloads are JSON-serializable.

**Workflow Event Emission and xAPI Integration:**

- Extends workflow event payloads with a `usage` field, and updates the event emission logic to include usage data only when present.

**Testing Updates:**

- Updates tests so that the `usage` field is only included when appropriate.
- Removes the `tokens_used` and `model_used` metadata fields from orchestrator and LLM processor tests.