feat: add support for vercel/mastra sdks #1548
dinmukhamedm merged 2 commits into lmnr-ai:fix/mastra-attrs from
Conversation
Signed-off-by: PranshuSrivastava <iampranshu24@gmail.com>
JiwaniZakir
left a comment
The "stream" entry in AISDK_OPERATION_PREFIXES (in spans.rs) looks suspect — streamText and streamObject are already explicit entries, and there's no standalone "stream" operation in the Vercel AI SDK. Since detect_aisdk_operation_prefix iterates in order, a span with "streamText.usage.inputTokens" won't be affected (the key lookup is exact), but the bare "stream" prefix could match unintended spans in other instrumentation libraries that happen to emit attributes like "stream.usage.inputTokens".
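To make the hazard concrete, here is a minimal sketch of the prefix table and lookup being discussed. The constant name and function signature follow the review comment, but the body is an assumption, not the actual spans.rs code:

```rust
// Hypothetical reconstruction of the prefix table discussed in the review;
// the actual spans.rs contents may differ.
const AISDK_OPERATION_PREFIXES: &[&str] = &[
    "streamText",
    "streamObject",
    "generateText",
    "generateObject",
    "stream", // suspect: no standalone "stream" operation exists in the Vercel AI SDK
];

/// Returns the first table entry that matches `key` up to the next '.'.
fn detect_aisdk_operation_prefix(key: &str) -> Option<&'static str> {
    AISDK_OPERATION_PREFIXES
        .iter()
        .copied()
        .find(|p| key.strip_prefix(p).is_some_and(|rest| rest.starts_with('.')))
}

fn main() {
    // "streamText.*" still resolves to "streamText" because it is listed first
    // and the match requires a '.' boundary...
    assert_eq!(
        detect_aisdk_operation_prefix("streamText.usage.inputTokens"),
        Some("streamText")
    );
    // ...but a foreign attribute such as "stream.usage.inputTokens" from another
    // instrumentation library would also match the bare "stream" entry.
    assert_eq!(
        detect_aisdk_operation_prefix("stream.usage.inputTokens"),
        Some("stream")
    );
}
```

Dropping the bare `"stream"` entry, or requiring the full operation names, would avoid the accidental match.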
In normalize_aisdk_attributes, the provider normalization splits on '.' and lowercases — e.g., "openai.chat" → "openai" — but .trim() on the result of .split('.').next() is a no-op because split results from a non-whitespace delimiter won't have leading/trailing spaces. It's harmless but slightly misleading.
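A standalone sketch of that provider normalization, with the redundant trim called out (the helper name is illustrative, not the actual code):

```rust
// Illustrative sketch of the provider normalization described above,
// e.g. "openai.chat" -> "openai".
fn normalize_provider(raw: &str) -> String {
    // The .trim() is a no-op for dot-delimited input like "openai.chat":
    // split('.') cannot introduce whitespace that was not already present.
    raw.split('.').next().unwrap_or(raw).trim().to_lowercase()
}

fn main() {
    assert_eq!(normalize_provider("openai.chat"), "openai");
    assert_eq!(normalize_provider("Anthropic"), "anthropic");
}
```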
The new warning in input_tokens() guards with total_input_tokens > 0, which silently suppresses the warning when total_input_tokens == 0 but cache_total > 0 — that's also a semantically invalid state worth surfacing. Consider logging a warning whenever cache_total > total_input_tokens unconditionally (or separately handle the zero case).
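A sketch of the suggested unconditional check; the function shape, field names, and log message are assumptions based on the comment, not the PR's actual code:

```rust
// Hedged sketch: warn whenever cache tokens exceed the total,
// including the total == 0 case the original guard suppressed.
fn input_tokens(total_input_tokens: u64, cache_total: u64) -> u64 {
    if cache_total > total_input_tokens {
        // With the original guard `total_input_tokens > 0`, this branch
        // would be skipped when the total is zero but cache tokens exist.
        eprintln!(
            "inconsistent token breakdown: cache_total ({cache_total}) > total_input_tokens ({total_input_tokens})"
        );
    }
    total_input_tokens.saturating_sub(cache_total)
}

fn main() {
    assert_eq!(input_tokens(100, 20), 80);
    // The zero-total inconsistency now logs instead of passing silently.
    assert_eq!(input_tokens(0, 5), 0);
}
```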
Finally, normalize_aisdk_attributes is pub but the diff is truncated — it would be worth confirming this is called before span_type() resolution (line ~416), since the type inference in that block relies on GEN_AI_SYSTEM being present, and the normalization needs to run first to populate it from aisdk.model.provider.
@PranshuSrivastava thank you! Continued in #1595
Fixes: #838
Add support for Vercel/Mastra SDKs via normalization of raw attributes.
Note
Medium Risk
Touches LLM span attribute normalization and span-type detection, which can change how usage/cost and inputs/outputs are extracted for incoming traces. Risk is moderate because it’s additive and guarded (no overwrites), but it affects core telemetry parsing paths.
Overview
Adds support for newer Vercel AI SDK/Mastra telemetry by normalizing
`aisdk.*` and operation-prefixed keys (e.g. `stream.*`, `generateText.*`) into the existing `gen_ai.*`/`ai.*` attributes during `Span::parse_and_enrich_attributes`, including model/provider, token usage, and prompt/response fields.

Updates LLM span classification to recognize `aisdk.*`, filters out operation-prefixed prompt attributes from persisted raw attributes, and adds warning logs for inconsistent token breakdowns (cache tokens > total) and for spans where token usage exists but pricing/model info is missing. Includes a comprehensive new test suite covering normalization behavior and non-overwrite guarantees.

Reviewed by Cursor Bugbot for commit b3b85ba.