feat: add support for vercel/mastra sdks (#1548) #1595

dinmukhamedm wants to merge 5 commits into dev
Conversation
Signed-off-by: PranshuSrivastava <iampranshu24@gmail.com> Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Greptile Summary

This PR adds first-class support for Vercel AI SDK v5+ and Mastra SDK spans by normalizing their `aisdk.*` and operation-prefixed attributes into the existing `gen_ai.*`/`ai.*` keys.
Confidence Score: 4/5 — safe to merge; all issues are non-blocking style/clarity concerns with no runtime impact.

Well-implemented feature with comprehensive test coverage (6+ new test cases), non-destructive insert-if-absent normalization semantics, and correct handling of all key edge cases. The only remaining items are a minor inconsistency between detection and normalization scope in `detect_aisdk_operation_prefix()` (unlikely to matter in practice, since real LLM calls with tool calls always include token usage data) and a backwards comment in `utils.rs`.

Important Files Changed

app-server/src/traces/spans.rs — review the `detect_aisdk_operation_prefix()` detection-criteria gap for `{prefix}.response.toolCalls` and `{prefix}.response.object`.
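To make the noted gap concrete, here is a minimal sketch of a prefix detector. The criterion shown (probing for a `{prefix}.usage.*` key) and the plain `HashMap` signature are assumptions for illustration, not the PR's actual implementation; the point is that a span carrying only `{prefix}.response.toolCalls` would slip past such a check even though normalization handles that key.

```rust
use std::collections::HashMap;

/// Sketch (assumed criterion): report which operation prefix a span uses by
/// probing for its token-usage key. A span with only
/// `{prefix}.response.toolCalls` is NOT detected here — the gap the review
/// describes.
fn detect_aisdk_operation_prefix(attrs: &HashMap<String, String>) -> Option<&'static str> {
    const PREFIXES: [&'static str; 5] =
        ["stream", "generateText", "streamText", "generateObject", "streamObject"];
    PREFIXES
        .into_iter()
        .find(|prefix| attrs.contains_key(&format!("{prefix}.usage.inputTokens")))
}
```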
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
  A["Raw OTel span\n(aisdk.* attributes)"] --> B
  B["from_otel_span()\nspan_type() checks aisdk.* prefix\n→ SpanType::LLM"] --> C
  C["parse_and_enrich_attributes()"] --> D
  D["normalize_aisdk_attributes()"] --> E & F
  E["aisdk.model.id\n→ gen_ai.request.model\n→ ai.model.id"] --> G
  F["aisdk.model.provider\n→ ai.model.provider\n→ gen_ai.system\n(lowercased 1st segment)"] --> G
  G["detect_aisdk_operation_prefix()\nstream / generateText / streamText\ngenerateObject / streamObject"] --> H
  H{Prefix found?} -->|Yes| I
  H -->|No| J
  I["Remap usage/prompt/response\n{prefix}.usage.inputTokens → gen_ai.usage.input_tokens\n{prefix}.prompt.messages → ai.prompt.messages\n{prefix}.response.toolCalls → ai.response.toolCalls"] --> J
  J["Re-evaluate span_type()\nafter gen_ai.system is populated"] --> K
  K["convert_span_to_provider_format()\nis_ai_sdk_llm_span():\nai.operationId OR ai.model.provider\nOR aisdk.model.provider\n→ skip Langchain conversion"] --> L
  L["tranform_model_and_provider()\ngateway → vercel_ai_gateway"]
```
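The insert-if-absent normalization in the flowchart's D–F steps can be sketched as below. Function and key names follow the PR summary; the plain `HashMap<String, String>` signature is an assumption standing in for the real span-attribute type.

```rust
use std::collections::HashMap;

// Copy an `aisdk.*` value to its standard key only when the standard key is
// not already present (non-destructive, insert-if-absent).
fn normalize_key(attrs: &mut HashMap<String, String>, source: &str, target: &str) {
    if !attrs.contains_key(target) {
        if let Some(v) = attrs.get(source).cloned() {
            attrs.insert(target.to_string(), v);
        }
    }
}

fn normalize_aisdk_attributes(attrs: &mut HashMap<String, String>) {
    // aisdk.model.id → gen_ai.request.model and ai.model.id
    normalize_key(attrs, "aisdk.model.id", "gen_ai.request.model");
    normalize_key(attrs, "aisdk.model.id", "ai.model.id");
    // aisdk.model.provider → ai.model.provider
    normalize_key(attrs, "aisdk.model.provider", "ai.model.provider");
    // gen_ai.system gets the lowercased first dot-separated segment of the
    // provider string, e.g. "OpenAI.chat" → "openai".
    if !attrs.contains_key("gen_ai.system") {
        if let Some(provider) = attrs.get("aisdk.model.provider").cloned() {
            let system = provider.split('.').next().unwrap_or("").to_lowercase();
            attrs.insert("gen_ai.system".to_string(), system);
        }
    }
}
```

Because each write is guarded by `contains_key`, attributes already set by an earlier instrumentation layer are never overwritten.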
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
@laminar-coding-agent the normalized spans now have …
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit f9b4cfe.
The set_usage method already wrote gen_ai.usage.cost (total cost) and individual token/cost attributes, but was missing the total token count that frontend functionality depends on. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
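The fix described in that commit message can be sketched as follows. The free-function signature and the `HashMap<String, i64>` attribute map are illustrative assumptions; the real `set_usage` is a method on the span type.

```rust
use std::collections::HashMap;

// Sketch: alongside the per-direction token counts, also write the combined
// total that the frontend reads (the previously missing attribute).
fn set_usage(attrs: &mut HashMap<String, i64>, input_tokens: i64, output_tokens: i64) {
    attrs.insert("gen_ai.usage.input_tokens".to_string(), input_tokens);
    attrs.insert("gen_ai.usage.output_tokens".to_string(), output_tokens);
    // Previously missing: llm.usage.total_tokens.
    attrs.insert("llm.usage.total_tokens".to_string(), input_tokens + output_tokens);
}
```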
…e reduction

Expand `should_keep_attribute` to filter out all operation-prefixed attributes that have been normalized to standard `ai.*`/`gen_ai.*` keys, not just `.prompt.messages`. This covers `.response.text`, `.response.toolCalls`, `.response.object`, and `.usage.*` suffixes as well. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
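A minimal sketch of the expanded filter that commit describes. The prefix list and dropped suffixes come from the PR text; the `&str -> bool` signature is an assumption.

```rust
/// Sketch: drop operation-prefixed attributes whose values were already
/// normalized to standard `ai.*` / `gen_ai.*` keys, keeping everything else.
fn should_keep_attribute(key: &str) -> bool {
    const PREFIXES: [&str; 5] =
        ["stream", "generateText", "streamText", "generateObject", "streamObject"];
    const DROPPED_SUFFIXES: [&str; 4] =
        [".prompt.messages", ".response.text", ".response.toolCalls", ".response.object"];
    for prefix in PREFIXES {
        if let Some(rest) = key.strip_prefix(prefix) {
            // Drop exact normalized suffixes and the whole `.usage.*` family.
            if DROPPED_SUFFIXES.contains(&rest) || rest.starts_with(".usage.") {
                return false;
            }
        }
    }
    true
}
```

Note that standard keys such as `gen_ai.usage.input_tokens` never match an operation prefix, so they always survive the filter.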

Note
Medium Risk
Modifies LLM span classification and attribute extraction by normalizing new `aisdk.*` and operation-prefixed keys into existing `gen_ai.*`/`ai.*` fields; mistakes could impact token/cost calculation and stored trace payloads. Added tests reduce but don't eliminate risk across diverse telemetry inputs.

Overview
Adds support for newer Vercel AI SDK / Mastra telemetry by normalizing `aisdk.*` and operation-prefixed attributes (e.g. `stream.*`, `generateText.*`) into the existing `gen_ai.*`/`ai.*` keys so the current input/output and usage extraction pipeline works.

Updates LLM detection and storage behavior: spans can be reclassified to LLM after normalization, `llm.usage.total_tokens` is now written when setting usage, and the original operation-prefixed attributes are dropped via `should_keep_attribute` to reduce stored payload size. Cost calculation now warns when `gen_ai.system` exists and tokens are present but no model name is available, and provider detection for AI SDK spans also recognizes `aisdk.model.provider`.

Reviewed by Cursor Bugbot for commit 46df0cf. Bugbot is set up for automated code reviews on this repo.
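The detection and provider-mapping behavior summarized above can be sketched as below. `is_ai_sdk_llm_span()` follows the criteria listed in the flowchart; `transform_provider()` is a simplified, hypothetical stand-in for the PR's `tranform_model_and_provider()` showing only the `gateway` → `vercel_ai_gateway` mapping. Both signatures are assumptions.

```rust
use std::collections::HashMap;

// Sketch: a span is treated as an AI SDK LLM span if any of these keys are
// present (including the newly recognized aisdk.model.provider), in which
// case Langchain conversion is skipped.
fn is_ai_sdk_llm_span(attrs: &HashMap<String, String>) -> bool {
    attrs.contains_key("ai.operationId")
        || attrs.contains_key("ai.model.provider")
        || attrs.contains_key("aisdk.model.provider")
}

// Sketch: map the AI SDK's generic "gateway" provider to the more specific
// vercel_ai_gateway label; all other providers pass through unchanged.
fn transform_provider(provider: &str) -> &str {
    match provider {
        "gateway" => "vercel_ai_gateway",
        other => other,
    }
}
```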