
feat: add support for vercel/mastra sdks (#1548) #1595

Open
dinmukhamedm wants to merge 5 commits into dev from fix/mastra-attrs

Conversation

Member

@dinmukhamedm dinmukhamedm commented Apr 7, 2026

Note

Medium Risk
Modifies LLM span classification and attribute extraction by normalizing new aisdk.* and operation-prefixed keys into existing gen_ai.*/ai.* fields; mistakes could impact token/cost calculation and stored trace payloads. Added tests reduce but don’t eliminate risk across diverse telemetry inputs.

Overview
Adds support for newer Vercel AI SDK / Mastra telemetry by normalizing aisdk.* and operation-prefixed attributes (e.g. stream.*, generateText.*) into the existing gen_ai.*/ai.* keys so the current input/output and usage extraction pipeline works.

Updates LLM detection and storage behavior: spans can be reclassified to LLM after normalization, llm.usage.total_tokens is now written when setting usage, and the original operation-prefixed attributes are dropped via should_keep_attribute to reduce stored payload size. Cost calculation now warns when gen_ai.system exists and tokens are present but no model name is available, and provider detection for AI SDK spans also recognizes aisdk.model.provider.
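
A minimal Rust sketch of the insert-if-absent normalization described above (the function name mirrors the PR, but the signature, string-valued attribute map, and segment-splitting details are assumptions, not the actual implementation in spans.rs):

```rust
use std::collections::HashMap;

// Insert only when the key is not already present, so existing
// gen_ai.* values written by older SDK versions are never clobbered.
fn insert_if_absent(attrs: &mut HashMap<String, String>, key: &str, value: String) {
    attrs.entry(key.to_string()).or_insert(value);
}

// Hypothetical sketch: map aisdk.* keys onto the gen_ai.* keys the
// existing extraction pipeline understands.
fn normalize_aisdk_attributes(attrs: &mut HashMap<String, String>) {
    // aisdk.model.id -> gen_ai.request.model
    if let Some(model) = attrs.get("aisdk.model.id").cloned() {
        insert_if_absent(attrs, "gen_ai.request.model", model);
    }
    // aisdk.model.provider -> gen_ai.system, lowercased first dot-segment
    // (e.g. "OpenAI.chat" -> "openai")
    if let Some(provider) = attrs.get("aisdk.model.provider").cloned() {
        let system = provider
            .split('.')
            .next()
            .unwrap_or(&provider)
            .to_lowercase();
        insert_if_absent(attrs, "gen_ai.system", system);
    }
}

fn main() {
    let mut attrs = HashMap::new();
    attrs.insert("aisdk.model.provider".to_string(), "openai.chat".to_string());
    attrs.insert("aisdk.model.id".to_string(), "gpt-4o".to_string());
    normalize_aisdk_attributes(&mut attrs);
    assert_eq!(attrs.get("gen_ai.system").map(String::as_str), Some("openai"));
    assert_eq!(attrs.get("gen_ai.request.model").map(String::as_str), Some("gpt-4o"));
}
```

After this normalization the span can be re-evaluated for LLM classification, since gen_ai.system may now be populated where it was not before.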

Reviewed by Cursor Bugbot for commit 46df0cf.

Signed-off-by: PranshuSrivastava <iampranshu24@gmail.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
@dinmukhamedm dinmukhamedm marked this pull request as ready for review April 7, 2026 19:49
Contributor

greptile-apps bot commented Apr 7, 2026

Greptile Summary

This PR adds first-class support for Vercel AI SDK v5+ and Mastra SDK spans by normalizing their aisdk.*, stream.*, and generateText.* attribute namespaces into the existing gen_ai.* standard that the rest of Laminar's pipeline already understands. The implementation is clean, non-destructive (all normalization uses insert-if-absent semantics), and backed by thorough unit tests covering provider lowercasing, no-op behavior for non-aisdk spans, and multiple operation prefixes.

  • normalize_aisdk_attributes() maps aisdk.model.id → gen_ai.request.model, normalizes aisdk.model.provider → gen_ai.system (lowercased first segment, e.g. "openai.chat" → "openai"), and remaps operation-prefixed usage/prompt/response attributes to their ai.*/gen_ai.* equivalents
  • detect_aisdk_operation_prefix() auto-detects which prefix (stream/generateText/streamText/generateObject/streamObject) is in use — but currently omits {prefix}.response.toolCalls and {prefix}.response.object from its detection criteria even though it normalizes them
  • is_ai_sdk_llm_span() extended with aisdk.model.provider check to guard Mastra spans from the Langchain-specific conversion path
  • gateway → vercel_ai_gateway provider transform added for LiteLLM gateway spans; comment wording in this case is slightly misleading
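
The prefix detection and remapping in the first two bullets could be sketched like this (hypothetical Rust; the remap table is an illustrative subset, and the detection criteria are assumed, intentionally reflecting the narrower scope noted above):

```rust
use std::collections::HashMap;

const PREFIXES: [&str; 5] =
    ["stream", "generateText", "streamText", "generateObject", "streamObject"];

// Assumed criteria: a prefix is "in use" if any of its usage/prompt/response.text
// keys are present. Note this mirrors the gap flagged above: response.toolCalls
// and response.object are remapped but not used for detection.
fn detect_aisdk_operation_prefix(attrs: &HashMap<String, String>) -> Option<&'static str> {
    PREFIXES.iter().copied().find(|p| {
        attrs.contains_key(&format!("{p}.usage.inputTokens"))
            || attrs.contains_key(&format!("{p}.prompt.messages"))
            || attrs.contains_key(&format!("{p}.response.text"))
    })
}

// Move operation-prefixed attributes to standard keys (insert-if-absent),
// dropping the originals so they are not stored twice.
fn remap_prefixed(attrs: &mut HashMap<String, String>, prefix: &str) {
    let mappings = [
        ("usage.inputTokens", "gen_ai.usage.input_tokens"),
        ("usage.outputTokens", "gen_ai.usage.output_tokens"),
        ("prompt.messages", "ai.prompt.messages"),
        ("response.toolCalls", "ai.response.toolCalls"),
    ];
    for (from, to) in mappings {
        if let Some(v) = attrs.remove(&format!("{prefix}.{from}")) {
            attrs.entry(to.to_string()).or_insert(v);
        }
    }
}

fn main() {
    let mut attrs: HashMap<String, String> = HashMap::new();
    attrs.insert("stream.usage.inputTokens".into(), "7".into());
    if let Some(prefix) = detect_aisdk_operation_prefix(&attrs) {
        remap_prefixed(&mut attrs, prefix);
    }
    assert_eq!(attrs.get("gen_ai.usage.input_tokens").map(String::as_str), Some("7"));
}
```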

Confidence Score: 4/5

Safe to merge; all issues are non-blocking style/clarity concerns with no runtime impact

Well-implemented feature with comprehensive test coverage (6+ new test cases), non-destructive insert-if-absent normalization semantics, and correct handling of all key edge cases. The only remaining items are: a minor inconsistency between detection and normalization scope in detect_aisdk_operation_prefix() (unlikely to matter in practice since real LLM calls with tool calls always include token usage data), and a backwards comment in utils.rs.

app-server/src/traces/spans.rs — review the detect_aisdk_operation_prefix() detection criteria gap for {prefix}.response.toolCalls and {prefix}.response.object

Important Files Changed

  • app-server/src/traces/spans.rs — Core aisdk normalization: adds normalize_aisdk_attributes(), detect_aisdk_operation_prefix(), and insert/normalize helpers; updates span_type() and should_keep_attribute() — well-tested, with a minor detection-criteria gap noted
  • app-server/src/traces/span_attributes.rs — Adds AISDK_MODEL_ID and AISDK_MODEL_PROVIDER constants for the new aisdk.* attribute namespace — minimal and correct
  • app-server/src/traces/provider/mod.rs — Extends is_ai_sdk_llm_span() to also check aisdk.model.provider, correctly preventing Langchain conversion for Mastra/newer Vercel SDK spans
  • app-server/src/traces/utils.rs — Adds gateway→vercel_ai_gateway provider transform for LiteLLM compatibility; existing comment wording is slightly backwards

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["Raw OTel span\n(aisdk.* attributes)"] --> B
    B["from_otel_span()\nspan_type() checks aisdk.* prefix\n→ SpanType::LLM"] --> C
    C["parse_and_enrich_attributes()"] --> D
    D["normalize_aisdk_attributes()"] --> E & F
    E["aisdk.model.id\n→ gen_ai.request.model\n→ ai.model.id"] --> G
    F["aisdk.model.provider\n→ ai.model.provider\n→ gen_ai.system\n   (lowercased 1st segment)"] --> G
    G["detect_aisdk_operation_prefix()\nstream / generateText / streamText\ngenerateObject / streamObject"] --> H
    H{Prefix found?} -->|Yes| I
    H -->|No| J
    I["Remap usage/prompt/response\n{prefix}.usage.inputTokens → gen_ai.usage.input_tokens\n{prefix}.prompt.messages → ai.prompt.messages\n{prefix}.response.toolCalls → ai.response.toolCalls"] --> J
    J["Re-evaluate span_type()\nafter gen_ai.system is populated"] --> K
    K["convert_span_to_provider_format()\nis_ai_sdk_llm_span():\nai.operationId OR ai.model.provider\nOR aisdk.model.provider\n→ skip Langchain conversion"] --> L
    L["tranform_model_and_provider()\ngateway → vercel_ai_gateway"]

Comments Outside Diff (1)

  1. app-server/src/traces/utils.rs, line 351-352 (link)

    P2 Comment describes the mapping backwards

    The comment reads "LiteLLM stores "gateway" as "vercel_ai_gateway"", which implies LiteLLM's internal label is "vercel_ai_gateway". The reality is the opposite: LiteLLM sends "gateway" as the provider name, and this code remaps it to "vercel_ai_gateway" for model-pricing lookup.
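
A minimal sketch of the corrected mapping direction (the function name and signature here are assumptions; the actual transform lives in app-server/src/traces/utils.rs):

```rust
// LiteLLM reports the Vercel AI Gateway provider simply as "gateway";
// remap it to the name the model-pricing lookup expects. All other
// provider names pass through unchanged.
fn transform_provider(provider: &str) -> &str {
    match provider {
        "gateway" => "vercel_ai_gateway",
        other => other,
    }
}

fn main() {
    assert_eq!(transform_provider("gateway"), "vercel_ai_gateway");
    assert_eq!(transform_provider("anthropic"), "anthropic");
}
```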

Reviews (1): Last reviewed commit: "remove redundant warnings"

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
@dinmukhamedm
Member Author

@laminar-coding-agent the normalized spans now have gen_ai.usage.* attributes for input_cost, output_cost, input_tokens, and output_tokens. It turns out there is frontend functionality that depends on the totals of these values. Let's make sure that during normalization, total cost is reflected as gen_ai.usage.cost and total tokens as llm.usage.total_tokens.
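
A sketch of the requested behavior, assuming the attribute keys discussed in this thread (gen_ai.usage.input_tokens/output_tokens and their *_cost counterparts); this is not the actual set_usage implementation:

```rust
use std::collections::HashMap;

// Sum the per-direction usage values into the total attributes the
// frontend reads: llm.usage.total_tokens and gen_ai.usage.cost.
fn reflect_totals(attrs: &mut HashMap<String, f64>) {
    let get = |m: &HashMap<String, f64>, k: &str| m.get(k).copied().unwrap_or(0.0);
    let total_tokens =
        get(attrs, "gen_ai.usage.input_tokens") + get(attrs, "gen_ai.usage.output_tokens");
    let total_cost =
        get(attrs, "gen_ai.usage.input_cost") + get(attrs, "gen_ai.usage.output_cost");
    // Insert-if-absent, matching the normalization semantics elsewhere in the PR.
    attrs.entry("llm.usage.total_tokens".to_string()).or_insert(total_tokens);
    attrs.entry("gen_ai.usage.cost".to_string()).or_insert(total_cost);
}

fn main() {
    let mut attrs: HashMap<String, f64> = HashMap::new();
    attrs.insert("gen_ai.usage.input_tokens".into(), 10.0);
    attrs.insert("gen_ai.usage.output_tokens".into(), 5.0);
    reflect_totals(&mut attrs);
    assert_eq!(attrs.get("llm.usage.total_tokens"), Some(&15.0));
}
```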

Contributor

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.



Reviewed by Cursor Bugbot for commit f9b4cfe.

The set_usage method already wrote gen_ai.usage.cost (total cost) and
individual token/cost attributes, but was missing the total token count
that frontend functionality depends on.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
2 out of 3 committers have signed the CLA.

✅ PranshuSrivastava
✅ dinmukhamedm
❌ greptile-apps[bot]

…e reduction

Expand should_keep_attribute to filter out all operation-prefixed attributes
that have been normalized to standard ai.*/gen_ai.* keys, not just
.prompt.messages. This covers .response.text, .response.toolCalls,
.response.object, and .usage.* suffixes as well.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
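
The expanded filter described in this commit message could look roughly like this (hypothetical Rust; the prefix and suffix lists come from the message, everything else is assumed):

```rust
const OP_PREFIXES: [&str; 5] =
    ["stream", "generateText", "streamText", "generateObject", "streamObject"];

// Suffixes whose operation-prefixed originals have been normalized to
// standard ai.*/gen_ai.* keys and can therefore be dropped from storage.
const DROPPED_SUFFIXES: [&str; 4] = [
    ".prompt.messages",
    ".response.text",
    ".response.toolCalls",
    ".response.object",
];

// Keep an attribute unless it is an operation-prefixed key that has
// already been remapped (any .usage.* key, or one of the listed suffixes).
fn should_keep_attribute(key: &str) -> bool {
    let dropped = OP_PREFIXES.iter().any(|p| {
        key.starts_with(&format!("{p}.usage."))
            || DROPPED_SUFFIXES.iter().any(|s| key == format!("{p}{s}"))
    });
    !dropped
}

fn main() {
    assert!(!should_keep_attribute("generateText.prompt.messages"));
    assert!(!should_keep_attribute("stream.usage.outputTokens"));
    assert!(should_keep_attribute("gen_ai.usage.input_tokens"));
}
```

The net effect is payload-size reduction: the normalized copies survive, the redundant originals do not.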

3 participants