Conversation

@logaretm

No description provided.

nicohrubec and others added 30 commits November 11, 2025 14:41
…ages (#18157)

This PR adds [truncation support for LangChain integration request
messages](#18018). All messages already get normalized to arrays of
messages, so no case distinction for strings is needed here.

Adds tests to verify behavior for 1. simple string inputs and 2.
conversations in the form of arrays of strings.

Closes #18018
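
For illustration, size-based truncation over a normalized message array could look roughly like this (the budget and keep-newest strategy are assumptions, not the SDK's exact values):

```ts
interface GenAiMessage {
  role: string;
  content: string;
}

// Keep the most recent messages that still fit into the byte budget.
function truncateMessages(messages: GenAiMessage[], maxBytes: number): GenAiMessage[] {
  const kept: GenAiMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const size = JSON.stringify(messages[i]).length;
    if (used + size > maxBytes) {
      break;
    }
    kept.unshift(messages[i]);
    used += size;
  }
  return kept;
}
```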
Fixes faulty test assertions where we asserted that certain properties
were _not_ present in an object but used `toMatchObject` to do so.
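
For reference, a hedged example of the broken vs. corrected pattern (the attribute keys are illustrative):

```ts
import { expect } from 'vitest';

const spanData: Record<string, unknown> = { 'gen_ai.operation.name': 'chat' };

// Faulty pattern: `toMatchObject` ignores unlisted properties, so it can never
// prove that 'gen_ai.request.messages' is absent.
expect(spanData).toMatchObject({ 'gen_ai.operation.name': 'chat' });

// Corrected pattern: assert absence explicitly.
expect(spanData).not.toHaveProperty('gen_ai.request.messages');
// or, combined with other property checks:
expect(spanData).toEqual(expect.not.objectContaining({ 'gen_ai.request.messages': expect.anything() }));
```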
…18170)

This PR adds two utility functions for testing the profile envelope:
`validateProfilePayloadMetadata` and `validateProfile`. As more tests
are going to be added, I don't want to copy-paste the same tests over
and over.


Part of #17279
[Gitflow] Merge master into develop
…captureConsoleIntegration` (#18096)

This patch creates the synthetic exception directly within the captureConsole
handler, so that we minimize the number of Sentry stack frames in the
stack trace. It also adjusts the `Client::captureMessage` method to
favor an already-provided `syntheticException` over the one it would
otherwise create itself.
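
A minimal sketch of the idea (illustrative, not the SDK's exact internals):

```ts
import type { Client, SeverityLevel } from '@sentry/core';

// Create the synthetic exception at the console hook itself, so the captured
// stack trace starts here instead of several Sentry-internal frames deeper.
function onConsoleCall(client: Client, level: SeverityLevel, args: unknown[]): void {
  const syntheticException = new Error(); // stack is captured right here
  client.captureMessage(String(args[0]), level, {
    originalException: args[0],
    // `Client.captureMessage` now prefers this over creating its own exception
    syntheticException,
  });
}
```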
We needed the override because version 10.0.1 didn't have a valid
package.json (embroider-build/embroider#2609).

They released version 10.0.2 now.
Two changes:
1. Reduce bundle size slightly by optimizing `setTag` (+ adding some
more tests around setTag(s))
2. Adjust the integration test message since we no longer classify the 
SUT behaviour as a bug
…18172)

This PR attempts to fix #18001 by not wrapping the middleware files when
Next.js 16 is the current version and standalone output mode is used,
which is the problematic scenario.

Investigation:

- Next.js renames `proxy` to `middleware` under the hood.
- Wrapping the middleware produces a `proxy.js` entry in
`middleware.js.nft.json` that wouldn't be there otherwise; if we don't
wrap it, that entry doesn't get produced. So it seems like `@vercel/nft`
is somehow adding the `proxy` file as a dependency of itself, which then
fails to copy to the output directory because it was already copied and
renamed to `proxy.js`, or at least that is what I'm guessing is happening.
This came up while working on improvements for React Router wildcard
routes. It looks like successful browser `idleSpans` are currently
reported with `unknown` status.
…ple times (#17972)

When using higher-level integrations that wrap underlying libraries,
both the wrapper integration and the underlying library integration can
instrument the same API calls, resulting in duplicate spans. This is
particularly problematic for:

- LangChain wrapping AI providers (OpenAI, Anthropic, Google GenAI)
- Any future providers that wrap other providers

We expose 3 internal methods

```js
_INTERNAL_skipAiProviderWrapping(providers: string[])
_INTERNAL_shouldSkipAiProviderWrapping(provider: string)
_INTERNAL_clearAiProviderSkips()
```

These methods let us bail out of instrumenting providers when they are on the skip list (see the example below). They are internal, not meant for public consumers, and may be changed or removed in the future.
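
An illustrative flow (internal API, subject to change; the provider names are just examples):

```ts
// The wrapping integration (e.g. LangChain) registers the providers it already
// instruments itself:
_INTERNAL_skipAiProviderWrapping(['openai', 'anthropic']);

// A provider integration checks the skip list before instrumenting, so each
// call produces only one span:
if (!_INTERNAL_shouldSkipAiProviderWrapping('openai')) {
  // ...instrument the OpenAI client here...
}

// e.g. in test teardown:
_INTERNAL_clearAiProviderSkips();
```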
---------

Co-authored-by: Andrei Borza <[email protected]>
As discussed, this moves the AI integrations from `core/utils` to `core/tracing`.
…18187)

This PR renames and moves the profiler class, as the class will be used
for the `trace` and `manual` lifecycles in the future (this prevents
large git diffs later).

Part of #17279
…8191)

I guess this got through CI because we test the latest Node 18 rather than
18.0.0.

This breaks [some supported Electron
versions](https://github.com/getsentry/sentry-electron/actions/runs/19306230917/job/55215745023)
that use >18.0.0 but <18.19.0.

This won't have impacted almost anyone else because OTel requires
18.19.0!

```
[App] [    Main] App threw an error during load
[App] [    Main] file:///home/runner/work/sentry-electron/sentry-electron/test/e2e/dist/error-after-ready/node_modules/@sentry/node-core/build/esm/integrations/pino.js:1
[App] [    Main] import { tracingChannel } from 'node:diagnostics_channel';
[App] [    Main]          ^^^^^^^^^^^^^^
[App] [    Main] SyntaxError: The requested module 'node:diagnostics_channel' does not provide an export named 'tracingChannel'
[App] [    Main]     at ModuleJob._instantiate (node:internal/modules/esm/module_job:124:21)
[App] [    Main]     at async ModuleJob.run (node:internal/modules/esm/module_job:190:5)
[App] [    Main] A JavaScript error occurred in the main process
```
With this PR, users can set their minimum replay duration to a maximum of
50s; previously this was capped at 15s.

We cannot bump this value further, as that would lead to dropping
buffered replays (we keep at most 60s in memory at this point).

closes #18109
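
For example, assuming the existing `minReplayDuration` option (value in milliseconds, clamped to the cap):

```ts
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: '__DSN__',
  integrations: [
    Sentry.replayIntegration({
      // previously clamped to 15_000 ms; with this change values up to 50_000 ms are honored
      minReplayDuration: 50_000,
    }),
  ],
});
```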

---------

Co-authored-by: Andrei <[email protected]>
…ion (#18195)

## Problem

The Spotlight configuration logic had a precedence bug: when
`spotlight: true` was set in the config AND the `SENTRY_SPOTLIGHT`
environment variable contained a URL string, the SDK would incorrectly
use `true` instead of the URL from the environment variable.

According to the [Spotlight
specification](https://raw.githubusercontent.com/getsentry/sentry-docs/b38e3b307f900665a348f855559ac1d1c58914cc/develop-docs/sdk/expected-features/spotlight.mdx),
when `spotlight: true` is set and the env var contains a URL, the URL
from the env var should be used to allow developers to override the
Spotlight URL via environment variables.

**Previous behavior:**
```typescript
// Config: spotlight: true
// Env: SENTRY_SPOTLIGHT=http://custom:3000/stream
// Result: spotlight = true ❌ (incorrect)
```

**Expected behavior per spec:**
```typescript
// Config: spotlight: true
// Env: SENTRY_SPOTLIGHT=http://custom:3000/stream
// Result: spotlight = "http://custom:3000/stream" ✅ (correct)
```

## Solution

Fixed the precedence logic in `getClientOptions()` to properly implement
the specification:

1. `spotlight: false` → Always disabled (overrides env var)
2. `spotlight: string` → Uses the config URL (ignores env var)
3. `spotlight: true` + env var URL → **Uses the env var URL** (the bug
fix)
4. `spotlight: true` + env var truthy → Uses default URL
5. No config + env var → Parses and uses env var

The implementation reuses the existing `envToBool()` utility to avoid
code duplication.
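
A simplified sketch of that precedence order (illustrative only, not the actual `getClientOptions()` code):

```ts
function resolveSpotlight(
  configValue: boolean | string | undefined,
  envValue: string | undefined,
): boolean | string {
  // 1. Explicit opt-out in the config always wins.
  if (configValue === false) {
    return false;
  }
  // 2. A URL in the config ignores the env var.
  if (typeof configValue === 'string') {
    return configValue;
  }
  const envUrl = envValue && envValue.includes('://') ? envValue : undefined;
  // 3. `true` + env URL uses the env URL; 4. `true` + truthy env uses the default URL.
  if (configValue === true) {
    return envUrl ?? true;
  }
  // 5. No config value: the env var alone decides (URL or boolean-ish value).
  return envUrl ?? ['1', 'true', 'yes'].includes((envValue ?? '').toLowerCase());
}
```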

## Changes

- Fixed Spotlight precedence logic in
`packages/node-core/src/sdk/index.ts`
- Added 12 comprehensive test cases covering all precedence scenarios in
`packages/node-core/test/sdk/init.test.ts`
- Updated CHANGELOG.md

## Test Coverage

The new tests cover:
- ✅ Env var only: truthy values, falsy values, URL strings
- ✅ Config only: `true`, `false`, URL string
- ✅ Precedence: config `false` overrides env var (URL, truthy, falsy)
- ✅ Precedence: config URL overrides env var
- ✅ Precedence: config `true` + env var URL uses env var URL (the fix)
- ✅ Precedence: config `true` + env var truthy uses default URL

## Related

- Original Spotlight implementation: #13325
- Spotlight specification: https://spotlightjs.com/

---------

Co-authored-by: Cursor Agent <[email protected]>
…8194)

While investigating [this
ticket](https://linear.app/getsentry/issue/JS-657/available-tools-json-should-be-a-stringified-json-array-of-objects-not)
I noticed that available tools are sent as a nested array instead of a
flat array in Google GenAI, which seems like a bug to me.

The format I would expect and how we do it in other integrations is:
[{tool-definition}, {tool-definition}]

What we actually send atm is:
[[{tool-definition}], [{tool-definition}]]

This PR fixes this to instead send a flat list of tool definitions.
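
Illustratively, the fix boils down to flattening the per-entry arrays before serializing:

```ts
// Before: [[{ name: 'get_weather' }], [{ name: 'search' }]]
// After:  '[{"name":"get_weather"},{"name":"search"}]'
function serializeAvailableTools(toolEntries: Array<Array<Record<string, unknown>>>): string {
  return JSON.stringify(toolEntries.flat());
}
```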
[Linear
Ticket](https://linear.app/getsentry/issue/JS-657/available-tools-json-should-be-a-stringified-json-array-of-objects-not)

The available tools sent from our SDKs should generally be in the format
of a stringified array of objects (where an object stores information
about a particular tool). This is true for all AI SDKs except Vercel,
where we send an array of strings. This PR fixes this by parsing the
available tool array and converting the whole array into a proper string
representation.
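
A rough sketch of that conversion (input values are illustrative):

```ts
// Input: an array of per-tool JSON strings, e.g. ['{"name":"get_weather"}', '{"name":"search"}']
// Output: one stringified array of objects: '[{"name":"get_weather"},{"name":"search"}]'
function normalizeAvailableTools(toolStrings: string[]): string {
  return JSON.stringify(toolStrings.map(tool => JSON.parse(tool)));
}
```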
…igh `normalizeDepth` (#18206)

Fixes #18203

### Problem

When using `normalizeDepth: 10` with `captureConsoleIntegration`
enabled, Vue VNodes in console arguments would trigger recursive warning
spam. Accessing VNode properties during normalization would trigger
Vue's reactive getters, which emit console warnings. These warnings
would then be captured and normalized again, creating a recursive loop
that could generate hundreds of warnings.

Note that this only happens in `dev` mode

### Solution

Changed `isVueViewModel()` to detect Vue 3 VNodes (`__v_isVNode: true`)
in addition to Vue 2/3 ViewModels. VNodes are now identified early in
the normalization process and stringified as `[VueVNode]` before their
properties are accessed, preventing the recursive warning loop.

Some of the properties on the `VNode` can also be reactive, so accessing
them during normalization can incorrectly register them as dependencies
of a `watchEffect` or a render function.
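
A simplified sketch of the detection (the real check lives in `isVueViewModel()`):

```ts
function isVueViewModelOrVNode(value: unknown): boolean {
  if (typeof value !== 'object' || value === null) {
    return false;
  }
  return (
    // Vue 2/3 component instances
    '_isVue' in value ||
    '__isVue' in value ||
    // Vue 3 VNodes, flagged via `__v_isVNode` (the addition in this PR)
    '__v_isVNode' in value
  );
}
```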

### Changes

- **`packages/core/src/utils/is.ts`**: Added `__v_isVNode` check to
`isVueViewModel()`.
- **`packages/core/src/utils/normalize.ts`**: Distinguish VNodes from
ViewModels in output (`[VueVNode]` vs `[VueViewModel]`).
- **Tests**: Added comprehensive unit tests for Vue object detection and
integration test that verifies no property access occurs during VNode
normalization.

---

I couldn't reproduce this exactly in a test with a real Vue component,
but I verified that it fixes the reproduction example.

The before and after of the captured logs:

Before:

<img width="1106" height="1137" alt="CleanShot 2025-11-14 at 15 46 30"
src="https://github.com/user-attachments/assets/435dbb04-ba3c-430b-8c39-d886f92072e8"
/>


After:

<img width="908" height="768" alt="CleanShot 2025-11-14 at 15 45 15"
src="https://github.com/user-attachments/assets/e7d8cca2-a0e1-48bb-9f95-3a39d2164d21"
/>


As a Vue developer, I don't think the information lost here would have
helped debug anything.
The ErrorBoundary exported in the SDK only works on the client and is
not intended to be used.

Use React Router's error boundary instead:
https://docs.sentry.io/platforms/javascript/guides/react-router/#report-errors-from-error-boundaries.
…18207)

Looks like we swallowed the log that triggers a flush when
`MAX_LOG_BUFFER_SIZE` is surpassed.

Test demonstrating issue:
[5697b7d](5697b7d)
Fix:
[f7a4d8b](f7a4d8b)

Related metrics pr: #18212
v9 backport: #18213
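
A minimal sketch of the corrected behaviour (illustrative, not the SDK's actual buffer code; the buffer size is assumed):

```ts
const MAX_LOG_BUFFER_SIZE = 100; // assumed value for illustration

type SerializedLog = Record<string, unknown>;
const buffer: SerializedLog[] = [];

function addLog(log: SerializedLog, flush: (logs: SerializedLog[]) => void): void {
  // Push first: the log that pushes the buffer over the limit must be part of
  // the flushed batch instead of being swallowed.
  buffer.push(log);
  if (buffer.length >= MAX_LOG_BUFFER_SIZE) {
    flush(buffer.splice(0, buffer.length));
  }
}
```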
…owed (#18212)

Looks like we swallowed the metric that triggers a flush when
`MAX_METRIC_BUFFER_SIZE` is surpassed.

Test demonstrating issue:
[f0737fa](f0737fa)
Fix:
[1a4e02a](1a4e02a)

Related logs pr: #18207
…ous logging (#18211)

The flush timeout was being reset on every incoming log, preventing
flushes when logs arrived continuously. Now, the timer starts on the
first log and won't get reset, ensuring logs flush within the configured
interval.

Fixes #18204, getsentry/sentry-react-native#5378

v9 backport: #18214
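
A minimal sketch of the fixed timer behaviour (the interval value is an assumption):

```ts
let flushTimeout: ReturnType<typeof setTimeout> | undefined;

function onLogBuffered(flush: () => void, intervalMs = 5000): void {
  // Start the timer on the first buffered log only; later logs no longer reset
  // it, so a continuous stream of logs cannot postpone the flush indefinitely.
  if (flushTimeout === undefined) {
    flushTimeout = setTimeout(() => {
      flushTimeout = undefined;
      flush();
    }, intervalMs);
  }
}
```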
We were previously emitting the non-processed metric in the hook; this
changes that behaviour and adds a test to verify it.
This PR adds official support for instrumenting LangGraph StateGraph
operations in Node with Sentry tracing, following OpenTelemetry semantic
conventions for Generative AI.

### Currently supported:
- Node.js - Both agent creation and invocation are instrumented in this PR
- ESM and CJS - Both module systems are supported

The `langGraphIntegration()` accepts the following options:
```
// The integration respects your sendDefaultPii client option
interface LangGraphOptions {
  recordInputs?: boolean;   // Whether to record input messages
  recordOutputs?: boolean;  // Whether to record response text and tool calls
}
```
For example:
```
Sentry.init({
  dsn: '__DSN__',
  sendDefaultPii: false, // Even with PII disabled globally
  integrations: [
    Sentry.langGraphIntegration({
      recordInputs: true,    // Force recording input messages
      recordOutputs: true,   // Force recording response text
    }),
  ],
});
```

### Operations traced:

- gen_ai.create_agent - Spans created when StateGraph.compile() is
called
- gen_ai.invoke_agent - Spans created when CompiledGraph.invoke() is
called
This PR was factored out of another PR to make reviewing easier. The
other PR: #18189

Moved the `spanStart` and `spanEnd` listeners into an extra function
(`_setupTraceLifecycleListeners`) to be able to only call it depending
on the lifecycle (used in another PR).

Part of #17279
As Deno requires a valid URL when calling `new Request`, `example.com`
was used previously, but this caused problems.

This PR changes it to a data URL, as a data URL does not rely on external
dependencies and is a valid URL in Deno.
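
For illustration, a data URL is absolute and self-contained, so Deno's `Request` constructor accepts it without any network dependency:

```ts
// Works in Deno (and browsers/Node 18+), unlike a bare host name such as 'example.com'.
const request = new Request('data:,Hello%20World');
```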

Fixes: #18218

Previous PR for that:
#5630
Moves the prioritization hint to a dropdown to avoid users accidentally removing it

---------

Co-authored-by: Lukas Stracke <[email protected]>
**Summary**

ISR pages will have `sentry-trace` and `baggage` meta tags rendered on
them following the initial render or after the first invalidation,
causing a cached trace id to be present until the next invalidation.

This happens in Next.js 15/16 and both on Turbopack and Webpack.


**What I tried and didn't work**

I found no way to conditionally set/unset/change the values set by the
`clientTraceMetadata` option; I found nothing useful in the unit async
storages, and re-setting the propagation context doesn't work either. The
`clientTraceMetadata` gets called way earlier at the `app-render.tsx`
level, which would call our `SentryPropagator.inject()` then. We cannot
intercept it either because it runs before the page wrapper is called.

The main issue is _timing_:

- Suppressing the tracing wouldn't work either because it is too late.
Ideally we want a way to tell Next to remove those attributes at
runtime, or render them conditionally.
- I also tried setting everything that has to do with `sentry-trace` or
baggage to dummy values as some sort of "marker" for the SDK on the
browser side to drop them, but again it is too late since
`clientTraceMetadata` is picked up too early.


**Implementation**

So I figured out a workaround and decided to do it on the client side by:

- Marking ISR page routes via the route manifest we already have.
- Removing the tags in the `Sentry.init` call, before the browser
integration has had a chance to grab the meta tags (see the sketch below).

Not the cleanest way, but I verified the issue by writing tests for it
and observing that page loads across multiple visits shared the same
trace id. Deleting the meta tags forces a new trace id for every visit,
which is what we want.
Make use of our existing `LRUMap` for the ISR route cache to avoid the
map growing too big.
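
A rough sketch of the client-side workaround (names are illustrative):

```ts
// If the current path is a known ISR route, drop the stale trace meta tags
// before the browser SDK reads them during `Sentry.init`.
function removeStaleTraceMetaTags(isrRoutes: Set<string>): void {
  if (!isrRoutes.has(window.location.pathname)) {
    return;
  }
  document
    .querySelectorAll('meta[name="sentry-trace"], meta[name="baggage"]')
    .forEach(el => el.remove());
}
```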
andreiborza and others added 8 commits November 18, 2025 09:23
…18237)

Restores the office quote in our changelog. This quote acts as a
placement marker for attribution and is needed by our action.

Alternatively, we could change our action but this is a long-standing
quote and part of our culture :)
…#18098)

Fixes an issue where consecutive navigations to different routes fail to
create separate navigation spans, causing span leaks and missing
transaction data.

This came up in a React Router v6/v7 application where the pageload /
navigation transactions take longer and a high `finalTimeout` is set in
config. When users navigated between different routes (e.g.,
`/users/:id` → `/projects/:projectId` → `/settings`), the SDK
incorrectly prevented new navigation spans from being created whenever
an ongoing navigation span was active, regardless of whether the
navigation was to a different route. This resulted in only the first
navigation being tracked, with subsequent navigations being silently
ignored. Also, the spans that belonged to a subsequent navigation were
recorded as part of the previous one.

The root cause was the `if (!isAlreadyInNavigationSpan)` check that we
used to prevent cross-usage scenarios (multiple wrappers instrumenting
the same navigation), which incorrectly blocked legitimate consecutive
navigations to different routes.

So, this fix changes the logic to check both navigation span state and
the route name: `isSpanForSameRoute = isAlreadyInNavigationSpan &&
spanJson?.description === name`. This allows consecutive navigations to
different routes while preventing duplicate spans for the same route.


Also added tracking using `LAST_NAVIGATION_PER_CLIENT`. When multiple
wrappers (e.g., `wrapCreateBrowserRouter` + `wrapUseRoutes`) instrument
the same application, they may each trigger span creation for the same
navigation event. We store the navigation key
`${location.pathname}${location.search}${location.hash}` while the span
is active and clear it when that span ends.

If the same navigation key shows up again before the original span
finishes, the second wrapper updates that span’s name if it has better
parameterization instead of creating a duplicate, which keeps
cross-usage covered.
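
Condensed, the new check looks roughly like this (illustrative, not the exact SDK code):

```ts
import { getActiveSpan, spanToJSON } from '@sentry/core';

function shouldStartNavigationSpan(name: string): boolean {
  const activeSpan = getActiveSpan();
  const spanJson = activeSpan ? spanToJSON(activeSpan) : undefined;
  const isAlreadyInNavigationSpan = spanJson?.op === 'navigation';
  // Only skip when the ongoing navigation span is for the *same* route;
  // navigations to a different route get their own span.
  const isSpanForSameRoute = isAlreadyInNavigationSpan && spanJson?.description === name;
  return !isSpanForSameRoute;
}
```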
URLs were missing from server-side transaction events (server
components, generation functions) in Next.js. This was previously
removed in #18113 because we tried to synchronously access `params` and
`searchParams`, which caused builds to crash.

This PR adds the URL at runtime using a `preprocessEvent` hook, as
suggested.

**Implementation**

1. Reads `http.target` (actual request path) and `next.route`
(parameterized route) from the transaction's trace data
2. Extracts headers from the captured isolation scope's SDK processing
metadata
3. Builds the full URL using the existing `getSanitizedRequestUrl()`
utility
4. Adds it to `normalizedRequest.url` so the `requestDataIntegration`
includes it in the event

This works uniformly for both Webpack and Turbopack across all of our
supported Next.js versions (13 to 16). I added missing tests for this
case in the versions that did not have them.
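
Sketched out, the hook could look roughly like this (simplified: it sets the URL directly on the event, whereas the actual change goes through `normalizedRequest` and the `requestDataIntegration` as described; the `buildUrl` parameter stands in for the header lookup plus `getSanitizedRequestUrl()`):

```ts
import type { Client } from '@sentry/core';

function addRequestUrlToTransactionEvents(client: Client, buildUrl: (path: string) => string): void {
  client.on('preprocessEvent', event => {
    if (event.type !== 'transaction') {
      return;
    }
    const traceData = event.contexts?.trace?.data ?? {};
    // Prefer the actual request path, fall back to the parameterized route.
    const path = traceData['http.target'] ?? traceData['next.route'];
    if (typeof path === 'string') {
      event.request = { ...event.request, url: buildUrl(path) };
    }
  });
}
```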

Fixes #18115
Next.js respects the PORT variable. If for some reason this is not
sufficient for users, we can ship a follow-up with a config option, which
I wanted to avoid in the first step.

Also did a small refactor of the fetching code.

closes #18135
closes
https://linear.app/getsentry/issue/JS-1139/handle-the-case-where-users-define-a-different-portprotocol
Bumps the vendored-in web vitals library to include the changes between
`5.0.2` <-> `5.1.0` from upstream

#### Changes from upstream

- Remove `visibilitychange` event listeners when no longer required
[#627](GoogleChrome/web-vitals#627)
- Register visibility-change early
[#637](GoogleChrome/web-vitals#637)
- Only finalize LCP on user events (isTrusted=true)
[#635](GoogleChrome/web-vitals#635)
- Fallback to default getSelector if custom function is null or
undefined [#634](GoogleChrome/web-vitals#634)

#### Our own Changes

- Added `addPageListener` and `removePageListener` utilities because the
upstream package changed the listeners to be added on `window` instead
of `document`; these utilities avoid having to check for `window` every
time we add a listener.
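
A sketch of what such helpers can look like (illustrative):

```ts
type PageListener = (event: Event) => void;

// Upstream listeners moved from `document` to `window`; these wrappers
// centralize the existence check so call sites don't have to repeat it.
export function addPageListener(type: string, listener: PageListener): void {
  if (typeof window !== 'undefined') {
    window.addEventListener(type, listener);
  }
}

export function removePageListener(type: string, listener: PageListener): void {
  if (typeof window !== 'undefined') {
    window.removeEventListener(type, listener);
  }
}
```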
This adds instrumentation for the OpenAI Embeddings API. Specifically,
we instrument [Create
embeddings](https://platform.openai.com/docs/api-reference/embeddings/create),
which is also the only endpoint in the embeddings API atm.
Implementation generally follows the same flow we also have for the
`completions` and `responses` APIs. To detect `embedding` requests we
check whether the model name contains `embeddings`.

The embedding results are currently not tracked, as we do not truncate
outputs right now as far as I know and these can get large quite easily.
For instance, [text-embedding-3 uses dimension 1536 (small) or 3072
(large) by
default](https://platform.openai.com/docs/guides/embeddings#use-cases),
resulting in single embeddings sizes of 6KB and 12KB, respectively.
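
For reference, an instrumented call looks like any other Embeddings API call; the integration creates the span around it (model name and input are just examples):

```ts
import OpenAI from 'openai';

const client = new OpenAI();

// `text-embedding-3-small` returns 1536-dimension vectors by default,
// i.e. roughly 6 KB per embedding as floats.
const response = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'The quick brown fox jumped over the lazy dog',
});

console.log(response.data[0].embedding.length); // 1536
```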

Test updates:
- Added a new scenario-embeddings.mjs file that covers the embeddings
API tests (I tried to put this in the main scenario.mjs, but the linter
starts complaining about the file being too long).
- Added a new scenario file to check that truncation works properly for
the embeddings API. Also moved all truncation scenarios to a folder.
…mes (#18242)

Attaches a `server.address` attribute to all captured metrics on a
`serverRuntimeClient`

Did this by emitting a new `processMetric` hook in core, that we listen
to in the `serverRuntimeClient`. This way we do not need to re-export
all metrics functions from server runtime packages and still only get a
minimal client bundle size bump.
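
Roughly, the listener side could look like this (the hook name comes from this description; the exact callback shape is an assumption):

```ts
import type { Client } from '@sentry/core';

// Assumed callback shape; the `processMetric` hook itself is what this PR adds.
function attachServerAddress(client: Client, serverName: string): void {
  client.on('processMetric', (metric: { attributes?: Record<string, unknown> }) => {
    metric.attributes = {
      'server.address': serverName, // e.g. from options or os.hostname()
      ...metric.attributes, // user-provided attributes win
    };
  });
}
```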

Added integration tests for node + cloudflare

closes #18240
closes
https://linear.app/getsentry/issue/JS-1178/attach-serveraddress-as-a-default-attribute-to-metrics

---------

Co-authored-by: Lukas Stracke <[email protected]>
#18112)

This PR adds manual instrumentation support for LangGraph StateGraph
operations in Cloudflare Workers and Vercel Edge environments.

```
import * as Sentry from '@sentry/cloudflare'; // or '@sentry/vercel-edge'
import { StateGraph, START, END, MessagesAnnotation } from '@langchain/langgraph';

// Create and instrument the graph
const graph = new StateGraph(MessagesAnnotation)
  .addNode('agent', agentFn)
  .addEdge(START, 'agent')
  .addEdge('agent', END);

Sentry.instrumentLangGraph(graph, {
  recordInputs: true,
  recordOutputs: true,
});

const compiled = graph.compile({ name: 'weather_assistant' });

await compiled.invoke({
  messages: [{ role: 'user', content: 'What is the weather in SF?' }],
});
```

- [x] This PR depends on #18114
@logaretm logaretm requested a review from a team as a code owner November 19, 2025 09:01
@github-actions

github-actions bot commented Nov 19, 2025

size-limit report 📦

| Path | Size | % Change | Change |
| --- | --- | --- | --- |
| @sentry/browser | 24.62 kB | added | added |
| @sentry/browser - with treeshaking flags | 23.13 kB | added | added |
| @sentry/browser (incl. Tracing) | 41.37 kB | added | added |
| @sentry/browser (incl. Tracing, Profiling) | 45.69 kB | added | added |
| @sentry/browser (incl. Tracing, Replay) | 79.82 kB | added | added |
| @sentry/browser (incl. Tracing, Replay) - with treeshaking flags | 69.52 kB | added | added |
| @sentry/browser (incl. Tracing, Replay with Canvas) | 84.5 kB | added | added |
| @sentry/browser (incl. Tracing, Replay, Feedback) | 96.73 kB | added | added |
| @sentry/browser (incl. Feedback) | 41.29 kB | added | added |
| @sentry/browser (incl. sendFeedback) | 29.29 kB | added | added |
| @sentry/browser (incl. FeedbackAsync) | 34.21 kB | added | added |
| @sentry/react | 26.32 kB | added | added |
| @sentry/react (incl. Tracing) | 43.32 kB | added | added |
| @sentry/vue | 29.11 kB | added | added |
| @sentry/vue (incl. Tracing) | 43.17 kB | added | added |
| @sentry/svelte | 24.64 kB | added | added |
| CDN Bundle | 26.95 kB | added | added |
| CDN Bundle (incl. Tracing) | 41.95 kB | added | added |
| CDN Bundle (incl. Tracing, Replay) | 78.5 kB | added | added |
| CDN Bundle (incl. Tracing, Replay, Feedback) | 83.96 kB | added | added |
| CDN Bundle - uncompressed | 78.95 kB | added | added |
| CDN Bundle (incl. Tracing) - uncompressed | 124.33 kB | added | added |
| CDN Bundle (incl. Tracing, Replay) - uncompressed | 240.36 kB | added | added |
| CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed | 253.13 kB | added | added |
| @sentry/nextjs (client) | 45.73 kB | added | added |
| @sentry/sveltekit (client) | 41.76 kB | added | added |
| @sentry/node-core | 50.95 kB | added | added |
| @sentry/node | 159.26 kB | added | added |
| @sentry/node - without tracing | 92.83 kB | added | added |
| @sentry/aws-serverless | 106.58 kB | added | added |

Upgrades OpenAI instrumentation to support OpenAI SDK v6.0.0 and adds
node integration tests to verify compatibility.

### Changes

**Instrumentation:**
- Bumped OpenAI SDK support to v6.0.0 (<v7)
- OpenAI v6 introduces no breaking changes that affect our
instrumentation
- All existing instrumentation logic remains compatible with the new SDK
version

ref: https://github.com/openai/openai-node/releases/tag/v6.0.0

**Testing:**
- Created v6 test suite in
`dev-packages/node-integration-tests/suites/tracing/openai/v6/`
- Tests verify OpenAI SDK v6.0.0 instrumentation across:
  - Chat completions API with and without PII tracking
  - Responses API with streaming support
  - Custom integration options (recordInputs, recordOutputs)
  - Error handling in chat completions and streaming contexts
  - Root span creation without wrapping spans
  - Embeddings API

@Lms24 Lms24 left a comment

what a 🚢

@github-actions

github-actions bot commented Nov 19, 2025

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

| Scenario | Requests/s | % of Baseline | Prev. Requests/s | Change % |
| --- | --- | --- | --- | --- |
| GET Baseline | 9,266 | - | - | added |
| GET With Sentry | 1,411 | 15% | - | added |
| GET With Sentry (error only) | 6,371 | 69% | - | added |
| POST Baseline | 1,211 | - | - | added |
| POST With Sentry | 565 | 47% | - | added |
| POST With Sentry (error only) | 1,076 | 89% | - | added |
| MYSQL Baseline | 3,419 | - | - | added |
| MYSQL With Sentry | 527 | 15% | - | added |
| MYSQL With Sentry (error only) | 2,774 | 81% | - | added |

@andreiborza

andreiborza commented Nov 19, 2025

@logaretm tests are failing because we need to bump zod to ~3.25.0 minimum in those e2e tests.

If you wait on #18239 I have them bumped there.

Extracted this out to: #18251

The lower version is currently breaking our CI; it lacks the v3
exports that are used by `zod-to-json`.
@logaretm logaretm force-pushed the prepare-release/10.26.0 branch from b631e44 to af4b916 Compare November 19, 2025 09:46
@logaretm logaretm force-pushed the prepare-release/10.26.0 branch from af4b916 to be12569 Compare November 19, 2025 10:01
@logaretm logaretm merged commit d33c795 into master Nov 19, 2025
378 of 380 checks passed
@logaretm logaretm deleted the prepare-release/10.26.0 branch November 19, 2025 10:28