meta(changelog): Update changelog for 10.26.0 #18249
Merged
Conversation
…ages (#18157) This PR adds [truncation support for LangChain integration request messages](#18018). All messages are already normalized to arrays of messages, so no case distinction for strings is needed here. Adds tests to verify the behavior for 1. simple string inputs and 2. conversations in the form of arrays of strings. Closes #18018
Fixes faulty test assertions where we asserted that certain properties were _not_ in an object but used `toMatchObject` to do so.
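As a hedged illustration (not the actual Jest matcher implementation), the sketch below shows why a partial-match assertion can never prove a property is absent: subset matching only inspects the keys you list.

```javascript
// Minimal stand-in for toMatchObject-style subset matching: it checks only
// the keys listed in `expected`, so it can never prove other keys are absent.
function matchesSubset(actual, expected) {
  return Object.entries(expected).every(([key, value]) => actual[key] === value);
}

const event = { message: 'hi', internalFlag: true };

// Passes, even though `internalFlag` IS present on the event:
const matches = matchesSubset(event, { message: 'hi' });

// To assert absence, check the key explicitly instead:
const hasFlag = 'internalFlag' in event;
```

In Jest terms, that means replacing a `toMatchObject`-based absence check with an explicit `expect(obj).not.toHaveProperty('key')`.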
[Gitflow] Merge master into develop
…captureConsoleIntegration` (#18096) This patch creates a synthetic exception already within the captureConsole handler, so that we minimize the number of Sentry stack frames in the stack trace. It also adjusts the `Client::captureMessage` method to favor an already provided `syntheticException` over the one it would create by itself.
We needed the override because version 10.0.1 didn't have a valid package.json (embroider-build/embroider#2609). They released version 10.0.2 now.
Two changes:

1. Reduce bundle size slightly by optimizing `setTag` (+ adding some more tests around `setTag(s)`)
2. Adjust the integration test message since we no longer classify the SUT behaviour as a bug
…18172) This PR attempts to fix #18001 by not wrapping the middleware files when Next.js 16 is the current version and the app is in standalone output mode, which is the problematic scenario.

Investigation:

- Next.js renames `proxy` to `middleware` under the hood.
- Wrapping the middleware produces a `proxy.js` entry in `middleware.js.nft.json` that wouldn't be there otherwise; if we don't wrap it, that entry doesn't get produced.

So it seems like `@vercel/nft` is somehow adding the `proxy` file as a dependency of itself, which then fails to copy to the output directory because it was already copied and renamed to `proxy.js`, or at least that is my guess as to what is happening.
This came up while working on improvements for React Router wildcard routes. Looks like the successful browser `idleSpans` are reported with `unknown` status at the moment.
…ple times (#17972) When using higher-level integrations that wrap underlying libraries, both the wrapper integration and the underlying library integration can instrument the same API calls, resulting in duplicate spans. This is particularly problematic for:

- LangChain wrapping AI providers (OpenAI, Anthropic, Google GenAI)
- Any future providers that wrap other providers

We expose 3 internal methods

```js
_INTERNAL_skipAiProviderWrapping(providers: string[])
_INTERNAL_shouldSkipAiProviderWrapping(provider: string)
_INTERNAL_clearAiProviderSkips()
```

to bail out of instrumenting providers when they are on the skip list. These are internal methods not meant for public consumers and may be changed or removed in the future.

---------

Co-authored-by: Andrei Borza <[email protected]>
As discussed moving the AI integrations from core/utils to core/tracing.
…8191) I guess this got through CI because we test the latest Node 18 rather than 18.0.0. This breaks [some supported Electron versions](https://github.com/getsentry/sentry-electron/actions/runs/19306230917/job/55215745023) which use >18.0.0 but <18.19.0. This won't have impacted almost anyone else, because OTel requires 18.19.0!

```
[App] [ Main] App threw an error during load
[App] [ Main] file:///home/runner/work/sentry-electron/sentry-electron/test/e2e/dist/error-after-ready/node_modules/@sentry/node-core/build/esm/integrations/pino.js:1
[App] [ Main] import { tracingChannel } from 'node:diagnostics_channel';
[App] [ Main]                ^^^^^^^^^^^^^^
[App] [ Main] SyntaxError: The requested module 'node:diagnostics_channel' does not provide an export named 'tracingChannel'
[App] [ Main]     at ModuleJob._instantiate (node:internal/modules/esm/module_job:124:21)
[App] [ Main]     at async ModuleJob.run (node:internal/modules/esm/module_job:190:5)
[App] [ Main] A JavaScript error occurred in the main process
```
With this PR, users can set their minimum replay duration to at most 50s; previously this was capped at 15s. We cannot bump this value further, as that would lead to dropping buffered replays (we keep at most 60s in memory at this point). closes #18109

---------

Co-authored-by: Andrei <[email protected]>
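A minimal sketch of the clamping this implies; `clampMinReplayDuration` is a hypothetical helper name, and values are in milliseconds (the real option handling lives in the Replay package):

```javascript
// New upper bound for the minimum replay duration (was 15_000). The
// in-memory buffer only holds ~60s of replay data, so allowing more would
// mean dropping buffered replays.
const MAX_MIN_REPLAY_DURATION = 50_000;

// Hypothetical helper: clamp a user-provided value into the allowed range.
function clampMinReplayDuration(requestedMs) {
  return Math.min(Math.max(requestedMs, 0), MAX_MIN_REPLAY_DURATION);
}
```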
…ion (#18195)

## Problem

The Spotlight configuration logic had a precedence bug: when `spotlight: true` was set in the config AND the `SENTRY_SPOTLIGHT` environment variable contained a URL string, the SDK would incorrectly use `true` instead of the URL from the environment variable. According to the [Spotlight specification](https://raw.githubusercontent.com/getsentry/sentry-docs/b38e3b307f900665a348f855559ac1d1c58914cc/develop-docs/sdk/expected-features/spotlight.mdx), when `spotlight: true` is set and the env var contains a URL, the URL from the env var should be used, to allow developers to override the Spotlight URL via environment variables.

**Previous behavior:**

```typescript
// Config: spotlight: true
// Env: SENTRY_SPOTLIGHT=http://custom:3000/stream
// Result: spotlight = true ❌ (incorrect)
```

**Expected behavior per spec:**

```typescript
// Config: spotlight: true
// Env: SENTRY_SPOTLIGHT=http://custom:3000/stream
// Result: spotlight = "http://custom:3000/stream" ✅ (correct)
```

## Solution

Fixed the precedence logic in `getClientOptions()` to properly implement the specification:

1. `spotlight: false` → Always disabled (overrides env var)
2. `spotlight: string` → Uses the config URL (ignores env var)
3. `spotlight: true` + env var URL → **Uses the env var URL** (the bug fix)
4. `spotlight: true` + env var truthy → Uses default URL
5. No config + env var → Parses and uses env var

The implementation reuses the existing `envToBool()` utility to avoid code duplication.
## Changes

- Fixed Spotlight precedence logic in `packages/node-core/src/sdk/index.ts`
- Added 12 comprehensive test cases covering all precedence scenarios in `packages/node-core/test/sdk/init.test.ts`
- Updated CHANGELOG.md

## Test Coverage

The new tests cover:

- ✅ Env var only: truthy values, falsy values, URL strings
- ✅ Config only: `true`, `false`, URL string
- ✅ Precedence: config `false` overrides env var (URL, truthy, falsy)
- ✅ Precedence: config URL overrides env var
- ✅ Precedence: config `true` + env var URL uses env var URL (the fix)
- ✅ Precedence: config `true` + env var truthy uses default URL

## Related

- Original Spotlight implementation: #13325
- Spotlight specification: https://spotlightjs.com/

---------

Co-authored-by: Cursor Agent <[email protected]>
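The five precedence rules above can be sketched as one standalone function. `resolveSpotlight`, `isUrl`, and the simplified `envToBool` below are reconstructions for illustration, not the SDK's actual internals:

```javascript
// Simplified stand-in for the SDK's envToBool utility.
function envToBool(value) {
  if (value === undefined) return undefined;
  return !['0', 'false', 'no', 'off', ''].includes(String(value).toLowerCase());
}

function isUrl(value) {
  try { new URL(value); return true; } catch { return false; }
}

const DEFAULT_SPOTLIGHT_URL = 'http://localhost:8969/stream';

// Resolve the effective spotlight setting from the config value and the
// SENTRY_SPOTLIGHT env var, following the five precedence rules above.
function resolveSpotlight(configValue, envValue) {
  if (configValue === false) return false;                  // 1. explicit off wins
  if (typeof configValue === 'string') return configValue;  // 2. config URL wins
  if (configValue === true) {
    if (envValue && isUrl(envValue)) return envValue;       // 3. env URL overrides `true`
    return DEFAULT_SPOTLIGHT_URL;                           // 4. truthy -> default URL
  }
  // 5. no config: parse the env var
  if (envValue === undefined) return false;
  if (isUrl(envValue)) return envValue;
  return envToBool(envValue) ? DEFAULT_SPOTLIGHT_URL : false;
}
```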
…8194) While investigating [this ticket](https://linear.app/getsentry/issue/JS-657/available-tools-json-should-be-a-stringified-json-array-of-objects-not) I noticed that in Google GenAI, available tools are sent as a nested rather than a flat array, which seems like a bug to me. The format I would expect, and how we do it in other integrations, is `[{tool-definition}, {tool-definition}]`. What we actually send at the moment is `[[{tool-definition}], [{tool-definition}]]`. This PR fixes this to instead send a flat list of tool definitions.
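The shape change can be illustrated with plain `Array.prototype.flat` (a sketch of the fix, not the integration code):

```javascript
// What was being sent: each tool definition wrapped in its own array.
const nested = [[{ name: 'get_weather' }], [{ name: 'search' }]];

// The expected flat shape, matching the other AI integrations:
const flat = nested.flat();
```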
[Linear Ticket](https://linear.app/getsentry/issue/JS-657/available-tools-json-should-be-a-stringified-json-array-of-objects-not) The available tools sent from our SDKs should generally be in the format of a stringified array of objects (where an object stores information about a particular tool). This is true for all AI SDKs except Vercel, where we send an array of strings. This PR fixes this by parsing the available tool array and converting the whole array into a proper string representation.
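A sketch of that normalization, under the stated assumption that each Vercel tool arrives as an already-stringified object:

```javascript
// Before: an array of strings, each string being one serialized tool object.
const rawTools = ['{"name":"get_weather"}', '{"name":"search"}'];

// After: parse each entry, then stringify the whole array once, yielding a
// stringified array of objects like the other AI SDK integrations.
const normalizedTools = JSON.stringify(rawTools.map(tool => JSON.parse(tool)));
```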
…igh `normalizeDepth` (#18206) Fixes #18203

### Problem

When using `normalizeDepth: 10` with `captureConsoleIntegration` enabled, Vue VNodes in console arguments would trigger recursive warning spam. Accessing VNode properties during normalization triggers Vue's reactive getters, which emit console warnings. These warnings would then be captured and normalized again, creating a recursive loop that could generate hundreds of warnings. Note that this only happens in `dev` mode.

### Solution

Changed `isVueViewModel()` to detect Vue 3 VNodes (`__v_isVNode: true`) in addition to Vue 2/3 ViewModels. VNodes are now identified early in the normalization process and stringified as `[VueVNode]` before their properties are accessed, preventing the recursive warning loop. Some of the properties on the `VNode` can also be reactive, so the normalizer could incorrectly add them to a watchEffect's or a render function's reactive dependencies when accessing them.

### Changes

- **`packages/core/src/utils/is.ts`**: Added `__v_isVNode` check to `isVueViewModel()`.
- **`packages/core/src/utils/normalize.ts`**: Distinguish VNodes from ViewModels in output (`[VueVNode]` vs `[VueViewModel]`).
- **Tests**: Added comprehensive unit tests for Vue object detection and an integration test that verifies no property access occurs during VNode normalization.

---

I couldn't reproduce this exactly in a test with a real Vue component, but verified it fixes the reproduction example. The before and after of the captured logs:

Before: <img width="1106" height="1137" alt="CleanShot 2025-11-14 at 15 46 30" src="https://github.com/user-attachments/assets/435dbb04-ba3c-430b-8c39-d886f92072e8" />

After: <img width="908" height="768" alt="CleanShot 2025-11-14 at 15 45 15" src="https://github.com/user-attachments/assets/e7d8cca2-a0e1-48bb-9f95-3a39d2164d21" />

As a Vue developer, I don't think the information lost here would have helped debug anything.
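A simplified sketch of the detection described above; the names mirror the PR text, but the real implementations live in `packages/core/src/utils/is.ts` and `normalize.ts` and may differ in detail:

```javascript
// Detect Vue 2/3 ViewModels and (new) Vue 3 VNodes without reading any
// reactive properties: only the marker flags are checked.
function isVueViewModel(value) {
  return Boolean(
    value !== null &&
      typeof value === 'object' &&
      (value.__isVue || value._isVue || value.__v_isVNode),
  );
}

// Stringify early, before any property access can hit a reactive getter.
function stringifyVueObject(value) {
  return value.__v_isVNode ? '[VueVNode]' : '[VueViewModel]';
}
```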
The ErrorBoundary exported in the SDK only works on the client and is not intended to be used. Use react router's error boundary instead: https://docs.sentry.io/platforms/javascript/guides/react-router/#report-errors-from-error-boundaries.
…ous logging (#18211) The flush timeout was being reset on every incoming log, preventing flushes when logs arrived continuously. Now, the timer starts on the first log and won't get reset, ensuring logs flush within the configured interval. Fixes #18204, getsentry/sentry-react-native#5378 v9 backport: #18214
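The timer behaviour can be sketched with a small buffer that arms its flush timeout once per batch instead of resetting it on every log (a reconstruction, not the SDK code):

```javascript
// Buffer logs and flush them after `intervalMs`. The timer is armed on the
// first log of a batch and deliberately NOT reset by later logs, so a
// continuous log stream can no longer starve the flush.
function createLogBuffer(flush, intervalMs) {
  let buffer = [];
  let timer;
  return {
    add(log) {
      buffer.push(log);
      if (timer === undefined) { // only arm once per batch
        timer = setTimeout(() => {
          timer = undefined;
          const batch = buffer;
          buffer = [];
          flush(batch);
        }, intervalMs);
      }
    },
  };
}
```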
We were previously emitting the non-processed metric in the hook; I changed this behaviour and added a test to verify.
This PR adds official support for instrumenting LangGraph StateGraph operations in Node with Sentry tracing, following OpenTelemetry semantic conventions for Generative AI.

### Currently supported:

- Node.js: both agent creation and invocation are instrumented in this PR
- ESM and CJS: both module systems are supported

The `langGraphIntegration()` accepts the following options:

```ts
// The integration respects your sendDefaultPii client option
interface LangGraphOptions {
  recordInputs?: boolean; // Whether to record input messages
  recordOutputs?: boolean; // Whether to record response text and tool calls
}
```
e.g.

```js
Sentry.init({
dsn: '__DSN__',
sendDefaultPii: false, // Even with PII disabled globally
integrations: [
Sentry.langGraphIntegration({
recordInputs: true, // Force recording input messages
recordOutputs: true, // Force recording response text
}),
],
});
```
### Operations traced:

- `gen_ai.create_agent`: spans created when `StateGraph.compile()` is called
- `gen_ai.invoke_agent`: spans created when `CompiledGraph.invoke()` is called
Moves the prioritization hint to a dropdown to avoid users accidentally removing it --------- Co-authored-by: Lukas Stracke <[email protected]>
**Summary**

ISR pages will have `sentry-trace` and `baggage` meta tags rendered on them following the initial render or after the first invalidation, causing a cached trace id to be present until the next invalidation. This happens in Next.js 15/16, both on Turbopack and Webpack.

**What I tried that didn't work**

I found no way to conditionally set/unset/change the values set by the `clientTraceMetadata` option; I found nothing useful on unit async storages, nor does re-setting the propagation context work. `clientTraceMetadata` gets called much earlier, at the `app-render.tsx` level, which then calls our `SentryPropagator.inject()`. We cannot intercept it either, because it runs before the page wrapper is called. The main issue is _timing_:

- Suppressing the tracing wouldn't work either because it is too late. Ideally we want a way to tell Next to remove those attributes at runtime, or render them conditionally.
- I also tried setting everything that has to do with `sentry-trace` or baggage to dummy values as some sort of "marker" for the SDK on the browser side to drop them, but again it is too late, since `clientTraceMetadata` is picked up too early.

**Implementation**

So I figured out a workaround: I decided to do it on the client side by

- marking ISR page routes via the route manifest we already have, and
- removing the meta tags in the `Sentry.init` call, before the browser integration has had a chance to grab them.

Not the cleanest way, but I verified the issue by writing tests for it and observing page loads across multiple page visits having the same trace id. The meta deletion forces them to have a new id for every visit, which is what we want.
Make use of our existing `LRUMap` for the ISR route cache to avoid the map growing too big.
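For illustration, a minimal LRU map along the lines of such a utility (a sketch only; the SDK's actual `LRUMap` lives in its core utils and may differ in detail):

```javascript
// Capacity-bounded map that evicts the least-recently-used entry. It relies
// on Map preserving insertion order: re-inserting on get() moves an entry to
// the "most recently used" end, so the first key is always the LRU victim.
class LRUMap {
  constructor(maxSize) {
    this._maxSize = maxSize;
    this._map = new Map();
  }
  get(key) {
    const value = this._map.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark as most recently used.
    this._map.delete(key);
    this._map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this._map.size >= this._maxSize && !this._map.has(key)) {
      // Evict the least recently used entry (first key in insertion order).
      this._map.delete(this._map.keys().next().value);
    }
    this._map.delete(key);
    this._map.set(key, value);
  }
}
```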
…18237) Restores the office quote in our changelog. This quote acts as a placement marker for attribution and is needed by our action. Alternatively, we could change our action but this is a long-standing quote and part of our culture :)
…#18098) Fixes an issue where consecutive navigations to different routes fail to create separate navigation spans, causing span leaks and missing transaction data. This came up in a React Router v6/v7 application where the pageload/navigation transactions take longer and a high `finalTimeout` is set in the config.

When users navigate between different routes (e.g., `/users/:id` → `/projects/:projectId` → `/settings`), the SDK was incorrectly preventing new navigation spans from being created whenever an ongoing navigation span was active, regardless of whether the navigation was to a different route. This resulted in only the first navigation being tracked, with subsequent navigations being silently ignored. Also, the spans that should have been part of a subsequent navigation were recorded as part of the previous one.

The root cause was the `if (!isAlreadyInNavigationSpan)` check that we used to prevent cross-usage scenarios (multiple wrappers instrumenting the same navigation), which incorrectly blocked legitimate consecutive navigations to different routes. So this fix changes the logic to check both the navigation span state and the route name: `isSpanForSameRoute = isAlreadyInNavigationSpan && spanJson?.description === name`. This allows consecutive navigations to different routes while preventing duplicate spans for the same route.

Also added tracking using `LAST_NAVIGATION_PER_CLIENT`. When multiple wrappers (e.g., `wrapCreateBrowserRouter` + `wrapUseRoutes`) instrument the same application, they may each trigger span creation for the same navigation event. We store the navigation key `${location.pathname}${location.search}${location.hash}` while the span is active and clear it when that span ends. If the same navigation key shows up again before the original span finishes, the second wrapper updates that span's name if it has better parameterization instead of creating a duplicate, which keeps cross-usage covered.
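The guard described above boils down to a check like the following sketch, where `activeSpan` stands in for the SDK's serialized span JSON:

```javascript
// Only skip creating a new navigation span when an active navigation span
// exists AND it targets the same route name.
function shouldSkipNavigationSpan(activeSpan, name) {
  const isAlreadyInNavigationSpan = activeSpan?.op === 'navigation';
  const isSpanForSameRoute = isAlreadyInNavigationSpan && activeSpan?.description === name;
  return isSpanForSameRoute;
}
```

With this, a navigation to a different route (a `description` mismatch) is allowed to start a fresh span, while a duplicate trigger for the same route is skipped.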
URLs were missing from server-side transaction events (server components, generation functions) in Next.js. This was previously removed in #18113 because we tried to synchronously access `params` and `searchParams`, which caused builds to crash. This PR adds the URL at runtime using a `preprocessEvent` hook, as suggested.

**Implementation**

1. Reads `http.target` (actual request path) and `next.route` (parameterized route) from the transaction's trace data
2. Extracts headers from the captured isolation scope's SDK processing metadata
3. Builds the full URL using the existing `getSanitizedRequestUrl()` utility
4. Adds it to `normalizedRequest.url` so the `requestDataIntegration` includes it in the event

This works uniformly for both Webpack and Turbopack across all of our supported Next.js versions (13-16). I added missing tests for this case in the versions that did not have them. Fixes #18115
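Steps 1-3 can be sketched as follows; the event shape and the URL assembly here are simplified assumptions, and `getSanitizedRequestUrl()` is approximated inline rather than imported from the SDK:

```javascript
// Rebuild an absolute request URL from a transaction event: read the request
// path from the trace data and the host/protocol from the captured headers.
function buildTransactionUrl(event) {
  const traceData = event.contexts?.trace?.data ?? {};
  const target = traceData['http.target'];
  if (!target) return undefined;
  const headers = event.sdkProcessingMetadata?.normalizedRequest?.headers ?? {};
  const host = headers['host'];
  const protocol = headers['x-forwarded-proto'] ?? 'http';
  // Fall back to the bare path when no host header was captured.
  return host ? `${protocol}://${host}${target}` : target;
}
```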
Next.js respects the PORT variable. If for some reason this is not sufficient for users, we can ship a follow-up with a config option, which I wanted to avoid in the first step. Also did a small refactor of the fetching code. closes #18135 closes https://linear.app/getsentry/issue/JS-1139/handle-the-case-where-users-define-a-different-portprotocol
Bumps the vendored-in web-vitals library to include the changes between `5.0.2` and `5.1.0` from upstream.

#### Changes from upstream

- Remove `visibilitychange` event listeners when no longer required [#627](GoogleChrome/web-vitals#627)
- Register visibility-change early [#637](GoogleChrome/web-vitals#637)
- Only finalize LCP on user events (isTrusted=true) [#635](GoogleChrome/web-vitals#635)
- Fallback to default getSelector if custom function is null or undefined [#634](GoogleChrome/web-vitals#634)

#### Our own changes

- Added `addPageListener` and `removePageListener` utilities, because the upstream package changed the listeners to be added on `window` instead of `document`; these utilities avoid having to check for `window` every time we add a listener.
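Such listener utilities might look roughly like this (a sketch, assuming their only job is to centralize the `window` existence check for non-browser runtimes):

```javascript
// Add/remove a listener on window, but only when window exists, so the same
// code path is safe in workers and server-side runtimes.
function addPageListener(type, listener) {
  if (typeof window !== 'undefined') {
    window.addEventListener(type, listener);
  }
}

function removePageListener(type, listener) {
  if (typeof window !== 'undefined') {
    window.removeEventListener(type, listener);
  }
}
```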
This adds instrumentation for the OpenAI Embeddings API. Specifically, we instrument [Create embeddings](https://platform.openai.com/docs/api-reference/embeddings/create), which is also the only endpoint in the embeddings API at the moment. The implementation generally follows the same flow we already have for the `completions` and `responses` APIs. To detect `embedding` requests, we check whether the model name contains `embeddings`. The embedding results are currently not tracked: we do not truncate outputs right now, as far as I know, and these can get large quite easily. For instance, [text-embedding-3 uses dimension 1536 (small) or 3072 (large) by default](https://platform.openai.com/docs/guides/embeddings#use-cases), resulting in single embedding sizes of 6KB and 12KB, respectively.

Test updates:

- Added a new `scenario-embeddings.mjs` file that covers the embeddings API tests (tried to put this in the main `scenario.mjs`, but the linter starts complaining about the file being too long).
- Added a new scenario file to check that truncation works properly for the embeddings API. Also moved all truncation scenarios to a folder.
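The detection heuristic can be sketched as a substring check on the model name (using `'embedding'` here so that names like `text-embedding-3-small` match; the exact marker string used in the SDK may differ):

```javascript
// Heuristic: OpenAI embeddings models carry "embedding" in their model name.
function isEmbeddingsRequest(model) {
  return typeof model === 'string' && model.includes('embedding');
}
```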
…mes (#18242) Attaches a `server.address` attribute to all captured metrics on a `serverRuntimeClient`. Did this by emitting a new `processMetric` hook in core that we listen to in the `serverRuntimeClient`. This way we do not need to re-export all metrics functions from server runtime packages and still only get a minimal client bundle-size bump. Added integration tests for Node + Cloudflare.

closes #18240
closes https://linear.app/getsentry/issue/JS-1178/attach-serveraddress-as-a-default-attribute-to-metrics

---------

Co-authored-by: Lukas Stracke <[email protected]>
#18112) This PR adds manual instrumentation support for LangGraph StateGraph operations in Cloudflare Workers and Vercel Edge environments.

```js
import * as Sentry from '@sentry/cloudflare'; // or '@sentry/vercel-edge'
import { StateGraph, START, END, MessagesAnnotation } from '@langchain/langgraph';

// Create and instrument the graph
const graph = new StateGraph(MessagesAnnotation)
  .addNode('agent', agentFn)
  .addEdge(START, 'agent')
  .addEdge('agent', END);

Sentry.instrumentLangGraph(graph, {
  recordInputs: true,
  recordOutputs: true,
});

const compiled = graph.compile({ name: 'weather_assistant' });

await compiled.invoke({
  messages: [{ role: 'user', content: 'What is the weather in SF?' }],
});
```

- [x] This PR depends on #18114
Upgrades OpenAI instrumentation to support OpenAI SDK v6.0.0 and adds Node integration tests to verify compatibility.

### Changes

**Instrumentation:**

- Bumped OpenAI SDK support to v6.0.0 (<v7)
- OpenAI v6 introduces no breaking changes that affect our instrumentation
- All existing instrumentation logic remains compatible with the new SDK version

ref: https://github.com/openai/openai-node/releases/tag/v6.0.0

**Testing:**

- Created a v6 test suite in `dev-packages/node-integration-tests/suites/tracing/openai/v6/`
- Tests verify OpenAI SDK v6.0.0 instrumentation across:
  - Chat completions API with and without PII tracking
  - Responses API with streaming support
  - Custom integration options (recordInputs, recordOutputs)
  - Error handling in chat completions and streaming contexts
  - Root span creation without wrapping spans
  - Embeddings API
Lms24 (Member) approved these changes on Nov 19, 2025 and left a comment:

> what a 🚢
node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
chargome (Member) approved these changes on Nov 19, 2025 and left a comment:

> The lower version is currently breaking our CI; it lacks the v3 exports that are used by `zod-to-json`.
andreiborza approved these changes on Nov 19, 2025