
Commit 0894f83

ref(nextjs): Move manual instrumentation examples for ai into Edge section (#15284)
## DESCRIBE YOUR PR

These snippets are only needed when running the Edge runtime, but this is only called out in the paragraph above, which is easy to miss at first glance, leading people to double-instrument clients and potentially run into undefined behavior. This PR restructures these sections.

## IS YOUR CHANGE URGENT?

Help us prioritize incoming PRs by letting us know when the change needs to go live.

- [ ] Urgent deadline (GA date, etc.): <!-- ENTER DATE HERE -->
- [ ] Other deadline: <!-- ENTER DATE HERE -->
- [x] None: Not urgent, can wait up to 1 week+
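The "double instrumenting" hazard the PR description mentions can be sketched generically: wrapping an already-wrapped client records each call once per wrap layer. A minimal dependency-free illustration, using a hypothetical `instrument()` helper as a stand-in for wrappers like `Sentry.instrumentAnthropicAiClient` (this is not the actual Sentry implementation):

```javascript
// Collected "spans" — stand-in for what an instrumentation layer reports.
const spans = [];

// Hypothetical wrapper: records one span per call, then delegates.
function instrument(client) {
  return {
    create(args) {
      spans.push({ op: "ai.create", args });
      return client.create(args);
    },
  };
}

const rawClient = { create: (args) => `response to ${args.prompt}` };

const wrappedOnce = instrument(rawClient);
const wrappedTwice = instrument(wrappedOnce); // accidental double instrumentation

wrappedOnce.create({ prompt: "Hello!" });
console.log(spans.length); // 1 — one span for one call

wrappedTwice.create({ prompt: "Hello!" });
console.log(spans.length); // 3 — the double-wrapped call added 2 spans
```

Reading the Edge-only snippet as a general instruction and wrapping a client the SDK already instruments automatically produces exactly this kind of duplicated reporting, which is why the PR moves the snippets under an explicit Edge-runtime heading.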
1 parent fd20794 commit 0894f83

File tree

3 files changed: +73 −67 lines


docs/platforms/javascript/common/configuration/integrations/anthropic.mdx

Lines changed: 26 additions & 24 deletions
@@ -77,30 +77,6 @@ const response = await client.messages.create({
 
 </PlatformSection>
 
-<PlatformSection supported={['javascript.nextjs']}>
-
-This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the Anthropic client:
-
-```javascript
-import * as Sentry from "@sentry/nextjs";
-import Anthropic from "@anthropic-ai/sdk";
-
-const anthropic = new Anthropic();
-const client = Sentry.instrumentAnthropicAiClient(anthropic, {
-  recordInputs: true,
-  recordOutputs: true,
-});
-
-// Use the wrapped client instead of the original anthropic instance
-const response = await client.messages.create({
-  model: "claude-3-5-sonnet-20241022",
-  max_tokens: 1024,
-  messages: [{ role: "user", content: "Hello!" }],
-});
-```
-
-</PlatformSection>
-
 ## Options
 
 ### `recordInputs`
@@ -145,6 +121,32 @@ By default this integration adds tracing support to Anthropic API method calls i
 
 The integration will automatically detect streaming vs non-streaming requests and handle them appropriately.
 
+<PlatformSection supported={['javascript.nextjs']}>
+
+## Edge runtime
+
+This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the Anthropic client:
+
+```javascript
+import * as Sentry from "@sentry/nextjs";
+import Anthropic from "@anthropic-ai/sdk";
+
+const anthropic = new Anthropic();
+const client = Sentry.instrumentAnthropicAiClient(anthropic, {
+  recordInputs: true,
+  recordOutputs: true,
+});
+
+// Use the wrapped client instead of the original anthropic instance
+const response = await client.messages.create({
+  model: "claude-3-5-sonnet-20241022",
+  max_tokens: 1024,
+  messages: [{ role: "user", content: "Hello!" }],
+});
+```
+
+</PlatformSection>
+
 ## Supported Versions
 
 - `@anthropic-ai/sdk`: `>=0.19.2 <1.0.0`

docs/platforms/javascript/common/configuration/integrations/google-genai.mdx

Lines changed: 22 additions & 20 deletions
@@ -73,26 +73,6 @@ const result = await client.models.generateContent("Hello!");
 
 </PlatformSection>
 
-<PlatformSection supported={['javascript.nextjs']}>
-
-This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the Google Gen AI client:
-
-```javascript
-import * as Sentry from "@sentry/nextjs";
-import { GoogleGenAI } from "@google/genai";
-
-const genAI = new GoogleGenAI(process.env.API_KEY);
-const client = Sentry.instrumentGoogleGenAIClient(genAI, {
-  recordInputs: true,
-  recordOutputs: true,
-});
-
-// Use the wrapped client instead of the original genAI instance
-const result = await client.models.generateContent("Hello!");
-```
-
-</PlatformSection>
-
 ## Options
 
 ### `recordInputs`
@@ -135,6 +115,28 @@ By default this integration adds tracing support to Google Gen AI SDK method cal
 
 The integration will automatically detect streaming vs non-streaming requests and handle them appropriately.
 
+<PlatformSection supported={['javascript.nextjs']}>
+
+## Edge runtime
+
+This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the Google Gen AI client:
+
+```javascript
+import * as Sentry from "@sentry/nextjs";
+import { GoogleGenAI } from "@google/genai";
+
+const genAI = new GoogleGenAI(process.env.API_KEY);
+const client = Sentry.instrumentGoogleGenAIClient(genAI, {
+  recordInputs: true,
+  recordOutputs: true,
+});
+
+// Use the wrapped client instead of the original genAI instance
+const result = await client.models.generateContent("Hello!");
+```
+
+</PlatformSection>
+
 ## Supported Versions
 
 - `@google/genai`: `>=0.10.0 <2`

docs/platforms/javascript/common/configuration/integrations/openai.mdx

Lines changed: 25 additions & 23 deletions
@@ -76,29 +76,6 @@ const response = await client.chat.completions.create({
 
 </PlatformSection>
 
-<PlatformSection supported={['javascript.nextjs']}>
-
-This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the OpenAI client:
-
-```javascript
-import * as Sentry from "@sentry/nextjs";
-import OpenAI from "openai";
-
-const openai = new OpenAI();
-const client = Sentry.instrumentOpenAiClient(openai, {
-  recordInputs: true,
-  recordOutputs: true,
-});
-
-// Use the wrapped client instead of the original openai instance
-const response = await client.chat.completions.create({
-  model: "gpt-4o",
-  messages: [{ role: "user", content: "Hello!" }],
-});
-```
-
-</PlatformSection>
-
 ## Options
 
 ### `recordInputs`
@@ -138,6 +115,31 @@ By default this integration adds tracing support to OpenAI API method calls incl
 
 The integration will automatically detect streaming vs non-streaming requests and handle them appropriately.
 
+<PlatformSection supported={['javascript.nextjs']}>
+
+## Edge runtime
+
+This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the OpenAI client:
+
+```javascript
+import * as Sentry from "@sentry/nextjs";
+import OpenAI from "openai";
+
+const openai = new OpenAI();
+const client = Sentry.instrumentOpenAiClient(openai, {
+  recordInputs: true,
+  recordOutputs: true,
+});
+
+// Use the wrapped client instead of the original openai instance
+const response = await client.chat.completions.create({
+  model: "gpt-4o",
+  messages: [{ role: "user", content: "Hello!" }],
+});
+```
+
+</PlatformSection>
+
 ## Supported Versions
 
 - `openai`: `>=4.0.0 <6`

0 commit comments