Replies: 44 comments 10 replies
-
@timneutkens I would like to find a way to integrate the logging of the request handler into my logging/monitoring infrastructure. I guess it would need some refactoring on Next's side.
What are your thoughts on this?
-
Is there any interest in this? I have a custom server, and I'd really like to unify my log output. I'm personally fine with either approach; passing in a custom logger sounds easier for Next users (developers). @timneutkens, @rauchg, thoughts? If bandwidth is an issue, I can probably get something working in my spare time over the next few weeks.
-
More than this, it seems to me that Next.js is intercepting these errors, so neither Sentry.io (which we use for error reporting) nor our server-level error logging sees them.
-
Allowing some way to attach a customizable logger to Next's logging would be great for me. I log JSON to stdout for Docker and use Next as a custom server.
-
Did anyone find a workaround? We are trying to integrate Rollbar and Datadog, and this would be great.
-
My "workaround" is to just filter out non-JSON lines. Using roarr:

```shell
npm run start | npx roarr filter --context 0 --exclude-orphans '{ "context.logLevel": { gt: 10 } }' | npx roarr pretty-print
```

My solution for pino is similar.
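The exact pino command was lost from this comment; a hypothetical equivalent of the same filtering idea (keep only lines that look like JSON, then pretty-print with `pino-pretty`) might be:

```shell
# Hypothetical pino analogue of the roarr pipeline above:
# drop Next's plain-text lines, keep JSON log lines, pretty-print them.
npm run start | grep -E '^\{' | npx pino-pretty
```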
-
Just want to bump this issue. Not being able to have structured logs emitted from Next is a real limitation. Would love to see an API like:
I'm happy to work on this if you'd be willing.
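The API sketch was lost from this comment; for illustration, the kind of pluggable logger being asked for might look like the following. Note that the `logger` integration point is hypothetical — Next.js has no such option — and only the one-JSON-line-per-event shape is the point:

```javascript
// Hypothetical: a pluggable logger that emits one JSON line per event.
// Next.js does not actually accept a `logger` option; this sketches the ask.
function makeJsonLogger(stream = process.stdout) {
  const emit = (level, msg, meta = {}) => {
    const line = JSON.stringify({ level, msg, time: Date.now(), ...meta })
    stream.write(line + "\n")
    return line
  }
  return {
    ready: (msg, meta) => emit("ready", msg, meta),
    info: (msg, meta) => emit("info", msg, meta),
    warn: (msg, meta) => emit("warn", msg, meta),
    error: (msg, meta) => emit("error", msg, meta),
  }
}

// The requested (hypothetical) integration point:
// const app = next({ dev, logger: makeJsonLogger() })
```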
-
Bumping this again; it's important, and I'd be happy if it existed.
-
+1, this would be a good improvement. In the meantime, how do you all log the response body with your setup?
-
I added an Express middleware to log each request. This works because each route hits the server, whether it renders on the server (HTML) or the client (JSON).
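A minimal sketch of such a middleware, assuming the Express-style `(req, res, next)` signature (the JSON field names are illustrative, not from the original comment):

```javascript
// Logs one JSON line per request once the response finishes.
// Relies only on Node's http req/res; usable as Express middleware.
function requestLogger(req, res, next) {
  const start = Date.now()
  res.on("finish", () => {
    console.log(JSON.stringify({
      method: req.method,
      url: req.url,
      status: res.statusCode,
      durationMs: Date.now() - start,
    }))
  })
  next()
}

// Usage in an Express custom server:
//   app.use(requestLogger)
//   app.all("*", (req, res) => handle(req, res))
```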
-
That doesn't handle this problem though, because as soon as your Express handler hands the request off to Next's request handler, Next's own logging takes over.
-
I ended up creating a custom serializer for Next req/res. If there is some interest, I could convert it into a package that would take other loggers and more middlewares to run.
-
+1 to this being sorely missed; we've got stdout logging to Datadog with structured logs for all of our other services, and now it's constantly interspersed with noise from Next.js's plain-line logging.
-
I've seen this statement almost every time there is a mention of a custom server (including in the docs), but never a detailed explanation of how and why it disables Automatic Static Optimization.
-
@leerob: Setting up Next.js with a custom server is easy.
-
By default, logs and errors are printed to stdout.

```shell
node --require ./server-preload.js ./node_modules/.bin/next start
#      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#      Load our preload before running `next start`
```

Where this is an example of the preload script. Agreed this could be easier.
Check out https://nextjs.org/docs/advanced-features/automatic-static-optimization as well as the source code for this feature to understand more about how it works 😄
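The preload script itself was linked rather than inlined; a minimal sketch of the idea — patching the global `console` so every line comes out as JSON — might look like this (the field names are assumptions, not the linked script's exact format):

```javascript
// server-preload.js — sketch: rewrite console output as JSON lines.
function toJsonLine(level, args) {
  return JSON.stringify({
    level,
    message: args.map((a) => (a instanceof Error ? a.stack : String(a))).join(" "),
    time: new Date().toISOString(),
  })
}

// Replace each console method with a JSON-emitting wrapper.
for (const level of ["log", "info", "warn", "error"]) {
  const original = console[level].bind(console)
  console[level] = (...args) => original(toJsonLine(level, args))
}
```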
-
Thanks for the answer; we are indeed using the preload approach for the moment.
-
@leerob Sentry's nextjs plugin prepends a preload of its own. The built-in logger can be grabbed with:

```js
const nextBuiltInLogger = require("next/dist/build/output/log")
```

When it starts to monkeypatch the built-in logger, the original functions have already been captured elsewhere. I've currently got a few "hacks" that require a custom server, and I've been looking at moving them back into the built-in server, but I can't get the hacks patched in the way described in that gist. Any thoughts as to why, and whether this should be considered a bug? It would be nice to have a single preload for Sentry + DD + logs rather than 2.
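For anyone attempting the same, the patching step conceptually looks like the sketch below. Note that `next/dist/build/output/log` is an internal, unstable module, and the method names listed are assumptions about what it has exported in some versions:

```javascript
// Wrap each method of a logger-like module so a callback observes every call.
// `loggerModule` would be require("next/dist/build/output/log") in practice
// (internal module; its shape can change between Next.js releases).
function patchLogger(loggerModule, onLog) {
  for (const name of ["wait", "error", "warn", "ready", "info", "event"]) {
    if (typeof loggerModule[name] !== "function") continue
    const original = loggerModule[name]
    loggerModule[name] = (...args) => {
      onLog(name, args)        // forward to Sentry / Datadog / structured logger
      return original(...args) // keep Next's own output intact
    }
  }
  return loggerModule
}
```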
-
I did read it, Lee. And to make sure I understood things correctly, I tried out the Custom Server example locally.
To validate the above, I did this:
And it looks like it is indeed generating static files. So I hope you'd understand why I am confused when "Using a custom server ejects from the Next.js defaults and de-optimizes your application from things like Automatic Static Optimization"? 😅
-
I'm also curious to know how this works; the current documentation does not explain it. Let me give two examples here so that someone can clear this up for me and others.

**Case 1**

I only need a custom server to add an extra line that logs my route to the monitoring service, and let Next.js handle the rendering:

```js
// server.js
const { createServer } = require('http')
const { parse } = require('url')
const next = require('next')

const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  createServer((req, res) => {
    const parsedUrl = parse(req.url, true)
    // log to monitoring service
    logToMonitoringService(parsedUrl.pathname)
    // does this disable ASO ?
    handle(req, res, parsedUrl)
  }).listen(3000, (err) => {
    if (err) throw err
    console.log('> Ready on http://localhost:3000')
  })
})
```

**Case 2**

I want to render some of the routes manually, and let Next.js handle the other routes:

```js
// server.js
const { createServer } = require('http')
const { parse } = require('url')
const next = require('next')

const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  createServer((req, res) => {
    const parsedUrl = parse(req.url, true)
    const { pathname, query } = parsedUrl
    if (pathname === '/a') {
      // this should disable ASO, since we're rendering manually
      app.render(req, res, '/a', query)
    } else {
      // this shouldn't disable ASO, since we're letting next js handle it
      handle(req, res, parsedUrl)
    }
  }).listen(3000, (err) => {
    if (err) throw err
    console.log('> Ready on http://localhost:3000')
  })
})
```

As far as I understand, automatic static optimization should be disabled only for the manual routes in case 2. Because when I run
-
Further, I want to mention the planned solution for this, which is OpenTelemetry support in Next.js itself. That will be the intended solution for this 👍
-
@leerob, this is great news! 🙌 Is there anything the community can do to support this effort?
-
It sounds like there's no appetite to merge this proposal? #22587
One additional thing we added ourselves was tying the
I'd love to help move that PR along until we get OpenTelemetry, especially with it being Hacktoberfest. But it sounds like that appetite is low, considering #4808 (comment)
-
I found a pretty simple workaround for this when using a custom server.

```ts
import nextJs from "next"

async function createNextServerWithCustomLogging() {
  const server = nextJs({})
  const innerServer = (await (server as any).getServer()) as any
  if (!(innerServer.logError instanceof Function)) throw Error("Assertion fail")
  innerServer.logError = (err: Error) => {
    console.log("Custom handler", err)
  }
  return server
}
```
-
Hey team, I'm not excited about preloading or patching global console methods, so I'm happy to see that the plan is to adopt OpenTelemetry. One thing that would make it easier is knowing roughly when initial OpenTelemetry support is supposed to land, even on a super coarse scale like "this year" or "maybe next year". I'm not asking for a commitment, just a rough idea. :) 🙏
-
Thanks for the shout here @trevoro. I do have an update: we have been actively working on the OpenTelemetry integration into Next.js. I do want to note, however, that this will require using
To reiterate, using
-
Following. Trying to determine how best to get a dd-trace tracer working. I know there is a server-preloading method, using a start-script flag to load a file that instantiates dd-trace first, but it seems like there should be a clearer way to configure telemetry than this, especially for enterprise Vercel orgs.
Should I use the preloading method for the time being, or is there a better way?
-
Following up here: OpenTelemetry support has already landed, and we're going to continue working on observability next. https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry
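Per the linked docs, enabling it starts with an `instrumentation.ts` file at the project root; a minimal sketch using the `@vercel/otel` helper (the service name is a placeholder):

```typescript
// instrumentation.ts — registers OpenTelemetry for a Next.js app.
// Requires the @vercel/otel package; 'my-next-app' is a placeholder name.
import { registerOTel } from '@vercel/otel'

export function register() {
  registerOTel('my-next-app')
}
```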
-
Feature request
Is your feature request related to a problem? Please describe.
Using a custom server interferes with other logging in my app.
Describe the solution you'd like
A way to silence `next`, and an interface to tap into `ready`, `recompile`, and `error` events.