Replies: 4 comments 16 replies
-
I have the same issue. It is annoying that it appears in Google Search Console.
-
Did you find any solution for this?
-
Are there any updates on this issue? Did you solve it?
-
In short, you can't, because it's baked into the RSC behavior. There's pending work from the React team to refactor how that works and remove the `_rsc` query parameter.
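Until that lands, one partial mitigation (a sketch, not an official fix) is to keep Googlebot from crawling the `_rsc` variants at all, using the wildcard matching Googlebot supports in robots.txt:

```
# robots.txt (sketch) — Googlebot supports * wildcards in Disallow paths.
# This blocks crawling of any URL that carries an _rsc query parameter.
User-agent: Googlebot
Disallow: /*?*_rsc=
```

Note the trade-off: blocking crawl does not retroactively remove entries already reported in Search Console, and URLs that are linked elsewhere can still show up as "indexed, though blocked by robots.txt".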
-
Summary
I've been using Next's `Link` component all over my code, as per good practices and recommendations. I've also written all of my `page.tsx` components to accommodate SSR, e.g. moving client-side components down the tree to maximize performance. Unfortunately, as I noticed in production and later read in the docs, the `<Link/>` component fetches a given page by appending `_rsc=XXXXX` to the href, where `XXXXX` is some kind of id, regardless of whether it prefetches the page or not. This `_rsc` suffix tells Next.js to fetch an optimized React Server Component payload of the given page that was created during the production build. This behavior greatly confuses Google Search Console: I have a lot of "duplicate page with the same canonical URL" entries, when in reality their canonical URLs are just `https://www.domain.com/page1` etc. This happens because Googlebot scans pages and follows links like a normal browser would. I couldn't find any clue on how to prevent this behavior, so I replaced all `<Link/>` components with standard `<a>` tags. Is there any way to use the `<Link/>` component, retain SSR, and get rid of this `_rsc` suffix? Or to tell Googlebot that a `_rsc` URL refers to the same page?
Additional information
No response
Example
No response
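One thing that may help Search Console fold the `_rsc` variants into the right URL, without giving up `<Link/>`, is declaring an explicit canonical URL per page via Next.js's Metadata API (`alternates.canonical`). A minimal sketch, reusing the example domain from above (in a real app you would type the object as `Metadata` imported from `"next"`):

```typescript
// Hypothetical file: app/page1/page.tsx
// Exporting `metadata` with alternates.canonical tells crawlers which
// URL is authoritative for this page, so query-string variants such as
// ?_rsc=XXXXX should be consolidated under it.
export const metadata = {
  alternates: {
    canonical: "https://www.domain.com/page1",
  },
};

export default function Page1() {
  return null; // page content omitted in this sketch
}
```

This renders as `<link rel="canonical" href="https://www.domain.com/page1" />` in the page head; it is a hint to Google rather than a guarantee, but it directly addresses the "tell Googlebot this refers to the same page" half of the question.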