[pull] canary from vercel:canary#143
pull[bot] wants to merge 1146 commits into LeoAnt02:canary from vercel:canary
…91478) See: GHSA-mq59-m269-xvcx and [16.1.7](https://github.com/vercel/next.js/releases/tag/v16.1.7) --------- Co-authored-by: Sebastian "Sebbie" Silbermann <sebastian.silbermann@vercel.com>
Startup loading of APP_PAGE PPR entries seeded the in-memory cache from disk using a size estimate that only counted html and the route-level RSC payload. Empty-shell prerenders have 0-byte html and do not persist a monolithic .rsc file; the reusable data is instead stored in postponed metadata and per-segment .segment.rsc files. That made the computed size 0, caused LRUCache to reject the seeded entry, and forced the first request after next start to rerun the prerender and rewrite the build artifacts.

Account for postponed state and segment buffers when sizing APP_PAGE cache entries so that empty-shell prerenders can warm the startup cache the same way contentful shells do.

Add a production regression test that asserts both routes are emitted as build artifacts and that next start does not rewrite them on the first request. The regression test skips deployment mode because it verifies self-hosted behavior by inspecting local .next artifact mtimes, which is not a portable signal in deployment mode. When deployed through an adapter, the local Next.js process is not expected to handle entries through the response cache reading from the `.next` folder anyway.
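A minimal sketch of the sizing fix described above, with hypothetical entry and field names (the real incremental-cache types differ):

```typescript
// Hypothetical shapes; the real APP_PAGE incremental-cache value types differ.
interface AppPageEntry {
  html: Buffer
  rscData?: Buffer // route-level .rsc payload (absent for empty-shell PPR)
  postponed?: string // serialized postponed state used to resume the shell
  segmentData?: Map<string, Buffer> // per-segment .segment.rsc payloads
}

// Size an entry for the LRU cache. Counting only html + rscData yields 0
// for empty-shell prerenders, which the LRU rejects; also counting the
// postponed state and segment buffers keeps the estimate positive.
function estimateAppPageSize(entry: AppPageEntry): number {
  let size = entry.html.byteLength + (entry.rscData?.byteLength ?? 0)
  size += entry.postponed ? Buffer.byteLength(entry.postponed) : 0
  if (entry.segmentData) {
    for (const buf of entry.segmentData.values()) size += buf.byteLength
  }
  return size
}

// Empty-shell PPR entry: 0-byte html, no monolithic .rsc file.
const emptyShell: AppPageEntry = {
  html: Buffer.alloc(0),
  postponed: JSON.stringify({ kind: 'postponed' }),
  segmentData: new Map([['/_index', Buffer.from('segment payload')]]),
}
```

With the old estimate this entry sized to 0 and was rejected; with the combined estimate it is accepted and the startup cache is warmed.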
### What?
Fix the `description()` method of `DataUriSource` in Turbopack to include the full `data:` URI prefix in the display string.
**Before:** `data URI content (svg+xml;charset=utf8,...)`
**After:** `data URI content (data:svg+xml;charset=utf8,...)`
### Why?
The previous implementation only included the raw data content starting from the media type, omitting the `data:` scheme prefix. This made the description string look malformed and not recognizable as a data URI — especially in error messages where this description is shown to developers.
The fix reconstructs the proper data URI format (`data:<media_type>[;<encoding>],<data>`) before truncating it to 50 characters, so the displayed prefix is a valid (partial) data URI that developers can recognize and understand.
### How?
In `turbopack/crates/turbopack-core/src/data_uri_source.rs`, the `description()` function now builds the full data URI string first (`data:{media_type}{sep}{encoding},{data}`), then takes the first 50 characters of that. This ensures the `data:` scheme prefix is always present in the truncated display.
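The truncation order described above can be sketched in TypeScript (the actual implementation is Rust in `data_uri_source.rs`; names here are illustrative):

```typescript
// Sketch of the described fix: build the full data URI first, then
// truncate to 50 characters, so the `data:` prefix always survives.
function describeDataUri(mediaType: string, encoding: string, data: string): string {
  const sep = encoding ? ';' : ''
  // Reconstruct the proper data URI format: data:<media_type>[;<encoding>],<data>
  const uri = `data:${mediaType}${sep}${encoding},${data}`
  // Truncating the reconstructed URI (not the raw content) keeps a valid
  // partial data URI in the display string.
  return `data URI content (${uri.slice(0, 50)})`
}
```

Truncating the raw content instead, as before, would drop the `data:` scheme from the display.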
Closes NEXT-
### Why?
TypeScript fails to infer the generic `P` from `Omit<P, 'children'>` in
the fallback function parameter.
```
Argument of type '(props: { title: string; }, { error, reset, unstable_retry }: ErrorInfo) => Element' is not assignable to parameter of type '(props: Omit<UserProps, "children">, errorInfo: ErrorInfo) => ReactNode'.
Types of parameters 'props' and 'props' are incompatible.
Property 'title' is missing in type 'Omit<UserProps, "children">' but required in type '{ title: string; }'.ts(2345)
catch-error-wrapper.tsx(7, 12): 'title' is declared here.
```
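A minimal TypeScript illustration of the limitation (all names hypothetical): `Omit<P, 'children'>` is not an inference position, so `P` has to be inferable from somewhere else, e.g. a value parameter:

```typescript
type ErrorInfo = { error: Error; reset: () => void }

// Problematic shape: P appears only inside Omit in the callback position.
// Omit is an alias for Pick<P, Exclude<keyof P, K>> and is not invertible,
// so TypeScript cannot reconstruct P from an argument of this type alone.
declare function withFallbackBroken<P>(
  fallback: (props: Omit<P, 'children'>, info: ErrorInfo) => string
): (props: P) => string

// Workaround shape: infer P from a value parameter, so the callback's
// Omit<P, 'children'> is merely checked rather than used for inference.
function withFallback<P extends { children?: unknown }>(
  props: P,
  fallback: (props: Omit<P, 'children'>, info: ErrorInfo) => string
): string {
  const { children: _children, ...rest } = props
  return fallback(rest as Omit<P, 'children'>, {
    error: new Error('boom'),
    reset: () => {},
  })
}

const out = withFallback(
  { title: 'hello', children: null },
  ({ title }) => `fallback for ${title}`
)
```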
Root params (e.g. `import { lang } from 'next/root-params'`) can now be read inside `"use cache"` functions. The read root param values are automatically included in the cache key so that different root param combinations produce separate cache entries.
Since which root params a cache function reads is only known after execution, the cache key is reconciled post-generation. When root params are read, a two-key scheme is used: the full entry is stored under a specific key (coarse key + root param suffix), and a lightweight redirect entry is stored under the coarse key. The redirect entry's tags encode the root param names using the pattern `_N_RP_<rootParamName>` (e.g. `_N_RP_lang`), following the convention of existing internal tags like `_N_T_<pathname>` (e.g. `_N_T_/dashboard`) for implicit route tags. This allows a cold server to resolve the specific key on the first request after restart. An in-memory map (`knownRootParamsByFunctionId`) provides a fast path for subsequent invocations. When no root params are read, the full entry is stored directly under the coarse key with no redirect involved.
The in-memory map grows monotonically — if a function conditionally reads different root params across invocations, the set accumulates all observed param names. The redirect entry's tags are built from this combined set, ensuring cold servers always resolve the most complete specific key.
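The two-key bookkeeping described above can be sketched as follows (identifiers are hypothetical simplifications of the real implementation):

```typescript
// Maps "use cache" function ids to the set of root param names observed
// across invocations. The set only grows, per the monotonicity described above.
const knownRootParamsByFunctionId = new Map<string, Set<string>>()

// The full entry lives under "<coarseKey>:<sorted root param suffix>";
// with no root params read, the coarse key is used directly.
function specificKey(coarseKey: string, rootParams: Record<string, string>): string {
  const names = Object.keys(rootParams).sort()
  if (names.length === 0) return coarseKey
  const suffix = names.map((n) => `${n}=${rootParams[n]}`).join(',')
  return `${coarseKey}:${suffix}`
}

// Tags on the redirect entry encode which params the function read,
// using the `_N_RP_<rootParamName>` pattern.
function redirectTags(functionId: string): string[] {
  const names = knownRootParamsByFunctionId.get(functionId) ?? new Set<string>()
  return [...names].sort().map((n) => `_N_RP_${n}`)
}

// After generation, record the params actually read during execution.
function recordReadParams(functionId: string, names: string[]): void {
  const set = knownRootParamsByFunctionId.get(functionId) ?? new Set<string>()
  for (const n of names) set.add(n)
  knownRootParamsByFunctionId.set(functionId, set)
}
```

A cold server reads the redirect entry's `_N_RP_*` tags, recovers the param names, and rebuilds the specific key from the current request's root param values.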
The two-key scheme only applies to the cache handler. The Resume Data Cache (RDC) always uses the coarse key because each page gets its own isolated RDC instance, so root params are fixed within a single RDC and no disambiguation is needed. When an RDC entry is found during resume, it seeds `knownRootParamsByFunctionId` so that subsequent cache handler lookups can use the specific key directly.
Reading root params inside `unstable_cache` still throws. Reading root params inside `"use cache"` nested within `unstable_cache` throws with a specific error message explaining the limitation.
Alternatives considered: extending the `CacheEntry` interface (would be a breaking change for custom cache handlers); encoding root param metadata in the stream via a wrapper object, a sentinel byte, or Flightception (runtime overhead of stream manipulation on every cache read); and deferring `cacheHandler.set` until after generation (breaks the cache handler's pending-set deduplication for concurrent requests).
## Summary

- Fixes flaky test "per-page value overrides global staleTimes.dynamic regardless of direction" introduced by #91437
- The test was flaky because `browser.back()` restored accordion state from BFCache, causing previously-opened `LinkAccordion` links to be immediately visible. This triggered uncontrolled re-prefetches outside the `act` scope. When the `IntersectionObserver` fired inside a subsequent `act` scope (after clock advancement), stale data would trigger a prefetch that violated the `no-requests` assertion.
- Fix: instead of navigating back to a previously visited page, navigate forward to fresh "hub" pages with their own `LinkAccordion` components. Since these are never-visited pages, accordions start closed and no uncontrolled prefetches are triggered.
- General principle: when using the router `act` test utility, always use `LinkAccordion` to control when prefetches happen. Prefetches should only be triggered inside an `act` scope by toggling an accordion, never by navigating back to a page where links are already visible.

## Test plan

- [x] Ran the full test file 4 times locally — all 5 tests pass consistently
Improve the RSC error messages to be more comprehensive. x-ref: #89688 (comment) --------- Co-authored-by: Claude Haiku 4.5 <noreply@anthropic.com>
…91454) When a `"use cache"` entry is newly generated during a prerender (`prerender` or `prerender-runtime`), `collectResult` defers propagation of cache life and tags to the outer context. This is because the entry might later be omitted from the final prerender due to short expire or stale times, and omitted entries should not affect the prerender's cache life. However, when a cache handler returns a hit for an existing entry, `propagateCacheEntryMetadata` was called unconditionally, without the same deferral logic. This meant that short-lived cache entries retrieved from the cache handler could propagate their cache life to the prerender store, even though they would later be omitted from the final render. This inconsistency is currently not observable because runtime prefetches use a prospective and final two-store architecture (see `prospectiveRuntimeServerPrerender` and `finalRuntimeServerPrerender` in `app-render.tsx`). The cache handler hit propagation corrupts the prospective store, but the response is produced from the final store, which reads from the resume data cache with correct stale and expire checks. Static prerenders have a similar two-phase architecture that masks the issue. Because of this, there is no test case that can observe the incorrect behavior, but the fix avoids confusion and prevents the inconsistency from becoming a real bug if the architecture changes. This change extracts a `maybePropagateCacheEntryMetadata` function that encapsulates the conditional propagation logic and is now called from both the generation path (inside `collectResult`) and the cache handler hit path. The resume data cache read path continues to call `propagateCacheEntryMetadata` unconditionally, since it runs in the final render phase after short-lived entries have already been filtered out.
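A simplified sketch of the conditional propagation described above (shapes are hypothetical; only the function names come from the PR text):

```typescript
// Hypothetical simplified shapes for the prerender store's metadata.
interface CacheEntryMeta { stale: number; expire: number; tags: string[] }
interface OuterContext { minStale: number; minExpire: number; tags: Set<string> }

// Unconditional propagation: fold the entry's cache life and tags into
// the outer context. Still used on the resume-data-cache read path,
// which runs after short-lived entries were already filtered out.
function propagateCacheEntryMetadata(ctx: OuterContext, meta: CacheEntryMeta): void {
  ctx.minStale = Math.min(ctx.minStale, meta.stale)
  ctx.minExpire = Math.min(ctx.minExpire, meta.expire)
  for (const t of meta.tags) ctx.tags.add(t)
}

// Conditional variant shared by the generation path and the cache
// handler hit path: during a prerender, an entry whose expire time is
// below the omission threshold may be dropped from the final output,
// so its metadata must not leak into the prerender's cache life.
function maybePropagateCacheEntryMetadata(
  ctx: OuterContext,
  meta: CacheEntryMeta,
  isPrerender: boolean,
  omitThreshold: number
): void {
  if (isPrerender && meta.expire < omitThreshold) return
  propagateCacheEntryMetadata(ctx, meta)
}
```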
## What?

This PR improves CSS parsing error handling in Turbopack by making parse errors recoverable instead of fatal.

1. **Extracting CSS parsing logic** into a new `parse_css_stylesheet` function that handles both parsing and CSS module validation in one place
2. **Fixing error recovery** when CSS minification fails — instead of immediately returning an unparsable result, the code now re-parses the original CSS to recover valid styles
3. **Improving error reporting** by renaming variables for clarity (`source` → `issue_source`) and ensuring errors are properly emitted before attempting recovery
4. **Adding comprehensive tests** for CSS parse error scenarios with both snapshot tests and e2e tests

The snapshot test fixture exercises several CSS features to confirm they survive error recovery:

- Basic selectors (`.before`, `.after`) — valid rules are preserved when surrounded by invalid ones
- `::highlight(highlight-test)` — CSS Custom Highlight API pseudo-element
- `scroll-marker-group`, `::scroll-marker`, `::scroll-marker:target-current` — CSS scroll marker pseudo-elements (new spec)
- `oklch()` color values — converted to `#rrggbbaa` + `lab()` fallbacks for broader browser compatibility

## Why?

Previously, when CSS minification encountered an error, the entire stylesheet would be marked as unparsable, losing any valid CSS that could have been recovered. This meant `turbopackIgnoreIssue` could not suppress the errors since the module produced no output regardless.

Now, when lightningcss recovers from parse errors (collecting them as warnings), issues are still emitted but processing continues with the recovered stylesheet. Only truly fatal parse errors (where `StyleSheet::parse` returns `Err`) remain non-recoverable.

The refactoring also consolidates CSS validation logic into a single function, reducing code duplication and making the error handling flow clearer.

## How?

- Created `parse_css_stylesheet()` function that wraps `StyleSheet::parse()` and applies CSS module validation
- Modified error handling in `process_content()` to call the new function and recover from minification errors by re-parsing
- Changed early returns to error emission followed by recovery attempts
- Added test fixtures and e2e tests to verify CSS parse error recovery works correctly
- Added snapshot tests showing expected output when CSS contains parse errors (valid rules are preserved, invalid ones are skipped)
- Skipped CSS parse error recovery e2e tests for webpack (webpack has its own CSS pipeline)

## Test Plan

- Added e2e tests in `css-parse-error-recovery.test.ts` that verify:
  - Pages with CSS parse errors still render when using `turbopackIgnoreIssue` config
  - Parse errors are properly reported in CLI output
- Added snapshot tests showing CSS output with parse errors, including modern CSS features (`::highlight`, `scroll-marker-group`, `oklch()` color conversion)
- Existing CSS tests continue to pass
…1376) Use `Bytes::from_owner(napi::Buffer)` to eliminate the `Buffer → Vec<u8>` memcpy on the JS→Rust path in the worker pool. `bytes::Bytes::from_owner` (>=1.9.0) wraps any `T: AsRef<[u8]> + Send + 'static` without copying — the data pointer comes from `T::as_ref()` and `T` is dropped when the `Bytes` is dropped. Since napi@2's `Buffer` implements `AsRef<[u8]>`, this gives true zero-copy. Also unifies the `Operation` trait to use `Bytes` for both `send` and `recv`. This also reduces memory consumption: without zero-copy, the data returned by the loader would be held twice until it was garbage-collected on the JS side or dropped on the Rust side.
…doc (#91350) This is implemented as a dummy module that includes a markdown file. Clap does something similar in their rustdocs. I had Claude review the content for accuracy. It made a few modifications. I added rustdoc links by hand. I converted the mermaid diagram to excalidraw and uploaded the image to vercel blob. Copied from: https://turbopack-rust-docs.vercel.sh/turbopack/layers.html Rendered: 
`snapshot_issues` emits effects when `UPDATE=1`
In many cases, these are just modern selectors that aren't supported yet.
…91519) Follow-up to #91189 which added support for accessing root params inside `generateStaticParams`. Previously, if a `generateStaticParams` function tried to read a root param that hasn't been defined yet — either because the current segment is supposed to define it, or because a child segment defines it — `getRootParam` would silently return `undefined`. This is especially problematic because #91019 will type root param getters as `Promise<string>` without `undefined`, making this a silent type safety violation. This change makes `getRootParam` throw an explicit error with the `generate-static-params` work unit store when the requested param is not present in `rootParams`.
**Current:** 1. #91487 **Up next:** 2. #91488 3. #89297 --- Prefetch responses include metadata (in the Flight stream sense, not HTML document metadata) that describes properties of the overall response — things like the stale time and the set of params that were accessed during rendering. Conceptually these are like late HTTP headers: information that's only known once the response is complete. Since we can't rely on actual HTTP late headers being supported everywhere, we encode this metadata in the body of the Flight response. The mechanism works by including an unresolved thenable in the Flight payload, then resolving it just before closing the stream. On the client, after the stream is fully received, we unwrap the thenable synchronously. This synchronous unwrap relies on the assumption that the server resolved the thenable before closing the stream. The server already buffers prefetch responses before sending them, so the resolved thenable data is always present in the response. However, HTTP chunking in the browser layer can introduce taskiness when processing the response, which could prevent Flight from decoding the full payload synchronously. The existing code includes fallback behavior for this case (e.g. treating the vary params as unknown), so this doesn't fix a semantic issue — it strengthens the guarantee so that the fallback path is never reached. To do this, we buffer the full response on the client and concatenate it into a single chunk before passing it to Flight. A single chunk is necessary because Flight's `processBinaryChunk` processes all rows synchronously within one call. Multiple chunks would not be sufficient even if pre-enqueued: the `await` continuation from `createFromReadableStream` can interleave between chunks, causing promise value rows to be processed after the root model initializes, which leaves thenables in a pending state. 
Since the server already buffers these responses and they complete during a prefetch (not during a navigation), this is not a performance consideration. Full (dynamic) prefetches are not affected by this change. These are streaming responses — even though they are cached, they are a special case where dynamic data is treated as if it were cached. They don't need to be buffered on either the server or the client the way normal cached responses are.
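The client-side buffering step can be sketched as follows: collect every chunk, then hand Flight one contiguous buffer (a hedged sketch, not the actual Next.js code):

```typescript
// Buffer a fully-received prefetch response into a single Uint8Array.
// Flight's processBinaryChunk handles all rows synchronously only when
// given one contiguous chunk, so multiple pre-enqueued chunks are not enough.
async function bufferIntoSingleChunk(
  stream: ReadableStream<Uint8Array>
): Promise<Uint8Array> {
  const reader = stream.getReader()
  const chunks: Uint8Array[] = []
  let total = 0
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    chunks.push(value)
    total += value.byteLength
  }
  // Concatenate into one contiguous buffer.
  const combined = new Uint8Array(total)
  let offset = 0
  for (const chunk of chunks) {
    combined.set(chunk, offset)
    offset += chunk.byteLength
  }
  return combined
}
```

The combined buffer would then be wrapped in a single-chunk stream (or enqueued once) before being passed to Flight's decoder.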
**Previous:** 1. #91487 **Current:** 2. #91488 **Up next:** 3. #89297 --- When a prefetch response includes vary params, the segment cache rekeys entries to a more generic path based on which params the segment actually depends on. Previously, the rekeying only happened when vary params were provided. Now that vary params are tracked for more response types (and eventually will always be tracked), entries are rekeyed in more cases than before. This exposed a potential race condition: the scheduler would capture a vary path at scheduling time and upsert the entry at that path when the fetch completed. But the fetch functions themselves rekey entries to a different (more generic) path upon fulfillment. The deferred upsert could then move the entry back to the less generic path, undoing the rekeying. To fix this, move the upsert logic inline into the fetch functions that fulfill entries, rather than deferring it to an external callback. This removes the race condition, simplifies the model, and reduces implementation complexity. The previous structure existed to avoid the rekeying cost when vary params weren't available, but rekeying is inexpensive and not worth the added indirection. The upsert function itself already handles concurrent writes by comparing fetch strategies and checking whether the new entry provides more complete data than any existing entry. So it's safe to always call it inline — whichever entry wins will be the most complete one.
**Previous:** 1. #91487 2. #91488 **Current:** 3. #89297 --- Most of the vary params infrastructure was already implemented in previous PRs for static prerenders. This wires up the remaining pieces for runtime prefetches — creating the accumulator, setting it on the prerender store, and resolving it before abort — and adds additional test cases covering empty/full vary sets, searchParams, metadata, and per-segment layout/page splits with runtime prefetching.
…g and minor updates from claude (#91472) Copied from https://turbopack-rust-docs.vercel.sh/turbopack/chunking.html I had claude review it for correctness against the code. It found a few things where the names weren't correct or a trait was actually a struct, and it grouped things into sections. It also added some more details to a few sections, like the dev vs production chunking. I think this is a reasonable improvement. I did a second pass with claude where I had it remove things it 1:1 duplicated with the existing rustdocs, and I had it add intradoc links. Rendered: 
https://github.com/vercel/next.js/actions/runs/23170738194/job/67327462789#step:36:318 is holding up CI, including release. The cause is not the `catchError` PR; the test has simply discovered missing support in Next.js for Pages + React 17/18 + React Compiler. Will work on it in parallel.
## What?

Update the `qfilter` crate to an alpha version.

## Why?

Performance improvements in the quotient filter implementation used by turbo-persistence for SST file lookups.

## Benchmark Results

Benchmarks run on Apple Silicon (M-series), comparing `canary` baseline vs this branch. Focused on qfilter-sensitive paths: the filter itself, SST lookups, uncompacted multi-commit DB reads, and writes.

### qfilter microbenchmarks (direct filter operations)

| Benchmark | canary | branch | change |
|---|---|---|---|
| **Lookup (hit)** | | | |
| 1Ki entries | 22.10 ns | 14.63 ns | **-33.8%** |
| 10Ki entries | 22.21 ns | 15.41 ns | **-30.6%** |
| 100Ki entries | 24.96 ns | 17.47 ns | **-30.0%** |
| 1000Ki entries | 25.05 ns | 16.61 ns | **-33.7%** |
| **Lookup (miss)** | | | |
| 1Ki entries | 12.09 ns | 9.19 ns | **-24.0%** |
| 10Ki entries | 14.24 ns | 11.07 ns | **-22.3%** |
| 100Ki entries | 18.13 ns | 13.72 ns | **-24.3%** |
| 1000Ki entries | 13.36 ns | 10.00 ns | **-25.2%** |
| **Insert (build filter)** | | | |
| 1Ki entries | 8.54 us | 9.64 us | +12.9% |
| 10Ki entries | 118.16 us | 120.97 us | +2.4% |
| 100Ki entries | 3.18 ms | 2.57 ms | **-19.2%** |
| 1000Ki entries | 20.88 ms | 18.31 ms | **-12.3%** |

**Summary:** Lookups 22-34% faster across all sizes. Insert is slightly slower at small sizes but 12-19% faster at larger sizes where it matters most.

### SST file lookup (filter + block read)

| Benchmark | canary | branch | change |
|---|---|---|---|
| **Hit (uncached)** | | | |
| 1Ki entries | 2.52 us | 2.52 us | ~0% |
| 10Ki entries | 3.60 us | 3.52 us | -2.2% |
| 100Ki entries | 3.77 us | 3.72 us | -1.3% |
| 1000Ki entries | 6.65 us | 6.55 us | -1.5% |
| **Miss (cached)** | | | |
| 1Ki entries | 124.50 ns | 121.13 ns | -2.7% |
| 10Ki entries | 168.18 ns | 161.96 ns | **-3.7%** |
| 100Ki entries | 195.97 ns | 189.80 ns | **-3.1%** |
| 1000Ki entries | 249.85 ns | 235.68 ns | **-5.7%** |

**Summary:** Small but consistent improvements across SST lookups. Miss/cached shows the clearest gains (filter is the primary code path for rejecting misses).

### DB-level reads (20 commits, uncompacted -- amplifies filter cost)

| Benchmark | canary | branch | change |
|---|---|---|---|
| **10.67Mi entries** | | | |
| hit/uncached | 3.49 us | 3.13 us | **-10.3%** |
| hit/cached | 1.65 us | 1.49 us | **-9.7%** |
| miss/uncached | 899.75 ns | 746.28 ns | **-17.1%** |
| miss/cached | 793.51 ns | 668.92 ns | **-15.7%** |
| **85.33Mi entries** | | | |
| hit/uncached | 11.24 us | 10.64 us | ~0% (noisy) |
| hit/cached | 4.53 us | 4.10 us | **-9.5%** |
| miss/uncached | 5.01 us | 4.36 us | **-13.0%** |
| miss/cached | 4.81 us | 4.02 us | **-16.4%** |

**Summary:** 10-17% faster reads on uncompacted DBs with many SSTs. Miss paths benefit most since the filter rejects without I/O.

### Write path (includes filter construction)

| Benchmark | canary | branch | change |
|---|---|---|---|
| 85.33Ki entries | 24.91 ms | 23.42 ms | -6.0% |
| 853.33Ki entries | 140.63 ms | 131.84 ms | **-6.3%** |
| 8.33Mi entries | 1.14 s | 1.07 s | **-6.1%** |

**Summary:** ~6% faster writes across all sizes.

## How?

Updated the `qfilter` dependency to a new alpha version with improved lookup and insert performance.
#91503)

### What?

Removes the `experimental.devCacheControlNoCache` config option entirely and hard-codes `no-cache, must-revalidate` as the dev server `Cache-Control` header value.

Previously the option controlled whether the dev server responded with:

- `no-store, must-revalidate` (default, `false`)
- `no-cache, must-revalidate` (opt-in, `true`)

This PR first flips the default to `true`, then removes the option altogether — making `no-cache, must-revalidate` unconditional in all dev code paths.

### Why?

`no-cache` is strictly better than `no-store` for the dev server:

- `no-cache` allows the browser to revalidate (conditional `If-None-Match`/`If-Modified-Since` requests), letting the server respond with `304 Not Modified` when nothing changed → faster page loads during development.
- `no-store` forces a full re-fetch every time, discarding valid cached responses.

Since `no-cache` is the correct behavior for all dev users, the toggle has no remaining value and can be removed to simplify the codebase.

### How?

**Two commits:**

1. `bcec825` — Flip the default from `false` → `true`; update tests/fixtures/manifest to reflect the new default.
2. `e6e919f` — Remove the option entirely:
   - Deleted `devCacheControlNoCache?: boolean` from `ExperimentalConfig` interface, `config-schema.ts` Zod schema, `defaultConfig`, `NextConfigRuntime`, and `getNextConfigRuntime()`.
   - Replaced all four conditional ternaries in `base-server.ts`, `router-server.ts`, `pages-handler.ts`, and `app-page.ts` with the hard-coded string `'no-cache, must-revalidate'`.
   - Deleted the `dev-cache-control-no-cache-disabled` test suite (was testing the `false` path which no longer exists).
   - Simplified the remaining `dev-cache-control-no-cache` test (removed experimental framing).
   - Synced `rspack-dev-tests-manifest.json`.

---------

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
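The revalidation behavior that makes `no-cache` cheaper than `no-store` can be sketched with a minimal conditional-request handler (illustrative only, not Next.js code): with `no-cache` the browser keeps the cached body and revalidates via `If-None-Match`, so an unchanged resource costs only a bodyless 304.

```typescript
// Minimal sketch of a dev-server response with conditional revalidation.
// With `no-cache, must-revalidate`, the browser may reuse its stored copy
// only after revalidating; a matching ETag lets the server answer 304.
function respond(
  currentEtag: string,
  ifNoneMatch: string | undefined,
  body: string
): { status: number; headers: Record<string, string>; body?: string } {
  const headers = {
    'Cache-Control': 'no-cache, must-revalidate',
    ETag: currentEtag,
  }
  if (ifNoneMatch === currentEtag) {
    // Nothing changed: no body needs to be re-sent.
    return { status: 304, headers }
  }
  return { status: 200, headers, body }
}
```

Under `no-store` the browser would never send `If-None-Match` at all, forcing a full 200 with body on every load.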
[Flakiness metric](https://app.datadoghq.com/ci/test/runs?query=test_level%3Atest%20%40git.repository.id%3A%22github.com%2Fvercel%2Fnext.js%22%20%40test.name%3A%28%22use-cache%20should%20cache%20results%20for%20cached%20functions%20imported%20from%20client%20components%22%20OR%20%22use-cache%20should%20cache%20results%20for%20cached%20functions%20passed%20to%20client%20components%22%29%20%40test.status%3A%22fail%22%20%40duration%3A%3E%3D1ns&agg_m=count&agg_m_source=base&agg_t=count&citest_explorer_sort=timestamp%2Casc&cols=%40test.status%2Ctimestamp%2C%40test.suite%2C%40test.name%2C%40duration%2C%40test.service%2C%40git.branch&currentTab=overview&eventStack=&fromUser=false&index=citest&start=1772476060657&end=1773772060657&paused=false) I'm not sure why these tests recently got much more flaky (they were somewhat flaky before), but they definitely were missing `retry` blocks around the assertions after clicking the reset button.
…handling (#92254)

### What?

Bug fixes and a refactoring in `turbo-tasks-backend` targeting stability issues that surface when filesystem caching is enabled:

1. **Preserve `cell_type_max_index` on task error** — when a task fails partway through execution, `cell_counters` only reflects the partially-executed state. Previously, `cell_type_max_index` was updated from these incomplete counters, which removed entries for cell types not yet encountered. This caused `"Cell no longer exists"` hard errors for tasks that still held dependencies on those cells. The fix skips the `cell_type_max_index` update on error, keeping it consistent with the preserved cell data (which already wasn't cleared on error). This bug manifested specifically with `serialization = "hash"` cell types (e.g. `FileContent`), where cell data is transient and readers fall back to `cell_type_max_index` to decide whether to schedule recomputation.
2. **Fix shutdown hang and cache poisoning for cancelled tasks** — three related fixes for tasks cancelled during shutdown:
   - `task_execution_canceled` now drains and notifies all `InProgressCellState` events, preventing `stop_and_wait` from hanging on foreground jobs waiting on cells that will never be filled.
   - `try_read_task_cell` bails early (before calling `listen_to_cell`) when a task is in `Canceled` state, avoiding pointless listener registrations that would never resolve.
   - Cancelled tasks are marked as session-dependent dirty, preventing cache poisoning where `"was canceled"` errors get persisted as task output and break subsequent builds. The session-dependent dirty flag causes the task to re-execute in the next session, invalidating stale dependents.
3. **Extract `update_dirty_state` helper on `TaskGuard`** — the "read old dirty state → apply new state → propagate via `ComputeDirtyAndCleanUpdate`" pattern was duplicated between `task_execution_canceled` and `task_execution_completed_finish`. The new `update_dirty_state` default method on `TaskGuard` handles both transitions (to `SessionDependent` or to `None`) and returns the aggregation job + `ComputeDirtyAndCleanUpdateResult` for callers that need post-processing (e.g. firing the `all_clean_event`).

### Why?

These bugs caused observable failures when using Turbopack with filesystem caching (`--cache` / persistent cache):

- `"Cell no longer exists"` panics/errors on incremental rebuilds after a task error.
- Hangs on `stop_and_wait` during dev server shutdown.
- Stale `"was canceled"` errors persisted in the cache breaking subsequent builds until the cache is cleared.

### How?

Changes are in `turbopack/crates/turbo-tasks-backend/src/backend/`:

**`mod.rs`:**

- Guard the `cell_type_max_index` update block inside `if result.is_ok()` to skip it on error, with a cross-reference comment to `task_execution_completed_cleanup` (which similarly skips cell data removal on error — the two must stay in sync).
- Move the `is_cancelled` bail in `try_read_task_cell` before the `listen_to_cell` call to avoid inserting phantom `InProgressCellState` events that would never be notified.
- In `task_execution_canceled`: switch to `TaskDataCategory::All` (needed for dirty state metadata access), notify all pending in-progress cell events, and mark the task as `SessionDependent` dirty via the new helper.
- In `task_execution_completed_finish`: replace ~77 lines of inline dirty state logic with a call to `task.update_dirty_state(new_dirtyness)`, preserving the `all_clean_event` post-processing and the `dirty_changed` variable under `#[cfg(feature = "verify_determinism")]`.

**`operation/mod.rs`:**

- Add `update_dirty_state` default method on `TaskGuard` trait (~60 lines), co-located with the existing `dirty_state()` reader. Takes `Option<Dirtyness>`, applies the transition, builds `ComputeDirtyAndCleanUpdate`, and returns `(Option<AggregationUpdateJob>, ComputeDirtyAndCleanUpdateResult)`.
- Add `ComputeDirtyAndCleanUpdateResult` to the public re-exports.

---------

Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
### What?
Adds a new `get_compilation_issues` MCP tool (and the underlying `projectGetAllCompilationIssues` NAPI method) that returns all compilation issues from all routes in a single call.

**New files:**

- `crates/next-napi-bindings/src/next_api/project.rs` — `project_get_all_compilation_issues` NAPI function + `get_all_compilation_issues_operation` / `get_all_compilation_issues_inner_operation` turbo-tasks operations
- `packages/next/src/server/mcp/tools/get-compilation-issues.ts` — MCP tool implementation
- `packages/next/src/server/mcp/tools/utils/format-compilation-issues.ts` — output formatter
- `test/development/mcp-server/mcp-server-get-compilation-issues.test.ts` — e2e test
- `test/development/mcp-server/fixtures/compilation-errors-app/` — fixture app with three routes (valid page, missing-module error, syntax error)

**Modified files:**

- `packages/next/src/build/swc/generated-native.d.ts` + `types.ts` + `index.ts` — expose the new NAPI method
- `packages/next/src/server/mcp/get-or-create-mcp-server.ts` — register the new tool, accept `getTurbopackProject` option
- `packages/next/src/server/dev/hot-reloader-turbopack.ts` — pass `project` to MCP middleware
- `packages/next/src/telemetry/events/build.ts` — add `'mcp/get_compilation_issues'` to the `McpToolName` union

### Why?

The existing MCP tools require a browser session (and thus a specific route to be rendered) to surface compilation errors. `get_compilation_issues` works without a browser session and covers all routes proactively — useful for AI coding agents that want to check for errors across the whole app before trying to render a page.

### How?

**NAPI layer:**

`project_get_all_compilation_issues` calls a two-level turbo-tasks operation pair:

1. `get_all_compilation_issues_inner_operation` — iterates all endpoint groups via `project.get_all_endpoint_groups(false)` and calls `endpoint_group.module_graphs().as_side_effect()` on each. This builds the module graph (resolution + transformation) for every entrypoint without chunking, emitting, or code generation. Issues are emitted as turbo-tasks collectables.
2. `get_all_compilation_issues_operation` — wraps the inner op in `strongly_consistent_catch_collectables` to harvest issues/diagnostics/effects, then returns an `OperationResult`.

> **Why not `project.whole_app_module_graphs()`?**
> `whole_app_module_graphs()` calls `drop_issues()` in development mode to prevent every per-route HMR subscription from seeing all global issues. Calling it here would return zero issues in dev. Per-endpoint `module_graphs()` calls don't have this suppression.

**Output formatting (`format-compilation-issues.ts`):**

The raw Turbopack wire types are transformed before being returned to MCP consumers:

- **StyledString → plain string**: `title`, `description`, and `detail` are `StyledString` union trees (recursive `{type, value}` objects). These are flattened to plain strings — `text`/`code`/`strong` variants return their `.value` directly, `line` joins with `""`, `stack` joins with `"\n"`.
- **ANSI codes stripped**: `codeFrame` is a pre-rendered string from the Rust NAPI layer that contains ANSI terminal colour codes. These are stripped via `next/dist/compiled/strip-ansi`.
- **1-indexed source positions**: Turbopack's `SourcePos` is 0-indexed for both `line` and `column`. The formatter adds `+1` to each so consumers get the conventional editor-style 1-indexed values.
- **Deduplication**: the same issue can surface from multiple endpoints during the module graph traversal. Issues are deduplicated by a `severity|filePath|title|startLine:startCol` key.
- **No diagnostics**: Turbopack diagnostics are internal telemetry (`EVENT_BUILD_FEATURE_USAGE` feature-adoption counters). They are not actionable for the user, so the `diagnostics` field is omitted entirely from the response.
**MCP tool (`get_compilation_issues`):**
Calls `project.getAllCompilationIssues()`, passes the issues through `formatCompilationIssues()`, and returns `{ issues: FormattedIssue[] }` as JSON. Works without a browser session.

**Test:**
A fixture app with three routes (valid `app/page.tsx`, `app/missing-module/page.tsx` importing a non-existent package, `app/syntax-error/page.tsx` with unclosed JSX) is used to verify:
1. The tool returns a result without any browser session
2. Module-not-found errors are detected
3. Syntax errors are detected
4. Issues include `severity`, `title`, `filePath` metadata — all as plain strings (not StyledString objects)

e2e tests: added — `test/development/mcp-server/mcp-server-get-compilation-issues.test.ts`
---------
Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Adds `agent-043-view-transitions`, a new eval that tests whether agents can add React View Transitions to a Next.js product gallery app
- Starter app is a simple product list/detail gallery with Suspense boundaries, no view transitions
- EVAL.ts checks 8 patterns: `viewTransition` config flag, `ViewTransition` imported from `react`, shared element `name` props, `transitionTypes` on Link, Suspense enter/exit animations, `default="none"` isolation, `prefers-reduced-motion` CSS, and `::view-transition-*` CSS pseudo-elements

## Test plan
- [x] Starter app builds with `next build`
- [x] All 8 EVAL.ts tests fail on unmodified starter (no false positives)
- [x] All 8 EVAL.ts tests pass on a correct golden solution
- [x] Golden solution builds with `next build` (viewTransition experiment detected)
- [x] Commit passes lint-staged (prettier + eslint)
`RcStr` has 3 internal forms:
* `INLINE` — small enough that the bytes are stored inline (7 bytes)
* `STATIC` — allocated by `rcstr!` and stored in a `static` item somewhere
* `DYNAMIC` — stored in an `Arc`

The nice thing is that `INLINE` and `STATIC` are not ref-counted, which optimizes clones. The unfortunate thing is that serialization round trips partially defeat these optimizations. To fix that we do two things:
1. decode to a `&str` instead of a `String` so we can defer the allocation to if/when we select the `DYNAMIC` format
2. store all static strings in an inventory-managed hashtable (lazily constructed) so we can match against them when deserializing

This allows us to avoid temporary allocations during decoding for the `INLINE` case and completely eliminate them for the `STATIC` case. The `DYNAMIC` case simply pays the cost of probing the static hash table and defers the allocation to slightly later. The hash probe is not free, but the table is small (~1500 entries), so this overhead should be negligible.

Also I needed to update our `wasm-bindgen` versions to work around wasm-bindgen/wasm-bindgen#4446
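The decode logic can be illustrated generically. This is a TypeScript sketch of the idea only — the real implementation is in Rust over borrowed `&str` slices, and all names here are hypothetical:

```typescript
// Maximum byte length storable inline (mirrors RcStr's 7-byte INLINE form).
const INLINE_MAX = 7

// Lazily built lookup table of known static strings, consulted during
// deserialization so matching inputs reuse the shared instance.
class StaticTable {
  private table: Map<string, string> | null = null
  constructor(private statics: string[]) {}
  lookup(s: string): string | undefined {
    if (this.table === null) {
      // Built on first probe, like the lazily constructed hashtable.
      this.table = new Map(this.statics.map((v) => [v, v]))
    }
    return this.table.get(s)
  }
}

type Decoded = { kind: 'inline' | 'static' | 'dynamic'; value: string }

// Decode against a borrowed string: INLINE for short strings, STATIC when
// the table matches, DYNAMIC (the only allocating path in Rust) otherwise.
function decode(borrowed: string, statics: StaticTable): Decoded {
  if (new TextEncoder().encode(borrowed).length <= INLINE_MAX) {
    return { kind: 'inline', value: borrowed }
  }
  const interned = statics.lookup(borrowed)
  if (interned !== undefined) return { kind: 'static', value: interned }
  return { kind: 'dynamic', value: borrowed }
}
```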
…ror (#92282)

Note: the change is mostly whitespace. Recommend reviewing w/o whitespace [here](https://github.com/vercel/next.js/pull/92282/changes?w=1).

For App Router pages using time-based ISR, a stale cached response can be returned before background revalidation finishes. If that background revalidation later throws, the error does not bubble back through the normal top-level `app-page` request catch. Instead, the response cache has already resolved the request and later logs the failure internally.

When an error happens while rendering an App Router page and the entry is stale, we now explicitly await `routeModule.onRequestError(...)` before rethrowing. This copies similar handling in Pages Router:
https://github.com/vercel/next.js/blob/daca04d09bf9aaee9e1c63324166985b643e9844/packages/next/src/server/route-modules/pages/pages-handler.ts#L438-L460
and route handlers:
https://github.com/vercel/next.js/blob/daca04d09bf9aaee9e1c63324166985b643e9844/packages/next/src/build/templates/app-route.ts#L407-L409
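The flow described above can be sketched with hypothetical names (the real hook and wiring live in Next.js's app-page route module; this is a shape-only illustration):

```typescript
// Hypothetical minimal route-module shape for illustration.
type RouteModule = {
  onRequestError: (req: unknown, err: unknown, ctx: { isStale: boolean }) => Promise<void>
}

// When rendering a stale entry throws, report the error through the route
// module's onRequestError hook before rethrowing — the response cache has
// already resolved the request, so the top-level catch won't see the error.
async function renderWithErrorReporting(
  routeModule: RouteModule,
  req: unknown,
  isStale: boolean,
  render: () => Promise<string>
): Promise<string> {
  try {
    return await render()
  } catch (err) {
    if (isStale) {
      // Explicitly await so the report completes before we rethrow.
      await routeModule.onRequestError(req, err, { isStale })
    }
    throw err
  }
}
```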
## What?
Fixes the case where having React 18 installed fails the build because react-dom gets imported from stream-ops. #90500 moved `streamToUint8Array` from node-web-streams-helper to stream-ops. This moves the function back to where it was before.
We recently re-imaged the self-hosted Linux runners and it now hits `ERR_PNPM_EXDEV` when pnpm copies packages between its default store and the temp stats workspace. Keeping both under the same temp root avoids the cross-filesystem copy failure.
#92320) This is causing OOMs in some applications running with a persistent cache See discussion: https://vercel.slack.com/archives/C03EWR7LGEN/p1775159630054759 The issue appears to be invalidating the chunk graph in an odd way that causes us to allocate an ~infinite number of error strings
We don't need to deploy a single image component example every time `build_and_deploy` runs. If something about this example changes, it can be manually re-deployed.
### What?
Add a `@deprecated` JSDoc annotation to `experimental.useCache` in `config-shared.ts`, pointing to `cacheComponents: true` as the successor.

### Why?
`experimental.cacheComponents` already has `@deprecated use top-level cacheComponents instead`, but `experimental.useCache` has no deprecation notice despite `cacheComponents: true` being the documented successor. This causes both developers and AI coding agents to use the old flag, since nothing in the type definitions signals it's deprecated. IDEs and agents rely on JSDoc annotations for guidance.

### How?
Replace the existing JSDoc comment on `experimental.useCache` with `@deprecated use top-level cacheComponents instead`, matching the existing pattern on `experimental.cacheComponents`.

## PR checklist (Fixing a bug)
- One-line JSDoc change, no tests needed

Made with [Cursor](https://cursor.com)
### What?
Improves the error message shown when a user tries to start `next dev` while another dev server is already running in the same directory.

### Why?
Previously, the error message only suggested killing the existing process (`Run kill <pid> to stop it.`). This wasn't the best advice — often the user just wants to access the already-running dev server rather than kill it and start a new one.

### How?
Updated the error message in `packages/next/src/build/lockfile.ts` to present both options:
1. Access the existing server at its URL
2. Kill the process if they want to start a new one

**Before:**

```
✖ Another next dev server is already running.
- Local: http://localhost:3000
- PID: 61479
- Dir: /path/to/project
- Log: .next/dev/logs/next-development.log
Run kill 61479 to stop it.
```

**After:**

```
✖ Another next dev server is already running.
- Local: http://localhost:3000
- PID: 61479
- Dir: /path/to/project
- Log: .next/dev/logs/next-development.log
You can access the existing server at http://localhost:3000, or run kill 61479 to stop it and start a new one.
```

Updated the lockfile test regex to match the new message format.
[Slack Thread](https://vercel.slack.com/archives/C046HAU4H7F/p1775173304387809?thread_ts=1775173304.387809&cid=C046HAU4H7F)
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
This PR is AI generated. If this directory is deleted while the dev server is running, we can't reasonably recover, so we should restart the process. Addresses https://vercel.slack.com/archives/C03KAR5DCKC/p1774757692640999
…92235) This is inspired by https://glama.ai/blog/2026-03-19-open-source-has-a-bot-problem#user-content-how-are-these-agents-setup If agents follow this pattern, we could potentially add labels (this PR does not touch the labeler) to these PRs which may help with review and triage. This does not change how we'd deal with these PRs (AI generated PRs are okay!), but I would like to know if a PR is AI generated when reviewing it.
…StringRef::to_string_ref (#92284) `FileSystemPath::value_to_string` was just calling `ValueToString::to_string`. But: - It's better to call `ValueToStringRef::to_string_ref` - We don't really get much value from this inherent method except for avoiding a few imports? I suspect this is left over from when we used to always wrap `FileSystemPath` in `Vc`.
## What
Switch turbo-persistence AMQF filters from owned `qfilter::Filter` (heap-allocated) to zero-copy `qfilter::FilterRef` that borrows directly from the memory-mapped meta file.

## Why
- **Lower memory usage** — no heap copy of filter data; `FilterRef` is just a pointer into the mmap
- **Faster open time** — no deserialization/allocation, just pointer math over the mmap
- **OS-managed memory** — mmap pages can be cheaply evicted under pressure (free LRU behavior), unlike heap-allocated `Filter` data which needs to be swapped

## How
- Update the `qfilter` dependency to the latest alpha release with `FilterRef` support
- Switch per-entry AMQF serialization from `turbo_bincode` to the `pot` format, which supports zero-copy deserialization
- Store `qfilter::FilterRef<'static>` directly in `MetaEntry` (lifetime transmuted from the mmap borrow)
- Rely on Rust's struct field drop order guarantee: `MetaFile::entries` is declared before `MetaFile::mmap`, so all `FilterRef`s are dropped before the mmap is unmapped
- Update compaction code to work with `FilterRef`, avoiding many allocations when merging

### Safety invariants
The `FilterRef<'static>` lifetime is transmuted 🙀 — the actual borrow is from `MetaFile::mmap`. This is safe because:
1. `MetaEntry` is never moved out of `MetaFile` (only accessed by `&` reference via `entries()` / `entry()`)
2. Rust drops struct fields in declaration order, and `entries` is declared before `mmap`
3. This is the same pattern used by `ArcBytes`/`RcBytes` in this crate (raw pointer into backing storage)

## Benchmark results
I ran a number of the read benchmarks and compaction benchmarks and it is all in the noise, which makes sense. We might get a slight benefit from avoiding the `OnceLock` and lazy initialization; we should also have slightly lower maxrss and offer the OS more flexibility during memory pressure. From measuring vercel-site, a warm build saves ~40m of MaxRSS.
…#92272)

### What?
Add `experimental.swcEnvOptions` to expose SWC's preset-env `env` configuration options — including `mode`, `coreJs`, `include`, `exclude`, `skip`, `shippedProposals`, `forceAllTransforms`, `debug`, and `loose`.

### Why?
Currently Next.js only passes `env.targets` (derived from browserslist) to SWC for **syntax downleveling**, but does not expose the polyfill injection capabilities that SWC already supports. This means:
- Users who need automatic core-js polyfills (e.g. `Array.prototype.at()`, `Promise.withResolvers()`, `Set` methods) have no built-in way to get them.
- The only workarounds are importing `core-js` globally (which bloats bundles significantly) or ejecting to Babel with `useBuiltIns: 'usage'` (which sacrifices SWC's performance benefits).
- In the Babel era, Next.js supported this via `@babel/preset-env`'s `useBuiltIns` (PR #10574). That capability was lost when Next.js migrated to SWC.

### How?
A new `experimental.swcEnvOptions` config is added. Its properties are spread into the `env` block that Next.js passes to SWC for client-side compilation, alongside the existing browserslist-derived `targets`. Server-side compilation is unaffected (always targets `node`). The option surface mirrors [SWC's preset-env docs](https://swc.rs/docs/configuration/supported-browsers) 1:1, keeping it familiar and forward-compatible.

```js
// next.config.js
module.exports = {
  experimental: {
    swcEnvOptions: {
      mode: 'usage',
      coreJs: '3.38',
    },
  },
}
```

#### Changes:
- config-shared.ts — type definition with JSDoc
- config-schema.ts — zod validation
- next-swc-loader.ts → swc/options.ts — plumb config into the SWC env block
- Unit tests (6 cases) + e2e test (dev & production)

Related issues: #66562, #63104, #74978
Related discussion: #46724
---------
Co-authored-by: Benjamin Woodruff <benjamin.woodruff@vercel.com>
…tions (#92048)

PR is AI-generated
- `extension` and `extension_ref` do the same thing.
- `extension_ref`'s API that returns an `Option` was better than `extension`'s API that returns a potentially-empty string.
- In some places we were splitting into a stem and extension, and joining them back to a filename, but those places could just get the original path's filename.
---------
Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Benjamin Woodruff <bgw@users.noreply.github.com>
…lveRawVcFuture, ResolveVcFuture, ToResolvedVcFuture) (#91554)

### What?
Replace the `async fn resolve()`, `async fn resolve_strongly_consistent()`, and `async fn to_resolved()` methods on `RawVc`, `Vc<T>`, and `OperationVc<T>` with hand-written custom `Future` implementations, following the existing `ReadRawVcFuture` pattern.

New types:
- **`ResolveRawVcFuture`** (`raw_vc.rs`) — core implementation, replaces `async fn resolve_inner()`
- **`ResolveVcFuture<T>`** (`vc/mod.rs`) — typed wrapper over `ResolveRawVcFuture`, returned by `Vc::resolve()`
- **`ResolveOperationVcFuture<T>`** (`vc/operation.rs`) — typed wrapper, returned by `OperationVc::resolve()`
- **`ToResolvedVcFuture<T>`** (`vc/mod.rs`) — typed wrapper, returned by `Vc::to_resolved()`

All new future types expose a `.strongly_consistent()` builder method, enabling `resolve_strongly_consistent()` to be replaced by `.resolve().strongly_consistent()` at call sites. `ReadRawVcFuture` is also updated to delegate its phase-1 resolve loop to `ResolveRawVcFuture` instead of duplicating the logic. `std::task::ready!` is used throughout to simplify poll implementations. Also adds `#[inline(never)]` to `ReadRawVcFuture::poll` and `ResolveRawVcFuture::poll` to avoid inlining large poll implementations into every await site.

### Why?
Performance, binary size, and improved API ergonomics:
- The hand-written `Future` pattern (already used by `ReadRawVcFuture`) gives the compiler more predictable, smaller code than the state machines generated for `async fn`. The `#[inline(never)]` attributes on `poll` prevent large poll bodies from being duplicated at every await site, which the async desugaring otherwise allows.
- The new builder API (`.resolve().strongly_consistent()`) is more composable and removes the need for separate `_strongly_consistent` method variants, reducing the number of methods on `RawVc`/`Vc`/`OperationVc`.
- Having `ReadRawVcFuture` delegate to `ResolveRawVcFuture` removes the duplicated resolve loop and ensures both paths stay in sync.

### How?
- `ResolveRawVcFuture` stores `current: RawVc`, `read_output_options: ReadOutputOptions`, `strongly_consistent: bool`, and `listener: Option<EventListener>`. Its `poll` replicates the loop from the old `resolve_inner` using `try_read_task_output` / `try_read_local_output`.
- On `Err(listener)` from a `try_*` call, the listener is stored in `self.listener` and `Poll::Pending` is returned. At the top of the loop, `ready!(poll_listener(...))` re-polls it and short-circuits if still pending.
- Consistency is downgraded to `Eventual` after the first `TaskOutput` hop, matching the previous behavior.
- `strongly_consistent: true` keeps the `SUPPRESS_EVENTUAL_CONSISTENCY_TOP_LEVEL_TASK_CHECK` suppression across all polls (same logic as `ReadRawVcFuture`).
- `ReadRawVcFuture` now holds a `ResolveRawVcFuture` for phase 1 and drives it via `Pin::new(&mut self.resolve).poll(cx)` before proceeding to the cell read in phase 2. This eliminates the duplicated loop that previously existed in both types.
- Typed wrappers (`ResolveVcFuture<T>`, `ResolveOperationVcFuture<T>`, `ToResolvedVcFuture<T>`) delegate `poll` to the inner `ResolveRawVcFuture` and map the output to the appropriate typed result.
- `OperationVc::resolve_strongly_consistent()` is removed; 16 call sites updated to `.resolve().strongly_consistent()`.
- All new types implement `Unpin` and are exported from `lib.rs`.
- `std::task::ready!` is used in all `poll` implementations to reduce boilerplate.

No behavioral changes — this is a pure implementation refactor.

### Binary size impact
A release build (`pnpm swc-build-native --release`) was measured before and after the branch changes on the same merge-base commit (`a41bef94`):

| | Size |
|---|---|
| Base (`a41bef94`, before branch) | 199,690,656 bytes (~190.4 MB) |
| Branch (`6f7846f9`, after changes) | 199,252,384 bytes (~190.0 MB) |
| **Difference** | **−438,272 bytes (−428 KB, −0.22%)** |

The branch produces a slightly smaller binary. The reduction comes primarily from the `#[inline(never)]` attributes preventing large `poll` bodies from being duplicated at every await site.
---------
Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
…emit (#92292)

### What?
Adds deduplication and conflict detection to the asset emission stage in `crates/next-core/src/emit.rs`, and a new `IssueStage::Emit` variant in `turbopack-core`.

Before emitting, assets are grouped by their output path. If multiple assets map to the same path:
- If their content is identical, one is silently chosen (deduplication).
- If their content differs, both versions are written to `<node_root>/<content_hash>.<ext>` and an `EmitConflictIssue` is raised for each conflict.

All assets are still emitted — conflicts do not abort the build.

### Why?
Previously, duplicate output assets for the same path were emitted unconditionally — whichever write happened last silently won. This masked build graph bugs where two different modules produced conflicting output files. Reporting conflicts as issues (rather than silently overwriting) makes them visible and easy to diagnose without breaking the build.

### How?
- Collect all assets with their resolved paths via `try_flat_join`.
- Bucket them into two `FxIndexMap<FileSystemPath, Vec<ResolvedVc<Box<dyn OutputAsset>>>>` — one for node-root assets and one for client assets.
- For each bucket entry, call `check_duplicates`: compare every asset against the first using `assets_diff`. If content differs, emit an `EmitConflictIssue` as a turbo-tasks collectible — but still return the first asset so emission continues.
- `assets_diff` is a `#[turbo_tasks::function]` that takes only `(asset1, asset2, extension, node_root)` — the `asset_path` stays out of the task key to avoid unnecessary task cardinality. When file content differs, it hashes each version with xxh3, writes them to `<node_root>/<hash>.<ext>`, and returns the paths in the detail message so the user can diff them.
- `EmitConflictIssue` implements the `Issue` trait with `IssueStage::Emit` (new variant added to `turbopack-core`), `IssueSeverity::Error`, a descriptive title, and a detail message explaining the type of conflict.
- Node-root and client assets are emitted in parallel via `futures::join!` (not `try_join!`) to ensure deterministic error reporting — both branches always run to completion so errors are reported in a consistent order.
---------
Co-authored-by: Tobias Koppers <sokra@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
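The grouping and conflict-detection step can be sketched under simplified types. This is a shape-only illustration — the real code is Rust over `OutputAsset` comparisons and raises an `EmitConflictIssue` collectible, and the names here are hypothetical:

```typescript
// Simplified asset: an output path plus its (already-read) content.
type Asset = { path: string; content: string }
type EmitPlan = { emit: Asset[]; conflicts: string[] }

function planEmit(assets: Asset[]): EmitPlan {
  // Bucket assets by their output path.
  const buckets = new Map<string, Asset[]>()
  for (const a of assets) {
    const bucket = buckets.get(a.path)
    if (bucket) bucket.push(a)
    else buckets.set(a.path, [a])
  }

  const plan: EmitPlan = { emit: [], conflicts: [] }
  for (const [path, group] of buckets) {
    // Compare every asset against the first, as check_duplicates does.
    const conflicting = group.some((a) => a.content !== group[0].content)
    if (conflicting) plan.conflicts.push(path)
    // Still emit the first asset either way: identical duplicates collapse
    // silently, and conflicts are reported without aborting the build.
    plan.emit.push(group[0])
  }
  return plan
}
```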
## What?
Adds documentation to the contributing guide explaining how deploy tests work and how to run them.

## Why?
Deploy tests are an important part of the Next.js CI pipeline that verify the framework works correctly when deployed to Vercel. However, there was no documentation explaining:
- How deploy tests are triggered on PRs
- How to run deploy tests locally

This came up in a Slack discussion where team members were debugging deploy test failures and sharing knowledge about how to trigger and run these tests.

## How?
Added a new "Deploy Tests" section to `contributing/core/testing.md` that explains:
1. **Triggering Deploy Tests on PRs**: Deploy tests don't run on every PR by default. To trigger them, you can modify a test file in the deploy test suite, which causes CI to run deploy tests for that file.
2. **Running Deploy Tests Locally**: You can run deploy tests locally using:
   - `NEXT_TEST_VERSION` to test against a specific commit's pre-built tarball
   - `NEXT_TEST_DEPLOY_URL` to skip the deploy step and test against an existing deployment

[Slack Thread](https://vercel.slack.com/archives/C04KC8A53T7/p1775265710012189?thread_ts=1775265710.012189&cid=C04KC8A53T7)
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
## What? The codepath would cause checks to fail during deployment because it tries to import `node:stream` even when not used. This adds a specific path for edge runtime to avoid that. --------- Co-authored-by: Zack Tanner <1939140+ztanner@users.noreply.github.com>
### What?
Add an `AGENTS.md` file at the npm package root (`node_modules/next/AGENTS.md`) so AI agents can discover the bundled documentation.

### Why?
AI coding agents naturally check the package root for `AGENTS.md` before searching subdirectories. Without a file at the root, `cat node_modules/next/AGENTS.md` fails and agents hit a dead end. The bundled docs at `dist/docs/` are already shipped with the package but are not discoverable from the root.

### How?
- Add `packages/next/AGENTS.md` mirroring the same content `create-next-app` generates for the project root, with the path adjusted to point at `dist/docs/` relative to the package root.
- Add `"AGENTS.md"` to the `files` array in `package.json` so it ships with the npm package.

### Improving Documentation
- [x] Run `pnpm prettier-fix` to fix formatting issues before opening the PR.
## Summary
- add production-route-shape fixture coverage for `/[teamSlug]/[project]/settings/domains`
- add revalidation helpers (API and server action) and browser controls used by the regression test
- reproduce the failing production sequence by priming, revalidating, then loading a page with in-view `next/link` and navigating repeatedly

x-ref: #91627
x-ref: #91603

## Testing
- `IS_WEBPACK_TEST=1 NEXT_SKIP_ISOLATE=1 NEXT_TEST_MODE=start pnpm testheadless test/e2e/app-dir/segment-cache/vary-params-base-dynamic/vary-params-base-dynamic.test.ts -t "production route shape"` (passes)
- `NEXT_ENABLE_ADAPTER=1 pnpm test-deploy test/e2e/app-dir/segment-cache/vary-params-base-dynamic/vary-params-base-dynamic.test.ts -t "production route shape"`
- `NEXT_ENABLE_ADAPTER=1 pnpm test-deploy test/e2e/app-dir/segment-cache/vary-params-base-dynamic/vary-params-base-dynamic.test.ts` (fails in the two production-shape cases)
---------
Co-authored-by: Zack Tanner <1939140+ztanner@users.noreply.github.com>
## What
Optimizes AMQF (Approximate Membership Query Filter) construction in `StreamingSstWriter` by deferring filter building to `close()` time and using qfilter's sorted `Builder` API.

## Why
The previous approach inserted each key hash into the AMQF filter eagerly during `add()`, using random-access `insert_fingerprint` calls. The new qfilter `Builder` API supports sequential sorted insertion, which is significantly faster but requires fingerprints in non-decreasing order.

## How
1. **Deferred construction**: Instead of building the AMQF incrementally during `add()`, collect key hashes (truncated to `u32`) into a vec during writes, then build the filter in one pass at `close()` time.
2. **Sorted Builder insertion**: Sort collected hashes by fingerprint value, then feed them to `Builder::insert_fingerprint` in order. This uses the Builder's optimized sorted-insert path.
3. **u32 storage**: Since fingerprint size is always ≤32 bits, store collected hashes as `u32` instead of `u64`, halving memory usage and improving sort cache behavior.
4. **Exact sizing**: The Builder is constructed with the exact entry count (known at `close()` time) rather than the `max_entry_count` estimate, producing optimally-sized filters.

## Benchmark results (vs `filter_ref` baseline)
`write/key_8/value_4/` benchmark:

| Entries | filter_ref | sorted_insert | Change |
|---------|-----------|---------------|--------|
| 85K | 22.3 ms | 21.1 ms | **-5.3%** |
| 853K | 149.2 ms | 111.9 ms | **-25.0%** |
| 8.3M | 1049 ms | 998.8 ms | **-4.8%** |

The 853K case (typical compacted SST size) shows the largest improvement, as AMQF construction is a significant fraction of total write time at that scale.
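The deferred, sorted build in steps 1–2 above can be sketched generically. This is a language-neutral illustration in TypeScript with hypothetical names — the real code is Rust feeding qfilter's `Builder`:

```typescript
// Collect truncated 32-bit fingerprints during writes, then sort once and
// insert sequentially at close() time so the builder can take its
// fast sorted-insert path (fingerprints must be non-decreasing).
class SortedFilterBuilder {
  private pending: number[] = []

  add(keyHash: bigint): void {
    // Truncate the 64-bit key hash to a u32 fingerprint.
    this.pending.push(Number(keyHash & 0xffffffffn))
  }

  close(insertFingerprint: (fp: number) => void): number {
    // Sort so fingerprints arrive in non-decreasing order.
    this.pending.sort((a, b) => a - b)
    for (const fp of this.pending) insertFingerprint(fp)
    // The exact entry count is only known here, allowing exact sizing
    // instead of the max_entry_count estimate.
    return this.pending.length
  }
}
```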