Replace HTTP dev-registry proxy with native workerd debug port RPC #12600
Conversation
🦋 Changeset detected (latest commit: 811a12b). The changes in this PR will be included in the next version bump.
- create-cloudflare
- @cloudflare/kv-asset-handler
- miniflare
- @cloudflare/pages-shared
- @cloudflare/unenv-preset
- @cloudflare/vite-plugin
- @cloudflare/vitest-pool-workers
- @cloudflare/workers-editor-shared
- wrangler
```ts
name: `${RPC_PROXY_SERVICE_NAME}:${id}`,
worker: {
	compatibilityDate: "2024-08-01",
	compatibilityFlags: ["service_binding_extra_handlers"],
```
Is this definitely still needed?
Replace HTTP-based dev registry proxy with native workerd debug port:

- Use WorkerdDebugPort binding for direct Cap'n Proto RPC to remote workers
- Service bindings and DOs now route through the debug port instead of the HTTP proxy
- Support RPC method calls, fetch, and tail handlers through the debug port
- Add HTTP fallback for WebSocket upgrades (debug port RPC doesn't support them)
- Remove the old dev-registry.worker.ts HTTP proxy implementation
- Simplify external-service.ts by removing the HTTP proxy infrastructure

All 17 dev-registry tests pass, including WebSocket upgrades.
The vite dev <-> vite dev tail handler test was missing the waitForTimeout parameter on one of its waitFor calls, causing it to use the default (shorter) timeout, which could flake in CI.
…port RPC

Remove the HTTP fallback, eager body buffering, _client GC hack, and WebSocket special-casing from the dev-registry proxy workers. These workarounds existed because the workerd debug port would close connections prematurely when the initial subrequest completed. With the refcounted DebugPortConnectionState fix in workerd (cloudflare/workerd#6100), connections now survive as long as any response body or WebSocket is in use.

Key changes:

- Replace hasAssets/entryAddress with defaultEntrypointService and userWorkerService fields in WorkerDefinition/RegistryEntry
- Route DOs and named entrypoints through userWorkerService (bypassing the assets/vite proxy layer) instead of hardcoding the core:user: prefix
- Validate required registry fields in resolveTarget() to handle stale entries
- Inline and delete the getWorkerdServiceName() helper
- Remove the dead entryAddress field and the fetchViaHttp code path
- Remove the DO WebSocket 501 error and the DO body-buffering workarounds
…uting

Vite workers with assets were incorrectly routing through assets:rpc-proxy in the dev registry, but in vite dev mode assets are served by the vite proxy worker, not workerd's asset pipeline. This caused cross-service fetches to vite workers with assets to fail (timeout).

Add unsafeOverrideDefaultEntrypoint to miniflare's core options schema so the vite plugin can explicitly specify which workerd service handles the default entrypoint. This replaces the brittle inference from unsafeDirectSockets[].serviceName.
…nd tests

After the debug port RPC refactor, unsafeDirectSockets in the user worker config, vite plugin, and dev-registry tests no longer serve a functional purpose — nobody reads the socket URLs. The only remaining use is in wrangler's ProxyController for InspectorProxyWorker (not touched here).
… cross-worker dev registry
The Reflect.has check already handles keys that exist on the target. For keys not on the target, the runtime doesn't probe handler properties through the proxy get trap in a way that requires suppression — confirmed by tests passing without it.
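The guard described above can be sketched roughly like this (a minimal illustration; `makeForwardingProxy` and `forward` are hypothetical names, not miniflare APIs):

```typescript
// Illustrative sketch of the Reflect.has guard discussed above: keys that
// exist on the local target are served locally, everything else forwards.
function makeForwardingProxy<T extends object>(
	target: T,
	forward: (prop: string) => unknown
): T & Record<string, unknown> {
	return new Proxy(target, {
		get(t, prop, receiver) {
			// Keys present on the target (own or inherited) are handled locally,
			// so no extra suppression of handler properties is needed.
			if (Reflect.has(t, prop)) {
				return Reflect.get(t, prop, receiver);
			}
			if (typeof prop !== "string") {
				return undefined;
			}
			// Everything else falls through to the remote side.
			return forward(prop);
		},
	}) as T & Record<string, unknown>;
}
```

Symbol-keyed lookups (e.g. the runtime probing `Symbol.toPrimitive`) return `undefined` here rather than being forwarded, which matches the observation that no extra suppression is required.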
…ndings instead

Instead of mutating workerOpts with symbol-tagged designators before plugin processing (requiring every consumer to special-case them), we now let the plugin system process all bindings normally, then rewrite external service bindings and tails to point at the dev registry proxy afterward. This follows the same post-processing pattern already used for assets at line 1673.

Removes kResolvedServiceDesignator, ResolvedServiceDesignator, the zod validator, and checks in getCustomServiceDesignator, maybeGetCustomServiceService, and normaliseServiceDesignator.
The proxy's get trap now returns values that are both callable (for env.SERVICE.method()) and thenable (for await env.SERVICE.property), matching workerd's JsRpcProperty behavior. Without the .then property, awaiting a proxy-intercepted property would resolve to the function itself rather than triggering remote resolution.
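The callable-and-thenable trick can be sketched like this (a simplified illustration; `callRemote` and `getRemoteProperty` are hypothetical stand-ins for the real RPC plumbing):

```typescript
type RemoteCall = (method: string, args: unknown[]) => Promise<unknown>;
type RemoteGet = (prop: string) => Promise<unknown>;

// Sketch of a get trap whose return value works both as env.SERVICE.method()
// and as `await env.SERVICE.property`, mirroring workerd's JsRpcProperty.
function makeRpcProxy(callRemote: RemoteCall, getRemoteProperty: RemoteGet) {
	return new Proxy({} as Record<string, unknown>, {
		get(_target, prop) {
			if (typeof prop !== "string") {
				return undefined;
			}
			// Callable: env.SERVICE.method(...args) invokes the remote method.
			const stub = (...args: unknown[]) => callRemote(prop, args);
			// Thenable: `await env.SERVICE.property` calls .then, triggering a
			// remote property read instead of resolving to the function itself.
			(stub as any).then = (
				onFulfilled?: (value: unknown) => unknown,
				onRejected?: (reason: unknown) => unknown
			) => getRemoteProperty(prop).then(onFulfilled, onRejected);
			return stub;
		},
	});
}
```

Because `await` (and `Promise.resolve`) unwraps any object with a `.then` method, attaching `then` to the returned function is enough to make property access lazily resolve remotely.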
Debug port RPC fetchers don't support lifecycle event dispatch (fetcher.scheduled()), so scheduled events need special handling:

- ExternalServiceProxy: forward scheduled via HTTP to the entry worker's /cdn-cgi/handler/scheduled endpoint, which dispatches internally
- RPCProxyWorker (assets proxy): forward scheduled to USER_WORKER via Fetcher.scheduled()
- Add the service_binding_extra_handlers compat flag to the assets proxy service config so Fetcher.scheduled() is available on the USER_WORKER binding
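The HTTP forwarding path for the first bullet might look roughly like this (a hedged sketch: the `fetchImpl` injection is for testability, and the `cron`/`time` query parameters are assumptions about the /cdn-cgi/handler/scheduled endpoint's interface):

```typescript
// Sketch of forwarding a scheduled event over HTTP, since debug port RPC
// fetchers can't dispatch lifecycle events directly. Query parameter names
// are assumptions for illustration.
async function forwardScheduled(
	fetchImpl: typeof fetch,
	entryWorkerOrigin: string,
	cron: string,
	scheduledTime: number
): Promise<boolean> {
	const url = new URL("/cdn-cgi/handler/scheduled", entryWorkerOrigin);
	url.searchParams.set("cron", cron);
	url.searchParams.set("time", String(scheduledTime));
	// The entry worker dispatches the scheduled event internally and reports
	// the outcome via the response status.
	const response = await fetchImpl(url);
	return response.ok;
}
```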
Refactors the proxy to be more generic, efficient, and readable.

- Introduces a private method that handles resolving and connecting to the remote service.
- Caches the remote fetcher promise for 1s to avoid re-connecting on every single RPC call or property access.
- Simplifies the Proxy trap by using the new helper, removing duplicated code and making the logic much clearer.
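The 1-second cache can be sketched generically (illustrative only; `cachedConnect` and `connect` stand in for the real private helper that resolves the registry entry and opens the debug port connection):

```typescript
// Sketch of caching a connect promise for a short TTL so repeated RPC calls
// and property accesses reuse one connection instead of reconnecting.
function cachedConnect<T>(
	connect: () => Promise<T>,
	ttlMs = 1000
): () => Promise<T> {
	let cached: Promise<T> | undefined;
	let cachedAt = 0;
	return () => {
		const now = Date.now();
		// Reuse the in-flight or recent promise; reconnect once it's stale.
		if (cached === undefined || now - cachedAt > ttlMs) {
			cached = connect();
			cachedAt = now;
		}
		return cached;
	};
}
```

A real implementation would likely also drop the cached promise on rejection, so a failed connection isn't served for the remainder of the TTL.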
…, thenable DO RPC
…N, fix stale cache clear
Codeowners approval required for this PR.
…dd connect to HANDLER_RESERVED_KEYS
…ocket

On Windows, the internal Cap'n Proto service-to-service forwarding from the entry worker to the dev-registry-proxy service can break with WSARecv error 64 (ERROR_NETNAME_DELETED), causing the registry push to silently fail and leaving the proxy worker with an empty registry.

Give the dev-registry-proxy service its own HTTP socket (SOCKET_DEV_REGISTRY) and have #pushRegistryUpdate POST directly to it, bypassing the entry worker entirely.

Also incorporates review fixes: reset previousJSON on restart, guard generated identifiers, check the dispose signal on retry, read a fresh registry on retry, shallow-copy workerOpts before mutation, use the real entry port for loopbackAddress, and guard malformed loopbackAddress parsing.
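The direct push might look roughly like this (a hedged sketch: the "/register" path, JSON payload shape, and `fetchImpl` injection are assumptions for illustration, not the actual miniflare code):

```typescript
// Hypothetical sketch of #pushRegistryUpdate POSTing the registry straight to
// the dev-registry-proxy's own HTTP socket, bypassing the entry worker and
// the flaky internal Cap'n Proto forwarding path described above.
async function pushRegistryUpdate(
	fetchImpl: typeof fetch,
	devRegistrySocketOrigin: string,
	registry: Record<string, unknown>
): Promise<void> {
	const response = await fetchImpl(
		new URL("/register", devRegistrySocketOrigin),
		{
			method: "POST",
			headers: { "Content-Type": "application/json" },
			body: JSON.stringify(registry),
		}
	);
	if (!response.ok) {
		throw new Error(`Registry push failed with status ${response.status}`);
	}
}
```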
```
# Conflicts:
#	packages/miniflare/src/plugins/core/explorer.ts
```
…are/workers-sdk into penalosa/native-registry
petebacondarwin
left a comment
This looks great. I have not yet pulled it down and played with it locally, but a code read-through didn't bring up any red flags.
```ts
let text = await response.text();
if (response.status !== 200)
	throw new Error(`Got ${response.status}: ${text}`);
expect(response.status).toBe(200);
```
Not needed given the above statement.
Or was the throw just there for debugging?
```ts
import { Log } from "./log";
import { getGlobalWranglerConfigPath } from "./wrangler";
import type { WorkerDefinition, WorkerRegistry } from "./dev-registry-types";
export type { WorkerDefinition, WorkerRegistry } from "./dev-registry-types";
```
duplicate of line above?
```ts
		break;
	}
}
const registry = getWorkerRegistry(this.registryPath!);
```
Perhaps just add an assert?
```diff
-const registry = getWorkerRegistry(this.registryPath!);
+assert(this.registryPath);
+const registry = getWorkerRegistry(this.registryPath);
```
```ts
this.watcher
	?.close()
	.then(() => {})
	.finally(() => {
		this.watcher = undefined;
	}) ?? Promise.resolve()
);
```
I assume that if the .then and .finally aren't breaking the type check, then this.watcher cannot be undefined here. If so, the ?. can be removed, as well as the ?? Promise.resolve():
```diff
-this.watcher
-	?.close()
-	.then(() => {})
-	.finally(() => {
-		this.watcher = undefined;
-	}) ?? Promise.resolve()
-);
+this.watcher
+	.close()
+	.then(() => {})
+	.finally(() => {
+		this.watcher = undefined;
+	})
+);
```
If it can be undefined, then I assume what you actually need here is:
```diff
-this.watcher
-	?.close()
-	.then(() => {})
-	.finally(() => {
-		this.watcher = undefined;
-	}) ?? Promise.resolve()
-);
+(this.watcher?.close() ?? Promise.resolve())
+	.then(() => {})
+	.finally(() => {
+		this.watcher = undefined;
+	});
```
```diff
-"editor.formatOnSave": true
+"editor.formatOnSave": true,
+"[typescript]": {
+	"editor.defaultFormatter": "oxc.oxc-vscode"
```
```ts
import { DurableObject } from "cloudflare:workers";

// These interfaces mirror the workerd debug port RPC API.
// See https://github.com/cloudflare/workerd/blob/main/src/workerd/server/server.c++
```
TIL - I didn't know this even existed. Nice
NIT: All the exported interfaces, classes and functions in this file would benefit from JSDocs explaining their purpose.
The dev registry proxy — responsible for forwarding service binding calls between separate wrangler dev / vite dev sessions — has been rewritten to use workerd's native debug port RPC instead of routing everything through a Node.js HTTP proxy thread.
The old approach had a lot of moving parts: a separate Node.js worker thread running an HTTP server, per-service socket allocations in workerd, special-casing for WebSocket upgrades and streamed bodies, and a growing list of edge cases. The new approach runs an ExternalServiceProxy WorkerEntrypoint directly inside workerd and uses the debug port to forward calls natively — less code, fewer layers, and far fewer things that can go wrong.
As a bonus, Durable Object RPC now works across dev sessions — something that was previously blocked behind a "not yet supported" error.