Queue connections for worker slots instead of rejecting #225
Merged
fuziontech merged 10 commits into main on Feb 18, 2026
Conversation
…ssion

arrow-go's flight.Client.Close() nils out the embedded FlightServiceClient. When a worker crashes, the health check goroutine closes the shared gRPC client while session goroutines are still using it, causing a nil pointer dereference in GetFlightInfo that crashes the entire control plane.

Fix with two layers of defense:
- Add an atomic `dead` flag to FlightExecutor, checked before any RPC call
- Add recover() in QueryContext/ExecContext to catch the panic in the race window between the flag check and the actual RPC
- Mark all affected executors dead in OnWorkerCrash before the client is closed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…add tests

1. Narrow recoverClientPanic to only catch nil pointer dereferences; re-panic on unrelated errors to preserve stack traces.
2. Add recoverWorkerPanic to ManagedWorker.CreateSession and DestroySession, which access the shared gRPC client directly (same crash class).
3. Proactively close TCP connections on worker crash via connCloser so session goroutines exit cleanly instead of looping on ErrWorkerDead.
4. Add 16 unit tests covering MarkDead, recover behavior (nil pointer vs. other panics), OnWorkerCrash session/connection cleanup, and SetConnCloser.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add recoverWorkerPanic to the doHealthCheck call in HealthCheckLoop. Same race: w.client.Close() from a concurrent crash/retire nils out FlightServiceClient while the health check goroutine calls DoAction.
- Replace nil contexts with context.Background() in tests.
- Add TestDestroySessionAfterOnWorkerCrash to verify the exact production sequence: OnWorkerCrash cleans up, then the deferred DestroySession is a safe no-op.
- Add a comment explaining the intentional double-close of the TCP connection (OnWorkerCrash + handleConnection defer).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
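The two layers described in the commits above compose roughly as follows. This is a minimal sketch: the package name, the `doRPC` stand-in, and the exact signatures are illustrative, not the production code.

```go
// Sketch of the two-layer defense; names here are illustrative only.
package workerpool

import (
	"context"
	"errors"
	"runtime"
	"strings"
	"sync/atomic"
)

var ErrWorkerDead = errors.New("worker is dead")

type FlightExecutor struct {
	dead  atomic.Bool                               // set before the shared gRPC client is closed
	doRPC func(ctx context.Context, q string) error // stands in for the arrow-go Flight SQL call
}

// MarkDead is called from OnWorkerCrash, before the shared client is closed.
func (e *FlightExecutor) MarkDead() { e.dead.Store(true) }

func (e *FlightExecutor) QueryContext(ctx context.Context, query string) (err error) {
	if e.dead.Load() { // layer 1: fail fast before issuing any RPC
		return ErrWorkerDead
	}
	defer recoverClientPanic(&err) // layer 2: covers the race window after the flag check
	return e.doRPC(ctx, query)
}

// recoverClientPanic swallows only the nil pointer dereference caused by a
// concurrently closed client and converts it to ErrWorkerDead; any other
// panic is re-raised so unrelated bugs keep their stack traces.
func recoverClientPanic(err *error) {
	r := recover()
	if r == nil {
		return
	}
	if rerr, ok := r.(runtime.Error); ok && strings.Contains(rerr.Error(), "nil pointer dereference") {
		*err = ErrWorkerDead
		return
	}
	panic(r)
}
```

The same narrowed-recover pattern is what recoverWorkerPanic applies around CreateSession, DestroySession, and the doHealthCheck call.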
Replace the "reject immediately" pattern when max_workers is reached with a blocking semaphore in FlightWorkerPool. A buffered channel of size maxWorkers acts as both a concurrency limiter and a FIFO queue. Clients now complete TLS + auth, then block at the ReadyForQuery stage until a worker slot becomes available (or the queue timeout expires). This eliminates retry storms from clients like Fivetran that reconnect aggressively on "too many connections" errors. Key changes: - AcquireWorker(ctx) blocks on semaphore instead of returning ErrMaxWorkersReached - RetireWorker/HealthCheckLoop crash paths release semaphore slots - ShutdownAll closes shutdownCh to unblock all queued waiters - New --worker-queue-timeout flag (default 30s) controls queue wait time - Remove dead reap-and-retry code from Flight ingress (GetOrCreate) - Remove MaxWorkersRetry Prometheus metrics and hooks Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
a980b8b to f45de32
…ecovery path

- Update all references from "30s" to "5m" for the WorkerQueueTimeout default
- Add the missing SessionTokenTTL field to recoverFlightIngressAfterFailedReload (it was present in the startup path but not the reload-recovery path)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move the semaphore release from HealthCheckLoop to OnWorkerCrash so that idle worker crashes don't leak slots and multi-session workers (if any) release correctly.
- Revert DefaultMaxWorkers to scale with the memory budget (budget / 256MB), as intended by the logs and tests.
- Update TestCrashReleasesSemaphoreSlot to call releaseWorkerSem in its mock crash handler.
…isition race

- Add an 'activeSessions' counter to ManagedWorker to track occupancy within the pool.
- Atomically claim idle workers in AcquireWorker by incrementing the counter under the pool's lock, preventing the race where multiple connections could reuse the same idle worker in the 1:1 model.
- Release semaphore slots in HealthCheckLoop based on 'activeSessions' to ensure accurate capacity recovery even for in-flight acquisitions that haven't reached SessionManager yet.
- Remove the 'SessionCounter' interface and the circular dependency on SessionManager.
- Update tests to reflect internal occupancy tracking.
…rement

- Refactor RetireWorker to release exactly 'activeSessions' slots, ensuring pre-warmed workers (0 sessions) don't cause "reverse leaks" and multi-session workers (if any) clean up fully.
- Fix RetireWorkerIfNoSessions to correctly decrement activeSessions and release the semaphore slot upon CreateSession failure, preventing permanent capacity leaks.
- Add regression tests for CreateSession failure leaks, multi-session retirement, and atomic claim races.
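The occupancy bookkeeping from the last two commits looks roughly like this; the struct fields and helper names are illustrative, and only the claim/release accounting is shown.

```go
// Sketch of the activeSessions accounting; not the production code.
package workerpool

import "sync"

type ManagedWorker struct {
	activeSessions int // guarded by the pool mutex: sessions (and in-flight claims) on this worker
}

type FlightWorkerPool struct {
	mu        sync.Mutex
	workers   []*ManagedWorker
	workerSem chan struct{} // the semaphore from the earlier sketch
}

// claimIdleWorker reuses an idle worker, bumping activeSessions under the pool
// lock so two queued connections can never claim the same worker in the 1:1 model.
func (p *FlightWorkerPool) claimIdleWorker() *ManagedWorker {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, w := range p.workers {
		if w.activeSessions == 0 {
			w.activeSessions++
			return w
		}
	}
	return nil // no idle worker; the caller spawns a new one
}

// RetireWorker releases exactly as many slots as the worker had claims, so a
// pre-warmed worker (0 sessions) releases nothing and a multi-session worker
// (if any) releases everything it held.
func (p *FlightWorkerPool) RetireWorker(w *ManagedWorker) {
	p.mu.Lock()
	n := w.activeSessions
	w.activeSessions = 0
	p.mu.Unlock()
	for i := 0; i < n; i++ {
		select {
		case <-p.workerSem: // non-blocking release, tolerant of double release
		default:
		}
	}
}
```

The CreateSession-failure path does the mirror image: decrement activeSessions and release the single slot the failed acquisition held.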
Resolved conflicts in:
- controlplane/control.go: combined the rate limiter from main with the queue timeout config from this PR.
- controlplane/flight_ingress.go: included the rate limiter, removed the obsolete IsMaxWorkersError.
- server/flightsqlingress/ingress.go: updated Options to include RateLimiter.
- server/flightsqlingress/ingress_test.go: kept the new rate limiter tests from main, removed the deleted retry tests.
- Restore forced idle session reaping during Flight SQL bootstrap to prevent "sticky" sessions from blocking the queue indefinitely.
- Increase maxWorkers from 1 to 3 in TestFlightIngressIncludeSchemaLowWorkerRegression. The test was previously passing due to a TOCTOU race in main that ignored the worker limit; with strict enforcement, 1 worker is insufficient for 3 concurrent Flight users because sessions are not destroyed immediately.
Summary
- Replace the ErrMaxWorkersReached rejection with a blocking FIFO semaphore in FlightWorkerPool. Clients complete TLS + auth, then wait at ReadyForQuery until a worker slot opens (or --worker-queue-timeout expires).
- New --worker-queue-timeout CLI flag / DUCKGRES_WORKER_QUEUE_TIMEOUT env var / worker_queue_timeout YAML field (default: 30s)
- Remove the dead reap-and-retry code from the Flight ingress GetOrCreate, along with the MaxWorkersRetry Prometheus metrics and hooks

Motivation
Clients like Fivetran reconnect aggressively on "too many connections" errors, causing retry storms. By holding the connection open and queuing for a worker slot, the client sees a normal (slow) connection instead of an error, eliminating the storm.
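At the ingress, the queue timeout is just a context deadline around the acquire. The helper below is hypothetical (waitForWorkerSlot is not the actual GetOrCreate code) and shows how a timed-out wait turns back into the familiar error while a successful wait stays invisible to the client.

```go
// Hypothetical ingress-side helper; the connection has already completed TLS
// and auth, and blocks here before ReadyForQuery is sent to the client.
package flightsqlingress

import (
	"context"
	"fmt"
	"time"
)

// workerAcquirer matches the AcquireWorker method from the pool sketch above.
type workerAcquirer interface {
	AcquireWorker(ctx context.Context) error
}

func waitForWorkerSlot(ctx context.Context, pool workerAcquirer, queueTimeout time.Duration) error {
	if queueTimeout > 0 { // derived from --worker-queue-timeout
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, queueTimeout)
		defer cancel()
	}
	if err := pool.AcquireWorker(ctx); err != nil {
		// Only a client that waited past the queue timeout sees the old error.
		return fmt.Errorf("too many connections: %w", err)
	}
	return nil // slot acquired; the session proceeds normally
}
```

A client that gets a slot before the deadline never sees an error at all, only a slower connection setup.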
Test plan
- go build ./... compiles cleanly
- Tests pass (go test ./controlplane/... ./server/...)
- New unit tests: TestAcquireWorkerBlocksUntilSlotAvailable, TestAcquireWorkerRespectsContextCancellation, TestAcquireWorkerUnlimitedWhenMaxZero, TestAcquireWorkerShutdownUnblocksWaiters, TestCrashReleasesSemaphoreSlot
- Manual test with --max-workers 2: 3rd psql session blocks, unblocks when the 1st disconnects
- Manual test with --worker-queue-timeout 5s: 3rd session gets "too many connections" after 5s

🤖 Generated with Claude Code