
Queue connections for worker slots instead of rejecting #225

Merged
fuziontech merged 10 commits into main from fix/flight-executor-nil-panic
Feb 18, 2026
Conversation

@fuziontech
Member

Summary

  • Replace immediate ErrMaxWorkersReached rejection with a blocking FIFO semaphore in FlightWorkerPool. Clients complete TLS + auth, then wait at ReadyForQuery until a worker slot opens or --worker-queue-timeout expires (see the sketch after this list).
  • Add --worker-queue-timeout CLI flag / DUCKGRES_WORKER_QUEUE_TIMEOUT env var / worker_queue_timeout YAML field (default: 30s)
  • Remove dead reap-and-retry code from Flight ingress GetOrCreate, along with MaxWorkersRetry Prometheus metrics and hooks
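
A minimal sketch of how the queue timeout could bound the blocking acquire. The wrapper name, the `acquire` callback, and the error mapping are assumptions for illustration, not the PR's actual code:

```go
package example // illustrative only

import (
	"context"
	"errors"
	"time"
)

// acquireWithQueueTimeout is a hypothetical wrapper: it derives a deadline
// from the configured --worker-queue-timeout and maps its expiry to the
// "too many connections" error the client ultimately sees.
func acquireWithQueueTimeout(ctx context.Context, acquire func(context.Context) error, timeout time.Duration) error {
	if timeout > 0 {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, timeout)
		defer cancel()
	}
	err := acquire(ctx)
	if errors.Is(err, context.DeadlineExceeded) {
		return errors.New("too many connections") // queue wait exceeded the timeout
	}
	return err
}
```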

Motivation

Clients like Fivetran reconnect aggressively on "too many connections" errors, causing retry storms. By holding the connection open and queuing for a worker slot, the client sees a normal (slow) connection instead of an error, eliminating the storm.

Test plan

  • go build ./... compiles cleanly
  • All existing tests pass (go test ./controlplane/... ./server/...)
  • New tests: TestAcquireWorkerBlocksUntilSlotAvailable, TestAcquireWorkerRespectsContextCancellation, TestAcquireWorkerUnlimitedWhenMaxZero, TestAcquireWorkerShutdownUnblocksWaiters, TestCrashReleasesSemaphoreSlot
  • Manual test with --max-workers 2: 3rd psql session blocks, unblocks when 1st disconnects
  • Verify --worker-queue-timeout 5s → 3rd session gets "too many connections" after 5s

🤖 Generated with Claude Code

fuziontech and others added 4 commits February 17, 2026 13:41
…ssion

arrow-go's flight.Client.Close() nils out the embedded FlightServiceClient.
When a worker crashes, the health check goroutine closes the shared gRPC
client while session goroutines are still using it, causing a nil pointer
dereference in GetFlightInfo that crashes the entire control plane.

Fix with two layers of defense:
- Add atomic `dead` flag to FlightExecutor, checked before any RPC call
- Add recover() in QueryContext/ExecContext to catch the panic in the race
  window between the flag check and the actual RPC
- Mark all affected executors dead in OnWorkerCrash before the client is closed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
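
A minimal sketch of the first layer described above: an atomic dead flag checked before any RPC. FlightExecutor, MarkDead, and OnWorkerCrash follow the commit's naming; the field layout, the checkAlive helper, and the ErrWorkerDead sentinel are assumptions, not the real type:

```go
package example // illustrative only

import (
	"errors"
	"sync/atomic"
)

// ErrWorkerDead is returned instead of calling into a closed gRPC client.
var ErrWorkerDead = errors.New("worker is dead")

// FlightExecutor sketch: only the crash-safety flag is shown here.
type FlightExecutor struct {
	dead atomic.Bool
	// the shared Flight/gRPC client lives here in the real type
}

// MarkDead is called from OnWorkerCrash before the shared client is closed.
func (e *FlightExecutor) MarkDead() { e.dead.Store(true) }

// checkAlive is the guard run at the top of QueryContext/ExecContext before
// any RPC; the recover() in those methods covers the remaining race window
// between this check and the actual call.
func (e *FlightExecutor) checkAlive() error {
	if e.dead.Load() {
		return ErrWorkerDead
	}
	return nil
}
```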
…add tests

1. Narrow recoverClientPanic to only catch nil pointer dereferences;
   re-panic on unrelated errors to preserve stack traces.

2. Add recoverWorkerPanic to ManagedWorker.CreateSession and DestroySession
   which access the shared gRPC client directly (same crash class).

3. Proactively close TCP connections on worker crash via connCloser so
   session goroutines exit cleanly instead of looping on ErrWorkerDead.

4. Add 16 unit tests covering MarkDead, recover behavior (nil pointer
   vs other panics), OnWorkerCrash session/connection cleanup, and
   SetConnCloser.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
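
A sketch of the narrowed recover from point 1, reusing the ErrWorkerDead sentinel from the previous sketch (imports: runtime, strings). The message matching below is an assumption; the commit only says nil pointer dereferences are caught and everything else re-panics:

```go
// recoverClientPanic converts only the known nil-pointer panic (the race with
// flight.Client.Close) into an error; any other panic is re-raised so its
// stack trace is not swallowed. Intended usage: defer recoverClientPanic(&err)
// at the top of QueryContext/ExecContext.
func recoverClientPanic(err *error) {
	r := recover()
	if r == nil {
		return
	}
	if re, ok := r.(runtime.Error); ok &&
		strings.Contains(re.Error(), "nil pointer dereference") {
		*err = ErrWorkerDead
		return
	}
	panic(r) // unrelated panic: propagate with its original message
}
```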
- Add recoverWorkerPanic to the doHealthCheck call in HealthCheckLoop.
  Same race: w.client.Close() from a concurrent crash/retire nils out
  FlightServiceClient while the health check goroutine calls DoAction.

- Replace nil contexts with context.Background() in tests.

- Add TestDestroySessionAfterOnWorkerCrash to verify the exact
  production sequence: OnWorkerCrash cleans up, then the deferred
  DestroySession is a safe no-op.

- Add comment explaining intentional double-close of TCP connection
  (OnWorkerCrash + handleConnection defer).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the "reject immediately" pattern when max_workers is reached with
a blocking semaphore in FlightWorkerPool. A buffered channel of size
maxWorkers acts as both a concurrency limiter and a FIFO queue.

Clients now complete TLS + auth, then block at the ReadyForQuery stage
until a worker slot becomes available (or the queue timeout expires).
This eliminates retry storms from clients like Fivetran that reconnect
aggressively on "too many connections" errors.

Key changes:
- AcquireWorker(ctx) blocks on semaphore instead of returning ErrMaxWorkersReached
- RetireWorker/HealthCheckLoop crash paths release semaphore slots
- ShutdownAll closes shutdownCh to unblock all queued waiters
- New --worker-queue-timeout flag (default 30s) controls queue wait time
- Remove dead reap-and-retry code from Flight ingress (GetOrCreate)
- Remove MaxWorkersRetry Prometheus metrics and hooks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
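
A self-contained sketch of the semaphore pattern described in this commit. The type and method names below are illustrative, not the actual FlightWorkerPool API; waiters blocked on the channel are served roughly in arrival order:

```go
package example // illustrative only

import (
	"context"
	"errors"
)

var ErrShuttingDown = errors.New("worker pool is shutting down")

// pool sketch: a buffered channel of size maxWorkers doubles as the
// concurrency limiter and the wait queue.
type pool struct {
	sem        chan struct{} // nil means unlimited (maxWorkers == 0)
	shutdownCh chan struct{} // closed on shutdown to unblock every waiter
}

func newPool(maxWorkers int) *pool {
	p := &pool{shutdownCh: make(chan struct{})}
	if maxWorkers > 0 {
		p.sem = make(chan struct{}, maxWorkers)
	}
	return p
}

// acquire blocks until a slot frees up, the caller's context expires
// (the queue timeout), or the pool shuts down.
func (p *pool) acquire(ctx context.Context) error {
	if p.sem == nil {
		return nil // unlimited mode
	}
	select {
	case p.sem <- struct{}{}:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	case <-p.shutdownCh:
		return ErrShuttingDown
	}
}

// release frees one slot; called from the retire and crash paths.
func (p *pool) release() {
	if p.sem != nil {
		<-p.sem
	}
}
```

In the real pool, ShutdownAll would close shutdownCh exactly once so every queued waiter returns promptly instead of hanging.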
fuziontech force-pushed the fix/flight-executor-nil-panic branch from a980b8b to f45de32 on February 17, 2026 at 23:12
fuziontech and others added 6 commits February 17, 2026 15:21
…ecovery path

- Update all references from "30s" to "5m" for WorkerQueueTimeout default
- Add missing SessionTokenTTL field to recoverFlightIngressAfterFailedReload
  (was present in the startup path but not the reload-recovery path)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move semaphore release from HealthCheckLoop to OnWorkerCrash to ensure
  idle worker crashes don't leak slots and multi-session workers (if any)
  release correctly.
- Revert DefaultMaxWorkers to scale with the memory budget (budget / 256MB),
  as the existing logs and tests expect.
- Update TestCrashReleasesSemaphoreSlot to call releaseWorkerSem in its
  mock crash handler.
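
A hypothetical sketch of the memory-budget scaling mentioned above; the function name and the floor of 1 are assumptions, while the 256 MB-per-worker figure comes from the commit:

```go
// defaultMaxWorkers derives the worker cap from the memory budget:
// one slot per 256 MiB, never less than one.
func defaultMaxWorkers(memoryBudgetBytes int64) int {
	const bytesPerWorker = 256 << 20 // 256 MiB
	n := int(memoryBudgetBytes / bytesPerWorker)
	if n < 1 {
		n = 1
	}
	return n
}
```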
…isition race

- Add 'activeSessions' to ManagedWorker to track occupancy within the pool.
- Atomically claim idle workers in AcquireWorker by incrementing the counter
  under the pool's lock, preventing the race where multiple connections
  could reuse the same idle worker in the 1:1 model.
- Release semaphore slots in HealthCheckLoop based on 'activeSessions'
  to ensure accurate capacity recovery even for 'in-flight' acquisitions
  that haven't reached SessionManager yet.
- Remove 'SessionCounter' interface and circular dependency on SessionManager.
- Update tests to reflect internal occupancy tracking.
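
A sketch of the atomic claim described in this commit; the struct fields and lock are illustrative stand-ins for the real pool internals:

```go
package example // illustrative only

import "sync"

type managedWorker struct {
	activeSessions int // occupancy tracked inside the pool, per this commit
}

type workerPool struct {
	mu      sync.Mutex
	workers []*managedWorker
}

// claimIdleWorker bumps activeSessions under the pool lock, so two
// connections can never both claim the same idle worker in the 1:1 model.
func (p *workerPool) claimIdleWorker() *managedWorker {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, w := range p.workers {
		if w.activeSessions == 0 {
			w.activeSessions++
			return w
		}
	}
	return nil // no idle worker: caller creates one or keeps waiting on the semaphore
}
```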
…rement

- Refactor RetireWorker to release exactly 'activeSessions' slots, ensuring
  pre-warmed workers (0 sessions) don't cause 'reverse leaks' and
  multi-session workers (if any) clean up fully.
- Fix RetireWorkerIfNoSessions to correctly decrement activeSessions and
  release the semaphore slot upon CreateSession failure, preventing permanent
  capacity leaks.
- Add regression tests for CreateSession failure leaks, multi-session retirement,
  and atomic claim races.
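
Continuing the sketch above, the retirement accounting could look like the following; releaseWorkerSem is the commit's name for the semaphore release, the rest is illustrative:

```go
// retireWorker releases exactly as many semaphore slots as the worker held,
// so a pre-warmed worker (0 sessions) releases nothing and a multi-session
// worker (if any) releases every slot it occupied.
func (p *workerPool) retireWorker(w *managedWorker, releaseWorkerSem func()) {
	p.mu.Lock()
	held := w.activeSessions
	w.activeSessions = 0
	p.mu.Unlock()
	for i := 0; i < held; i++ {
		releaseWorkerSem()
	}
}
```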
Resolved conflicts in:
- controlplane/control.go: Combined rate limiter from main with queue timeout config from PR.
- controlplane/flight_ingress.go: Included rate limiter, removed obsolete IsMaxWorkersError.
- server/flightsqlingress/ingress.go: Updated Options to include RateLimiter.
- server/flightsqlingress/ingress_test.go: Kept new rate limiter tests from main, removed deleted retry tests.
- Restore forced idle session reaping during Flight SQL bootstrap to prevent
  'sticky' sessions from blocking the queue indefinitely.
- Increase maxWorkers from 1 to 3 in TestFlightIngressIncludeSchemaLowWorkerRegression.
  The test was previously passing due to a TOCTOU race in main that ignored
  the worker limit; with strict enforcement, 1 worker is insufficient for
  3 concurrent Flight users because sessions are not destroyed immediately.
fuziontech merged commit d19d016 into main on Feb 18, 2026
11 checks passed
fuziontech deleted the fix/flight-executor-nil-panic branch on February 18, 2026 at 00:19