126 changes: 53 additions & 73 deletions pkg/epp/flowcontrol/contracts/registry.go
@@ -21,104 +21,84 @@ import (
"sigs.k8s.io/gateway-api-inference-extension/pkg/epp/flowcontrol/types"
)

// FlowRegistry is the complete interface for the global flow control plane.
// It composes the client-facing data path interface and the administrative interface. A concrete implementation of this
// interface is the single source of truth for all flow control state.
//
// # Conformance: Implementations MUST be goroutine-safe.
//
// # Flow Lifecycle
//
// A flow instance, identified by its immutable `types.FlowKey`, has a lease-based lifecycle managed by this interface.
// Any implementation MUST adhere to this lifecycle:
//
// 1. Lease Acquisition: A client calls `WithConnection` to acquire a lease. This signals that the flow is in use and
// protects it from garbage collection. If the flow does not exist, it is created Just-In-Time (JIT).
// 2. Active State: A flow is "Active" as long as its lease count is greater than zero.
// 3. Lease Release: The lease is released automatically when the `WithConnection` callback returns.
// When the lease count drops to zero, the flow becomes "Idle".
// 4. Garbage Collection: The implementation MUST automatically garbage collect a flow after it has remained
// continuously Idle for a configurable duration.
//
// # System Invariants
//
// Concrete implementations MUST uphold the following invariants:
//
// 1. Flow Instance Uniqueness: Each unique `types.FlowKey` (`ID` + `Priority`) corresponds to exactly one managed flow
// instance.
// 2. Capacity Partitioning: Global and per-band capacity limits must be uniformly partitioned across all Active
// shards.
//
// # Shard Lifecycle
//
// When a shard is decommissioned, it is marked inactive (`IsActive() == false`) to prevent new enqueues. The shard
// continues to drain and is deleted only after it is empty.
type FlowRegistry interface {
FlowRegistryClient
FlowRegistryAdmin
}

// FlowRegistryAdmin defines the administrative interface for the global control plane.
//
// # Dynamic Update Strategies
//
// The contract specifies behaviors for handling dynamic updates, prioritizing stability and correctness:
//
// - Immutable Flow Identity (`types.FlowKey`): The system treats the `FlowKey` (`ID` + `Priority`) as the immutable
// identifier. Changing the priority of traffic requires registering a new `FlowKey`. The old flow instance is
// automatically garbage collected when Idle. This design eliminates complex priority migration logic.
//
// - Graceful Draining (Shard Scale-Down): Decommissioned shards enter a Draining state. They stop accepting new
// requests but continue to be processed for dispatch until empty.
//
// - Self-Balancing (Shard Scale-Up): When new shards are added, the `controller.FlowController`'s distribution logic
// naturally utilizes them, funneling new requests to the less-loaded shards. Existing queued items are not
// migrated.
type FlowRegistryAdmin interface {
// RegisterOrUpdateFlow handles the registration of a new flow instance or the update of an existing instance's
// specification (for the same `types.FlowKey`). The operation is atomic across all shards.
//
// Since the `FlowKey` (including `Priority`) is immutable, this method cannot change a flow's priority.
// To change priority, the caller should simply register the new `FlowKey`; the old flow instance will be
// automatically garbage collected when it becomes Idle.
//
// Returns errors wrapping `ErrFlowIDEmpty`, `ErrPriorityBandNotFound`, or internal errors if plugin instantiation
// fails.
RegisterOrUpdateFlow(spec types.FlowSpecification) error

// UpdateShardCount dynamically adjusts the number of internal state shards.
//
// The implementation MUST atomically re-partition capacity allocations across all active shards.
// Returns an error wrapping `ErrInvalidShardCount` if `n` is not positive.
UpdateShardCount(n int) error

// Stats returns globally aggregated statistics for the entire `FlowRegistry`.
Stats() AggregateStats

// ShardStats returns a slice of statistics, one for each internal shard.
ShardStats() []ShardStats
}
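For illustration, here is a minimal administrative sketch from a caller's perspective. The helper name `rebalanceShards` is hypothetical and not part of this PR; only `UpdateShardCount`, `Stats`, and `ShardStats` come from the interface above, and the caller is assumed to import the contracts package.

func rebalanceShards(admin contracts.FlowRegistryAdmin, n int) error {
	// UpdateShardCount returns an error wrapping ErrInvalidShardCount if n is not positive.
	if err := admin.UpdateShardCount(n); err != nil {
		return err
	}
	_ = admin.Stats() // Globally aggregated statistics.
	for _, shard := range admin.ShardStats() {
		_ = shard // One entry per internal shard; useful for spotting hot or stuck shards.
	}
	return nil
}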

// FlowRegistryClient defines the primary, client-facing interface for the registry.
// This is the interface that the `controller.FlowController`'s data path depends upon.
type FlowRegistryClient interface {
// WithConnection manages a scoped, leased session for a given flow.
// It is the primary and sole entry point for interacting with the data path.
//
// This method handles the entire lifecycle of a flow connection:
// 1. Just-In-Time (JIT) Registration: If the flow for the given `types.FlowKey` does not exist, it is created and
// registered automatically.
// 2. Lease Acquisition: It acquires a lifecycle lease, protecting the flow from garbage collection.
// 3. Callback Execution: It invokes the provided function `fn`, passing in a temporary `ActiveFlowConnection` handle.
// 4. Guaranteed Lease Release: It ensures the lease is safely released when the callback function returns.
//
// This functional, callback-based approach makes resource leaks impossible, as the caller is not responsible for
// manually closing the connection.
//
// Errors returned by the callback `fn` are propagated up.
// Returns `ErrFlowIDEmpty` if the provided key has an empty ID.
WithConnection(key types.FlowKey, fn func(conn ActiveFlowConnection) error) error
Contributor Author:
The alternative here is:

FlowRegistryClient.Connect with a corresponding ActiveFlowConnection.Close. The functional approach is leak-proof and more architecturally sound, but I am open to feedback here. Connect/Close also works just fine with flow control's synchronous blocking model.

e.g.,

func (fc *FlowController) EnqueueAndWait(...) (...) {
	conn := fc.registryClient.Connect(req.FlowKey())
	defer conn.Close()

	// ... rest of logic proceeds as normal
}

Also, this method should eventually accept and respect caller context. I am not doing that in this PR, as it will need to be done comprehensively across the flow control module contracts. As of right now, none of our functions have unbounded blocking, so this is more for contract hardening and tracing than correctness at the moment.

I will do this in a separate PR.

}

// ActiveFlowConnection represents a handle to a temporary, leased session on a flow.
// It provides a safe, scoped entry point to the registry's sharded data plane.
//
// Consumers MUST check `RegistryShard.IsActive()` before routing new work to a shard to avoid sending requests to a
// Draining shard.
type ShardProvider interface {
// Shards returns a slice of accessors, one for each internal state shard (Active and Draining).
// Callers should not modify the returned slice.
// An `ActiveFlowConnection` instance is only valid for the duration of the `WithConnection` callback from which it was
// received. Callers MUST NOT store a reference to this object or use it after the callback returns.
// Its purpose is to ensure that any interaction with the flow's state (e.g., accessing its shards and queues) occurs
// safely while the flow is guaranteed to be protected from garbage collection.
type ActiveFlowConnection interface {
// Shards returns a stable snapshot of accessors for all internal state shards (both Active and Draining).
// Consumers MUST check `RegistryShard.IsActive()` before routing new work to a shard from this slice.
Shards() []RegistryShard
}
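For orientation, a minimal data-path sketch from a hypothetical caller. Only `WithConnection`, `Shards`, and `IsActive` come from the contracts above; the helper name and the routing logic are illustrative assumptions, not part of this PR.

func enqueueViaLease(client contracts.FlowRegistryClient, key types.FlowKey) error {
	// The lease held by WithConnection protects the flow from garbage
	// collection while we inspect shards.
	return client.WithConnection(key, func(conn contracts.ActiveFlowConnection) error {
		for _, shard := range conn.Shards() {
			if !shard.IsActive() {
				continue // Never route new work to a Draining shard.
			}
			// ... locate this flow's queue on the shard and enqueue ...
			return nil
		}
		return nil // No Active shard found; a real caller would surface an error.
	})
}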

// RegistryShard defines the interface for a single slice (shard) of the `FlowRegistry`'s state.
// A shard acts as an independent, parallel execution unit, allowing the system's dispatch logic to scale horizontally.
//
// # Conformance: Implementations MUST be goroutine-safe.
type RegistryShard interface {
// ID returns a unique identifier for this shard, which must remain stable for the shard's lifetime.
ID() string
44 changes: 44 additions & 0 deletions pkg/epp/flowcontrol/registry/connection.go
Contributor Author:
This can simply be in registry.go, but I plan on expanding the set of functionality exposed from ActiveConnection, and I want to keep registry.go lean. I don't have a strong opinion here though. We should do whatever is best for readability / maintainability.

@@ -0,0 +1,44 @@
/*
Copyright 2025 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package registry

import (
"sigs.k8s.io/gateway-api-inference-extension/pkg/epp/flowcontrol/contracts"
"sigs.k8s.io/gateway-api-inference-extension/pkg/epp/flowcontrol/types"
)

// connection is the concrete, un-exported implementation of the `contracts.ActiveFlowConnection` interface.
// It is a temporary handle created for the duration of a single `WithConnection` call.
type connection struct {
registry *FlowRegistry
key types.FlowKey
}

var _ contracts.ActiveFlowConnection = &connection{}

// Shards returns a stable snapshot of accessors for all internal state shards.
func (c *connection) Shards() []contracts.RegistryShard {
c.registry.mu.RLock()
defer c.registry.mu.RUnlock()

// Return a copy to ensure the caller cannot modify the registry's internal slice.
shardsCopy := make([]contracts.RegistryShard, len(c.registry.allShards))
for i, s := range c.registry.allShards {
shardsCopy[i] = s
}
return shardsCopy
}
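A small test-style sketch of the snapshot guarantee, placed in a hypothetical connection_test.go in the same package (with import "testing"). It assumes `FlowRegistry`'s zero value is usable here (e.g., `mu` is a `sync.RWMutex` and `allShards` may be nil); both are assumptions for illustration, not guarantees of this package.

func TestShardsSnapshotIsDetached(t *testing.T) {
	c := &connection{registry: &FlowRegistry{}}
	snapshot := c.Shards()
	snapshot = append(snapshot, nil) // Mutating the returned copy...
	if len(c.Shards()) != 0 {        // ...must not leak into the registry.
		t.Fatal("snapshot mutation affected the registry's internal slice")
	}
}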
119 changes: 13 additions & 106 deletions pkg/epp/flowcontrol/registry/doc.go
Contributor Author:
Dramatically shortened for doc maintainability (plus a lot of it became obsolete with the new lease-based model). Following the principle of first disclosure by moving details closer to the relevant types.

@@ -14,116 +14,23 @@ See the License for the specific language governing permissions and
limitations under the License.
*/

// Package registry provides the concrete implementation of the `contracts.FlowRegistry` interface.
//
// # Architecture: A Sharded, Concurrent Control Plane
//
// This package implements the flow control state machine using a sharded architecture to enable scalable, parallel
// request processing. It separates the orchestration control plane from the request-processing data plane.
//
// - `FlowRegistry`: The top-level orchestrator (Control Plane). It manages the lifecycle of all flows and shards,
// handling registration, garbage collection, and scaling operations.
// - `registryShard`: A slice of the data plane. It holds a partition of the total state and provides a
// read-optimized, concurrent-safe view for a single `controller.FlowController` worker.
// - `managedQueue`: A stateful decorator around a `framework.SafeQueue`. It is the fundamental unit of state,
// responsible for atomically tracking statistics (e.g., length and byte size) and ensuring data consistency.
//
// # Concurrency Model
//
// The registry uses a multi-layered strategy to maximize performance on the hot path while ensuring correctness for
// administrative tasks.
// (See the `FlowRegistry` struct documentation for detailed locking rules).
package registry