
Commit 5dbd716

Lukasa authored and Mordil committed
Implement a simple Redis Connection Pool.
Motivation:

Users of Redis will frequently want to run queries in parallel while bounding the number of connections they use. They will also often want to reuse connections without having to manage those connections themselves. These are jobs usually done by a connection pool. This new connection pool will conform to `RedisClient`, so a pool of clients and a single connection are interchangeable.

Connection pools come in a wide range of shapes and sizes. In NIO applications and frameworks, there are a number of questions that have to be answered by any pool implementation:

1. Is the pool safe to share across EventLoops: that is, is its interface thread-safe?
2. Is the pool _tied_ to an EventLoop: that is, can the pool return connections that belong to many event loops, or just one?
3. If the pool is not tied to an EventLoop, is it possible to influence its choice of event loop for a given connection?

Question 1 is straightforward: it is almost always a trivial win to ensure that the public interface to a connection pool is thread-safe. NIO makes it possible to do this fairly cheaply in the case when the pool is only used on a single loop.

Question 2 is a lot harder. Pools that are not tied to a specific EventLoop have two advantages. The first is that it is easier to bound maximum concurrency by simply configuring the pool, instead of needing to do math on the number of pools and the number of event loops. The second is that non-tied pools can arrange to keep busy applications close to this maximum concurrency regardless of how the application spreads its load across loops.

However, pools that are tied to a specific EventLoop have advantages too. The first is implementation simplicity. Because they always serve connections on a single EventLoop, they can keep all of their state on that event loop as well. This avoids the need to acquire locks on that loop, making internal state management easier and more obviously correct, without having to worry about how long locks are held. The second advantage is that they can serve latency-sensitive use-cases without needing to do the work of (3). In cases where latency is very important, it can be valuable to ensure that any Channel that needs a connection can get one on the same event loop as itself. This avoids thread-hopping when communicating between the pooled connection and the user connection, reducing the latency of operations.

Given the simplicity and latency benefits (which we deem particularly important for Redis use-cases), we concluded that a good initial implementation is a pool with a thread-safe interface that is tied to a single EventLoop. This allows a compact, easy-to-verify implementation with great low-latency performance, one that can still be accessed from any EventLoop in cases where latency is not a concern.

Modifications:

- Add new internal `ConnectionPool` object
- Add new `RedisConnectionPool` object
- Add new `RedisConnectionPoolError` type
- Add tests for new types

Results:

Users will have access to a pooled Redis client.
1 parent 95ce2cd commit 5dbd716
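
Because `RedisConnectionPool` conforms to `RedisClient`, code written against the protocol works unchanged with either a single connection or a pool. A minimal illustrative sketch (the `ping(using:)` helper is hypothetical, not part of this commit):

// Hypothetical helper: any RedisClient works here, pooled or not.
func ping(using client: RedisClient) -> EventLoopFuture<RESPValue> {
    return client.send(command: "PING", with: [])
}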

File tree

10 files changed: +1823 additions, 0 deletions


Sources/RediStack/Connection Pool/ConnectionPool.swift

Lines changed: 483 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
//===----------------------------------------------------------------------===//
//
// This source file is part of the RediStack open source project
//
// Copyright (c) 2020 RediStack project authors
// Licensed under Apache License v2.0
//
// See LICENSE.txt for license information
// See CONTRIBUTORS.txt for the list of RediStack project authors
//
// SPDX-License-Identifier: Apache-2.0
//
//===----------------------------------------------------------------------===//

import protocol Foundation.LocalizedError

/// If something goes wrong with any part of the Redis connection pool, errors of this type will be thrown.
public struct RedisConnectionPoolError: LocalizedError, Equatable {
    private var baseError: BaseError

    init(baseError: BaseError) {
        self.baseError = baseError
    }

    internal enum BaseError: Equatable {
        case poolClosed
        case timedOutWaitingForConnection
        case noAvailableConnectionTargets
    }

    /// The connection pool has already been closed, but the user has attempted to perform another operation on it.
    public static let poolClosed = RedisConnectionPoolError(baseError: .poolClosed)

    /// The timeout for waiting for a connection expired before we got a connection.
    public static let timedOutWaitingForConnection = RedisConnectionPoolError(baseError: .timedOutWaitingForConnection)

    /// The pool has been configured without available connection targets, so there is nowhere to connect to.
    public static let noAvailableConnectionTargets = RedisConnectionPoolError(baseError: .noAvailableConnectionTargets)
}
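
Since the error type is `Equatable` and exposes its cases as static members, callers can compare a caught error directly against the known values. A hypothetical handling sketch (the `pool` variable is assumed to exist):

// Hypothetical sketch: comparing a pool error against the known static cases.
pool.send(command: "PING", with: []).whenFailure { error in
    if let poolError = error as? RedisConnectionPoolError, poolError == .poolClosed {
        // The pool was closed before the command ran; recreate it or surface the failure.
    }
}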
Lines changed: 283 additions & 0 deletions
@@ -0,0 +1,283 @@
//===----------------------------------------------------------------------===//
//
// This source file is part of the RediStack open source project
//
// Copyright (c) 2020 RediStack project authors
// Licensed under Apache License v2.0
//
// See LICENSE.txt for license information
// See CONTRIBUTORS.txt for the list of RediStack project authors
//
// SPDX-License-Identifier: Apache-2.0
//
//===----------------------------------------------------------------------===//

import struct Foundation.UUID
import NIO
import NIOConcurrencyHelpers
import Logging

/// A `RedisConnectionPool` is an implementation of `RedisClient` backed by a pool of connections to Redis,
/// rather than a single one.
///
/// `RedisConnectionPool` uses a pool of connections on a single `EventLoop` to manage its activity. This
/// pool may vary in size and strategy, including how many active connections it tries to manage at any one
/// time and how it responds to demand for connections beyond its upper limit.
///
/// Note that `RedisConnectionPool` is entirely thread-safe, even though all of its connections belong to a
/// single `EventLoop`: if callers call the API from a different `EventLoop` (or from no `EventLoop` at all)
/// `RedisConnectionPool` will ensure that the call is dispatched to the correct loop.
public class RedisConnectionPool {
    // This needs to be var because we hand it a closure that references us strongly. This also
    // establishes a reference cycle which we need to break.
    // Aside from on init, all other operations on this var must occur on the event loop.
    private var pool: ConnectionPool?

    /// This needs to be var because it is updatable and mutable. As a result, aside from init,
    /// all use of this var must occur on the event loop.
    private var serverConnectionAddresses: ConnectionAddresses

    private let loop: EventLoop

    private var poolLogger: Logger

    /// This lock exists only to access the pool logger. We don't use the pool logger here at all, but
    /// we need to be able to give it to users in a way that is thread-safe, as users can also set it from
    /// any thread they want.
    private let poolLoggerLock: Lock

    private let connectionPassword: String?

    private let connectionLogger: Logger

    private let connectionTCPClient: ClientBootstrap?

    private let poolID: UUID

    /// Create a new `RedisConnectionPool`.
    ///
    /// - parameters:
    ///     - serverConnectionAddresses: The set of Redis servers to which this pool is initially willing to connect.
    ///         This set can be updated over time.
    ///     - loop: The event loop to which this pooled client is tied.
    ///     - maximumConnectionCount: The maximum number of connections for this pool, either to be preserved or as a hard limit.
    ///     - minimumConnectionCount: The minimum number of connections to preserve in the pool. If the pool is mostly idle
    ///         and the Redis servers close these idle connections, the `RedisConnectionPool` will initiate new outbound
    ///         connections proactively to avoid the number of available connections dropping below this number. Defaults to `1`.
    ///     - connectionPassword: The password to use to connect to the Redis servers in this pool.
    ///     - connectionLogger: The `Logger` to pass to each connection in the pool.
    ///     - connectionTCPClient: The base `ClientBootstrap` to use to create pool connections, if a custom one is in use.
    ///     - poolLogger: The `Logger` used by the connection pool itself.
    ///     - connectionBackoffFactor: Used when connection attempts fail to control the exponential backoff. This is a multiplicative
    ///         factor: each connection attempt will be delayed by this amount times the previous delay.
    ///     - initialConnectionBackoffDelay: If a TCP connection attempt fails, this is the first backoff value on the reconnection attempt.
    ///         Subsequent backoffs are computed by compounding this value by `connectionBackoffFactor`.
    public init(
        serverConnectionAddresses: [SocketAddress],
        loop: EventLoop,
        maximumConnectionCount: RedisConnectionPoolSize,
        minimumConnectionCount: Int = 1,
        connectionPassword: String? = nil,
        connectionLogger: Logger = .init(label: "RediStack.RedisConnection"),
        connectionTCPClient: ClientBootstrap? = nil,
        poolLogger: Logger = .init(label: "RediStack.RedisConnectionPool"),
        connectionBackoffFactor: Float32 = 2,
        initialConnectionBackoffDelay: TimeAmount = .milliseconds(100)
    ) {
        self.poolID = UUID()
        self.loop = loop
        self.serverConnectionAddresses = ConnectionAddresses(initialAddresses: serverConnectionAddresses)
        self.connectionPassword = connectionPassword

        var connectionLogger = connectionLogger
        connectionLogger[metadataKey: String(describing: RedisConnectionPool.self)] = "\(self.poolID)"
        self.connectionLogger = connectionLogger

        var poolLogger = poolLogger
        poolLogger[metadataKey: String(describing: RedisConnectionPool.self)] = "\(self.poolID)"
        self.poolLogger = poolLogger

        self.connectionTCPClient = connectionTCPClient
        self.poolLoggerLock = Lock()

        self.pool = ConnectionPool(
            maximumConnectionCount: maximumConnectionCount.size,
            minimumConnectionCount: minimumConnectionCount,
            leaky: maximumConnectionCount.leaky,
            loop: loop,
            logger: poolLogger,
            connectionBackoffFactor: connectionBackoffFactor,
            initialConnectionBackoffDelay: initialConnectionBackoffDelay,
            connectionFactory: self.connectionFactory(_:)
        )
    }
}
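
For reference, a minimal construction sketch. The event loop group, address, and sizing values here are illustrative assumptions, not part of the diff:

// Hypothetical setup: a pool of up to four connections tied to one event loop.
let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let pool = RedisConnectionPool(
    serverConnectionAddresses: [try SocketAddress(ipAddress: "127.0.0.1", port: 6379)],
    loop: group.next(),
    maximumConnectionCount: .maximumActiveConnections(4)
)
pool.activate()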

// MARK: General helpers.
extension RedisConnectionPool {
    public func activate() {
        self.loop.execute {
            self.pool?.activate()
        }
    }

    public func close() {
        self.loop.execute {
            self.pool?.close()

            // This breaks the cycle between us and the pool.
            self.pool = nil
        }
    }

    /// Updates the list of valid connection addresses.
    ///
    /// This does not invalidate existing connections: as long as those connections continue to stay up, they will be kept by
    /// this client. However, no new connections will be made to any endpoint that is not in `newAddresses`.
    public func updateConnectionAddresses(_ newAddresses: [SocketAddress]) {
        self.poolLoggerLock.withLockVoid {
            self.poolLogger.info("Updated pool with new addresses", metadata: ["new-addresses": "\(newAddresses)"])
        }

        self.loop.execute {
            self.serverConnectionAddresses.update(newAddresses)
        }
    }

    private func connectionFactory(_ targetLoop: EventLoop) -> EventLoopFuture<RedisConnection> {
        // Validate the loop invariants.
        self.loop.preconditionInEventLoop()
        targetLoop.preconditionInEventLoop()

        guard let nextTarget = self.serverConnectionAddresses.nextTarget() else {
            // No valid connection target, we'll fail.
            return targetLoop.makeFailedFuture(RedisConnectionPoolError.noAvailableConnectionTargets)
        }

        return RedisConnection.connect(
            to: nextTarget,
            on: targetLoop,
            password: self.connectionPassword,
            logger: self.connectionLogger,
            tcpClient: self.connectionTCPClient
        )
    }
}
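
`updateConnectionAddresses(_:)` is safe to call from any thread, which makes it suitable for service-discovery callbacks. A hypothetical call site (the replacement address is a placeholder):

// Hypothetical: point the pool at a replacement server without dropping live connections.
pool.updateConnectionAddresses([try SocketAddress(ipAddress: "10.0.0.2", port: 6379)])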

// MARK: RedisClient conformance
extension RedisConnectionPool: RedisClient {
    public var eventLoop: EventLoop {
        return self.loop
    }

    public var logger: Logger {
        return self.poolLoggerLock.withLock {
            return self.poolLogger
        }
    }

    public func setLogging(to logger: Logger) {
        var logger = logger
        logger[metadataKey: String(describing: RedisConnectionPool.self)] = "\(self.poolID)"

        self.poolLoggerLock.withLock {
            self.poolLogger = logger

            // We must enqueue this before we drop the lock to prevent a race on setting this logger.
            self.loop.execute {
                self.pool?.setLogger(logger)
            }
        }
    }

    public func send(command: String, with arguments: [RESPValue]) -> EventLoopFuture<RESPValue> {
        // Establish event loop context then jump to the in-loop version.
        return self.loop.flatSubmit {
            return self._send(command: command, with: arguments)
        }
    }

    private func _send(command: String, with arguments: [RESPValue]) -> EventLoopFuture<RESPValue> {
        self.loop.preconditionInEventLoop()

        guard let pool = self.pool else {
            return self.loop.makeFailedFuture(RedisConnectionPoolError.poolClosed)
        }

        // For now we have to default the deadline. For maximum compatibility with the existing implementation,
        // we use a fairly long timeout: one minute.
        return pool.leaseConnection(deadline: .now() + .seconds(60)).flatMap { connection in
            connection.sendCommandsImmediately = true
            return connection.send(command: command, with: arguments).always { _ in
                pool.returnConnection(connection)
            }
        }
    }
}
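
Each `send` leases a connection for the duration of a single command and returns it to the pool when the command's future completes, whether it succeeds or fails. A hypothetical call site, assuming RediStack's `RESPValue(bulk:)` convenience initializer:

// Hypothetical usage: the lease/return cycle is invisible to the caller.
let reply = pool.send(command: "SET", with: [RESPValue(bulk: "counter"), RESPValue(bulk: "1")])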

// MARK: Helper for round-robin connection establishment
extension RedisConnectionPool {
    /// A helper structure for valid connection addresses. This structure implements round-robin connection establishment.
    private struct ConnectionAddresses {
        private var addresses: [SocketAddress]

        private var index: Array<SocketAddress>.Index

        init(initialAddresses: [SocketAddress]) {
            self.addresses = initialAddresses
            self.index = self.addresses.startIndex
        }

        mutating func nextTarget() -> SocketAddress? {
            // Early exit on 0, makes life easier.
            guard self.addresses.count > 0 else {
                self.index = self.addresses.startIndex
                return nil
            }

            // It's an invariant of this function that the index is always valid for subscripting the collection.
            let nextTarget = self.addresses[self.index]
            self.addresses.formIndex(after: &self.index)
            if self.index == self.addresses.endIndex {
                self.index = self.addresses.startIndex
            }
            return nextTarget
        }

        mutating func update(_ newAddresses: [SocketAddress]) {
            self.addresses = newAddresses
            self.index = self.addresses.startIndex
        }
    }
}
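
The round-robin contract, sketched from inside the enclosing type (`ConnectionAddresses` is private, and the addresses here are hypothetical values):

// Behavior sketch with two addresses A and B:
var targets = ConnectionAddresses(initialAddresses: [addressA, addressB])
targets.nextTarget() // addressA
targets.nextTarget() // addressB
targets.nextTarget() // addressA again (the index wraps at endIndex)
targets.update([addressC])
targets.nextTarget() // addressC (update(_:) resets the cursor)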

/// `RedisConnectionPoolSize` controls how the maximum number of connections in a pool is interpreted.
public enum RedisConnectionPoolSize {
    /// The pool will allow no more than this number of connections to be "active" (that is, connecting, in-use,
    /// or pooled) at any one time. This will force possible future users of new connections to wait until a currently
    /// active connection becomes available by being returned to the pool, but provides a hard upper limit on concurrency.
    case maximumActiveConnections(Int)

    /// The pool will only store up to this number of connections that are not currently in-use. However, if the pool is
    /// asked for more connections at one time than this number, it will create new connections to serve those waiting for
    /// connections. These "extra" connections will not be preserved: while they will be used to satisfy those waiting for new
    /// connections if needed, they will not be preserved in the pool if load drops low enough. This does not provide a hard
    /// upper bound on concurrency, but does provide an upper bound on low-level load.
    case maximumPreservedConnections(Int)

    internal var size: Int {
        switch self {
        case .maximumActiveConnections(let size), .maximumPreservedConnections(let size):
            return size
        }
    }

    internal var leaky: Bool {
        switch self {
        case .maximumActiveConnections:
            return false
        case .maximumPreservedConnections:
            return true
        }
    }
}
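
The two cases trade a hard concurrency cap against burst absorption. A short contrast (the values are illustrative):

// Hard cap: at most 8 connections ever exist; extra requests wait for a lease.
let strict = RedisConnectionPoolSize.maximumActiveConnections(8)
// Soft cap: bursts may create temporary connections beyond 8; only 8 idle ones are kept.
let leaky = RedisConnectionPoolSize.maximumPreservedConnections(8)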
