Conversation
❌ Benchmark comparison failed

Summary: New baseline 'pull_request' is WORSE than the 'main' baseline thresholds.

Full Benchmark Comparison, comparing results between 'main' and 'pull_request' (ValkeyBenchmarks). Malloc (total) results are within the specified thresholds for:

- Client: GET benchmark
- Client: GET benchmark | parallel 20 | 20 concurrent connections
- Client: GET benchmark | parallel 50 | 20 concurrent connections
- Connection: GET benchmark
- HashSlot – {user}.whatever
- ValkeyCommandEncoder – Command with 7 words
- ValkeyCommandEncoder – Simple GET
- ValkeyCommandEncoder – Simple MGET 15 keys
```diff
 case .hostname(let host, let port):
-    future = connect.connect(host: host, port: port)
+    let socketAddress = try! SocketAddress(ipAddress: host, port: port)
+    future = connect.connect(to: socketAddress)
```
Can see why you did this, but can we keep it for another PR? It would be good to still support hostnames.
This was because of a bug in a pre-release OS version. Will remove :)
```swift
@usableFromInline
let continuation: CheckedContinuation<T, any Error>
@usableFromInline
let lock: Mutex<RequestState>
```
You set up `RequestState` to be `AtomicRepresentable`, but are using a `Mutex`.
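For illustration, here is a minimal sketch of the alternative the reviewer seems to be hinting at. The cases of `RequestState` below are made up (the PR's real type is not shown here): if the state can be modeled as a raw-value enum, declaring `AtomicRepresentable` lets it live in a lock-free `Atomic` from the `Synchronization` module instead of behind a `Mutex`.

```swift
import Synchronization

// Hypothetical simplified RequestState; the PR's real type may carry
// associated values, in which case a Mutex is actually required.
enum RequestState: UInt8, AtomicRepresentable {
    case waiting = 0
    case fulfilled = 1
}

// Raw-value enums whose raw value is atomic can adopt
// AtomicRepresentable, so the state fits in a lock-free Atomic.
let state = Atomic<RequestState>(.waiting)

// Transition waiting -> fulfilled without taking a lock.
let (exchanged, original) = state.compareExchange(
    expected: .waiting,
    desired: .fulfilled,
    ordering: .acquiringAndReleasing
)
```

Whether this applies depends on the real `RequestState`: a case that carries the connection as an associated value is not atomically representable, and then the `Mutex` is the right tool.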
```diff
 defer { self.connectionPool.releaseConnection(connection) }
-return try await operation(connection)
+fatalError()
```
And how are you planning to support `withConnection` now that the connection request is `ValkeyConnectionRequest<RESPToken>`?
```swift
self.lock.withLock { state in
    state = .onConnection(connection)
}
self.onConnection(connection, self)
```
`complete` and `succeed` seem to do different things for a successful result; which is the correct one?
@adam-fowler I have discussed with @nilanshu-sharma that this is a feature we can ship after 1.0, as it does not affect the API at all. Are you on board with this?
Yeah I'm fine with that
Motivation
`ConnectionPool` offers a fast mode for single requests. For this to work we have to leave the paradigms of structured concurrency: it is important that the `ConnectionRequest` is fulfilled synchronously, without any continuations.
Using the `withConnection` API, we have the following context switch scenario:

1. `withConnection` is reached -> acquire lock -> get next waiting request -> fulfill continuation for connection
2. `withConnection` closure is invoked
3. `send` is invoked

If we don't use structured concurrency here, we can have the following flow:
Not a single context switch was necessary in this scenario.
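To make the handover concrete, here is a hedged sketch of the callback-based pattern the description refers to. All names below are illustrative, not the PR's actual API: the point is that the pool fulfills a waiting request by invoking a plain closure inline, instead of resuming a `CheckedContinuation`, so the connection is handed over synchronously on the calling thread.

```swift
// Illustrative sketch only; the PR's ValkeyConnectionRequest has a
// different, richer shape.
final class ConnectionRequest<Connection> {
    private let onConnection: (Connection) -> Void

    init(onConnection: @escaping (Connection) -> Void) {
        self.onConnection = onConnection
    }

    // Called by the pool, typically while it still holds its own lock.
    // The closure runs inline on the caller's thread: no continuation
    // resume, no executor hop, no context switch.
    func succeed(_ connection: Connection) {
        self.onConnection(connection)
    }
}
```

With a continuation, resuming schedules the awaiting task back onto an executor; with the callback, the pool's fulfilling thread simply keeps running into the request's completion logic.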
Performance
In local testing I can see about a 20% improvement (throughput and wall clock time) for all the client benchmarks.
Changes
- Adds `ValkeyConnectionRequest`, which uses unstructured callbacks for improved performance