sql: set default initial_retry_backoff_for_read_committed to 2ms
Testing was performed with a 3-node cluster running the following
transaction repeatedly from 32 connections, all trying to lock the same
row:
```sql
BEGIN;
SELECT 1;
SELECT k, v FROM kv WHERE k IN (0) FOR UPDATE;
-- 10 ms pause
UPSERT INTO kv VALUES (0, <random byte>);
COMMIT;
```
This workload was produced using the following commands:
```shell
cockroach workload init kv \
--sequential --cycle-length 1 --insert-count 1 \
--data-loader INSERT --user demo --db defaultdb
cockroach workload run kv \
--sequential --cycle-length 1 --write-seq S0 \
--del-percent 0 --read-percent 0 --span-percent 0 \
--isolation-level read_committed \
--sel1-writes --sfu-writes --sfu-wait-delay 10ms \
--user demo --db defaultdb --concurrency 32
```
For this workload, setting `initial_retry_backoff_for_read_committed`
below 500μs caused more than 100 retries, and above 10ms caused
excessively high PMax query latency. 1ms, 2ms, 4ms all performed well.
We pick 2ms, as this should be just at or below the typical statement
execution time for most workloads. 2ms might be too low for
longer-running statements, though it is still an improvement over 0ms,
the current default.
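Since this is a session variable, workloads for which 2ms is a poor fit can override the new default themselves. A sketch (the 10ms value below is purely illustrative, not a recommendation):

```sql
-- Override the default backoff for this session, e.g. for a workload
-- dominated by longer-running statements (illustrative value):
SET initial_retry_backoff_for_read_committed = '10ms';
```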
Release note (sql change): Set the default value of session variable
`initial_retry_backoff_for_read_committed` to 2ms. Testing has shown
that some high-contention workloads running under Read Committed
isolation benefit from exponential backoff. 2ms might be too quick of an
initial backoff for longer-running statements, but setting this much
higher than the normal duration of execution will cause excessive delay.