
Commit a2ad747

docs: enhance performance guide with additional configuration options
1 parent cc0a9b7 commit a2ad747

File tree

1 file changed (+42, -0)


app/spicedb/ops/performance/page.mdx

Lines changed: 42 additions & 0 deletions
@@ -2,6 +2,12 @@ import { Callout } from "nextra/components";
# Improving Performance

<Callout type="info">
SpiceDB's server-side configuration defaults favor correctness over raw speed.
API requests, however, default to `minimize_latency` consistency for read operations, favoring cache utilization over strict freshness.
The flags documented on this page allow you to further tune SpiceDB for your specific workload.
</Callout>

## By enabling cross-node communication

SpiceDB can be deployed in a clustered configuration where multiple nodes work together to serve API requests. In such a configuration, and for the CheckPermissions API, enabling a feature called **dispatch** allows nodes to break down one API request into smaller "questions" and forward those to other nodes within the cluster. This helps reduce latency and improve overall performance.
@@ -60,6 +66,21 @@ spicedb serve ...
The `upstream-addr` should be the DNS address of the load balancer behind which _all_ SpiceDB nodes are reachable on the default dispatch port, `:50053`.

### Dispatch Chunk Size

The `--dispatch-chunk-size` flag controls the maximum number of object IDs included in a single dispatched request.
This is particularly impactful for lookup operations (such as LookupResources and LookupSubjects) that need to process many objects.

```sh
spicedb serve \
  --dispatch-chunk-size=100
```

<Callout type="info">
Larger chunk sizes reduce dispatch overhead but increase memory usage per request.
Start with the default (100) and increase if you observe high dispatch latency with large lookup operations.
</Callout>

## By enabling Materialize

[Materialize] is a separate service that allows for the precomputation of permission query results.
@@ -91,3 +112,24 @@ To configure the schema cache, use the following flags:
# When false: always uses JIT caching
--enable-experimental-watchable-schema-cache=false
```

## By tuning revision quantization

The `--datastore-revision-quantization-interval` and `--datastore-revision-quantization-max-staleness-percent` flags control how SpiceDB groups revisions for caching.
Increasing these values improves cache hit rates at the cost of data freshness.

See the [load testing guide](/spicedb/ops/load-testing#spicedb-quantization-performance) for details on how quantization affects performance, and the [hotspot caching blog post](https://authzed.com/blog/hotspot-caching-in-google-zanzibar-and-spicedb) for a deeper explanation.
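
For illustration only, widening the quantization window might look like the following sketch; the values are placeholders rather than recommendations, and the flag help output remains the authority on units and defaults.

```sh
# Hypothetical values: a longer interval and a larger staleness allowance
# trade freshness for more cache hits.
spicedb serve \
  --datastore-revision-quantization-interval=10s \
  --datastore-revision-quantization-max-staleness-percent=0.2
```
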
## By tuning connection pools

For PostgreSQL, CockroachDB, and MySQL datastores, connection pool sizing significantly impacts performance under load.
Key flags include `--datastore-conn-pool-read-max-open`, `--datastore-conn-pool-write-max-open`, and the corresponding min and jitter settings.

See the [datastores reference](/spicedb/concepts/datastores) for the full list of connection pool flags and defaults, and the [best practices guide](/best-practices#tune-connections-to-datastores) for sizing recommendations.
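
As a rough sketch, explicit pool sizing might look like the following; the numbers are placeholders to be sized against your datastore's connection limits, and the min-open flag names are assumed here to follow the same pattern as the max-open flags named above.

```sh
# Hypothetical sizing: keep the read and write pools fully warm.
spicedb serve \
  --datastore-conn-pool-read-max-open=20 \
  --datastore-conn-pool-read-min-open=20 \
  --datastore-conn-pool-write-max-open=10 \
  --datastore-conn-pool-write-min-open=10
```
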
## By tuning the transaction overlap strategy (CockroachDB only)

The `--datastore-tx-overlap-strategy` flag controls how SpiceDB handles concurrent write transactions.
CockroachDB users can trade consistency guarantees for write throughput by selecting one of the available strategies: `static` (the default), `prefix`, `request`, or `insecure`.

See the [CockroachDB datastore documentation](/spicedb/concepts/datastores#overlap-strategy) for detailed strategy descriptions and trade-offs.
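
For illustration, switching strategies is a single flag on the CockroachDB datastore; `prefix` is shown only as an example, so review the trade-offs in the linked documentation before moving off the default.

```sh
# Example only: selects a non-default overlap strategy; see the linked
# docs for the guarantees each strategy provides.
spicedb serve \
  --datastore-engine=cockroachdb \
  --datastore-tx-overlap-strategy=prefix
```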
