diff --git a/best-practices/massive-regions-best-practices.md b/best-practices/massive-regions-best-practices.md
index 9bce76b87ee01..30f185135af1c 100644
--- a/best-practices/massive-regions-best-practices.md
+++ b/best-practices/massive-regions-best-practices.md
@@ -150,7 +150,7 @@ The default size of a Region is 256 MiB, and you can reduce the number of Region
 
 ### Method 7: Increase the maximum number of connections for Raft communication
 
-By default, the maximum number of connections used for Raft communication between TiKV nodes is 1. Increasing this number can help alleviate blockage issues caused by heavy communication workloads of a large number of Regions. For detailed instructions, see [`grpc-raft-conn-num`](/tikv-configuration-file.md#grpc-raft-conn-num).
+To adjust the maximum number of connections used for Raft communication between TiKV nodes, you can modify the [`server.grpc-raft-conn-num`](/tikv-configuration-file.md#grpc-raft-conn-num) configuration item. Increasing this number can help alleviate blockage issues caused by heavy communication workloads of a large number of Regions.
 
 > **Note:**
 >
diff --git a/best-practices/three-nodes-hybrid-deployment.md b/best-practices/three-nodes-hybrid-deployment.md
index cbd698c50de28..52794404ba6d0 100644
--- a/best-practices/three-nodes-hybrid-deployment.md
+++ b/best-practices/three-nodes-hybrid-deployment.md
@@ -61,7 +61,7 @@ The default value of this parameter is 80% of the number of machine threads. In
 
 #### `server.grpc-concurrency`
 
-This parameter defaults to `4`. Because in the existing deployment plan, the CPU resources are limited and the actual requests are few. You can observe the monitoring panel, lower the value of this parameter, and keep the usage rate below 80%.
+In this deployment plan, CPU resources are limited and actual requests are few, so you can observe the monitoring panel and lower the value of [`server.grpc-concurrency`](/tikv-configuration-file.md#grpc-concurrency) to keep the usage rate below 80%.
 
 In this test, the value of this parameter is set to `2`. Observe the **gRPC poll CPU** panel and you can see that the usage rate is just around 80%.
 
diff --git a/tikv-configuration-file.md b/tikv-configuration-file.md
index 5a650f13d4cd8..8c630809952e1 100644
--- a/tikv-configuration-file.md
+++ b/tikv-configuration-file.md
@@ -151,7 +151,11 @@ This document only describes the parameters that are not included in command-lin
 ### `grpc-concurrency`
 
 + The number of gRPC worker threads. When you modify the size of the gRPC thread pool, refer to [Performance tuning for TiKV thread pools](/tune-tikv-thread-performance.md#performance-tuning-for-tikv-thread-pools).
-+ Default value: `5`
++ Default value:
+
+    + Starting from v8.5.4 and v9.0.0, the default value is adjusted to `grpc-raft-conn-num * 3 + 2`, which is calculated based on the value of [`grpc-raft-conn-num`](#grpc-raft-conn-num). For example, when the number of CPU cores is 8, the default value of `grpc-raft-conn-num` is `1`. Accordingly, the default value of `grpc-concurrency` is `1 * 3 + 2 = 5`.
+    + In v8.5.3 and earlier versions, the default value is `5`.
+
 + Minimum value: `1`
 
 ### `grpc-concurrent-stream`
@@ -169,7 +173,11 @@ This document only describes the parameters that are not included in command-lin
 ### `grpc-raft-conn-num`
 
 + The maximum number of connections between TiKV nodes for Raft communication
-+ Default value: `1`
++ Default value:
+
+    + Starting from v8.5.4 and v9.0.0, the default value is adjusted to `MAX(1, MIN(4, CPU cores / 8))`, where `MIN(4, CPU cores / 8)` indicates that when the number of CPU cores is greater than or equal to 32, the default maximum number of connections is `4`.
+    + In v8.5.3 and earlier versions, the default value is `1`.
+
 + Minimum value: `1`
 
 ### `max-grpc-send-msg-len`
diff --git a/tune-tikv-memory-performance.md b/tune-tikv-memory-performance.md
index 3fade7cac85d3..fbb0290d11e6a 100644
--- a/tune-tikv-memory-performance.md
+++ b/tune-tikv-memory-performance.md
@@ -41,7 +41,7 @@ log-level = "info"
 # Size of thread pool for gRPC
 # grpc-concurrency = 4
 # The number of gRPC connections between each TiKV instance
-# grpc-raft-conn-num = 10
+# grpc-raft-conn-num = 1
 
 # Most read requests from TiDB are sent to the coprocessor of TiKV. This parameter is used to set the number of threads
 # of the coprocessor. If many read requests exist, add the number of threads and keep the number within that of the
diff --git a/tune-tikv-thread-performance.md b/tune-tikv-thread-performance.md
index 36dfc3dd5ce5a..2a72f45f83874 100644
--- a/tune-tikv-thread-performance.md
+++ b/tune-tikv-thread-performance.md
@@ -43,7 +43,7 @@ Starting from TiKV v5.0, all read requests use the unified thread pool for queri
 
 * The gRPC thread pool.
 
-    The default size (configured by `server.grpc-concurrency`) of the gRPC thread pool is `5`. This thread pool has almost no computing overhead and is mainly responsible for network I/O and deserialization requests, so generally you do not need to adjust the default configuration.
+    Starting from v8.5.4 and v9.0.0, the default size of the gRPC thread pool (configured by `server.grpc-concurrency`) is changed from a fixed value of `5` to an adaptive value calculated based on the number of CPU cores. For the detailed calculation formula, see [`server.grpc-concurrency`](/tikv-configuration-file.md#grpc-concurrency). This thread pool has almost no computing overhead and is mainly responsible for network I/O and deserialization requests, so generally you do not need to adjust the default configuration.
 
     - If the machine deployed with TiKV has a small number (less than or equal to 8) of CPU cores, consider setting the `server.grpc-concurrency` configuration item to `2`.
     - If the machine deployed with TiKV has very high configuration, TiKV undertakes a large number of read and write requests, and the value of `gRPC poll CPU` that monitors Thread CPU on Grafana exceeds 80% of `server.grpc-concurrency`, then consider increasing the value of `server.grpc-concurrency` to keep the thread pool usage rate below 80% (that is, the metric on Grafana is lower than `80% * server.grpc-concurrency`).
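The adaptive defaults described in this patch can be sketched as follows. This is an illustration of the documented formulas only, not TiKV's actual Rust implementation; it assumes `CPU cores / 8` uses integer (floor) division:

```python
# Sketch of the adaptive gRPC defaults documented in the diff above
# (illustrative only; the real logic lives in TiKV's Rust code).

def default_grpc_raft_conn_num(cpu_cores: int) -> int:
    # Starting from v8.5.4 and v9.0.0: MAX(1, MIN(4, CPU cores / 8)).
    # Machines with >= 32 cores cap at 4 connections; small machines get 1.
    return max(1, min(4, cpu_cores // 8))

def default_grpc_concurrency(cpu_cores: int) -> int:
    # Starting from v8.5.4 and v9.0.0: grpc-raft-conn-num * 3 + 2.
    return default_grpc_raft_conn_num(cpu_cores) * 3 + 2

# An 8-core machine keeps the historical defaults: 1 connection, 5 threads.
print(default_grpc_raft_conn_num(8), default_grpc_concurrency(8))    # 1 5
# A 32-core machine scales up to 4 connections and 14 gRPC threads.
print(default_grpc_raft_conn_num(32), default_grpc_concurrency(32))  # 4 14
```

Note how the formula reproduces the previous fixed defaults (`grpc-raft-conn-num = 1`, `grpc-concurrency = 5`) on machines with fewer than 16 cores, so only larger machines see a behavior change.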