**Chapter4_2.md**

A B+ tree is an m-ary tree characterized by a large number of children per node.
B+ trees excel in block-oriented storage contexts, such as filesystems, due to their high fanout (typically around 100 or more pointers per node). This high fanout reduces the number of I/O operations needed to locate an element, making B+ trees especially efficient when data cannot fit into memory and must be read from disk.
Concurrency control of operations in B-trees, however, is perceived as a difficult subject with many subtleties and special cases [70]. For details on this subject, please refer to the paper "A Survey of B-Tree Locking Techniques".
InnoDB employs B+ trees for its indexing, leveraging their ability to ensure a fixed maximum number of reads based on the tree's depth, which scales efficiently. For specific details on B+ tree implementation in MySQL, refer to the file *btr/btr0btr.cc*.
**Chapter4_4.md**

Figure 4-17. Linux CPU scheduling of threads waiting for I/O.
This means that these threads waiting for I/O have little impact on other active threads. In MySQL, transaction lock waits function similarly to I/O waits—threads voluntarily block themselves and wait to be activated, which is generally manageable in cases where conflicts are not severe.
It is advisable to avoid having a large number of threads waiting on the same global latch or lock, as this can lead to frequent context switches; in NUMA environments it can also cause frequent cache migrations [71], thereby limiting MySQL's scalability.
With the increasing number of CPU cores and larger memory sizes available today, the impact of thread creation costs on MySQL has become smaller. Except for special scenarios such as short connection applications, MySQL can handle a large number of threads given sufficient memory. The key is to limit the number of active threads running concurrently. In theory, MySQL can support thousands of concurrent threads.
In Linux, processes and threads are fundamental to multitasking and parallel execution.
**Key differences include:**
- **Memory Consumption:** Processes require separate memory space, making them more memory-intensive than threads, which share the memory of their parent process. A process typically consumes around 10 megabytes, whereas a thread uses about 1 megabyte.
- **Concurrency Handling:** Given the same amount of memory, systems can support significantly more threads than processes. This makes threads more suitable for applications requiring high concurrency.
When building a concurrent database system, memory efficiency is critical. MySQL's thread-based model offers an advantage over PostgreSQL's traditional process-based model, particularly in high-concurrency scenarios: while PostgreSQL's model can lead to higher memory consumption, MySQL's threading model handles large numbers of concurrent connections more efficiently.
**Challenges of the Thread-Based Model:**
1. **Cache Performance:** The thread-based execution model often results in poor cache performance with multiple clients.
2. **Complexity:** The monolithic design of modern DBMS software leads to complex, hard-to-maintain systems.
**Pitfalls of Thread-Based Concurrency:**
1. **Thread Management:** There is no optimal number of preallocated worker threads for varying workloads. Too many threads can waste resources, while too few restrict concurrency.
2. **Context Switching:** Context switches during operations can evict a large working set from the cache, causing delays when the thread resumes.
3. **Cache Utilization:** Round-robin scheduling does not consider the benefit of shared cache contents, leading to inefficiencies.
Despite ongoing improvements in operating systems, the thread model continues to face significant challenges in optimization.
The staged model is a specialized type of thread model that minimizes some of the drawbacks of the thread model.
**Benefits of the Staged Model**
1. **Targeted Thread Allocation:** Each stage allocates worker threads based on its specific functionality and I/O frequency, rather than the number of concurrent clients. This approach allows for more precise thread management tailored to the needs of different database tasks, compared to a generic thread pool size.
2. **Voluntary Thread Yielding:** Instead of being preempted arbitrarily, a stage thread voluntarily yields the CPU at the end of its stage code execution. This reduces cache eviction during the shrinking phase of the working set, minimizing the time needed to restore it. This technique can also be adapted to existing database architectures.
3. **Exploiting Stage Affinity:** The thread scheduler focuses on tasks within the same stage, which helps exploit processor cache affinity. The initial task brings common data structures and code into higher cache levels, reducing cache misses for subsequent tasks.
4. **CPU Binding Efficiency:** The singular nature of thread operations in the staged model allows for improved efficiency through CPU binding, which is especially effective in NUMA environments.
The staged model is extensively used in MySQL for tasks such as secondary replay, Group Replication, and improvements to the Redo log in MySQL 8.0. However, it is not well-suited for handling user requests due to increased response times caused by various queues. MySQL primary servers prioritize low latency and high throughput for user-facing operations, while tasks like secondary replay, which do not interact directly with users, can afford higher latency in exchange for high throughput.
The figure below illustrates the processing flow of Group Replication. In this design, Group Replication is divided into multiple subprocesses connected through queues. This staged approach offers several benefits, including:
- **High Efficiency:** By breaking down tasks into discrete stages, Group Replication can process tasks more effectively.
- **Cache-Friendly Access:** The design minimizes cache misses by ensuring that related tasks are executed in sequence.
- **Pipelined Processing:** Tasks are handled in a pipelined manner, allowing for improved throughput.
*(Figure: the processing flow of Group Replication, divided into queue-connected stages.)*
Modern CPUs generate high memory request rates that can overwhelm the interconnect.
Linux prioritizes local node allocation and minimizes thread migrations across nodes using scheduling domains. This reduces inter-domain migrations but may affect load balance and performance. To optimize memory usage:
1. Identify memory-intensive threads.
2. Distribute them across memory domains.
3. Migrate memory with threads.
Understanding these memory management principles is crucial for diagnosing and solving MySQL performance problems. Linux aims to reduce process interference by minimizing CPU switches and cache migrations across NUMA nodes.
[70] Graefe G (2010) A survey of B-tree locking techniques. ACM Trans Database Syst 35(3):16
[71] Lozi JP, Lepers B, Funston J, Gaud F, Quéma V, Fedorova A (2016) The Linux scheduler: a decade of wasted cores. In: Proceedings of the Eleventh European Conference on Computer Systems (EuroSys '16). ACM