Commit 0d3dc89

Further reduce image size to speed up loading times

1 parent c17a087

20 files changed: +7 -7 lines

Chapter10.md

Lines changed: 2 additions & 2 deletions

@@ -169,7 +169,7 @@ From the graph, it can be observed that the SQL thread encounters two major bott
 
 After switching to jemalloc 4.5 as the memory allocation tool, not only did the MySQL secondary replay speed increase, but also the CPU overhead of the SQL thread itself decreased. Please refer to the figure below for details.
 
-![](media/c7ef66e1ace9b544ba85d64f3f4ed1c4.png)
+![](media/c7ef66e1ace9b544ba85d64f3f4ed1c4.gif)
 
 Figure 10-9. Reduced CPU overhead for the SQL thread with improved jemalloc.
 
@@ -274,7 +274,7 @@ In MySQL, the SQL thread for secondary replay is divided into six threads: one f
 
 Below is the *'top'* screenshot of the MySQL secondary running process.
 
-![](media/dccad19c5a07ffcd2477a9030d9337e7.png)
+![](media/dccad19c5a07ffcd2477a9030d9337e7.gif)
 
 Figure 10-13. SQL thread appears as six separate threads in *'top'* display.
 

Chapter4_8.md

Lines changed: 2 additions & 2 deletions

@@ -75,7 +75,7 @@ Figure 4-54. The *perf* statistics before optimizing the MVCC ReadView data stru
 
 *Perf* analysis reveals that the first and second bottlenecks together account for about 33% of the total. After optimizing the MVCC ReadView, this percentage drops to approximately 5.7%, reflecting a reduction of about 28%, or up to 30% considering measurement fluctuations. According to Amdahl's Law, theoretical performance improvement could be up to around 43%. However, actual throughput has increased by 53%.
 
-![](media/440f78c946a7e68fece427a41d414a2d.png)
+![](media/440f78c946a7e68fece427a41d414a2d.gif)
 
 ![](media/d06602b60d3af269409eb5633f8387c5.png)
 
@@ -212,7 +212,7 @@ Date: Fri Nov 8 20:58:48 2013 +0100
 
 Removing these cache padding optimizations, as shown in the figure below, serves as the version before cache optimization.
 
-![](media/9a737b0274b3fad6b6d2570bba64996a.png)
+![](media/9a737b0274b3fad6b6d2570bba64996a.gif)
 
 Figure 4-62. Partial reversion of cache padding optimizations.
 
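The Amdahl's Law estimate quoted in the first hunk above can be checked with a quick calculation. This is only a sketch using the percentages given in that paragraph; the function name is illustrative, not from the book's code:

```python
# Amdahl's Law: overall speedup when a fraction p of total execution time
# is eliminated (the optimized part becomes essentially free).
def amdahl_speedup(p: float) -> float:
    return 1.0 / (1.0 - p)

# The bottlenecks drop from ~33% to ~5.7% of total time: a ~28% reduction,
# or up to ~30% allowing for measurement fluctuations.
for p in (0.28, 0.30):
    gain = (amdahl_speedup(p) - 1.0) * 100
    print(f"fraction removed {p:.0%}: theoretical improvement ~{gain:.0f}%")
```

With a 30% fraction removed, the law gives 1/0.7 ≈ 1.43, i.e. the roughly 43% theoretical ceiling the paragraph cites; the measured 53% throughput gain exceeds that estimate.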

Chapter7.md

Lines changed: 1 addition & 1 deletion

@@ -321,7 +321,7 @@ Figure 7-14. Requirements of the official worklog for CATS.
 
 Therefore, with the adoption of the CATS algorithm, performance degradation should be absent in all scenarios. It seems like things end here, but the summary in the CATS algorithm's paper [24] raises some doubts. Details are provided in the following figure:
 
-![](media/4cc389ee95fbae485f1e014aad393aa8.png)
+![](media/4cc389ee95fbae485f1e014aad393aa8.gif)
 
 Figure 7-15. Doubts about the CATS paper.
 

Chapter8.md

Lines changed: 1 addition & 1 deletion

@@ -564,7 +564,7 @@ Figure 8-27. Performance improvement after eliminating the double latch bottlene
 
 From the figure, it is evident that the modifications significantly improved scalability under high-concurrency conditions. To understand the reasons for this improvement, let's use the *perf* tool for further investigation. Below is the *perf* screenshot at 2000 concurrency before the modifications:
 
-![](media/ccb771014f600402fee72ca7134aea10.png)
+![](media/ccb771014f600402fee72ca7134aea10.gif)
 
 Figure 8-28. Latch-related bottleneck observed in *perf* screenshot.
 

Chapter9.md

Lines changed: 1 addition & 1 deletion

@@ -429,7 +429,7 @@ From the figure, it is evident that the new Group Replication single-primary mod
 
 Accurately detecting node failure is challenging due to the FLP impossibility result, which states that consensus is impossible in a purely asynchronous system if even one process can fail. The difficulty arises because a server can't distinguish if another server has failed or is just "very slow" when it receives no messages [32]. Fortunately, most practical systems are not purely asynchronous, so the FLP result doesn't apply. To circumvent this, additional assumptions about system synchrony are made, allowing for the design of protocols that maintain safety and provide liveness under certain conditions. One common method is to use an inaccurate local failure detector.
 
-![](media/e2554f0ea244f337c0e66ea34bf53edf.png)
+![](media/e2554f0ea244f337c0e66ea34bf53edf.gif)
 
 Figure 9-18. The asynchronous message passing model borrowed from the Mencius paper.
 
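The "inaccurate local failure detector" mentioned in the hunk above is commonly realized as a heartbeat timeout. Below is a minimal illustrative sketch (class and peer names are invented, not MySQL or Group Replication code): a peer that has not sent a heartbeat within the timeout is suspected, possibly wrongly, since a slow peer is indistinguishable from a crashed one, which is exactly the inaccuracy tolerated to sidestep FLP:

```python
import time

# Minimal timeout-based failure detector (illustrative, not real server code).
# A peer is "suspected" when no heartbeat has been seen within `timeout`
# seconds. The verdict can be wrong: a very slow peer looks identical to a
# failed one, so suspicion is a heuristic, not a proof of failure.
class HeartbeatDetector:
    def __init__(self, timeout: float, now=time.monotonic):
        self.timeout = timeout
        self.now = now          # injectable clock, eases deterministic testing
        self.last_seen = {}

    def heartbeat(self, peer: str) -> None:
        self.last_seen[peer] = self.now()

    def suspected(self, peer: str) -> bool:
        seen = self.last_seen.get(peer)
        return seen is None or self.now() - seen > self.timeout

# Usage with a fake clock to make the behavior deterministic:
clock = [0.0]
d = HeartbeatDetector(timeout=1.0, now=lambda: clock[0])
d.heartbeat("node-2")
clock[0] = 0.5
print(d.suspected("node-2"))   # False: heartbeat is recent
clock[0] = 2.0
print(d.suspected("node-2"))   # True: timeout elapsed (peer may merely be slow)
```

Injecting the clock keeps the sketch testable; a production detector would also adapt the timeout to observed network delay rather than fixing it.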
Binary image files changed (previews not shown): 50.9 KB, -181 KB, 46.1 KB, -199 KB, 42.6 KB
