# Improve the Robustness of the Balance Scheduler
## Summary

Make schedulers more robust for dynamic region size.

## Motivation

We have observed many different situations when the region size is different. The major drawbacks come from these aspects:

1. The balance region scheduler picks the source store in descending order of store score; a lower-score store is picked only after a higher-score store fails some filter or the retry count exceeds a fixed value. When the number of placement rules or TiKV stores grows, lower-score stores, such as TiFlash, get less chance to be balanced.
2. Splitting RocksDB and sending snapshots by the region leader will occupy CPU and IO resources.
3. There are some factors that influence the execution time of an operator, such as region size, IO limit, and CPU load. PD needs to be more flexible in managing the operator's timeout threshold rather than using a fixed value.
4. PD should know some global config of TiKV like `region-max-size` and `region-report-interval`. This config should be synchronized with PD.

## Detailed design
### Store pick strategy

It can arrange all the stores into groups based on label, like TiKV and TiFlash, and allow low-score groups more chances to schedule. But the top-score candidate should still have the highest priority to be selected.

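As a sketch of this grouped pick strategy (the types, fields, and function names here are illustrative, not PD's actual API), stores can be bucketed by an engine-like label, and the top-score store of each group proposed as a source candidate, so a uniformly low-score group such as TiFlash is not starved:

```go
package main

import (
	"fmt"
	"sort"
)

// Store is a minimal stand-in for PD's store info (names are illustrative).
type Store struct {
	ID    uint64
	Label string  // e.g. "tikv" or "tiflash"
	Score float64 // region score; higher means more loaded
}

// groupByLabel splits stores into per-label groups so that a group with
// uniformly low scores (e.g. TiFlash) keeps its own candidate list instead
// of always losing to high-score TiKV stores.
func groupByLabel(stores []Store) map[string][]Store {
	groups := make(map[string][]Store)
	for _, s := range stores {
		groups[s.Label] = append(groups[s.Label], s)
	}
	// Within each group, the highest-score store keeps the highest priority.
	for _, g := range groups {
		sort.Slice(g, func(i, j int) bool { return g[i].Score > g[j].Score })
	}
	return groups
}

// pickSources returns one source candidate per label group: the top-score
// store of every group, so low-score groups are not skipped entirely.
func pickSources(stores []Store) []Store {
	groups := groupByLabel(stores)
	var picked []Store
	for _, g := range groups {
		picked = append(picked, g[0])
	}
	// Overall, the highest-score candidate is still tried first.
	sort.Slice(picked, func(i, j int) bool { return picked[i].Score > picked[j].Score })
	return picked
}

func main() {
	stores := []Store{
		{1, "tikv", 90}, {2, "tikv", 80}, {3, "tiflash", 20}, {4, "tiflash", 10},
	}
	for _, s := range pickSources(stores) {
		fmt.Printf("candidate store %d (%s, score %.0f)\n", s.ID, s.Label, s.Score)
	}
}
```

Store 3 becomes a candidate here even though every TiKV store outscores it, while store 1 is still tried first.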
#### Consider influence to leader

It will add a new store limit type to decrease the leader load of every store. Picking a region should check whether the leader token is available.

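A minimal sketch of such a leader-side limit, assuming a token-style budget per store (the type and method names are hypothetical, not PD's actual store limit implementation):

```go
package main

import "fmt"

// leaderLimit is a toy per-store token budget for leader-side work such as
// snapshot generation and sending. Names are illustrative, not PD's API.
type leaderLimit struct {
	available float64 // remaining tokens for this store's leaders
}

// take consumes cost tokens if enough are available; the region picker would
// call this before building an operator whose leader lives on the store.
func (l *leaderLimit) take(cost float64) bool {
	if l.available < cost {
		return false // leader is saturated; prefer a region with another leader
	}
	l.available -= cost
	return true
}

func main() {
	l := &leaderLimit{available: 400}
	fmt.Println(l.take(200)) // true: budget allows it
	fmt.Println(l.take(300)) // false: would exceed the leader budget
}
```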
### Operator control
#### Store limit cost

Regions of different sizes should occupy different numbers of tokens. Maybe we can use this formula:
the cost equals 200 if the operator influence is 1MB, or 600 if the operator influence is 1GB.

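The formula above fixes only two points (1MB → 200 tokens, 1GB → 600 tokens). One possible interpolation between them, purely an assumption of this sketch rather than part of the design, is linear in log2 of the region size:

```go
package main

import (
	"fmt"
	"math"
)

// tokenCost interpolates the two data points from the text
// (1 MB -> 200 tokens, 1 GB -> 600 tokens) linearly in log2 of the size.
// The log-linear shape is an assumption, not specified by the RFC.
func tokenCost(sizeMB float64) float64 {
	if sizeMB < 1 {
		sizeMB = 1 // floor at the 1 MB anchor point
	}
	cost := 200 + 40*math.Log2(sizeMB) // 40 tokens per doubling of size
	return math.Min(cost, 600)         // cap at the 1 GB cost
}

func main() {
	fmt.Println(tokenCost(1))    // 200
	fmt.Println(tokenCost(1024)) // 600
	fmt.Println(tokenCost(64))   // 440 (200 + 40*6)
}
```

A log-linear curve keeps large regions from being priced proportionally to their bytes while still charging them noticeably more than small ones.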
#### Operator life cycle

The operator life cycle can be divided into stages: create, executing (started), complete. PD will check the operator stage by region heartbeats and cancel an operator if its running time exceeds the fixed value (10m).

It will be better if we can calculate the expected execution duration of every step from major factors, including region size, IO limit, and operator concurrency, like this:

There are some global config that all components need to synchronize like `regio…`

## Alternatives

Removing a peer may not influence the cluster performance; it can be replaced by the leader store limit.

Canceling operators can depend on TiKV rather than PD, but TiKV should notify PD after canceling an operator.