Commit ed22749

asim: improve comments for tests under mma
This commit improves the comments on the test setups. It is a first pass at clarifying what the various parts of each scenario mean; some tests may still need further review to be fully understood. This is a starting point.
1 parent bd0111a commit ed22749

11 files changed: +172 additions, -63 deletions

pkg/kv/kvserver/asim/tests/testdata/non_rand/mma/full_disk.txt

Lines changed: 15 additions & 3 deletions
@@ -1,9 +1,21 @@
+# This test verifies that the allocator can rebalance replicas when stores
+# have limited disk capacity and some stores become nearly full. The test
+# sets up a 5-node cluster where each store has a 10GB capacity. It creates
+# 15 ranges with 3 replicas each, where each range is 500MiB in size. The
+# initial placement is skewed, causing stores s1, s2, and s3 to have most
+# replicas (with s1 at 92% capacity, s2 at 85%, and s3 at 49%), while s4
+# and s5 are underutilized at 24% capacity each.
+#
+# Expected outcome: The allocator should rebalance replicas to distribute
+# disk usage evenly across all stores, moving replicas from the fuller
+# stores (s1, s2, s3) to the less utilized stores (s4, s5). The final
+# distribution should achieve balanced disk usage (~55% on each store) and
+# more even replica distribution across all nodes.
 gen_cluster nodes=5 store_byte_capacity_gib=10
 ----
 
-# Each range will be 500 MiB in size and the placement will be skewed, s.t.
-# n1/s1, n2/s2 and n3/s3 will have every replicas initially and n1/s1 will have
-# every lease.
+# Each range will be 500 MiB (524288000 B) in size and the placement will be skewed,
+# s.t. n1/s1, n2/s2 and n3/s3 will have most replicas initially.
 gen_ranges ranges=15 bytes=524288000 repl_factor=3 placement_type=skewed
 ----
 
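For orientation, the byte counts used above work out as follows (back-of-envelope; this counts only replica bytes and ignores any other disk usage the simulator may model):

    500 MiB = 500 * 1024 * 1024 B = 524288000 B   (the bytes= value passed to gen_ranges)
    15 ranges * 3 replicas = 45 replicas, i.e. 45 * 500 MiB ≈ 22 GiB of replica data
    total capacity = 5 stores * 10 GiB = 50 GiB
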
pkg/kv/kvserver/asim/tests/testdata/non_rand/mma/heterogeneous_cpu.txt

Lines changed: 12 additions & 15 deletions
@@ -1,3 +1,14 @@
+# This test verifies the allocator's behavior in a heterogeneous cluster where nodes
+# have different CPU capacities. The test sets up a 3-node cluster with nodes n1
+# and n2 having 8vcpu capacity each, and node n3 having 16vcpu capacity. It
+# creates 200 ranges evenly distributed across all stores and generates a
+# read-only workload.
+#
+# Expected outcome: mma should balance cpu load based on cpu utilization
+# percentage rather than absolute cpu nanos. However, both mma and sma currently
+# balance on absolute cpu-nanos. n3 should handle more load due to its higher
+# capacity, but the current implementation doesn't account for this. This is
+# tracked in issue: https://github.com/cockroachdb/cockroach/issues/153777.
 gen_cluster nodes=3 node_cpu_rate_capacity=(8000000000,8000000000,16000000000)
 ----
 
@@ -18,7 +29,7 @@ assertion stat=cpu_util type=balance ticks=6 upper_bound=1.1
 ----
 asserting: max_{stores}(cpu_util)/mean_{stores}(cpu_util) ≤ 1.10 at each of last 6 ticks
 
-eval cfgs=(sma-count,mma-only,mma-count) duration=10m metrics=(cpu,cpu_util)
+eval cfgs=(sma-count,mma-count) duration=10m metrics=(cpu,cpu_util)
 ----
 cpu#1: last: [s1=6700914166, s2=6696073333, s3=6603012499] (stddev=45053658.00, mean=6666666666.00, sum=19999999998)
 cpu#1: thrash_pct: [s1=185%, s2=170%, s3=189%] (sum=544%)
@@ -38,20 +49,6 @@ cpu#1: last: [s1=6399850833, s2=6395993333, s3=7203112093] (stddev=379573477.72
 cpu#1: thrash_pct: [s1=32%, s2=37%, s3=40%] (sum=110%)
 cpu_util#1: last: [s1=0.80, s2=0.80, s3=0.45] (stddev=0.16, mean=0.68, sum=2)
 cpu_util#1: thrash_pct: [s1=8%, s2=9%, s3=5%] (sum=21%)
-artifacts[mma-only]: 10d7af8d9edcf883
-failed assertion sample 1
-balance stat=cpu_util threshold=(≤1.10) ticks=6
-max/mean=1.17 tick=0
-max/mean=1.17 tick=1
-max/mean=1.17 tick=2
-max/mean=1.17 tick=3
-max/mean=1.17 tick=4
-max/mean=1.17 tick=5
-==========================
-cpu#1: last: [s1=6399850833, s2=6395993333, s3=7203112093] (stddev=379573477.72, mean=6666318753.00, sum=19998956259)
-cpu#1: thrash_pct: [s1=32%, s2=37%, s3=40%] (sum=110%)
-cpu_util#1: last: [s1=0.80, s2=0.80, s3=0.45] (stddev=0.16, mean=0.68, sum=2)
-cpu_util#1: thrash_pct: [s1=8%, s2=9%, s3=5%] (sum=21%)
 artifacts[mma-count]: 10d7af8d9edcf883
 failed assertion sample 1
 balance stat=cpu_util threshold=(≤1.10) ticks=6
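To make the utilization-vs-absolute-nanos distinction concrete, a rough worked example assuming the ~20e9 cpu-ns/s of total load seen in the eval output above:

    balancing absolute cpu-nanos: each store carries ~20e9/3 ≈ 6.7e9 ns/s
        cpu_util: s1 = s2 ≈ 6.7/8 ≈ 0.83, s3 ≈ 6.7/16 ≈ 0.42
    balancing cpu utilization: util = 20e9 / (8+8+16)e9 ≈ 0.63 on every store
        cpu: s1 = s2 = 5e9 ns/s, s3 = 10e9 ns/s
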
Lines changed: 23 additions & 12 deletions
@@ -1,6 +1,17 @@
+# This test verifies that the allocator can rebalance replicas and leases when
+# there is high cpu load imbalance across the cluster. The test sets up a 10-node
+# cluster with two distinct workloads: one evenly distributed across all nodes,
+# and another high-cpu workload initially concentrated on only the first few nodes
+# due to skewed placement. The second workload has significantly higher cpu cost
+# per op, creating cpu imbalance.
+#
+# Expected outcome: The allocator should rebalance both replicas and leases to
+# distribute the high-cpu workload more evenly across all 10 nodes.
 gen_cluster nodes=10 node_cpu_rate_capacity=8000000000
 ----
 
+# TODO(wenyihu6): why didn't we balance more replicas/leases - is it because of a very high cpu per range
+
 # Set the rebalance mode to use the mma store rebalancer and disable the lease
 # and replicate queues so that only the mma store rebalancer is moving replicas
 # or leases.
@@ -15,36 +26,36 @@ gen_load rate=5000 rw_ratio=0.95 min_block=100 max_block=100 request_cpu_per_acc
 ----
 
 # Another workload is added over the second half of the keyspace, which is initially
-# only on s1-s3.
+# mostly on s1-s3.
 gen_ranges ranges=50 min_key=10001 max_key=20000 placement_type=skewed
 ----
 
 gen_load rate=5000 rw_ratio=0.95 min_block=128 max_block=128 request_cpu_per_access=100000 raft_cpu_per_write=20000 min_key=10001 max_key=20000
 ----
 
-eval duration=15m samples=1 seed=42 cfgs=(mma-only,mma-count) metrics=(cpu,write_bytes_per_second,replicas,leases)
+eval duration=2m samples=1 seed=42 cfgs=(mma-only,mma-count) metrics=(cpu,cpu_util,replicas,leases)
 ----
-cpu#1: last: [s1=274870057, s2=124118783, s3=42166496, s4=21298975, s5=10805903, s6=10577758, s7=453407, s8=10306222, s9=10413474, s10=10503921] (stddev=81956672.84, mean=51551499.60, sum=515514996)
-cpu#1: thrash_pct: [s1=7%, s2=6%, s3=4%, s4=3%, s5=2%, s6=2%, s7=0%, s8=2%, s9=2%, s10=2%] (sum=30%)
+cpu#1: last: [s1=275096159, s2=123983362, s3=41814276, s4=21433672, s5=10796253, s6=10602552, s7=439843, s8=10300378, s9=10452776, s10=10595723] (stddev=81999286.66, mean=51551499.40, sum=515514994)
+cpu#1: thrash_pct: [s1=4%, s2=3%, s3=3%, s4=2%, s5=1%, s6=1%, s7=0%, s8=1%, s9=1%, s10=1%] (sum=18%)
+cpu_util#1: last: [s1=0.03, s2=0.02, s3=0.01, s4=0.00, s5=0.00, s6=0.00, s7=0.00, s8=0.00, s9=0.00, s10=0.00] (stddev=0.01, mean=0.01, sum=0)
+cpu_util#1: thrash_pct: [s1=4%, s2=3%, s3=3%, s4=2%, s5=1%, s6=1%, s7=0%, s8=1%, s9=1%, s10=1%] (sum=18%)
 leases#1: first: [s1=37, s2=22, s3=14, s4=13, s5=11, s6=11, s7=10, s8=11, s9=10, s10=11] (stddev=8.07, mean=15.00, sum=150)
 leases#1: last: [s1=37, s2=22, s3=14, s4=13, s5=11, s6=11, s7=10, s8=11, s9=10, s10=11] (stddev=8.07, mean=15.00, sum=150)
 leases#1: thrash_pct: [s1=0%, s2=0%, s3=0%, s4=0%, s5=0%, s6=0%, s7=0%, s8=0%, s9=0%, s10=0%] (sum=0%)
 replicas#1: first: [s1=80, s2=70, s3=51, s4=42, s5=37, s6=35, s7=34, s8=33, s9=34, s10=34] (stddev=16.02, mean=45.00, sum=450)
 replicas#1: last: [s1=80, s2=70, s3=51, s4=42, s5=37, s6=35, s7=34, s8=33, s9=34, s10=34] (stddev=16.02, mean=45.00, sum=450)
 replicas#1: thrash_pct: [s1=0%, s2=0%, s3=0%, s4=0%, s5=0%, s6=0%, s7=0%, s8=0%, s9=0%, s10=0%] (sum=0%)
-write_bytes_per_second#1: last: [s1=39511, s2=33080, s3=20899, s4=15208, s5=11942, s6=10699, s7=10093, s8=9465, s9=10055, s10=10043] (stddev=10247.09, mean=17099.50, sum=170995)
-write_bytes_per_second#1: thrash_pct: [s1=13%, s2=18%, s3=20%, s4=19%, s5=17%, s6=16%, s7=14%, s8=14%, s9=17%, s10=16%] (sum=165%)
-artifacts[mma-only]: bd71a8872f557e0f
+artifacts[mma-only]: c9c14a2b21947e75
 ==========================
-cpu#1: last: [s1=153545974, s2=82571497, s3=61967377, s4=31436939, s5=21209665, s6=31257441, s7=10903219, s8=40903888, s9=51026201, s10=30714935] (stddev=39256550.41, mean=51553713.60, sum=515537136)
-cpu#1: thrash_pct: [s1=10%, s2=6%, s3=7%, s4=5%, s5=4%, s6=6%, s7=3%, s8=6%, s9=7%, s10=5%] (sum=58%)
+cpu#1: last: [s1=153767559, s2=82526536, s3=61655396, s4=31442666, s5=21243662, s6=31483931, s7=10725049, s8=40802943, s9=51247053, s10=30866698] (stddev=39300865.24, mean=51576149.30, sum=515761493)
+cpu#1: thrash_pct: [s1=6%, s2=4%, s3=4%, s4=3%, s5=2%, s6=4%, s7=1%, s8=4%, s9=5%, s10=3%] (sum=37%)
+cpu_util#1: last: [s1=0.02, s2=0.01, s3=0.01, s4=0.00, s5=0.00, s6=0.00, s7=0.00, s8=0.01, s9=0.01, s10=0.00] (stddev=0.00, mean=0.01, sum=0)
+cpu_util#1: thrash_pct: [s1=6%, s2=4%, s3=4%, s4=3%, s5=2%, s6=4%, s7=1%, s8=4%, s9=5%, s10=3%] (sum=37%)
 leases#1: first: [s1=37, s2=22, s3=14, s4=13, s5=11, s6=11, s7=10, s8=11, s9=10, s10=11] (stddev=8.07, mean=15.00, sum=150)
 leases#1: last: [s1=20, s2=16, s3=15, s4=16, s5=12, s6=14, s7=12, s8=15, s9=15, s10=15] (stddev=2.14, mean=15.00, sum=150)
 leases#1: thrash_pct: [s1=0%, s2=0%, s3=15%, s4=0%, s5=0%, s6=0%, s7=0%, s8=0%, s9=0%, s10=0%] (sum=15%)
 replicas#1: first: [s1=80, s2=70, s3=51, s4=42, s5=37, s6=35, s7=34, s8=33, s9=34, s10=34] (stddev=16.02, mean=45.00, sum=450)
 replicas#1: last: [s1=45, s2=44, s3=44, s4=47, s5=44, s6=44, s7=44, s8=45, s9=46, s10=47] (stddev=1.18, mean=45.00, sum=450)
 replicas#1: thrash_pct: [s1=0%, s2=0%, s3=0%, s4=0%, s5=0%, s6=0%, s7=0%, s8=0%, s9=0%, s10=0%] (sum=0%)
-write_bytes_per_second#1: last: [s1=25330, s2=20719, s3=18391, s4=17246, s5=15288, s6=15257, s7=14520, s8=14450, s9=15382, s10=14423] (stddev=3361.47, mean=17100.60, sum=171006)
-write_bytes_per_second#1: thrash_pct: [s1=84%, s2=62%, s3=67%, s4=35%, s5=45%, s6=38%, s7=29%, s8=29%, s9=42%, s10=40%] (sum=471%)
-artifacts[mma-count]: abbd0fc9dbc1971a
+artifacts[mma-count]: de0b265129d19e1
 ==========================
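For reference, the cpu cost of the second (skewed) workload above works out roughly as follows; the first workload's request_cpu_per_access is truncated in the hunk header, so it is not included:

    reads:  5000 op/s * 0.95 * 100000 ns request cpu ≈ 4.75e8 cpu-ns/s, charged at the leaseholders
    writes: 5000 op/s * 0.05 * 20000 ns raft cpu * 3 replicas ≈ 1.5e7 cpu-ns/s
    node capacity = 8e9 cpu-ns/s

Even fully concentrated, this is well under 10% of a single node's capacity, which is consistent with the small cpu_util values in the eval output above.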

pkg/kv/kvserver/asim/tests/testdata/non_rand/mma/high_cpu_25nodes.txt

Lines changed: 5 additions & 1 deletion
@@ -1,3 +1,7 @@
+# This test verifies that the allocator can rebalance replicas and leases when
+# there is high CPU load imbalance across a large cluster. The test set-up is
+# similar to high_cpu.txt but is on 25 nodes and with 3x the load for two
+# gen_load commands.
 gen_cluster nodes=25 node_cpu_rate_capacity=8000000000
 ----
 
@@ -12,7 +16,7 @@ gen_load rate=15000 rw_ratio=0.95 min_block=100 max_block=100 request_cpu_per_ac
 ----
 
 # Another workload is added over the second half of the keyspace, which is initially
-# only on s1-s3.
+# mainly on s1-s3 due to the skewed distribution.
 gen_ranges ranges=50 min_key=10001 max_key=20000 placement_type=skewed
 ----
 
Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
+# See comments on top of high_cpu_able_to_shed_leases.txt for test details.
+
+# Case (2) where s1 has leases and is CPU overloaded due to raft CPU. It
+# will be able to shed its own leases because it is the leaseholder. There should
+# be a period of lease-rebalancing activity before replica-rebalancing.
+
+gen_cluster nodes=5 node_cpu_rate_capacity=9000000000
+----
+
+setting split_queue_enabled=false
+----
+
+gen_ranges ranges=25 min_key=0 max_key=10000 placement_type=replica_placement
+{s1:*,s2,s3}:7
+{s1:*,s4,s5}:6
+{s1:*,s2,s4}:6
+{s1:*,s3,s5}:6
+----
+{s1:*,s2,s3}:7
+{s1:*,s4,s5}:6
+{s1:*,s2,s4}:6
+{s1:*,s3,s5}:6
+
+gen_load rate=50000 rw_ratio=0 min_key=0 max_key=10000 raft_cpu_per_write=100000
+----
+
+eval duration=30m samples=1 seed=42 cfgs=(mma-only,mma-count) metrics=(cpu,cpu_util,write_bytes_per_second,replicas,leases)
+----
+cpu#1: last: [s1=2599713435, s2=2799759450, s3=3200050347, s4=3199885256, s5=3200075310] (stddev=253112601.53, mean=2999896759.60, sum=14999483798)
+cpu#1: thrash_pct: [s1=162%, s2=84%, s3=146%, s4=177%, s5=177%] (sum=746%)
+cpu_util#1: last: [s1=0.29, s2=0.31, s3=0.36, s4=0.36, s5=0.36] (stddev=0.03, mean=0.33, sum=2)
+cpu_util#1: thrash_pct: [s1=162%, s2=84%, s3=146%, s4=177%, s5=177%] (sum=746%)
+leases#1: first: [s1=25, s2=0, s3=0, s4=0, s5=0] (stddev=10.00, mean=5.00, sum=25)
+leases#1: last: [s1=3, s2=5, s3=7, s4=10, s5=0] (stddev=3.41, mean=5.00, sum=25)
+leases#1: thrash_pct: [s1=30%, s2=0%, s3=0%, s4=0%, s5=57%] (sum=87%)
+replicas#1: first: [s1=25, s2=13, s3=13, s4=12, s5=12] (stddev=5.02, mean=15.00, sum=75)
+replicas#1: last: [s1=13, s2=14, s3=16, s4=16, s5=16] (stddev=1.26, mean=15.00, sum=75)
+replicas#1: thrash_pct: [s1=0%, s2=0%, s3=0%, s4=0%, s5=35%] (sum=35%)
+write_bytes_per_second#1: last: [s1=25997, s2=27997, s3=32000, s4=31998, s5=32000] (stddev=2530.93, mean=29998.40, sum=149992)
+write_bytes_per_second#1: thrash_pct: [s1=162%, s2=84%, s3=146%, s4=177%, s5=177%] (sum=746%)
+artifacts[mma-only]: 123731b2fdd740e2
+==========================
+cpu#1: last: [s1=3200054918, s2=2800070997, s3=3200598149, s4=2799968139, s5=3000353068] (stddev=179022817.20, mean=3000209054.20, sum=15001045271)
+cpu#1: thrash_pct: [s1=125%, s2=63%, s3=50%, s4=108%, s5=80%] (sum=426%)
+cpu_util#1: last: [s1=0.36, s2=0.31, s3=0.36, s4=0.31, s5=0.33] (stddev=0.02, mean=0.33, sum=2)
+cpu_util#1: thrash_pct: [s1=125%, s2=63%, s3=50%, s4=108%, s5=80%] (sum=426%)
+leases#1: first: [s1=25, s2=0, s3=0, s4=0, s5=0] (stddev=10.00, mean=5.00, sum=25)
+leases#1: last: [s1=6, s2=2, s3=5, s4=5, s5=7] (stddev=1.67, mean=5.00, sum=25)
+leases#1: thrash_pct: [s1=0%, s2=0%, s3=0%, s4=0%, s5=0%] (sum=0%)
+replicas#1: first: [s1=25, s2=13, s3=13, s4=12, s5=12] (stddev=5.02, mean=15.00, sum=75)
+replicas#1: last: [s1=16, s2=14, s3=16, s4=14, s5=15] (stddev=0.89, mean=15.00, sum=75)
+replicas#1: thrash_pct: [s1=0%, s2=0%, s3=0%, s4=0%, s5=0%] (sum=0%)
+write_bytes_per_second#1: last: [s1=32000, s2=28000, s3=32005, s4=27999, s5=30003] (stddev=1790.20, mean=30001.40, sum=150007)
+write_bytes_per_second#1: thrash_pct: [s1=124%, s2=63%, s3=50%, s4=108%, s5=80%] (sum=426%)
+artifacts[mma-count]: 2ef4e5947798976f
+==========================
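A quick sanity check on the load in this setup, assuming raft cpu is charged on every replica of a written range:

    50000 writes/s * 100000 ns raft cpu * 3 replicas ≈ 1.5e10 cpu-ns/s cluster-wide
        (consistent with the cpu sums of ~1.5e10 in the eval output above)
    s1 initially holds a replica of all 25 ranges: 50000 * 100000 = 5e9 ns/s ≈ 56% of its 9e9 ns/s capacity
    the other stores each hold roughly half of the ranges: ~2.5e9 ns/s ≈ 28%
    fully balanced: 1.5e10 / 5 = 3e9 ns/s per store ≈ 0.33 utilization, matching the final cpu_util above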

pkg/kv/kvserver/asim/tests/testdata/non_rand/mma/high_cpu_unable_to_shed_leases.txt

Lines changed: 17 additions & 19 deletions
@@ -1,27 +1,27 @@
+# This test verifies that for remotely cpu-overloaded stores, mma waits for the
+# lease-shedding grace period (remoteStoreLeaseSheddingGraceDuration) before rebalancing
+# replicas away from the store. The test sets up a 5-node cluster where store s1
+# has a high replica count (25 out of 75 total replicas) but holds no leases. All
+# leases are distributed among stores s2-s5. A write-only workload with high raft
+# CPU cost creates CPU pressure primarily on s1 due to its replica count, but s1
+# cannot shed leases since it holds none.
+#
+# Expected outcome:
 # Want to test two cases:
-# (1) Where its impossible to shed leases from the CPU overloaded store, so we
-# should initially observe a period of no rebalancing activity away from
-# the store.
-# (2) Where its possible to shed leases from the CPU overloaded store, so we
-# should observe a period of lease transfers before any replica based
-# rebalancing away from the store occurs.
+# (1) high_cpu_unable_to_shed_leases.txt: Where it's impossible to shed leases
+# from the cpu-overloaded s1, so we should initially observe a period of no
+# rebalancing activity away from the store before
+# any replica-based rebalancing.
+# (2) high_cpu_able_to_shed_leases.txt: Where it's possible to shed leases from
+# the CPU overloaded s1, so we should observe a period of lease transfers before
+# any replica-based rebalancing away from the store occurs.
+
 gen_cluster nodes=5 node_cpu_rate_capacity=9000000000
 ----
 
 setting split_queue_enabled=false
 ----
 
-# Case (1) where s1 has no leases and is CPU overloaded due to raft CPU. It
-# won't be able to shed its own replicas because it is not the leaseholder for
-# any of the ranges.
-
-# Originally, this test uses replica_weights=(0.3,0.175,0.175,0.175,0.175)
-# lease_weights=(0,0.25,0.25,0.25,0.25). Replication factor is 3 by default. 75
-# replicas in total. replicas distribution is approximately s1: 23, s2: 13, s3:
-# 13, s4: 13, s5: 13 leaseholder weights: s2: 7 leaseholder, s3: 6 leaseholder,
-# s4: 6 leaseholder, s5: 6 leaseholder. To approximate this, we use replica
-# placement: As an approximation, (s1,s2*,s3):7, (s1,s4,s5*):6, (s1,s2,s4*):6,
-# (s1,s3*,s5):6 s1 does not have the lease. Other stores have the same
 gen_ranges ranges=25 min_key=0 max_key=10000 placement_type=replica_placement
 {s1,s2:*,s3}:7
 {s1,s4,s5:*}:6
@@ -66,5 +66,3 @@ write_bytes_per_second#1: last: [s1=2600, s2=3202, s3=3201, s4=3000, s5=3000] (
 write_bytes_per_second#1: thrash_pct: [s1=19%, s2=22%, s3=18%, s4=16%, s5=20%] (sum=96%)
 artifacts[mma-count]: 5b1fca7fda20dfdf
 ==========================
-
-# TODO(kvoli): Case (2)
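A short reading of the setup above, based on the (removed) comment describing the placement: s1 appears in every replica-placement group (7+6+6+6 = 25 ranges) but is never marked as the leaseholder, so the write-only workload charges raft cpu on s1 for all 25 ranges while all 25 leases, and hence the option of shedding them, remain on s2-s5.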

pkg/kv/kvserver/asim/tests/testdata/non_rand/mma/high_write_uniform_cpu.txt

Lines changed: 16 additions & 5 deletions
@@ -1,17 +1,22 @@
+# This test sets up a 10-node cluster with two workloads: (read-only, high-cpu on the leaseholder)
+# uniformly across nodes and (write-only, high-write) initially concentrated on s1-s3.
+#
+# Expected outcome: mma should rebalance replicas and leases to distribute the
+# cpu load and write load more evenly across all stores.
 gen_cluster nodes=10 node_cpu_rate_capacity=3000000000
 ----
 
-# Read only workload, which generates 100_000 request cpu nanos/s evenly over
+# Read only workload, which generates 1000 requests/s (5_000_000 request cpu nanos each) evenly over
 # the first half of the keyspace, which will be on all stores initially.
 gen_ranges ranges=30 min_key=1 max_key=10000 placement_type=even
 ----
 
 gen_load rate=1000 rw_ratio=1.0 request_cpu_per_access=5000000 min_key=1 max_key=10000
 ----
 
-# Write only workload, which generates no CPU and 100_000 (x replication
-# factor) write bytes per second over the second half of the keyspace, which
-# are all on s1-s3 initially.
+# Write only workload, which generates no CPU and 20000op/s*1000B/op =
+# 20000000B/s (x 3 replication factor) write bytes per second over the second half
+# of the keyspace, which are all on s1-s3 initially.
 gen_ranges ranges=30 min_key=10001 max_key=20000 placement_type=skewed
 ----
 
@@ -21,12 +26,15 @@ gen_load rate=20000 rw_ratio=0 min_block=1000 max_block=1000 min_key=10001 max_k
 setting split_queue_enabled=false
 ----
 
-eval duration=20m samples=1 seed=42 cfgs=(mma-only,mma-count) metrics=(cpu,cpu_util,write_bytes_per_second,replicas,leases)
+eval duration=20m samples=1 seed=42 cfgs=(mma-only,mma-count) metrics=(disk_fraction_used,cpu,cpu_util,write_bytes_per_second,replicas,leases)
 ----
 cpu#1: last: [s1=501095833, s2=496283333, s3=501825000, s4=499525000, s5=499191666, s6=496991666, s7=497529166, s8=505645833, s9=500662500, s10=501250000] (stddev=2612585.85, mean=499999999.70, sum=4999999997)
 cpu#1: thrash_pct: [s1=462%, s2=503%, s3=518%, s4=521%, s5=478%, s6=550%, s7=472%, s8=510%, s9=542%, s10=507%] (sum=5065%)
 cpu_util#1: last: [s1=0.17, s2=0.17, s3=0.17, s4=0.17, s5=0.17, s6=0.17, s7=0.17, s8=0.17, s9=0.17, s10=0.17] (stddev=0.00, mean=0.17, sum=2)
 cpu_util#1: thrash_pct: [s1=462%, s2=503%, s3=518%, s4=521%, s5=478%, s6=550%, s7=472%, s8=510%, s9=542%, s10=507%] (sum=5065%)
+disk_fraction_used#1: first: [s1=0.00, s2=0.00, s3=0.00, s4=0.00, s5=0.00, s6=0.00, s7=0.00, s8=0.00, s9=0.00, s10=0.00] (stddev=0.00, mean=0.00, sum=0)
+disk_fraction_used#1: last: [s1=0.03, s2=0.03, s3=0.03, s4=0.03, s5=0.03, s6=0.03, s7=0.03, s8=0.03, s9=0.03, s10=0.03] (stddev=0.00, mean=0.03, sum=0)
+disk_fraction_used#1: thrash_pct: [s1=119%, s2=55%, s3=43%, s4=45%, s5=27%, s6=23%, s7=24%, s8=12%, s9=109%, s10=29%] (sum=487%)
 leases#1: first: [s1=19, s2=11, s3=6, s4=3, s5=3, s6=3, s7=4, s8=3, s9=4, s10=4] (stddev=4.92, mean=6.00, sum=60)
 leases#1: last: [s1=10, s2=6, s3=6, s4=3, s5=6, s6=4, s7=4, s8=6, s9=9, s10=6] (stddev=2.05, mean=6.00, sum=60)
 leases#1: thrash_pct: [s1=16%, s2=15%, s3=38%, s4=12%, s5=14%, s6=0%, s7=25%, s8=14%, s9=42%, s10=27%] (sum=204%)
@@ -41,6 +49,9 @@ cpu#1: last: [s1=666962932, s2=499499610, s3=667678881, s4=502782759, s5=330971
 cpu#1: thrash_pct: [s1=309%, s2=335%, s3=369%, s4=290%, s5=323%, s6=198%, s7=502%, s8=252%, s9=78%, s10=247%] (sum=2904%)
 cpu_util#1: last: [s1=0.22, s2=0.17, s3=0.22, s4=0.17, s5=0.11, s6=0.17, s7=0.17, s8=0.06, s9=0.17, s10=0.22] (stddev=0.05, mean=0.17, sum=2)
 cpu_util#1: thrash_pct: [s1=309%, s2=335%, s3=369%, s4=290%, s5=323%, s6=198%, s7=502%, s8=252%, s9=78%, s10=247%] (sum=2904%)
+disk_fraction_used#1: first: [s1=0.00, s2=0.00, s3=0.00, s4=0.00, s5=0.00, s6=0.00, s7=0.00, s8=0.00, s9=0.00, s10=0.00] (stddev=0.00, mean=0.00, sum=0)
+disk_fraction_used#1: last: [s1=0.04, s2=0.03, s3=0.04, s4=0.03, s5=0.03, s6=0.03, s7=0.04, s8=0.03, s9=0.03, s10=0.03] (stddev=0.00, mean=0.03, sum=0)
+disk_fraction_used#1: thrash_pct: [s1=224%, s2=272%, s3=323%, s4=271%, s5=184%, s6=250%, s7=394%, s8=278%, s9=110%, s10=164%] (sum=2470%)
 leases#1: first: [s1=19, s2=11, s3=6, s4=3, s5=3, s6=3, s7=4, s8=3, s9=4, s10=4] (stddev=4.92, mean=6.00, sum=60)
 leases#1: last: [s1=8, s2=8, s3=8, s4=7, s5=5, s6=4, s7=7, s8=1, s9=5, s10=7] (stddev=2.14, mean=6.00, sum=60)
 leases#1: thrash_pct: [s1=154%, s2=214%, s3=224%, s4=171%, s5=135%, s6=157%, s7=270%, s8=124%, s9=101%, s10=159%] (sum=1709%)
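A rough check of the read-only workload above:

    1000 op/s * 5000000 request-cpu ns/op = 5e9 cpu-ns/s in total
    spread over 10 nodes ≈ 5e8 ns/s per store ≈ 0.17 of the 3e9 ns/s per-node capacity

which matches the ~5e8 cpu and 0.17 cpu_util values per store in the eval output above.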
