
Abnormally high score appears randomly in parallel fstime test when running a large number of benchmark copies with HDDs #103


Description

@Zengkai163

Hi Glenn Strauss,
We found abnormally high scores that appear randomly in parallel fstime tests when running a large number of copies of the benchmark (for example, more than two hundred) on hard disk drives.
abnormally high score: (screenshot)
normal score: (screenshot)
After some debugging, I found that, with commit 81e9de5 ("fstime.c - Seperate r/w files for each parallel (#85)") merged, the dual sync() calls in c_test() can leave some fstime processes blocked waiting on a global rwsem for a long time (sometimes longer than 120 s).
```c
/*
 * Run the copy test for the time given in seconds.
 */
int c_test(int timeSecs)
{
    unsigned long counted = 0L;
    unsigned long tmp;
    double start, end;
    extern int sigalarm;

    sync();   // the first sync operation
    sleep(2);
    sync();   // the second sync operation
    sleep(1);

    /* rewind */
    errno = 0;
    lseek(f, 0L, 0);

    ......
}
```
Here are some related dmesg logs and unixbench result logs:
dmesg logs (partial):
```
[Sat Nov 30 17:33:33 2024] INFO: task fstime:15977 blocked for more than 120 seconds.
[Sat Nov 30 17:33:33 2024] Not tainted 6.12.0
[Sat Nov 30 17:33:33 2024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sat Nov 30 17:33:33 2024] task:fstime state:D stack:0 pid:15977 tgid:15977 ppid:15967 flags:0x00000204
[Sat Nov 30 17:33:33 2024] Call trace:
[Sat Nov 30 17:33:33 2024] __switch_to+0xf4/0x160
[Sat Nov 30 17:33:33 2024] __schedule+0x2a0/0x838
[Sat Nov 30 17:33:33 2024] schedule+0x30/0xc8
[Sat Nov 30 17:33:33 2024] schedule_preempt_disabled+0x18/0x30
[Sat Nov 30 17:33:33 2024] rwsem_down_write_slowpath+0x33c/0x820
[Sat Nov 30 17:33:33 2024] down_write+0x60/0x78
[Sat Nov 30 17:33:33 2024] sync_inodes_sb+0xa0/0x110
[Sat Nov 30 17:33:33 2024] sync_inodes_one_sb+0x24/0x38
[Sat Nov 30 17:33:33 2024] iterate_supers+0xb4/0x1f8
[Sat Nov 30 17:33:33 2024] ksys_sync+0x54/0xc8
[Sat Nov 30 17:33:33 2024] __arm64_sys_sync+0x18/0x30
[Sat Nov 30 17:33:33 2024] invoke_syscall+0x50/0x120
[Sat Nov 30 17:33:33 2024] el0_svc_common.constprop.0+0xc8/0xf0
[Sat Nov 30 17:33:33 2024] do_el0_svc+0x24/0x38
[Sat Nov 30 17:33:33 2024] el0_svc+0x34/0x128
[Sat Nov 30 17:33:33 2024] el0t_64_sync_handler+0x100/0x130
[Sat Nov 30 17:33:33 2024] el0t_64_sync+0x188/0x190
[Sat Nov 30 17:33:33 2024] INFO: task fstime:16102 blocked for more than 120 seconds.
[Sat Nov 30 17:33:33 2024] Not tainted 6.12.0
[Sat Nov 30 17:33:33 2024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sat Nov 30 17:33:33 2024] task:fstime state:D stack:0 pid:16102 tgid:16102 ppid:16093 flags:0x00000204
[Sat Nov 30 17:33:33 2024] Call trace:
[Sat Nov 30 17:33:33 2024] __switch_to+0xf4/0x160
[Sat Nov 30 17:33:33 2024] __schedule+0x2a0/0x838
[Sat Nov 30 17:33:33 2024] schedule+0x30/0xc8
[Sat Nov 30 17:33:33 2024] schedule_preempt_disabled+0x18/0x30
[Sat Nov 30 17:33:33 2024] rwsem_down_write_slowpath+0x33c/0x820
[Sat Nov 30 17:33:33 2024] down_write+0x60/0x78
[Sat Nov 30 17:33:33 2024] sync_inodes_sb+0xa0/0x110
[Sat Nov 30 17:33:33 2024] sync_inodes_one_sb+0x24/0x38
[Sat Nov 30 17:33:33 2024] iterate_supers+0xb4/0x1f8
[Sat Nov 30 17:33:33 2024] ksys_sync+0x54/0xc8
[Sat Nov 30 17:33:33 2024] __arm64_sys_sync+0x18/0x30
[Sat Nov 30 17:33:33 2024] invoke_syscall+0x50/0x120
[Sat Nov 30 17:33:33 2024] el0_svc_common.constprop.0+0xc8/0xf0
[Sat Nov 30 17:33:33 2024] do_el0_svc+0x24/0x38
[Sat Nov 30 17:33:33 2024] el0_svc+0x34/0x128
[Sat Nov 30 17:33:33 2024] el0t_64_sync_handler+0x100/0x130
[Sat Nov 30 17:33:33 2024] el0t_64_sync+0x188/0x190
......
```

unixbench abnormal result logs (partial):
```
......
COUNT0: 1672180
COUNT1: 0
COUNT2: KBps
TIME: 30.0
elapsed: 197.830620 // one fstime process lasts for 197s
pid: 32800
status: 0

COUNT0: 1495245
COUNT1: 0
COUNT2: KBps
TIME: 30.0
elapsed: 217.923324 // another fstime process lasts for 217s
pid: 32804
status: 0

COUNT0: 1746359
COUNT1: 0
COUNT2: KBps
TIME: 30.0
elapsed: 212.533035
pid: 32807
status: 0
......
```
As a result, fstime processes that have completed the dual sync operations enter the file-copy phase earlier than those still blocked, so during the early stages of the test the actual number of parallel file-copy processes can be smaller than the number requested with the '-c' parameter, which is not the intended behavior of the test, AFAICS. With fewer copy processes running in parallel there is less pressure on memory bandwidth, so each fstime process scores higher under the reduced contention. All fstime processes do eventually finish, but the total benchmark run time grows, as shown in the "abnormally high score" figure, and the overall score ends up higher than in the normal case.

Could we just call fsync(2) on the per-process r/w files instead of doing a system-wide sync(2) to avoid this issue?
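For illustration only, here is a minimal sketch of that idea; `sync_test_files`, `read_fd`, and `write_fd` are hypothetical names for the per-process r/w files, not the identifiers actually used in fstime.c:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Sketch: flush only this process's own test files instead of calling
 * the system-wide sync(), so parallel fstime copies do not serialize
 * on the global inode-sync rwsem seen in the hung-task traces above.
 */
static void sync_test_files(int read_fd, int write_fd)
{
    /* fsync(2) waits only for the given file's data and metadata,
     * avoiding the iterate_supers()/sync_inodes_sb() path that the
     * system-wide sync(2) takes. */
    if (fsync(read_fd) < 0 || fsync(write_fd) < 0) {
        perror("fsync");
        exit(1);
    }
}
```

fdatasync(2) would be an even lighter option if the files' metadata does not need to be flushed before the timed phase starts.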

Thanks!
