
IOPS mismatch with [Max|Min|Mean](OPS) #532

@santos-lucas

Description

Hello,

I can't fully understand the big difference between the IOPS value reported for each iteration and the [Max|Min|Mean](OPs) values in the result summary. Sorry if this is just a misunderstanding on my side and not an issue, but looking at the following execution, for instance:

IOR-4.0.0: MPI Coordinated Test of Parallel I/O
Began               : Thu Jan 29 20:37:17 2026
Command line        : ior -a posix -F -e -g -b 14g -t 7k -C -i 5
Machine             : Linux c01
TestID              : 0
StartTime           : Thu Jan 29 20:37:17 2026
Path                : testFile.00000000
FS                  : 1039.6 TiB   Used FS: 0.1%   Inodes: 3932.2 Mi   Used Inodes: 0.0%

Options: 
api                 : posix
apiVersion          : 
test filename       : testFile
access              : file-per-process
type                : independent
segments            : 1
ordering in a file  : sequential
ordering inter file : constant task offset
task offset         : 1
nodes               : 64
tasks               : 256
clients per node    : 4
repetitions         : 5
xfersize            : 7168 bytes
blocksize           : 14 GiB
aggregate filesize  : 3.50 TiB

Results: 

access    bw(MiB/s)  IOPS       Latency(s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ----       ----------  ---------- ---------  --------   --------   --------   --------   ----
write     104035     16924917   0.000006    14680064   7.00       3.56       31.72      3.55       35.28      0   
read      116120     19139176   0.000012    14680064   7.00       3.55       28.05      3.55       31.61      0   
write     102854     15346962   0.000006    14680064   7.00       3.55       34.98      3.55       35.68      1   
read      115588     19039327   0.000013    14680064   7.00       3.55       28.20      3.55       31.75      1   
write     102545     16654813   0.000011    14680064   7.00       3.55       32.24      3.55       35.79      2   
read      117412     19377959   0.000013    14680064   7.00       3.55       27.71      3.55       31.26      2   
write     102603     16665398   0.000012    14680064   7.00       3.55       32.21      3.55       35.77      3   
read      114439     18826171   0.000013    14680064   7.00       3.55       28.52      3.55       32.07      3   
write     107146     17488330   0.000011    14680064   7.00       3.55       30.70      3.55       34.25      4   
read      115905     17901555   0.000010    14680064   7.00       3.55       29.99      3.55       31.66      4   

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
write      107145.67  102544.72  103836.34    1740.46 15673880.61 15000827.74 15189773.68  254604.69   35.35397         NA            NA     0    256   4    5   1     1        1         0    0      1 15032385536     7168 3670016.0 posix      0
read       117411.81  114439.44  115893.00     955.44 17175670.14 16740855.35 16953489.70  139767.29   31.66943         NA            NA     0    256   4    5   1     1        1         0    0      1 15032385536     7168 3670016.0 posix      0
Finished            : Thu Jan 29 20:42:30 2026

For the summary, I can see that the OPs values make sense, because they are basically BW / transfer size, which matches the code as far as I can understand it:

static struct results *bw_values(const int reps, IOR_results_t *measured,
                                 const double *vals, const int access)
{
        return bw_ops_values(reps, measured, 1, vals, access);
}

static struct results *ops_values(const int reps, IOR_results_t *measured,
                                  IOR_offset_t transfer_size,
                                  const double *vals, const int access)
{
        return bw_ops_values(reps, measured, transfer_size, vals, access);
}
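
As a quick sanity check, here is a minimal standalone sketch (my own code, not IOR's; the helper name is made up) that reproduces the write Mean(OPs) from the write Mean(MiB) and the 7168-byte transfer size:

#include <stdio.h>

/* Made-up helper (not IOR code): convert a bandwidth reported in MiB/s
 * into operations per second for a given transfer size in bytes. */
static double mib_per_sec_to_ops(double bw_mib, double xfer_bytes)
{
        return bw_mib * 1048576.0 / xfer_bytes;
}

int main(void)
{
        /* write Mean(MiB) from the summary above, xfersize = 7168 bytes */
        printf("%.2f ops/s\n", mib_per_sec_to_ops(103836.34, 7168.0));
        /* prints ~15189773, in line with the reported Mean(OPs) = 15189773.68 */
        return 0;
}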

But when we are talking about the IOPS for each iteration, taking for instance the first write iteration of this test:

access    bw(MiB/s)  IOPS       Latency(s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ----       ----------  ---------- ---------  --------   --------   --------   --------   ----
write     104035     16924917   0.000006    14680064   7.00       3.56       31.72      3.55       35.28      0

104035 (MiB/s for bw) * 1024 = 106531840 (KiB/s for bw)

106531840 (KiB/s for bw) / 7 (xfer KiB) = 15218834.28

which differs from the IOPS value in the result: 16924917
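
Working directly in bytes gives the same figure: 104035 MiB/s * 1048576 B/MiB / 7168 B ≈ 15218834 ops/s, so the gap is not a unit-conversion artifact.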

And when I look at the code:

        bw = (double)point->aggFileSizeForBW / totalTime;

        /* For IOPS in this iteration, we divide the total amount of IOs from
         * all ranks over the entire access time (first start -> last end). */
        iops = (point->aggFileSizeForBW / params->transferSize) / accessTime;

I can't really see how these values end up so different, since they seem to be calculated in almost the same way as in the summary.
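
To make the comparison concrete, here is a minimal standalone sketch (again my own code, not IOR's) that plugs the iteration-0 write numbers from above into the two quoted formulas. I am assuming that totalTime corresponds to the total(s) column and accessTime to the wr/rd(s) column, and since the printed times are rounded the results are only approximate:

#include <stdio.h>

int main(void)
{
        /* Numbers taken from the run above (write, iteration 0); the mapping of
         * the time columns onto totalTime/accessTime is my assumption. */
        double agg_file_size = 3670016.0 * 1048576.0;  /* aggs(MiB) in bytes */
        double transfer_size = 7168.0;                 /* xfersize in bytes */
        double total_time    = 35.28;                  /* total(s) column */
        double access_time   = 31.72;                  /* wr/rd(s) column */

        double bw   = agg_file_size / total_time;                    /* bytes/s */
        double iops = (agg_file_size / transfer_size) / access_time; /* ops/s */

        printf("bw          ~ %.0f MiB/s\n", bw / 1048576.0); /* ~104026, reported 104035 */
        printf("iops        ~ %.0f ops/s\n", iops);           /* close to the reported 16924917 */
        printf("bw/xfersize ~ %.0f ops/s\n", bw / transfer_size); /* ~15.2M, the back-of-envelope figure */
        return 0;
}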
