
Creating better datasets for weak scaling analysis. #36

@tinaok

We have performed a weak scaling analysis using chunk_size = 64, 128, 256, and 512 MB, starting from 1 node.
We used chunk_per_worker = 10 for all chunk sizes.
This gives each analysis a different total data size.
What about setting chunk_per_worker to 80, 40, 20, and 10 for chunk_size = 64, 128, 256, and 512 MB respectively?
Then the total computational size is the same for every chunk size, so we should really be able to see the effect of chunk_size on computation time.
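A minimal sketch of the proposed pairing, assuming a dask.array-based benchmark; the `configs` mapping, the square float64 chunks, and the random-array workload are illustrative assumptions, not the repository's actual setup:

```python
import dask.array as da

# Proposed pairing: chunk_per_worker is chosen so that
# chunk_size_mb * chunk_per_worker is constant (~5120 MB per worker).
configs = {64: 80, 128: 40, 256: 20, 512: 10}

n_workers = 1  # starting from 1 node, as in the original runs

for chunk_size_mb, chunk_per_worker in configs.items():
    # Side length of a square float64 chunk of the requested size
    # (float64 = 8 bytes, so a chunk holds chunk_size_mb * 2**20 / 8 elements).
    n = int((chunk_size_mb * 2**20 / 8) ** 0.5)

    total_chunks = chunk_per_worker * n_workers
    data = da.random.random((n * total_chunks, n), chunks=(n, n))

    print(
        f"chunk_size={chunk_size_mb} MB, "
        f"chunk_per_worker={chunk_per_worker}, "
        f"total={data.nbytes / 2**20:.0f} MB"
    )
```

With this pairing, every configuration holds roughly the same total data per worker (~5120 MB), so any runtime difference between the runs should reflect the chunk_size effect rather than differing data volumes.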
