
Scaling analysis #1151

@PhilipDeegan

Description


Testing the current 3D modifications across various numbers of nodes indicates a fundamental scaling issue in PHARE.

This is demonstrated here

Profiling with Scalasca shows a massive amount of time spent in MPI_Allreduce, which comes from the forced synchronization during error checking, after particle pushing, and at the end of the coarse time step.

Skipping this error checking moves the time to MPI_Waitsome, which is called during SAMRAI schedules; this suggests load imbalance across patch neighbors during data synchronization between patches.

Besides these waits, I've seen a lot of time spent in the box iterator, which may be suboptimal: in 3D each increment performs three comparisons, whereas a system of three nested loops would generally reduce each loop increment to a single comparison.
