
Conversation


@ddmatsu commented Jul 2, 2024

The amount of work in the dot calculation is not consistent among the different implementations: the larger the array size, the longer the HIP version takes to complete relative to the CUDA version.

# hip-stream -n 1500 -s $((1<<30)) | grep Dot
Dot         1376603.333 0.01248     0.01266     0.01251
# cuda-stream -n 1500 -s $((1<<30)) | grep Dot
Dot         1444860.830 0.01189     0.01199     0.01193

The HIP version currently uses the array size to determine 'dot_num_blocks', which is used both as the kernel grid size and as the iteration count for the reduction in the host code. The CUDA counterpart determines 'dot_num_blocks' from the number of SMs reported by the device. The CUDA approach should give more reliable results because of higher occupancy and a more reasonable amount of reduction overhead on the host.
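
For context, here is a minimal sketch of the two sizing strategies. This is not the exact BabelStream code; the thread-block size, the multiple of the SM count, and the helper names are assumptions for illustration.

```cpp
#include <hip/hip_runtime.h>

constexpr size_t TBSIZE = 1024;  // threads per block (assumed value)

// Array-size-based sizing (roughly the current HIP approach): one block per
// TBSIZE-element chunk, so the grid, and the number of per-block partial
// sums the host must later reduce, grows with the array size.
size_t dot_blocks_from_array_size(size_t array_size)
{
  return array_size / TBSIZE;            // e.g. 2^30 elements -> ~1M blocks
}

// SM-count-based sizing (the CUDA approach): a small multiple of the number
// of SMs, independent of the array size; the dot kernel then covers the
// whole array with a grid-stride loop inside each block.
size_t dot_blocks_from_sm_count(int device)
{
  hipDeviceProp_t props;
  hipGetDeviceProperties(&props, device);
  return static_cast<size_t>(props.multiProcessorCount) * 4;  // ~432 on A100
}
```

With a grid-stride loop in the dot kernel, the grid does not need to scale with the array size, so SM-based sizing keeps the GPU fully occupied while leaving only a few hundred partial sums for the host to accumulate.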

ddmatsu added 3 commits July 1, 2024 15:52
The results did not match between cuda-stream and hip-stream on the same NVIDIA GPU (NVIDIA A100 40GB PCIe) when a large array size is specified. cuda-stream uses the number of SMs to decide dot_num_blocks, which looks more sensible than using the array size to determine the parameter, since dot_num_blocks is used both as the kernel grid size and as the iteration count for the reduction in the host code.

Link: UoB-HPC@9954b7d
Signed-off-by: Daisuke Matsuda <[email protected]>
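
Since dot_num_blocks also sets how many per-block partial sums the host copies back and accumulates, a rough sketch of that final step may help. The function and buffer names are hypothetical and continue the assumptions from the sketch above.

```cpp
#include <hip/hip_runtime.h>
#include <vector>

// Hypothetical final step of the dot benchmark: copy back one partial sum
// per block and accumulate them on the host.
double finish_dot(const double* d_partial, size_t dot_num_blocks)
{
  std::vector<double> h_partial(dot_num_blocks);
  hipMemcpy(h_partial.data(), d_partial, dot_num_blocks * sizeof(double),
            hipMemcpyDeviceToHost);

  double sum = 0.0;
  for (size_t i = 0; i < dot_num_blocks; i++)
    sum += h_partial[i];
  return sum;
}
```

With an array-size-derived block count, this copy and loop cover on the order of a million partial sums for a 2^30-element array; with SM-based sizing they cover only a few hundred, which is the host-side overhead the description above refers to.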
@tomdeakin (Contributor) commented

@thomasgibson - can you comment on this as I think you wrote the current HIP version?
