
Size computation slows bulk insert significantly #237

@esoha-nvidia

Description

The size computation requires a small memcpy from device to host followed by a stream synchronization. Each of these causes serious performance degradation.

CUCO_CUDA_TRY(cudaMemcpyAsync(
&h_num_successes, num_successes_, sizeof(atomic_ctr_type), cudaMemcpyDeviceToHost, stream));
CUCO_CUDA_TRY(cudaStreamSynchronize(stream));
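// (Presumably these lines run at the end of the bulk insert so that the
//  host-side size_ can be updated from the device-side success counter.)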

The synchronization is bad because it blocks the host thread, which then cannot launch work on other, unrelated streams.

The memcpy is bad because, on architectures with a limited number of CUDA copy engines, future copies are queued behind this one.

I was able to get a significant performance improvement by deleting these lines.

There ought to be a better way to compute the size, perhaps a lazy method. If that is too difficult, you might consider using templates to let the user choose not to maintain size_ at all! Use templates to change the type of size_ from an integer to a struct that has no members, so it takes up essentially no space, and provide no methods on that struct so that size_ doesn't get accidentally used. It will still use a little space on the host, but that seems like no big deal.
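
To make the suggestion concrete, here is a rough sketch of what a template-based opt-out combined with a lazy size could look like. This is not the cuco API: the policy names, the pinned host mirror, and the std::size_t counter type are assumptions made purely for illustration, and the sketch assumes the device-side success counter can be kept as a running total across inserts.

#include <cuda_runtime.h>
#include <cstddef>

// Opting out: an empty struct with no members and no methods, so any
// accidental use of size_ fails to compile.  (It still occupies one byte
// on the host unless something like [[no_unique_address]] is applied.)
struct no_size_tracking {
};

// Lazy tracking: insert() only enqueues the async copy into a pinned host
// mirror; the synchronization is paid on demand, the first time size() is
// actually queried.
struct lazy_size_tracking {
  std::size_t* d_num_successes{};  // device-side running success counter
  std::size_t* h_num_successes{};  // pinned host mirror of the counter
  cudaStream_t last_stream{};

  // Called at the end of a bulk insert: enqueue the copy, but do NOT block.
  void enqueue_update(cudaStream_t stream)
  {
    last_stream = stream;
    cudaMemcpyAsync(h_num_successes, d_num_successes, sizeof(std::size_t),
                    cudaMemcpyDeviceToHost, stream);
  }

  // Called only when the user actually asks for the size.
  std::size_t get() const
  {
    cudaStreamSynchronize(last_stream);  // the sync cost moves here, on demand
    return *h_num_successes;
  }
};

// The map would pick a policy at compile time; users who never call size()
// instantiate it with no_size_tracking and skip the copy and the sync entirely.
template <typename Key, typename Value, typename SizePolicy = lazy_size_tracking>
class static_map_sketch {
  SizePolicy size_;  // empty when SizePolicy = no_size_tracking
  // ... slot storage, hashers, etc. ...
};

With no_size_tracking, the counter member is empty and any accidental use of size_ fails to compile, which is exactly the point of giving the struct no methods.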
