
Conversation

@dchigarev (Contributor) commented Dec 13, 2024

We shouldn't merge this until a proper lowering of gpu.memcpy for the tensor.concat case is ready in GC.

Signed-off-by: dchigarev <[email protected]>
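For context, a hedged sketch (not taken from this PR; shapes and value names are made up): after bufferization, a `tensor.concat` typically lowers to an allocation plus per-operand `memref.copy` ops into subviews of the result buffer, which is why those copies need a GPU-aware lowering before this can be merged:

```mlir
// Illustrative IR only: concatenating two 4xf32 tensors bufferizes to roughly
%alloc = memref.alloc() : memref<8xf32>
%view0 = memref.subview %alloc[0] [4] [1]
    : memref<8xf32> to memref<4xf32, strided<[1]>>
memref.copy %lhs, %view0 : memref<4xf32> to memref<4xf32, strided<[1]>>
%view1 = memref.subview %alloc[4] [4] [1]
    : memref<8xf32> to memref<4xf32, strided<[1], offset: 4>>
memref.copy %rhs, %view1 : memref<4xf32> to memref<4xf32, strided<[1], offset: 4>>
```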
@dchigarev (Contributor, Author) commented:

cc @kurapov-peter @AndreyPavlenko

@dchigarev dchigarev marked this pull request as ready for review December 13, 2024 13:38
Comment on lines +79 to +91
```cpp
static void replaceMemrefCopyWithGpuMemcpy(mlir::OpBuilder &builder,
                                           mlir::Value memory) {
  mlir::SmallVector<mlir::memref::CopyOp> toErase;

  for (auto u : memory.getUsers()) {
    if (auto copyOp = mlir::dyn_cast<mlir::memref::CopyOp>(u)) {
      builder.setInsertionPoint(copyOp);
      builder.create<mlir::gpu::MemcpyOp>(
          copyOp.getLoc(), /*resultTypes=*/mlir::TypeRange(),
          /*asyncDeps=*/mlir::ValueRange(), /*dst=*/copyOp.getTarget(),
          /*src=*/copyOp.getSource());
      toErase.push_back(copyOp);
    } else if (u->getNumResults() == 0)
```

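For illustration only (not part of the diff; the memref type here is made up): the loop above replaces each host-side copy with a synchronous GPU runtime copy, i.e. roughly this IR rewrite:

```mlir
// before: host-side copy produced by bufferization
memref.copy %src, %dst : memref<16xf32> to memref<16xf32>
// after: synchronous gpu.memcpy created by the builder (no async deps,
// no async token result; note dst comes first in gpu.memcpy)
gpu.memcpy %dst, %src : memref<16xf32>, memref<16xf32>
```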

Should it check if we are in the gpu module?
