149 changes: 124 additions & 25 deletions flang/lib/Optimizer/OpenMP/MapInfoFinalization.cpp
@@ -40,6 +40,7 @@
 #include "mlir/IR/SymbolTable.h"
 #include "mlir/Pass/Pass.h"
 #include "mlir/Support/LLVM.h"
+#include "llvm/ADT/BitmaskEnum.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/StringSet.h"
 #include "llvm/Frontend/OpenMP/OMPConstants.h"
@@ -398,24 +399,18 @@ class MapInfoFinalizationPass
                           baseAddrIndex);
   }
 
-  /// Adjusts the descriptor's map type. The main alteration that is done
-  /// currently is transforming the map type to `OMP_MAP_TO` where possible.
-  /// This is because we will always need to map the descriptor to device
-  /// (or at the very least it seems to be the case currently with the
-  /// current lowered kernel IR), as without the appropriate descriptor
-  /// information on the device there is a risk of the kernel IR
-  /// requesting for various data that will not have been copied to
-  /// perform things like indexing. This can cause segfaults and
-  /// memory access errors. However, we do not need this data mapped
-  /// back to the host from the device, as per the OpenMP spec we cannot alter
-  /// the data via resizing or deletion on the device. Discarding any
-  /// descriptor alterations via no map back is reasonable (and required
-  /// for certain segments of descriptor data like the type descriptor that are
-  /// global constants). This alteration is only inapplicable to `target exit`
-  /// and `target update` currently, and that's due to `target exit` not
-  /// allowing `to` mappings, and `target update` not allowing both `to` and
-  /// `from` simultaneously. We currently try to maintain the `implicit` flag
-  /// where necessary, although it does not seem strictly required.
+  /// Adjust the descriptor's map type to ensure the descriptor itself is
+  /// present on the device when needed, without changing the user's
+  /// requested mapping semantics for the underlying data.
+  ///
+  /// We conservatively transform descriptor mappings to `OMP_MAP_TO` (and
+  /// preserve `IMPLICIT`/`ALWAYS` when present) for structured regions. The
+  /// descriptor should live on the device for indexing, bounds, etc., but we
+  /// neither require nor want additional mapping semantics such as `CLOSE`
+  /// for the descriptor entry itself. `CLOSE` (and other user-provided
+  /// flags) should apply to the base data entry that actually carries the
+  /// pointee, which is generated separately as a member map. For
+  /// `target exit`/`target update` we keep the original map type unchanged.
   unsigned long getDescriptorMapType(unsigned long mapTypeFlag,
                                      mlir::Operation *target) {
     using mapFlags = llvm::omp::OpenMPOffloadMappingFlags;
@@ -425,8 +420,24 @@

     mapFlags flags = mapFlags::OMP_MAP_TO |
                      (mapFlags(mapTypeFlag) &
-                      (mapFlags::OMP_MAP_IMPLICIT | mapFlags::OMP_MAP_CLOSE |
-                       mapFlags::OMP_MAP_ALWAYS));
+                      (mapFlags::OMP_MAP_IMPLICIT | mapFlags::OMP_MAP_ALWAYS));
 
+    // For unified_shared_memory, we additionally add `CLOSE` on the
+    // descriptor to ensure device-local placement where USM + close
+    // semantics require it.
+    if (target) {
+      if (auto mod = target->getParentOfType<mlir::ModuleOp>()) {
[Review thread]

Contributor:

Could be worth mentioning that we have a little check downstream currently (one I think I've added to a recent upstream PR that hasn't landed yet) that prevents `always` + `close` being applied at the same time, as that combination was causing mapping issues for the runtime in certain cases. Not sure that's applicable here, but just in case it crops up for whatever reason!

Nit: it might be nice to put this check logic into a helper function, just to keep the logic in this function as simple as it can be; all of the bit flags can already be a little difficult to follow! :-) Just a small suggestion, though, if it makes sense.

Member Author:

> Could be worth mentioning that we have a little check downstream currently [...]

Thanks, I'll revisit when it lands upstream.

> Nit: it might be nice to put this check logic into a helper function [...]

I've refactored this to add a utility func.

+        if (mlir::Attribute reqAttr = mod->getAttr("omp.requires")) {
+          if (auto req =
+                  mlir::dyn_cast<mlir::omp::ClauseRequiresAttr>(reqAttr)) {
+            if (mlir::omp::bitEnumContainsAll(
+                    req.getValue(),
+                    mlir::omp::ClauseRequires::unified_shared_memory))
+              flags |= mapFlags::OMP_MAP_CLOSE;
+          }
+        }
+      }
+    }
     return llvm::to_underlying(flags);
   }
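The "utility func" the author mentions in the thread above does not appear in this commit's diff. A minimal sketch of what it might look like, assuming the name `moduleRequiresUSM` and reusing only API calls already present in this file (the name and exact shape are hypothetical, not from the PR):

    // Hypothetical helper (name assumed, not from the PR): returns true when
    // the module enclosing `op` carries an `omp.requires` attribute that
    // includes unified_shared_memory. Mirrors the inline lookup above.
    static bool moduleRequiresUSM(mlir::Operation *op) {
      if (!op)
        return false;
      auto mod = op->getParentOfType<mlir::ModuleOp>();
      if (!mod)
        return false;
      if (mlir::Attribute reqAttr = mod->getAttr("omp.requires"))
        if (auto req = mlir::dyn_cast<mlir::omp::ClauseRequiresAttr>(reqAttr))
          return mlir::omp::bitEnumContainsAll(
              req.getValue(),
              mlir::omp::ClauseRequires::unified_shared_memory);
      return false;
    }

With such a helper, the USM check above would reduce to:

    if (moduleRequiresUSM(target))
      flags |= mapFlags::OMP_MAP_CLOSE;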

@@ -518,6 +529,75 @@ class MapInfoFinalizationPass
     return newMapInfoOp;
   }
 
+  // Expand mappings of type(C_PTR) to map their `__address` field
+  // explicitly as a single pointer-sized member (gated on USM at the call
+  // site). This ensures the pointer-sized mapping is used in USM scenarios.
+  mlir::omp::MapInfoOp genCptrMemberMap(mlir::omp::MapInfoOp op,
+                                        fir::FirOpBuilder &builder) {
+    if (!op.getMembers().empty())
+      return op;
+
+    mlir::Type varTy = fir::unwrapRefType(op.getVarPtr().getType());
+    if (!mlir::isa<fir::RecordType>(varTy))
+      return op;
+    auto recTy = mlir::cast<fir::RecordType>(varTy);
+    // If this is not the builtin C_PTR record, skip.
+    if (!recTy.getName().ends_with("__builtin_c_ptr"))
+      return op;
+
+    // Find the index of the c_ptr address component named "__address".
+    int32_t fieldIdx = recTy.getFieldIndex("__address");
+    if (fieldIdx < 0)
+      return op;
+
+    mlir::Location loc = op.getVarPtr().getLoc();
+    mlir::Type memTy = recTy.getType(fieldIdx);
+    fir::IntOrValue idxConst =
+        mlir::IntegerAttr::get(builder.getI32Type(), fieldIdx);
+    mlir::Value coord = fir::CoordinateOp::create(
+        builder, loc, builder.getRefType(memTy), op.getVarPtr(),
+        llvm::SmallVector<fir::IntOrValue, 1>{idxConst});
+
+    // Child map for the `__address` member.
+    llvm::SmallVector<llvm::SmallVector<int64_t>> memberIdx = {{0}};
+    mlir::ArrayAttr newMembersAttr = builder.create2DI64ArrayAttr(memberIdx);
+    // Force CLOSE in USM paths so the pointer gets device-local placement
+    // where USM + close semantics require it.
+    uint64_t mapTypeVal =
+        op.getMapType() |
+        llvm::to_underlying(
+            llvm::omp::OpenMPOffloadMappingFlags::OMP_MAP_CLOSE);
+    mlir::IntegerAttr mapTypeAttr = builder.getIntegerAttr(
+        builder.getIntegerType(64, /*isSigned=*/false), mapTypeVal);
+
+    mlir::omp::MapInfoOp memberMap = mlir::omp::MapInfoOp::create(
+        builder, loc, coord.getType(), coord,
+        mlir::TypeAttr::get(fir::unwrapRefType(coord.getType())), mapTypeAttr,
+        builder.getAttr<mlir::omp::VariableCaptureKindAttr>(
+            mlir::omp::VariableCaptureKind::ByRef),
+        /*varPtrPtr=*/mlir::Value{},
+        /*members=*/llvm::SmallVector<mlir::Value>{},
+        /*member_index=*/mlir::ArrayAttr{},
+        /*bounds=*/op.getBounds(),
+        /*mapperId=*/mlir::FlatSymbolRefAttr(),
+        /*name=*/op.getNameAttr(),
+        /*partial_map=*/builder.getBoolAttr(false));
+
+    // Rebuild the parent as a container with the `__address` member.
+    mlir::omp::MapInfoOp newParent = mlir::omp::MapInfoOp::create(
+        builder, op.getLoc(), op.getResult().getType(), op.getVarPtr(),
+        op.getVarTypeAttr(), mapTypeAttr, op.getMapCaptureTypeAttr(),
+        /*varPtrPtr=*/mlir::Value{},
+        /*members=*/llvm::SmallVector<mlir::Value>{memberMap},
+        /*member_index=*/newMembersAttr,
+        /*bounds=*/llvm::SmallVector<mlir::Value>{},
+        /*mapperId=*/mlir::FlatSymbolRefAttr(), op.getNameAttr(),
+        /*partial_map=*/builder.getBoolAttr(false));
+    op.replaceAllUsesWith(newParent.getResult());
+    op->erase();
+    return newParent;
+  }

   mlir::omp::MapInfoOp genDescriptorMemberMaps(mlir::omp::MapInfoOp op,
                                                fir::FirOpBuilder &builder,
                                                mlir::Operation *target) {
@@ -727,11 +807,6 @@
                   argIface.getUseDeviceAddrBlockArgsStart() +
                       argIface.numUseDeviceAddrBlockArgs());
 
-      mlir::MutableOperandRange useDevPtrMutableOpRange =
[Review thread]

Contributor:

Wouldn't the new expanded argument still want to be added to the BlockArgs for consistency? I do understand that they won't be used within the region directly, that it can be a pain to have the extra BlockArguments for the use_dev stuff (we do a lot of re-binding in the lowering, and I've had my fair share of headaches with it), and that we're a little more lenient on these things for TargetData, so I'll leave it up to you and @skatrak :-)

Member Author:

I've reverted this change to maintain consistency.

-          targetDataOp.getUseDevicePtrVarsMutable();
-      addOperands(useDevPtrMutableOpRange, target,
-                  argIface.getUseDevicePtrBlockArgsStart() +
-                      argIface.numUseDevicePtrBlockArgs());
     } else if (auto targetOp = llvm::dyn_cast<mlir::omp::TargetOp>(target)) {
       mlir::MutableOperandRange hasDevAddrMutableOpRange =
           targetOp.getHasDeviceAddrVarsMutable();
@@ -1169,6 +1244,30 @@
       genBoxcharMemberMap(op, builder);
     });
 
+    // Expand type(C_PTR) only when unified_shared_memory is required, to
+    // ensure device-visible pointer size/behavior in USM scenarios without
+    // changing default expectations elsewhere.
+    func->walk([&](mlir::omp::MapInfoOp op) {
+      // Check whether the module requires USM; otherwise, leave the
+      // mappings untouched.
+      auto mod = func->getParentOfType<mlir::ModuleOp>();
+      bool hasUSM = false;
+      if (mod) {
+        if (mlir::Attribute reqAttr = mod->getAttr("omp.requires")) {
+          if (auto req =
+                  mlir::dyn_cast<mlir::omp::ClauseRequiresAttr>(reqAttr)) {
+            hasUSM = mlir::omp::bitEnumContainsAll(
+                req.getValue(),
+                mlir::omp::ClauseRequires::unified_shared_memory);
+          }
+        }
+      }
+      if (!hasUSM)
+        return;
+
+      builder.setInsertionPoint(op);
+      genCptrMemberMap(op, builder);
+    });
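Note that this walk repeats the `omp.requires` lookup from `getDescriptorMapType`. With the hypothetical `moduleRequiresUSM` helper sketched earlier, the USM gate here could be hoisted out of the walk entirely, along these lines:

    // Assumes the sketched helper; use func.getOperation() if `func` is an
    // op handle rather than an mlir::Operation *.
    if (moduleRequiresUSM(func))
      func->walk([&](mlir::omp::MapInfoOp op) {
        builder.setInsertionPoint(op);
        genCptrMemberMap(op, builder);
      });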

     func->walk([&](mlir::omp::MapInfoOp op) {
       // TODO: Currently only supports a single user for the MapInfoOp. This
       // is fine for the moment, as the Fortran frontend will generate a