
Commit 38f3d5e

noclowns authored and varun-r-mallya committed
[MLIR] Fix use-after-move for DEBUG builds, and broken assert logic. (llvm#164763)
These issues affect only Debug builds, and Release builds with asserts enabled.

1. In `SparseTensor.h`, a variable is moved from inside an assert. This side effect alters the variable's subsequent use and makes behavior diverge between Debug builds and Release builds (where asserts are disabled).

2. In `IterationGraphSorter.cpp`, the constructor arguments are moved from in the initializer list to initialize identically named member variables. Because of this naming collision, the arguments shadow their member-variable counterparts inside the constructor body, so the unqualified names in the asserts referred to the constructor arguments, which had already been moved from. This is not just undefined behavior; the asserts were broken outright: a moved-from `SmallVector` is reset to size 0, so the comparisons operated on two hollowed-out objects and always succeeded, rendering the asserts ineffective. The ambiguity is fixed by qualifying the names with `this->` so they refer to the member variables carrying the relevant state.

3. While fix 2 made the asserts act as intended, it also unexpectedly broke one MLIR test: `llvm-lit -v mlir/test/Dialect/SparseTensor/sparse_scalars.mlir`. This required fixing the assert logic itself, which likely never worked and went unnoticed all this time due to bug 2. In the failing test, the `%argq` operand of `ins` is a scalar of type `f32`, but the assert as originally written had no support for scalar types and rejected the test.

Testing:
```
ninja check-mlir
llvm-lit -v mlir/test/Dialect/SparseTensor/sparse_scalars.mlir
```
1 parent 059cacb commit 38f3d5e

File tree: 2 files changed (+14, -8 lines)


mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h

Lines changed: 2 additions & 4 deletions

```diff
@@ -158,16 +158,14 @@ namespace sparse_tensor {
 /// Convenience method to abbreviate casting `getType()`.
 template <typename T>
 inline RankedTensorType getRankedTensorType(T &&t) {
-  assert(static_cast<bool>(std::forward<T>(t)) &&
-         "getRankedTensorType got null argument");
+  assert(static_cast<bool>(t) && "getRankedTensorType got null argument");
   return dyn_cast<RankedTensorType>(std::forward<T>(t).getType());
 }

 /// Convenience method to abbreviate casting `getType()`.
 template <typename T>
 inline MemRefType getMemRefType(T &&t) {
-  assert(static_cast<bool>(std::forward<T>(t)) &&
-         "getMemRefType got null argument");
+  assert(static_cast<bool>(t) && "getMemRefType got null argument");
   return cast<MemRefType>(std::forward<T>(t).getType());
 }
```

mlir/lib/Dialect/SparseTensor/Transforms/Utils/IterationGraphSorter.cpp

Lines changed: 12 additions & 4 deletions

```diff
@@ -159,14 +159,22 @@ IterationGraphSorter::IterationGraphSorter(
       loop2OutLvl(loop2OutLvl), iterTypes(std::move(iterTypes)),
       strategy(strategy) {
   // One map per tensor.
-  assert(loop2InsLvl.size() == ins.size());
+  assert(this->loop2InsLvl.size() == this->ins.size());
   // All the affine maps have the same number of dimensions (loops).
   assert(llvm::all_equal(llvm::map_range(
-      loop2InsLvl, [](AffineMap m) { return m.getNumDims(); })));
+      this->loop2InsLvl, [](AffineMap m) { return m.getNumDims(); })));
   // The number of results of the map should match the rank of the tensor.
-  assert(llvm::all_of(llvm::zip(loop2InsLvl, ins), [](auto mvPair) {
+  assert(llvm::all_of(llvm::zip(this->loop2InsLvl, this->ins), [](auto mvPair) {
     auto [m, v] = mvPair;
-    return m.getNumResults() == cast<ShapedType>(v.getType()).getRank();
+
+    // For ranked types the rank must match.
+    // Simply return true for UnrankedTensorType
+    if (auto shapedType = llvm::dyn_cast<ShapedType>(v.getType())) {
+      return !shapedType.hasRank() ||
+             (m.getNumResults() == shapedType.getRank());
+    }
+    // Non-shaped (scalar) types behave like rank-0.
+    return m.getNumResults() == 0;
   }));

   itGraph.resize(getNumLoops(), std::vector<bool>(getNumLoops(), false));
```
