
[mlir][parser] Fix creation of placeholder entries createForwardRefPlaceholder for null types#12

Open
ad-zama wants to merge 24 commits into main from andi/rersolve-operands

Conversation


ad-zama commented Sep 11, 2025

The function `resolveOperands` in `GLWEOpsParsers.cpp` deliberately
erases the types of all operands of `glwe.call` operations to avoid
type clashes before type propagation. However, for undefined SSA
values, the subsequent invocation of `Parser::resolveOperands()`
reuses the erased type when creating a placeholder operation and
passes it to the builder of `builtin.unrealized_conversion_cast`. This
leads to a `TypeRange` with a null type being created for the operands
of the placeholder operation, ultimately triggering an assertion in
the `TypeRange` constructor.

This workaround sets the type of the placeholder operation to a bogus
integer type so that a valid type range can be created and the
placeholder operation can be built successfully.

BourgerieQuentin and others added 24 commits February 22, 2023 16:01
Linalg named operations are currently limited to tensors and memrefs
composed of floating point, integer or complex elements and using any
other element type triggers an assertion.

This change adds support for arbitrary element types through the
specification of the arithmetic operations associated to a type in
specific attributes of a linalg named operation. The attributes' names
correspond to the short form of the arithmetic operator implemented by
the operation (i.e., add, sub, mul, max_signed, max_unsigned,
min_signed, min_unsigned, exp, abs, ceil, negf or log) and receive as
values the name of an operation and optionally a return type given
after a colon.

For example, a `linalg.matmul` operation multiplying two tensors
composed of complex elements would be expressed as:

  linalg.matmul { add = "complex.add", mul = "complex.mul" }
    ins(%arg1, %arg2 : tensor<?x?xcomplex<f32>>, tensor<?x?xcomplex<f32>>)
    outs(%2 : tensor<?x?xcomplex<f32>>)

Sensible default values for the attributes are given for float,
integer and complex types, such that the omission of the attributes
results in the original behavior of the named operation before this
change. I.e., the expression:

  linalg.matmul { add = "arith.addf", mul = "arith.mulf" }
    ins(%arg1, %arg2 : tensor<?x?xf32>, tensor<?x?xf32>)
    outs(%2 : tensor<?x?xf32>)

and:

  linalg.matmul
    ins(%arg1, %arg2 : tensor<?x?xf32>, tensor<?x?xf32>)
    outs(%2 : tensor<?x?xf32>)

yield identical results.

By default, the result type of an operation implementing an arithmetic
operator is assumed to be identical with the type of the first
argument. If this assumption does not hold for an operation, the
result type must be specified explicitly, e.g.,

  linalg.matmul { add = "custom_add_op:restypeadd",
                  mul = "custom_mul_op:restypemul" }
    ins(%arg1, %arg2 : tensor<?x?xcomplex<f32>>, tensor<?x?xcomplex<f32>>)
    outs(%2 : tensor<?x?xcomplex<f32>>)

The extraction of operation names and result types from attributes,
proper instantiation and default values are provided by a set of
operation interfaces (one per operator) in
`LinalgFrontendInterfaces.td`. The set of operation interfaces
required for a named operation is derived transparently from the
arithmetic expressions in its YAML specification via
`mlir-linalg-ods-yaml-gen`.
Linalg structured ops do not implement control flow in the way expected
by `RegionBranchOpInterface`, and the interface implementation isn't
actually used anywhere. The presence of this interface without a
correct implementation is confusing for, e.g., dataflow analyses.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D155841
…oadcast to initial tensor

This change adds an extra parameter to the function
`generateInitialTensorForPartialReduction` of
`PartialReductionOpInterface` that allows for the specification of a
callback function that returns a value that is broadcast to the tensor
generated by the function. This enables the generation of initial
tensors for partial reductions with a custom result type.
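The commit text does not show the new signature, so as a rough,
self-contained analogy (names and types here are assumed for
illustration and are not the real `PartialReductionOpInterface` API),
the pattern of threading an optional callback that supplies the value
broadcast into the generated initial tensor looks like:

```cpp
#include <cstddef>
#include <functional>
#include <optional>
#include <vector>

// Toy analogy of the extended interface: an initial-tensor factory that
// takes an optional callback producing the value broadcast into the
// result. Without a callback, a default neutral element is used,
// mirroring the original behavior before the change.
std::vector<float>
makeInitialTensor(std::size_t numElements,
                  std::optional<std::function<float()>> initCallback = {}) {
  // Assumed default: the neutral element 0.0f of the reduction.
  float init = initCallback ? (*initCallback)() : 0.0f;
  return std::vector<float>(numElements, init);
}
```

The design point is that callers who need a custom result type or
custom fill value pass the callback, while existing callers are
unaffected by the extra defaulted parameter.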
… value classes

The class `Lattice` should automatically delegate invocations of the
meet operator to the meet operation of the associated lattice value
class if that class provides a static function called `meet`. This
process fails for two reasons:

  1. `Lattice::has_meet` checks the lattice value class for a member
     function `meet` taking no arguments, although it should check for
     a static member function.

  2. The function template `Lattice::meet<VT>()` implementing the
     default meet operation directly in the lattice is always present
     and takes precedence over the delegating function template
     `Lattice::meet<VT, std::integral_constant<bool, true>>()`.

This change fixes the automatic delegation of the meet operation of a
lattice to the lattice value class in the presence of a static `meet`
function by conditionally enabling either the delegating function
template or the non-delegating function template and by changing
`Lattice::has_meet` so that it checks for a static `meet` member
function in the lattice value type.
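A minimal, self-contained C++ sketch of the fixed detection and of the
conditional enabling described above (heavily simplified from the real
`Lattice` template; all names besides `meet` and `has_meet` are
illustrative):

```cpp
#include <type_traits>
#include <utility>

// Detect a *static* `meet(V, V)` on the lattice value class, rather
// than a zero-argument member function.
template <typename V, typename = void>
struct has_meet : std::false_type {};

template <typename V>
struct has_meet<V, std::void_t<decltype(V::meet(std::declval<V>(),
                                                std::declval<V>()))>>
    : std::true_type {};

template <typename V>
struct Lattice {
  V value;

  // Delegating meet: enabled only when V provides a static `meet`, so
  // it never competes with the default implementation below.
  template <typename VT = V>
  std::enable_if_t<has_meet<VT>::value> meet(const Lattice &rhs) {
    value = VT::meet(value, rhs.value);
  }

  // Default meet: enabled only when V does NOT provide a static `meet`.
  template <typename VT = V>
  std::enable_if_t<!has_meet<VT>::value> meet(const Lattice &rhs) {
    (void)rhs; // placeholder default: keep the current value
  }
};

// Example value class with a static meet (here: minimum).
struct MinValue {
  int v;
  static MinValue meet(MinValue a, MinValue b) {
    return {a.v < b.v ? a.v : b.v};
  }
};
```

Because exactly one of the two `meet` overloads is enabled for any
given value class, the precedence problem described in point 2 cannot
arise.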
…el op

Add `RegionBranchOpInterface` to the `scf.forall` and `scf.parallel` ops so that dataflow analyses can trace through their subregions.

Differential Revision: https://reviews.llvm.org/D151287
…rallOp

This adds the method `getSuccessorEntryOperands()` to `ForallOp` in
order to allow dataflow analysis to correctly determine the
relationship between operands and region arguments.
Currently, iter args are not taken into account and `scf.yield` values
are ignored. As such, loop coalescing can only be used after
bufferization.
[MLIR][SCF] Loop coalescing: fix the loop coalescing utility function.
…ization of tensor.insert_slice

This extends the canonicalization of `tensor.insert_slice` with a
pattern that replaces an insertion of a slice having the same number
of elements as the destination, performed at zero offsets, with unit
strides, and with the sizes of the destination, by an equivalent
`tensor.expand_shape` operation.

Example:
```mlir
  %0 = tensor.insert_slice %slice into
          %x[0, 0, 0, 0, 0][1, 1, 1, 16, 32][1, 1, 1, 1, 1] :
          tensor<16x32xf32> into tensor<1x1x1x16x32xf32>
```

folds into:

```mlir
  %0 = tensor.expand_shape %slice[[0,1,2,3], [4]] :
          tensor<16x32xf32> into tensor<1x1x1x16x32xf32>
```
This adds a token for a forward slash to the token definition list, as
well as the methods `AsmParser::parseSlash()` and
`AsmParser::parseOptionalSlash()`, similar to other tokens used as
operators (e.g., star, plus, etc.). This allows implementations of
attributes that contain arithmetic expressions to support an operator
using a forward slash, e.g., a division.
This adds a new method `parseOptionalFloat` to `AsmParser` that
attempts to parse a floating point value. Unlike `parseFloat`, no
errors are emitted and all tokens are put back if parsing fails.
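A toy model of the `parseOptionalFloat` contract (the real method
operates on the parser's token stream; this self-contained sketch only
illustrates the no-error, no-consumption-on-failure behavior):

```cpp
#include <cstdlib>
#include <optional>
#include <string>

// Try to parse a float at `pos`. On success, advance `pos` past the
// literal and return the value; on failure, leave `pos` untouched and
// emit no error, signalling "nothing parsed" via std::nullopt.
std::optional<double> parseOptionalFloat(const std::string &input,
                                         std::size_t &pos) {
  const char *start = input.c_str() + pos;
  char *end = nullptr;
  double value = std::strtod(start, &end);
  if (end == start)
    return std::nullopt; // nothing consumed, position unchanged
  pos += static_cast<std::size_t>(end - start);
  return value;
}
```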
The current detection logic will fail for containers with an overloaded
`push_back` member. This causes issues with types like `std::vector` and
`SmallVector<SomeNonTriviallyCopyableT>`, which have both
`push_back(const T&)` and `push_back(T&&)`.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D147101
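A self-contained sketch of the fixed detection logic (simplified; the
real code lives in MLIR's support templates): instead of testing for a
single `push_back` member, which is ambiguous when both
`push_back(const T&)` and `push_back(T&&)` exist, test whether a *call*
to `push_back` with an element value is well-formed.

```cpp
#include <type_traits>
#include <utility>
#include <vector>

// Primary template: no usable push_back.
template <typename C, typename = void>
struct has_push_back : std::false_type {};

// Specialization chosen when `c.push_back(value)` compiles; overload
// resolution picks among push_back(const T&) / push_back(T&&), so
// overloaded containers like std::vector are detected correctly.
template <typename C>
struct has_push_back<
    C, std::void_t<decltype(std::declval<C &>().push_back(
           std::declval<typename C::value_type>()))>> : std::true_type {};
```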
…mParser

Add the methods `AsmParser::pushLexerPos()` and
`AsmParser::popLexerPos()` that allow parsing code to speculatively
parse portions of the input and to back off in case of a failure.
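A toy model of the `pushLexerPos()`/`popLexerPos()` pattern: a cursor
over the input with a stack of saved positions, so callers can
speculatively parse and back off on failure. (The real `AsmParser`
works on its lexer's token stream; the `commitPos()` helper and all
other details here are assumptions for illustration.)

```cpp
#include <cstddef>
#include <string>
#include <vector>

class Cursor {
public:
  explicit Cursor(std::string text) : text(std::move(text)) {}

  void pushPos() { saved.push_back(pos); }    // begin speculation
  void popPos() {                             // back off to saved position
    pos = saved.back();
    saved.pop_back();
  }
  void commitPos() { saved.pop_back(); }      // keep speculative progress

  // Consume `kw` if the input starts with it at the current position.
  bool parseKeyword(const std::string &kw) {
    if (text.compare(pos, kw.size(), kw) == 0) {
      pos += kw.size();
      return true;
    }
    return false;
  }

  std::size_t position() const { return pos; }

private:
  std::string text;
  std::size_t pos = 0;
  std::vector<std::size_t> saved;
};
```

A caller would `pushPos()`, attempt a sequence of parses, and either
`popPos()` on failure (restoring the pre-speculation position) or
discard the saved position on success.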
…aceholder for null types

The function `resolveOperands` in `GLWEOpsParsers.cpp` deliberately
erases the types of all operands of `glwe.call` operations to avoid
type clashes before type propagation. However, for undefined SSA
values, the subsequent invocation of `Parser::resolveOperands()`
reuses the erased type when creating a placeholder operation and
passes it to the builder of `builtin.unrealized_conversion_cast`. This
leads to a `TypeRange` with a null type being created for the operands
of the placeholder operation, ultimately triggering an assertion in
the `TypeRange` constructor.

This workaround sets the type of the placeholder operation to a bogus
integer type so that a valid type range can be created and the
placeholder operation can be built successfully.