**Context:** The new bufferization pipeline does not appear to bufferize
the `tensor.generate` operation correctly. It generates the following
code inside the body of `tensor.generate`:
```mlir
%8 = linalg.index 0 : index
%9 = linalg.index 1 : index
%10 = memref.load %arg0[%9] : memref<2xf64>
%11 = arith.addf %10, %cst : f64
memref.store %11, %arg0[%9] : memref<2xf64>
%12 = func.call @circuit_0(%arg0) : (memref<2xf64>) -> memref<2xf64>
```
This code modifies the value stored in memref `%arg0` in place on every
execution of the `tensor.generate` body (or `linalg.map` after
bufferization), so the shift persists across iterations instead of
being applied to a fresh copy. Before the new bufferization pipeline,
the correct code was as follows:
```mlir
%8 = linalg.index 0 : index
%9 = linalg.index 1 : index
%10 = memref.load %arg0[%9] : memref<2xf64>
%11 = arith.addf %10, %cst : f64
%alloc_1 = memref.alloc() {alignment = 64 : i64} : memref<2xf64>
memref.copy %arg0, %alloc_1 : memref<2xf64> to memref<2xf64>
memref.store %11, %alloc_1[%9] : memref<2xf64>
%12 = func.call @circuit_0(%alloc_1) : (memref<2xf64>) -> memref<2xf64>
```
**Description of the Change:**
An explicit copy of the argument is now made before the finite-difference
shift is added to it, so the shifted value is stored into the copy rather
than into the original buffer.
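As a rough, illustrative sketch (value names other than `@circuit_0` are hypothetical), such an explicit copy can be expressed at the tensor level with `bufferization.alloc_tensor ... copy(...)`, which one-shot bufferization lowers to the `memref.alloc`/`memref.copy` pair shown in the correct code above:
```mlir
// Illustrative tensor-level IR; %params, %param_val, %h, %idx are placeholders.
%shifted_val = arith.addf %param_val, %h : f64
// Explicit copy of the parameter tensor; bufferizes to memref.alloc + memref.copy,
// so the insert below cannot write into the original parameter buffer.
%params_copy = bufferization.alloc_tensor() copy(%params) : tensor<2xf64>
%shifted = tensor.insert %shifted_val into %params_copy[%idx] : tensor<2xf64>
%res = func.call @circuit_0(%shifted) : (tensor<2xf64>) -> tensor<2xf64>
```
Because the store targets the freshly allocated copy, the original parameters are left untouched across iterations, matching the behaviour before the new bufferization pipeline.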
**Benefits:** Correct code generation.
Upstream bug report: llvm/llvm-project#141667
[sc-92105]