Conversation

@b-m-f b-m-f commented Jun 5, 2025

Changed the branch to make integrating upstream changes into my fork easier.

Still working on #668

Previous discussions: #966

@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch 2 times, most recently from 069cf4d to b38c03b on June 5, 2025 13:18
Author

b-m-f commented Jun 30, 2025

Hi @pdamme @philipportner,

I am currently facing an issue while trying to push down the log() function with the following code:

mlir::daphne::FillOp pushDownFillIntoEwLog(mlir::daphne::FillOp fillOp, mlir::daphne::EwLogOp op,
                                           mlir::Value scalar, mlir::PatternRewriter &rewriter) {
    auto fillValue = fillOp.getArg();
    auto height = fillOp.getNumRows();
    auto width = fillOp.getNumCols();
    mlir::daphne::EwLogOp newLog = rewriter.create<mlir::daphne::EwLogOp>(op.getLoc(), fillValue, scalar);
    return rewriter.create<mlir::daphne::FillOp>(op.getLoc(), op.getResult().getType(), newLog, width, height);
}

In order to get canonicalization to trigger I added:

let hasCanonicalizeMethod = 1;

to the EwLogOp, which also has the trait ValueTypeFromArgsFP -> the result should therefore always be floating-point, right?

Unfortunately my MLIR output with --explain=parsing_simplified on the script:

a = fill(8,3,3);
res = log(a,3);

print(res);

looks as follows:

max@daphne:~/daphne$ ./run-daphne.sh --explain=parsing_simplified workspace/log.daph
IR after parsing and some simplifications:
module {
  func.func @main() {
    %0 = "daphne.constant"() {value = 3 : index} : () -> index
    %1 = "daphne.constant"() {value = false} : () -> i1
    %2 = "daphne.constant"() {value = true} : () -> i1
    %3 = "daphne.constant"() {value = 8 : si64} : () -> si64
    %4 = "daphne.constant"() {value = 3 : si64} : () -> si64
    %5 = "daphne.ewLog"(%3, %4) : (si64, si64) -> si64
    %6 = "daphne.fill"(%5, %0, %0) : (si64, index, index) -> !daphne.Matrix<?x?xf64>
    %7 = "daphne.ewLog"(%3, %4) : (si64, si64) -> f64
    "daphne.print"(%6, %2, %1) : (!daphne.Matrix<?x?xf64>, i1, i1) -> ()
    "daphne.print"(%7, %2, %1) : (f64, i1, i1) -> ()
    "daphne.return"() : () -> ()
  }
}
[default E]: Lowering pipeline error.{}
PassManager failed module lowering, responsible IR written to module_fail.log.
RewriteToCallKernelOpPass failed with the following message [ no kernel for operation `fill` available for the required input types `(si64, index, index)` and output types `(!daphne.Matrix<?x?xf64>)` for backend `CPP`, registered kernels for this op:
KernelCatalog (195 ops, 2253 kernels)
- operation `fill` (7 kernels)
  - kernel `_fill__DenseMatrix_float__float__size_t__size_t`: (f32, index, index) -> (!daphne.Matrix<?x?xf32>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
  - kernel `_fill__DenseMatrix_double__double__size_t__size_t`: (f64, index, index) -> (!daphne.Matrix<?x?xf64>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
  - kernel `_fill__DenseMatrix_int64_t__int64_t__size_t__size_t`: (si64, index, index) -> (!daphne.Matrix<?x?xsi64>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
  - kernel `_fill__DenseMatrix_uint64_t__uint64_t__size_t__size_t`: (ui64, index, index) -> (!daphne.Matrix<?x?xui64>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
  - kernel `_fill__DenseMatrix_uint8_t__uint8_t__size_t__size_t`: (ui8, index, index) -> (!daphne.Matrix<?x?xui8>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
  - kernel `_fill__DenseMatrix_bool__bool__size_t__size_t`: (i1, index, index) -> (!daphne.Matrix<?x?xi1>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
  - kernel `_fill__DenseMatrix_string__char__size_t__size_t`: (!daphne.String, index, index) -> (!daphne.Matrix<?x?x!daphne.String>) for backend `CPP` (in `/home/max/daphne/bin/../lib/libAllKernels.so`)
 ]
    | Source file -> "/home/max/daphne/workspace/log.daph":2:6
    |
  2 | res = log(a,3);
    |       ^~~

I believe that %5 = "daphne.ewLog"(%3, %4) : (si64, si64) -> si64 is the culprit here (log(8,3) is floating-point).

Have I introduced this problem when creating my newLog operation? And shouldn't I be able to rely on the cast to floating-point?

All code is checked into the latest commit. Any help is appreciated; I would like to make sure I am not working under some big misunderstanding before implementing the remaining push-downs for unary and binary ops.

@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch from 6e49f8c to b6756e7 on June 30, 2025 18:32
Author

b-m-f commented Jul 1, 2025

Also, https://github.com/daphne-eu/daphne/blob/main/test/api/cli/distributed/DistributedTest.cpp fails if I mark RandMatrixOp as Pure.

Is there another safe way to remove the op without setting Pure? Without it, I currently end up with two calls to randMatrix after the push-down.

Collaborator

pdamme commented Jul 1, 2025

Yes, the result of EwLogOp should always be of a floating-point value type. More precisely, the fact that EwLogOp has the ValueTypeFromArgsFP trait means that type inference will always assign a floating-point value type to the result of this op. However, it is possible (but actually not intended) to create an EwLogOp with an integer value type, because the definition of EwLogOp in src/ir/daphneir/DaphneOps.td only requires a numerical value type (could also be an integer). Type inference only runs if the result type is unknown or has an unknown value type. Thus, when an EwLogOp is created with an integer result type, type inference will not change the result type anymore.

Your EwLogOp newLog has the result value type si64, because you don't specify a result type when you create the op (the result type would follow after the location). All elementwise binary ops have a custom builder (defined in EwBinaryOp in DaphneOps.td) that infers the result type from the argument types, if no result type is specified. However, this inference is wrong (as I realize now, and there is also a todo attached to it): it always uses the type of the lhs argument. I will fix that soon by removing this custom builder.

To solve the problem, you should either (a) specify an unknown type (mlir::daphne::UnknownType::get(...)) for the result of newLog (then the result type will be inferred later), or (b) specify the value type of the result type of op (CompilerUtils::getValueType(op.getResult().getType())) for the result type of newLog (then the result type is correct from the start).
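
For illustration, a minimal sketch of both options inside your pushDownFillIntoEwLog() (assuming the standard builder overload that takes the result type right after the location):

// Option (a): unknown result type; type inference assigns the floating-point
// value type in a later pass.
mlir::Type unknownTy = mlir::daphne::UnknownType::get(rewriter.getContext());
auto newLogA = rewriter.create<mlir::daphne::EwLogOp>(op.getLoc(), unknownTy, fillValue, scalar);

// Option (b): reuse the value type of the original op's result; the type is
// correct from the start.
mlir::Type resTy = CompilerUtils::getValueType(op.getResult().getType());
auto newLogB = rewriter.create<mlir::daphne::EwLogOp>(op.getLoc(), resTy, fillValue, scalar);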

Collaborator

pdamme commented Jul 1, 2025

Regarding the failing DistributedTest.cpp when RandMatrixOp has the Pure trait: Attaching Pure to RandMatrixOp is the right way to go. The reason why the distributed test fails is most likely that test/api/cli/distributed/distributed_4.daphne has two identical calls to rand(), which I assume get simplified to a single one by common-subexpression elimination (CSE) when RandMatrixOp is Pure. However, that should not crash the distributed runtime, so it's most likely a bug in the distributed runtime. For the time being, please simply change this test file, e.g., to the following:

m1 = rand(10, 10, 0.0, 20.0, 1.0, 1);
m2 = rand(10, 10, 0.0, 30.0, 1.0, 1);
print(m1 @ m2);

If we haven't solved the problem in the distributed runtime by the time we merge your PR, we will change this test file and create an issue for the problem.

Collaborator

philipportner commented Jul 2, 2025

I just took a brief look at the code, and I'm wondering if implementing this push-down optimization as a canonicalization on every operator that we want to push down, for every operator that can be pushed into, is the right place to do it. It seems quite hard to maintain.

@b-m-f @pdamme What do you think about an interface based approach?

def PushDownOpInterface : OpInterface<"PushDownOpInterface"> {
    let description = [{
        Interface for operations that can be pushed into source/other operations.
    }];

    let methods = [
        InterfaceMethod<
            "Check if operands are compatible for pushdown",
            "bool", "isScalarCompatible",
            (ins "Value":$operand),
            /*methodBody=*/[{}],
            /*defaultImplementation=*/[{
                // Check if operand is a scalar, same as is currently done in each canonicalize method
            }]
        >,

        InterfaceMethod<
            "Create scalar value",
            "Value", "createScalarValue",
            (ins "OpBuilder &":$builder,
                 "Location":$loc,
                 "Value":$lhs,
                 "Value":$rhs),
            /*defaultImplementation=*/[{
                // Create the actual computational operator
            }]
        >
    ];
}

With that interface, every op that wants to can implement it:

def AddOp : ... [PushDownOpInterface]> {
    let extraClassDeclaration = [{
        Value createScalarValue(OpBuilder &builder, Location loc,
                                Value lhs, Value rhs) {
            return builder.create<daphne::EwAddOp>(loc, lhs, rhs);
        }
    }];
}

def MulOp : ... [PushDownOpInterface]> {
    let extraClassDeclaration = [{
        Value createScalarValue(OpBuilder &builder, Location loc,
                                Value lhs, Value rhs) {
            return builder.create<daphne::EwMulOp>(loc, lhs, rhs);
        }
    }];
}

That can either be done in the TableGen definition directly or similar to our current interfaces in DAPHNE.

And with that we have a rewrite pass on, for example, the FillOp:

struct FillOpPushdownPattern : public OpRewritePattern<FillOp> {
    LogicalResult matchAndRewrite(FillOp fillOp,
            PatternRewriter &rewriter) const override {
        // fillOpArgument: the elementwise op adjacent to the fill (pseudocode).
        auto pushdownOp = dyn_cast<PushDownOpInterface>(fillOpArgument);
        if (!pushdownOp)
            return failure();

        // Call the interface methods to:
        // 1. Check if it's actually a scalar, like is currently happening in the canonicalize method.
        // 2. Call the `createScalarValue` method that creates the correct, pushed-down computation.

        auto newFill = rewriter.create<FillOp>(/*result of pushed-down computation...*/);
        rewriter.replaceOp(fillOpArgument, newFill);
        return success();
    }
};

Additionally, instead of having a FillOpPushdownPattern for each op that can be pushed into, another interface could be used to generalize across all those operations. In the end, we would have interface methods for source and push-down ops and a single rewrite pattern doing all the "work" of applying those.

New operators that can be pushed down would then only need to implement a single interface, and source operators that want to enable this would do it with the other interface.

I feel like this approach could eliminate all the duplicated code that's currently in the draft, and it makes it quite easy to extend push-down optimizations to other operations by implementing the corresponding interface. What do you think?

Author

b-m-f commented Jul 3, 2025

@philipportner,

I wanted to implement all push-down operations mentioned in the issue and then refactor from there, so that tests are in place etc. (duplication was easier for me, as I am still getting used to the framework).
I have played around with traits so far, but thanks for the interface-based solution.
Next week I should be able to tackle this 👍

Author

b-m-f commented Jul 3, 2025

All pushdown combinations mentioned in the issue are now implemented in the verbose style.

Here are a few notes for the next Zoom session:

  • Random + Abs: only makes sense on positive values, in which case the abs operation can simply be removed. Is there any way to check the actual values in the canonicalization step? Or should this combination simply be ignored?
  • runEwUnary is failing now, as the behaviour of rand changes. Should I simply update the test case with the new result?
  • Random + Sqrt: sqrt on negative values will fail through the normal checks. No additional checks for the validity of statements are needed, correct?
  • Seq with Div: there seems to be an issue on main. seq(1,5,2) / 2 results in 0,1,2. Some type issue between ints and floats. How should this be handled?

Author

b-m-f commented Jul 7, 2025

@philipportner,

Unfortunately, I cannot seem to make any progress with the interface approach.

I have added a new commit with the changes I assume are necessary to make EwAdd expect the interface methods to be implemented, but I am hit with a different error.

The whole trace is very long, but I believe this is the gist:

In file included from /home/max/daphne/src/ir/daphneir/Daphne.h:112:
/home/max/daphne/build/src/ir/daphneir/DaphneOps.h.inc: At global scope:
/home/max/daphne/build/src/ir/daphneir/DaphneOps.h.inc:9993:391: error: ‘PushDownOpInterface’ was not declared in this scope; did you mean ‘FunctionOpInterface’?
 9993 | class EwAddOp : public ::mlir::Op<EwAddOp, ::mlir::OpTrait::ZeroRegions, ::mlir::OpTrait::OneResult, ::mlir::OpTrait::OneTypedResult<::mlir::Type>::Impl, ::mlir::OpTrait::ZeroSuccessors, ::mlir::OpTrait::NOperands<2>::Impl, ::mlir::OpTrait::OpInvariants, ::mlir::OpTrait::ValueTypeFromArgs, ::mlir::OpTrait::CastArgsToResType, ::mlir::OpTrait::EwSparseIfBoth, ::mlir::OpTrait::CUDASupport, PushDownOpInterface::Trait, ::mlir::OpTrait::DataTypeFromArgs, Distributable::Trait, Vectorizable::Trait, ::mlir::OpTrait::ShapeEwBinary, ::mlir::MemoryEffectOpInterface::Trait> {
      |                                                                                                                                                                                                                                                                                                                                                                                                       ^~~~~~~~~~~~~~~~~~~
      | FunctionOpInterface

https://mlir.llvm.org/docs/Interfaces/#interface-methods states:

    As detailed in Traits, given that each operation implementing this
    interface will also add the interface trait, the methods on this
    interface are inherited by the derived operation.

Do I also need to generate a new trait in the xxxxInterface.td file?
The other interfaces in the daphneir folder do not seem to do so, as far as I can tell.
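
For reference, this is the setup I would expect based on the usual MLIR interface pattern (a sketch only; the generated file name below is hypothetical):

// Sketch of my assumption (not the actual DAPHNE setup): the TableGen-generated
// interface declaration must be visible before DaphneOps.h.inc is included,
// e.g. somewhere in src/ir/daphneir/Daphne.h:
#include "ir/daphneir/DaphnePushDownOpInterface.h.inc" // hypothetical generated header
// ...
#include "ir/daphneir/DaphneOps.h.inc" // references PushDownOpInterface::Trait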

Edit: Not sure if this is fine - since this is an exercise for AMLS - but maybe you can integrate FillOp for Add onto this branch (basically the example you gave) and I'll take it from there for the rest?

@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch 2 times, most recently from 316895c to b2897e7 on July 12, 2025 18:16
@b-m-f b-m-f changed the title from "Draft: # 668: Push-down of arithmetics before source operations" to "668: Push-down of arithmetics before source operations" Jul 13, 2025
Author

b-m-f commented Jul 13, 2025

@philipportner I was not able to get it running using interfaces.

Instead, I discussed with @pdamme using C++ template functions as well as traits to get rid of all the duplication.

Traits

  • PushDown
  • PushDownLinear
  • PushDownWithIntervalUpdate

This is now complete and pushed here for review.

The individual operations still have to enable canonicalization with hasCanonicalizeMethod = 1.
This could be avoided with a dedicated compiler pass -> out of scope for this ticket.

@b-m-f b-m-f marked this pull request as ready for review July 15, 2025 14:48
@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch from f4106ca to 60b6d94 on July 15, 2025 14:49
Collaborator

pdamme commented Jul 18, 2025

All elementwise binary ops have a custom builder (defined in EwBinaryOp in DaphneOps.td) that infers the result type from the argument types, if no result type is specified. However, this inference is wrong (as I realize now, and there is also a todo attached to it): it always uses the type of the lhs argument. I will fix that soon by removing this custom builder.

Just as a quick follow-up: I fixed the problem with the incorrect type inference in the custom builder of the elementwise binary ops today in b55d1a8.

@pdamme pdamme self-requested a review July 22, 2025 18:38
@pdamme pdamme added the labels "student project" (suitable for a bachelor/master student's programming project) and "AMLS summer 2025" (student project for the Architecture of ML Systems lecture at TU Berlin, summer 2025) Jul 22, 2025
Collaborator

@pdamme pdamme left a comment


Thanks for this contribution, @b-m-f! Simplification rewrites that avoid unnecessary matrix intermediate results by pushing down certain operations (especially elementwise unary and binary functions) into source ops like FillOp, RandMatrixOp, and SeqOp are very welcome. Your overall approach of integrating these rewrites as canonicalizations of the elementwise ops while using some custom traits and delegating the work to one helper function per arity of elementwise operations is a reasonable trade-off to avoid code duplication for now. (We could try to refactor it to use MLIR interfaces later.) Overall, your code already looks good to me, but there are a few pitfalls in the details and, thus, a few points need to be addressed before we can merge this PR:

Required changes: (must be addressed before we can merge this PR)

  1. Please document each push-down-related trait with a short description in src/ir/daphneir/DaphnePushDownTraits.h. It should become clear what these traits mean, such that other developers know when to attach them to an operation. You can take inspiration from src/ir/daphneir/DaphneInferTypesOpInterface.h.
  2. In pushDownUnary(), the case for RandMatrixOp followed by EwAbsOp has some issues that need to be fixed:
    1. It uses CompilerUtils::isConstant(), but doesn't check if the SSA values are really constant. The first element of the returned pair must be examined; the second element must only be used if the first element is true (see the sketch after this list).
    2. When min and max are both non-negative, there is no need to create a new RandMatrixOp; you can just replace the EwAbsOp by the existing RandMatrixOp.
    3. I think the case for min and max both negative should check if maxValue <= 0 (not minValue <= 0).
    4. Please make sure that the rewrite is applied when min and max are the same, e.g., abs(rand(2, 2, -77, -77, 0.8, -1)) (and add test cases for it).
  3. In pushDownBinary(), the case for RandMatrixOp: EwMulOp and EwDivOp must not be pushed before RandMatrixOp for integral value types. For instance, rand(1, 1, 1, 3, 1.0, -1) * 100 should only return values in {100, 200, 300}, but currently, it can return any value in [100, 300] (e.g., 123). For floating-point value types, the rewrite should be fine, though.
  4. In pushDownBinary(), the case for SeqOp: Shouldn't EwDivOp also have the PushDownWithIntervalUpdate trait? For instance, seq(2, 6, 2) / 2 should return [1, 2, 3], but currently it returns [1, 3] because the increment is not updated. Please fix and add a test case.
  5. Please fix this bug (and add test cases): rand(2, 3, 100, 200, 1.0, -1) + 10 should return a 2x3 matrix, but returns a 3x2 matrix.
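
A minimal sketch of the check from point 2.1 (the template parameter and the accessor names are assumptions, not the actual signatures):

// Sketch only: examine both elements of the pair returned by isConstant().
// The value type <double> and the accessors getMin()/getMax() are assumptions.
auto [minIsConst, minVal] = CompilerUtils::isConstant<double>(randOp.getMin());
auto [maxIsConst, maxVal] = CompilerUtils::isConstant<double>(randOp.getMax());
if (!minIsConst || !maxIsConst)
    return mlir::failure(); // the second element is only meaningful if the first is true
if (minVal >= 0 && maxVal >= 0)
    // abs is a no-op on non-negative values: reuse the existing RandMatrixOp.
    rewriter.replaceOp(absOp, randOp.getResult());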

Optional changes: (recommended, but not critical for merging)

  1. Do we really need the PushDown trait? The fact that an op supports push-down in some way is already encoded in the canonicalization function of the op calling tryPushDown(). I think removing this trait would make the code simpler overall (fewer checks etc., fewer things one could forget when supporting more ops).
  2. Do we really need tryPushDown()? It seems to me like an unnecessary indirection. Instead, we could directly call pushDownUnary() or pushDownBinary() in the concrete ops' canonicalization functions. To support push-down for another elementwise unary/binary function, it would currently not suffice to add a canonicalization function that calls tryPushDown(); one also has to add the op in tryPushDown(), which could easily be forgotten.
  3. In pushDownUnary(), the case for RandMatrixOp treats ops with the trait PushDownLinear in a special way. However, this trait is only attached to elementwise binary ops in DaphneOps.td. One consequence is that this code path doesn't get tested. I suggest adding an example of an elementwise unary op that behaves linearly, such as EwMinusOp as in -rand(...).
  4. Feel free to use rewriter.replaceOpWithNewOp() instead of rewriter.create() followed by rewriter.replaceOp(). It's shorter (see the sketch after this list).
  5. Should the trait PushDownIntervalUpdate be renamed to PushDownIncrementUpdate? As far as I understand, this trait is about updating the increment argument of SeqOp.
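
For point 4, a sketch of the shorter form (argument lists abbreviated to match the ops in this PR):

// Before: create the new op, then replace the old one (two steps).
auto newFill = rewriter.create<FillOp>(loc, resType, newScalar, numRows, numCols);
rewriter.replaceOp(op, newFill.getResult());

// After: replaceOpWithNewOp() creates the FillOp and replaces `op` in one step.
rewriter.replaceOpWithNewOp<FillOp>(op, resType, newScalar, numRows, numCols);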

I hope these comments are helpful for improving this PR. Feel free to share your thoughts.

Author

b-m-f commented Jul 24, 2025

Thanks for the review, @pdamme!
I'll try to address all the mentioned suggestions this weekend.

Off the top of my head, I remember there being an issue with EwDivOp and SeqOp if the division does not result in an integer, but I will double-check and can also try a check using mod denominator == 0 - something like the sketch below.
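
// Placeholder sketch: only push EwDivOp before SeqOp when the constant-folded
// increment divides evenly by the divisor, so the integer sequence stays exact.
// incValue and divValue are placeholder names, not from the actual code.
if (resultIsIntegral && incValue % divValue != 0)
    return mlir::failure(); // push-down would change the sequence; bail out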

@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch from 1893830 to 77ad54e on July 26, 2025 10:40
Author

b-m-f commented Jul 26, 2025

Hi @pdamme,

I have incorporated all the suggestions into the PR.

There is one inconsistency with rand + EwMinus in the PushDownLinear + unary branch, though.

Filters: [operations]

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
run_tests is a Catch v2.13.8 host application.
Run with -? for options

-------------------------------------------------------------------------------
randEwMinus
  randEwMinus_1.daphne
-------------------------------------------------------------------------------
/home/max/daphne/test/api/cli/operations/CanonicalizationPushDownTest.cpp:57
...............................................................................

/home/max/daphne/test/api/cli/Utils.h:350: FAILED:
  CHECK( out.str() == exp )
with expansion:
  "DenseMatrix(1x3, int64_t)
  -6 -6 -3
  "
  ==
  "DenseMatrix(1x3, int64_t)
  -3 -3 -6
  "

-------------------------------------------------------------------------------
randEwMinus
  randEwMinus_2.daphne
-------------------------------------------------------------------------------
/home/max/daphne/test/api/cli/operations/CanonicalizationPushDownTest.cpp:57
...............................................................................

/home/max/daphne/test/api/cli/Utils.h:350: FAILED:
  CHECK( out.str() == exp )
with expansion:
  "DenseMatrix(1x3, int64_t)
  3 3 6
  "
  ==
  "DenseMatrix(1x3, int64_t)
  6 6 3
  "

-------------------------------------------------------------------------------
randEwMinus
  randEwMinus_3.daphne
-------------------------------------------------------------------------------
/home/max/daphne/test/api/cli/operations/CanonicalizationPushDownTest.cpp:57
...............................................................................

/home/max/daphne/test/api/cli/Utils.h:350: FAILED:
  CHECK( out.str() == exp )
with expansion:
  "DenseMatrix(1x3, int64_t)
  -6 -4 1
  "
  ==
  "DenseMatrix(1x3, int64_t)
  3 1 -4
  "

I have left the failing tests in for now instead of accepting the new results, in order to discuss whether they should be accepted or whether I should just remove the PushDownLinear branch from the tryPushDownUnary function.

The results are valid even though they differ from main. This is because min and max are swapped during push-down, which in turn produces different values, even with the same seed.

To me this still seems fine, as the results are in the correct range, but for exact consistency with the non-pushed-down version this might not be desired.

Just let me know and I will either:

  • check in the new test results to get the tests passing,
    or
  • remove the PushDownLinear + unary branch from the code.

@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch 2 times, most recently from b3f09a1 to 3647c5d on July 26, 2025 11:01
Author

b-m-f commented Aug 6, 2025

I see that the build is failing, which is unexpected and seems to be related to
"[668]: replace create and replaceOp with replaceOpWithNewOp()".
I will be able to check this on the 18th of August.

Maybe I forgot to push a change? 🤔

bob@sodawa.com added 2 commits August 18, 2025 17:04
Maximilian Ehlers added 26 commits August 18, 2025 17:04
@b-m-f b-m-f force-pushed the 668-pushdown-arithmetics branch from 3647c5d to 58f19f9 on August 18, 2025 17:05
Author

b-m-f commented Aug 18, 2025

Hi @pdamme,

I am confused as to why the build is failing in the CI.

My steps:

  • ./build.sh --no-deps --clean ✅
  • ./test ✅
  • rebase onto main from this upstream repo, no conflicts
  • ./build.sh --no-deps --clean ✅
  • ./test ❌

The working commit this branch builds on is c6f7563.

Do you have an idea which change since then could interfere with EwAddOp?

Maybe b55d1a8?

Unfortunately, I cannot continue on this today (I just came back to Berlin and many things are in the backlog), but I'll try to allocate an hour per day this week.

Author

b-m-f commented Aug 19, 2025

My assumption from the previous comment seems to have been correct. I updated the calls to create according to what was changed in that commit - see the sketch below.
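
The kind of change I made, roughly (a sketch; the exact builder overloads in b55d1a8 may differ):

// Before: relied on the elementwise ops' custom builder to pick the result type.
auto newAdd = rewriter.create<mlir::daphne::EwAddOp>(loc, lhs, rhs);

// After: pass the result type explicitly; here unknown, so type inference
// assigns the correct type later.
mlir::Type resTy = mlir::daphne::UnknownType::get(rewriter.getContext());
auto newAddFixed = rewriter.create<mlir::daphne::EwAddOp>(loc, resTy, lhs, rhs);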

The questions about the tests still remain open before the merge.

Please let me know whether this "change" in behavior is acceptable.

