@kasuga-fj
Contributor

@kasuga-fj kasuga-fj commented Oct 7, 2025

The monotonicity definition states its domain as follows:

/// The property of monotonicity of a SCEV. To define the monotonicity, assume
/// a SCEV defined within N-nested loops. Let i_k denote the iteration number
/// of the k-th loop. Then we can regard the SCEV as an N-ary function:
///
///   F(i_1, i_2, ..., i_N)
///
/// The domain of i_k is the closed range [0, BTC_k], where BTC_k is the
/// backedge-taken count of the k-th loop

The current monotonicity check implementation doesn't match this definition because:

  • Recursively checking the nowrap properties of addrecs is not sufficient to ensure monotonicity over the entire domain. The nowrap property may hold on certain paths but not for all possible iteration combinations of the nested loops.
  • It doesn't consider cases where exact backedge-taken counts are unknown.

Therefore we need to fix either the definition or the implementation. This patch adds the test cases that demonstrate this mismatch.


Copy link
Contributor

@amehsan amehsan left a comment


The examples here seem very similar to the examples we have discussed in #159846

I still don't see the relevance of loop guards here.

Contributor

@amehsan amehsan left a comment


While this is still a draft PR, I want to make sure my opinion on this is clear. I believe we still need to see a justification, and also details of how loop guards are relevant.

@kasuga-fj kasuga-fj force-pushed the users/kasuga-fj/da-monotonic-check-0 branch from 611229f to 9bfa9d5 on October 9, 2025 10:51
@kasuga-fj kasuga-fj force-pushed the users/kasuga-fj/da-monotonic-check-1 branch from 0b8c29b to 5eeaf55 on October 9, 2025 10:51
Base automatically changed from users/kasuga-fj/da-monotonic-check-0 to main on October 21, 2025 09:11
@kasuga-fj kasuga-fj force-pushed the users/kasuga-fj/da-monotonic-check-1 branch from 5eeaf55 to 5d7ebe3 on November 7, 2025 12:12
@kasuga-fj kasuga-fj force-pushed the users/kasuga-fj/da-monotonic-check-1 branch from 5d7ebe3 to 774a239 on November 7, 2025 12:18
Contributor Author

@kasuga-fj kasuga-fj left a comment


I still think we need to handle loop guards properly in some way, and I believe the test case @nsw_under_loop_guard0 illustrates why. What I'm trying to say is:

  • Monotonicity reasoning based on the nowrap property in addrecs may only be valid under loop guards
  • Each dependence testing function in DA basically assumes monotonicity across the entire iteration space. E.g., it evaluates an addrec at a BTC and treats the result as the minimum or maximum value of the addrec.

That is, there is a gap between the information provided by the monotonicity check and the assumptions made by the dependence testing functions. To bridge this gap, we may need to either:

  • Pessimistically reject cases where loop guards are present, or
  • Prove that the gap does not lead to unsoundness in dependence testing.

It's also true that I haven't found any cases where this gap actually causes problems, so it might be possible to prove that the gap is harmless.

@Meinersbur What do you think?

Comment on lines 5 to 21
; for (i = 0; i < INT64_MAX - 1; i++)
; if (i < 1000)
; for (j = 0; j < 2000; j++)
; a[i + j] = 0;
;
; FIXME: This is not monotonic. The nsw flag is valid under
; the condition i < 1000, not for all i.
define void @nsw_under_loop_guard0(ptr %a) {
; CHECK-LABEL: 'nsw_under_loop_guard0'
; CHECK-NEXT: Monotonicity check:
; CHECK-NEXT: Inst: store i8 0, ptr %idx, align 1
; CHECK-NEXT: Expr: {{\{\{}}0,+,1}<nuw><nsw><%loop.i.header>,+,1}<nuw><nsw><%loop.j>
; CHECK-NEXT: Monotonicity: MultivariateSignedMonotonic
; CHECK-EMPTY:
; CHECK-NEXT: Src: store i8 0, ptr %idx, align 1 --> Dst: store i8 0, ptr %idx, align 1
; CHECK-NEXT: da analyze - output [* *]!
;
Contributor Author


i + j is monotonic under the condition i < 1000, but obviously not over the entire iteration space (0 <= i < INT64_MAX - 1)

@kasuga-fj kasuga-fj requested a review from Meinersbur November 7, 2025 12:45
@Meinersbur
Member

Meinersbur commented Nov 7, 2025

Some background on how Polly handles this:
Each BasicBlock has a "domain", which is the set of possible values of i, j, n, .... When there is a condition such as i < 1000, the domain includes the information i < 1000, which can then be used to rule out overflow, etc.

DA doesn't have an equivalent of execution domains; it has to assume Src/Dst may be executed for any i. I don't think we need to bail out only because of the simple presence of conditionals/loop guards; it just means we cannot prove monotonicity in this case, which may then cause it to not be analyzable. But we would have no problems with loop guards such as if (i >= 1) or if (debug).

When I said "has to assume Src/Dst may be executed for any i", this is not entirely true. DependenceInfo::collectUpperBound can infer an upper bound from the loop's BackedgeTakenCount. I think it could also collect the branch conditions in the acyclic control flow to reach the Src/Dst instructions. This means that along with a SCEV, one also needs to pass a context BB for this kind of analysis, since the restricted upper bound only counts within the if-condition. This kind of context information is already used by getSCEVAtScope, but it passes only the loop and does not care about the particular BB.

Although I wonder whether it is worth the effort. You get more complicated upper-bound SCEVs such as min(n, 10000). At least in the example case, where n == INT64_MAX, you get a constant 999 within the BB, and INT64_MAX - 1 outside of it.

@amehsan
Contributor

amehsan commented Nov 7, 2025

I still think we need to handle loop guards properly in some way, and I believe the test case @nsw_under_loop_guard0 illustrates why.

I'd like to look into this in detail, but I may not be able to do it today. Will get back to you on this asap.

@kasuga-fj
Contributor Author

Some background on how Polly handles this: Each BasicBlock has a "domain", which is the set of possible values of i, j, n, .... When there is a condition such as i < 1000, the domain includes the information i < 1000, which can then be used to rule out overflow, etc.

Thanks for the details. I also think we don't need to bail out in all cases where loop guards exist. However, I believe some checks are necessary to justify reasoning about monotonicity based on the nowrap properties of addrecs, and I haven't found a good way to detect only conditions like if (i < 1000) while ignoring ones like if (1 <= i) or if (debug). I'll take a look at Polly's implementation.

When I said "has to assume Src/Dst may be executed for any i", this is not entirely true. DependenceInfo::collectUpperBound can infer an upper bound from the loop's BackedgeTakenCount. I think it could also collect the branch conditions in the acyclic control flow to reach the Src/Dst instructions. This means that along with a SCEV, one also needs to pass a context BB for this kind of analysis, since the restricted upper bound only counts within the if-condition. This kind of context information is already used by getSCEVAtScope, but it passes only the loop and does not care about the particular BB.

I think we can make DA more precise by taking context such as branch conditions into account. For example, simply replacing isKnownPredicate with isKnownPredicateAt may improve the result in some cases. However, I'd like to work on such improvements after fixing the existing correctness issues.

@amehsan
Contributor

amehsan commented Nov 7, 2025

and I believe the test case @nsw_under_loop_guard0 illustrates why. What I'm trying to say is:

This is not different from the examples that we have discussed before. I really don't understand why you keep going back to this issue.

All dependence testing, monotonicity proofs, etc. are only relevant when the loop is executed. If the guard condition is not satisfied, the loop is not executed. For those values of i and j, even if your result is incorrect, it doesn't matter.

Your concern about the correctness of DA when a loop guard exists is 100% unjustified. If this could result in a bug, it would be basically impossible to write a correct DA. I gave you an example in the previous discussion. Your nsw flags may be correct under assumptions that you have practically no way to discover.

@amehsan
Contributor

amehsan commented Nov 7, 2025

DA doesn't have an equivalent of execution domains; it has to assume Src/Dst may be executed for any i. I don't think we need to bail out only because of the simple presence of conditionals/loop guards; it just means we cannot prove monotonicity in this case, which may then cause it to not be analyzable

@Meinersbur consider a loop like this

for (i = 0; i < n; i++) {
  if (i < 3000) {
    for (j = .....) {
    }
    // some code with no control flow
    for (k = .....) {
    }
  }
}

I may have optimizations that infer nsw flags for computations in the k loop by looking at the if (i < 3000) condition. If you think monotonicity cannot be proven in the presence of loop guards, then monotonicity cannot be proven in this case either. Should we check for the existence of a condition like this, and then bail out and say we cannot prove monotonicity when we encounter one?

@kasuga-fj
Contributor Author

kasuga-fj commented Nov 7, 2025

All dependence testing, monotonicity proofs, etc. are only relevant when the loop is executed. If the guard condition is not satisfied, the loop is not executed. For those values of i and j, even if your result is incorrect, it doesn't matter.

It looks to me that you are conflating "what the dependence testing functions assume" with "when the result should be correct". It is true that it's sufficient for the result to be correct only when it is actually executed, but I'm talking about the preconditions that the testing functions rely on. For example, for an addrec {c,+,a} (i.e., a*i + c), many testing functions (perhaps implicitly) assume that it takes its maximum value at the last iteration (i = BTC) if a is non-negative. This assumption holds when the addrec is monotonic over the entire iteration space. However, as the example I gave shows, this is not necessarily the case when loop guards exist.

@amehsan
Contributor

amehsan commented Nov 8, 2025

It looks to me that you are conflating "what dependence testing functions assume" with "when the result should be correct". It would be true that it’s sufficient for the result to be correct only when it is actually executed, but I’m talking about the preconditions which the testing functions rely on. For example, as for an addrec {c,+,a} (a*i + c), many testing functions (perhaps implicitly) assume that it takes its maximum value at the last iteration (i = BTC) if a is non-negative. This assumption holds when the addrec is monotonic over the entire iteration space. However, as the example I gave shows, this is not necessarily the case when some loop guards exist.

It is not clear to me where we make this assumption and what the consequences of that are. But before discussing that, I have another question: I believe you are talking about this example from your test case:

; for (i = 0; i < INT64_MAX - 1; i++)
;   if (i < 1000)
;     for (j = 0; j < 2000; j++)
;       a[i + j] = 0;

and you are saying that for DA to be correct we need to check that a loop guard exists (or maybe check what the loop guard is) and bail out. Now I can change your example to the following:

for (i = 0; i < INT64_MAX - 1; i++) {
  if (i < 1000) {
    for (k = 0; k < 5000; k++) {
      // do something in the loop
    }
    // some complex code possibly involving control flow
    for (j = 0; j < 2000; j++)
      a[i + j] = 0;
  } // if (i < 1000)
}

Your comment is applicable to this example as well. Here the j loop has no guard. How do you want to handle this case? Checking the guard doesn't seem to work here. Please also answer the same question about the following example:

for (i = 0; i < INT64_MAX - 1; i++) {
  for (k = 0; k < 5000; k++) {
    if (i > 1000) goto X; // X is a label outside the loop nest.
  }
  // some complex code possibly involving control flow
  for (j = 0; j < 2000; j++)
    a[i + j] = 0;
}

@Meinersbur
Member

I think we can make DA more precise by taking context such as branch conditions into account. For example, simply replacing isKnownPredicate with isKnownPredicateAt may improve the result in some cases. However, I'd like to work on such improvements after fixing the existing correctness issues.

Wow, I didn't know about isKnownPredicateAt. It is indeed what I had in mind. For monotonicity, we would need an isMonotonicAt function that takes a CtxI.

@Meinersbur
Member

DA doesn't have an equivalent of execution domains; it has to assume Src/Dst may be executed for any i. I don't think we need to bail out only because of the simple presence of conditionals/loop guards; it just means we cannot prove monotonicity in this case, which may then cause it to not be analyzable

@Meinersbur consider a loop like this

for (i = 0; i < n; i++) {
  if (i < 3000) {
    for (j = .....) {
    }
    // some code with no control flow
    for (k = .....) {
    }
  }
}

I may have optimizations that infer nsw flags for computations in the k loop by looking at the if (i < 3000) condition. If you think monotonicity cannot be proven in the presence of loop guards, then monotonicity cannot be proven in this case either. Should we check for the existence of a condition like this, and then bail out and say we cannot prove monotonicity when we encounter one?

Two cases:

  1. The monotonicity check is control-flow insensitive. Then we cannot use i < 3000 to prove monotonicity, because we do not know where in the CFG we are: before or after the br? If after the br, which branch did we take? DA currently only considers cyclic control flow. Loop guards are acyclic control flow, which DA currently does not handle.

  2. The monotonicity check is made control-flow sensitive. In addition to the SCEV, we pass where in the CFG the SCEV is being evaluated. Only if this information is passed along do we know whether i < 3000 has been checked at that location.

The presence of a loop guard does not guarantee that the property holds globally everywhere. For instance:

for (i = 0; i < n; i++) {
  if (i < 3000) {
    for (j = .....) {
      ... j + i ...;
    }
  }
  for (k = .....) {
    ... k + i ...;
  }
}

Obviously we cannot assume that i < 3000 when k + i is evaluated.

@amehsan
Contributor

amehsan commented Nov 10, 2025

Two cases:

  1. The monotonicity check is control-flow insensitive. Then we cannot use i < 3000 to prove monotonicity, because we do not know where in the CFG we are: before or after the br? If after the br, which branch did we take? DA currently only considers cyclic control flow. Loop guards are acyclic control flow, which DA currently does not handle.
  2. The monotonicity check is made control-flow sensitive. In addition to the SCEV, we pass where in the CFG the SCEV is being evaluated. Only if this information is passed along do we know whether i < 3000 has been checked at that location.

I believe this response misses a point in my objection, however right now I am doing some hands-on work on the example provided. Let me finish that and I will post an update here. I believe that will be a more constructive discussion.

@kasuga-fj
Contributor Author

I think we can make DA more precise by taking context such as branch conditions into account. For example, simply replacing isKnownPredicate with isKnownPredicateAt may improve the result in some cases. However, I'd like to work on such improvements after fixing the existing correctness issues.

Wow, I didn't know about isKnownPredicateAt. It is indeed what I had in mind. For monotonicity, we would need a isMonotonicAt function that takes a CtxI.

I think isMonotonicAt is generally insufficient. I crafted a test case.

; stride = INT64_MAX; 
; for (i = 0; i < 10; i++)
;   if (i % 2 == 0) {
;     A[stride*i + 100] = 1;  // A[100], A[98], A[96], ...
;     A[stride*i + 102] = 2;  // A[102], A[100], A[98], ...
;   }
;
define void @f(ptr %A) {
entry:
  br label %loop.header

loop.header:
  %i = phi i64 [ 0, %entry ], [ %i.inc, %loop.latch ]
  %offset.0 = phi i64 [ 100, %entry ], [ %offset.0.next, %loop.latch ]
  %offset.1 = phi i64 [ 102, %entry ], [ %offset.1.next, %loop.latch ]
  %odd = and i64 %i, 1
  %cond = icmp ne i64 %odd, 1
  br i1 %cond, label %if.then, label %loop.latch

if.then:
  %gep.0 = getelementptr inbounds i8, ptr %A, i64 %offset.0
  %gep.1 = getelementptr inbounds i8, ptr %A, i64 %offset.1
  store i8 1, ptr %gep.0
  store i8 2, ptr %gep.1
  br label %loop.latch

loop.latch:
  %i.inc = add nuw nsw i64 %i, 1
  %offset.0.next = add i64 %offset.0, 9223372036854775807
  %offset.1.next = add i64 %offset.1, 9223372036854775807
  %ec = icmp eq i64 %i.inc, 10
  br i1 %ec, label %exit, label %loop.header

exit:
  ret void
}

Both offsets are monotonic in the execution domain, and Strong SIV misses the dependency (godbolt). The culprit seems to be here. I think something strange happens if the execution domain is not consecutive.

@kasuga-fj
Contributor Author

It is not clear to me where we make this assumption and what the consequences of that are. But before discussing that, I have another question: I believe you are talking about this example from your test case:

; for (i = 0; i < INT64_MAX - 1; i++)
;   if (i < 1000)
;     for (j = 0; j < 2000; j++)
;       a[i + j] = 0;

and you are saying that for DA to be correct we need to check that a loop guard exists (or maybe check what the loop guard is) and bail out. Now I can change your example to the following:

for (i = 0; i < INT64_MAX - 1; i++) {
  if (i < 1000) {
    for (k = 0; k < 5000; k++) {
      // do something in the loop
    }
    // some complex code possibly involving control flow
    for (j = 0; j < 2000; j++)
      a[i + j] = 0;
  } // if (i < 1000)
}

Your comment is applicable to this example as well. Here the j loop has no guard. How do you want to handle this case? Checking the guard doesn't seem to work here. Please also answer the same question about the following example:

for (i = 0; i < INT64_MAX - 1; i++) {
  for (k = 0; k < 5000; k++) {
    if (i > 1000) goto X; // X is a label outside the loop nest.
  }
  // some complex code possibly involving control flow
  for (j = 0; j < 2000; j++)
    a[i + j] = 0;
}

I think we can ensure it by checking (post-)dominance between the headers and/or latches of the outer loop and the inner loop. It might also be worth noting that ScalarEvolution::LoopGuards already implements functionality that collects the conditions required to enter a loop.

@amehsan
Contributor

amehsan commented Nov 11, 2025

It is not clear to me where we make this assumption and what the consequences of that are. But before discussing that, I have another question: I believe you are talking about this example from your test case:

; for (i = 0; i < INT64_MAX - 1; i++)
;   if (i < 1000)
;     for (j = 0; j < 2000; j++)
;       a[i + j] = 0;

and you are saying that for DA to be correct we need to check that a loop guard exists (or maybe check what the loop guard is) and bail out. Now I can change your example to the following:

for (i = 0; i < INT64_MAX - 1; i++) {
  if (i < 1000) {
    for (k = 0; k < 5000; k++) {
      // do something in the loop
    }
    // some complex code possibly involving control flow
    for (j = 0; j < 2000; j++)
      a[i + j] = 0;
  } // if (i < 1000)
}

Your comment is applicable to this example as well. Here the j loop has no guard. How do you want to handle this case? Checking the guard doesn't seem to work here. Please also answer the same question about the following example:

for (i = 0; i < INT64_MAX - 1; i++) {
  for (k = 0; k < 5000; k++) {
    if (i > 1000) goto X; // X is a label outside the loop nest.
  }
  // some complex code possibly involving control flow
  for (j = 0; j < 2000; j++)
    a[i + j] = 0;
}

I think we can ensure it by checking (post-)dominance between the headers and/or latches of the outer loop and the inner loop. It might also be worth noting that ScalarEvolution::LoopGuards already implements functionality that collects the conditions required to enter a loop.

We need something like this. However, there are some subtleties to consider. Dominance alone does not work, because the inner loop might be for (int i = 0; i < n; i++), and for n = 0 the inner loop is not executed (unless the loop guard is hoisted by loop unswitch). We need to make sure the case of early exits is also considered.

But before getting into the details of this, I believe we need to think about a couple of higher-level issues:

(1) I wonder if there is a proof of correctness of these tests in the literature that we can look into to figure out what the assumptions are for each one? It is much better if we can gather these issues in a systematic way. If not, it might be even better to try to write down the proofs, and in the process we can find out what we need to assume. That is easier than discovering these bugs one by one.

(2) Different tests might have different assumptions. We need to be careful about this. One way to handle this could be to require loops to be in a stricter canonical form (perfect loop nests? maybe with no early exit in the innermost loop?) before applying any loop optimizations. However, this would be a large amount of work, and at this point I don't know whether we have a justification for it. But if this is something that we are going to be forced to do down the road, maybe we need to pause now and think about it.

I believe our best next step would be to start gathering the list of assumptions required for each of the tests. (Unless we have a reason to believe missing assumptions are very rare and not something to generally worry about.)

One last thing: I still haven't done a careful debug of the example provided. I assume you have done this and there is no other issue (such as overflow in calculation, or other problems in DA computations).

@amehsan
Contributor

amehsan commented Nov 11, 2025

(1) I wonder if there is a proof of correctness of these tests in the literature that we can look into and figure out what are the assumptions for each one?

This book may have the proofs: Dependence Analysis by Utpal Banerjee.

But first I will look more for online resources.

@amehsan
Contributor

amehsan commented Nov 11, 2025

I think something strange happens if the execution domain is not consecutive.

Do you mean the issue is related to the conditional statement in the loop? The problem is reproducible without that, too.

StrongSIV is a relatively simple test. I think we should be able to figure out the proof of this one and what the assumptions are.

(EDIT: BTW, this subscript doesn't pass your monotonicity check. But I think what you mean is that if we look at the values of the subscript, there is no wrapping and the values are constantly increasing/decreasing. Please let me know if I misunderstood something.)

@Meinersbur
Member

I think isMonotonicAt is generally insufficient. I crafted a test case.

; stride = INT64_MAX; 
; for (i = 0; i < 10; i++)
;   if (i % 2 == 0) {
;     A[stride*i + 100] = 1;  // A[100], A[98], A[96], ...
;     A[stride*i + 102] = 2;  // A[102], A[100], A[98], ...
;   }
;

Both offsets are monotonic in the execution domain, and Strong SIV misses the dependency (godbolt). The culprit seems to be here. I think something strange happens if the execution domain is not consecutive.

I don't think execution context adds anything to this example. It is monotonic, and has dependences, even without the conditional:

for (i = 0; i < 10; i++) {
     A[stride*i + 100] = 1;  // A[100], A[99], A[98], ...
     A[stride*i + 102] = 2;  // A[102], A[101], A[100], ...
}

DA does not find the dependency in either case. It is just another bug in DA.

Did you mean A[stride*i + 101]? There wouldn't be a dependency in that case, but I don't think showing monotonicity is the difficult part here.

@kasuga-fj
Contributor Author

If I understand correctly, [L_1, U_1] is a subset of [0, BTC_1]. Are the new bounds derived from the conditions required to reach a BasicBlock, i.e. acyclic control-flow-sensitive?

If so, this seems to be isMonotonicAt(BB) with an additional step: isMonotonicInDomain(getEffectiveDomain(BB)).

That matches my intent. One thing I don’t quite understand, and which makes things more complex, is the existence of delinearization. For example, in the following case:

int A[][10];
for (i = 0; i < INT64_MAX; i++)
  for (j = 0; j < 10; j++)
    A[i][j] = 0;

The original offset will be lowered to %i*10 + %j, which overflows. However, this would be delinearized to %A[%i][%j], and in this case each subscript %i and %j is monotonic. I'm really not sure whether it's sound for isMonotonic(BB, i) to return true. Conservatively, maybe we should also pass the original offset to isMonotonicAt?

@amehsan
Contributor

amehsan commented Nov 21, 2025

I haven't read the full comment yet, but a point that I am going to mention in the issue that I have opened is this:

What we need to focus on is the "no wrapping" property, which can be divided into signed and unsigned categories. Monotonicity is not a fundamental property that we need to care about. It just happens that the two are related for linear functions.

To demonstrate this, assume we have a non-monotonic function; for example, assume we are checking the dependences between A[i*i - 1] and A[0] (all numbers interpreted as signed). Also assume that it is guaranteed that both operations, the multiply and the subtraction, are nsw for the values of i for which the instruction is executed (not the entire domain).

The question that we need to answer is this: are there values of i such that i*i - 1 == 0? When there is no overflow in the computation, the calculation of i*i - 1 will have the same bitwise representation whether it is carried out in the ring of integers modulo 2^64 or over the integers, and this tells us we can just solve the equation over the integers and we are fine.

This is the basic idea of the proofs we need to show that control flow is irrelevant. But monotonicity is not the issue here. What is important is the "no wrapping" property, as long as it is used in a consistent way (signed or unsigned) within each proof.

@amehsan
Contributor

amehsan commented Nov 21, 2025

I have opened this issue and will write down proofs that control flow is irrelevant (i.e. if we have no-wrap flags, we don't need to worry about control flow). It will be updated gradually.

#168823

@kasuga-fj
Contributor Author

  • I selected the term "monotonicity" instead of "no-wrap" because the former is easier to define. For a polynomial whose degree is less than 2, monotonicity would be equivalent to no-wrap.
  • The example A[i*i - 1] doesn't make much sense to me, since I believe that DA won't support quadratic or higher-degree addrecs for the foreseeable future (or possibly ever). Also, you may want to pay attention to how SCEV represents such higher-degree expressions. It doesn't hold the multiplications explicitly (though I don't know what that implies for DA).
  • As I mentioned in the previous comment, there exist dependence tests with different characteristics (Category 1 and 2). It seems to me that you only consider the tests in Category 2. I'm saying that entire-domain monotonicity is necessary for Category 1 tests.

I'm going to be direct for a moment. If this is a misunderstanding, I apologize in advance.

Looking back at your comments in this thread, I got the impression that you haven't thoroughly read the DA code, and you are only focusing on the parts you have checked. If so, it's only natural that the discussion takes time and/or goes in circles. I don't think continuing the discussion with you in this state will lead to a good outcome.

@amehsan
Contributor

amehsan commented Nov 21, 2025

I'm going to be direct for a moment. If this is a misunderstanding, I apologize in advance.

Looking back at your comments in this thread, I got the impression that you haven't thoroughly read the DA code, and you are only focusing on the parts you have checked. If so, it's only natural that the discussion takes time and/or goes in circles. I don't think continuing the discussion with you in this state will lead to a good outcome.

I suggest you show a specific technical mistake in my comments. That would be much more convincing. But now let me answer your comment above.

@amehsan
Contributor

amehsan commented Nov 21, 2025

  • I selected the term "monotonicity" instead of "no-wrap" because the former is easier to define. For a polynomial whose degree is less than 2, monotonicity would be equivalent to no-wrap.

They are not equivalent and that is important for your example

; stride = INT64_MAX; 
; for (i = 0; i < 10; i++)
;   if (i % 2 == 0) {
;     A[stride*i + 100] = 1;  // A[100], A[98], A[96], ...
;     A[stride*i + 102] = 2;  // A[102], A[100], A[98], ...
;   }
;

because here it doesn't matter whether you interpret the numbers as signed or unsigned: stride*i + 102 cannot have no-wrap flags on both operations when i = 2. So this addresses your concern about control flow in this loop. But this loop is currently being discussed because, according to the definition of monotonicity, stride*i + 102 is monotonic for the values i = 0, 2, ..., 10.

Do I misunderstand something here?

  • The example A[i*i - 1] doesn't make much sense to me since I believe that DA doesn't support quadratic or higher degree addrecs for the foreseeable future (or possibly ever). Also you may want to pay attention to how SCEV represents such higher degree expressions. It doesn't hold the multiplications explicitly (though I don't know what it implies for DA).

I agree that this is not a practically interesting example, but the point is this: we don't need monotonicity to prove the correctness of a given test. We just need the existence of no-wrap flags.

  • As I mentioned in the previous comment, there exists dependence tests with different characteristics (Category 1 and 2). It seems to me that you only consider the tests in Category 2. I'm saying that entire domain monotonicity is necessary for Category 1 tests.

Yes, I had the same understanding for some time, but now I believe we don't need to distinguish between them. I will write down proofs soon. You can look into them and try to find a mistake in them.

@amehsan
Contributor

amehsan commented Nov 21, 2025

@Meinersbur I think it might be more productive if we have a loop meeting and discuss these issues in a meeting. Is that possible?

@amehsan
Contributor

amehsan commented Nov 21, 2025

  • The example A[i*i - 1] doesn't make much sense to me since I believe that DA doesn't support quadratic or higher degree addrecs for the foreseeable future (or possibly ever). Also you may want to pay attention to how SCEV represents such higher degree expressions. It doesn't hold the multiplications explicitly (though I don't know what it implies for DA).

I agree that this is not a practically interesting example, but the point is this: we don't need monotonicity for proving correctness of a given test. We just need the existence of nowrap flags.

To be clear, here I am using a test that requires solving a quadratic equation. Sure. We don't have this in DA and we don't want it. The point is that, even for non-monotonic functions, we can devise a test and prove its correctness if needed.

@kasuga-fj
Contributor Author

They are not equivalent and that is important for your example

; stride = INT64_MAX; 
; for (i = 0; i < 10; i++)
;   if (i % 2 == 0) {
;     A[stride*i + 100] = 1;  // A[100], A[98], A[96], ...
;     A[stride*i + 102] = 2;  // A[102], A[100], A[98], ...
;   }
;

because here it doesn't matter whether you interpret numbers as signed or unsigned stride*i + 102 cannot have no wrap flags on both operations when i = 2. So this addresses your concern about control flow in this loop. But currently this loop is being discussed because according to definition of monotonicity stride*i + 102 is monotonic for values of i = 0, i = 2, ....10.

I forgot to mention a very important precondition: they would be equivalent if the domain is consecutive. I wasn't really intending to discuss that extreme example.

For the remaining parts, I'd strongly recommend you read the code. I think Symbolic RDIV is a good one. It assumes that an addrec takes its minimum value at the first iteration and its maximum value at the last iteration if the coefficient is non-negative. I think it's clear that "an addrec doesn't wrap when it's executed" is insufficient.

@amehsan
Contributor

amehsan commented Nov 21, 2025

For the remaining parts, I'd strongly recommend you read the code. I think Symbolic RDIV is a good one. It assumes that an addrec takes its minimum value at the first iteration and its maximum value at the last iteration if the coefficient is non-negative. I think it's clear that "an addrec doesn't wrap when it's executed" is insufficient.

OK. I'll give you a proof for the case that you mentioned. Note that we are talking about a mathematical problem. If you believe my proof is incorrect, I would be more than happy to see a counterexample and learn from it.

Note that in addition to “an addrec doesn’t wrap when it’s executed” we also need computations inside DA code to not overflow. We have already fixed a couple of bugs related to that. So that should not be a controversial assumption.

Now let’s focus on the case of non-negative coefficients in Symbolic RDIV.

Assume we have a1 * i + c1 and a2 * j + c2. Note that Symbolic RDIV calculates both c1 - c2 and c2 - c1, so let's interpret the numbers as signed; otherwise one of the two subtractions will overflow (unless c1 == c2). (There are probably ways to make it work for unsigned numbers, but I don't think that is a concern here.)

Also note that a1 * N1 and a2 * N2 are two multiplications performed in DA code. If we cannot prove they do not overflow, DA cannot be correct. (Similar to the bugfix for StrongSIV that Alireza contributed and you reviewed.) So it is safe to assume that these two multiplications do not overflow. Also, the test assumes the starting value of i and j is zero.

Now let's proceed with the proof by contradiction. Let's say there are two executed values x (for i) and y (for j) such that a1 * x + c1 == a2 * y + c2 but Symbolic RDIV proved independence. There are two possible scenarios:

Case 1: a1 * N1 < c2 - c1, and so we (supposedly incorrectly) proved independence. Note that from our assumptions about non-overflowing computations we can conclude a1 * x + c1 <= a1 * N1 + c1 (x cannot be more than N1). So we have a1 * N1 + c1 < c2; it is ok to move c1 because we know there is no overflow in the left-hand-side computations. But since a1, N1, and x are non-negative numbers, we have a1 * x + c1 < c2. And since a2 and y are non-negative numbers, we conclude a1 * x + c1 < a2 * y + c2.

I haven't written down the proof for Case 2, but I believe it will work out similarly. The key point is this: all the computations used in the proof are either computations of the executed subscript or computations inside DA. Everything else is irrelevant.

If this is incorrect, please give me a counterexample.

@amehsan
Contributor

amehsan commented Nov 21, 2025

Case 1: a1 * N1 < c2 - c1, and so we (supposedly incorrectly) proved independence. Note that from our assumptions about non-overflowing computations we can conclude a1 * x + c1 <= a1 * N1 + c1 (x cannot be more than N1). So we have a1 * N1 + c1 < c2; it is ok to move c1 because we know there is no overflow in the left-hand-side computations. But since a1, N1, and x are non-negative numbers, we have a1 * x + c1 < c2. And since a2 and y are non-negative numbers, we conclude a1 * x + c1 < a2 * y + c2.

There is a minor mistake. Let me correct it:

We know a1 * x <= a1 * N1 and we know a1 * N1 < c2 - c1, so we conclude a1 * x < c2 - c1. Now we can move c1 to the left side and conclude a1 * x + c1 < c2.
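The corrected chain can also be sanity-checked by brute force over a small search space. This is only an illustration of the Case 1 reasoning (the function name and the grid bounds are mine), not a substitute for the proof:

```c
/* Brute-force check of the Case 1 reasoning on a small grid: whenever
   a1 * N1 < c2 - c1 (the condition under which Symbolic RDIV reports
   independence), no executed pair (x <= n1, y) with non-negative a1, a2,
   x, y can make the two subscripts equal, because a1*x + c1 < a2*y + c2
   always holds. Returns 1 if the claim survives the whole grid. */
int case1_holds(void) {
  for (long a1 = 0; a1 <= 4; a1++)
    for (long n1 = 0; n1 <= 4; n1++)
      for (long c1 = -4; c1 <= 4; c1++)
        for (long c2 = -4; c2 <= 4; c2++) {
          if (!(a1 * n1 < c2 - c1))
            continue; /* independence would not be reported here */
          for (long a2 = 0; a2 <= 4; a2++)
            for (long x = 0; x <= n1; x++)
              for (long y = 0; y <= 4; y++)
                if (!(a1 * x + c1 < a2 * y + c2))
                  return 0;
        }
  return 1;
}
```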

@kasuga-fj
Contributor Author

Also Note that a1 * N1 and a2 * N2 are two multiplications performed in DA code. If we cannot prove they do not overflow DA cannot be correct.

This assumption seems almost equivalent to requiring monotonicity over the entire domain. If we cannot prove monotonicity over the entire domain, then in most cases we cannot prove that a1 * N1 and a2 * N2 don't overflow either. To go further, it feels like we can prove the former much more often than the latter. I'm really not sure why you are so strongly opposed to introducing a notion like entire-domain monotonicity.

I believe Symbolic RDIV should be rewritten to just compare the minimum/maximum values of the two subscripts, after ensuring they are monotonic over the entire domain. This approach is more natural and doesn’t require any complicated proofs.
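To make that proposal concrete, here is a minimal sketch (my own illustration of the idea, not DA's actual code) of what the min/max comparison looks like once both subscripts a*i + c are known to be monotonic over the entire domain with non-negative coefficients; 128-bit intermediates sidestep the overflow concerns discussed above:

```c
#include <stdint.h>

/* Range-disjointness test for subscripts a1*i + c1 (0 <= i <= n1) and
   a2*j + c2 (0 <= j <= n2), assuming a1, a2 >= 0 and entire-domain
   monotonicity, so each subscript's range is exactly [c, a*n + c].
   __int128 arithmetic cannot overflow for 64-bit inputs. */
int ranges_disjoint(int64_t a1, int64_t c1, int64_t n1,
                    int64_t a2, int64_t c2, int64_t n2) {
  __int128 hi1 = (__int128)a1 * n1 + c1;
  __int128 hi2 = (__int128)a2 * n2 + c2;
  return hi1 < (__int128)c2 || hi2 < (__int128)c1; /* no common value */
}
```

For example, ranges_disjoint(1, 0, 10, 1, 100, 10) reports independence ([0, 10] vs [100, 110] never intersect), while ranges_disjoint(1, 0, 10, 1, 5, 10) does not ([0, 10] and [5, 15] overlap).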

@amehsan
Contributor

amehsan commented Nov 22, 2025

It would be great to have the opinion of the LLVM area team about this discussion. @nikic @arsenm @fhahn

I don't know what your approach is, but my suggestion is to start with a meeting on the issue. Obviously, feel free to ignore my suggestion as I am not familiar with your processes.

@Meinersbur
Member

The question that we need to answer is this: are there values for i such that i * i - 1 = 0? When there is no overflow in the computation, the calculation of i*i - 1 will have the same bitwise representation whether it is calculated in the ring of integers modulo 2^64 or as an integer, and this tells us we can just solve the equation over the integers and we are fine.

Are you suggesting using arbitrary-precision integers for solving dependency equations? Because this is why Integer Set Library and https://github.com/llvm/llvm-project/tree/main/mlir/include/mlir/Analysis/Presburger use arbitrary-precision integers.

This is the basic idea of the proofs that we need to show control flow is irrelevant. But monotonicity is not the issue here. What is important here is "no wrapping" property as long as it is used in a consistent way (signed or unsigned) for each proof.

Monotonicity is supposed to give the additional guarantee that expression evaluations stay between what the expression evaluates to in the first and last loop iterations, since otherwise, to get the range of an arbitrary expression, one would need to evaluate it for every in-between value. Every time I see getBackedgeTakenCount, some monotonicity assumption probably went in. Two subscript expressions having non-intersecting ranges is the basic non-dependence test. An even stronger guarantee would be linearity, but this would also rule out SCEV expressions such as integer division by a constant and min/max. Indeed, ZIV-, SIV-, and MIV-classified subscripts don't contain them and are linear by definition, but there are other cases we might want to dependence-analyze.

@arsenm
Contributor

arsenm commented Nov 24, 2025

entier?

@amehsan
Contributor

amehsan commented Nov 24, 2025

entier?

entire iteration space

EDIT: I assume you are asking about the word in the title of the PR

@arsenm
Contributor

arsenm commented Nov 24, 2025

entier?

entire iteration space

EDIT: i assume you are asking about the word in the title of the PR

Which isn't a word so fix the typo?

@amehsan
Contributor

amehsan commented Nov 24, 2025

entier?

entire iteration space
EDIT: I assume you are asking about the word in the title of the PR

Which isn't a word so fix the typo?

Sure. I did not open the PR, but I believe fixing a typo should be fine.

@amehsan amehsan changed the title [DA] Add tests for nsw doesn't hold on entier iteration [DA] Add tests for nsw doesn't hold on entire iteration space Nov 24, 2025
@kasuga-fj
Contributor Author

I didn't expect to ask for opinions from others (including the LLVM Area Team) on this one, because

  • This thread alone lacks context
  • Even the information within the thread isn't well-organized
  • Nearly half of this thread is somewhat off-topic

I think summarizing the contents and issues is necessary before involving others.

my suggestion is to start with a meeting on the issue

I'm very bad at speaking and listening in English, so I'm not sure if arranging a meeting is a good idea. That said, I don't mind joining the next loop opt meeting.

Anyway, I'll mark this PR as ready for review in a few days, as it's clear that either the definition of monotonicity or the result of its check needs to be corrected.

Comment on lines 5 to 8
Contributor

The solution to this is to drop the notion of monotonicity and define everything in terms of "no-wrap".

Contributor Author

As I mentioned in #159846 (comment), there are cases where entire monotonicity has benefits. My proposal is to prepare two types of domain. Have you read the previous reply?

Also, we have explained why we use the term monotonicity. Moreover, it's slightly off the topic of this issue.

Contributor

As I mentioned in #159846 (comment), there are cases where entire monotonicity has benefits. My proposal is to prepare two types of domain. Have you read the previous reply?

And I responded to your comment there. Please, let's stop this never ending discussion and wait for the area team to help us resolve this issue.

Contributor Author

I don't expect the area team to assist even at this level of detail...

I'm crafting a prototype to show my approach. I'll share it once it's ready. If you still have any objections, please say so. I honestly have no idea what you are disagreeing with.

Contributor Author

For now, I've implemented what I have in mind for Strong SIV. If you still see any issues, please let me know.

kasuga-fj/llvm-project@c0a7b15...f579161

I believe this approach is better than inserting overflow checks everywhere as we don't need complex proofs.

Contributor Author

1- Looking at the Strong SIV code, you have removed 24 lines of code and added 36 lines of code, so your code is more complex.

But we don't need complex proofs like #162281 (comment). I really want to avoid relying on such a proof because:

  • It's complex and easy to get wrong
  • It strongly depends on specific parts of the implementation
  • Some minor changes in the code can easily break the proof

2- How do you know nsw is valid for first or last iteration of the loop?

For the innermost addrec, if the exact BTC is computable, the nowrap property should be valid for every iteration.

3- This is more compile-time intensive than the previous approach. From just a quick look at the SCEV-related computations, your approach seems more expensive than the previous one.

It's not trivial. If you haven't seen it, I'd strongly recommend taking a look at the implementation of SCEV. For example, willNotOverflow performs the same operation twice, with two different bitwidths.
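For readers not familiar with that code: willNotOverflow essentially performs the operation once at the original width and once at a widened width, then compares the results. Below is a rough standalone C sketch of that idea for a signed 64-bit add (my own illustration under that reading, not the actual SCEV implementation):

```c
#include <stdint.h>

/* Sketch of a willNotOverflow-style check for a signed 64-bit add:
   do the add with wrapping semantics in 64 bits, redo it exactly in
   128 bits, and declare "no overflow" only if the two results agree. */
int add_will_not_overflow_signed(int64_t a, int64_t b) {
  int64_t narrow = (int64_t)((uint64_t)a + (uint64_t)b); /* wrapping add */
  __int128 wide = (__int128)a + (__int128)b;             /* exact add */
  return wide == (__int128)narrow;
}
```

This is why such checks are not free: every queried operation is effectively evaluated twice, at two different bitwidths.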

Contributor

For the innermost addrec, if the exact BTC is computable, the nowrap property should be valid for every iteration.

I don't think this is correct. I will give you an example in another comment later today. Will also respond to the rest of your comments later today.

Contributor

@amehsan amehsan Nov 27, 2025

Couple of quick comments for now:

But we don't need complex proofs like #162281 (comment).

The link is to the proof for Symbolic RDIV. The correct link to the proof for Strong SIV is this: #168823 (comment). And I believe you are talking about Case 2, which is about the test you mention.

Case 1 which is about the other test in Strong SIV is quite simple.

For the innermost addrec, if the exact BTC is computable, the nowrap property should be valid for every iteration.

I believe you rely on SCEV's conservative decision to drop nsw flags for instructions that are under a condition. That is a deliberately conservative choice on SCEV's part, and I don't think we should rely on it. In the future, SCEV may change or be extended or otherwise modified. There is no reason to create potential future limitations.

Will add more comments later on.

EDIT: This conservative decision of SCEV will hurt us. If two loops have control flow inside their bodies, they very likely cannot be fused because they fail the dependence check. We may need to get to the bottom of this.

Contributor Author

@kasuga-fj kasuga-fj Nov 28, 2025

The link is to the proof for Symbolic RDIV. The correct link to the proof for Strong SIV is this: #168823 (comment). And I believe you are talking about Case 2, which is about the test you mention.

A similar approach can be applied to Symbolic RDIV as well, so both #162281 (comment) and Case 2 in #168823 (comment) are unnecessary with my approach.

I believe you rely on SCEV's conservative decision to drop nsw flags for instructions that are under a condition. That is a deliberately conservative choice on SCEV's part, and I don't think we should rely on it. In the future, SCEV may change or be extended or otherwise modified. There is no reason to create potential future limitations.

It is guaranteed that the nowrap property holds within the defining scope of the SCEV. This means that the nowrap property of an addrec holds on all executed iterations of the loop. It is therefore reasonable to rely on this contract. I'd recommend reading this article and #159846 (comment).

Contributor

It is guaranteed that the nowrap property holds within the defining scope of the SCEV. This means that the nowrap property of an addrec holds on all executed iterations of the loop. It is therefore reasonable to rely on this contract. I'd recommend reading this article and #159846 (comment).

If we rely on SCEV behavior, I may be able to simplify my proofs too. Let me check if that is possible. Also, I need to look into the compile-time issue a little more. Give me some time for that.

this article

Not directly related to your proposed change, but: the article explains the design, but it doesn't say why it is designed this way. By any chance, do you have an example where something breaks if we keep the IR's nsw flag for subscripts that are executed under a condition (assume the load/store that uses the subscript is also under the same condition)?

@kasuga-fj kasuga-fj marked this pull request as ready for review November 27, 2025 15:26
@llvmbot llvmbot added the llvm:analysis Includes value tracking, cost tables and constant folding label Nov 27, 2025
@llvmbot
Member

llvmbot commented Nov 27, 2025

@llvm/pr-subscribers-llvm-analysis

Author: Ryotaro Kasuga (kasuga-fj)

Changes

The monotonicity definition states its domain as follows:

/// The property of monotonicity of a SCEV. To define the monotonicity, assume
/// a SCEV defined within N-nested loops. Let i_k denote the iteration number
/// of the k-th loop. Then we can regard the SCEV as an N-ary function:
///
///   F(i_1, i_2, ..., i_N)
///
/// The domain of i_k is the closed range [0, BTC_k], where BTC_k is the
/// backedge-taken count of the k-th loop

The current monotonicity check implementation doesn't match this definition because:

  • Just checking nowrap property of addrecs recursively is not sufficient to ensure monotonicity over the entire domain. The nowrap property may hold for certain paths but not for all possible iteration combinations of nested loops.
  • It doesn't consider cases where exact backedge-taken counts are unknown.

Therefore we need to fix either the definition or the implementation. This patch adds the test cases that demonstrate this mismatch.


Full diff: https://github.com/llvm/llvm-project/pull/162281.diff

1 file affected:

  • (added) llvm/test/Analysis/DependenceAnalysis/monotonicity-loop-guard.ll (+141)
diff --git a/llvm/test/Analysis/DependenceAnalysis/monotonicity-loop-guard.ll b/llvm/test/Analysis/DependenceAnalysis/monotonicity-loop-guard.ll
new file mode 100644
index 0000000000000..5f19ca96badcd
--- /dev/null
+++ b/llvm/test/Analysis/DependenceAnalysis/monotonicity-loop-guard.ll
@@ -0,0 +1,141 @@
+; NOTE: Assertions have been autogenerated by utils/update_analyze_test_checks.py UTC_ARGS: --version 6
+; RUN: opt < %s -disable-output -passes="print<da>" -da-dump-monotonicity-report \
+; RUN:     -da-enable-monotonicity-check 2>&1 | FileCheck %s
+
+; FIXME: These cases are not monotonic because currently we define the domain
+; of monotonicity as "entire iteration space". However, the nsw property is
+; actually valid only under the loop guard conditions.
+
+; for (i = 0; i < INT64_MAX - 1; i++)
+;   if (i < 1000)
+;     for (j = 0; j < 2000; j++)
+;       a[i + j] = 0;
+;
+define void @nsw_under_loop_guard0(ptr %a) {
+; CHECK-LABEL: 'nsw_under_loop_guard0'
+; CHECK-NEXT:  Monotonicity check:
+; CHECK-NEXT:    Inst: store i8 0, ptr %idx, align 1
+; CHECK-NEXT:      Expr: {{\{\{}}0,+,1}<nuw><nsw><%loop.i.header>,+,1}<nuw><nsw><%loop.j>
+; CHECK-NEXT:      Monotonicity: MultivariateSignedMonotonic
+; CHECK-EMPTY:
+; CHECK-NEXT:  Src: store i8 0, ptr %idx, align 1 --> Dst: store i8 0, ptr %idx, align 1
+; CHECK-NEXT:    da analyze - none!
+;
+entry:
+  br label %loop.i.header
+
+loop.i.header:
+  %i = phi i64 [ 0 , %entry ], [ %i.next, %loop.i.latch ]
+  br label %loop.j.pr
+
+loop.j.pr:
+  %guard.j = icmp slt i64 %i, 1000
+  br i1 %guard.j, label %loop.j, label %loop.i.latch
+
+loop.j:
+  %j = phi i64 [ 0, %loop.j.pr ], [ %j.next, %loop.j ]
+  %offset = add nsw i64 %i, %j
+  %idx = getelementptr inbounds i8, ptr %a, i64 %offset
+  store i8 0, ptr %idx
+  %j.next = add nsw i64 %j, 1
+  %ec.j = icmp eq i64 %j.next, 2000
+  br i1 %ec.j, label %loop.i.latch, label %loop.j
+
+loop.i.latch:
+  %i.next = add nsw i64 %i, 1
+  %ec.i = icmp eq i64 %i.next, 9223372036854775807
+  br i1 %ec.i, label %exit, label %loop.i.header
+
+exit:
+  ret void
+}
+
+; for (i = 0; i < INT64_MAX; i++)
+;   if (100 < i)
+;     for (j = 0; j < 100; j++)
+;       a[INT64_MAX - i + j] = 0;
+;
+define void @nsw_under_loop_guard1(ptr %a) {
+; CHECK-LABEL: 'nsw_under_loop_guard1'
+; CHECK-NEXT:  Monotonicity check:
+; CHECK-NEXT:    Inst: store i8 0, ptr %idx, align 1
+; CHECK-NEXT:      Expr: {{\{\{}}9223372036854775807,+,-1}<nsw><%loop.i.header>,+,1}<nuw><nsw><%loop.j>
+; CHECK-NEXT:      Monotonicity: MultivariateSignedMonotonic
+; CHECK-EMPTY:
+; CHECK-NEXT:  Src: store i8 0, ptr %idx, align 1 --> Dst: store i8 0, ptr %idx, align 1
+; CHECK-NEXT:    da analyze - none!
+;
+entry:
+  br label %loop.i.header
+
+loop.i.header:
+  %i = phi i64 [ 0 , %entry ], [ %i.next, %loop.i.latch ]
+  br label %loop.j.pr
+
+loop.j.pr:
+  %guard.j = icmp sgt i64 %i, 100
+  br i1 %guard.j, label %loop.j, label %exit
+
+loop.j:
+  %j = phi i64 [ 0, %loop.j.pr ], [ %j.next, %loop.j ]
+  %val.0 = sub nsw i64 9223372036854775807, %i
+  %val = add nsw i64 %val.0, %j
+  %idx = getelementptr inbounds i8, ptr %a, i64 %val
+  store i8 0, ptr %idx
+  %j.next = add nsw i64 %j, 1
+  %ec.j = icmp eq i64 %j.next, 100
+  br i1 %ec.j, label %loop.i.latch, label %loop.j
+
+loop.i.latch:
+  %i.next = add nsw i64 %i, 1
+  %ec.i = icmp eq i64 %i.next, 9223372036854775807
+  br i1 %ec.i, label %exit, label %loop.i.header
+
+exit:
+  ret void
+}
+
+; for (i = 0; i < n; i++)
+;   if (i < m)
+;     for (j = 0; j < k; j++)
+;       a[i + j] = 0;
+;
+define void @nsw_under_loop_guard2(ptr %a, i64 %n, i64 %m, i64 %k) {
+; CHECK-LABEL: 'nsw_under_loop_guard2'
+; CHECK-NEXT:  Monotonicity check:
+; CHECK-NEXT:    Inst: store i8 0, ptr %idx, align 1
+; CHECK-NEXT:      Expr: {{\{\{}}0,+,1}<nuw><nsw><%loop.i.header>,+,1}<nuw><nsw><%loop.j>
+; CHECK-NEXT:      Monotonicity: MultivariateSignedMonotonic
+; CHECK-EMPTY:
+; CHECK-NEXT:  Src: store i8 0, ptr %idx, align 1 --> Dst: store i8 0, ptr %idx, align 1
+; CHECK-NEXT:    da analyze - output [* *]!
+;
+entry:
+  br label %loop.i.header
+
+loop.i.header:
+  %i = phi i64 [ 0 , %entry ], [ %i.next, %loop.i.latch ]
+  br label %loop.j.pr
+
+loop.j.pr:
+  %guard.j = icmp slt i64 %i, %m
+  br i1 %guard.j, label %loop.j, label %exit
+
+loop.j:
+  %j = phi i64 [ 0, %loop.j.pr ], [ %j.next, %loop.j ]
+  %val = phi i64 [ %i, %loop.j.pr ], [ %val.next, %loop.j ]
+  %j.next = add nsw i64 %j, 1
+  %idx = getelementptr inbounds i8, ptr %a, i64 %val
+  store i8 0, ptr %idx
+  %val.next = add nsw i64 %val, 1
+  %ec.j = icmp eq i64 %j.next, %k
+  br i1 %ec.j, label %loop.i.latch, label %loop.j
+
+loop.i.latch:
+  %i.next = add nsw i64 %i, 1
+  %ec.i = icmp eq i64 %i.next, %n
+  br i1 %ec.i, label %exit, label %loop.i.header
+
+exit:
+  ret void
+}

@kasuga-fj kasuga-fj changed the title [DA] Add tests for nsw doesn't hold on entire iteration space [DA] Add tests for nsw doesn't hold on entire iteration space (NFC) Nov 27, 2025
@kasuga-fj
Contributor Author

With respect to SIV tests and nowrap-flag-based monotonicity inference, I noticed that monotonicity over the entire domain and monotonicity over the effective domain are almost the same...
