Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff            @@
##           latest    #2513     +/-   ##
=========================================
+ Coverage   79.48%   79.71%    +0.23%
=========================================
  Files         346      346
  Lines       85869    86227      +358
=========================================
+ Hits        68251    68735      +484
+ Misses      17618    17492      -126
```
jajhall left a comment:

Please fix the code that fails the CI tests.
@galabovaa I'm not sure how to handle code coverage here. This change should only handle the fringe case where there are many unbounded integer variables, and any such case is not going to be a quick solve that can be put into the tests.
fwesselm left a comment:

Looks good, @Opt-Mucca. I am going to qualify this shortly.
highs/mip/HighsRedcostFixing.cpp (outdated)

```cpp
if (mipsolver.mipdata_->domain.col_upper_[col] -
        mipsolver.mipdata_->domain.col_lower_[col] >=
    512) {
  if (lpredcost[col] > mipsolver.mipdata_->feastol) {
```
One could use the absolute value here:

```cpp
if (std::abs(lpredcost[col]) > mipsolver.mipdata_->feastol)
```
Good suggestion! I'll add it.
highs/mip/HighsRedcostFixing.cpp (outdated)

```cpp
  expshift = std::min(expshift, static_cast<int>(maxNumStepsExp));
  maxNumStepsExp = maxNumStepsExp - expshift + 5;
}
auto maxNumSteps = static_cast<HighsInt>(1ULL << maxNumStepsExp);
```
Maybe use HighsInt here instead of auto?
I had this originally, but my IDE insists that you should use auto if everything is already wrapped in a static_cast. I'll change it back and look into changing my settings.
```diff
-if (maxub - lb > 1024) step = (maxub - lb + 1023) >> 10;
+if (maxub - lb > maxNumSteps)
+  step = (maxub - lb + maxNumSteps - 1) >> maxNumStepsExp;
```
Is it worth having a lambda to compute step?

```cpp
auto computeStep = [&](HighsInt lb, HighsInt ub) {
  if (ub - lb > maxNumSteps)
    return (ub - lb + maxNumSteps - 1) >> maxNumStepsExp;
  return HighsInt{1};
};
```
Possibly, but this PR is taking up rather a lot of time in the context of much higher priorities.
I don't think so. It's only used twice.

At some point I cobbled together a lambda for the whole thing (as there's plenty of duplicated code for the positive vs. negative reduced-cost cases), but I wasn't convinced that my attempt was easier to read, even if it was a few lines shorter.
@Opt-Mucca, there was mention of code coverage in one comment, so I thought having this little piece of logic in a lambda would help to improve the coverage. But it's up to you, of course.
The test results look reasonable - only a few instances are affected - and "average" performance (in terms of ratios of shifted geometric means) looks good.
Currently, for each integral column with `ub - lb >= 1024`, HiGHS adds 1024 lurking bounds using reduced-cost logic. In general this isn't an issue, because most models don't have many integer columns with large domains. Additionally, if they do, presolve usually finds that most domains can be substantially tightened, and even if it doesn't, it's unlikely that many of the integer columns feature in the objective function.

What happened in #2508 is that a huge number of continuous columns got upgraded to implied integer, with most of them being unbounded and featuring in the objective. This led to an enormous amount of time being spent adding lurking bounds derived from reduced costs, which is not that important for the solve process.
The change: I've added a quick check to scale the maximum number of values to query (instead of 1024, it's now some power of 2 in the range [32, 1024]). This is decided based on the number of columns with non-zero reduced cost and domains wider than `ub - lb >= 512`.

The only downside: this adds a linear-time check that is redundant for nearly all instances. I don't see it being a problem.
I've tested this on 60 or so instances, and, as expected, there's no change in any of them. This should still be tested on a wider set of instances, just to see whether the safety check actually has measurable overhead, and whether the change affects the solve path of any instance.