Conversation
**Codecov Report** — ✅ All modified and coverable lines are covered by tests.

    @@ Coverage Diff @@
    ##           latest   #2449     +/-  ##
    ==========================================
    + Coverage   79.44%   79.68%   +0.23%
    ==========================================
      Files         346      346
      Lines       85060    85898    +838
    ==========================================
    + Hits        67577    68444    +867
    + Misses      17483    17454    -29
**fwesselm** left a comment:
Thank you for making these changes, @Opt-Mucca. My only question is whether it would be worth adding an option to control the bound threshold (instead of hard-coding it in different places)?
@fwesselm I would say that it's worth adding as a parameter. I've just been hesitant to add one, as I thought it was a design choice.
Thanks. Instead of adding an option, maybe just adding
in the
**jajhall** left a comment:
As observed by @fwesselm, this change to a "magic number" needs to be the const value of an identifier at the very least.
Once we've done regression testing on MIPLIB, it may be OK to leave it as a const identifier; otherwise it will have to be modifiable, either as an option or as a value set according to a problem measure.
@galabovaa I've got a C++ question: I'd like to add the line
@fwesselm @jajhall I first attempted to add this as an option, but hit a wall as it gets accessed by static functions and there's no clean way to access
I think the code looks good. I am going to qualify this now.
I think this should be fine!
This was qualified by @fwesselm and was found to be slightly performance-negative. This points to another misalignment between my local tests and his cluster. For now I'll close the PR, but keep the branch open for additional parameter changes that I think should be made.
This PR would change two minor things (I am seeing a very nice chunk of performance improvement for smaller instances):

- Currently there is no upper limit on the weight of `degeneracyFactor` in the branching scores. Let's imagine a scenario where `degeneracyFactor` has meaningful impact in the code (when it's `>= 10`). In that case, strong branching is disabled, so we don't gather any new observations. We don't do this, as we predict that most strong branching decisions are not going to return any objective value improvements and would thus just populate our pseudo-costs with 0. What about when `degeneracyFactor = 1000`? In that case we still wouldn't get any strong branching information, but now the score is essentially ignoring the pseudo-costs that we have actually gathered just from regular branching. I'm just adding a cap on the factor during scoring so that the pseudo-costs in scoring are never ignored (if they're all 0 then who cares, and if they're larger than 0 on some variables then we probably shouldn't be ignoring them).
- HiGHS has a huge threshold for allowing a bound change on a continuous variable (currently it needs to reduce the domain by 30%). That is, currently if we could reduce the domain of a continuous variable from `100` to `80` we'd be ignoring it. I've changed this to 20% (which is still relatively large). I observed even better performance at 15%, but much worse at 10%, so I played it safe. If we have the time to test more values, then we should try 15% on a wide set of instances.