Conversation
…mbent_read and refactored
…ile for last worker; LastIncumbentRead not being set
Codecov Report

❌ Patch coverage report: additional details and impacted files below.

@@            Coverage Diff             @@
##           latest    #2474      +/-   ##
==========================================
- Coverage   79.73%   79.69%   -0.04%
==========================================
  Files         346      346
  Lines       85976    86096     +120
==========================================
+ Hits        68553    68616      +63
- Misses      17423    17480      +57
…ut using Highs::mipRaceResults
Now that the solution of sub-MIPs can be terminated, the MIP solver instances that lose the race take only a little longer than the winner. Hence we have a 7X speedup on fiball. Further experiments will follow once I've cleaned up the dev printf statements.
highs/mip/HighsMipSolverData.cpp
Outdated
return this->start_write_incumbent == start_write_incumbent
           ? start_write_incumbent
           : kMipRaceNoSolution;
}
Nice trick with the start/finish checks, but I believe there's still a chance that this could fail in a race condition, especially if some "smart" compiler optimizes away the final this->start_write_incumbent == start_write_incumbent check.
Perhaps consider using std::atomic<HighsInt> for your start/finish variables? The atomic class is lock-free and very fast on most systems.
struct MipRaceIncumbent {
std::atomic<HighsInt> start_write_incumbent = kMipRaceNoSolution;
std::atomic<HighsInt> finish_write_incumbent = kMipRaceNoSolution;
double objective = -kHighsInf;
std::vector<double> solution;
void clear();
void initialise(const HighsInt num_col);
void update(const double objective, const std::vector<double>& solution);
HighsInt read(const HighsInt last_incumbent_read, double& objective_,
std::vector<double>& solution_) const;
MipRaceIncumbent() = default;
MipRaceIncumbent(const MipRaceIncumbent& copy) {
start_write_incumbent = copy.start_write_incumbent.load();
finish_write_incumbent = copy.finish_write_incumbent.load();
objective = copy.objective;
solution = copy.solution;
}
MipRaceIncumbent(MipRaceIncumbent&& moving) {
start_write_incumbent = moving.start_write_incumbent.load();
finish_write_incumbent = moving.finish_write_incumbent.load();
objective = moving.objective;
solution = std::move(moving.solution);
}
};
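For context, here is a hypothetical sketch of how read could use the two counters as a seqlock, mirroring the check-twice pattern in the snippet at the top of this thread; kMipRaceNoSolution, HighsInt and the member names come from the PR, but the body is illustrative only:
HighsInt MipRaceIncumbent::read(const HighsInt last_incumbent_read,
                                double& objective_,
                                std::vector<double>& solution_) const {
  // Snapshot the write counter before copying
  const HighsInt start = start_write_incumbent.load(std::memory_order_acquire);
  // Bail out if there is no new incumbent, or a write is still in progress
  if (start == last_incumbent_read ||
      start != finish_write_incumbent.load(std::memory_order_acquire))
    return kMipRaceNoSolution;
  objective_ = objective;
  solution_ = solution;
  // The copy is consistent only if no writer started in the meantime
  return start_write_incumbent.load(std::memory_order_acquire) == start
             ? start
             : kMipRaceNoSolution;
}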
That said, I'm guessing you're trying to avoid the heavier std::mutex for thread synchronization. As a lighter alternative, you can also use std::atomic_flag for a SpinLock implementation. For example:
#include <atomic>

class Spinlock {
 private:
  std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

 public:
  void lock() {
    while (lock_flag.test_and_set(std::memory_order_acquire)) {
      // Busy-wait (spin) until the lock is released
    }
  }
  bool try_lock() noexcept {
    // test_and_set returns the previous value, so the lock was acquired
    // only if the flag was previously clear
    return !lock_flag.test_and_set(std::memory_order_acquire);
  }
  void unlock() { lock_flag.clear(std::memory_order_release); }
};

Note: There is likely better SpinLock code available.
Oh! I forgot that we already have HighsSpinMutex. So ignore my SpinLock code above.
I tried implementing the new struct MipRaceIncumbent, but got the following compiler error:
/home/jajhall/HiGHS/highs/mip/HighsMipSolver.h:38:25: error: field ‘start_write_incumbent’ has incomplete type ‘std::atomic<HighsInt>’
   38 |   std::atomic<HighsInt> start_write_incumbent = kMipRaceNoSolution;
      |                         ^~~~~~~~~~~~~~~~~~~~~
Not entirely sure. Did you #include <atomic>?
Ah, sorry. That's a Linux vs Windows issue. The following should work on both.
std::atomic<HighsInt> start_write_incumbent {kMipRaceNoSolution};
std::atomic<HighsInt> finish_write_incumbent {kMipRaceNoSolution};
Not entirely sure. Did you #include <atomic>?
Yes, but it doesn't fix the error.
Ah, sorry. That's a Linux vs Windows issue. The following should work on both.
Works on Linux! :-)
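For reference, a minimal illustration of the two initialisation forms; my understanding is that, before C++17, copy-initialising a std::atomic names its deleted copy constructor, so compilers on older language standards reject it, whereas brace direct-initialisation is fine everywhere:
#include <atomic>

constexpr int kMipRaceNoSolution = -1;  // stand-in for the PR's constant

// Rejected before C++17: copy-initialisation names the deleted copy
// constructor of std::atomic (C++17's guaranteed elision removes the need)
// std::atomic<int> bad = kMipRaceNoSolution;

// Direct-initialisation compiles under every standard
std::atomic<int> good{kMipRaceNoSolution};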
highs/lp_data/Highs.cpp
Outdated
}
highs::parallel::for_each(
    0, mip_race_concurrency, [&](HighsInt start, HighsInt end) {
      for (HighsInt instance = start; instance < end; instance++) {
BTW: AFAIK this inner for loop will execute only on one thread, so it would be a bit pointless if it looped more than once (that would mean a previous HighsMipSolver::run had already finished).
That said, the default grain size is 1, so it's unlikely ever to occur. You could probably get rid of the inner for loop and just take instance = start instead, as sketched below.
Alternatively, you could use TaskGroup directly and spawn each task. It would be more effort, but you might then be able to use the TaskGroup::cancel function to interrupt the other threads.
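A minimal sketch of that simplification, assuming the default grain size of 1 so that each task receives a single-element range; mip_race_concurrency and the lambda shape come from the snippet above, the body is illustrative:
highs::parallel::for_each(
    0, mip_race_concurrency, [&](HighsInt start, HighsInt end) {
      // With grain size 1, end == start + 1, so no inner loop is needed
      const HighsInt instance = start;
      // ... run the MIP solver instance here ...
    });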
highs/mip/HighsMipSolver.h
Outdated
HighsInt concurrency() const;
void update(const double objective, const std::vector<double>& solution);
bool newSolution(const HighsInt instance, double objective,
                 std::vector<double>& solution);
The newSolution method is missing the & for the double objective. Without this, the solution sharing across threads isn't updating as frequently as it could.
bool newSolution(const HighsInt instance, double& objective,
std::vector<double>& solution);
That's an important omission! It meant that the objective for the solution read from another thread wasn't being passed back, so the solution from the other thread never replaced the incumbent. Hence, sharing solutions was the same as a pure race.
Making the correction, the performance on fiball when sharing solutions slows greatly, as none of the threads runs like the "lucky" random_seed=1 case. This is what I'd expected to see. However, for problems where there isn't an extremely lucky random_seed value, maybe this will open the door to improved performance!
Naturally, I'll experiment.
I noticed similar behaviour in my experiments. Sharing solutions can help on other instances, but not really on fiball. That said, when I performed the presolve once and shared that across the other threads, fiball was fast again.
BTW: I had to make many changes to ensure presolve is only performed once. Do you know a better way?
BTW: After more experiments with fiball, sharing the presolve doesn't necessarily solve it fast. More investigation needed!
Now that I've thought a little, it's easy internally to perform presolve only once, and then race the solution of the presolved MIP.
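A sketch of that approach through the public API, assuming Highs::presolve and Highs::getPresolvedLp (both exist on the Highs class); the internal hook the PR ends up using may well differ, and postsolving the winner's solution back to the original space is omitted:
#include "Highs.h"

// Presolve once, then give every racing instance the same reduced MIP
void raceOnPresolvedMip(const HighsLp& lp, const HighsInt num_instance) {
  Highs presolver;
  presolver.passModel(lp);
  presolver.presolve();
  const HighsLp& presolved_lp = presolver.getPresolvedLp();

  for (HighsInt instance = 0; instance < num_instance; instance++) {
    Highs worker;
    worker.passModel(presolved_lp);
    // Differentiate the racers, e.g. by random seed
    worker.setOptionValue("random_seed", instance);
    worker.run();  // in the actual race these calls run concurrently
  }
}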
highs/lp_data/Highs.cpp
Outdated
| "= %6.2f), and status %s\n", | ||
| int(instance), solver_info.solution_objective, | ||
| 1e2 * solver_info.gap, mip_time[instance], | ||
| modelStatusToString(instance_model_status).c_str()); |
Minor fix: \% should be %%
i.e., " Solver %d has best objective %15.8g, gap %6.2f\% (time "
highs/mip/HighsMipSolverData.cpp
Outdated
postSolveStack.getReducedPrimalSolution(instance_solution);
addIncumbent(reduced_instance_solution, instance_solution_objective_value,
             kSolutionSourceHighsSolution);
}
I believe there is a bug with the objective value when mipsolver.model_->offset_ != 0. This can prevent improved solutions from being accepted by other threads.
Details:
addIncumbent checks instance_solution_objective_value < upper_bound before updating, but upper_bound excludes the mipsolver.model_->offset_ value and instance_solution_objective_value includes it.
Potential fix:
// exclude offset from objective value
instance_solution_objective_value -= mipsolver.model_->offset_;
addIncumbent(reduced_instance_solution, instance_solution_objective_value,
kSolutionSourceHighsSolution);
Thanks: the correction depends on whether the problem is a maximization or a minimization.
Good to know. I was thinking the sign for offset_ might've already accounted for min/max.
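To make the min/max point concrete, a hedged sketch of a sign-aware version of the fix above; the sense_ test is my assumption about how the internal objective is stored, and the PR's actual correction may differ:
// Assumption: for a maximization the internal objective is negated, so the
// offset must be removed with the opposite sign
const double offset_sign =
    mipsolver.model_->sense_ == ObjSense::kMinimize ? 1.0 : -1.0;
instance_solution_objective_value -= offset_sign * mipsolver.model_->offset_;
addIncumbent(reduced_instance_solution, instance_solution_objective_value,
             kSolutionSourceHighsSolution);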
…ctive value for maximization, as well as minimization problems; formatted
…ived from another thread
…responding objective locally
Can now perform presolve before the MIP race so that all participants are solving the same problem. Controlled by option
The performance gain from the MIP race didn't justify introducing a non-deterministic solver into HiGHS, or the investment of time required to fix the segfaults and incorrect deductions of infeasibility. All the code relating to the MIP race has been deleted so that the one useful outcome - the
Some residual code from the MIP race experiment:
- HighsTerminator struct allows termination of the MIP solver to be communicated more readily, and across multiple threads
- MipSolutionSource now ordered alphabetically by key letters
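For illustration, the kind of interface such a terminator suggests; everything beyond the HighsTerminator name is hypothetical, not the PR's code:
#include <atomic>

// Illustrative only: a flag any thread may set to request termination,
// polled by the MIP solver at safe points
struct HighsTerminator {
  std::atomic<bool> terminate{false};
  void reset() { terminate.store(false, std::memory_order_relaxed); }
  void request() { terminate.store(true, std::memory_order_relaxed); }
  bool terminated() const { return terminate.load(std::memory_order_relaxed); }
};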