Replies: 2 comments
-
Two comments: a) I think our support for b) As you note, multiprocessing for optimization problems is not easy. It would be wonderful to be able to support this. We're not opposed to doing so; we just do not know how to do it. Suggestions and pull requests welcome.
-
@MagneticX this should be addressed by PR #1022
-
Dear community,
I'm currently working with lmfit's differential_evolution algorithm.
My understanding is that lmfit leverages scipy.optimize.differential_evolution under the hood, but I prefer lmfit due to its more convenient handling of parameter definition, constraints and bounds.
My goal is to accelerate the optimization process through parallel evaluation of parameter sets.
Typically, this is achieved by providing a map-like callable to the workers parameter, which requires the objective function to be pickleable for multiprocessing.
Here lies my challenge: My objective function is designed to evaluate multiple sets of parameters concurrently.
It achieves this by using asyncio to orchestrate and await results from multiple external simulations. Consequently, this objective function is not pickleable.
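Such an asyncio-driven batch objective might be structured roughly as follows. This is a hypothetical sketch: `run_simulation` stands in for the external simulations mentioned above, and the placeholder cost is arbitrary:

```python
import asyncio
import numpy as np

async def run_simulation(theta):
    """Stand-in for one external simulation call (hypothetical)."""
    await asyncio.sleep(0)          # placeholder for real async I/O
    return float(np.sum(theta**2))  # placeholder cost value

def batch_objective(population):
    """Evaluate many parameter sets concurrently via asyncio.

    population has shape (S, N): S parameter sets of N parameters each.
    Returns an array of S objective values, one per parameter set.
    """
    async def gather_all():
        tasks = [run_simulation(theta) for theta in population]
        return await asyncio.gather(*tasks)

    return np.asarray(asyncio.run(gather_all()))
```

Because the concurrency lives inside the objective itself, there is nothing for a multiprocessing `workers` pool to do, and objects like running event loops cannot be pickled anyway.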
With scipy.optimize.differential_evolution, I could leverage the vectorized=True option.
This allowed me to pass an array of parameter sets to my objective function, which used its own internal parallelism and returned a corresponding array of result values.
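With scipy directly, that vectorized mode looks roughly like this. Note that scipy hands the objective the population transposed, with shape `(N, S)` (N parameters, S population members), and expects S values back; the quadratic objective here is just a stand-in:

```python
import numpy as np
from scipy.optimize import differential_evolution

def vectorized_objective(x):
    """x has shape (N, S): N parameters, S population members.

    Returns one objective value per population member, shape (S,).
    """
    return np.sum((x - 0.5) ** 2, axis=0)

bounds = [(0, 1), (0, 1)]
result = differential_evolution(vectorized_objective, bounds,
                                vectorized=True,
                                updating='deferred',  # required with vectorized
                                seed=42)
# result.x converges to the minimum near [0.5, 0.5]
```

Inside `vectorized_objective` one is free to fan the S columns out to any internal parallelism (threads, asyncio, a batch API) before returning the S results.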
My question is: Does lmfit's differential_evolution (or any other lmfit optimizer) offer a similar vectorized evaluation mode, where the objective function receives multiple parameter sets at once?
Thank you!