* <tr><td> `EvalBackend(std::string const&)` <td> Choose a likelihood evaluation backend:
* <table>
* <tr><th> Backend <th> Description
* <tr><td> **cpu** - *default* <td> New vectorized evaluation mode, using faster math functions and auto-vectorisation (currently on a single thread).
* Since ROOT 6.23, this is the default if `EvalBackend()` is not passed, succeeding the **legacy** backend.
* If all RooAbsArg objects in the model support vectorized evaluation,
* likelihood computations are 2 to 10 times faster than with the **legacy** backend (each on a single thread)
* - unless your dataset is so small that the vectorization is not worth it.
* The relative difference of the single log-likelihoods with respect to the legacy mode is usually better than \f$10^{-12}\f$,
* and for fit parameters it's usually better than \f$10^{-6}\f$. In past ROOT releases, this backend could be activated with the now deprecated `BatchMode()` option.
* This backend can drastically speed up the fit if all RooAbsArg objects in the model support it.
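* The backend is selected by passing `EvalBackend()` to the fit. A minimal sketch (assuming a `RooAbsPdf model` and a `RooAbsData* data` have already been set up):
* ```{.cpp}
* // Default vectorized CPU backend, spelled out explicitly;
* // equivalent to omitting EvalBackend() since ROOT 6.23:
* model.fitTo(*data, RooFit::EvalBackend("cpu"));
* ```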
* <tr><td> **legacy** <td> The original likelihood evaluation method.
* Evaluates the PDF for one data entry at a time before summing the negative log probabilities.
* It supports multi-threading, but you might need more than 20 threads to see about a 10% performance gain over the default **cpu** backend (which currently runs on a single thread).
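* A sketch of a multi-threaded legacy fit (assuming a `RooAbsPdf model` and a `RooAbsData* data`; `NumCPU()` splits the legacy likelihood over parallel worker processes):
* ```{.cpp}
* // Legacy backend with the likelihood evaluated in 4 parallel workers:
* model.fitTo(*data, RooFit::EvalBackend("legacy"), RooFit::NumCPU(4));
* ```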
* <tr><td> **codegen** <td> **Experimental** - Generates and compiles minimal C++ code for the NLL on-the-fly and wraps it in the returned RooAbsReal.
* Also generates and compiles the code for the gradient using Automatic Differentiation (AD) with [Clad](https://github.com/vgvassilev/clad).
* This analytic gradient is passed to the minimizer, which can result in significant speedups for many-parameter fits,