@@ -10,6 +10,7 @@ See benchmark results [here](https://intelpython.github.io/scikit-learn_bench).
 * [Prerequisites](#prerequisites)
 * [How to create conda environment for benchmarking](#how-to-create-conda-environment-for-benchmarking)
+* [How to enable daal4py patching for scikit-learn benchmarks](#how-to-enable-daal4py-patching-for-scikit-learn-benchmarks)
 * [Running Python benchmarks with runner script](#running-python-benchmarks-with-runner-script)
 * [Supported algorithms](#supported-algorithms)
 * [Algorithms parameters](#algorithms-parameters)
@@ -30,13 +31,15 @@ Create a suitable conda environment for each framework to test. Each item in the
 * [**cuml**](https://github.com/PivovarA/scikit-learn_bench/blob/master/cuml/README.md#how-to-create-conda-environment-for-benchmarking)
 * [**xgboost**](https://github.com/PivovarA/scikit-learn_bench/tree/master/xgboost/README.md#how-to-create-conda-environment-for-benchmarking)

+## How to enable daal4py patching for scikit-learn benchmarks
+Set the environment variable: `export FORCE_DAAL4PY_SKLEARN=YES`

 ## Running Python benchmarks with runner script

-Run `python runner.py --config configs/config_example.json [--output-format json --verbose]` to launch benchmarks.
+Run `python runner.py --configs configs/config_example.json [--output-format json --verbose]` to launch benchmarks.

 runner options:
-* ``config`` : the path to configuration file
+* ``configs`` : paths to configuration files
 * ``dummy-run`` : run the configuration parser and dataset generation without running the benchmarks
 * ``verbose`` : print additional information while the benchmarks run
 * ``output-format`` : *json* or *csv*; the output format for benchmark results
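Taken together, the patching step and the runner invocation above can be sketched as one shell session. This is a usage sketch, assuming it is run from the root of a scikit-learn_bench checkout where `runner.py` and `configs/config_example.json` exist:

```shell
# Enable daal4py patching of scikit-learn for the benchmark runs
# (the benchmark scripts read this variable, per the section above)
export FORCE_DAAL4PY_SKLEARN=YES

# Launch the benchmarks from one or more configuration files,
# writing results as JSON with verbose progress output
python runner.py --configs configs/config_example.json --output-format json --verbose
```

The `--output-format` and `--verbose` flags are optional, as the bracketed syntax in the command above indicates.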