This repository was archived by the owner on Jun 7, 2023. It is now read-only.

Commit 675f129
Author: Anand Sanmukhani
Merge pull request #156 from arjunshenoymec/master
"adding a description for the FLT_PARALLELISM option in the README file"
2 parents: d46d0a1 + d0ccf01

File tree: 1 file changed (+2 −1 lines changed)


README.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -16,7 +16,8 @@ The use case for this framework is to assist teams in real-time alerting of thei
 <br> Example: If this parameter is set to `15`, it will collect the past 15 minutes of metric data every 15 minutes and append it to the training dataframe.
 * `FLT_ROLLING_TRAINING_WINDOW_SIZE` - This parameter limits the size of the training dataframe to prevent Out of Memory errors. It can be set to the duration of data that should be stored in memory as dataframes. (Default `15d`)
 <br> Example: If set to `1d`, every time before training the model using the training dataframe, the metric data that is older than 1 day will be deleted.
-
+* `FLT_PARALLELISM` - An option for parallelism. Each metric is "assigned" a separate model object. This parameter represents the number of models that will be trained concurrently.
+<br> The default value is set as `1` and the upper limit will depend on the number of CPU cores provided to the container.
 If you are testing locally, you can do the following:
 - Environment variables are loaded from `.env`. `pipenv` will load these automatically. So make sure you execute everything via `pipenv install`.
 
```
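The two options documented in this diff can be sketched roughly as follows. This is a minimal illustration, not the framework's actual code: `trim_window`, `train_model`, and `train_all` are hypothetical names, and a thread pool stands in for whatever executor the project really uses (CPU-bound model training would typically favor a process pool).

```python
import os
from concurrent.futures import ThreadPoolExecutor

import pandas as pd

def trim_window(df: pd.DataFrame, window: str = "15d") -> pd.DataFrame:
    # FLT_ROLLING_TRAINING_WINDOW_SIZE: keep only rows newer than
    # `window`, measured back from the most recent timestamp, so the
    # training dataframe cannot grow without bound.
    cutoff = df["timestamp"].max() - pd.Timedelta(window)
    return df[df["timestamp"] >= cutoff]

def train_model(item):
    # Placeholder for fitting one model to one metric's dataframe.
    metric_name, df = item
    return metric_name, len(df)

def train_all(metric_frames: dict, parallelism: int) -> dict:
    # FLT_PARALLELISM: each metric gets its own model object, and up to
    # `parallelism` of them are trained concurrently.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return dict(pool.map(train_model, metric_frames.items()))

# Default parallelism is 1, as the README states.
parallelism = int(os.getenv("FLT_PARALLELISM", "1"))
```

With `window="1d"`, 48 hourly samples would be trimmed to just the rows within one day of the latest timestamp before each training pass.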

0 commit comments
