v1.1.5 release
This is a large update.
Features:
- The major change is that the `AdaSTEM` class now supports `duckdb` and `parquet` file paths as input. This allows the user to pass in large datasets without duplicating the pandas DataFrame across processors when working with `n_jobs>1` parallel computing. See the new Jupyter notebooks for details. #76
- Lazy loading is no longer realized by the `LazyLoadingEnsemble` class. Instead, it is realized by `LazyLoadingEstimator`. This allows each model to be dumped once its training/prediction is finished, so we don't need to accumulate the models (and hence memory) until training is finished for the whole ensemble. This greatly reduces memory use. See the new Jupyter notebooks for details. #77
- `n_jobs > ensemble_folds` is no longer supported, for user-end clarity. Jobs are parallelized by ensemble folds, so `n_jobs > ensemble_folds` is meaningless. We do not want to mislead users into thinking that a 10-ensemble model will be trained faster using `n_jobs=20` than using `n_jobs=10`.
- These features will not be available in `SphereAdaSTEM` due to the negligible user market and negligible advantages. #75
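The memory benefit of file-path input comes from each worker process opening the file itself instead of receiving a pickled copy of the whole DataFrame. A minimal stdlib sketch of the idea (not stemflow's actual code; the CSV file stands in for a parquet/duckdb source):

```python
import csv
import os
import pickle
import tempfile

# Stand-in dataset written to disk; a real pipeline would point at an
# existing parquet or duckdb file instead.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y"])
    writer.writerows([[i, i * 2] for i in range(100_000)])

# What n_jobs>1 ships to every worker when a path is passed:
cost_of_path = len(pickle.dumps(path))    # a short string, a few bytes

# What would be shipped if the in-memory table were passed instead:
with open(path, newline="") as f:
    table = list(csv.reader(f))
cost_of_table = len(pickle.dumps(table))  # the entire dataset, per worker

print(cost_of_path, cost_of_table)
os.remove(path)
```

Each worker that receives only the path then reads just the rows it needs, so the parent's table is never duplicated across processes.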
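The per-estimator lazy loading can be pictured as follows: each estimator writes its fitted model to disk as soon as fitting finishes and drops the in-memory copy, rather than the whole ensemble accumulating fitted models until the end. A hypothetical sketch (the class name and the toy mean-predictor "model" are illustrative, not stemflow's implementation):

```python
import os
import pickle
import tempfile

class LazyEstimator:
    """Hypothetical wrapper: fit, dump the fitted model to disk, and
    free the in-memory copy immediately, so peak memory stays flat no
    matter how many ensemble folds are trained."""

    def __init__(self, dump_dir):
        self.dump_dir = dump_dir
        self.path = None  # location of the dumped model after fit()

    def fit(self, X, y):
        model = sum(y) / len(y)  # toy "model": predict the mean of y
        fd, self.path = tempfile.mkstemp(dir=self.dump_dir, suffix=".pkl")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(model, f)
        # The fitted model is NOT kept on self; only its path is.
        return self

    def predict(self, X):
        with open(self.path, "rb") as f:
            model = pickle.load(f)  # reload only when actually needed
        return [model for _ in X]

with tempfile.TemporaryDirectory() as d:
    est = LazyEstimator(d).fit([[0], [1], [2]], [1.0, 2.0, 3.0])
    preds = est.predict([[5], [6]])
print(preds)  # → [2.0, 2.0]
```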
Major bugs fixed:
- Previously, the models were stored in `self.model_dict` dynamically during the parallel ensemble training process, which means the dictionary was being altered during that process. However, we pass `self` as an input argument when serializing the ensemble-level training function, and the object being serialized should not be changing. This is fixed by assigning the `model_dict` to `self` only after all training is finished.
- Also fixed #74.
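The hazard can be reproduced in miniature: if the object handed to worker tasks is also being mutated as results arrive, each serialized copy captures a different, partially-filled `model_dict`. Collecting results locally and assigning to `self` once at the end keeps the serialized object stable. A generic sketch (not stemflow's code):

```python
import pickle

class Ensemble:
    def __init__(self):
        self.model_dict = {}

    def train_fold_buggy(self, fold):
        # BUG pattern: self.model_dict grows while training is still
        # running, so serializing self mid-run captures a moving target.
        self.model_dict[fold] = f"model_{fold}"
        return len(pickle.dumps(self))  # payload size changes per fold

    def train_all_fixed(self, folds):
        # FIX: accumulate results locally; assign to self only once,
        # after every fold is done, so self never changes mid-run.
        results = {fold: f"model_{fold}" for fold in folds}
        self.model_dict = results

buggy = Ensemble()
sizes = [buggy.train_fold_buggy(i) for i in range(3)]
print(sizes[0] < sizes[-1])  # the serialized object drifted mid-run

fixed = Ensemble()
fixed.train_all_fixed(range(3))
print(sorted(fixed.model_dict))  # → [0, 1, 2]
```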