Contains implementations of fundamental algorithms used in Digital Signal Processing
Solutions to selected Kaggle competitions. Results based on these notebooks were submitted to the Kaggle web service. The datasets are not included in the folder due to licence restrictions; to test the solutions you first have to accept the licence terms of each Kaggle competition, after which you will be able to download the datasets.
Contains implementations of NLP fundamentals
Practical use of basic machine learning tools with the PyTorch library.
- example of binary classification: cats vs dogs
- to start TensorBoard: `tensorboard --logdir=logs`
- reads data from a single local source (no split into training and validation datasets) using a Python list and torch.utils.data.Dataset
- uses torchvision to transform the data
- uses a custom function in the torchvision.transforms pipeline
- estimates accuracy from sigmoid outputs (nn.BCEWithLogitsLoss is not used)
- added an adaptive learning rate
- added early stopping and a dropout layer
- added an argument parser
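The data-loading and accuracy bullets above can be sketched roughly as follows. This is a minimal illustration, not the repo's code: the class and function names (`ListDataset`, `binary_accuracy`) are my own, assuming a list of `(tensor, label)` pairs and raw logits from the model.

```python
# Sketch (assumed names): a list-backed torch.utils.data.Dataset with an
# optional transform hook, and accuracy estimated from sigmoid outputs
# (without nn.BCEWithLogitsLoss).
import torch
from torch.utils.data import Dataset

class ListDataset(Dataset):
    """Wraps a plain Python list of (tensor, label) pairs."""
    def __init__(self, samples, transform=None):
        self.samples = samples          # e.g. [(image_tensor, 0), ...]
        self.transform = transform      # torchvision pipeline or custom callable
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        x, y = self.samples[idx]
        if self.transform is not None:
            x = self.transform(x)       # custom functions also fit here
        return x, torch.tensor(y, dtype=torch.float32)

def binary_accuracy(logits, targets):
    """Accuracy from raw logits: apply sigmoid, threshold at 0.5."""
    preds = (torch.sigmoid(logits) > 0.5).float()
    return (preds == targets).float().mean().item()
```

Keeping the sigmoid inside the accuracy helper lets the model emit raw logits, which is also what a numerically stable loss would expect.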
- contains several simple solutions to common time-series prediction problems using the autoregressive method
- framing function
- Vanilla RNN
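A rough sketch of the two bullets above, under my own assumptions about the shapes involved: a framing function that slices a 1-D series into (window, next-value) pairs for autoregressive training, and a vanilla RNN with a linear head for one-step prediction. Names (`frame_series`, `VanillaRNN`) are illustrative, not from the repo.

```python
import numpy as np
import torch
import torch.nn as nn

def frame_series(series, window, horizon=1):
    """Slide a window over a 1-D series to build (X, y) pairs:
    X = the past `window` values, y = the value `horizon` steps
    after the window (the autoregressive target)."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

class VanillaRNN(nn.Module):
    """One-layer vanilla RNN followed by a linear head."""
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                  # x: (batch, time, 1)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])       # predict from the last time step
```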
Various scripts for machine learning
function for the LibriSpeech audio (PCM) dataset that creates a dict mapping speaker labels (keys) to paths of FLAC files (values), then creates a callable class; when called, an instance estimates MFCCs using the librosa library and performs some minor preprocessing steps such as converting to mono, resampling to 16 kHz and trimming, then starts process-based parallelism using the multiprocessing library
function that implements the PCA algorithm from its theoretical description
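The textbook construction behind such a function could be sketched like this (a minimal NumPy version under the usual definition, not the repo's code): center the data, eigendecompose the covariance matrix, and project onto the top eigenvectors.

```python
import numpy as np

def pca(X, n_components):
    """PCA from its theoretical description: center the data, take the
    eigendecomposition of the covariance matrix, project onto the
    eigenvectors with the largest eigenvalues."""
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = np.cov(Xc, rowvar=False)              # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric -> eigh is stable
    order = np.argsort(eigvals)[::-1]           # sort eigenvalues descending
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, components          # projected data + loadings
```

`eigh` is preferred over `eig` here because the covariance matrix is symmetric, which guarantees real eigenvalues and orthogonal eigenvectors.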
Jupyter notebook showing how to use ensemble methods such as Bagging (BaggingClassifier), Boosting (GradientBoostingClassifier, AdaBoostClassifier, XGBoost), Stacking (all of the above with Logistic Regression or SVM as the meta-model) and Blending (all of the above with Logistic Regression as the meta-model). The example uses data from Kaggle.
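A compact sketch of the stacking flavour described above, using scikit-learn on synthetic data (not the Kaggle dataset, and with XGBoost left out to avoid the extra dependency): base models' out-of-fold predictions feed a Logistic Regression meta-model. Blending differs only in fitting the meta-model on a held-out split instead of cross-validated predictions.

```python
# Stacking sketch: Bagging + GradientBoosting + AdaBoost as base models,
# Logistic Regression as the meta-model. Synthetic data, illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [
    ("bag", BaggingClassifier(random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("ada", AdaBoostClassifier(random_state=0)),
]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
score = stack.score(X_te, y_te)
```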