Releases: pythonlessons/mltu
1.1.4
[1.1.4] - 2022-09-29
Changed
- Improved `mltu.torch.dataProvider.DataProvider` to handle `multiprocessing`; when it doesn't work, it switches to `multithreading`
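The fallback described above can be sketched generically (a simplified illustration, not mltu's actual code; `run_batch` and `square` are hypothetical names): try a process pool first, and fall back to a thread pool when multiprocessing fails, for example because the worker function cannot be pickled.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(x):
    return x * x

def run_batch(func, items, workers=2):
    """Try multiprocessing first; fall back to multithreading if the
    process pool fails (e.g. the function cannot be pickled)."""
    try:
        with ProcessPoolExecutor(max_workers=workers) as executor:
            return list(executor.map(func, items))
    except Exception:
        with ThreadPoolExecutor(max_workers=workers) as executor:
            return list(executor.map(func, items))
```

Threads share the interpreter, so the fallback works even for unpicklable callables, at the cost of GIL-bound parallelism for CPU-heavy preprocessing.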
1.1.3
[1.1.3] - 2022-09-29
Changed
- Removed the `librosa` library dependency from requirements; it is now optional and required only by the modules that use librosa
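Making a heavy dependency optional typically uses the guarded-import pattern below (a generic sketch; `load_audio` is a hypothetical helper, not part of mltu's API):

```python
# Guarded import: the package installs and imports fine without librosa;
# only the audio-loading path requires it.
try:
    import librosa
except ImportError:
    librosa = None

def load_audio(path, sample_rate=16000):
    """Hypothetical helper that fails with a clear message when the
    optional librosa dependency is absent."""
    if librosa is None:
        raise ImportError(
            "librosa is required for audio loading: pip install librosa")
    return librosa.load(path, sr=sample_rate)
```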
Added
- Created `Tutorials.05_sound_to_text.train_no_limit.py`, which demonstrates how to train an audio recognition model with `mltu` without an audio length limit
1.1.1
[1.1.1] - 2022-09-26
Changed
- Included `self._executor` as a generator in the `mltu.dataProvider.DataProvider` object, enabling batch preprocessing to be modified without changing the original code
- Introduced changes in `mltu.torch.dataProvider.py` to handle data in multiprocessing and multithreading modes, for faster preprocessing while training torch models
- Modified the `mltu.transformers.AudioPadding` object to work with batches of raw audio data
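Padding a batch of variable-length raw audio, as `AudioPadding` now supports, can be sketched with NumPy (a minimal illustration under assumed shapes, not mltu's implementation; `pad_audio_batch` is a hypothetical name):

```python
import numpy as np

def pad_audio_batch(batch, padding_value=0.0):
    """Pad a batch of variable-length 1-D audio arrays at the end so
    they stack into a single (batch, max_length) array."""
    max_length = max(len(audio) for audio in batch)
    padded = np.full((len(batch), max_length), padding_value, dtype=np.float32)
    for i, audio in enumerate(batch):
        padded[i, : len(audio)] = audio
    return padded
```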
Added
- Created tutorial `10_wav2vec2_torch` (Audio to Text model), which shows how to train a wav2vec2 model with `mltu`
1.1.0
[1.1.0] - 2022-08-28
Changed
- Changed the `mltu.transformers.SpectrogramPadding` object to pad the spectrogram end with zeros instead of the start
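The new end-padding behaviour can be illustrated with `numpy.pad` (a sketch, not mltu's code; `pad_spectrogram` is a hypothetical name assuming a `(time, features)` layout):

```python
import numpy as np

def pad_spectrogram(spectrogram, max_length, padding_value=0.0):
    """Pad a (time, features) spectrogram with zeros at the END of the
    time axis; truncate if it is already longer than max_length."""
    pad_amount = max_length - spectrogram.shape[0]
    if pad_amount <= 0:
        return spectrogram[:max_length]
    return np.pad(spectrogram, ((0, pad_amount), (0, 0)),
                  mode="constant", constant_values=padding_value)
```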
Added
- Created the `Tutorials/09_translation_transformer` tutorial, which shows how to train a translation Transformer model
- Created the `mltu.tensorflow.tokenizers` module, which contains `CustomTokenizer` for text data
- Created the `mltu.tensorflow.transformer.attention` module, which contains the `BaseAttention`, `CrossAttention`, `GlobalSelfAttention` and `CausalSelfAttention` layers
- Created the `mltu.tensorflow.transformer.layers` module, which contains the `positional_encoding` function, the `PositionalEmbedding`, `FeedForward`, `EncoderLayer`, `DecoderLayer`, `Encoder` and `Decoder` layers, and the `Transformer` model
- Created the `mltu.tensorflow.transformer.callbacks` module, which contains the `EncDecSplitCallback` callback, to split a Transformer model into separate encoder and decoder models
- Created the `mltu.tensorflow.transformer.utils` module, which contains the `MaskedLoss` loss and `MaskedAccuracy` metric, used for training Transformer models
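The idea behind `MaskedLoss` and `MaskedAccuracy` is to ignore padding tokens when scoring a Transformer's output. A NumPy sketch of masked accuracy (mltu's versions are TensorFlow implementations; `masked_accuracy` and `pad_id` here are illustrative):

```python
import numpy as np

def masked_accuracy(y_true, y_pred_ids, pad_id=0):
    """Accuracy over non-padding tokens only: positions equal to pad_id
    are excluded from both the match count and the total."""
    y_true = np.asarray(y_true)
    y_pred_ids = np.asarray(y_pred_ids)
    mask = y_true != pad_id
    matches = (y_true == y_pred_ids) & mask
    return matches.sum() / mask.sum()
```

Without the mask, a model could score highly just by predicting padding correctly, since padded sequences are mostly `pad_id`.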
1.0.15
[1.0.15] - 2022-07-15
Changed
- Fixed a bug in `mltu.dataProvider.DataProvider` so it works with `batch_postprocessors`
1.0.14
[1.0.14] - 2022-07-13
Changed
- Included an `augment_annotation` bool option in all `mltu.augmentors`, to choose whether or not to augment the annotation
- Changed `mltu.augmentors.RandomRotate` to expose `rotate_image` as a `@staticmethod`, so it can be used without creating an object
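Exposing the rotation logic as a `@staticmethod` means it can be called without constructing the augmentor. A minimal sketch of that design choice (using a 90-degree `np.rot90` stand-in rather than mltu's arbitrary-angle rotation):

```python
import numpy as np

class RandomRotate:
    """Sketch of the design: rotation logic lives in a @staticmethod,
    so callers can use it without instantiating the augmentor."""

    @staticmethod
    def rotate_image(image, k=1):
        # Simplified stand-in rotating by multiples of 90 degrees.
        return np.rot90(image, k)
```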
Added
- Added a `batch_postprocessor` option to `mltu.dataProvider.DataProvider`, to postprocess a batch after augmentation
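A batch postprocessor is a callable applied to the whole batch after augmentation. The hook can be sketched like this (a simplified illustration, not mltu's `DataProvider`; `DataProviderSketch` and `normalize_batch` are hypothetical names):

```python
import numpy as np

def normalize_batch(images, labels):
    """Example batch postprocessor: scale uint8 images to [0, 1] floats."""
    return np.asarray(images, dtype=np.float32) / 255.0, labels

class DataProviderSketch:
    """Minimal sketch of a batch_postprocessor hook that runs on each
    batch after augmentation."""

    def __init__(self, batch_postprocessors=None):
        self.batch_postprocessors = batch_postprocessors or []

    def process_batch(self, images, labels):
        for postprocessor in self.batch_postprocessors:
            images, labels = postprocessor(images, labels)
        return images, labels
```

Per-sample transforms see one item at a time; a batch postprocessor is the right place for operations that need the whole batch, such as stacking or batch-level normalization.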
1.0.12
1.0.11
Release 1.0.11 with some bug fixes
1.0.10
New release with minor changes
1.0.9
Spelling-mistake fixes, single quotes changed to double quotes; introduced `CVImage` and `PillowImage` objects