Commit 1715f15

Authored by Marvin84, Judyxujj, Jingjing Xu, mmz33, and Simon Berger
Dummy 2 (#227)
* Add swb PyTorch ctc setup (#219)
  * add initial setup
  * rm binary files (×2)
  * Co-authored-by: Jingjing Xu <[email protected]>
* add conformer enc with more weight dropout
* fix
* add more weight noise opts
* black formatting
* Jing tedlium independent softmax (#220)
  * tedlium ctc pytorch
  * rm empty files
  * Co-authored-by: Jingjing Xu <[email protected]>
* use ff regs for mhsa out
* add more regularized trafo dec
* update
* add regs to rnn decoder
* more
* add more regs to rnn dec
* black formatting
* Update users/berger
* update
* add readme for RF
* more
* cleanup, generalize, different spm vocab sizes
* more (×2)
* small fix
* update ls att+ctc+lm
* add args
* fix pretraining
* Glow-TTS-ASR: Update with fixed invertibility tests
* Glow-TTS-ASR update
* fix
* Glow-TTS-ASR: Cleanup and comments/documentation
* small fixes
* spm20k
* more ctc and rnn-t librispeech experiments
* add greedy decoder
* black
* add ebranchformer
* more (×2)
* better layer names for ebranchformer
* better (×2)
* decouple mhsa residual
* cleanup
* refactor args, add ebranch config
* more
* Update users/berger (×2)
* better
* config enable write cache manager
* standalone 2024 setup add LSTM lm pipeline
* update
* add horovod to libri pipeline
* update configs
* fix
* more
* update
* update zoneout fix ted2
* Update users/berger (×2)
* more
* update (×2)
* Update users/berger
* more (×2)
* update (×2)
* ConformerV2 setup
* updates
* cleanup
* updates and fix mel norm + zoneout
* update conf v2
* more
* update
* Update users/berger
* fixes and update
* add CTC gauss weights
* convert ls960 LSTM LM to rf
* fix (×3)
* more (×2)
* use_eos_postfix
* fix CTC with EOS recog scoring
* ctc eos fix more
* update
* Update Glow-TTS-ASR
* updates quant
* added factored bw
* deleted wrong stashed
* update trainings and initial rnnt decoder rf
* feature batch norm
* feature normalization
* recog fix API doc
* collect stats, initial code
* librispeech feature stats
* feature global norm
* small fixes
* small fix (×3)
* fix feat norm search
* update (×3)
* add more weight drop to rnn decoder
* add chunked rnn decoder
* update
* fix (×2)
* more (×2)
* fix name
* more (×2)
* small fix
* more (×2)
* cleanup
* fix
* comment
* more
* cleanup
* more
* add canary 1b recog sis prepare config
* add config (#223)
  * Co-authored-by: Jingjing Xu <[email protected]>
* more
* add nemo model download job
* add nemo search job
* add custom hash
* fix
* add nemo search
* first version of nemo search
* better
* fix bug
* better
* add missing search output path
* add compute_wer func
* add wer as output var
* run search for all test sets with canary 1b model
* add configs (#224)
  * Co-authored-by: Jingjing Xu <[email protected]>
* update (×2)
* register wer as out
* update
* add libri test other test set
* fix args (×2)
* update
* add modified normalized
* Create README.md
* Update README.md
* Update users/berger (×2)
* more (×6)
* prepare for some more modeling code
* move SequentialLayerDrop
* better
* move mixup
* rnnt dec rf WIP
* Update users/berger
* update users/raissi monofactored
* update (×2)
* ls960 pretrain: use phoneme info for mask boundaries
* BatchRenorm initial implementation (untested)
* test_piecewise_linear
* test_piecewise_linear use dyn_lr_piecewise_linear
* dyn_lr_piecewise_linear use RETURNN PiecewiseLinear
* DeleteLemmataFromLexiconJob (#225)
* ls960 pretrain: phoneme mask and other targets
* ls960 pretrain: update num epochs
* better
* first version of beam search
* fix
* fix enc shape
* use expand instead of repeat for efficiency
* better
* add hyp postprocessing
* better
* add beam search
* remove print
* more
* BatchRenorm with build_from_dict
* more
* small fix
* more
* small fix
* reorder code
* comment
* prior
* cleanup
* cache enc beam expansion
* fix bug
* update
* more (×4)
* LS spm vocab alias
* make private
* move
* lazy, aliases
* update and test rf vs torch mhsa
* fix warning
* fix bug
* vocab outputs
* more
* more, AED featBN, sampling
* extract SPM vocab
* add rtfs
* add cache suffix
* update
* fix
* add debug out
* add batch size logging
* import i6_models conformer in rf, batch 1
* SamplingBytePairEncoding for SentencePiece
* add gradient clipping to example baseline
* 2-precision WER and quantization helper
* HDF alignment labels example data pipeline
* ls960 pretrain: fix python launcher for itc/i6
* latest users/raissi

---------

Co-authored-by: Judyxujj <[email protected]>
Co-authored-by: Jingjing Xu <[email protected]>
Co-authored-by: Mohammad Zeineldeen <[email protected]>
Co-authored-by: Simon Berger <[email protected]>
Co-authored-by: schmitt <[email protected]>
Co-authored-by: Albert Zeyer <[email protected]>
Co-authored-by: luca.gaudino <[email protected]>
Co-authored-by: Lukas Rilling <[email protected]>
Co-authored-by: Nick Rossenbach <[email protected]>
Co-authored-by: Benedikt Hilmes <[email protected]>
Co-authored-by: Mohammad Zeineldeen <[email protected]>
Co-authored-by: Peter Vieting <[email protected]>
Co-authored-by: vieting <[email protected]>
1 parent b2d2baf · commit 1715f15

File tree

0 files changed (+0, −0 lines)


    0 commit comments
