
Commit df538c4

🚀 update readmes

1 parent bda1fdf commit df538c4

File tree

- README.md
- vocabularies/README.md

2 files changed: +4, -4 lines changed

README.md

Lines changed: 2 additions & 3 deletions
@@ -21,13 +21,11 @@ TensorFlowASR implements some automatic speech recognition architectures such as
 
 ## What's New?
 
+- (02/16/2021) Supported for TPU training
 - (12/27/2020) Supported _naive_ token level timestamp, see [demo](./examples/demonstration/conformer.py) with flag `--timestamp`
 - (12/17/2020) Supported ContextNet [http://arxiv.org/abs/2005.03191](http://arxiv.org/abs/2005.03191)
 - (12/12/2020) Add support for using masking
 - (11/14/2020) Supported Gradient Accumulation for Training in Larger Batch Size
-- (11/3/2020) Reduce differences between `librosa.stft` and `tf.signal.stft`
-- (10/31/2020) Update DeepSpeech2 and Supported Jasper [https://arxiv.org/abs/1904.03288](https://arxiv.org/abs/1904.03288)
-- (10/18/2020) Supported Streaming Transducer [https://arxiv.org/abs/1811.06621](https://arxiv.org/abs/1811.06621)
 
 ## Table of Contents
 
@@ -41,6 +39,7 @@ TensorFlowASR implements some automatic speech recognition architectures such as
 - [Installation](#installation)
 - [Installing via PyPi](#installing-via-pypi)
 - [Installing from source](#installing-from-source)
+- [Running in a container](#running-in-a-container)
 - [Setup training and testing](#setup-training-and-testing)
 - [TFLite Convertion](#tflite-convertion)
 - [Features Extraction](#features-extraction)
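
One of the removed changelog entries above mentions reducing the differences between `librosa.stft` and `tf.signal.stft`. As a rough illustration of what such a comparison involves, here is a minimal sketch; the sample rate, FFT size, hop length, and window are illustrative assumptions, not values taken from this commit or the repo's feature configuration:

```python
import numpy as np
import librosa
import tensorflow as tf

# Illustrative parameters -- the repo's actual feature settings may differ.
sample_rate = 16000
n_fft = 512
hop_length = 160

signal = np.random.randn(sample_rate).astype(np.float32)  # 1 second of noise

# librosa: center=False so the framing matches tf.signal.stft (no padding).
librosa_mag = np.abs(
    librosa.stft(signal, n_fft=n_fft, hop_length=hop_length,
                 win_length=n_fft, window="hann", center=False)
)  # shape: (n_fft // 2 + 1, num_frames)

# tf.signal.stft frames along the last axis and returns (num_frames, bins).
tf_mag = tf.abs(
    tf.signal.stft(signal, frame_length=n_fft, frame_step=hop_length,
                   fft_length=n_fft, window_fn=tf.signal.hann_window)
).numpy()

# Compare on a common layout; small residual differences come from
# window generation and numeric details in the two implementations.
print("max abs difference:", np.max(np.abs(librosa_mag.T - tf_mag)))
```

Presumably the changelog entry refers to shrinking exactly this kind of residual difference between the two front ends.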

vocabularies/README.md

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
 # Predefined Vocabularies
 
 - `language.characters` files contain all of that language's characters
-- `corpus_maxlength_nwords.subwords` files contain subwords generated from corpus transcripts, with maximum length of a subword is `maxlength` and number of subwords is `nwords`.
+- `corpus_maxlength_nwords.subwords` files contain subwords generated from corpus transcripts, with maximum length of a subword is `maxlength` and number of subwords is `nwords`.
+- `corpus_maxlength_nwords.max_lengths.json` files contain max lengths calculated from corpus duration and transcripts, for using static training
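
The `corpus_maxlength_nwords` naming convention described in the added lines can be unpacked mechanically. A small sketch, using a hypothetical filename (the actual files in this directory may use different corpus names and values):

```python
import re

# Hypothetical filename following the corpus_maxlength_nwords.subwords pattern.
filename = "librispeech_4_4096.subwords"

match = re.fullmatch(r"(?P<corpus>.+)_(?P<maxlength>\d+)_(?P<nwords>\d+)\.subwords", filename)
if match:
    corpus = match.group("corpus")                       # "librispeech"
    max_subword_length = int(match.group("maxlength"))   # 4
    num_subwords = int(match.group("nwords"))            # 4096
    print(corpus, max_subword_length, num_subwords)
```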
