
Commit 4487b31

Authored by Sean Naren
Merge pull request #157 from rbracco/README
WIP - Rough Draft of README
2 parents 958975c + 19aa773 commit 4487b31

2 files changed: +107, -5 lines changed

README.md

Lines changed: 75 additions & 4 deletions
@@ -2,14 +2,85 @@
ctcdecode is an implementation of CTC (Connectionist Temporal Classification) beam search decoding for PyTorch.
C++ code borrowed liberally from PaddlePaddle's [DeepSpeech](https://github.com/PaddlePaddle/DeepSpeech).
It includes swappable scorer support, enabling standard beam search and KenLM-based decoding. If you are new to the concepts of CTC and beam search, please visit the Resources section, where we link a few tutorials explaining why they are needed.

## Installation
The library is largely self-contained and requires only PyTorch.
Building the C++ library requires gcc or clang.
KenLM language modeling support is also optionally included, and enabled by default.

The installation below also works on Google Colab.

```bash
# get the code
git clone --recursive https://github.com/parlance/ctcdecode.git
cd ctcdecode && pip install .
```

## How to Use

```python
from ctcdecode import CTCBeamDecoder

decoder = CTCBeamDecoder(
    labels,
    model_path=None,
    alpha=0,
    beta=0,
    cutoff_top_n=40,
    cutoff_prob=1.0,
    beam_width=100,
    num_processes=4,
    blank_id=0,
    log_probs_input=False
)
beam_results, beam_scores, timesteps, out_lens = decoder.decode(output)
```
### Inputs to `CTCBeamDecoder`

- `labels` are the tokens you used to train your model. They should be in the same order as your model's outputs. For example, if your tokens are the English letters and you used 0 as your blank token, you would pass in `list("_abcdefghijklmnopqrstuvwxyz")` as your `labels` argument.
- `model_path` is the path to your external KenLM language model (LM). Default is None. (See the sketch after this list for a hypothetical LM setup.)
- `alpha` Weight associated with the LM's probabilities. A weight of 0 means the LM has no effect.
- `beta` Weight associated with the number of words within our beam.
- `cutoff_top_n` Cutoff number in pruning. Only the top `cutoff_top_n` characters with the highest probability in the vocab will be used in beam search.
- `cutoff_prob` Cutoff probability in pruning. 1.0 means no pruning.
- `beam_width` This controls how broad the beam search is. Higher values are more likely to find top beams, but they also make your beam search exponentially slower. Furthermore, the longer your outputs, the more time large beams will take. This is an important parameter that represents a tradeoff you need to make based on your dataset and needs.
- `num_processes` Parallelize the batch using `num_processes` workers. You probably want to pass the number of CPUs your computer has. You can find this in Python with `import multiprocessing` then `n_cpus = multiprocessing.cpu_count()`. Default 4.
- `blank_id` This should be the index of the blank token (probably 0) used when training your model, so that ctcdecode can remove it during decoding.
- `log_probs_input` If your outputs have passed through a softmax and represent probabilities, this should be False; if they passed through a LogSoftmax and represent log probabilities, pass True. If you aren't sure, run `print(output[0][0].sum())`: if it's a negative number you probably have log probabilities and should pass True; if it sums to ~1.0 you should pass False. Default False.
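For LM-based decoding, a decoder might be constructed like the minimal sketch below. The `lm.binary` path and the `alpha`/`beta` values are illustrative placeholders, not recommended settings; tune them on a held-out set for your own model and language model.

```python
from ctcdecode import CTCBeamDecoder

# Illustrative sketch: "lm.binary" is a placeholder path to a KenLM binary,
# and the alpha/beta values are made up; tune them for your own setup.
labels = list("_abcdefghijklmnopqrstuvwxyz ")
decoder = CTCBeamDecoder(
    labels,
    model_path="lm.binary",  # hypothetical KenLM binary
    alpha=0.5,               # illustrative LM weight
    beta=1.0,                # illustrative word-count weight
    beam_width=100,
    num_processes=4,
    blank_id=0,
    log_probs_input=False,
)
```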
### Inputs to the `decode` method
- `output` should be the output activations from your model. If your output has passed through a SoftMax layer, you shouldn't need to alter it (except maybe to transpose), but if your `output` represents raw logits or log probabilities, you either need to pass it through an additional `torch.nn.functional.softmax` or pass `log_probs_input=True` to the decoder (for log probabilities). Your output should be BATCHSIZE x N_TIMESTEPS x N_LABELS, so you may need to transpose it before passing it to the decoder. Note that if you pass things in the wrong order, the beam search will probably still run; you'll just get back nonsense results. A minimal sketch follows.
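For instance, preparing a model's raw outputs for `decode` might look like the sketch below, reusing the `decoder` constructed above. Here `model` and `spectrograms` are hypothetical stand-ins for your own network and inputs.

```python
import torch.nn.functional as F

# Hypothetical stand-ins: `model` is your network, `spectrograms` your batch,
# and the raw output is assumed to be N_TIMESTEPS x BATCHSIZE x N_LABELS.
logits = model(spectrograms)
probs = F.softmax(logits, dim=-1)   # logits -> probabilities
probs = probs.transpose(0, 1)       # -> BATCHSIZE x N_TIMESTEPS x N_LABELS
beam_results, beam_scores, timesteps, out_lens = decoder.decode(probs)
```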
### Outputs from the `decode` method

Four things get returned from `decode`:

1. `beam_results` - Shape: BATCHSIZE x N_BEAMS x N_TIMESTEPS. A batch containing the series of characters (these are ints; you still need to decode them back to your text) representing the results of a given beam search. Note that the beams are almost always shorter than the total number of timesteps, and the additional data is nonsensical, so to see the top beam (as int labels) from the first item in the batch, you need to run `beam_results[0][0][:out_lens[0][0]]`.
1. `beam_scores` - Shape: BATCHSIZE x N_BEAMS x N_TIMESTEPS. A batch with the likelihood of each beam (we think this is p=1/e\**beam_score). If this is true, you can get the model's confidence that a beam is correct with `p=1/np.exp(beam_score)` (a sketch follows this list). **more info needed**
1. `timesteps` - Shape: BATCHSIZE x N_BEAMS. The timestep at which the nth output character has peak probability. Can be used as an alignment between the audio and the transcript.
1. `out_lens` - Shape: BATCHSIZE x N_BEAMS. `out_lens[i][j]` is the length of the jth beam result of item i in your batch.
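If the hedged interpretation of `beam_scores` above is right, converting a score to a confidence is a one-liner; treat this strictly as a sketch of that unconfirmed formula.

```python
import torch

# Unconfirmed, per the note above: if beam_scores holds negative log
# likelihoods, then confidence p = 1/e**beam_score = exp(-beam_score).
top_beam_confidence = torch.exp(-beam_scores[0][0])
```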
### More examples

Get the top beam for the first item in your batch:
`beam_results[0][0][:out_lens[0][0]]`

Get the top 50 beams for the first item in your batch:
```python
for i in range(50):
    print(beam_results[0][i][:out_lens[0][i]])
```

Note: these will be lists of ints that need decoding. You likely already have a function to decode from ints to text, but if not you can do something like
`"".join(labels[n] for n in beam_results[0][0][:out_lens[0][0]])`, using the labels you passed in to `CTCBeamDecoder`. A fuller helper is sketched below.
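Putting the above together, a small hypothetical helper (the name and signature are our own, not part of the library) that turns a chosen beam of every batch item into text might look like:

```python
def beams_to_text(labels, beam_results, out_lens, beam=0):
    """Convert the chosen beam of each batch item back into a string."""
    texts = []
    for i in range(beam_results.size(0)):
        length = int(out_lens[i][beam])          # valid length of this beam
        tokens = beam_results[i][beam][:length]  # drop the nonsensical tail
        texts.append("".join(labels[int(n)] for n in tokens))
    return texts

transcripts = beams_to_text(labels, beam_results, out_lens)
```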
## Resources

- [Distill Guide to CTC](https://distill.pub/2017/ctc/)
- [Beam Search Video by Andrew Ng](https://www.youtube.com/watch?v=RLWuzLLSIgw)
- [An Intuitive Explanation of Beam Search](https://towardsdatascience.com/an-intuitive-explanation-of-beam-search-9b1d744e7a0f)

ctcdecode/__init__.py

Lines changed: 32 additions & 1 deletion
@@ -3,6 +3,22 @@
class CTCBeamDecoder(object):
    """
    PyTorch wrapper for the PaddlePaddle DeepSpeech beam search decoder.

    Args:
        labels (list): The tokens/vocab used to train your model. They should be in the same order as they are in your model's outputs.
        model_path (basestring): The path to your external KenLM language model (LM).
        alpha (float): Weight associated with the LM's probabilities. A weight of 0 means the LM has no effect.
        beta (float): Weight associated with the number of words within our beam.
        cutoff_top_n (int): Cutoff number in pruning. Only the top cutoff_top_n characters with the highest probability in the vocab will be used in beam search.
        cutoff_prob (float): Cutoff probability in pruning. 1.0 means no pruning.
        beam_width (int): This controls how broad the beam search is. Higher values are more likely to find top beams, but they also make your beam search exponentially slower.
        num_processes (int): Parallelize the batch using num_processes workers.
        blank_id (int): Index of the blank token (probably 0) used when training your model, so that ctcdecode can remove it during decoding.
        log_probs_input (bool): Pass False if your model has passed through a softmax and outputs probabilities that sum to 1. Pass True otherwise.
    """

    def __init__(self, labels, model_path=None, alpha=0, beta=0, cutoff_top_n=40, cutoff_prob=1.0, beam_width=100,
                 num_processes=4, blank_id=0, log_probs_input=False):
        self.cutoff_top_n = cutoff_top_n
@@ -19,7 +35,22 @@ def __init__(self, labels, model_path=None, alpha=0, beta=0, cutoff_top_n=40, cu
        self._cutoff_prob = cutoff_prob

    def decode(self, probs, seq_lens=None):
        """
        Conduct the beam search on model outputs and return results.

        Args:
            probs (Tensor): A rank 3 tensor representing model outputs. Shape is batch x num_timesteps x num_labels.
            seq_lens (Tensor): A rank 1 tensor representing the sequence length of the items in the batch. Optional;
                if not provided, the size of axis 1 (num_timesteps) of `probs` is used for all items.

        Returns:
            tuple: (beam_results, beam_scores, timesteps, out_lens)

            beam_results (Tensor): A rank 3 tensor representing the top n beams of a batch of items. Shape: batchsize x num_beams x num_timesteps. Results are still encoded as ints at this stage.
            beam_scores (Tensor): A rank 3 tensor representing the likelihood of each beam in beam_results. Shape: batchsize x num_beams x num_timesteps.
            timesteps (Tensor): A rank 2 tensor representing the timesteps at which the nth output character has peak probability. To be used as alignment between audio and transcript. Shape: batchsize x num_beams.
            out_lens (Tensor): A rank 2 tensor representing the length of each beam in beam_results. Shape: batchsize x num_beams.
        """
        probs = probs.cpu().float()
        batch_size, max_seq_len = probs.size(0), probs.size(1)
        if seq_lens is None:
