lfeats provides a unified interface to extract hidden representations from various speech foundation models such as HuBERT and Whisper. While these extracted features are task-independent, the package is primarily designed for speech generation tasks including text-to-speech and voice conversion.
- Python 3.10+
- PyTorch 2.6.0+
The latest stable release can be installed via PyPI:
```sh
pip install lfeats
```

Alternatively, the development version can be installed directly from the GitHub repository:
```sh
pip install git+https://github.com/takenori-y/lfeats.git@master
```

lfeats supports the following models:

| Model Name | Model Variant | Layers | Dimension | Paper | Source | Model Hub |
|---|---|---|---|---|---|---|
| contentvec | hubert-100 | 12 | 768 | arXiv | GitHub | |
| | hubert-500 | 12 | 768 | | | |
| data2vec | base | 12 | 768 | arXiv | GitHub | |
| | large | 24 | 1024 | | | |
| data2vec2 | base | 8 | 768 | arXiv | GitHub | |
| | large | 16 | 1024 | | | |
| emotion2vec | base | 8 | 768 | arXiv | GitHub | 🤗 |
| emotion2vec+ | seed | 8 | 768 | | | 🤗 |
| | base | 8 | 768 | | | 🤗 |
| | large | 8 | 1024 | | | 🤗 |
| hubert | base | 12 | 768 | arXiv | GitHub | 🤗 |
| | large | 24 | 1024 | | | 🤗 |
| | xlarge | 48 | 1280 | | | 🤗 |
| r-spin | wavlm-32 | 12 | 768 | arXiv | GitHub | |
| | wavlm-64 | 12 | 768 | | | |
| | wavlm-128 | 12 | 768 | | | |
| | wavlm-256 | 12 | 768 | | | |
| | wavlm-512 | 12 | 768 | | | |
| | wavlm-1024 | 12 | 768 | | | |
| | wavlm-2048 | 12 | 768 | | | |
| spidr | base | 12 | 768 | arXiv | GitHub | |
| spin | hubert-128 | 12 | 768 | arXiv | GitHub | |
| | hubert-256 | 12 | 768 | | | |
| | hubert-512 | 12 | 768 | | | |
| | hubert-1024 | 12 | 768 | | | |
| | hubert-2048 | 12 | 768 | | | |
| | wavlm-128 | 12 | 768 | | | |
| | wavlm-256 | 12 | 768 | | | |
| | wavlm-512 | 12 | 768 | | | |
| | wavlm-1024 | 12 | 768 | | | |
| | wavlm-2048 | 12 | 768 | | | |
| sslzip | tiny | 0 | 16 | ISCA | GitHub | 🤗 |
| | base | 0 | 256 | | | 🤗 |
| unispeech-sat | base | 12 | 768 | arXiv | GitHub | 🤗 |
| | base+ | 12 | 768 | | | 🤗 |
| | large | 24 | 1024 | | | 🤗 |
| wav2vec2 | base | 12 | 768 | arXiv | GitHub | |
| | large | 24 | 1024 | | | |
| | xlsr | 24 | 1024 | arXiv | | |
| | xlsr-v2 | 24 | 1024 | arXiv | GitHub | |
| wavlm | base | 12 | 768 | arXiv | GitHub | 🤗 |
| | base+ | 12 | 768 | | | 🤗 |
| | large | 24 | 1024 | | | 🤗 |
| whisper | tiny | 4 | 384 | arXiv | GitHub | 🤗 |
| | base | 6 | 512 | | | 🤗 |
| | small | 12 | 768 | | | 🤗 |
| | medium | 24 | 1024 | | | 🤗 |
| | large | 32 | 1280 | | | 🤗 |
| | large-v2 | 32 | 1280 | | | 🤗 |
| | large-v3 | 32 | 1280 | | | 🤗 |
The following models produce utterance-level features such as speaker embeddings:

| Model Name | Model Variant | Layers | Dimension | Paper | Source | Model Hub |
|---|---|---|---|---|---|---|
| ecapa-tdnn | base | 0 | 192 | arXiv | GitHub | 🤗 |
| next-tdnn | light | 0 | 192 | arXiv | GitHub | |
| | base | 0 | 192 | | | |
| | base-v2 | 0 | 192 | | | |
| r-vector | base | 0 | 256 | arXiv | GitHub | 🤗 |
| x-vector | base | 0 | 512 | IEEE | GitHub | 🤗 |
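A model from the tables above is selected by passing its Model Name and Model Variant values to `lfeats.Extractor` (the full API is shown in the Quick Start below). A minimal sketch, with `wavlm`/`large` as an arbitrary illustrative choice:

```python
import lfeats

# Pick any row from the tables above by its "Model Name" and
# "Model Variant" columns. wavlm-large outputs 1024-dimensional
# features, per the table.
extractor = lfeats.Extractor(model_name="wavlm", model_variant="large")
```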
> [!IMPORTANT]
> Users must comply with the respective licenses of the models. Please refer to the original repositories for detailed licensing information.
The following resamplers are available:

| Resampler Type | Quality Preset | Source | License |
|---|---|---|---|
| lilfilter | base | GitHub | MIT |
| soxr | quick | GitHub | LGPL v2.1+ |
| | low | | |
| | medium | | |
| | high | | |
| | very-high | | |
| torchaudio | kaiser-fast | GitHub | BSD 2-Clause |
| | kaiser-best | | |
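The resampler is selected through the `resampler_type` and `resampler_preset` arguments of `lfeats.Extractor`, as shown in the Quick Start below. A minimal sketch pairing soxr with its `very-high` preset from the table:

```python
import lfeats

# Resample input audio with soxr at its "very-high" quality preset
# (types and presets are listed in the table above).
extractor = lfeats.Extractor(
    model_name="hubert",
    resampler_type="soxr",
    resampler_preset="very-high",
)
```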
lfeats simplifies the process of extracting hidden states from various speech foundation models.
You don't need to worry about differences between model types or input/output data types.
```python
import lfeats
import numpy as np

# Prepare an audio waveform. Do not apply zero-mean and unit-variance
# normalization. Either a NumPy array or a Torch tensor is accepted as
# input to the extractor.
sample_rate = 16000
waveform = np.random.uniform(-1, 1, sample_rate)

# Initialize the extractor.
extractor = lfeats.Extractor(
    model_name="hubert",
    model_variant="base",
    resampler_type="torchaudio",
    resampler_preset="kaiser-best",
    device="cpu",
)

# Note: The model weights are automatically loaded during the first call to
# extractor(), so calling extractor.load() explicitly is optional.
extractor.load()

# Extract features.
features = extractor(waveform, sample_rate)
print(f"Shape: {features.shape}")  # (1, 50, 768)

# You can access the features as a NumPy array.
print(type(features.array))  # <class 'numpy.ndarray'>

# You can also access the features as a Torch tensor.
print(type(features.tensor))  # <class 'torch.Tensor'>
```
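Since the extractor accepts a Torch tensor as well as a NumPy array, the same call works with a tensor waveform. A minimal sketch:

```python
import lfeats
import torch

sample_rate = 16000
# A Torch tensor is accepted in place of a NumPy array.
waveform = torch.rand(sample_rate) * 2 - 1  # uniform noise in [-1, 1)

extractor = lfeats.Extractor(model_name="hubert")
features = extractor(waveform, sample_rate)
print(f"Shape: {features.shape}")  # (1, 50, 768)
```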
lfeats allows you to extract features from a specific layer or layers. By default, the last layer is used.
```python
import lfeats
import numpy as np

sample_rate = 16000
waveform = np.random.uniform(-1, 1, sample_rate)

extractor = lfeats.Extractor(model_name="hubert")

# Get the second-to-last layer output.
features = extractor(waveform, sample_rate, layers=-2)
print(f"Shape: {features.shape}")  # (1, 50, 768)

# Get multiple layer outputs, concatenated along the feature dimension.
features = extractor(waveform, sample_rate, layers=(11, 12))
print(f"Shape: {features.shape}")  # (1, 50, 1536)

# Get all layer outputs as a concatenated vector.
features = extractor(waveform, sample_rate, layers="all")
print(f"Shape: {features.shape}")  # (1, 50, 9984)
```

To be computationally efficient and to prevent mismatches between training and inference, long audio files can be processed by splitting them into chunks.
```python
import lfeats
import numpy as np

sample_rate = 16000
waveform = np.random.uniform(-1, 1, 10 * sample_rate)

extractor = lfeats.Extractor(model_name="hubert")

# Process a 10-second waveform with 5-second chunks and a 1-second overlap.
features = extractor(waveform, sample_rate, chunk_length_sec=5, overlap_length_sec=1)
print(f"Shape: {features.shape}")  # (1, 500, 768)
```

The frame rate of speech foundation models is typically 20 ms, which often doesn't match the 5 ms frame rate required by speech generation tasks. lfeats bridges this gap by sliding the input waveform and interleaving the resulting features, providing a high-resolution output.
```python
import lfeats
import numpy as np

sample_rate = 16000
waveform = np.random.uniform(-1, 1, sample_rate)

extractor = lfeats.Extractor(model_name="hubert")

# Extract features at a 5 ms frame rate (4x the model's native 20 ms rate).
features = extractor(waveform, sample_rate, upsample_factor=4)
print(f"Shape: {features.shape}")  # (1, 200, 768)
```

lfeats can extract utterance-level features, e.g., speaker embeddings, as well as frame-level features.
```python
import lfeats
import numpy as np

sample_rate = 16000
waveform = np.random.uniform(-1, 1, sample_rate)

extractor = lfeats.Extractor(model_name="ecapa-tdnn")

# The default aggregation method for utterance-level features is averaging.
features = extractor(waveform, sample_rate, overlap_length_sec=0, reduction="mean")
print(f"Shape: {features.shape}")  # (1, 1, 192)
```

Once installed via pip, you can use the lfeats command directly from your terminal.
```sh
# Basic usage: extract features from a wav file.
$ lfeats input.wav --output_format npz

# Process all audio files in a directory.
$ lfeats input/dir --output_dir feats

# Process files listed in a file.
$ lfeats input.scp --output_dir feats

# Specify the model and layer.
$ lfeats input.wav --model_name hubert --model_variant base --layer 12
```

> [!TIP]
> For more details on all available flags and default values, simply run:
>
> ```sh
> $ lfeats --help
> ```

This project is released under the MIT License.
lfeats incorporates portions of the following repositories:
| Repository | License |
|---|---|
| fairseq | MIT |
| NeXt_TDNN_ASV | Apache-2.0 |
| R-Spin | MIT |
| S3PRL | Apache-2.0 |
| SpeechBrain | Apache-2.0 |
| Spin | MIT |
| timm | Apache-2.0 |