
Commit 551a89a

Standardize CLAP model card format (#39738)
* Standardize CLAP model card format
* Apply review feedback
* Remove Resources section
1 parent da70b13 · commit 551a89a

File tree: 1 file changed (+36 −11 lines)

docs/source/en/model_doc/clap.md

Lines changed: 36 additions & 11 deletions
@@ -14,25 +14,50 @@ rendered properly in your Markdown viewer.
 
 -->
 
+<div style="float: right;">
+<div class="flex flex-wrap space-x-1">
+<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+</div>
+</div>
+
 # CLAP
 
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
-</div>
+[CLAP (Contrastive Language-Audio Pretraining)](https://huggingface.co/papers/2211.06687) is a multimodal model that combines audio data with natural language descriptions through contrastive learning.
+
+It incorporates feature fusion and keyword-to-caption augmentation to process variable-length audio inputs and to improve performance. CLAP doesn't require task-specific training data and can learn meaningful audio representations through natural language.
+
+You can find all the original CLAP checkpoints under the [CLAP](https://huggingface.co/collections/laion/clap-contrastive-language-audio-pretraining-65415c0b18373b607262a490) collection.
+
+> [!TIP]
+> This model was contributed by [ybelkada](https://huggingface.co/ybelkada) and [ArthurZ](https://huggingface.co/ArthurZ).
+>
+> Click on the CLAP models in the right sidebar for more examples of how to apply CLAP to different audio retrieval and classification tasks.
+
+The example below demonstrates how to extract text embeddings with the [`AutoModel`] class.
+
+<hfoptions id="usage">
+<hfoption id="AutoModel">
+
+```python
+import torch
+from transformers import AutoTokenizer, AutoModel
 
-## Overview
+model = AutoModel.from_pretrained("laion/clap-htsat-unfused", torch_dtype=torch.float16, device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused")
 
-The CLAP model was proposed in [Large Scale Contrastive Language-Audio pretraining with
-feature fusion and keyword-to-caption augmentation](https://huggingface.co/papers/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
+texts = ["the sound of a cat", "the sound of a dog", "music playing"]
 
-CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed in to predict the most relevant text snippet, given an audio, without directly optimizing for the task. The CLAP model uses a SWINTransformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similar score.
+inputs = tokenizer(texts, padding=True, return_tensors="pt").to("cuda")
 
-The abstract from the paper is the following:
+with torch.no_grad():
+    text_features = model.get_text_features(**inputs)
 
-*Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zeroshot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-6*
+print(f"Text embeddings shape: {text_features.shape}")
+print(f"Text embeddings: {text_features}")
+```
 
-This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ) .
-The original code can be found [here](https://github.com/LAION-AI/Clap).
+</hfoption>
+</hfoptions>
 
 
 ## ClapConfig
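The Overview paragraph removed above explains how CLAP scores a clip against text: a Swin Transformer audio encoder and a RoBERTa text encoder each project into a shared embedding space, and the dot product between the projected audio and text features serves as the similarity score. As a complement to the text-embedding snippet in the new card, below is a minimal sketch of that scoring path (not part of this commit), assuming [`ClapModel`] and [`ClapProcessor`] and using a placeholder 48 kHz waveform in place of a real clip.

```python
# Minimal sketch: rank a set of text prompts against one audio clip with CLAP.
# The waveform is a random placeholder; in practice load real audio resampled
# to 48 kHz mono before handing it to the processor.
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

audio = np.random.randn(48_000).astype(np.float32)  # ~1 second of placeholder audio
texts = ["the sound of a cat", "the sound of a dog", "music playing"]

inputs = processor(text=texts, audios=audio, sampling_rate=48_000, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Audio and text embeddings share one projected space; the scaled dot product
# (logits_per_audio) indicates how well each prompt matches the clip.
probs = outputs.logits_per_audio.softmax(dim=-1)
for text, prob in zip(texts, probs[0].tolist()):
    print(f"{text}: {prob:.3f}")
```

With real audio, swapping the placeholder waveform for a loaded clip is the only change needed; this ranking step is what zero-shot audio classification and text-to-audio retrieval with CLAP build on.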

Comments (0)