Commit ffbea2d

Authored by Deep-unlearning, with merveenoyan and stevhliu.

audio text to text task guide (#43413)

* audio text to text task guide
* add torchcodec in requirements
* add task to toctree
* Replace Voxtral with Audio Flamingo in audio-text-to-text
* apply review suggestions, remove newlines in codeblocks, lora nit

Co-authored-by: Merve Noyan <merveenoyan@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Deep-unlearning <steven@hugginface.co>

1 parent a64996e commit ffbea2d

File tree

2 files changed: +364 −0 lines changed

docs/source/en/_toctree.yml (2 additions, 0 deletions)

```diff
@@ -303,6 +303,8 @@
       title: Audio classification
     - local: tasks/asr
       title: Automatic speech recognition
+    - local: tasks/audio_text_to_text
+      title: Audio-text-to-text
     - local: tasks/text-to-speech
       title: Text to speech
   title: Audio
```
docs/source/en/tasks/audio_text_to_text.md (362 additions, 0 deletions)

<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Audio-text-to-text

[[open-in-colab]]

Audio-text-to-text models accept both audio and text as inputs and generate text as output. They combine audio understanding with language generation, enabling tasks like audio question answering (e.g., "What is being said in this clip?"), audio reasoning (e.g., "What emotion does the speaker convey?"), and spoken dialogue understanding. Unlike traditional automatic speech recognition (ASR) models that only transcribe speech into text, audio-text-to-text models can reason about the audio content, follow complex instructions, and produce contextual responses based on what they hear.

The example below shows how to load a model and processor, pass an audio file with a text prompt, and generate a response. In this case, we ask the model to transcribe a speech recording.

```python
from transformers import AudioFlamingo3ForConditionalGeneration, AutoProcessor

model_id = "nvidia/audio-flamingo-3-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = AudioFlamingo3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe the input speech."},
            {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/AudioSkills/resolve/main/assets/WhDJDIviAOg_120_10.mp3"},
        ],
    }
]

inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=500)

decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded_outputs)
## ["The transcription of the audio is 'summer follows spring the days grow longer and the nights are warm'."]
```

This guide will show you how to:

1. Fine-tune [Audio Flamingo 3](https://huggingface.co/nvidia/audio-flamingo-3-hf) on the [AudioCaps](https://huggingface.co/datasets/OpenSound/AudioCaps) dataset for audio captioning using LoRA.
2. Use your fine-tuned model for inference.

> [!TIP]
> To see all architectures and checkpoints compatible with this task, we recommend checking the [task page](https://huggingface.co/tasks/audio-text-to-text).

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets peft accelerate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load AudioCaps dataset

Start by loading the [AudioCaps](https://huggingface.co/datasets/OpenSound/AudioCaps) dataset from the 🤗 Datasets library in streaming mode. This dataset contains audio clips with descriptive captions, perfect for audio captioning tasks.

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("OpenSound/AudioCaps", split="train", streaming=True)
```

Cast the audio column to 16 kHz, which is required by Audio Flamingo's Whisper feature extractor:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```
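Resampling changes the number of samples but not the duration: a clip keeps its length in seconds while its array shrinks (or grows) by the ratio of the rates. As a minimal sketch of that arithmetic (the 44.1 kHz source rate below is a hypothetical example, not AudioCaps-specific):

```python
def resampled_length(n_samples: int, src_rate: int, dst_rate: int) -> int:
    """Number of samples after resampling, rounded to the nearest sample."""
    return round(n_samples * dst_rate / src_rate)

# A 10-second clip at a hypothetical 44.1 kHz source rate:
n_src = 10 * 44100                              # 441000 samples
n_dst = resampled_length(n_src, 44100, 16000)
print(n_dst)          # 160000 samples at 16 kHz
print(n_dst / 16000)  # still 10.0 seconds
```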
Split the dataset into train and test sets using `.take()` and `.skip()` for streaming datasets:

```py
>>> train_dataset = dataset.take(1000)
>>> eval_dataset = dataset.skip(1000).take(100)
```
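`.take()` and `.skip()` operate lazily on the stream, and skipping 1000 examples before taking 100 guarantees the eval set never overlaps the training set. The same semantics can be sketched with `itertools.islice` over a plain iterator standing in for the stream:

```python
from itertools import islice

def stream():
    """Stand-in for a streaming dataset: a fresh iterator over row indices."""
    return iter(range(2000))

train_rows = list(islice(stream(), 1000))        # like dataset.take(1000)
eval_rows = list(islice(stream(), 1000, 1100))   # like dataset.skip(1000).take(100)

print(train_rows[-1], eval_rows[0], eval_rows[-1])  # 999 1000 1099
assert set(train_rows).isdisjoint(eval_rows)        # no train/eval overlap
```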
Take a look at an example:

```py
>>> next(iter(train_dataset))
{'audio': {'array': array([...], dtype=float32),
  'path': '...',
  'sampling_rate': 16000},
 'caption': 'A man speaks followed by footsteps'}
```

The dataset contains:

- `audio`: the audio waveform
- `caption`: the descriptive text caption for the audio

## Preprocess

Load the Audio Flamingo processor to handle both audio and text inputs:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("nvidia/audio-flamingo-3-hf")
```

Create a data collator that processes audio-text pairs into the format expected by Audio Flamingo. The collator uses the chat template format with direct audio arrays:

```py
>>> class AudioFlamingo3DataCollator:
...     """Data collator for Audio Flamingo 3 audio captioning training."""
...
...     def __init__(self, processor):
...         self.processor = processor
...
...     def __call__(self, features):
...         conversations = []
...         for feature in features:
...             # Build conversation format for Audio Flamingo
...             # Audio is passed directly as an array, no base64 encoding needed
...             sample = [
...                 {
...                     "role": "user",
...                     "content": [
...                         {"type": "text", "text": "Describe the audio."},
...                         {"type": "audio", "audio": feature["audio"]["array"]},
...                     ],
...                 },
...                 {
...                     "role": "assistant",
...                     "content": [{"type": "text", "text": feature["caption"]}],
...                 },
...             ]
...             conversations.append(sample)
...
...         # Apply chat template and format labels for training
...         return self.processor.apply_chat_template(
...             conversations,
...             tokenize=True,
...             add_generation_prompt=False,
...             return_dict=True,
...             output_labels=True,  # Automatically creates labels for training
...         )
```

Instantiate the data collator:

```py
>>> data_collator = AudioFlamingo3DataCollator(processor)
```
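With `output_labels=True`, the returned batch includes a `labels` tensor in which every position the model should not learn to predict (the user prompt, including the audio placeholder tokens) is set to `-100`, the index PyTorch's `CrossEntropyLoss` ignores, so the loss covers only the assistant's caption tokens. A minimal sketch of that masking convention, using hypothetical token ids:

```python
IGNORE_INDEX = -100  # ignored by PyTorch's CrossEntropyLoss

def build_labels(input_ids, prompt_len):
    """Copy input_ids, masking the prompt so loss is computed on the response only."""
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

# Hypothetical ids: 5 prompt tokens (text + audio placeholders), 3 caption tokens
input_ids = [101, 7, 7, 7, 102, 4031, 2158, 103]
labels = build_labels(input_ids, prompt_len=5)
print(labels)  # [-100, -100, -100, -100, -100, 4031, 2158, 103]
```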
## Train

> [!TIP]
> If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training)!

Load the Audio Flamingo model. We use `bfloat16` precision and `device_map="auto"` for efficient memory usage:

```py
>>> from transformers import AudioFlamingo3ForConditionalGeneration
>>> import torch

>>> model = AudioFlamingo3ForConditionalGeneration.from_pretrained(
...     "nvidia/audio-flamingo-3-hf",
...     torch_dtype=torch.bfloat16,
...     device_map="auto",
... )
```

### Configure LoRA

[LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) (Low-Rank Adaptation) enables efficient fine-tuning by training only a small number of additional parameters. Configure LoRA to target the language model's attention and feed-forward layers:

```py
>>> from peft import LoraConfig, get_peft_model

>>> lora_config = LoraConfig(
...     r=16,  # LoRA rank
...     lora_alpha=32,  # LoRA scaling factor
...     target_modules=[
...         # Language model attention
...         "q_proj",
...         "k_proj",
...         "v_proj",
...         "o_proj",
...         # Feed-forward layers
...         "gate_proj",
...         "up_proj",
...         "down_proj",
...     ],
...     lora_dropout=0.05,
...     bias="none",
...     task_type="CAUSAL_LM",
... )
>>> model = get_peft_model(model, lora_config)
>>> model.print_trainable_parameters()
```
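For a frozen linear layer of shape `(d_out, d_in)`, LoRA trains two small matrices, `A` (`r × d_in`) and `B` (`d_out × r`), adding only `r * (d_in + d_out)` parameters instead of `d_in * d_out`. The arithmetic below uses a hypothetical 4096-dimensional projection (not Audio Flamingo's actual sizes) to show the savings at rank 16:

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one linear layer: A (r x d_in) + B (d_out x r)."""
    return r * (d_in + d_out)

d = 4096          # hypothetical hidden size for one q_proj-style square layer
full = d * d      # parameters in the frozen base weight
lora = lora_params(d, d, r=16)
print(full, lora)            # 16777216 131072
print(f"{lora / full:.2%}")  # LoRA trains under 1% of the layer's parameters
```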
> [!TIP]
> [LoRA](https://huggingface.co/docs/peft/main/conceptual_guides/lora) significantly reduces memory usage and training time by updating only a small number of adapter parameters instead of the full model. This configuration targets the language model's attention and feed-forward layers while keeping the audio encoder frozen, making it possible to fine-tune on a single GPU.

### Setup training

Define training hyperparameters in [`TrainingArguments`]. Note that we use `max_steps` instead of epochs since we're using a streaming dataset:

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
...     output_dir="audio-flamingo-3-hf-lora-finetuned",
...     per_device_train_batch_size=4,
...     per_device_eval_batch_size=4,
...     gradient_accumulation_steps=4,
...     learning_rate=1e-4,
...     max_steps=500,  # Use max_steps with streaming datasets
...     bf16=True,
...     logging_steps=10,
...     eval_steps=100,
...     save_steps=250,
...     save_total_limit=2,  # Keep only the latest 2 checkpoints
...     save_only_model=True,  # Skip saving optimizer state to save disk space
...     eval_strategy="steps",
...     save_strategy="steps",
...     remove_unused_columns=False,
...     dataloader_num_workers=0,  # Must be 0 for streaming datasets
...     gradient_checkpointing=True,
...     report_to="none",
... )
```
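These arguments interact: on a single GPU, each optimizer step consumes `per_device_train_batch_size * gradient_accumulation_steps` examples, so `max_steps` bounds how many samples are drawn from the stream in total. The arithmetic for the values above (assuming one GPU):

```python
per_device_batch = 4
grad_accum = 4
max_steps = 500
n_gpus = 1  # assumption: a single GPU

effective_batch = per_device_batch * grad_accum * n_gpus
total_samples = effective_batch * max_steps
print(effective_batch)  # 16 examples per optimizer step
print(total_samples)    # 8000 samples, i.e. roughly 8 passes over 1000 streamed examples
```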
Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator:

```py
>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=train_dataset,
...     eval_dataset=eval_dataset,
...     data_collator=data_collator,
... )
>>> trainer.train()
```

Save the LoRA adapter and processor:

```py
>>> trainer.save_model()
>>> processor.save_pretrained("audio-flamingo-3-hf-lora-finetuned")
```

Once training is complete, share your model to the Hub:

```py
>>> trainer.push_to_hub()
```

## Inference

Now that you've fine-tuned the model, you can use it for audio captioning.

Load the fine-tuned model and processor:

```py
>>> from transformers import AudioFlamingo3ForConditionalGeneration, AutoProcessor
>>> from peft import PeftModel
>>> import torch

>>> base_model = AudioFlamingo3ForConditionalGeneration.from_pretrained(
...     "nvidia/audio-flamingo-3-hf",
...     torch_dtype=torch.bfloat16,
...     device_map="auto",
... )
>>> model = PeftModel.from_pretrained(base_model, "audio-flamingo-3-hf-lora-finetuned")
>>> processor = AutoProcessor.from_pretrained("audio-flamingo-3-hf-lora-finetuned")
```

Load an audio sample for inference:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("OpenSound/AudioCaps", split="test", streaming=True)
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sample = next(iter(dataset))
```

Prepare the input with a conversation format:

```py
>>> messages = [
...     {
...         "role": "user",
...         "content": [
...             {"type": "text", "text": "Describe the audio."},
...             {"type": "audio", "audio": sample["audio"]["array"]},
...         ],
...     }
... ]
>>> inputs = processor.apply_chat_template(
...     messages,
...     tokenize=True,
...     add_generation_prompt=True,
...     return_dict=True,
... ).to(model.device)
```

Generate a response:

```py
>>> with torch.no_grad():
...     output_ids = model.generate(**inputs, max_new_tokens=100)

>>> # Decode only the generated tokens
>>> input_len = inputs["input_ids"].shape[1]
>>> response = processor.tokenizer.decode(output_ids[0][input_len:], skip_special_tokens=True)
>>> print(response)
## A sewing machine is running while people are talking
```
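`generate` returns the prompt tokens followed by the newly generated ones, which is why the decode step slices from `input_len` onward. The same slicing in plain list form, with hypothetical token ids:

```python
# Hypothetical ids: generate() echoes the prompt, then appends new tokens
prompt_ids = [101, 42, 7, 7, 102]
new_ids = [3025, 1110, 88, 103]
output_ids = prompt_ids + new_ids  # one generated sequence, prompt included

input_len = len(prompt_ids)
generated = output_ids[input_len:]
print(generated)  # [3025, 1110, 88, 103] -- only the response gets decoded
```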
## Pipeline

You can also use the [`Pipeline`] API for quick inference. First, merge the LoRA adapter with the base model, then create a pipeline:

```py
>>> from transformers import pipeline

>>> # Merge the LoRA adapter into the base model for pipeline use
>>> merged_model = model.merge_and_unload()
>>> pipe = pipeline(
...     "audio-text-to-text",
...     model=merged_model,
...     processor=processor,
... )
>>> result = pipe(
...     sample["audio"]["array"],
...     generate_kwargs={"max_new_tokens": 100},
... )
>>> print(result[0]["generated_text"])
```
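`merge_and_unload` folds each adapter back into its frozen base weight, `W' = W + (alpha / r) * B @ A`, leaving a plain model with no extra inference cost. A pure-Python sketch with tiny matrices (note that LoRA zero-initializes `B`, so an untrained adapter merges to exactly `W`):

```python
def matmul(X, Y):
    """Naive matrix multiply for small lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def merge(W, A, B, alpha, r):
    """The merge_and_unload update: W' = W + (alpha / r) * B @ A."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]             # rank r=1, shape (r, d_in)
B_zero = [[0.0], [0.0]]      # LoRA's zero init, shape (d_out, r)
B_trained = [[2.0], [4.0]]   # hypothetical trained values

print(merge(W, A, B_zero, alpha=2, r=1))     # [[1.0, 0.0], [0.0, 1.0]] -- unchanged
print(merge(W, A, B_trained, alpha=2, r=1))  # [[3.0, 2.0], [4.0, 5.0]]
```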
> [!TIP]
> For more advanced use cases like multi-turn conversations with audio, you can structure your messages with alternating user and assistant roles, similar to [image-text-to-text](./image_text_to_text) models.

## Further Reading

- [Audio-text-to-text task page](https://huggingface.co/tasks/audio-text-to-text) covers model types, use cases, and datasets.
- [PEFT documentation](https://huggingface.co/docs/peft) for more LoRA configuration options and other adapter methods.
- [Audio Flamingo 3 model card](https://huggingface.co/nvidia/audio-flamingo-3-hf) for model-specific details and capabilities.