Commits
37 commits
cbb8c19
Add audio-text-to-text task with datasets, demo, models, datsasets, s…
MrShahzebKhoso Aug 17, 2025
73cf5fe
Update about.md
MrShahzebKhoso Aug 17, 2025
bc4dfe7
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 21, 2025
fd95ece
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 25, 2025
b031fef
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 25, 2025
ce51d87
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 26, 2025
bf06eee
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 27, 2025
0f6538e
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 28, 2025
bd8733b
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 29, 2025
7930ccc
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 29, 2025
69a2b79
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 29, 2025
c91ccef
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 29, 2025
5dfde65
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 30, 2025
c50a69c
Update about.md
MrShahzebKhoso Aug 31, 2025
5289f1b
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
9c4e6a4
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
e5438ec
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
4db1edc
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
e8f652e
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
b9f1c48
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
5fd050e
Update about.md
MrShahzebKhoso Aug 31, 2025
cdeaba0
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
a427dee
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Aug 31, 2025
01efcb6
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Aug 31, 2025
ce2fdef
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Sep 1, 2025
69b82f0
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Sep 1, 2025
8096da6
Update packages/tasks/src/tasks/audio-text-to-text/about.md
MrShahzebKhoso Sep 1, 2025
7cf7f65
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 1, 2025
ee2d130
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 1, 2025
944ce81
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 1, 2025
0b96556
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 2, 2025
0a21659
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 2, 2025
0087d9d
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 2, 2025
26e63a7
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 4, 2025
e108286
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 4, 2025
f633bfb
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 4, 2025
c6c878c
Merge branch 'main' into add-audio-text-to-text-task
MrShahzebKhoso Sep 5, 2025
129 changes: 129 additions & 0 deletions packages/tasks/src/tasks/audio-text-to-text/about.md
@@ -0,0 +1,129 @@
## Use Cases

> This task takes `audio` and a `text prompt` and returns `text` (answers, summaries, structured notes, etc.).

### Audio question answering
Ask targeted questions about lectures, podcasts, or calls and get context-aware answers.
**Example:** Audio: physics lecture → Prompt: “What did the teacher say about gravity and how is it measured?”

### Meeting notes & action items
Turn multi-speaker meetings into concise minutes with decisions, owners, and deadlines.
**Example:** Audio: weekly stand-up → Prompt: “Summarize key decisions and list action items with assignees.”

### Speech understanding & intent
Go beyond transcription to extract intent, sentiment, uncertainty, or emotion from spoken language.
**Example:** “I’m not sure I can finish this on time.” → Prompt: “Describe speaker intent and confidence.”

### Music & sound analysis (textual)
Describe instrumentation, genre, tempo, or sections, and suggest edits or techniques (text output only).
**Example:** Song demo → Prompt: “Identify key and tempo, then suggest jazz reharmonization ideas for the chorus.”

## Inference
You can use the `transformers` library to pass an audio file, together with text instructions, to any `audio-text-to-text` model and get a text response. The following code examples show how to do so.

### Speech Transcription and Analysis
These models don’t just turn speech into text—they also capture tone, emotion, and speaker traits. This makes them useful for tasks like sentiment analysis or identifying speaker profiles.

You can try audio transcription with [Voxtral Mini](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507) using the following code.

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, dtype=torch.bfloat16, device_map=device)

inputs = processor.apply_transcription_request(language="en", audio="https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/obama.mp3", model_id=repo_id)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("\nGenerated responses:")
print("=" * 80)
for decoded_output in decoded_outputs:
    print(decoded_output)
    print("=" * 80)
```
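
The same checkpoint can also be prompted with an audio clip plus a question, which is closer to the analysis use cases above. Below is a minimal sketch of that chat-style usage, reusing the `processor` and `model` loaded above; the prompt is illustrative, and the exact content keys accepted by `apply_chat_template` may vary across `transformers` versions.

```python
# Chat-style request: an audio clip plus a text question (reuses the model and processor loaded above).
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/obama.mp3"},
            {"type": "text", "text": "Summarize the speaker's main points and describe their tone."},
        ],
    }
]

inputs = processor.apply_chat_template(conversation)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
decoded = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded[0])
```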

### Audio Question Answering
These models can understand audio directly and answer questions about it. For example, summarizing a podcast clip or explaining parts of a recorded conversation.

You can experiment with [Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) for conversations with both text and audio inputs, letting you ask follow-up questions about different sounds or speech clips.

```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
        {"type": "text", "text": "What's that sound?"},
    ]},
    {"role": "assistant", "content": "It is the sound of glass shattering."},
    {"role": "user", "content": [
        {"type": "text", "text": "What can you do when you hear that?"},
    ]},
    {"role": "assistant", "content": "Stay alert and cautious, and check if anyone is hurt or if there is any damage to property."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"},
        {"type": "text", "text": "What does the person say?"},
    ]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)

# Load every audio clip referenced in the conversation at the sampling rate the model expects.
audios = []
for message in conversation:
    if isinstance(message["content"], list):
        for ele in message["content"]:
            if ele["type"] == "audio":
                audios.append(
                    librosa.load(
                        BytesIO(urlopen(ele["audio_url"]).read()),
                        sr=processor.feature_extractor.sampling_rate,
                    )[0]
                )

inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs = inputs.to(model.device)  # move all input tensors to the same device as the model

generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]

response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(response)
```
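
The same pattern works with local files: load the clip yourself with `librosa` and pass it through `audios`. Below is a minimal single-turn sketch, where the file path and question are placeholders.

```python
# Single-turn question about a local audio file ("meeting.wav" and the question are placeholders).
audio, _ = librosa.load("meeting.wav", sr=processor.feature_extractor.sampling_rate)

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "meeting.wav"},  # the template only uses this entry to insert an audio token
        {"type": "text", "text": "Summarize the key decisions and list action items with assignees."},
    ]},
]

text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True).to(model.device)

generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
```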

## Useful Resources

If you want to learn more about this concept, here are some useful links:

### Papers
- [SpeechGPT](https://huggingface.co/papers/2305.11000) — multimodal dialogue with speech and text.
- [Voxtral](https://huggingface.co/papers/2507.13264) — a state-of-the-art audio-text model.
- [Qwen2-Audio](https://huggingface.co/papers/2407.10759) — large-scale audio-language modeling for instruction following.
- [AudioPaLM](https://huggingface.co/papers/2306.12925) — scaling audio-language models with PaLM.

### Models, Code & Demos
- [Qwen2-Audio](https://github.com/QwenLM/Qwen2-Audio) — open-source implementation with demos.
- [SpeechGPT](https://github.com/0nutation/SpeechGPT) — an end-to-end framework for audio conversational models built on top of large language models.
- [AudioPaLM](https://google-research.github.io/seanet/audiopalm/examples/) — resources and code for AudioPaLM.
- [Audio Flamingo](https://huggingface.co/nvidia/audio-flamingo-3) — unifies speech, sound, and music understanding with long-context reasoning.
- [Ultravox](https://github.com/fixie-ai/ultravox) — a fast multimodal large language model designed for real-time voice interactions.
- [Ichigo](https://github.com/menloresearch/ichigo) — an audio-text-to-text model for audio-related tasks.

### Datasets
- [nvidia/AF-Think](https://huggingface.co/datasets/nvidia/AF-Think)
- [nvidia/AudioSkills](https://huggingface.co/datasets/nvidia/AudioSkills)
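
Both datasets can be explored with the `datasets` library. Here is a minimal sketch that streams the data so nothing large is downloaded up front; the split name and available fields are assumptions, so check the dataset cards.

```python
from datasets import load_dataset

# Stream one example from AF-Think; the "train" split and column names are assumptions.
ds = load_dataset("nvidia/AF-Think", split="train", streaming=True)
print(next(iter(ds)))
```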


### Tools & Extras
- [FastRTC](https://huggingface.co/fastrtc) — turn any Python function into a real-time audio/video stream.
- [PhiCookBook](https://github.com/microsoft/PhiCookBook) — Microsoft’s open-source guide to small language models.
- [Qwen2-Audio](https://qwenlm.github.io/blog/qwen2-audio/) — a blog post explaining usage and demos of Qwen2-Audio.
70 changes: 70 additions & 0 deletions packages/tasks/src/tasks/audio-text-to-text/data.ts
@@ -0,0 +1,70 @@
import type { TaskDataCustom } from "../index.js";

const taskData: TaskDataCustom = {
	datasets: [
		{
			description: "A dataset containing audio conversations with question–answer pairs.",
			id: "nvidia/AF-Think",
		},
		{
			description: "A more advanced and comprehensive dataset that also contains characteristics of the audio.",
			id: "tsinghua-ee/QualiSpeech",
		},
	],
	demo: {
		inputs: [
			{
				filename: "audio.wav",
				type: "audio",
			},
			{
				label: "Text Prompt",
				content: "What is the gender of the speaker?",
				type: "text",
			},
		],
		outputs: [
			{
				label: "Generated Text",
				content: "The gender of the speaker is female.",
				type: "text",
			},
		],
	},
	metrics: [],
	models: [
		{
			description: "A lightweight model that takes both audio and text as inputs and generates responses.",
			id: "fixie-ai/ultravox-v0_5-llama-3_2-1b",
		},
		{
			description: "A multimodal model that supports voice chat and audio analysis.",
			id: "Qwen/Qwen2-Audio-7B-Instruct",
		},
		{
			description: "A model for audio understanding, speech translation, and transcription.",
			id: "mistralai/Voxtral-Small-24B-2507",
		},
		{
			description: "A model capable of audio question answering and reasoning.",
			id: "nvidia/audio-flamingo-3",
		},
	],
	spaces: [
		{
			description: "A space that takes both audio and text as input and generates answers.",
			id: "iamomtiwari/ATTT",
		},
		{
			description: "A web application that demonstrates chatting with the Qwen2-Audio model.",
			id: "freddyaboulton/talk-to-qwen-webrtc",
		},
	],
	summary:
		"Audio-text-to-text models take both an audio clip and a text prompt as input, and generate natural language text as output. These models can answer questions about spoken content, summarize meetings, analyze music, or interpret speech beyond simple transcription. They are useful for applications that combine speech understanding with reasoning or conversation.",
	widgetModels: [],
	youtubeId: "",
};

export default taskData;