Add audio text to text task #1692
Open
MrShahzebKhoso wants to merge 37 commits into huggingface:main from MrShahzebKhoso:add-audio-text-to-text-task
+199 −0
Commits (37, all by MrShahzebKhoso; changes shown below are from 20 commits):
- cbb8c19 Add audio-text-to-text task with datasets, demo, models, datsasets, s…
- 73cf5fe Update about.md
- bc4dfe7 Merge branch 'main' into add-audio-text-to-text-task
- fd95ece Merge branch 'main' into add-audio-text-to-text-task
- b031fef Merge branch 'main' into add-audio-text-to-text-task
- ce51d87 Merge branch 'main' into add-audio-text-to-text-task
- bf06eee Merge branch 'main' into add-audio-text-to-text-task
- 0f6538e Merge branch 'main' into add-audio-text-to-text-task
- bd8733b Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 7930ccc Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 69a2b79 Update packages/tasks/src/tasks/audio-text-to-text/about.md
- c91ccef Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 5dfde65 Merge branch 'main' into add-audio-text-to-text-task
- c50a69c Update about.md
- 5289f1b Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 9c4e6a4 Update packages/tasks/src/tasks/audio-text-to-text/about.md
- e5438ec Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 4db1edc Update packages/tasks/src/tasks/audio-text-to-text/about.md
- e8f652e Update packages/tasks/src/tasks/audio-text-to-text/about.md
- b9f1c48 Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 5fd050e Update about.md
- cdeaba0 Update packages/tasks/src/tasks/audio-text-to-text/about.md
- a427dee Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 01efcb6 Merge branch 'main' into add-audio-text-to-text-task
- ce2fdef Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 69b82f0 Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 8096da6 Update packages/tasks/src/tasks/audio-text-to-text/about.md
- 7cf7f65 Merge branch 'main' into add-audio-text-to-text-task
- ee2d130 Merge branch 'main' into add-audio-text-to-text-task
- 944ce81 Merge branch 'main' into add-audio-text-to-text-task
- 0b96556 Merge branch 'main' into add-audio-text-to-text-task
- 0a21659 Merge branch 'main' into add-audio-text-to-text-task
- 0087d9d Merge branch 'main' into add-audio-text-to-text-task
- 26e63a7 Merge branch 'main' into add-audio-text-to-text-task
- e108286 Merge branch 'main' into add-audio-text-to-text-task
- f633bfb Merge branch 'main' into add-audio-text-to-text-task
- c6c878c Merge branch 'main' into add-audio-text-to-text-task
packages/tasks/src/tasks/audio-text-to-text/about.md (new file)
@@ -0,0 +1,180 @@
## Use Cases

> This task takes `audio` and a `text prompt` and returns `text` (answers, summaries, structured notes, etc.).

### Audio question answering
Ask targeted questions about lectures, podcasts, or calls and get precise, context-aware answers.
**Example:** Audio: physics lecture → Prompt: “What did the teacher say about gravity and how is it measured?”

### Meeting notes & action items
Turn multi-speaker meetings into concise minutes with decisions, owners, and deadlines.
**Example:** Audio: weekly stand-up → Prompt: “Summarize key decisions and list action items with assignees.”

### Speech understanding & intent
Go beyond transcription to extract intent, sentiment, uncertainty, or emotion from spoken language.
**Example:** “I’m not sure I can finish this on time.” → Prompt: “Describe speaker intent and confidence.”

### Music & sound analysis (textual)
Describe instrumentation, genre, tempo, or sections, and suggest edits or techniques (text output only).
**Example:** Song demo → Prompt: “Identify key and tempo, then suggest jazz reharmonization ideas for the chorus.”

## Inference

You can use the `transformers` library to pass an audio file, along with text instructions, to any `audio-text-to-text` model and get a text response. The following code examples show how to do so.

### Summarization / Q&A on a Single Audio

Run queries or request summaries directly from an audio clip.
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/winning_call.mp3",
            },
            {"type": "text", "text": "Summarize this audio"},
        ],
    }
]

inputs = processor.apply_chat_template(conversation)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
# Decode only the newly generated tokens, skipping the prompt.
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("\nGenerated response:")
print("=" * 80)
print(decoded_outputs[0])
print("=" * 80)
```
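
Other audio-text-to-text models follow the same conversation format but may process audio differently. As a rough sketch, here is the equivalent query with `Qwen/Qwen2-Audio-7B-Instruct` (adapted from its model card; the prompt and audio URL are reused from above, and the waveform is loaded separately and handed to the processor alongside the templated text):

```python
from io import BytesIO
from urllib.request import urlopen

import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

repo_id = "Qwen/Qwen2-Audio-7B-Instruct"
processor = AutoProcessor.from_pretrained(repo_id)
model = Qwen2AudioForConditionalGeneration.from_pretrained(repo_id, device_map="auto")

audio_url = "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/winning_call.mp3"
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio_url": audio_url},
            {"type": "text", "text": "Summarize this audio"},
        ],
    }
]

# Render the chat template to text, then load the referenced audio at the
# sampling rate the feature extractor expects.
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audio, _ = librosa.load(BytesIO(urlopen(audio_url).read()), sr=processor.feature_extractor.sampling_rate)

inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Keep only the newly generated tokens when decoding.
response = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]
print(response)
```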

### Multiple Audio Querying

Pass multiple audio inputs in the same request and ask questions that compare or reference them.

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/mary_had_lamb.mp3",
            },
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/winning_call.mp3",
            },
            {"type": "text", "text": "What sport and what nursery rhyme are referenced?"},
        ],
    }
]

inputs = processor.apply_chat_template(conversation)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("\nGenerated response:")
print("=" * 80)
print(decoded_outputs[0])
print("=" * 80)
```

### Multi-Turn Conversation with Audio

Mix audio and text across multiple turns in a conversation, just like a dialogue with context.

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/obama.mp3",
            },
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/bcn_weather.mp3",
            },
            {"type": "text", "text": "Describe briefly what you can hear."},
        ],
    },
    {
        "role": "assistant",
        "content": "The audio begins with the speaker delivering a farewell address in Chicago, reflecting on his eight years as president and expressing gratitude to the American people. The audio then transitions to a weather report, stating that it was 35 degrees in Barcelona the previous day, but the temperature would drop to minus 20 degrees the following day.",
    },
    {
        "role": "user",
        "content": [
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/winning_call.mp3",
            },
            {"type": "text", "text": "Ok, now compare this new audio with the previous one."},
        ],
    },
]

inputs = processor.apply_chat_template(conversation)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("\nGenerated response:")
print("=" * 80)
print(decoded_outputs[0])
print("=" * 80)
```
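
The assistant turn in the example above is hard-coded for illustration; in a real application you would generate it. Below is a minimal sketch of that loop with the same Voxtral setup. The follow-up prompt is illustrative, and it assumes a text-only user turn is accepted by the chat template:

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

def generate_reply(conversation, max_new_tokens=500):
    # Template the running conversation, generate, and return only the new tokens.
    inputs = processor.apply_chat_template(conversation)
    inputs = inputs.to(device, dtype=torch.bfloat16)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "audio",
                "path": "https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/winning_call.mp3",
            },
            {"type": "text", "text": "Describe briefly what you can hear."},
        ],
    }
]

reply = generate_reply(conversation)
print(reply)

# Feed the model's own reply back in as an assistant turn, then ask a follow-up.
conversation.append({"role": "assistant", "content": reply})
conversation.append({"role": "user", "content": [{"type": "text", "text": "Now summarize it in one sentence."}]})

print(generate_reply(conversation))
```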

## Useful Resources

If you want to learn more about this concept, here are some useful links:

### Papers
- [SpeechGPT (Paper)](https://huggingface.co/papers/2305.11000)
- [Voxtral (Paper)](https://huggingface.co/papers/2507.13264)
- [Qwen2-audio-instruct (Paper)](https://huggingface.co/papers/2407.10759)
- [AudioPaLM (Paper)](https://huggingface.co/papers/2306.12925)

### Blogs
- [Qwen2-audio-instruct (Blog)](https://qwenlm.github.io/blog/qwen2-audio/)

### Datasets
- [nvidia/AF-Think](https://huggingface.co/datasets/nvidia/AF-Think)
- [nvidia/AudioSkills](https://huggingface.co/datasets/nvidia/AudioSkills)

### Code & Demos
- [Qwen2-audio-instruct](https://github.com/QwenLM/Qwen2-Audio)
- [SpeechGPT](https://github.com/0nutation/SpeechGPT)
- [AudioPaLM (project page)](https://google-research.github.io/seanet/audiopalm/examples/)
packages/tasks/src/tasks/audio-text-to-text/data.ts (new file)
@@ -0,0 +1,70 @@
import type { TaskDataCustom } from "../index.js";

const taskData: TaskDataCustom = {
  datasets: [
    {
      description: "A dataset containing audio conversations with question–answer pairs.",
      id: "nvidia/AF-Think",
    },
    {
      description: "A more advanced and comprehensive dataset that contains characteristics of the audio as well.",
      id: "tsinghua-ee/QualiSpeech",
    },
  ],
  demo: {
    inputs: [
      {
        filename: "audio.wav",
        type: "audio",
      },
      {
        label: "Text Prompt",
        content: "What is the gender of the speaker?",
        type: "text",
      },
    ],
    outputs: [
      {
        label: "Generated Text",
        content: "The gender of the speaker is female.",
        type: "text",
      },
    ],
  },
  metrics: [],
  models: [
    {
      description: "A lightweight model that takes both audio and text as input and generates responses.",
      id: "fixie-ai/ultravox-v0_5-llama-3_2-1b",
    },
    {
      description: "A multimodal model that supports voice chat and audio analysis.",
      id: "Qwen/Qwen2-Audio-7B-Instruct",
    },
    {
      description: "A model for audio understanding, speech translation, and transcription.",
      id: "mistralai/Voxtral-Small-24B-2507",
    },
    {
      description: "A new model capable of audio question answering and reasoning.",
      id: "nvidia/audio-flamingo-3",
    },
  ],
  spaces: [
    {
      description: "A space that takes both audio and text as input and generates answers.",
      id: "iamomtiwari/ATTT",
    },
    {
      description: "A web application that demonstrates chatting with the Qwen2Audio Model.",
      id: "freddyaboulton/talk-to-qwen-webrtc",
    },
  ],
  summary:
    "Audio-text-to-text models take both an audio clip and a text prompt as input, and generate natural language text as output. These models can answer questions about spoken content, summarize meetings, analyze music, or interpret speech beyond simple transcription. They are useful for applications that combine speech understanding with reasoning or conversation.",
  widgetModels: [],
  youtubeId: "",
};

export default taskData;