Running multiple whispers on the same GPU? #2063
Unanswered
silvacarl2
asked this question in Q&A
Replies: 1 comment
-
No problem; you just need enough memory to load the model for each Whisper instance.
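For sizing, a rough back-of-the-envelope check can help. This is a minimal sketch: the per-model VRAM figures are the approximate "Required VRAM" numbers listed in the openai/whisper README, and the `headroom_gb` reserve for activations is an assumption, so treat the result as an estimate rather than a guarantee.

```python
# Rough estimate of how many Whisper instances fit on one GPU.
# VRAM figures approximate the "Required VRAM" column in the
# openai/whisper README; actual usage varies with audio length and settings.
APPROX_VRAM_GB = {"tiny": 1, "base": 1, "small": 2, "medium": 5, "large": 10}

def max_instances(model: str, gpu_vram_gb: float, headroom_gb: float = 1.0) -> int:
    """How many copies of `model` fit, leaving `headroom_gb` free for activations."""
    per_model = APPROX_VRAM_GB[model]
    return max(0, int((gpu_vram_gb - headroom_gb) // per_model))

print(max_instances("medium", 24))  # e.g. a 24 GB card -> 4
print(max_instances("large", 24))   # -> 2
```

Each FastAPI worker (or each model object loaded in one process) then counts as one instance against that budget.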
On Tue, 5 Mar 2024 at 1:21, Carl Silva ***@***.***> wrote:
… Has anyone had any experience running multiple whispers on the same GPU?
Not using docker, but just using FastAPI?
-
Has anyone had any experience running multiple whispers on the same GPU? Not using docker, but just using FastAPI?