Replies: 1 comment
Hi! Can you paste the stack trace you see along with the
Hi All,
I am exporting the Whisper large-v2 model for an Azure deployment. When I export the model as a pickle file, load it back, and transcribe with word_timestamps=True, I get an error. My use cases are below. Has anyone faced a similar issue? Please let me know how to overcome it, since eventually I must tune the model and deploy it to the cloud.
Also, with word_timestamps=False the transcript contains far fewer words than with word_timestamps=True. How did you address the missing words in the transcription?
Case 1: load the model with whisper, pickle it, and transcribe with word_timestamps=True
import pickle
import whisper

# Load the model and export it as a pickle file for later deployment
modelwhisper = whisper.load_model("large-v2")
with open('my_whisper.pkl', 'wb') as f:
    pickle.dump(modelwhisper, f)

file = 'sample.m4a'
result = modelwhisper.transcribe(
    file,
    task='transcribe',
    temperature=(0.0, 0.2, 0.4, 0.8, 1.0),
    best_of=5,
    beam_size=3,
    suppress_tokens="-1",
    condition_on_previous_text=True,
    fp16=True,
    compression_ratio_threshold=2.4,
    logprob_threshold=-1.0,
    no_speech_threshold=0.3,
    word_timestamps=True,
    initial_prompt='perineal, Solensia, abdomen, abdominal cavity, abdominocentesis, abortion, abscess, acariasis, acaricide, ACE inhibitor, acidic, acquired, acromegaly, actinobacillosis, actinomycosis, acupuncture')
print("\tTranscripts : " + result['text'])
Output: returns the expected transcript without any error.
Case 2: load the pickled model and transcribe with word_timestamps=True
import pickle

# Load the pickled model back for transcription
with open('deployment/my_whisper.pkl', 'rb') as f:
    modelwhisper = pickle.load(f)

file = 'sample.m4a'
result = modelwhisper.transcribe(
    file,
    task='transcribe',
    temperature=(0.0, 0.2, 0.4, 0.8, 1.0),
    best_of=5,
    beam_size=3,
    suppress_tokens="-1",
    condition_on_previous_text=True,
    fp16=True,
    compression_ratio_threshold=2.4,
    logprob_threshold=-1.0,
    no_speech_threshold=0.3,
    word_timestamps=True,
    initial_prompt='perineal, Solensia, abdomen, abdominal cavity, abdominocentesis, abortion, abscess, acariasis, acaricide, ACE inhibitor, acidic, acquired, acromegaly, actinobacillosis, actinomycosis, acupuncture')
print("\tTranscripts : " + result['text'])
Output:
RuntimeError: Cannot get indices on an uncoalesced tensor, please call .coalesce() first
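As a possible workaround (untested, and based on the assumption that the sparse alignment_heads buffer Whisper uses for word timestamps comes back uncoalesced after pickle.load), coalescing that buffer right after loading might avoid the error. A minimal sketch:

import pickle

with open('deployment/my_whisper.pkl', 'rb') as f:
    modelwhisper = pickle.load(f)

# Assumption: the word-timestamp code path calls .indices() on the sparse
# alignment_heads buffer, which raises this error when the tensor is uncoalesced.
heads = modelwhisper.alignment_heads
if heads.is_sparse and not heads.is_coalesced():
    # Re-register the buffer as a coalesced sparse tensor; persistent=False
    # matches how Whisper registers it in the first place.
    modelwhisper.register_buffer("alignment_heads", heads.coalesce(), persistent=False)

After this, the same transcribe(...) call from Case 2 would be retried unchanged.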
Case 3: load the pickled model and transcribe with word_timestamps=False
import pickle

# Load the pickled model back for transcription
with open('deployment/my_whisper.pkl', 'rb') as f:
    modelwhisper = pickle.load(f)

file = 'sample.m4a'
result = modelwhisper.transcribe(
    file,
    task='transcribe',
    temperature=(0.0, 0.2, 0.4, 0.8, 1.0),
    best_of=5,
    beam_size=3,
    suppress_tokens="-1",
    condition_on_previous_text=True,
    fp16=True,
    compression_ratio_threshold=2.4,
    logprob_threshold=-1.0,
    no_speech_threshold=0.3,
    word_timestamps=False,
    initial_prompt='perineal, Solensia, abdomen, abdominal cavity, abdominocentesis, abortion, abscess, acariasis, acaricide, ACE inhibitor, acidic, acquired, acromegaly, actinobacillosis, actinomycosis, acupuncture')
print("\tTranscripts : " + result['text'])
Output: returns the expected transcript without any error.
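As an alternative to pickling the whole nn.Module, one option I am considering is saving only the weights, the model dimensions, and the alignment-heads mask, then rebuilding the model at load time. The sketch below is untested; the file name my_whisper.pt and the explicit re-registration of the alignment_heads buffer are my own assumptions, not something documented by Whisper.

import torch
import whisper
from whisper.model import ModelDimensions, Whisper

# Save side: store the weights, the model dimensions, and a dense copy of the
# alignment-heads mask instead of pickling the whole module.
modelwhisper = whisper.load_model("large-v2")
torch.save(
    {
        "dims": modelwhisper.dims.__dict__,
        "model_state_dict": modelwhisper.state_dict(),
        "alignment_heads": modelwhisper.alignment_heads.to_dense(),
    },
    "my_whisper.pt",
)

# Load side: rebuild the module from the saved dimensions and weights, then
# re-register alignment_heads as a fresh (coalesced) sparse tensor, since
# non-persistent buffers are not part of the state dict.
checkpoint = torch.load("my_whisper.pt", map_location="cpu")
model = Whisper(ModelDimensions(**checkpoint["dims"]))
model.load_state_dict(checkpoint["model_state_dict"])
model.register_buffer(
    "alignment_heads", checkpoint["alignment_heads"].to_sparse(), persistent=False
)
model = model.to("cuda")  # assuming a GPU is available, so fp16=True still applies

This keeps the saved file independent of pickle's by-reference handling of the module class, which is usually easier to manage for cloud deployments.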