[GUIDES] Improve inference providers documentation with guides #1797
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
looks nice, i especially like the "Your First API Call" one
cc @Wauplin @hanouticelina too
Note, i think we should showcase more and more content in JS
hmm could we split those into another PR?
Yeah. Makes sense.
I opened this PR with just billing changes #1799
Thanks for the review @julien-c
I could add a JS version of the transcription app too.

^yes!
Nice! very cool! love the simplicity of it, left some suggestions and more specifically a question about auto vs specific provider. Given this is tailored towards beginners or people who might not have as much insight about the ML/AI world - I tend to side with auto i.e. reduce the barrier to entry as much as possible.
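For what it's worth, here is a rough sketch of the difference with the JS client (the model id and the pinned provider name `together` are illustrative assumptions, not taken from the guide):

```js
// Sketch only: "auto" vs. an explicitly pinned provider with @huggingface/inference.
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

// "auto": Hugging Face routes the request to an available provider for the model,
// so beginners don't need to know which providers serve which models.
const autoReply = await client.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct", // illustrative model id
  messages: [{ role: "user", content: "Hello!" }],
  provider: "auto",
});

// Pinned provider: the caller decides explicitly (provider name is illustrative).
const pinnedReply = await client.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello!" }],
  provider: "together",
});

console.log(autoReply.choices[0].message.content);
```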
> When prompted, paste your Hugging Face token. This handles authentication automatically for all your inference calls. You can generate one from [your settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained).
>
> ## Step 2: Build the User Interface
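As a loose illustration of what "handled automatically" means on the JS side (assuming the token is exported as an `HF_TOKEN` environment variable, which is an illustrative choice), the token is supplied once when the client is created and reused for every subsequent call:

```js
import { InferenceClient } from "@huggingface/inference";

// Token generated from the settings page, kept out of the source code
// by reading it from an environment variable (variable name is illustrative).
const client = new InferenceClient(process.env.HF_TOKEN);

// Every request made through `client` is now authenticated with that token;
// no per-call credentials are needed.
```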
you can upload these as a colab notebook, so that people can just execute these as well.
Nice idea. I'll come back to this and just re-use the new model repo notebooks.
Thanks for the review @Vaibhavs10. I've responded to all your comments and implemented a JS version of the app.
this is very cool, thanks for adding these guides @burtenshaw! I left some suggestions
```js
const output = await client.automaticSpeechRecognition({
  data: file,
  model: "openai/whisper-large-v3-turbo",
  // ...
```
should be openai/whisper-large-v3, right?
```diff
-  model: "openai/whisper-large-v3-turbo",
+  model: "openai/whisper-large-v3",
```
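For context, a self-contained version of that call with the suggested model id might look roughly like this (file loading and the sample filename are illustrative; in the guide's app, `file` comes from a browser upload):

```js
import { readFile } from "node:fs/promises";
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

// Load a local audio file into a Blob; the filename is a placeholder.
const file = new Blob([await readFile("sample.flac")]);

const output = await client.automaticSpeechRecognition({
  data: file,
  model: "openai/whisper-large-v3",
  provider: "auto",
});

console.log(output.text);
```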
Thanks @hanouticelina! I've committed your suggestions.

Thanks @Vaibhavs10 @hanouticelina. All done. Could I get an approval please?
lgtm, thank you! (sorry for the delay!)
```js
  provider: "auto"
});

return output.text || output || 'Transcription completed';
```
(nit)

```js
return output.text ?? output ?? 'Transcription completed';
```
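The practical difference (shown with a plain snippet, not the app's code): `||` falls back on any falsy value, so a legitimately empty transcription string would be discarded, whereas `??` only falls back on `null` or `undefined`:

```js
const emptyTranscription = "";

// `||` treats the empty string as "missing" and replaces it.
console.log(emptyTranscription || "Transcription completed"); // -> "Transcription completed"

// `??` keeps the empty string, falling back only for null/undefined.
console.log(emptyTranscription ?? "Transcription completed"); // -> ""
```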
This PR adds bite-sized "day 1" guides to the Inference Providers documentation, which should help first-time users with the product:
It also includes two minor fixes: