
Commit cfc3eaf

Authored by Vaibhavs10, pcuenca, Wauplin, and hanouticelina
[Codex generated] Guide for VSCode. (#1910)
* [Codex generated] Guide for VSCode.
* up
* up
* up
* Up.
* Thanks for the review comments!

  Co-authored-by: Wauplin <[email protected]>
* Apply suggestions from code review

  Co-authored-by: Pedro Cuenca <[email protected]>
  Co-authored-by: Lucain <[email protected]>
* up.
* up.
* up.
* update demo
* Update docs/inference-providers/guides/vscode.md

  Co-authored-by: célina <[email protected]>

---------

Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Lucain <[email protected]>
Co-authored-by: Celina Hanouti <[email protected]>
1 parent 96ca7a4 commit cfc3eaf

File tree

2 files changed: +30 -1 lines changed


docs/inference-providers/_toctree.yml

Lines changed: 3 additions & 1 deletion
@@ -23,6 +23,8 @@
   title: How to use OpenAI gpt-oss
 - local: guides/image-editor
   title: Build an Image Editor
+- local: guides/vscode
+  title: VS Code with GitHub Copilot

 - local: tasks/index
   title: Inference Tasks
@@ -106,4 +108,4 @@
   title: Hub API

 - local: register-as-a-provider
-  title: Register as an Inference Provider
+  title: Register as an Inference Provider
docs/inference-providers/guides/vscode.md

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
# 🤗 Use Hugging Face Inference Providers with GitHub Copilot Chat in VS Code

![Demo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers-guides/demo_vscode.gif)

Use frontier open LLMs such as Kimi K2, DeepSeek V3.1, GLM 4.5, and more in VS Code with GitHub Copilot Chat, powered by [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/index) 🔥

## ⚡ Quick start

1. Install the [HF Copilot Chat extension](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode-chat).
2. Open VS Code's chat interface.
3. Open the model picker and select "Manage Models...".
4. Select the "Hugging Face" provider.
5. Enter your Hugging Face token. You can create one on your [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained).
6. Choose the models you want to add to the model picker. 🥳
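The token from step 5 can also be sanity-checked outside VS Code. A minimal sketch using only Python's standard library and Hugging Face's `whoami-v2` token-introspection endpoint; the `HF_TOKEN` environment variable and the `whoami_request` helper name are assumptions for illustration, and the request is built but not sent so it works offline:

```python
import os
import urllib.request

def whoami_request(token: str) -> urllib.request.Request:
    """Build (but don't send) a request that checks which account a token belongs to."""
    return urllib.request.Request(
        "https://huggingface.co/api/whoami-v2",
        headers={"Authorization": f"Bearer {token}"},
    )

req = whoami_request(os.environ.get("HF_TOKEN", "hf_xxx"))
# urllib.request.urlopen(req) would return your account info as JSON
# for a valid token; omitted here so the sketch runs without a real token.
print(req.full_url)
```

A `401` response from that endpoint means the token is missing or invalid; a `200` with your username means Copilot Chat will accept it too.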
## ✨ Why use the Hugging Face provider in Copilot

- Access [SoTA open-source LLMs](https://huggingface.co/models?pipeline_tag=text-generation&inference_provider=cerebras,together,fireworks-ai,nebius,novita,sambanova,groq,hyperbolic,nscale,fal-ai,cohere,replicate,scaleway,black-forest-labs,ovhcloud&sort=trending) with tool-calling capabilities.
- A single API for switching between providers such as Groq, Cerebras, Together AI, SambaNova, and more.
- Built for high availability (across providers) and low latency.
- Transparent pricing: what the provider charges is what you pay.
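The "single API" point can be sketched concretely: Inference Providers exposes an OpenAI-compatible chat completions endpoint, and switching providers is just a different model string. The model id and the `:groq` provider suffix below are illustrative, not a recommendation; the request is built with the standard library but never sent, so the sketch runs offline:

```python
import json
import os
import urllib.request

def chat_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-compatible chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://router.huggingface.co/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Same request shape, different provider: only the model string changes.
req = chat_request(
    "moonshotai/Kimi-K2-Instruct:groq",  # illustrative model id + provider suffix
    "Hello!",
    os.environ.get("HF_TOKEN", "hf_xxx"),
)
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline.
print(req.full_url)
```

Because the endpoint follows the OpenAI chat schema, any OpenAI-compatible client (including the one inside Copilot Chat) can target it by pointing its base URL at the router.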
💡 The free Hugging Face user tier includes a small amount of monthly inference credits for experimenting. Upgrade to [Hugging Face PRO](https://huggingface.co/pro) or [Team or Enterprise](https://huggingface.co/enterprise) for $2 in monthly credits plus pay-as-you-go access across all providers!

Check out the whole workflow in action in the video below:

<iframe width="560" height="315" src="https://www.youtube.com/embed/rqawpJhPhvM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
