# 🤗 Hugging Face Inference Providers for VS Code Copilot

![Demo of the Hugging Face provider in VS Code Copilot Chat](assets/demo.gif)

<!-- Place demo.gif at docs/inference-providers/guides/assets/demo.gif -->

Bring SoTA open‑source LLMs to VS Code Copilot Chat with [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/index), built on the [Language Model Chat Provider API](https://code.visualstudio.com/api/extension-guides/ai/language-model-chat-provider).

## ⚡ Quick start
1. Install the HF Copilot Chat extension [here](#todo).
2. Open VS Code's chat interface.
3. Click the model picker and select "Manage Models...".
4. Select the "Hugging Face" provider.
5. Provide your Hugging Face token; you can create one from your [settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). To confirm it works outside VS Code, see the optional check after these steps.
6. Select the models you want to add to the model picker.
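
If you want to confirm your token and inference credits before using them in Copilot, you can send a single request to Inference Providers outside the editor. The sketch below is optional and not part of the extension setup; it assumes `huggingface_hub` is installed (`pip install huggingface_hub`), your token is exported as `HF_TOKEN`, and it uses `deepseek-ai/DeepSeek-V3-0324` purely as an example model.

```python
# Optional sanity check: send one chat completion through Hugging Face Inference Providers.
# Assumes `pip install huggingface_hub` and an HF_TOKEN environment variable set to the
# token created in step 5 above.
import os

from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ["HF_TOKEN"])

response = client.chat_completion(
    model="deepseek-ai/DeepSeek-V3-0324",  # example model; any chat model on Inference Providers works
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

If this prints a reply, the same token should work in the Copilot model picker.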

## ✨ Why use the Hugging Face provider in Copilot
- 4k+ open‑source LLMs with tool‑calling capabilities.
- A single API to thousands of open‑source LLMs, served by providers like Groq, Cerebras, Together AI, SambaNova, and more.
- Built for high availability and low latency, backed by world‑class providers.
- No extra markup on provider rates.

💡 The free tier gives you a small amount of monthly inference credits to experiment. Upgrade to [Hugging Face PRO](https://huggingface.co/pro) or [Enterprise](https://huggingface.co/enterprise) for $2 in monthly credits plus pay‑as‑you‑go access across all providers!