# Hugging Face

[Hugging Face](https://huggingface.co/) is an AI platform with all major open source models, datasets, MCPs, and demos. You can use [Inference Providers](https://huggingface.co/docs/inference-providers) to run open source models like DeepSeek R1 on scalable serverless infrastructure.

## Install

To use `HuggingFaceModel`, you need to either install `pydantic-ai`, or install `pydantic-ai-slim` with the `huggingface` optional group:

```bash
pip/uv-add "pydantic-ai-slim[huggingface]"
```
| 12 | + |
## Configuration

To use [Hugging Face](https://huggingface.co/) inference, you'll need to set up an account, which gives you a [free tier](https://huggingface.co/docs/inference-providers/pricing) allowance on [Inference Providers](https://huggingface.co/docs/inference-providers). To set up inference, follow these steps:

1. Go to [Hugging Face](https://huggingface.co/join) and sign up for an account.
2. Create a new access token in [your settings](https://huggingface.co/settings/tokens).
3. Set the `HF_TOKEN` environment variable to the token you just created.

Once you have a Hugging Face access token, you can set it as an environment variable:

```bash
export HF_TOKEN='hf_token'
```
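If you prefer to configure the token from Python (for example in a notebook), you can set the same environment variable in-process instead of exporting it in your shell. A minimal sketch, using `'hf_xxx'` as a placeholder for your real token:

```python
import os

# Equivalent to `export HF_TOKEN=...`, but only for the current process.
# 'hf_xxx' is a placeholder; substitute your real access token.
os.environ['HF_TOKEN'] = 'hf_xxx'

# Hugging Face access tokens conventionally start with the 'hf_' prefix.
assert os.environ['HF_TOKEN'].startswith('hf_')
```

This must run before the model is created, since the token is read when the provider is set up.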
| 26 | + |
## Usage

You can then use [`HuggingFaceModel`][pydantic_ai.models.huggingface.HuggingFaceModel] by name:

```python
from pydantic_ai import Agent

agent = Agent('huggingface:Qwen/Qwen3-235B-A22B')
...
```

Or initialise the model directly with just the model name:

```python
from pydantic_ai import Agent
from pydantic_ai.models.huggingface import HuggingFaceModel

model = HuggingFaceModel('Qwen/Qwen3-235B-A22B')
agent = Agent(model)
...
```
| 48 | + |
By default, [`HuggingFaceModel`][pydantic_ai.models.huggingface.HuggingFaceModel] uses the
[`HuggingFaceProvider`][pydantic_ai.providers.huggingface.HuggingFaceProvider], which automatically selects the
first inference provider (e.g. Cerebras, Together AI, Cohere) available for the model, following the preferred
order you configure at [hf.co/settings/inference-providers](https://hf.co/settings/inference-providers).
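This selection behaves like a first-match lookup over your preference list. An illustrative sketch (not the library's actual code), with hypothetical provider names and availability:

```python
# Hypothetical preference order, as configured in your HF settings.
preference_order = ['nebius', 'fireworks-ai', 'together']

# Hypothetical set of providers currently serving this model.
available_for_model = {'fireworks-ai', 'together'}

# Pick the first provider in your preferred order that serves the model.
selected = next((p for p in preference_order if p in available_for_model), None)
print(selected)  # fireworks-ai
```

Here 'nebius' is skipped because it does not serve the model, so the next preference, 'fireworks-ai', is used.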
| 53 | + |
## Configure the provider

If you want to pass parameters in code to the provider, you can programmatically instantiate the
[`HuggingFaceProvider`][pydantic_ai.providers.huggingface.HuggingFaceProvider] and pass it to the model:

```python
from pydantic_ai import Agent
from pydantic_ai.models.huggingface import HuggingFaceModel
from pydantic_ai.providers.huggingface import HuggingFaceProvider

model = HuggingFaceModel(
    'Qwen/Qwen3-235B-A22B',
    provider=HuggingFaceProvider(api_key='hf_token', provider_name='nebius'),
)
agent = Agent(model)
...
```
| 68 | + |
## Custom Hugging Face client

[`HuggingFaceProvider`][pydantic_ai.providers.huggingface.HuggingFaceProvider] also accepts a custom
[`AsyncInferenceClient`][huggingface_hub.AsyncInferenceClient] client via the `hf_client` parameter, so you can customise
the `headers`, `bill_to` (billing to an HF organization you're a member of), `base_url`, etc., as defined in the
[Hugging Face Hub python library docs](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client).

```python
from huggingface_hub import AsyncInferenceClient

from pydantic_ai import Agent
from pydantic_ai.models.huggingface import HuggingFaceModel
from pydantic_ai.providers.huggingface import HuggingFaceProvider

client = AsyncInferenceClient(
    bill_to='openai',
    api_key='hf_token',
    provider='fireworks-ai',
)

model = HuggingFaceModel(
    'Qwen/Qwen3-235B-A22B',
    provider=HuggingFaceProvider(hf_client=client),
)
agent = Agent(model)
...
```