# Your First Inference Provider Call

In this guide, we'll help you make your first API call with Inference Providers.

Many developers avoid open-source AI models because they assume deployment is complex. This guide shows you how to use a state-of-the-art model in under five minutes, with no infrastructure setup required.

We're going to use [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell), a powerful text-to-image model.

<Tip>

This guide assumes you have a Hugging Face account. If you don't have one, you can create one for free at [huggingface.co](https://huggingface.co).

</Tip>

## Step 1: Find a Model on the Hub

Visit the [Hugging Face Hub](https://huggingface.co/models), filter the model list with the "Inference Providers" filter, and select the provider you want. We'll go with `fal`.

![search image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers-guides/search.png)

For this example, we'll use [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell). Navigate to the model page and scroll down to find the inference widget on the right side.
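
If you'd rather search from a script, the `huggingface_hub` library exposes the same filter. Here's a minimal sketch, assuming a recent `huggingface_hub` release that supports the `inference_provider` argument to `list_models`:

```python
from huggingface_hub import HfApi

api = HfApi()

# List a few text-to-image models served by fal
# (the `inference_provider` filter assumes a recent huggingface_hub release)
for model in api.list_models(
    task="text-to-image",
    inference_provider="fal-ai",
    limit=5,
):
    print(model.id)
```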

## Step 2: Try the Interactive Widget

Before writing any code, try the widget directly on the model page:

![widget image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers-guides/widget.png)

Here, you can test the model directly in the browser from any of the available providers. You can also copy relevant code snippets to use in your own projects.

1. Enter a prompt like "A serene mountain landscape at sunset"
2. Click **"Generate"**
3. Watch as the model creates an image in seconds

This widget uses the same endpoint you're about to implement in code.

<Tip warning={true}>

You'll need a Hugging Face account (free at [huggingface.co](https://huggingface.co)) and remaining credits to use the model.

</Tip>

## Step 3: From Clicks to Code

Now let's replicate the widget's call in code. Click the **"View Code Snippets"** button in the widget to see the generated code snippets.

![code snippets image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers-guides/code-snippets.png)

You will need to populate the snippet with a valid Hugging Face User Access Token, which you can find on your [settings page](https://huggingface.co/settings/tokens).

Set your token as an environment variable:

```bash
export HF_TOKEN="your_token_here"
```

The Python and TypeScript snippets below read the token from this environment variable.
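
Before going further, you can optionally confirm the variable is visible to your scripts with a quick sanity check:

```python
import os

# Fail fast with a clear message if the token isn't set
token = os.environ.get("HF_TOKEN")
if not token:
    raise RuntimeError("HF_TOKEN is not set; run `export HF_TOKEN=...` first.")
print("Token found:", token[:7] + "...")  # user access tokens start with "hf_"
```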

<hfoptions id="python-code-snippet">

<hfoption id="python">

Install the required package:

```bash
pip install huggingface_hub
```

You can now use the code snippet to generate an image:

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fal-ai",
    api_key=os.environ["HF_TOKEN"],
)

# output is a PIL.Image object
image = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-schnell",
)

# save the result locally
image.save("astronaut.png")
```
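
You don't have to pin a provider, either. Recent `huggingface_hub` releases also accept `provider="auto"`, which routes the call to the first available provider for the model; here's a minimal variation under that assumption:

```python
import os
from huggingface_hub import InferenceClient

# "auto" asks Hugging Face to pick an available provider for the model
# (assumes a recent huggingface_hub release that supports provider="auto")
client = InferenceClient(
    provider="auto",
    api_key=os.environ["HF_TOKEN"],
)

image = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-schnell",
)
```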

</hfoption>

<hfoption id="typescript">

Install the required package:

```bash
npm install @huggingface/inference
```

```typescript
import { writeFile } from "node:fs/promises";
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const image = await client.textToImage({
  provider: "fal-ai",
  model: "black-forest-labs/FLUX.1-schnell",
  inputs: "Astronaut riding a horse",
  parameters: { num_inference_steps: 5 },
});

// The generated image is a Blob; save it locally
await writeFile("astronaut.png", Buffer.from(await image.arrayBuffer()));
```

</hfoption>

</hfoptions>

## What Just Happened?

Nice work! You've successfully used a production-grade AI model without any complex setup. In just a few lines of code, you:

- Connected to a powerful text-to-image model
- Generated a custom image from text
- Saved the result locally

The model you just used runs on professional infrastructure, handling scaling, optimization, and reliability automatically.

## Next Steps

Now that you've seen how easy it is to use AI models, you might wonder:

- What was that "provider" system doing behind the scenes?
- How does billing work?
- What other models can you use?

Continue to the next guide to understand the provider ecosystem and make informed choices about authentication and billing.
