docs/api-inference/pricing.md
1 addition, 1 deletion
@@ -39,7 +39,7 @@ Here is a table that sums up what we've seen so far:
 |**Routed request with custom key**| Yes | Provider | No | Yes | SDKs, Playground, widgets, Data AI Studio |
 |**Direct call**| No | Provider | No | Yes | SDKs only |
 
-## HFInference cost
+## HF-Inference cost
 
 
 As you may have noticed, you can select the `"hf-inference"` provider. This is what used to be the "Inference API (serverless)" prior to the Inference Providers integration. From a user's point of view, working with HF Inference is the same as working with any other provider. Past the free-tier credits, you are charged for every inference request based on the compute time × the price of the underlying hardware.
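
For illustration, here is a minimal sketch of selecting this provider with the `huggingface_hub` Python client; the model ID is only an example and is assumed to be available on HF-Inference:

```python
from huggingface_hub import InferenceClient

# Route requests through the "hf-inference" provider (formerly the
# serverless Inference API). Past the free-tier credits, each request is
# billed as compute time × price of the underlying hardware.
client = InferenceClient(provider="hf-inference")

response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model ID, assumed deployed on HF-Inference
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```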