Commit 973e946

Fix url typo in Inference API docs (#1416)
1 parent 9953a78 commit 973e946

2 files changed: +2, -2 lines changed

docs/api-inference/rate-limits.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@

The Inference API has rate limits based on the number of requests. These rate limits are subject to change in the future to be compute-based or token-based.

- Serverless API is not meant to be used for heavy production applications. If you need higher rate limits, consider [Inference Endpoints](https://huggingface.co/docs/inference/endpoints) to have dedicated resources.
+ Serverless API is not meant to be used for heavy production applications. If you need higher rate limits, consider [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to have dedicated resources.

| User Tier | Rate Limit |
|---------------------|---------------------------|
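
For context, a minimal sketch of how the Serverless Inference API paragraph above plays out in code, using the `huggingface_hub` Python client and backing off when the per-request rate limit is hit. The model id, token placeholder, and retry policy are illustrative assumptions, not part of this commit.

```python
# Minimal sketch, assuming huggingface_hub's InferenceClient is available.
import time

from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

# Hypothetical model id and token placeholder, for illustration only.
client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.2", token="hf_xxx")


def generate_with_backoff(prompt: str, retries: int = 3) -> str:
    """Call the Serverless Inference API, retrying when the request rate limit is hit."""
    for attempt in range(retries):
        try:
            return client.text_generation(prompt, max_new_tokens=64)
        except HfHubHTTPError as err:
            # HTTP 429 signals that the per-request rate limit was exceeded.
            if err.response is not None and err.response.status_code == 429:
                time.sleep(2 ** attempt)  # simple exponential backoff
                continue
            raise
    raise RuntimeError("rate limit still exceeded after retries")


print(generate_with_backoff("Explain rate limiting in one sentence."))
```

For sustained traffic, the paragraph's advice stands: move to dedicated Inference Endpoints rather than retrying harder.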

docs/api-inference/supported-models.md

Lines changed: 1 addition & 1 deletion
@@ -23,4 +23,4 @@ In addition to thousands of public models available in the Hub, PRO and Enterpri

## Running Private Models

- The free Serverless API is designed to run popular public models. If you have a private model, you can use [Inference Endpoints](https://huggingface.co/docs/inference/endpoints) to deploy it.
+ The free Serverless API is designed to run popular public models. If you have a private model, you can use [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to deploy it.
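
Similarly, a minimal sketch of the private-model path described above: pointing the same client at a dedicated Inference Endpoint instead of the Serverless API. The endpoint URL and token are placeholders, not values from this commit.

```python
# Minimal sketch, assuming the private model is already deployed on Inference Endpoints.
from huggingface_hub import InferenceClient

# Placeholder endpoint URL and token; InferenceClient accepts a deployed endpoint URL
# in place of a Hub model id, so requests go to dedicated resources instead of the
# shared Serverless API.
client = InferenceClient(
    model="https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud",  # hypothetical URL
    token="hf_xxx",
)

print(client.text_generation("Hello from a private deployment.", max_new_tokens=32))
```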
