Commit 43b6254

julien-c, Wauplin, and hanouticelina authored

selective Rename of api-inference => inference-providers (#1666)

* selective rename
* folder rename
* rename scripts/api-inference to scripts/inference-providers
* Update docs/hub/billing.md
  Co-authored-by: célina <[email protected]>
* Update docs/hub/models-inference.md
  Co-authored-by: célina <[email protected]>
* more renames

---------

Co-authored-by: Wauplin <[email protected]>
Co-authored-by: célina <[email protected]>
1 parent 47b3ffa commit 43b6254

File tree: 69 files changed (+77, −74 lines)


.github/workflows/api_inference_build_documentation.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -3,7 +3,7 @@ name: Build Inference API documentation
 on:
   push:
     paths:
-      - "docs/api-inference/**"
+      - "docs/inference-providers/**"
     branches:
       - main
 
@@ -13,8 +13,8 @@ jobs:
     with:
       commit_sha: ${{ github.sha }}
       package: hub-docs
-      package_name: api-inference
-      path_to_docs: hub-docs/docs/api-inference/
+      package_name: inference-providers
+      path_to_docs: hub-docs/docs/inference-providers/
       additional_args: --not_python_module
     secrets:
       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
```

.github/workflows/api_inference_build_pr_documentation.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -3,7 +3,7 @@ name: Build Inference API PR Documentation
 on:
   pull_request:
     paths:
-      - "docs/api-inference/**"
+      - "docs/inference-providers/**"
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -16,6 +16,6 @@ jobs:
       commit_sha: ${{ github.event.pull_request.head.sha }}
       pr_number: ${{ github.event.number }}
       package: hub-docs
-      package_name: api-inference
-      path_to_docs: hub-docs/docs/api-inference/
+      package_name: inference-providers
+      path_to_docs: hub-docs/docs/inference-providers/
       additional_args: --not_python_module
```

.github/workflows/api_inference_generate_documentation.yml

Lines changed: 7 additions & 7 deletions
````diff
@@ -23,23 +23,23 @@ jobs:
       with:
         run_install: |
           - recursive: true
-            cwd: ./scripts/api-inference
+            cwd: ./scripts/inference-providers
             args: [--frozen-lockfile]
-        package_json_file: ./scripts/api-inference/package.json
+        package_json_file: ./scripts/inference-providers/package.json
     - name: Update huggingface/tasks package
-      working-directory: ./scripts/api-inference
+      working-directory: ./scripts/inference-providers
       run: |
         pnpm update @huggingface/tasks@latest
     # Generate
     - name: Generate API inference documentation
       run: pnpm run generate
-      working-directory: ./scripts/api-inference
+      working-directory: ./scripts/inference-providers
 
     # Check changes
     - name: Check changes
       run: |
         git diff --name-only > changed_files.txt
-        if grep -v -E "^(scripts/api-inference/package.json|scripts/api-inference/pnpm-lock.yaml)$" changed_files.txt | grep -q '.'; then
+        if grep -v -E "^(scripts/inference-providers/package.json|scripts/inference-providers/pnpm-lock.yaml)$" changed_files.txt | grep -q '.'; then
           echo "changes_detected=true" >> $GITHUB_ENV
         else
           echo "changes_detected=false" >> $GITHUB_ENV
@@ -58,13 +58,13 @@ jobs:
       with:
         token: ${{ secrets.TOKEN_INFERENCE_SYNC_BOT }}
         commit-message: Update API inference documentation (automated)
-        branch: update-api-inference-docs-automated-pr
+        branch: update-inference-providers-docs-automated-pr
         delete-branch: true
         title: "[Bot] Update API inference documentation"
         body: |
           This PR automatically upgrades the `@huggingface/tasks` package and regenerates the API inference documentation by running:
           ```sh
-          cd scripts/api-inference
+          cd scripts/inference-providers
           pnpm update @huggingface/tasks@latest
           pnpm run generate
           ```
````

.github/workflows/api_inference_upload_pr_documentation.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ jobs:
   build:
     uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
     with:
-      package_name: api-inference
+      package_name: inference-providers
     secrets:
       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
       comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
```

docs/hub/billing.md

Lines changed: 4 additions & 3 deletions
```diff
@@ -25,9 +25,10 @@ Private repository storage above the [included storage](./storage-limits) will b
 
 The PRO subscription unlocks additional features for users, including:
 
-- Higher free tier for the Serverless Inference API and when consuming ZeroGPU Spaces
-- Higher [storage capacity](./storage-limits) for private repositories
+- Higher tier for ZeroGPU Spaces usage
 - Ability to create ZeroGPU Spaces and use Dev Mode
+- Included credits for [Inference Providers](/docs/inference-providers/)
+- Higher [storage capacity](./storage-limits) for private repositories
 - Ability to write Social Posts and Community Blogs
 - Leverage the Dataset Viewer on private datasets
 
@@ -48,7 +49,7 @@ It is billed with the renewal invoices of your PRO or Enterprise Hub subscriptio
 
 ## Compute Services on the Hub
 
-We also directly provide compute services with [Spaces](./spaces), [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) and the [Serverless Inference API](https://huggingface.co/docs/api-inference/index).
+We also directly provide compute services with [Spaces](./spaces), [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) and [Inference Providers](https://huggingface.co/docs/inference-providers/index).
 
 While most of our compute services have a comprehensive free tier, users and organizations can pay to access more powerful hardware accelerators.
 
```

docs/hub/models-inference.md

Lines changed: 7 additions & 7 deletions
```diff
@@ -1,9 +1,9 @@
-# Serverless Inference API
+# Inference Providers
 
-Please refer to [Serverless Inference API Documentation](https://huggingface.co/docs/api-inference) for detailed information.
+Please refer to the [Inference Providers Documentation](https://huggingface.co/docs/inference-providers) for detailed information.
 
 
-## What technology do you use to power the Serverless Inference API?
+## What technology do you use to power the HF-Inference API?
 
 For 🤗 Transformers models, [Pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) power the API.
 
@@ -14,13 +14,13 @@ On top of `Pipelines` and depending on the model type, there are several product
 
 For models from [other libraries](./models-libraries), the API uses [Starlette](https://www.starlette.io) and runs in [Docker containers](https://github.com/huggingface/api-inference-community/tree/main/docker_images). Each library defines the implementation of [different pipelines](https://github.com/huggingface/api-inference-community/tree/main/docker_images/sentence_transformers/app/pipelines).
 
-## How can I turn off the Serverless Inference API for my model?
+## How can I turn off the HF-Inference API for my model?
 
 Specify `inference: false` in your model card's metadata.
 
 ## Why don't I see an inference widget, or why can't I use the API?
 
-For some tasks, there might not be support in the Serverless Inference API, and, hence, there is no widget.
+For some tasks, there might not be support in the HF-Inference API, and, hence, there is no widget.
 For all libraries (except 🤗 Transformers), there is a [library-to-tasks.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/library-to-tasks.ts) of supported tasks in the API. When a model repository has a task that is not supported by the repository library, the repository has `inference: false` by default.
 
 ## Can I send large volumes of requests? Can I get accelerated APIs?
@@ -31,6 +31,6 @@ If you are interested in accelerated inference, higher volumes of requests, or a
 
 You can check your usage in the [Inference Dashboard](https://ui.endpoints.huggingface.co/endpoints). The dashboard shows both your serverless and dedicated endpoints usage.
 
-## Is there programmatic access to the Serverless Inference API?
+## Is there programmatic access to the HF-Inference API?
 
-Yes, the `huggingface_hub` library has a client wrapper documented [here](https://huggingface.co/docs/huggingface_hub/how-to-inference).
+Yes, the `huggingface_hub` library has a client wrapper documented [here](https://huggingface.co/docs/huggingface_hub/guides/inference).
```
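The final hunk above repoints the docs at the `huggingface_hub` inference guide. For context, a minimal sketch of that programmatic access via `InferenceClient`; the model ID and input text are illustrative, not part of the commit:

```python
from huggingface_hub import InferenceClient

# Reads the HF token from the environment (HF_TOKEN) if not passed explicitly.
client = InferenceClient()

# Illustrative model ID and input; any text-classification model on the Hub works.
result = client.text_classification(
    "I like you. I love you.",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(result)  # list of labels with confidence scores
```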

docs/hub/models-the-hub.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@
 
 ## What is the Model Hub?
 
-The Model Hub is where the members of the Hugging Face community can host all of their model checkpoints for simple storage, discovery, and sharing. Download pre-trained models with the [`huggingface_hub` client library](https://huggingface.co/docs/huggingface_hub/index), with 🤗 [`Transformers`](https://huggingface.co/docs/transformers/index) for fine-tuning and other usages or with any of the over [15 integrated libraries](./models-libraries). You can even leverage the [Serverless Inference API](./models-inference) or [Inference Endpoints](https://huggingface.co/docs/inference-endpoints). to use models in production settings.
+The Model Hub is where the members of the Hugging Face community can host all of their model checkpoints for simple storage, discovery, and sharing. Download pre-trained models with the [`huggingface_hub` client library](https://huggingface.co/docs/huggingface_hub/index), with 🤗 [`Transformers`](https://huggingface.co/docs/transformers/index) for fine-tuning and other usages or with any of the over [15 integrated libraries](./models-libraries). You can even leverage [Inference Providers](/docs/inference-providers/) or [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to use models in production settings.
 
 You can refer to the following video for a guide on navigating the Model Hub:
 
```

docs/hub/models-widgets.md

Lines changed: 5 additions & 3 deletions
````diff
@@ -168,9 +168,9 @@ Here are some links to examples:
 - `table-question-answering`, for instance [`google/tapas-base-finetuned-wtq`](https://huggingface.co/google/tapas-base-finetuned-wtq)
 - `sentence-similarity`, for instance [`osanseviero/full-sentence-distillroberta2`](/osanseviero/full-sentence-distillroberta2)
 
-## How can I control my model's widget Inference API parameters?
+## How can I control my model's widget HF-Inference API parameters?
 
-Generally, the Inference API for a model uses the default pipeline settings associated with each task. But if you'd like to change the pipeline's default settings and specify additional inference parameters, you can configure the parameters directly through the model card metadata. Refer [here](https://huggingface.co/docs/api-inference/detailed_parameters) for some of the most commonly used parameters associated with each task.
+Generally, the HF-Inference API for a model uses the default pipeline settings associated with each task. But if you'd like to change the pipeline's default settings and specify additional inference parameters, you can configure the parameters directly through the model card metadata. Refer [here](https://huggingface.co/docs/inference-providers/detailed_parameters) for some of the most commonly used parameters associated with each task.
 
 For example, if you want to specify an aggregation strategy for a NER task in the widget:
 
@@ -188,4 +188,6 @@ inference:
     temperature: 0.7
 ```
 
-The Serverless inference API allows you to send HTTP requests to models in the Hugging Face Hub programatically. ⚡⚡ Learn more about it by reading the [Inference API documentation](./models-inference). Finally, you can also deploy all those models to dedicated [Inference Endpoints](https://huggingface.co/docs/inference-endpoints).
+Inference Providers allows you to send HTTP requests to models in the Hugging Face Hub programatically. It is an abstraction layer on top of External providers. ⚡⚡ Learn more about it by reading the [
+Inference Providers documentation](/docs/inference-providers).
+Finally, you can also deploy all those models to dedicated [Inference Endpoints](https://huggingface.co/docs/inference-endpoints).
````
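The `temperature: 0.7` metadata example in this hunk sets a widget default; the same parameters can also be passed per request when calling models programmatically. A minimal sketch with `huggingface_hub`; the model ID and parameter values are illustrative, not part of the commit:

```python
from huggingface_hub import InferenceClient

client = InferenceClient()

# Per-request parameters override the pipeline/widget defaults,
# mirroring the `temperature: 0.7` metadata example above.
output = client.text_generation(
    "The Hugging Face Hub is",
    model="HuggingFaceH4/zephyr-7b-beta",  # illustrative model ID
    temperature=0.7,
    max_new_tokens=50,
)
print(output)
```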

docs/hub/oauth.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -35,7 +35,7 @@ The currently supported scopes are:
 - `read-repos`: Get read access to the user's personal repos.
 - `write-repos`: Get write/read access to the user's personal repos.
 - `manage-repos`: Get full access to the user's personal repos. Also grants repo creation and deletion.
-- `inference-api`: Get access to the [Inference API](https://huggingface.co/docs/api-inference/index), you will be able to make inference requests on behalf of the user.
+- `inference-api`: Get access to the [Inference API](https://huggingface.co/docs/inference-providers/index), you will be able to make inference requests on behalf of the user.
 - `write-discussions`: Open discussions and Pull Requests on behalf of the user as well as interact with discussions (including reactions, posting/editing comments, closing discussions, ...). To open Pull Requests on private repos, you need to request the `read-repos` scope as well.
 
 All other information is available in the [OpenID metadata](https://huggingface.co/.well-known/openid-configuration).
```

docs/hub/spaces-oauth.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -81,7 +81,7 @@ Those scopes are optional and can be added by setting `hf_oauth_scopes` in your
 - `read-repos`: Get read access to the user's personal repos.
 - `write-repos`: Get write/read access to the user's personal repos.
 - `manage-repos`: Get full access to the user's personal repos. Also grants repo creation and deletion.
-- `inference-api`: Get access to the [Inference API](https://huggingface.co/docs/api-inference/index), you will be able to make inference requests on behalf of the user.
+- `inference-api`: Get access to the [Inference API](https://huggingface.co/docs/inference-providers/index), you will be able to make inference requests on behalf of the user.
 - `write-discussions`: Open discussions and Pull Requests on behalf of the user as well as interact with discussions (including reactions, posting/editing comments, closing discussions, ...). To open Pull Requests on private repos, you need to request the `read-repos` scope as well.
 
 ## Accessing organization resources
```
