Commit f666b4a

Rename Hugging Face CLI from huggingface-cli to hf (#1847)

* update hf commands
* Update docs/hub/models-downloading.md

Co-authored-by: Lucain <[email protected]>

1 parent b952c11 commit f666b4a

24 files changed: 29 additions and 29 deletions
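For scripts that still call the legacy entry point, the rename can be applied mechanically. A hypothetical migration sketch (not part of this commit): the `demo.sh` file and the `sed` invocation are assumptions for illustration, while the old and new command names are taken directly from this diff.

```shell
# Hypothetical demo file using the legacy CLI names replaced in this PR.
printf 'huggingface-cli login\nhuggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset\n' > demo.sh

# Rewrite the legacy invocations to the renamed `hf` CLI in place
# (-i.bak keeps a backup; works with both GNU and BSD sed).
sed -i.bak \
  -e 's/huggingface-cli login/hf auth login/g' \
  -e 's/huggingface-cli download/hf download/g' \
  demo.sh

cat demo.sh
# hf auth login
# hf download HuggingFaceH4/ultrachat_200k --repo-type dataset
```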

docs/hub/datasets-argilla.md (1 addition, 1 deletion)

@@ -30,7 +30,7 @@ AI teams from companies like [the Red Cross](https://510.global/), [Loris.ai](ht
 First [login with your Hugging Face account](/docs/huggingface_hub/quick-start#login):
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Make sure you have `argilla>=2.0.0` installed:

docs/hub/datasets-dask.md (1 addition, 1 deletion)

@@ -13,7 +13,7 @@ Since Dask uses [fsspec](https://filesystem-spec.readthedocs.io) to read and wri
 First you need to [Login with your Hugging Face account](/docs/huggingface_hub/quick-start#login), for example using:
 
 ```
-huggingface-cli login
+hf auth login
 ```
 
 Then you can [Create a dataset repository](/docs/huggingface_hub/quick-start#create-a-repository), for example using:

docs/hub/datasets-distilabel.md (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ The Argilla community uses distilabel to create amazing [datasets](https://huggi
 First [login with your Hugging Face account](/docs/huggingface_hub/quick-start#login):
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Make sure you have `distilabel` installed:

docs/hub/datasets-downloading.md (2 additions, 2 deletions)

@@ -19,10 +19,10 @@ If a dataset on the Hub is tied to a [supported library](./datasets-libraries),
 You can use the [`huggingface_hub`](/docs/huggingface_hub) library to create, delete, update and retrieve information from repos. For example, to download the `HuggingFaceH4/ultrachat_200k` dataset from the command line, run
 
 ```bash
-huggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset
+hf download HuggingFaceH4/ultrachat_200k --repo-type dataset
 ```
 
-See the [huggingface-cli download documentation](https://huggingface.co/docs/huggingface_hub/en/guides/cli#download-a-dataset-or-a-space) for more information.
+See the [HF CLI download documentation](https://huggingface.co/docs/huggingface_hub/en/guides/cli#download-a-dataset-or-a-space) for more information.
 
 You can also integrate this into your own library! For example, you can quickly load a CSV dataset with a few lines using Pandas.
 ```py

docs/hub/datasets-duckdb-auth.md (1 addition, 1 deletion)

@@ -34,7 +34,7 @@ This command automatically retrieves the stored token from `~/.cache/huggingface
 First you need to [Login with your Hugging Face account](/docs/huggingface_hub/quick-start#login), for example using:
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Alternatively, you can set your Hugging Face token as an environment variable:

docs/hub/datasets-fiftyone.md (1 addition, 1 deletion)

@@ -18,7 +18,7 @@ FiftyOne datasets directly from the Hub.
 First [login with your Hugging Face account](/docs/huggingface_hub/quick-start#login):
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Make sure you have `fiftyone>=0.24.0` installed:

docs/hub/datasets-gated.md (1 addition, 1 deletion)

@@ -158,7 +158,7 @@ The dataset authors have complete control over dataset access. In particular, th
 To download files from a gated dataset you'll need to be authenticated. In the browser, this is automatic as long as you are logged in with your account. If you are using a script, you will need to provide a [user token](./security-tokens). In the Hugging Face Python ecosystem (`transformers`, `diffusers`, `datasets`, etc.), you can login your machine using the [`huggingface_hub`](/docs/huggingface_hub/index) library and running in your terminal:
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Alternatively, you can programmatically login using `login()` in a notebook or a script:

docs/hub/datasets-pandas.md (1 addition, 1 deletion)

@@ -41,7 +41,7 @@ You can save a pandas DataFrame using `to_csv/to_json/to_parquet` to a local fil
 To save the DataFrame on Hugging Face, you first need to [Login with your Hugging Face account](/docs/huggingface_hub/quick-start#login), for example using:
 
 ```
-huggingface-cli login
+hf auth login
 ```
 
 Then you can [Create a dataset repository](/docs/huggingface_hub/quick-start#create-a-repository), for example using:

docs/hub/datasets-polars-auth.md (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ pl.read_parquet(
 
 ## CLI
 
-Alternatively, you can you use the [Hugging Face CLI](/docs/huggingface_hub/en/guides/cli) to authenticate. After successfully logging in with `huggingface-cli login` an access token will be stored in the `HF_HOME` directory which defaults to `~/.cache/huggingface`. Polars will then use this token for authentication.
+Alternatively, you can you use the [Hugging Face CLI](/docs/huggingface_hub/en/guides/cli) to authenticate. After successfully logging in with `hf auth login` an access token will be stored in the `HF_HOME` directory which defaults to `~/.cache/huggingface`. Polars will then use this token for authentication.
 
 If multiple methods are specified, they are prioritized in the following order:

docs/hub/datasets-pyarrow.md (1 addition, 1 deletion)

@@ -55,7 +55,7 @@ You can save a pyarrow Table using `pyarrow.parquet.write_table` to a local file
 To save the Table on Hugging Face, you first need to [Login with your Hugging Face account](/docs/huggingface_hub/quick-start#login), for example using:
 
 ```
-huggingface-cli login
+hf auth login
 ```
 
 Then you can [create a dataset repository](/docs/huggingface_hub/quick-start#create-a-repository), for example using:

0 commit comments