Merged
4 changes: 2 additions & 2 deletions docs/hub/mlx.md
@@ -70,9 +70,9 @@ response = generate(model, tokenizer, prompt="hello", verbose=True)
MLX-LM supports popular LLM architectures including LLaMA, Phi-2, Mistral, and Qwen. Models beyond those listed can also be downloaded as follows:

```bash
-pip install huggingface_hub hf_transfer
+pip install "huggingface_hub[hf_xet]"

-export HF_HUB_ENABLE_HF_TRANSFER=1
+export HF_XET_HIGH_PERFORMANCE=1
 hf download --local-dir <LOCAL FOLDER PATH> <USER_ID>/<MODEL_NAME>
```
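The `hf_xet` extra installs the `hf_xet` package, which `huggingface_hub` picks up automatically when it is present. As a minimal sketch (assuming only that the backend is importable under the package name `hf_xet`), you can check from Python whether the accelerated backend would be used:

```python
import importlib.util

def xet_available() -> bool:
    # huggingface_hub switches to the Xet-backed transfer path automatically
    # when the "hf_xet" package is importable; this only checks its presence.
    return importlib.util.find_spec("hf_xet") is not None

print(xet_available())
```

If this prints `False`, downloads still work but fall back to the regular HTTP transfer path.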

16 changes: 4 additions & 12 deletions docs/hub/models-downloading.md
@@ -51,18 +51,10 @@ Add your SSH public key to [your user settings](https://huggingface.co/settings/
## Faster downloads

 If you are running on a machine with high bandwidth,
-you can increase your download speed with [`hf_transfer`](https://github.com/huggingface/hf_transfer),
-a Rust-based library developed to speed up file transfers with the Hub.
+you can increase your download speed with [`hf_xet`](https://github.com/huggingface/xet-core),
+a Rust-based library developed to speed up file transfers with the Hub, powered by [Xet](https://huggingface.co/docs/hub/en/xet/index).

```bash
-pip install "huggingface_hub[hf_transfer]"
-HF_HUB_ENABLE_HF_TRANSFER=1 hf download ...
+pip install "huggingface_hub[hf_xet]"
+HF_XET_HIGH_PERFORMANCE=1 hf download ...
```

-> [!WARNING]
-> `hf_transfer` is a power user tool!
-> It is tested and production-ready,
-> but it lacks user-friendly features like advanced error handling or proxies.
-> For more details, please take a look at this [guide](https://huggingface.co/docs/huggingface_hub/hf_transfer).
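Both hunks gate high-performance mode on the `HF_XET_HIGH_PERFORMANCE` environment variable; it can equally be set from Python instead of the shell. A minimal sketch (the variable name comes from the diff above; setting it via `os.environ` is an illustrative alternative, not part of this change):

```python
import os

# The variable is read from the environment at transfer time, so set it
# before the first download call in the process.
os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"

print(os.environ["HF_XET_HIGH_PERFORMANCE"])
```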