* add hf_xet as an optional dependency
* update installed packages at runtime
* split xet testing in CI
* fix workflow
* fix windows
* Xet download workflow (#2875)
* first draft
* remove comment
* hf_xet instead of xet
* update docstring
* fix
* update docstring
* simplify typing
* quality
* add logging
* fix tests
* add unit tests for xet utilities
* first draft of download testing
* more tests
* address some comments
* fix tests
* check if hf_xet is available or not
* remove unnecessary dest dir creation
* keep comment
Co-authored-by: Lucain <[email protected]>
* post-review improvements
* Update tests/test_xet_download.py
---------
Co-authored-by: Lucain <[email protected]>
* Add ability to enable/disable xet storage on a repo (#2893)
* add ability to enable/disable xet storage
* add test
* better way to check if all settings are none
* don't strip authorization header with downloading with xet
* update comment
* Xet upload workflow (#2887)
* add upload workflow
* fixes and tests
* use helper for progress bar
* use tmp repo in tests
* some fixes
* update tests
* mock HF_XET_CACHE
* fix tests
* fix utils tests
* debug CI
* fix
* check if xet is enabled
* debug CI
* debug CI again
* revert
* debugging
* don't rerun xet tests
* revert
* remove pytest timeout
* don't run tests in parallel
* add comment
* revert and rename variable
* don't skip tests
* remove warning
* fix tests
* Apply suggestions from code review
* fixes
* fix syntax error with python 3.8
* catch Invalid credentials
* fix
* record Space API VCR test
* use raise instead of raise e
Co-authored-by: Lucain <[email protected]>
* disable xet storage for the other tests
* reverting
* isolate xet tests for windows
* fix windows
* install hf_xet for xet testing
---------
Co-authored-by: Lucain <[email protected]>
Co-authored-by: Lucain Pouget <[email protected]>
* Xet Docs for huggingface_hub (#2899)
* Xet docs
* PR feedback, added waitlist links
* Added HF_XET_CACHE env variable docs
* PR feedback
* Doc feedback
* Added two lines about flow of upload/download
* Updating links to Hub doc location
* Reformat headings, less levels in TOC
---------
Co-authored-by: Julien Chaumond <[email protected]>
Co-authored-by: Pierric Cistac <[email protected]>
Co-authored-by: Célina <[email protected]>
Co-authored-by: Lucain <[email protected]>
* Adding Token Refresh Xet Test (#2932)
Directly calling hfxet.download_files() with token_refresher callback
to ensure that hfxet calls the token refresher as expected.
---------
Co-authored-by: Celina Hanouti <[email protected]>
* Using a two stage download path for xet files. (#2920)
* Adding request header on resolve endpoint indicating that we can receive xet info.
* Adding test to ensure that the header is always sent on metadata request
* Using a two stage download path for xet files.
* Using the GET call's JSON
* Using xet_backed for the whether the file is a xet file or not to disambiguate from whether xet is enabled
* Adding and fixing tests
* Testing fix WIP
* Rewriting xet download to use the refresh route to resolve the xetmetadata
* Parameter type check
* Docs
* Removing extraneous constant
* Fixing file_download tests
* Readding the refresh route into the file metadata
* Refactoring the XetMetadata object into two objects to reflect the Hub changes.
* Fixing broken tests
* Code cleanup from self review
* Fixing types
* Quality & Lint
* Handling when hub returns the entire refresh route in its headers.
* Update tests/test_xet_utils.py
* Fixing merge conflicts in the new tests
* Extracting the refresh route from the link header (#2953)
* Getting the refresh route from the links header
* refactor xet_file_data func signature & tests
Co-authored-by: Lucain <[email protected]>
Co-authored-by: Rajat Arya <[email protected]>
* Update src/huggingface_hub/constants.py
Co-authored-by: Célina <[email protected]>
---------
Co-authored-by: Celina Hanouti <[email protected]>
Co-authored-by: Rajat Arya <[email protected]>
Co-authored-by: Julien Chaumond <[email protected]>
Co-authored-by: Pierric Cistac <[email protected]>
Co-authored-by: Brian Ronan <[email protected]>
Co-authored-by: Rajat Arya <[email protected]>
File changed: docs/source/en/guides/download.md (24 additions, 0 deletions)
@@ -166,6 +166,30 @@ For more details about the CLI download command, please refer to the [CLI guide]

## Faster downloads

There are two options to speed up downloads. Both involve installing a Python package written in Rust.

* `hf_xet` is newer and uses the Xet storage backend for uploads and downloads. It is available in production, but is in the process of being rolled out to all users, so join the [waitlist](https://huggingface.co/join/xet) to get onboarded soon!
* `hf_transfer` is a power tool for downloading from and uploading to our LFS storage backend (note: this is less future-proof than Xet). It is thoroughly tested and has been in production for a long time, but it has some limitations.

### hf_xet

Take advantage of faster downloads through `hf_xet`, the Python binding to the [`xet-core`](https://github.com/huggingface/xet-core) library that enables chunk-based deduplication for faster downloads and uploads. `hf_xet` integrates seamlessly with `huggingface_hub`, but uses the Rust `xet-core` library and Xet storage instead of LFS.

`hf_xet` uses the Xet storage system, which breaks files down into immutable chunks, storing collections of these chunks (called blocks or xorbs) remotely and retrieving them to reassemble the file when requested. When downloading, after confirming that the user is authorized to access the files, `hf_xet` queries the Xet content-addressable service (CAS) with the LFS SHA256 hash of the file to receive the reconstruction metadata (ranges within xorbs) needed to assemble the file, along with presigned URLs to download the xorbs directly. `hf_xet` then efficiently downloads the necessary xorb ranges and writes the files out to disk. `hf_xet` uses a local disk cache to download chunks only once; learn more in the [Chunk-based caching (Xet)](./manage-cache.md#chunk-based-caching-xet) section.
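The chunk-based deduplication idea described above can be sketched in a few lines. This is purely illustrative: fixed-size splitting stands in for Xet's content-defined chunking, an in-memory dict stands in for remote xorb storage, and the names (`chunk_bytes`, `dedup_chunks`) are hypothetical, not part of `hf_xet`'s API.

```python
import hashlib

def chunk_bytes(data: bytes, chunk_size: int = 64 * 1024):
    """Split data into fixed-size chunks (a simplification: real Xet
    chunking is content-defined, so boundaries depend on the content)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def dedup_chunks(files: dict):
    """Store each distinct chunk once, keyed by its hash; per-file
    manifests record how to reassemble each file from chunk hashes."""
    store, manifests = {}, {}
    for name, data in files.items():
        manifest = []
        for chunk in chunk_bytes(data):
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # shared chunks are stored only once
            manifest.append(digest)
        manifests[name] = manifest
    return store, manifests

# Two files sharing a 64KB prefix: four chunks total, but only three are stored.
shared = b"a" * 65536
store, manifests = dedup_chunks({"f1.bin": shared + b"x", "f2.bin": shared + b"y"})
print(len(store))  # 3
```

Because the shared prefix hashes to the same key for both files, it is transferred and stored once; only the differing tails are new.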
To enable it, specify the `hf_xet` package when installing `huggingface_hub`:

```bash
pip install -U huggingface_hub[hf_xet]
```

Note: `hf_xet` is only used when the files being downloaded are stored with Xet Storage.

All other `huggingface_hub` APIs will continue to work without any modification. To learn more about the benefits of Xet storage and `hf_xet`, refer to this [section](https://huggingface.co/docs/hub/storage-backends).

### hf_transfer

If you are running on a machine with high bandwidth, you can increase your download speed with [`hf_transfer`](https://github.com/huggingface/hf_transfer), a Rust-based library developed to speed up file transfers with the Hub.
`huggingface_hub` utilizes the local disk as two caches, which avoid re-downloading items. The first is a file-based cache, which caches individual files downloaded from the Hub and ensures that the same file is not downloaded again when a repo gets updated. The second is a chunk cache, where each chunk represents a byte range from a file; it ensures that chunks shared across files are only downloaded once.

## File-based caching

The Hugging Face Hub cache-system is designed to be the central cache shared across libraries that depend on the Hub. It has been updated in v0.8.0 to prevent re-downloading same files
@@ -170,6 +172,95 @@ When symlinks are not supported, a warning message is displayed to the user to alert
them they are using a degraded version of the cache-system. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable to true.
## Chunk-based caching (Xet)

To provide more efficient file transfers, `hf_xet` adds a `xet` directory to the existing `huggingface_hub` cache, creating an additional caching layer to enable chunk-based deduplication. This cache holds chunks, which are immutable byte ranges from files (up to 64KB) that are created using content-defined chunking. For more information on the Xet Storage system, see this [section](https://huggingface.co/docs/hub/storage-backends).

The `xet` directory, located at `~/.cache/huggingface/xet` by default, contains two caches, utilized for uploads and downloads, with the following structure:

```bash
<CACHE_DIR>
├─ chunk_cache
├─ shard_cache
```

The `xet` cache, like the rest of `hf_xet`, is fully integrated with `huggingface_hub`. If you use the existing APIs for interacting with cached assets, there is no need to update your workflow. The `xet` cache is built as an optimization layer on top of the existing `hf_xet` chunk-based deduplication and `huggingface_hub` cache system.

The `chunk_cache` directory contains cached data chunks that are used to speed up downloads, while the `shard_cache` directory contains cached shards that are utilized on the upload path.

### `chunk_cache`

This cache is used on the download path. The cache directory structure is based on a base-64 encoded hash from the content-addressed store (CAS) that backs each Xet-enabled repository. A CAS hash serves as the key to look up the offsets of where the data is stored.

At the topmost level, the first two letters of the base-64 encoded CAS hash are used to create a subdirectory in the `chunk_cache` (keys that share these first two letters are grouped here). The inner levels are comprised of subdirectories with the full key as the directory name. At the base are the cache items, which are ranges of blocks that contain the cached chunks.
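The layout just described amounts to a simple key-to-path mapping. The sketch below is illustrative of the directory scheme only; `chunk_cache_path` is a hypothetical helper, not a function exposed by `hf_xet`.

```python
from pathlib import Path

def chunk_cache_path(xet_cache_dir: str, cas_key: str) -> Path:
    # Group entries by the first two characters of the base-64 CAS key,
    # then use the full key as the inner directory name.
    return Path(xet_cache_dir) / "chunk_cache" / cas_key[:2] / cas_key

p = chunk_cache_path("/home/user/.cache/huggingface/xet", "AbCdEf0123")
print(p)  # /home/user/.cache/huggingface/xet/chunk_cache/Ab/AbCdEf0123
```

Grouping by a two-character prefix keeps any single directory from accumulating an unbounded number of entries, a common trick in content-addressed stores.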
When requesting a file, the first thing `hf_xet` does is communicate with Xet storage's content-addressable store (CAS) for reconstruction information. The reconstruction information contains information about the CAS keys required to download the file in its entirety.

Before executing the requests for the CAS keys, the `chunk_cache` is consulted. If a key in the cache matches a CAS key, then there is no reason to issue a request for that content; `hf_xet` uses the chunks stored in the directory instead.

As the `chunk_cache` is purely an optimization, not a guarantee, `hf_xet` utilizes a computationally efficient eviction policy. When the `chunk_cache` is full (see `Limits and Limitations` below), `hf_xet` implements a random eviction policy when selecting an eviction candidate. This significantly reduces the overhead of managing a robust caching system (e.g., LRU) while still providing most of the benefits of caching chunks.
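Random eviction can be contrasted with LRU in a short sketch. This is illustrative only: `hf_xet` evicts against an on-disk byte budget rather than an entry count, and `evict_random` is a hypothetical name.

```python
import random

def evict_random(cache: dict, max_entries: int, rng: random.Random) -> None:
    """While the cache is over capacity, drop a uniformly random entry.
    No recency bookkeeping is needed, unlike an LRU policy."""
    while len(cache) > max_entries:
        victim = rng.choice(sorted(cache))  # sorted() only for a deterministic demo
        del cache[victim]

cache = {f"cas-key-{i}": b"chunk-bytes" for i in range(12)}
evict_random(cache, max_entries=10, rng=random.Random(0))
print(len(cache))  # 10
```

The trade-off is exactly as the text states: a randomly evicted chunk may have been hot, but since a cache miss only costs a re-download, the simpler policy wins on bookkeeping overhead.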
### `shard_cache`

This cache is used when uploading content to the Hub. The directory is flat, comprising only shard files, each using an ID for the shard name. Shards are stored in this cache when they are either:

- Locally generated and successfully uploaded to the CAS
- Downloaded from CAS as part of the global deduplication algorithm

Shards provide a mapping between files and chunks. During uploads, each file is chunked and the hash of each chunk is saved. Every shard in the cache is then consulted. If a shard contains a chunk hash that is present in the local file being uploaded, then that chunk can be discarded, as it is already stored in CAS.
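The shard consultation step above boils down to a set-membership check per chunk hash. The sketch below is illustrative only: a shard is reduced to a bare set of chunk hashes, and `chunks_to_upload` is a hypothetical helper, not part of `hf_xet`.

```python
import hashlib

def chunks_to_upload(file_chunks: list, shards: list) -> list:
    """Return only the chunks whose hashes appear in no cached shard;
    everything else is already stored in CAS and can be skipped."""
    known = set().union(*shards) if shards else set()
    return [c for c in file_chunks if hashlib.sha256(c).hexdigest() not in known]

# One cached shard already contains the hash of b"hello".
shard = {hashlib.sha256(b"hello").hexdigest()}
todo = chunks_to_upload([b"hello", b"world"], [shard])
print(todo)  # [b'world']
```

Only the chunk the shards have never seen is queued for upload; the known chunk is deduplicated away without transferring any bytes.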
All shards have an expiration date of 3-4 weeks from when they are downloaded. Shards that are expired are not loaded during upload and are deleted one week after expiration.

### Limits and Limitations

The `chunk_cache` is limited to 10GB in size, while the `shard_cache` is technically without limits (in practice, the size and use of shards are such that limiting the cache is unnecessary).

By design, both caches are without high-level APIs. These caches are used primarily to facilitate the reconstruction (download) or upload of a file. To interact with the assets themselves, it's recommended that you use the [`huggingface_hub` cache system APIs](https://huggingface.co/docs/huggingface_hub/guides/manage-cache).

If you need to reclaim the space utilized by either cache or need to debug any potential cache-related issues, simply remove the `xet` cache entirely by running `rm -rf ~/<cache_dir>/xet`, where `<cache_dir>` is the location of your Hugging Face cache, typically `~/.cache/huggingface`.

To learn more about Xet Storage, see this [section](https://huggingface.co/docs/hub/storage-backends).
## Caching assets

In addition to caching files from the Hub, downstream libraries often need to cache
@@ -232,15 +323,17 @@ In practice, your assets cache should look like the following tree:
└── (...)
```

## Manage your file-based cache

### Scan your cache

At the moment, cached files are never deleted from your local directory: when you download a new revision of a branch, previous files are kept in case you need them again. Therefore it can be useful to scan your cache directory in order to know which repos and revisions are taking the most disk space. `huggingface_hub` provides a helper to do so that can be used via `huggingface-cli` or in a Python script.

**Scan cache from the terminal**

The easiest way to scan your HF cache-system is to use the `scan-cache` command from the `huggingface-cli` tool. This command scans the cache and prints a report with information
@@ -291,7 +384,7 @@ Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G.
Got 1 warning(s) while scanning. Use -vvv to print details.
```

**Grep example**

Since the output is in tabular format, you can combine it with any `grep`-like tool to filter the entries. Here is an example to filter only revisions from the "t5-small"

@@ -304,7 +397,7 @@ t5-small model d0a119eedb3718e34c648e594394474cf95e0617
t5-small model d78aea13fa7ecd06c29e3e46195d6341255065d5 970.7M 9 1 week ago main /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/d78aea13fa7ecd06c29e3e46195d6341255065d5
```
**Scan cache from Python**

For a more advanced usage, use [`scan_cache_dir`], which is the Python utility called by the CLI tool.
@@ -368,15 +461,15 @@ HFCacheInfo(
)
```

### Clean your cache

Scanning your cache is interesting, but what you really want to do next is usually to delete some portions to free up space on your drive. This is possible using the `delete-cache` CLI command. One can also programmatically use the [`~HFCacheInfo.delete_revisions`] helper from the [`HFCacheInfo`] object returned when scanning the cache.
**Delete strategy**

To delete some cache, you need to pass a list of revisions to delete. The tool will define a strategy to free up the space based on this list. It returns a

@@ -408,7 +501,7 @@ error is thrown. The deletion continues for other paths contained in the

</Tip>

**Clean cache from the terminal**

The easiest way to delete some revisions from your HF cache-system is to use the `delete-cache` command from the `huggingface-cli` tool. The command has two modes. By
@@ -417,7 +510,7 @@ revisions to delete. This TUI is currently in beta as it has not been tested on all
platforms. If the TUI doesn't work on your machine, you can disable it using the `--disable-tui` flag.

**Using the TUI**

This is the default mode. To use it, you first need to install extra dependencies by running the following command:
@@ -461,7 +554,7 @@ Start deletion.
Done. Deleted 1 repo(s) and 0 revision(s) for a total of 3.1G.
```

**Without TUI**

As mentioned above, the TUI mode is currently in beta and is optional. It may be the case that it doesn't work on your machine or that you don't find it convenient.
@@ -522,7 +615,7 @@ Example of command file:
# 9cfa5647b32c0a30d0adfca06bf198d82192a0d1 # Refs: main # modified 5 days ago
```

**Clean cache from Python**

For more flexibility, you can also use the [`~HFCacheInfo.delete_revisions`] method programmatically. Here is a simple example. See reference for details.