Releases: huggingface/huggingface_hub
v0.17.3 - Hot-fix: ignore errors when checking available disk space
Full Changelog: v0.17.2...v0.17.3
Fixing a bug when downloading files to a non-existent directory. In #1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder doesn't exist yet, as reported in #1690. This hot-fix fixes it thanks to #1692, which recursively checks the parent directories if the full path doesn't exist. If the check keeps failing (for any `OSError`), we silently ignore the error and keep going: not having the warning is better than breaking the download for legitimate users.
Check out these release notes to learn more about the v0.17 release.
v0.17.2 - Hot-fix: make `huggingface-cli upload` work with Spaces
Full Changelog: v0.17.1...v0.17.2
Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create a repo (even if it already exists) and was failing because `space_sdk` was not found in that case. More details in #1669.
Also updated the user-agent when using `huggingface-cli upload`. See #1664.
Check out these release notes to learn more about the v0.17 release.
v0.17.0: Inference, CLI and Space API
InferenceClient
All tasks are now supported! 💥
Thanks to a massive community effort, all inference tasks are now supported in `InferenceClient`. Newly added tasks are:
- Object detection by @dulayjm in #1548
- Text classification by @martinbrose in #1606
- Token classification by @martinbrose in #1607
- Translation by @martinbrose in #1608
- Question answering by @martinbrose in #1609
- Table question answering by @martinbrose in #1612
- Fill mask by @martinbrose in #1613
- Tabular classification by @martinbrose in #1614
- Tabular regression by @martinbrose in #1615
- Document question answering by @martinbrose in #1620
- Visual question answering by @martinbrose in #1621
- Zero shot classification by @Wauplin in #1644
Documentation, including examples, for each of these tasks can be found in this table.
All those methods also support async mode using `AsyncInferenceClient`.
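For example, one of the newly added tasks can be called from the async client exactly like from the sync one; a minimal sketch (the default recommended model is used since none is passed):
import asyncio

from huggingface_hub import AsyncInferenceClient

async def main() -> None:
    client = AsyncInferenceClient()
    # Same method name as the sync client; just await the coroutine
    result = await client.translation("Hello world!")
    print(result)

asyncio.run(main())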
Get InferenceAPI status
Sometimes it is useful to know which models are currently available (or not) on the Inference API service. This release introduces two new helpers:
- `list_deployed_models` aims to help users discover which models are currently deployed, listed by task.
- `get_model_status` aims to get the status of a specific model. That's useful if you already know which model you want to use.
Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# Discover zero-shot-classification models currently deployed
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]
# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
- Add get_model_status function by @sifisKoen in #1558
- Add list_deployed_models to inference client by @martinbrose in #1622
A few fixes
- Send Accept: image/png as header for image tasks by @Wauplin in #1567
- FIX `text_to_image` and `image_to_image` parameters by @Wauplin in #1582
- Distinguish `_bytes_to_dict` and `_bytes_to_list` + fix issues by @Wauplin in #1641
- Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648
Download and upload files... from the CLI 🔥 🔥 🔥
This is a long-awaited feature, finally implemented! `huggingface-cli` now offers two new commands to easily transfer files from/to the Hub. The goal is to use them as a replacement for `git clone`, `git pull` and `git push`. Despite being less feature-complete than `git` (no `.git/` folder, no notion of local commits), it offers the flexibility required when working with large repositories.
Download
# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json
# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files: 100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
Upload
# Upload single file
huggingface-cli upload my-cool-model model.safetensors
# Upload entire directory
huggingface-cli upload my-cool-model ./models
# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
Docs
For more examples, check out the documentation:
- Implemented CLI download functionality by @martinbrose in #1617
- Implemented CLI upload functionality by @martinbrose in #1618
🚀 Space API
Some new features have been added to the Space API to:
- request persistent storage for a Space
- set a description for a Space's secrets
- set variables on a Space
- configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
... repo_id=repo_id,
... repo_type="space",
... space_sdk="gradio",
... space_hardware="t4-medium",
... space_sleep_time="3600",
... space_storage="large",
... space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
... space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
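The same settings can also be applied to an existing Space. A hedged sketch using the corresponding `HfApi` helpers introduced in this release (values are illustrative):
>>> api.request_space_storage(repo_id, storage="large")
>>> api.add_space_secret(repo_id, key="HF_TOKEN", value="hf_api_***", description="Token used at runtime")
>>> api.add_space_variable(repo_id, key="MODEL_REPO_ID", value="user/repo")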
A special thanks to @martinbrose, who contributed significantly to these new features.
- Request Persistent Storage by @freddyaboulton in #1571
- Support factory reboot when restarting a Space by @Wauplin in #1586
- Added support for secret description by @martinbrose in #1594
- Added support for space variables by @martinbrose in #1592
- Add settings for creating and duplicating spaces by @martinbrose in #1625
📚 Documentation
A new section has been added to the upload guide with some tips about how to upload large models and datasets to the Hub, and what the limits are when doing so.
- Tips to upload large models/datasets by @Wauplin in #1565
- Add the hard limit of 50GB on LFS files by @severo in #1624
🗺️ The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs to non-English speakers. More to come in the coming weeks!
- Add translation guide + update repo structure by @Wauplin in #1602
- Fix i18n issue template links by @Wauplin in #1627
Breaking change
The behavior of `InferenceClient.feature_extraction` has been updated to fix a bug happening with certain models. The shape of the returned array for `transformers` models has changed from `(sequence_length, hidden_size)` to `(1, sequence_length, hidden_size)`, which is the breaking change.
- Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648
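As a hedged illustration of the new behavior (the model id is chosen for the example only):
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> embeddings = client.feature_extraction("Hello world!", model="sentence-transformers/all-MiniLM-L6-v2")
# Shape is now (1, sequence_length, hidden_size) instead of (sequence_length, hidden_size)
>>> embeddings.shape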
QOL improvements
`HfApi` helpers:
Two new helpers have been added to check if a file or a repo exists on the Hub:
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
- Check if repo or file exists by @martinbrose in #1591
Also, `hf_hub_download` and `snapshot_download` are now part of `HfApi` (keeping the same syntax and behavior).
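A minimal sketch of the `HfApi` variant, which takes the same arguments as the standalone function:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.hf_hub_download(repo_id="gpt2", filename="config.json")  # returns the local path of the cached file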
Download improvements:
- When a user tries to download a model but the disk is full, a warning is triggered.
- When a user tries to download a model but an HTTP error happens, we still check locally if the file exists.
- Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by @jiamings in #1561
- Implemented check_disk_space function by @martinbrose in #1590
Small fixes and maintenance
⚙️ Doc fixes
- Fix table by @stevhliu in #1577
- Improve docstrings for text generation by @osanseviero in #1597
- Fix superfluous-typo by @julien-c in #1611
- minor missing paren by @julien-c in #1637
- update i18n template by @Wauplin (direct commit on main)
- Add documentation for modelcard Metadata. Resolves by @sifisKoen in #1448
⚙️ Other fixes
- Add `missing_ok` option in `delete_repo` by @Wauplin in #1640
- Implement `super_squash_history` in `HfApi` by @Wauplin in #1639
- 1546 fix empty metadata on windows by @Wauplin in #1547
- Fix tqdm by @NielsRogge in #1629
- Fix bug #1634 (drop finishing spaces and EOL) by @GBR-613 in #1638
⚙️ Internal
- Prepare for 0.17 by @Wauplin in #1540
- update mypy version + fix issues + remove deprecated list helper by @Wauplin in #1628
- mypy traceck by @Wauplin (direct commit on main)
- pin pydantic version by @Wauplin (direct commit on main)
- Fix ci tests by @Wauplin in #1630
- Fix test in contrib CI by @Wauplin (direct commit on main)
- skip gated repo test on contrib by @Wauplin (direct commit on main)
- skip failing test by @Wauplin (direct commit on main)
- Fix fsspec tests in ci by @Wauplin in #1635
- FIX windows CI by @Wauplin (direct commit on main)
- FIX style issues by pinning black version by @Wauplin (direct commit on main)
- forgot test case by @Wauplin (direct commit on main)
- shorter is better by @Wauplin (direct commit on main)
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @dulayjm
- Add object detection to inference client (#1548)
- @martinbrose
- Added support for s...
v0.16.4 - Hot-fix: Do not share request.Session between processes
Full Changelog: v0.16.3...v0.16.4
Hotfix to avoid sharing `requests.Session` between processes. More information in #1545. Internally, we create a Session object per thread to benefit from the `HTTPSConnectionPool` (i.e. to avoid reopening connections between calls). Due to an implementation bug, the Session object from the main thread was shared if a fork of the main process happened. The shared Session got corrupted in the process, leading to random `ConnectionError`s on rare occasions.
Check out these release notes to learn more about the v0.16 release.
v0.16.3: Hotfix - More verbose ConnectionError
Full Changelog: v0.16.2...v0.16.3
Hotfix to print the request ID if any `RequestException` happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique for each HTTP call made to the Hub.
Check out these release notes to learn more about the v0.16 release.
v0.16.2: Inference, CommitScheduler and Tensorboard
Inference
Introduced in the v0.15 release, the `InferenceClient` got a big update in this one. The client is now reaching a stable point in terms of features. The next updates will be focused on continuing to add support for new tasks.
Async client
Asyncio calls are supported thanks to `AsyncInferenceClient`. Based on `asyncio` and `aiohttp`, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by `InferenceClient` is supported in its async version. Method inputs, outputs and logic are strictly the same, except that you must await the coroutine.
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
Text-generation
Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference framework. In fact, the code is heavily inspired by TGI's Python client, initially implemented by @OlivierDehaene.
Text generation has 4 modes depending on the `details` (bool) and `stream` (bool) values. By default, a raw string is returned. If `details=True`, more information about the generated tokens is returned. If `stream=True`, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
>>> print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
id=25,
text='.',
logprob=-0.5703125,
special=False),
generated_text='100% open source and built to be easy to use.',
details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)
Of course, the async client also supports text-generation (see docs):
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
- prepare for tgi by @Wauplin in #1511
- Support text-generation in InferenceClient by @Wauplin in #1513
Zero-shot-image-classification
`InferenceClient` now supports zero-shot-image-classification (see docs). Both sync and async clients support it. It allows you to classify an image based on a list of labels passed as input.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
... "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
... labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]
Thanks to @dulayjm for your contribution on this task!
Other
When using `InferenceClient`'s task methods (text_to_image, text_generation, image_classification, ...) you don't have to pass a model id. By default, the client selects a model recommended for the requested task and runs it on the free public Inference API. This is useful to quickly prototype and test models. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially leading to different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.
It is now possible to configure headers and cookies to be sent when initializing the client: `InferenceClient(headers=..., cookies=...)`. All calls made with this client will then use these headers/cookies.
Commit API
CommitScheduler
The `CommitScheduler` is a new class that can be used to regularly push commits to the Hub. It watches for changes in a folder and creates a commit every 5 minutes if it detects a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.
>>> from huggingface_hub import CommitScheduler
# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
... repo_id="report-translation-feedback",
... repo_type="dataset",
... folder_path=feedback_folder,
... path_in_repo="data",
... every=10,
... )
Check out this guide to understand how to use the `CommitScheduler`. It comes with a Space that showcases how to use it in 4 practical examples.
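For custom upload logic (e.g. zipping data before pushing), the scheduler is meant to be subclassed by overriding `push_to_hub`, as encouraged by the PR below; a minimal sketch with the custom logic left as a placeholder:
from huggingface_hub import CommitScheduler

class CustomScheduler(CommitScheduler):
    def push_to_hub(self) -> None:
        # Insert custom logic here (e.g. zip the folder, filter files, ...),
        # then fall back to the default commit behavior.
        super().push_to_hub()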
- `CommitScheduler`: upload folder every 5 minutes by @Wauplin in #1494
- Encourage to overwrite `CommitScheduler.push_to_hub` by @Wauplin in #1506
- FIX Use token by default in CommitScheduler by @Wauplin in #1509
- safer commit scheduler by @Wauplin (direct commit on main)
HFSummaryWriter (tensorboard)
The Hugging Face Hub offers nice support for TensorBoard data. It automatically detects when TensorBoard traces (such as `tfevents` files) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration in your team when training models. In fact, more than 42k models are already using it!
With the `HFSummaryWriter` you can now take full advantage of this feature for your training, simply by updating a single line of code.
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
`HFSummaryWriter` inherits from `SummaryWriter` and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. every 15 minutes) it pushes the logs directory to the Hub. The commit happens in the background to avoid blocking the main thread. If the upload crashes, the logs are kept locally and the training continues.
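A minimal training-loop sketch; `add_scalar` is the standard `SummaryWriter` API and the loss value is a dummy stand-in:
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
>>> for step in range(100):
...     loss = 1.0 / (step + 1)  # dummy value standing in for a real training loss
...     logger.add_scalar("train/loss", loss, step)  # logged locally, pushed to the Hub every 15 minutes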
For more information on how to use it, check out this documentation page. Please note that this is still an experimental feature so feedback is very welcome.
CommitOperationCopy
It is now possible to copy a file in a repo on the Hub. The copy can only happen within a repo and with an LFS file. Files can be copied between different revisions. More information here.
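A hedged sketch of the API (repo id and paths are illustrative):
>>> from huggingface_hub import CommitOperationCopy, HfApi
>>> api = HfApi()
>>> api.create_commit(
...     repo_id="username/my-model",
...     operations=[
...         CommitOperationCopy(
...             src_path_in_repo="weights.bin",  # must be an LFS file
...             path_in_repo="backup/weights.bin",
...         )
...     ],
...     commit_message="Copy weights to backup/",
... )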
- add CommitOperationCopy by @lhoestq in #1495
- Use CommitOperationCopy in hffs by @Wauplin in #1497
- Batch fetch_lfs_files_to_copy by @lhoestq in #1504
Breaking changes
`ModelHubMixin` got updated (after a deprecation cycle):
- You are now forced to use kwargs instead of passing everything as positional args
- It is no longer possible to pass `model_id` as `username/repo_name@revision` in `ModelHubMixin`. The revision must be passed as a separate `revision` argument if needed.
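A minimal sketch of the updated calling convention (class and repo id are illustrative, assuming the PyTorch flavor of the mixin and `torch` installed):
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

# Before: MyModel.from_pretrained("username/my-model@v1.0")
# Now: the revision is passed as a separate keyword argument
model = MyModel.from_pretrained("username/my-model", revision="v1.0")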
Bug fixes and small improvements
Doc fixes
- [doc build] Use secrets by @mishig25 in #1501
- Migrate doc files to Markdown by @Wauplin in #1522
- fix doc example by @Wauplin (direct commit on main)
- Update readme and contributing guide by @Wauplin in #1534
HTTP fixes
An `x-request-id` header is sent by default for every request made to the Hub. This should help with debugging user issues.
Three PRs and three commits later, the default timeout did not change in the end; the problem was solved server-side instead.
- Set 30s timeout on downloads (instead of 10s) by @Wauplin in #1514
- Set timeout to 60 instead of 30 when downloading files by @Wauplin in #1523
- Set timeout to 10s by @ydshieh in #1530
Misc
- Rename "configs" dataset card field to "config_names" by @polinaeterna in #1491
- update stats by @Wauplin (direct commit on main)
- Retry on both ConnectTimeout and ReadTimeout by @Wauplin in #1529
- update tip by @Wauplin (direct commit on main)
- make repo_info public by @Wauplin (direct commit on main)
Significant community contributions
The following contributors have made significant changes to the library over the last ...
v0.15.1: InferenceClient and background uploads!
InferenceClient
We introduce `InferenceClient`, a new client to run inference on the Hub. The objective is to:
- support both InferenceAPI and Inference Endpoints services in a single client
- offer a nice interface with:
  - 1 method per task (e.g. `summary = client.summarization("this is a long text")`)
  - 1 default model per task (i.e. easy to prototype)
  - explicit and documented parameters
  - convenient binary inputs (from URL, path, file-like object, ...)
- be flexible and support custom requests if needed
Check out the Inference guide to get a complete overview.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
The short-term goal is to add support for more tasks (here is the current list), especially text-generation, and to handle `asyncio` calls. The mid-term goal is to deprecate and replace `InferenceAPI`.
Non-blocking uploads
It is now possible to run `HfApi` calls in the background! The goal is to make it easier to upload files periodically without blocking the main thread during a training. This was previously possible when using `Repository` but is now available for HTTP-based methods like `upload_file`, `upload_folder` and `create_commit`. If `run_as_future=True` is passed:
- the job is queued in a background thread. Only 1 worker is spawned to ensure no race condition. The goal is NOT to speed up a process by parallelizing concurrent calls to the Hub.
- a `Future` object is returned to check the job status
- the main thread is not interrupted, even if an exception occurs during the upload
In addition to this parameter, a `run_as_future(...)` method is available to queue any other calls to the Hub. More details in this guide.
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(...) # takes Xs
# URL to upload file
>>> future = api.upload_file(..., run_as_future=True) # instant
>>> future.result() # wait until complete
# URL to upload file
- Run `HfApi` methods in the background (`run_as_future`) by @Wauplin in #1458
- fix docs for run_as_future by @Wauplin (direct commit on main)
Breaking changes
Some (announced) breaking changes have been introduced:
- `list_models`, `list_datasets` and `list_spaces` return an iterable instead of a list (lazy-loading of paginated results)
- The parameter `cardData` in `list_datasets` has been removed in favor of the parameter `full`.

Both changes have gone through a deprecation cycle for a few releases now.
Bugfixes and small improvements
Token permission
New parameters in `login()`:
- `new_session`: skip login if `new_session=False` and user is already logged in
- `write_permission`: write permission is required (login fails otherwise)

Also added a new `HfApi().get_token_permission()` method that returns `"read"` or `"write"` (or `None` if not logged in).
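A minimal sketch combining both additions:
>>> from huggingface_hub import HfApi, login
>>> login(new_session=False, write_permission=True)  # no-op if already logged in with a write token
>>> HfApi().get_token_permission()
'write'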
- Add new_session, write_permission args by @aliabid94 in #1476
List files with details
New parameter to get more details when listing files: `list_repo_files(..., expand=True)`. The API call is slower, but the `lastCommit` and `security` fields are returned as well.
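A hedged sketch based on the signature above (the extra fields are returned per the description):
>>> from huggingface_hub import HfApi
>>> files = HfApi().list_repo_files("gpt2", expand=True)  # slower call, includes lastCommit and security metadata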
Docs fixes
- Resolve broken link to 'filesystem' by @tomaarsen in #1461
- Fix broken link in docs to hf_file_system guide by @albertvillanova in #1469
- Remove hffs from docs by @albertvillanova in #1468
Misc
- Fix consistency check when downloading a file by @Wauplin in #1449
- Fix discussion URL on datasets and spaces by @Wauplin in #1465
- FIX user agent not passed in snapshot_download by @Wauplin in #1478
- Avoid `ImportError` when importing `WebhooksServer` and Gradio is not installed by @mariosasko in #1482
- add utf8 encoding when opening files for windows by @abidlabs in #1484
- Fix incorrect syntax in `_deprecation.py` warning message for `_deprecate_list_output()` by @x11kjm in #1485
- Update _hf_folder.py by @SimonKitSangChu in #1487
- fix pause_and_restart test by @Wauplin (direct commit on main)
- Support image-to-image task in InferenceApi by @Wauplin in #1489
v0.14.1: patch release
Fixed an issue reported in `diffusers` impacting users downloading files from outside of the Hub. The expected download size now takes into account potential compression in the HTTP requests.
Full Changelog: v0.14.0...v0.14.1
v0.14.0: Filesystem API, Webhook Server, upload improvements, keep-alive connections, and more
HfFileSystem: interact with the Hub through the Filesystem API
We introduce `HfFileSystem`, a pythonic filesystem interface compatible with `fsspec`. Built on top of `HfApi`, it offers typical filesystem operations like `cp`, `mv`, `ls`, `du`, `glob`, `get_file` and `put_file`.
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()
# List all files in a directory
>>> fs.ls("datasets/myself/my-dataset/data", detail=False)
['datasets/myself/my-dataset/data/train.csv', 'datasets/myself/my-dataset/data/test.csv']
>>> train_data = fs.read_text("datasets/myself/my-dataset/data/train.csv")
Its biggest advantage is that it provides ready-to-use integrations with popular libraries like Pandas, DuckDB and Zarr.
import pandas as pd
# Read a remote CSV file into a dataframe
df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")
# Write a dataframe to a remote CSV file
df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")
For a more detailed overview, please have a look at this guide.
- Transfer the `hffs` code to `hfh` by @mariosasko in #1420
- Hffs misc improvements by @mariosasko in #1433
Webhook Server
`WebhooksServer` allows you to implement, debug and deploy webhook endpoints on the Hub without any overhead. Creating a new endpoint is as easy as decorating a Python function.
# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...
For more details, check out this twitter thread or the documentation guide.
Note that this feature is experimental, which means the API/behavior might change without prior notice. A warning is displayed to the user when using it. As the feature is experimental, we would love to get feedback!
Some upload QOL improvements
Faster upload with hf_transfer
Integration with a Rust-based library to upload large files in chunks and concurrently. Expect a 3x speed-up if your bandwidth allows it!
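Enabling it is opt-in; a minimal sketch, assuming `hf_transfer` is installed (`pip install hf_transfer`) and relying on the `HF_HUB_ENABLE_HF_TRANSFER` environment variable (file and repo id are illustrative):
import os

# Must be set before huggingface_hub reads its configuration
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="username/my-model",
)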
Upload in multiple commits
Uploading large folders at once might be annoying if an error happens while committing (e.g. a connection error occurs). It is now possible to upload a folder in multiple (smaller) commits. If a commit fails, you can re-run the script and resume the upload. Commits are pushed to a dedicated PR. Once completed, the PR is merged into the `main` branch, resulting in a single commit in your git history.
upload_folder(
folder_path="local/checkpoints",
repo_id="username/my-dataset",
repo_type="dataset",
multi_commits=True, # resumable multi-upload
multi_commits_verbose=True,
)
Note that this feature is also experimental, meaning its behavior might be updated in the future.
Upload validation
Some more pre-validation is done before committing files to the Hub: the `.git` folder (if any) is now ignored in `upload_folder`, and invalid paths fail early.
- Fix `path_in_repo` validation when committing files by @Wauplin in #1382
- Raise issue if trying to upload `.git/` folder + ignore `.git/` folder in `upload_folder` by @Wauplin in #1408
Keep-alive connections between requests
Internal update to reuse the same HTTP session across `huggingface_hub`. The goal is to keep the connection open when making multiple calls to the Hub, which ultimately saves a lot of time. For instance, updating metadata in a README became 40% faster, while listing all models from the Hub is 60% faster. This has no impact on atomic calls (e.g. a single standalone GET call).
- Keep-alive connection between requests by @Wauplin in #1394
- Accept backend_factory to configure Sessions by @Wauplin in #1442
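As a sketch of the second item, a custom `backend_factory` can be registered to control how Sessions are created, assuming the `configure_http_backend` helper (the proxy URL is illustrative):
import requests
from huggingface_hub import configure_http_backend

def backend_factory() -> requests.Session:
    # Called once per thread; every call to the Hub then reuses this Session
    session = requests.Session()
    session.proxies = {"http": "http://localhost:3128"}
    return session

configure_http_backend(backend_factory=backend_factory)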
Custom sleep time for Spaces
It is now possible to programmatically set a custom sleep time on your upgraded Space. After X seconds of inactivity, your Space will go to sleep to save you some $$$.
from huggingface_hub import set_space_sleep_time
# Put your Space to sleep after 1h of inactivity
set_space_sleep_time(repo_id=repo_id, sleep_time=3600)
Breaking change
- `fsspec` has been added as a main dependency. It's a lightweight Python library required for `HfFileSystem`.

No other breaking change is expected in this release.
Bugfixes & small improvements
File-related
A lot of effort has been invested in making `huggingface_hub`'s cache system more robust, especially when working with symlinks on Windows. Hopefully everything is fixed by now.
- Fix relative symlinks in cache by @Wauplin in #1390
- Hotfix - use relative symlinks whenever possible by @Wauplin in #1399
- [hot-fix] Malicious repo can overwrite any file on disk by @Wauplin in #1429
- Fix symlinks on different volumes on Windows by @Wauplin in #1437
- [FIX] bug "Invalid cross-device link" error when using snapshot_download to local_dir with no symlink by @thaiminhpv in #1439
- Raise after download if file size is not consistent by @Wauplin in #1403
ETag-related
After a server-side configuration issue, we made `huggingface_hub` more robust when fetching the Hub's ETags, to be more future-proof.
- Update file_download.py by @Wauplin in #1406
- 🧹 Use `HUGGINGFACE_HEADER_X_LINKED_ETAG` const by @julien-c in #1405
- Normalize both possible variants of the Etag to remove potentially invalid path elements by @dwforbes in #1428
Documentation-related
- Docs about how to hide progress bars by @Wauplin in #1416
- [docs] Update docstring for repo_id in push_to_hub by @tomaarsen in #1436
Misc
- Prepare for 0.14 by @Wauplin in #1381
- Add force_download to snapshot_download by @Wauplin in #1391
- Model card template: Move model usage instructions out of Bias section by @NimaBoscarino in #1400
- typo by @Wauplin (direct commit on main)
- Log as warning when waiting for ongoing commands by @Wauplin in #1415
- Fix: notebook_login() does not update UI on Databricks by @fwetdb in #1414
- Passing the headers to hf_transfer download. by @Narsil in #1444
Internal stuff
Security patch v0.13.4
Security patch to fix a vulnerability in `huggingface_hub`. In some cases, downloading a file with `hf_hub_download` or `snapshot_download` could lead to overwriting any file on a Windows machine. With this fix, only files in the cache directory (or a user-defined directory) can be updated/overwritten.
Full Changelog: v0.13.3...v0.13.4