
Releases: huggingface/huggingface_hub

v0.17.3 - Hot-fix: ignore errors when checking available disk space

26 Sep 08:27

Full Changelog: v0.17.2...v0.17.3

Fixing a bug when downloading files to a non-existent directory. In #1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder didn't exist yet, as reported in #1690. This hot-fix resolves it thanks to #1692, which recursively checks the parent directories if the full path doesn't exist. If the check still fails (for any OSError), we silently ignore the error and keep going: breaking downloads for legitimate users would be worse than missing the warning.
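The parent-directory fallback can be sketched as follows (`check_disk_space` here is a simplified, hypothetical version for illustration, not the library's actual helper):

```python
# Illustrative sketch: check free disk space for a target path, falling back
# to the closest existing parent directory, and silently ignore any OSError.
import shutil
import warnings
from pathlib import Path


def check_disk_space(expected_size: int, target_dir: str) -> None:
    """Warn if there is not enough free space to store `expected_size` bytes."""
    path = Path(target_dir).expanduser().resolve()
    # Walk up until we find a directory that actually exists on disk.
    while not path.exists():
        parent = path.parent
        if parent == path:  # reached the filesystem root
            return
        path = parent
    try:
        free = shutil.disk_usage(path).free
    except OSError:
        # Better to skip the warning than to break a legitimate download.
        return
    if free < expected_size:
        warnings.warn(
            f"Not enough free disk space to download the file "
            f"({expected_size} bytes needed, {free} bytes available)."
        )
```

The key point is that the fallback never raises: any failure to measure disk space degrades to "no warning" rather than to a broken download.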

Check out these release notes to learn more about the v0.17 release.

v0.17.2 - Hot-fix: make `huggingface-cli upload` work with Spaces

18 Sep 14:01

Full Changelog: v0.17.1...v0.17.2

Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create a repo (even if it already existed) and failed because space_sdk was not found in that case. More details in #1669.
Also updated the user-agent when using huggingface-cli upload. See #1664.

Check out these release notes to learn more about the v0.17 release.

v0.17.0: Inference, CLI and Space API

08 Sep 14:41

InferenceClient

All tasks are now supported! 💥

Thanks to a massive community effort, all inference tasks are now supported in InferenceClient. Newly added tasks are:

Documentation, including examples, for each of these tasks can be found in this table.

All those methods also support async mode using AsyncInferenceClient.

Get InferenceAPI status

Sometimes it is useful to know which models are (or are not) available on the Inference API service. This release introduces two new helpers:

  1. list_deployed_models aims to help users discover which models are currently deployed, listed by task.
  2. get_model_status aims to get the status of a specific model. That's useful if you already know which model you want to use.

Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# Discover zero-shot-classification models currently deployed 
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]

# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')

A few fixes

  • Send Accept: image/png as header for image tasks by @Wauplin in #1567
  • FIX text_to_image and image_to_image parameters by @Wauplin in #1582
  • Distinguish _bytes_to_dict and _bytes_to_list + fix issues by @Wauplin in #1641
  • Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648

Download and upload files... from the CLI 🔥 🔥 🔥

This is a long-awaited feature, finally implemented! huggingface-cli now offers two new commands to easily transfer files from/to the Hub. The goal is for them to serve as a replacement for git clone, git pull and git push. Despite being less feature-complete than git (no .git/ folder, no notion of local commits), it offers the flexibility required when working with large repositories.

Download

# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json

# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json

# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files:   100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7

Upload

# Upload single file
huggingface-cli upload my-cool-model model.safetensors

# Upload entire directory
huggingface-cli upload my-cool-model ./models

# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"

Docs

For more examples, check out the documentation:

🚀 Space API

Some new features have been added to the Space API to:

  • request persistent storage for a Space
  • set a description to a Space's secrets
  • set variables on a Space
  • configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="t4-medium",
...     space_sleep_time="3600",
...     space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )

Special thanks to @martinbrose, who contributed significantly to these new features.

📚 Documentation

A new section has been added to the upload guide with tips on how to upload large models and datasets to the Hub and the limits that apply when doing so.

🗺️ The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs to non-English speakers. More to come in the coming weeks!

Breaking change

The behavior of InferenceClient.feature_extraction has been updated to fix a bug happening with certain models. The shape of the returned array for transformers models has changed from (sequence_length, hidden_size) to (1, sequence_length, hidden_size); this new shape is the breaking change.

  • Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648

QOL improvements

HfApi helpers:

Two new helpers have been added to check if a file or a repo exists on the Hub:

>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False

>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False

Also, hf_hub_download and snapshot_download are now part of HfApi (keeping the same syntax and behavior).

  • Add download alias for hf_hub_download to HfApi by @Wauplin in #1580

Download improvements:

  1. When a user tries to download a model but the disk is full, a warning is triggered.
  2. When a user tries to download a model but an HTTP error occurs, we still check locally whether the file exists.
  • Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by @jiamings in #1561
  • Implemented check_disk_space function by @martinbrose in #1590

Small fixes and maintenance

⚙️ Doc fixes

⚙️ Other fixes

⚙️ Internal

  • Prepare for 0.17 by @Wauplin in #1540
  • update mypy version + fix issues + remove deprecatedlist helper by @Wauplin in #1628
  • mypy traceck by @Wauplin (direct commit on main)
  • pin pydantic version by @Wauplin (direct commit on main)
  • Fix ci tests by @Wauplin in #1630
  • Fix test in contrib CI by @Wauplin (direct commit on main)
  • skip gated repo test on contrib by @Wauplin (direct commit on main)
  • skip failing test by @Wauplin (direct commit on main)
  • Fix fsspec tests in ci by @Wauplin in #1635
  • FIX windows CI by @Wauplin (direct commit on main)
  • FIX style issues by pinning black version by @Wauplin (direct commit on main)
  • forgot test case by @Wauplin (direct commit on main)
  • shorter is better by @Wauplin (direct commit on main)

🤗 Significant community contributions

The following contributors have made significant changes to the library over the last release:


v0.16.4 - Hot-fix: Do not share request.Session between processes

07 Jul 14:38

Full Changelog: v0.16.3...v0.16.4

Hotfix to avoid sharing a requests.Session between processes. More information in #1545. Internally, we create a Session object per thread to benefit from the HTTPSConnectionPool (i.e. we do not reopen connections between calls). Due to an implementation bug, the Session object from the main thread was shared if the main process was forked. The shared Session got corrupted in the child process, leading to random ConnectionErrors on rare occasions.
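A minimal sketch of the idea behind the fix (`Session` and `get_session` are illustrative stand-ins, not huggingface_hub's actual code): key the cached session by process and thread id, so a forked child never reuses the parent's session.

```python
# Illustrative sketch: one cached session per (process, thread).
# A forked child has a new pid, so it gets a fresh Session instead of
# inheriting (and corrupting) the parent's one.
import os
import threading


class Session:
    """Stand-in for requests.Session, used here for illustration only."""


_sessions = {}  # (pid, thread id) -> Session


def get_session() -> Session:
    key = (os.getpid(), threading.get_ident())
    if key not in _sessions:
        _sessions[key] = Session()
    return _sessions[key]
```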

Check out these release notes to learn more about the v0.16 release.

v0.16.3: Hotfix - More verbose ConnectionError

07 Jul 07:37

Full Changelog: v0.16.2...v0.16.3

Hotfix to print the request ID if any RequestException happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique for each HTTP call made to the Hub.

Check out these release notes to learn more about the v0.16 release.

v0.16.2: Inference, CommitScheduler and Tensorboard

05 Jul 07:33

Inference

Introduced in the v0.15 release, the InferenceClient got a big update in this release. The client is now reaching a stable point in terms of features. The next updates will be focused on continuing to add support for new tasks.

Async client

Asyncio calls are supported thanks to AsyncInferenceClient. Based on asyncio and aiohttp, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by InferenceClient is supported in its async version. Method inputs and outputs and logic are strictly the same, except that you must await the coroutine.

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")

Text-generation

Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference framework. In fact, the code is heavily inspired by TGI's Python client, initially implemented by @OlivierDehaene.

Text generation has 4 modes depending on the details (bool) and stream (bool) values. By default, a raw string is returned. If details=True, more information about the generated tokens is returned. If stream=True, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)

Of course, the async client also supports text-generation (see docs):

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

Zero-shot-image-classification

InferenceClient now supports zero-shot-image-classification (see docs). Both sync and async clients support it. It allows you to classify an image based on a list of labels passed as input.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]

Thanks to @dulayjm for your contribution on this task!

Other

When using InferenceClient's task methods (text_to_image, text_generation, image_classification,...) you don't have to pass a model id. By default, the client selects a model recommended for the task and runs it on the free public Inference API. This is useful to quickly prototype and test models. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially leading to different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.

It is now possible to configure headers and cookies to be sent when initializing the client: InferenceClient(headers=..., cookies=...). All calls made with this client will then use these headers/cookies.

Commit API

CommitScheduler

The CommitScheduler is a new class that can be used to regularly push commits to the Hub. It watches changes in a folder and creates a commit every 5 minutes if it detects a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.

>>> from huggingface_hub import CommitScheduler

# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )

Check out this guide to understand how to use the CommitScheduler. It comes with a Space to showcase how to use it in 4 practical examples.

  • CommitScheduler: upload folder every 5 minutes by @Wauplin in #1494
  • Encourage to overwrite CommitScheduler.push_to_hub by @Wauplin in #1506
  • FIX Use token by default in CommitScheduler by @Wauplin in #1509
  • safer commit scheduler by @Wauplin (direct commit on main)
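The "no empty commits" idea behind the scheduler can be sketched with a minimal, hypothetical version (`NaiveScheduler` and `push_fn` are illustrative names, not the library's API): snapshot file mtimes and only push when something actually changed.

```python
# Illustrative sketch: detect folder changes between ticks and push only
# when something changed, so no empty commits are created.
import threading
from pathlib import Path


def snapshot(folder: Path) -> dict:
    """Map every file in the folder to its last-modified time."""
    return {p: p.stat().st_mtime for p in folder.rglob("*") if p.is_file()}


class NaiveScheduler:
    def __init__(self, folder: str, every_seconds: float, push_fn):
        self.folder = Path(folder)
        self.every = every_seconds
        self.push_fn = push_fn  # e.g. a function calling upload_folder(...)
        self.last = snapshot(self.folder)

    def tick(self) -> bool:
        """Push only if files changed since the last tick; return True if pushed."""
        current = snapshot(self.folder)
        if current != self.last:
            self.push_fn()
            self.last = current
            return True
        return False

    def start(self) -> None:
        """Run tick() every `every_seconds` in a background daemon timer."""
        def loop():
            self.tick()
            self.timer = threading.Timer(self.every, loop)
            self.timer.daemon = True
            self.timer.start()
        loop()
```

The real CommitScheduler handles much more (token handling, partial uploads, overridable push_to_hub), but the change-detection loop is the core design choice that avoids empty commits.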

HFSummaryWriter (tensorboard)

The Hugging Face Hub offers nice support for TensorBoard data. It automatically detects when TensorBoard traces (such as tfevents) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration within your team when training models. In fact, more than 42k models are already using this feature!

With the HFSummaryWriter you can now take full advantage of the feature for your training, simply by updating a single line of code.

>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)

HFSummaryWriter inherits from SummaryWriter and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. 15 minutes) it pushes the logs directory to the Hub. Commits happen in the background to avoid blocking the main thread. If the upload crashes, the logs are kept locally and the training continues.

For more information on how to use it, check out this documentation page. Please note that this is still an experimental feature so feedback is very welcome.

CommitOperationCopy

It is now possible to copy a file in a repo on the Hub. A copy can only happen within a repo and only for LFS files. Files can be copied between different revisions. More information here.

Breaking changes

ModelHubMixin got updated (after a deprecation cycle):

  • Arguments must now be passed as keyword arguments instead of positional arguments.
  • It is no longer possible to pass model_id as username/repo_name@revision in ModelHubMixin. The revision must be passed as a separate revision argument if needed.

Bug fixes and small improvements

Doc fixes

HTTP fixes

An x-request-id header is sent by default with every request made to the Hub. This should help with debugging user issues.
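Generating such a header is straightforward (`build_headers` is an illustrative helper, not part of the library):

```python
# Illustrative sketch: attach a unique request id to each call so that
# server-side logs can be correlated with a user's bug report.
import uuid


def build_headers(base=None) -> dict:
    headers = dict(base or {})
    headers["x-request-id"] = str(uuid.uuid4())
    return headers
```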

3 PRs and 3 commits later, the default timeout did not change in the end: the problem was solved server-side instead.

Misc

  • Rename "configs" dataset card field to "config_names" by @polinaeterna in #1491
  • update stats by @Wauplin (direct commit on main)
  • Retry on both ConnectTimeout and ReadTimeout by @Wauplin in #1529
  • update tip by @Wauplin (direct commit on main)
  • make repo_info public by @Wauplin (direct commit on main)

Significant community contributions

The following contributors have made significant changes to the library over the last ...


v0.15.1: InferenceClient and background uploads!

01 Jun 10:22

InferenceClient

We introduce InferenceClient, a new client to run inference on the Hub. The objective is to:

  • support both InferenceAPI and Inference Endpoints services in a single client.
  • offer a nice interface with:
    • 1 method per task (e.g. summary = client.summarization("this is a long text"))
    • 1 default model per task (i.e. easy to prototype)
    • explicit and documented parameters
    • convenient binary inputs (from url, path, file-like object,...)
  • be flexible and support custom requests if needed

Check out the Inference guide to get a complete overview.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]

The short-term goal is to add support for more tasks (here is the current list), especially text-generation and handle asyncio calls. The mid-term goal is to deprecate and replace InferenceAPI.

Non-blocking uploads

It is now possible to run HfApi calls in the background! The goal is to make it easier to upload files periodically without blocking the main thread during training. This was previously possible when using Repository but is now available for HTTP-based methods like upload_file, upload_folder and create_commit. If run_as_future=True is passed:

  • the job is queued in a background thread. Only 1 worker is spawned to ensure no race condition. The goal is NOT to speed up a process by parallelizing concurrent calls to the Hub.
  • a Future object is returned to check the job status
  • main thread is not interrupted, even if an exception occurs during the upload

In addition to this parameter, a run_as_future(...) method is available to queue any other calls to the Hub. More details in this guide.

>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.upload_file(...)  # takes Xs
# URL to upload file

>>> future = api.upload_file(..., run_as_future=True) # instant
>>> future.result() # wait until complete
# URL to upload file
  • Run HfApi methods in the background (run_as_future) by @Wauplin in #1458
  • fix docs for run_as_future by @Wauplin (direct commit on main)
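The single-worker queue described above can be sketched with the standard library (`run_as_future` here is a hypothetical stand-in, not HfApi's actual implementation):

```python
# Illustrative sketch: a single-worker executor queues jobs so that uploads
# run sequentially in the background, without blocking the caller and
# without any risk of concurrent commits racing each other.
from concurrent.futures import Future, ThreadPoolExecutor

# One worker by design: the goal is background execution, not parallelism.
_executor = ThreadPoolExecutor(max_workers=1)


def run_as_future(fn, *args, **kwargs) -> Future:
    """Queue `fn(*args, **kwargs)` and return a Future to check its status."""
    return _executor.submit(fn, *args, **kwargs)
```

An exception raised inside the job is captured by the Future (and re-raised only when you call `.result()`), which is how the main thread stays uninterrupted even if an upload fails.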

Breaking changes

Some (announced) breaking changes have been introduced:

  • list_models, list_datasets and list_spaces return an iterable instead of a list (lazy-loading of paginated results)
  • The parameter cardData in list_datasets has been removed in favor of the parameter full.

Both changes had a deprecation cycle for a few releases now.

Bugfixes and small improvements

Token permission

New parameters in login() :

  • new_session : skip login if new_session=False and user is already logged in
  • write_permission : write permission is required (login fails otherwise)

Also added a new HfApi().get_token_permission() method that returns "read" or "write" (or None if not logged in).

List files with details

New parameter to get more details when listing files: list_repo_files(..., expand=True).
The API call is slower, but the lastCommit and security fields are returned as well.

Docs fixes

Misc

  • Fix consistency check when downloading a file by @Wauplin in #1449
  • Fix discussion URL on datasets and spaces by @Wauplin in #1465
  • FIX user agent not passed in snapshot_download by @Wauplin in #1478
  • Avoid ImportError when importing WebhooksServer and Gradio is not installed by @mariosasko in #1482
  • add utf8 encoding when opening files for windows by @abidlabs in #1484
  • Fix incorrect syntax in _deprecation.py warning message for _deprecate_list_output() by @x11kjm in #1485
  • Update _hf_folder.py by @SimonKitSangChu in #1487
  • fix pause_and_restart test by @Wauplin (direct commit on main)
  • Support image-to-image task in InferenceApi by @Wauplin in #1489

v0.14.1: patch release

25 Apr 14:48

Fixed an issue reported in diffusers impacting users downloading files from outside of the Hub. Expected download size now takes into account potential compression in the HTTP requests.

  • Fix consistency check when downloading a file by @Wauplin in #1449

Full Changelog: v0.14.0...v0.14.1

v0.14.0: Filesystem API, Webhook Server, upload improvements, keep-alive connections, and more

18 Apr 19:25

HfFileSystem: interact with the Hub through the Filesystem API

We introduce HfFileSystem, a pythonic filesystem interface compatible with fsspec. Built on top of HfApi, it offers typical filesystem operations like cp, mv, ls, du, glob, get_file and put_file.

>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()

# List all files in a directory
>>> fs.ls("datasets/myself/my-dataset/data", detail=False)
['datasets/myself/my-dataset/data/train.csv', 'datasets/myself/my-dataset/data/test.csv']

>>> train_data = fs.read_text("datasets/myself/my-dataset/data/train.csv")

Its biggest advantage is to provide ready-to-use integrations with popular libraries like Pandas, DuckDB and Zarr.

import pandas as pd

# Read a remote CSV file into a dataframe
df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")

# Write a dataframe to a remote CSV file
df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")

For a more detailed overview, please have a look at this guide.

Webhook Server

WebhooksServer lets you implement, debug and deploy webhook endpoints on the Hub without any overhead. Creating a new endpoint is as easy as decorating a Python function.

# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

For more details, check out this twitter thread or the documentation guide.

Note that this feature is experimental which means the API/behavior might change without prior notice. A warning is displayed to the user when using it. As it is experimental, we would love to get feedback!

Some upload QOL improvements

Faster upload with hf_transfer

Integration with a Rust-based library to upload large files in chunks and concurrently. Expect a 3x speed-up if your bandwidth allows it!

Upload in multiple commits

Uploading large folders at once can be frustrating if an error occurs while committing (e.g. a connection error). It is now possible to upload a folder in multiple (smaller) commits. If a commit fails, you can re-run the script and resume the upload. Commits are pushed to a dedicated PR. Once completed, the PR is merged into the main branch, resulting in a single commit in your git history.

upload_folder(
    folder_path="local/checkpoints",
    repo_id="username/my-dataset",
    repo_type="dataset",
    multi_commits=True, # resumable multi-upload
    multi_commits_verbose=True,
)

Note that this feature is also experimental, meaning its behavior might be updated in the future.

Upload validation

Some more pre-validation is done before committing files to the Hub: the .git folder (if any) is ignored in upload_folder, and invalid paths fail early.

  • Fix path_in_repo validation when committing files by @Wauplin in #1382
  • Raise issue if trying to upload .git/ folder + ignore .git/ folder in upload_folder by @Wauplin in #1408
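The kind of pre-validation described can be sketched like this (`validate_path_in_repo` is an illustrative helper; the library's actual rules may differ):

```python
# Illustrative sketch: normalize a path-in-repo, fail early on paths that
# escape the repo root, and reject anything under .git/.
import posixpath


def validate_path_in_repo(path: str) -> str:
    # Normalize separators and collapse "." / ".." segments.
    path = posixpath.normpath(path.replace("\\", "/")).lstrip("/")
    if path == ".." or path.startswith("../"):
        raise ValueError(f"Invalid path_in_repo: '{path}' escapes the repo root")
    if path == ".git" or path.startswith(".git/"):
        raise ValueError("Cannot upload files under .git/")
    return path
```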

Keep-alive connections between requests

Internal update to reuse the same HTTP session across huggingface_hub. The goal is to keep the connection open when doing multiple calls to the Hub which ultimately saves a lot of time. For instance, updating metadata in a README became 40% faster while listing all models from the Hub is 60% faster. This has no impact for atomic calls (e.g. 1 standalone GET call).

Custom sleep time for Spaces

It is now possible to programmatically set a custom sleep time on your upgraded Space. After X seconds of inactivity, your Space will go to sleep to save you some $$$.

from huggingface_hub import set_space_sleep_time

# Put your Space to sleep after 1h of inactivity
set_space_sleep_time(repo_id=repo_id, sleep_time=3600)

Breaking change

  • fsspec has been added as a main dependency. It's a lightweight Python library required for HfFileSystem.

No other breaking change expected in this release.

Bugfixes & small improvements

File-related

A lot of effort has been invested in making huggingface_hub's cache system more robust especially when working with symlinks on Windows. Hope everything's fixed by now.

  • Fix relative symlinks in cache by @Wauplin in #1390
  • Hotfix - use relative symlinks whenever possible by @Wauplin in #1399
  • [hot-fix] Malicious repo can overwrite any file on disk by @Wauplin in #1429
  • Fix symlinks on different volumes on Windows by @Wauplin in #1437
  • [FIX] bug "Invalid cross-device link" error when using snapshot_download to local_dir with no symlink by @thaiminhpv in #1439
  • Raise after download if file size is not consistent by @Wauplin in #1403

ETag-related

After a server-side configuration issue, we made huggingface_hub more robust when parsing the Hub's ETags, to be more future-proof.

  • Update file_download.py by @Wauplin in #1406
  • 🧹 Use HUGGINGFACE_HEADER_X_LINKED_ETAG const by @julien-c in #1405
  • Normalize both possible variants of the Etag to remove potentially invalid path elements by @dwforbes in #1428

Documentation-related

Misc

Internal stuff

  • Fix CI by @Wauplin in #1392
  • PR should not fail if codecov is bad by @Wauplin (direct commit on main)
  • remove cov check in PR by @Wauplin (direct commit on main)
  • Fix restart space test by @Wauplin (direct commit on main)
  • fix move repo test by @Wauplin (direct commit on main)

Security patch v0.13.4

06 Apr 15:05

Security patch to fix a vulnerability in huggingface_hub. In some cases, downloading a file with hf_hub_download or snapshot_download could lead to overwriting any file on a Windows machine. With this fix, only files in the cache directory (or a user-defined directory) can be updated/overwritten.

  • Malicious repo can overwrite any file on disk by @Wauplin in #1429
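The essence of the fix can be sketched with a containment check (`is_within_directory` is a hypothetical helper, not the library's actual code): resolve the final download path and refuse it unless it stays inside the allowed base directory.

```python
# Illustrative sketch: only allow writing to paths that resolve inside the
# cache (or a user-defined) directory, defeating "../"-style path traversal.
import os


def is_within_directory(base: str, target: str) -> bool:
    base = os.path.realpath(base)      # resolve symlinks and ".." segments
    target = os.path.realpath(target)
    return os.path.commonpath([base, target]) == base
```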

Full Changelog: v0.13.3...v0.13.4