
Releases: huggingface/huggingface_hub

[v0.27.0] DDUF tooling, torch model loading helpers & multiple quality of life improvements and bug fixes

13 Dec 16:02

📦 Introducing DDUF tooling

DDUF Banner

DDUF (DDUF's Diffusion Unified Format) is a single-file format for diffusion models that aims to unify the different model distribution methods and weight-saving formats by packaging all model components into a single file. Detailed documentation will follow soon.

The huggingface_hub library now provides tooling to handle DDUF files in Python. It includes helpers to read and export DDUF files, and built-in rules to validate file integrity.

How to write a DDUF file?

>>> from huggingface_hub import export_folder_as_dduf

# Export "path/to/FLUX.1-dev" folder as a DDUF file
>>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")

How to read a DDUF file?

>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

# Read DDUF metadata (only metadata is loaded, lightweight operation)
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

# Load the `model_index.json` content
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']}

# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)

⚠️ Note that this is a very early version of the parser. The API and implementation may evolve in the near future.
👉 More details about the API in the documentation here.

DDUF parser v0.1 by @Wauplin in #2692

💾 Serialization

Following the introduction of the torch serialization module in 0.22.* and the support of saving torch state dict to disk in 0.24.*, we now provide helpers to load torch state dicts from disk.
By centralizing these functionalities in huggingface_hub, we ensure a consistent implementation across the HF ecosystem while allowing external libraries to benefit from standardized weight handling.

>>> from huggingface_hub import load_torch_model, load_state_dict_from_file

# load state dict from a single file
>>> state_dict = load_state_dict_from_file("path/to/weights.safetensors")

# Directly load weights into a PyTorch model
>>> model = ... # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")

More details in the serialization package reference.

[Serialization] support loading torch state dict from disk by @hanouticelina in #2687

We added a flag to the save_torch_state_dict() helper to properly handle model saving in distributed environments, aligning with existing implementations across the Hugging Face ecosystem:

[Serialization] Add is_main_process argument to save_torch_state_dict() by @hanouticelina in #2648
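A minimal sketch of how the flag can be used in a distributed job (the toy model and the RANK-based check below are illustrative assumptions, not part of the helper itself):

import os

import torch
from huggingface_hub import save_torch_state_dict

model = torch.nn.Linear(4, 4)  # placeholder model

# In a torchrun job, RANK identifies the current process; only the main process
# should write the checkpoint to disk.
is_main = int(os.environ.get("RANK", 0)) == 0

save_torch_state_dict(
    model.state_dict(),
    save_directory="checkpoint",
    is_main_process=is_main,
)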

A bug with shared tensor handling reported in transformers#35080 has been fixed:

add argument to pass shared tensors keys to discard by @hanouticelina in #2696
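A minimal sketch of discarding a duplicated key when saving tied weights (the shared_tensors_to_discard parameter name is an assumption derived from the PR above; check the serialization reference for the exact signature):

import torch
from huggingface_hub import save_torch_state_dict

class TiedModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Linear(8, 8, bias=False)
        self.lm_head = torch.nn.Linear(8, 8, bias=False)
        self.lm_head.weight = self.embed.weight  # shared (tied) tensor

model = TiedModel()

# Drop the duplicated key so the shared tensor is only serialized once.
# `shared_tensors_to_discard` is assumed here; see the docs for the exact name.
save_torch_state_dict(
    model.state_dict(),
    save_directory="checkpoint",
    shared_tensors_to_discard=["lm_head.weight"],
)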

✨ HfApi

The following changes align the client with server-side updates in how security metadata is handled and exposed in the API responses. In particular, the repository security status returned by HfApi().model_info() is now available in the security_repo_status field:

from huggingface_hub import HfApi

api = HfApi()

model = api.model_info("your_model_id", securityStatus=True)

# get security status info of your model
- security_info = model.securityStatus
+ security_info = model.security_repo_status

🌐 📚 Documentation

Thanks to @miaowumiaomiaowu, more documentation is now available in Chinese! And thanks to @13579606 for reviewing these PRs. Check out the result here.

📝Translating docs to Simplified Chinese by @miaowumiaomiaowu in #2689, #2704 and #2705.

💔 Breaking changes

A few breaking changes have been introduced:

  • RepoCardData serialization now preserves None values in nested structures.
  • InferenceClient.image_to_image() now takes a target_size argument instead of height and width arguments. This has been reflected in the InferenceClient async equivalent as well.
  • InferenceClient.table_question_answering() no longer accepts a parameter argument. This has been reflected in the InferenceClient async equivalent as well.
  • Due to low usage, list_metrics() has been removed from HfApi.

⏳ Deprecations

Some deprecations have been introduced as well:

  • Legacy token permission checks are deprecated as they are no longer relevant with fine-grained tokens. This includes is_write_action in build_hf_headers() and write_permission=True in login methods. get_token_permission has been deprecated as well.
  • The labels argument is deprecated in InferenceClient.zero_shot_classification() and InferenceClient.image_zero_shot_classification(). This has been reflected in the InferenceClient async equivalent as well.
  • Deprecate is_write_action and write_permission=True when login by @Wauplin in #2632
  • Fix and deprecate get_token_permission by @Wauplin in #2631
  • [Inference Client] fix param docstring and deprecate labels param in zero-shot classification tasks by @hanouticelina in #2668

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

[v0.26.5]: Serialization: Add argument to pass shared tensors names to drop when saving

06 Dec 18:28

[v0.26.3]: Fix timestamp parsing to always include milliseconds

28 Nov 10:16

[v0.26.2] Fix: Reflect API response changes in file and repo security status fields

28 Oct 14:49

This patch release includes updates to align with recent API response changes:

  • Update how a file's security metadata is retrieved following changes in the API response (#2621).
  • Expose repo security status field in ModelInfo (#2639).

Full Changelog: v0.26.1...v0.26.2

[v0.26.1] Hot-fix: fix Python 3.8 support for `huggingface-cli` commands

21 Oct 13:41

v0.26.0: Multi-tokens support, conversational VLMs and quality of life improvements

17 Oct 15:21

🔐 Multiple access tokens support

Managing fine-grained access tokens locally just became much easier and more efficient!
Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.

To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:

  • Store multiple tokens on your machine by simply logging in with the huggingface-cli login command using each token:
huggingface-cli login
  • Switch between tokens and choose the one that will be used for all interactions with the Hub:
huggingface-cli auth switch
  • List available access tokens on your machine:
huggingface-cli auth list
  • Delete a specific token from your machine with:
huggingface-cli logout [--token-name TOKEN_NAME]

✅ Nothing changes if you are using the HF_TOKEN environment variable as it takes precedence over the token set via the CLI. More details in the documentation. 🤗

⚡️ InferenceClient improvements

🖼️ Conversational VLMs support

Inference with conversational vision-language models (VLMs) is now supported via InferenceClient's chat completion!

from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded URL
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)

print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.

🔧 More complete support for inference parameters

You can now pass additional inference parameters to more task methods in the InferenceClient, including: image_classification, text_classification, image_segmentation, object_detection, document_question_answering and more!
For more details, visit the InferenceClient reference guide.
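For example, a task-specific parameter can now be passed directly to the method. This is a minimal sketch assuming top_k is among the newly supported parameters for text_classification; see the reference guide for the exact list per task:

from huggingface_hub import InferenceClient

client = InferenceClient()

# Request only the 2 most likely labels for this text.
output = client.text_classification(
    "huggingface_hub keeps getting better!",
    top_k=2,
)
print(output)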

✅ Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗

  • Support VLM in chat completion (+some specs updates) by @Wauplin in #2556
  • [Inference Client] Add task parameters and a maintenance script of these parameters by @hanouticelina in #2561
  • Document vision chat completion with Llama 3.2 11B V by @Wauplin in #2569

✨ HfApi

update_repo_settings can now be used to switch visibility status of a repo. This is a drop-in replacement for update_repo_visibility which is deprecated and will be removed in version v0.29.0.

- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
  • Feature: switch visibility with update_repo_settings by @WizKnight in #2541

📄 Daily papers API is now supported in huggingface_hub, enabling you to search for papers on the Hub and retrieve detailed paper information.

>>> from huggingface_hub import HfApi

>>> api = HfApi()
# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")

🌐 📚 Documentation

Thanks to efforts from the Tamil-speaking community, guides and package references are being translated to Tamil! Check out the result here.

  • Translated index.md and installation.md to Tamil by @Raghul-M in #2555

💔 Breaking changes

A few breaking changes have been introduced:

  • cached_download(), url_to_filename() and filename_to_url() methods are now completely removed. From now on, you will have to use hf_hub_download() to benefit from the new cache layout (a short migration sketch follows below).
  • legacy_cache_layout argument from hf_hub_download() has been removed as well.

These breaking changes have been announced with a regular deprecation cycle.
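A minimal migration sketch for code that previously relied on cached_download() (the repo and filename below are only examples):

from huggingface_hub import hf_hub_download

# Instead of downloading from a raw URL, reference the repo and filename;
# the file lands in the new cache layout and the resolved path is returned.
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(config_path)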

Also, all templating-related utilities have been removed from huggingface_hub. Client-side templating is no longer necessary now that all conversational text-generation models in the InferenceAPI are served with TGI.

Prepare for release 0.26 by @hanouticelina in #2579
Remove templating utility by @Wauplin in #2611

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 fixes

🏗️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

[v0.25.2]: Fix snapshot download when `local_dir` is provided

09 Oct 08:35

Full Changelog: v0.25.1...v0.25.2
For more details, refer to the related PR #2592

[v0.25.1]: Raise error if encountered in chat completion SSE stream

23 Sep 13:28

Full Changelog: v0.25.0...v0.25.1
For more details, refer to the related PR #2558

v0.25.0: Large uploads made simple + quality of life improvements

17 Sep 16:36

📂 Upload large folders

Uploading large models or datasets is challenging. We've already written some tips and tricks to facilitate the process, but something was still missing. We are now glad to release the huggingface-cli upload-large-folder command. Consider it a "please upload this no matter what, and be quick" command. Unlike huggingface-cli upload, this new command is more opinionated and will split the upload into several commits. Multiple workers are started locally to hash, pre-upload and commit the files in a way that is resumable, resilient to connection errors, and optimized against rate limits. This feature has already been stress-tested by the community over the past months to make it as easy and convenient to use as possible.

Here is how to use it:

huggingface-cli upload-large-folder <repo-id> <local-path> --repo-type=dataset

Every minute, a report is logged with the current status of the files and workers:

---------- 2024-04-26 16:24:25 (0:00:00) ----------
Files:   hashed 104/104 (22.5G/22.5G) | pre-uploaded: 0/42 (0.0/22.5G) | committed: 58/104 (24.9M/22.5G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 6 | committing: 0 | waiting: 0
---------------------------------------------------

You can also run it from a script:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
... )

For more details about the command options, run:

huggingface-cli upload-large-folder --help

or visit the upload guide.

  • CLI to upload arbitrary huge folder by @Wauplin in #2254
  • Reduce number of commits in upload large folder by @Wauplin in #2546
  • Suggest using upload_large_folder when appropriate by @Wauplin in #2547

✨ HfApi & CLI improvements

🔍 Search API

The search API has been updated. You can now list gated models and datasets, and filter models by their inference status (warm, cold, frozen); a short sketch follows the list below.

More complete support for the expand[] parameter:

  • Document baseModels and childrenModelCount as expand parameters by @Wauplin in #2475
  • Better support for trending score by @Wauplin in #2513
  • Add GGUF as supported expand[] parameter by @Wauplin in #2545
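A rough sketch of these new filters, assuming gated and inference are the corresponding list_models() parameter names:

from huggingface_hub import HfApi

api = HfApi()

# List gated models only.
gated_models = list(api.list_models(gated=True, limit=5))

# Filter models by their inference status: "warm", "cold" or "frozen".
warm_models = list(api.list_models(inference="warm", limit=5))

# Request an extra field with the expand[] parameter (e.g. GGUF metadata).
model = next(iter(api.list_models(expand=["gguf"], limit=1)))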

👤 User API

Organizations are now included when retrieving the user overview.

get_user_followers and get_user_following are now paginated. This was not the case before, leading to issues for users with more than 1000 followers.

  • Paginate followers and following endpoints by @Wauplin in #2506
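A short sketch of iterating over the paginated results (the username attribute on the returned objects is an assumption for illustration):

from itertools import islice

from huggingface_hub import HfApi

api = HfApi()

# Followers are now fetched lazily, page by page, so this scales past 1000 followers.
for follower in islice(api.get_user_followers("julien-c"), 10):
    print(follower.username)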

📦 Repo API

Added auth_check to easily verify whether a user has access to a repo. It raises GatedRepoError if the repo is gated and the user does not have permission, or RepositoryNotFoundError if the repo does not exist or is private. If the method does not raise an error, you can assume the user has permission to access the repo.

from huggingface_hub import auth_check
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    auth_check("user/my-cool-model")
except GatedRepoError:
    # Handle gated repository error
    print("You do not have permission to access this gated repository.")
except RepositoryNotFoundError:
    # Handle repository not found error
    print("The repository was not found or you do not have access.")

It is now possible to set a repo as gated from a script:

>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.update_repo_settings(repo_id=repo_id, gated="auto")  # Set to "auto", "manual" or False

⚡️ Inference Endpoint API

A few improvements to the InferenceEndpoint API: it is now possible to set a scale_to_zero_timeout parameter and to configure secrets when creating or updating an Inference Endpoint (a short sketch follows the PR list below).

  • Add scale_to_zero_timeout parameter to HFApi.create/update_inference_endpoint by @hommayushi3 in #2463
  • Update endpoint.update signature by @Wauplin in #2477
  • feat: ✨ allow passing secrets to the inference endpoint client by @LuisBlanche in #2486
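A hedged sketch of updating an existing endpoint with these new options (the endpoint name and secret values are placeholders; check the InferenceEndpoint reference for the exact semantics and units):

from huggingface_hub import HfApi

api = HfApi()

endpoint = api.update_inference_endpoint(
    "my-endpoint-name",
    scale_to_zero_timeout=60,         # scale down after a period of inactivity
    secrets={"MY_API_TOKEN": "xxx"},  # injected into the endpoint environment
)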

💾 Serialization

The torch serialization module now supports tensor subclasses.
We also made sure the library is tested with both torch 1.x and 2.x to ensure compatibility.

  • Making wrapper tensor subclass to work in serialization by @jerryzh168 in #2440
  • Torch: test on 1.11 and latest versions + explicitly load with weights_only=True by @Wauplin in #2488

💔 Breaking changes

Breaking changes:

  • InferenceClient.conversational task has been removed in favor of InferenceClient.chat_completion. Also removed ConversationalOutput data class.
  • All InferenceClient output values are now dataclasses, not dictionaries.
  • list_repo_likers is now paginated. This means the output is now an iterator instead of a list.

Deprecation:

  • The multi_commit: bool parameter in upload_folder is now deprecated, along with create_commits_on_pr. It is now recommended to use upload_large_folder instead. Though its API and internals are different, the goal is still to be able to upload many files in several commits.

🛠️ Small fixes and maintenance

⚡️ InferenceClient fixes

Thanks to community feedback, we've been able to improve or fix significant things in both the InferenceClient and its async version, AsyncInferenceClient. These fixes have mainly focused on the OpenAI-compatible chat_completion method and the Inference Endpoints services.

  • [Inference] Support stop parameter in text-generation instead of stop_sequences by @Wauplin in #2473
  • [hot-fix] Handle [DONE] signal from TGI + remove logic for "non-TGI servers" by @Wauplin in #2410
  • Fix chat completion url for OpenAI compatibility by @Wauplin in #2418
  • Bug - [InferenceClient] - use proxy set in var env by @morgandiverrez in #2421
  • Document the difference between model and base_url by @Wauplin in #2431
  • Fix broken AsyncInferenceClient on [DONE] signal by @Wauplin in #2458
  • Fix InferenceClient for HF Nvidia NIM API by @Wauplin in #2482
  • Properly close session in AsyncInferenceClient by @Wauplin in #2496
  • Fix unclosed aiohttp.ClientResponse objects by @Wauplin in #2528
  • Fix resolve chat completion URL by @Wauplin in #2540

😌 QoL improvements

When uploading a folder, we validate the README.md file before hashing all the files, not after.
This should save precious time when uploading large files alongside a corrupted model card.

Also, it is now possible to pass a --max-workers argument when uploading a folder from the CLI.

  • huggingface-cli upload - Validate README.md before file hashing by @hlky in #2452
  • Solved: Need to add the max-workers argument to the huggingface-cli command by @devymex in #2500

All custom exceptions raised by huggingface_hub are now defined in the huggingface_hub.errors module. This should make it easier to import them for your try/except statements.
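For example, importing an exception from the new module (a minimal sketch):

from huggingface_hub import hf_hub_download
from huggingface_hub.errors import RepositoryNotFoundError

try:
    hf_hub_download(repo_id="non-existent/repo", filename="config.json")
except RepositoryNotFoundError:
    print("Repository not found or private.")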

At the same time, we've reworked how errors are formatted in hf_raise_for_status to print more relevant information to users.

All constants in huggingface_hub are now imported as a module. This makes it easier to patch their values, for example in a test pipeline.

Other quality of life improvements:

🐛 fixes


[v0.24.7]: Fix race-condition issue when downloading from multiple threads

12 Sep 09:05