[Bug]: impossible to change models #13245

Open
ostap667inbox opened this issue Sep 14, 2023 · 15 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@ostap667inbox commented Sep 14, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

  • A lot of errors on startup.
  • Impossible to change models in WebUI

Steps to reproduce the problem

  1. Try to start the WebUI

What should have happened?

On startup I get a long list of errors. The log is below.
Then the interface starts. But if I try to change the model to any SDXL checkpoint, I get a long list of errors again and see in the console that it is trying to download a strange file, ip_pytorch_model.bin, weighing 10 GB.

Sysinfo

sysinfo-2023-09-14-04-05.txt

What browsers do you use to access the UI?

Google Chrome

Console logs

creating model quickly: OSError
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 261, in hf_raise_for_status
    response.raise_for_status()
  File "C:\stable-diffusion-webui\venv\lib\site-packages\requests\models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 429, in cached_file
    resolved_file = hf_hub_download(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1195, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1541, in get_hf_file_metadata
    hf_raise_for_status(r)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 293, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6502842e-038d35a170ef5a983f0dfb8f;375cda1b-e897-4693-aef7-667164338587)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\itech\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\itech\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\itech\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui\modules\shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 499, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 602, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1650, in __init__
    super().__init__(concat_keys, *args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1515, in __init__
    super().__init__(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py", line 2377, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Running on local URL:  http://127.0.0.1:7860


Additional information

No response
ostap667inbox added the bug-report (Report of a bug, yet to be confirmed) label on Sep 14, 2023
@rugabunda commented Sep 14, 2023

I had the same problem on Windows today as well, after cloning the repo. I was testing whether I could get it to work with Python 3.11, CUDA 12.1, and PyTorch 2.2, and that was the result. Make sure you create the venv with Python 3.10. If you have a newer version, or multiple versions of Python installed, you can work around the problem by installing Python 3.10.6 and running this at the command prompt (you may have to delete your venv first):

C:\Users\(username)\AppData\Local\Programs\Python\Python310\python.exe -m venv C:\stable-diffusion-webui\venv

Otherwise, try deleting your venv and creating a new one, or deleting the packages listed in the errors.
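
A minimal sketch of the full sequence on Windows, assuming the default paths used in this thread and that Python 3.10.6 is installed at its usual location; the first relaunch through webui-user.bat should reinstall the required packages into the fresh venv:

    rem Remove the old virtual environment (back it up first if unsure)
    rmdir /s /q C:\stable-diffusion-webui\venv

    rem Recreate it with Python 3.10
    C:\Users\(username)\AppData\Local\Programs\Python\Python310\python.exe -m venv C:\stable-diffusion-webui\venv

    rem Relaunch; the first start reinstalls torch and the other requirements
    C:\stable-diffusion-webui\webui-user.bat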

@ostap667inbox (Author)

I have Python 3.10.6. I have also already tried deleting /venv/ in the WebUI folder.
Nothing helps.

@ClashSAN (Collaborator)

Do you have a way to reproduce the issue?

You can try moving the models completely outside the stable-diffusion-webui folder; this error could be webui's "falling back to previously used model" logic failing to load.

@ostap667inbox (Author) commented Sep 15, 2023

I solved the problem this way:
I have a copy of the WebUI on another drive with auto-update disabled at startup (a working copy in case WebUI becomes inoperable after the next update). I copied the /venv/ folder from there and the above errors went away.

@Ph0rk0z commented Sep 17, 2023

I have the same issue. I updated my extensions today and it stopped working. It can't load any checkpoints and always tries to download model.safetensors despite one already being there. The command-line argument to stop it from downloading a model doesn't work. I can get the UI up in debug mode.

So I figured it out: I had to let it download the new CLIP model. It ate my config.json, but it's back.

@Ph0rk0z commented Sep 18, 2023

And now SDXL also downloads a 10 GB ip_pytorch_model.bin to an unknown folder. If I let it finish it will probably load, but that's a lot to download through the unstable PyTorch downloader with no resume support.

@ostap667inbox (Author)

I have no idea what to do :(
It's happening again.

@ostap667inbox (Author)

UPD: Just discovered that the problem only occurs when the Clip Interrogator extension is installed. After removing it from the extensions folder, the problem went away.

I have no idea how the Clip Interrogator extension can cause errors and an attempt to download a 10 GB file when trying to change the model to SDXL in the WebUI. Even stated like that it sounds extremely strange :) But the fact is: removing the extension fixed the problem.
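
For reference, removing it just means deleting the extension's folder under extensions\. A minimal sketch for a default Windows install; the folder name clip-interrogator-ext is an assumption, so check what the folder is actually called on your machine:

    rem Hypothetical folder name; use whatever sits under extensions\ for Clip Interrogator
    rmdir /s /q C:\stable-diffusion-webui\extensions\clip-interrogator-ext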

@Ph0rk0z commented Oct 16, 2023

The 10 GB model is a CLIP model. The errors that cause it come from repositories/stable-diffusion. I will try disabling the CLIP extension and see if the error goes away.

@Ph0rk0z commented Oct 17, 2023

I found the bug!

https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/sd_disable_initialization.py

        def CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs):
            res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
            res.name_or_path = pretrained_model_name_or_path
            return res

It is supposed to disable loading the CLIP model, but it tries to load it anyway, so there is no point.

Fix by changing it to:

        def CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs):
            res = self.CLIPTextModel_from_pretrained(pretrained_model_name_or_path, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
            res.name_or_path = pretrained_model_name_or_path
            return res
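
For anyone wondering why the None matters: below is a minimal sketch (not webui code) of the failure mode from the traceback above, assuming the transformers version from the log and network access; "openai/clip-vit-large-patch14" stands in for whatever `version` the loaded SD config passes:

    # Sketch only: when `config` is given as a string and the model id is None,
    # this transformers version still calls cached_file(None, "config.json") to look
    # up a commit hash, so it requests https://huggingface.co/None/resolve/main/config.json
    # and fails with the 404 / OSError shown in the console log.
    from transformers import CLIPTextModel

    try:
        CLIPTextModel.from_pretrained(
            None,                                    # model id the original hook passes
            config="openai/clip-vit-large-patch14",  # config passed as a string, as in the hook
            state_dict={},                           # empty state dict, as in the hook
        )
    except OSError as err:
        print(err)  # "None is not a local folder and is not a valid model identifier ..."

As far as I can tell, webui passes None on purpose so that from_pretrained builds the text encoder from the config alone and the real weights are filled in from the checkpoint afterwards; passing the actual name through, as in the change above, simply gives transformers a repo id it can resolve.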

@MetallicPickaxe

Thanks for your hard work. It is very useful.

@christopherquenneville

This worked, you're a genius @Ph0rk0z.

@Crestina2001

> UPD: Just discovered that the problem only occurs when the Clip Interrogator extension is installed. After removing it from the extensions folder, the problem went away.
>
> I have no idea how the Clip Interrogator extension can cause errors and an attempt to download a 10 GB file when trying to change the model to SDXL in the WebUI. Even stated like that it sounds extremely strange :) But the fact is: removing the extension fixed the problem.

Hi, are you still there? Could you expand on how to remove the extension? I haven't installed any extensions, and there is nothing in the stable-diffusion-webui-master\stable-diffusion-webui-master\extensions folder.

@ride5k commented Sep 25, 2024

> I found the bug!

Fantastic work. I wiped my venv today and upon rebuilding got the same error messages. Substituting "pretrained_model_name_or_path" in place of None worked perfectly!

@rickvalstar

See my cross-post here:

TheLastBen/fast-stable-diffusion#2937 (comment)

I think it will solve the issue as it did for me.

Enjoy
