

[Bug] #1

Open
HansonChan opened this issue Sep 28, 2023 · 20 comments

Comments

@HansonChan

Traceback (most recent call last):
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/Users/Shuai/AI/stable-diffusion-webui/extensions/sd-webui-cleaner/scripts/lama.py", line 13, in clean_object_init_img_with_mask
return clean_object(init_img_with_mask['image'],init_img_with_mask['mask'])
File "/Users/Shuai/AI/stable-diffusion-webui/extensions/sd-webui-cleaner/scripts/lama.py", line 18, in clean_object
Lama = LiteLama2()
File "/Users/Shuai/AI/stable-diffusion-webui/extensions/sd-webui-cleaner/scripts/lama.py", line 62, in __init__
self.load()
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/litelama/litelama.py", line 19, in load
self._model = load_model(config_path=self._config_path, checkpoint_path=self._checkpoint_path, use_safetensors=use_safetensors)
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/litelama/model.py", line 60, in load_model
with safetensors.safe_open(checkpoint_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

It seems it didn't get the right device type?
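As an aside, `MetadataIncompleteBuffer` usually indicates a truncated or corrupt `.safetensors` file rather than a device problem. A quick stdlib-only sanity check, based on the safetensors on-disk layout (an 8-byte little-endian header length followed by that many bytes of JSON header; the helper name here is mine), might look like:

```python
import struct
from pathlib import Path

def safetensors_header_ok(path: str) -> bool:
    """Heuristic check that a .safetensors file is not truncated.

    The format begins with an 8-byte little-endian unsigned integer giving
    the JSON header's length; a download that ends before the header is
    complete is what safetensors reports as MetadataIncompleteBuffer.
    """
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False
    (header_len,) = struct.unpack("<Q", data[:8])
    # The file must at least contain the full header after the length prefix.
    return len(data) >= 8 + header_len
```

If this returns False for the downloaded checkpoint, re-downloading it (as several people do later in this thread) is the likely fix.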

@ka1tte
Contributor

ka1tte commented Sep 28, 2023

@HansonChan Are you using a Mac?

@HansonChan
Author

@ka1tte Yes, a Mac with M2. I tried moving the model to "mps", but another error appeared.

Traceback (most recent call last):
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1335, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/Users/Shuai/AI/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components/gallery.py", line 205, in postprocess
raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>

@ka1tte
Contributor

ka1tte commented Sep 28, 2023

@HansonChan I've added a parameter that lets you use the CPU for inference. You can configure it through the settings page or the API.

{
    "cleaner_use_gpu": true
}
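For the API route, a minimal sketch of toggling that option follows, assuming the webui was launched with `--api` and exposes the standard `/sdapi/v1/options` endpoint; the helper names and the default URL are illustrative, not part of the extension:

```python
import json
import urllib.request

def build_options_payload(use_gpu: bool) -> dict:
    """Payload for the webui options endpoint; False forces CPU inference."""
    return {"cleaner_use_gpu": use_gpu}

def set_cleaner_use_gpu(use_gpu: bool, base_url: str = "http://127.0.0.1:7860") -> int:
    """POST the setting to the webui options API and return the HTTP status."""
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/options",
        data=json.dumps(build_options_payload(use_gpu)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # Fall back to CPU, e.g. on Apple Silicon where MPS fails in this thread.
    set_cleaner_use_gpu(False)
```

Note that to run on CPU the value should be `false`; `true` enables GPU inference.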

@ka1tte
Contributor

ka1tte commented Sep 28, 2023

On my Mac, it works normally.

@LinTevis

I solved my issue by downloading the model at https://huggingface.co/smartywu/big-lama and replacing it.

@zopi4k

zopi4k commented Sep 28, 2023

I have the same error on Windows:

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion-webui\extensions\sd-webui-cleaner\scripts\lama.py", line 14, in clean_object_init_img_with_mask
    return clean_object(init_img_with_mask['image'],init_img_with_mask['mask'])
  File "D:\stable-diffusion-webui\extensions\sd-webui-cleaner\scripts\lama.py", line 19, in clean_object
    Lama = LiteLama2()
  File "D:\stable-diffusion-webui\extensions\sd-webui-cleaner\scripts\lama.py", line 69, in __init__
    self.load(location="cpu")
  File "D:\stable-diffusion-webui\venv\lib\site-packages\litelama\litelama.py", line 19, in load
    self._model = load_model(config_path=self._config_path, checkpoint_path=self._checkpoint_path, use_safetensors=use_safetensors)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\litelama\model.py", line 60, in load_model
    with safetensors.safe_open(checkpoint_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

And I changed the model as Keny25 did: https://huggingface.co/anyisalin/big-lama/tree/main

image

It works well.

@zopi4k

zopi4k commented Sep 28, 2023

But the error for me is now in "Clean Up Upload" ...

image

Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1335, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\components\gallery.py", line 205, in postprocess
    raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>

@ka1tte
Contributor

ka1tte commented Sep 28, 2023

@zopi4k Do you have a GPU? I have only tested it on Linux and Mac, and I have not run it on Windows.

@zopi4k

zopi4k commented Sep 28, 2023

Yes, ka1tte ^^) I have an NVIDIA GeForce RTX 3070 Laptop GPU.
If you just change the model, it works.

And thanks for this extension! I've been looking for one since February xDD

@LinTevis

@zopi4k I was able to replicate those errors when trying to use the extension while generating an image; otherwise it's working as intended.

@Rkkss

Rkkss commented Nov 3, 2023

@zopi4k I was able to replicate those errors when trying to use the extension while generating an image; otherwise it's working as intended.

same error. How do we fix it?

@derFleder

I also encounter the error:
Error while deserializing header: HeaderTooLarge

It works pretty well on macOS, but on my Windows machine I encounter that error.
I also tried replacing the safetensors files. It did not work.

Is this project still maintained?

@xiaoshun111

*** Error loading script: clean_up_tab.py
Traceback (most recent call last):
File "D:\ Stable Diffusion\sd-webui-aki-v4\modules\scripts.py", line 319, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\ Stable Diffusion\sd-webui-aki-v4\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\AStable Diffusion\sd-webui-aki-v4\extensions\sd-webui-cleaner\scripts\clean_up_tab.py", line 7, in
from modules.ui_components import ToolButton, ResizeHandleRow
ImportError: cannot import name 'ResizeHandleRow' from 'modules.ui_components' (D:\ Stable Diffusion\sd-webui-aki-v4\modules\ui_components.py)

How do you fix this error?

@tomchify

A possibly helpful tip: the mask and image must be the same size for the API to work. Remember to call mask.resize(image.size) before b64-encoding the images.
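In Python with Pillow, that resize-then-encode step might look like the sketch below. The function name and the PNG choice are mine; the only point being illustrated is that the mask must be resized to the image's size before encoding:

```python
import base64
import io

from PIL import Image

def encode_image_and_mask(image_path: str, mask_path: str) -> tuple:
    """Resize the mask to match the image, then base64-encode both as PNG."""
    image = Image.open(image_path)
    # The API expects mask and image dimensions to match.
    mask = Image.open(mask_path).resize(image.size)
    encoded = []
    for img in (image, mask):
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        encoded.append(base64.b64encode(buf.getvalue()).decode("ascii"))
    return encoded[0], encoded[1]
```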

@gitesend

{
"cleaner_use_gpu": true
}
@ka1tte May I ask which file this parameter should be added to? My Mac is failing with this error:

ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>

@douyan2662

Hello, my Mac has the same problem:
Traceback (most recent call last):
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1335, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/Users/apple/Documents/AI/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components/gallery.py", line 205, in postprocess
raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>
How can I fix it? Thanks!

@BannyLon

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/blocks.py", line 1434, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/blocks.py", line 1335, in postprocess_data
prediction_value = block.postprocess(prediction_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/components/gallery.py", line 205, in postprocess
raise ValueError(f"Cannot process type as image: {type(img)}")
ValueError: Cannot process type as image: <class 'NoneType'>

My computer is a Mac M2, and I get this error.

@valuetodays

valuetodays commented Dec 10, 2024

@BannyLon @douyan2662 On the sd-webui page I clicked Settings, searched for Uncategorized, found the Cleaner section under it, left Is Use GPU unchecked, and restarted sd-webui. That fixed it for me. (My machine runs Windows 10.)


@scoaming

image

Traceback (most recent call last):
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connection.py", line 179, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x000001E6BB0A2D10>, 'Connection to huggingface.co timed out. (connect timeout=None)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /anyisalin/big-lama/resolve/main/big-lama.safetensors (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001E6BB0A2D10>, 'Connection to huggingface.co timed out. (connect timeout=None)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\SD\sd-webui-aki-v4.9.1\extensions\sd-webui-cleaner\scripts\lama.py", line 14, in clean_object_init_img_with_mask
return clean_object(init_img_with_mask['image'],init_img_with_mask['mask'])
File "D:\SD\sd-webui-aki-v4.9.1\extensions\sd-webui-cleaner\scripts\lama.py", line 19, in clean_object
Lama = LiteLama2()
File "D:\SD\sd-webui-aki-v4.9.1\extensions\sd-webui-cleaner\scripts\lama.py", line 65, in __init__
download_file("https://huggingface.co/anyisalin/big-lama/resolve/main/big-lama.safetensors", checkpoint_path)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\litelama\model.py", line 22, in download_file
with requests.get(url, stream=True) as r:
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\requests\api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "D:\SD\sd-webui-aki-v4.9.1\python\lib\site-packages\requests\adapters.py", line 688, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /anyisalin/big-lama/resolve/main/big-lama.safetensors (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001E6BB0A2D10>, 'Connection to huggingface.co timed out. (connect timeout=None)'))

@scoaming

I solved the problem by manually downloading https://huggingface.co/anyisalin/big-lama/resolve/main/big-lama.safetensors and placing big-lama.safetensors under sd-webui~\extensions\sd-webui-cleaner\models.
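The manual workaround above can be scripted. This is a sketch under the assumptions reported in this thread: that the extension looks for the checkpoint in extensions/sd-webui-cleaner/models under the webui root, and that the anyisalin/big-lama mirror hosts a working file; the function names are mine:

```python
from pathlib import Path
from urllib.request import urlretrieve

MODEL_URL = "https://huggingface.co/anyisalin/big-lama/resolve/main/big-lama.safetensors"

def checkpoint_path(webui_root: str) -> Path:
    """Location where this thread reports the extension expects the checkpoint."""
    return (Path(webui_root) / "extensions" / "sd-webui-cleaner"
            / "models" / "big-lama.safetensors")

def download_checkpoint(webui_root: str) -> Path:
    """Fetch the checkpoint into the extension's models folder."""
    dest = checkpoint_path(webui_root)
    dest.parent.mkdir(parents=True, exist_ok=True)
    urlretrieve(MODEL_URL, dest)  # blocking download; verify the size afterwards
    return dest

if __name__ == "__main__":
    download_checkpoint(str(Path.home() / "stable-diffusion-webui"))
```

Checking the downloaded file's size against the repository listing afterwards helps rule out the truncated-download MetadataIncompleteBuffer error reported earlier in this thread.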
