
[Issue]: [xformers] NotImplementedError: No operator found for memory_efficient_attention_forward with inputs #1757

Open
Looong01 opened this issue Dec 17, 2024 · 4 comments
Labels
enhancement (New feature or request), feature request (New feature or request), Under Investigation

Comments


Looong01 commented Dec 17, 2024

Problem Description

Traceback (most recent call last):
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/call_queue.py", line 53, in f
    res = func(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/processing.py", line 1362, in sample
    return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/processing.py", line 1461, in sample_hr_pass
    decoded_samples = decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/processing.py", line 632, in decode_latent_batch
    sample = decode_first_stage(model, batch[i:i + 1])[0]
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/sd_samplers_common.py", line 76, in decode_first_stage
    return samples_to_images_tensor(x, approx_index, model)
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/sd_samplers_common.py", line 58, in samples_to_images_tensor
    x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/mnt/4T/Codes/stable-diffusion-webui/modules/sd_hijack_utils.py", line 36, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "/mnt/4T/Codes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 631, in forward
    h = self.mid.attn_1(h)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/4T/Codes/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 258, in forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 306, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 467, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 486, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 135, in _dispatch_fw
    return _run_priority_list(
  File "/mnt/4T/Codes/stable-diffusion-webui/stable-diffusion-webui/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 76, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 24576, 1, 512) (torch.float16)
     key         : shape=(1, 24576, 1, 512) (torch.float16)
     value       : shape=(1, 24576, 1, 512) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`ckF` is not supported because:
    max(query.shape[-1], value.shape[-1]) > 256
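The failing condition quoted in the error can be stated as a one-line check. The sketch below (my illustration, not xformers' actual dispatch code; `ck_supports` is a hypothetical helper) shows the constraint the ck forward op enforces:

```python
# Sketch of the constraint behind "`ckF` is not supported": the ck forward
# kernel rejects head dimensions above 256, per the error message.
CK_MAX_HEAD_DIM = 256  # limit quoted in the error ("> 256")

def ck_supports(query_shape, value_shape):
    """Return True if the ck forward kernel can handle these head dims."""
    return max(query_shape[-1], value_shape[-1]) <= CK_MAX_HEAD_DIM

# Shapes from the traceback: (batch, seq_len, heads, head_dim)
print(ck_supports((1, 24576, 1, 512), (1, 24576, 1, 512)))  # False: 512 > 256
print(ck_supports((1, 4096, 8, 64), (1, 4096, 8, 64)))      # True
```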

Operating System

Ubuntu 22.04.4 LTS (Jammy Jellyfish)

CPU

Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz

GPU

AMD Radeon RX 7900 XTX

Other

No response

ROCm Version

ROCm 6.2.3

ROCm Component

Composable Kernel

Steps to Reproduce

Use xformers for ROCm with stable diffusion.
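A hypothetical minimal reproduction outside the webui, using the exact tensor shapes from the traceback (my sketch; hitting the real error requires xformers built for ROCm and an AMD GPU, so the call is guarded):

```python
# Hypothetical minimal repro of the failing call, with the shapes from the
# traceback. Degrades to a printed message when xformers/GPU are absent.
shape = (1, 24576, 1, 512)  # (batch, seq_len, heads, head_dim); fp16 in the report

try:
    import torch
    import xformers.ops as xops

    q = torch.randn(shape, dtype=torch.float16, device="cuda")
    k, v = q.clone(), q.clone()
    # On ROCm this raises NotImplementedError: ck rejects head_dim 512 > 256.
    out = xops.memory_efficient_attention(q, k, v, attn_bias=None)
except Exception as exc:  # ImportError / NotImplementedError / no GPU present
    print(f"memory_efficient_attention failed for head_dim={shape[-1]}: {exc}")
```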

(Optional for Linux users) Output of /opt/rocm/bin/rocminfo --support

$ /opt/rocm/bin/rocminfo --support
ROCk module version 6.8.5 is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.14
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz
  Uuid:                    CPU-XX                             
  Marketing Name:          Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   4700                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            8                                  
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    65781360(0x3ebbe70) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    65781360(0x3ebbe70) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65781360(0x3ebbe70) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1100                            
  Uuid:                    GPU-85631fd855c9cea1               
  Marketing Name:          Radeon RX 7900 XTX                 
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      6144(0x1800) KB                    
    L3:                      98304(0x18000) KB                  
  Chip ID:                 29772(0x744c)                      
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2482                               
  BDFID:                   768                                
  Internal Node ID:        1                                  
  Compute Unit:            96                                 
  SIMDs per CU:            2                                  
  Shader Engines:          6                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 342                                
  SDMA engine uCode::      21                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    25149440(0x17fc000) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    25149440(0x17fc000) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1100         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*** Done ***   

Additional Information

The problem is exactly as stated: the head dimension of the data (512) is too large to use xformers memory_efficient_attention.
The implementations (ops) behind fmha differ across platforms. On NVIDIA there is the CUTLASS backend, which supports very large head dimensions; on AMD there are only the ck ones.

Do you have any plan to implement this in the future?
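Until the ck ops gain large-head-dim support, one possible workaround (my sketch, not something proposed in this thread; `attention_with_fallback` is a hypothetical helper) is to fall back to PyTorch's `scaled_dot_product_attention`, whose math path has no head-dimension limit and runs on ROCm:

```python
# Sketch: try xformers first, fall back to PyTorch SDPA when it cannot
# dispatch (e.g. head_dim > 256 on the ck backend). Assumes torch >= 2.0.
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v):
    """q, k, v: (batch, seq_len, heads, head_dim), the layout xformers expects."""
    try:
        import xformers.ops as xops
        return xops.memory_efficient_attention(q, k, v)
    except (ImportError, NotImplementedError):
        # SDPA expects (batch, heads, seq_len, head_dim); transpose in and out.
        out = F.scaled_dot_product_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
        )
        return out.transpose(1, 2)

# CPU smoke test with the problematic head dimension:
q = torch.randn(1, 64, 1, 512)
print(attention_with_fallback(q, q, q).shape)  # torch.Size([1, 64, 1, 512])
```

In the webui specifically, the `--opt-sdp-attention` launch flag selects the PyTorch SDPA path instead of xformers, which sidesteps this dispatch limit entirely.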

@ppanchad-amd

Hi @Looong01. Internal ticket has been created to assist with your issue. Thanks!

@tcgu-amd added the enhancement and feature request labels on Dec 18, 2024
@qianfengz
Contributor

Please raise a ticket internally

@tcgu-amd

Hi @Looong01, thanks for reaching out! We are aware of this gap and are working on addressing it internally. However, we do not have a specific release date at the moment. Please keep an eye out for updates on the official ROCm release channels. Thank you!
