Issues: huggingface/optimum-quanto
#361 ValueError: invalid literal for int() with base 10: '90a' (opened Dec 24, 2024 by MatthewCroughan)
#359 "does not have a parameter or a buffer named input_scale" in quanto FluxTransformer2DModel (opened Dec 6, 2024 by LianShuaiLong)
#356 Clarification on Per-Channel vs. Per-Tensor Quantization for Weights and Activations [Stale] (opened Nov 24, 2024 by kirkdort44)
#343 Only random noise is generated with Flux + LoRA with optimum-quanto >= 0.2.5 (opened Oct 30, 2024 by nelapetrzelkova)
#332 Corrupted outputs with Marlin int4 kernels as parallelization increases [bug, help wanted] (opened Oct 6, 2024 by dacorvo)
#312 qint4 failed for diffusers: QBitsTensor cannot be changed [Stale] (opened Sep 19, 2024 by liyihao1230)
#254 Packages created on the CI are missing cpp and cuda extension files (opened Jul 23, 2024 by dacorvo)