OpenCL #1024
Comments
Additionally, I would like to know a second thing: can I load all the backends and let the library use the best one? Let's say CUDA 12, CUDA 11, OpenCL, Vulkan, CPU?
Okay, I went through the closed tickets, in case someone else ends up with the same problem I did... Now my last question: is it possible to show the user which technology is supported on which card? Is there an API for this? For example, "Vulkan is available on this card" or "CUDA is available on this card", so the user could actually choose? Or so I could decide for the user, showing or hiding specific functionality so they don't run an LLM search on a slow CPU, and so on.
Looks like you've answered a lot of your own questions; just to reiterate, to be clear:
OpenCL: this was replaced by Vulkan upstream in llama.cpp, so we did the same here.
Loading all backends: this should work. The loading system inspects your system and tries to load the best available backend (e.g. which of Vulkan vs CUDA is installed, what AVX level your CPU supports, and so on). It's not always perfect, so we have methods like `WithCuda` to override the selection; see the sketch below.
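For illustration, here is a minimal sketch of steering backend selection, assuming LLamaSharp's `NativeLibraryConfig` fluent API (`WithCuda` is mentioned in this thread; `WithAutoFallback` and the exact namespace and signatures are assumptions, so check the docs for your version):

```csharp
using LLama.Native;

// Backend selection must be configured before any other LLamaSharp call:
// the native library is resolved once, when it is first loaded.
NativeLibraryConfig.Instance
    .WithCuda(true)          // prefer the CUDA backend when a compatible runtime exists
    .WithAutoFallback(true); // fall back to the next best backend (e.g. CPU) if loading fails
```

With several backend packages installed side by side (CUDA 12, Vulkan, CPU, ...), the idea is that the loader probes them in preference order and falls back whenever one cannot initialise.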
Capability detection: LLamaSharp doesn't currently have an API for you to get this info directly. There may be other NuGet packages you could use to get it, or you could open a PR to expose some of this info.
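For the capability question, if you only need a rough signal for your own UI (e.g. whether to show a "use GPU" option), a common workaround outside LLamaSharp is to probe for the vendor tooling. A hedged sketch: `nvidia-smi` exiting with code 0 is only a heuristic that a working NVIDIA driver (a prerequisite for the CUDA backend) is present:

```csharp
using System;
using System.Diagnostics;

static bool LooksLikeCudaIsAvailable()
{
    try
    {
        // nvidia-smi ships with the NVIDIA driver; if it runs and exits
        // cleanly, a working driver is almost certainly installed.
        using var proc = Process.Start(new ProcessStartInfo
        {
            FileName = "nvidia-smi",
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true,
        });
        proc!.WaitForExit(5000);
        return proc.ExitCode == 0;
    }
    catch
    {
        // Process.Start throws when the executable is not on PATH.
        return false;
    }
}
```

Vulkan detection would need an actual device enumeration through a Vulkan binding; there is no equivalent one-liner in the base class library.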
Hi there,
Can you please point me to where I can find some usage of OpenCL?
My understanding is that the user should be able to choose whether or not to use their GPU. When I installed the OpenCL package I still only saw `WithCuda`, not a `WithOpenCL`, so it's clear I'm missing something.
Thank you for your time ✌️