
Can we pool cpu and gpu? Or is that impossible? #25

Open
DuckersMcQuack opened this issue May 18, 2023 · 3 comments

Comments

@DuckersMcQuack

I plan to get another 3090, but if it's possible to "pool" CPU and GPU performance, allocating 24 GB of RAM so each gets an equal shared amount, I'd love to try that! It might be dreadfully slow given the potato CPU vs. GPU performance gap, but it's worth experimenting with!

The CPU in question is a 5900X.

@NickLucche
Owner

Hey, sorry for the late reply. This is possible, although a bit unconventional given the huge speed difference. You would want two models, one on the CPU and the other on the GPU; did I get that right?
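A minimal sketch of that idea, assuming a PyTorch model (the model here is a stand-in, not this repo's actual API): load two copies of the same network, one per device, and split each batch between them.

```python
# Hedged sketch, not the repo's real code: two copies of one model,
# one on CPU and one on GPU, each handling part of a batch.
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    # Stand-in for whatever model the project actually loads.
    return nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
models = {d: make_model().to(d).eval() for d in devices}

batch = torch.randn(8, 16)
# Naive split: one chunk per device. The GPU chunk finishes far sooner,
# which is exactly the imbalance discussed below.
chunks = torch.chunk(batch, len(devices))
with torch.no_grad():
    outputs = [models[d](c.to(d)).cpu() for d, c in zip(devices, chunks)]
result = torch.cat(outputs)
print(result.shape)  # torch.Size([8, 4])
```

The naive even split is the simplest form of pooling; anything smarter has to account for the devices' very different throughputs.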

@DuckersMcQuack
Author

> Hey, sorry for the late reply, this is possible although a bit unconventional due to the huge speed differences. You would want to have two models, one on cpu and the other on gpu did I get that right?

If that's what it takes to get CPU pooling as well, sure! I've got 64 GB of RAM, so there's plenty to take from :) I want to see for myself what speed increase that would achieve.

@NickLucche
Owner

RAM wouldn't be the bottleneck here, though. What would happen is that you'd get the GPU model's output in a few seconds for, say, image_0, and then be stuck waiting for the CPU model to finish computing image_1, image_2, and so on.

What I would rather consider is adding an optimized model for CPU-only inference.
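One common way to optimize a model for CPU-only inference, offered here purely as an assumption about what that could mean (the comment doesn't specify a technique), is PyTorch's dynamic int8 quantization of the linear layers:

```python
# Assumption, not the repo's stated plan: dynamic int8 quantization,
# a standard PyTorch option for faster CPU-only inference.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 64)
).eval()

# Replaces nn.Linear weights with int8, computing activations in float.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
with torch.no_grad():
    y = quantized(x)
print(y.shape)  # torch.Size([1, 64])
```

Dynamic quantization needs no calibration data and typically gives a meaningful CPU speedup on linear-heavy models, at a small accuracy cost.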
