I am using a MacBook Pro M1. Everything works fine in the ingest step, but when I run the run_localGPT.py file, inference takes a long time, even for a simple question like "Hi".
What could be the issue here? I tried both mps and cpu, with the same result.
I tried Ollama with Llama 2 for chatting and it's quite fast, so I'm not sure why the inference here is so slow.
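One thing worth ruling out before comparing against Ollama: whether PyTorch can actually see the M1 GPU. A minimal check (assuming torch is installed in the same environment that runs run_localGPT.py; if MPS reports unavailable, the mps device flag silently falls back to CPU, which would explain the slow generation):

```python
# Check whether PyTorch was built with MPS support and can use the
# Apple-silicon GPU. If this prints False, "mps" falls back to CPU.
try:
    import torch
    mps_ok = torch.backends.mps.is_available() and torch.backends.mps.is_built()
except ImportError:
    mps_ok = False  # torch not installed in this environment
print("MPS usable:", mps_ok)
```

If this prints False on an M1, reinstalling a recent torch wheel (1.12+) for arm64 usually fixes it.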