Using CoreML on macOS #137
Hey 👋. Can you try using the execution-provider setup in `fastembed-rs/src/text_embedding/init.rs`, lines 47 to 54 (commit cfec7d7)?

Ref: https://ort.pyke.io/perf/execution-providers for CoreML.
Hey @Anush008!! Do you think adding "default execution providers" could be a good idea? Something like this:

```rust
.with_execution_providers({
    #[cfg(any(target_os = "macos", target_os = "ios"))]
    {
        use ort::{CoreMLExecutionProvider, XNNPACKExecutionProvider};
        [
            CoreMLExecutionProvider::default().build(),
            XNNPACKExecutionProvider::default().build(),
        ]
    }
    #[cfg(target_os = "windows")]
    {
        use ort::DirectMLExecutionProvider;
        [DirectMLExecutionProvider::default().build()]
    }
    ...
})
```

If yes, I'll try something this week and PR it ;) Cool lib, and a cool resource for learning the basics of ort, btw! :) (Not my main GitHub account, that's why it looks weird, lol — @PierreLouisLetoquart)
I believe this would be better left to the users to choose.
Okay, yes, that's what I thought, given the large choice of models.
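As a sketch of what that user-side configuration could look like, assuming a recent fastembed-rs with the `InitOptions::new` builder and its `with_execution_providers` option, plus the `ort` `CoreMLExecutionProvider` (exact module paths and builder names vary between versions, so treat this as illustrative rather than definitive):

```rust
use fastembed::{EmbeddingModel, InitOptions, TextEmbedding};
use ort::execution_providers::CoreMLExecutionProvider;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The user opts in to CoreML explicitly; if the provider fails to
    // register, ort falls back to the CPU execution provider.
    let model = TextEmbedding::try_new(
        InitOptions::new(EmbeddingModel::AllMiniLML6V2)
            .with_execution_providers(vec![CoreMLExecutionProvider::default().build()]),
    )?;

    let embeddings = model.embed(vec!["hello world"], None)?;
    println!("embedding dimension: {}", embeddings[0].len());
    Ok(())
}
```

Keeping this in user code, rather than as a library default, means each application decides which providers to register and in what priority order.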
Hello, I have a MacBook with an M4 chip, which comes with a Neural Engine. But when using fastembed, the Neural Engine is not used at all; instead, the CPU runs at 100%.
Is there a way to tell fastembed to use the Neural Engine, or is this feature just missing?