Replies: 3 comments 2 replies
-
Have you tried the llama.cpp LLM? I'm still debugging the flows, but I think I'm able to use it against the 7B and 13B models I have running locally. (I think. Still working out the rough spots.)
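For reference, a minimal sketch of what running a local model through LangChain's LlamaCpp wrapper looks like. The model path and parameters here are placeholders, not values from this thread, and the code assumes `llama-cpp-python` is installed and a GGUF/GGML model file exists locally:

```python
# Hedged sketch: LangChain's LlamaCpp wrapper against a local model file.
# The model_path below is a placeholder -- point it at your own 7B/13B weights.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-7b.gguf",  # placeholder path, not from this thread
    n_ctx=2048,       # context window size
    temperature=0.7,  # sampling temperature
)
print(llm("Q: What does this flow do? A:"))
```

This is a sketch under those assumptions, not a confirmed Langflow configuration.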
-
I am able to run models using the CTransformers wrapper, but I can't seem to use GPU inference. So for now it isn't workable yet.
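For what it's worth, `ctransformers` does expose a `gpu_layers` option for offloading layers to the GPU, but it only takes effect if the package was installed with CUDA support (e.g. `pip install ctransformers[cuda]`). A minimal sketch, with the model name as an illustrative example rather than something from this thread:

```python
# Hedged sketch: CTransformers wrapper with GPU offload enabled.
# gpu_layers is ignored unless ctransformers was built/installed with CUDA support.
from langchain.llms import CTransformers

llm = CTransformers(
    model="TheBloke/Llama-2-7B-GGML",  # example model repo, swap in your own
    model_type="llama",
    config={"gpu_layers": 50},  # number of layers to offload to the GPU
)
print(llm("Hello"))
```

If `gpu_layers` has no effect, it's worth checking whether the installed wheel actually has CUDA enabled, since the CPU-only build silently ignores the setting.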
-
Hey! Thanks for sharing your question here. I'm sorry for not getting back to you sooner. |
-
Hey folks! I really would like to see an alternative LLM in Langflow. I would like to be able to use at least the HuggingFaceHub models; even better would be a custom model on my local machine. I think this would benefit everyone and could free us developers from private costs and code blindness. :(((