Newer ggml files output gibberish due to outdated llama.cpp #614
Comments
Ahh ok. LCPP (llama.cpp) is pretty far out of date, so I've been working on getting it up to date.
It might be worthwhile pointing out that it would be a lot easier to get it up to date if you followed correct git procedure, because most of the patches would then apply automatically. My suggestion for this version: start with a fresh copy of llama.cpp and then apply your changes, but do it with the proper git commands so that git actually keeps track of where you're moving files. For example, if you have to move a file, don't copy and paste it using the operating system; instead use git mv.
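To illustrate the point above (in a throwaway repo with made-up file names, not the actual maid_llm tree): recording a move with `git mv` lets `git log --follow` trace the file across the rename, so later patches and merges follow the file to its new location.

```shell
# Scratch repo demonstrating git mv (all paths here are invented for the demo)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

mkdir src
echo 'int main(void) { return 0; }' > src/old_name.c
git add src/old_name.c
git commit -qm "add old_name.c"

# Right way: git mv stages the rename, so history follows the file
git mv src/old_name.c src/new_name.c
git commit -qm "rename old_name.c -> new_name.c"

# --follow traces the file back through the rename
git log --follow --oneline -- src/new_name.c
```

A copy-paste through the file manager followed by `git add` can still be detected as a rename heuristically, but `git mv` makes the intent explicit and stages both sides of the move in one step.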
The patches I've applied are only in one file. I've already "fixed" it so it compiles now, but for some reason it's crashing on macOS.
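Since the local changes live in a single file, one low-friction option (sketched here in a scratch repo with invented file names, not the real maid_llm layout) is to save them as a patch that can be reapplied after swapping in a fresh llama.cpp:

```shell
# Scratch repo: save a one-file change as a patch, then reapply it to a clean tree
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'original\n' > config.h          # stand-in for the one modified file
git add config.h
git commit -qm "baseline (pretend this is upstream llama.cpp)"

printf 'patched\n' > config.h           # the local modification
git diff > local-changes.patch          # capture it as a patch
git checkout -- config.h                # simulate pulling a fresh copy

git apply local-changes.patch           # reapply the saved change
cat config.h                            # prints: patched
```

If upstream has drifted, `git apply` reports exactly which hunks fail instead of silently producing a mixed file.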
Can you send me the crash dump, the debug binary file, and the source code you're working with?
It's just the update-lccp branch of maid_llm: https://github.com/Mobile-Artificial-Intelligence/maid/actions/runs/10571003934
Well, I need the binary and crash dump from you, because I can't generate a macOS crash dump or binary without a macOS machine.
It crashes on Android too, and presumably on Linux and Windows as well. The macOS binary was from the same actions commit. You can try one of the other platforms if you have Linux or Windows: https://github.com/Mobile-Artificial-Intelligence/maid/actions/runs/10571003936 Here's the crash dump.
Hang on, this is a babylon problem.
I was going to say that it says that right in the error, so...
Yeah, I fixed the babylon problem; now it's a different error.
Ok, new dump.
Still a babylon error, in this function: DeepPhonemizer::Session::Session(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, bool)
Never mind, I was looking at the wrong error.
Models generated recently don't work; for example, anything Llama 3.1 based.
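One quick sanity check for this kind of report (file names below are fabricated for the demo, and gibberish output can also stem from tokenizer changes rather than the container itself): recent models ship as GGUF files, whose header begins with the ASCII magic "GGUF" followed by a little-endian uint32 version. An outdated llama.cpp may only support an earlier GGUF version than the file declares.

```shell
# Fabricate a minimal GGUF-style header just to show the check (not a real model)
printf 'GGUF\003\000\000\000' > fake_model.bin

head -c 4 fake_model.bin && echo        # prints the magic: GGUF
# Version field, bytes 4-7 as an unsigned int (on little-endian hosts)
od -An -t u4 -j 4 -N 4 fake_model.bin
```

If the magic and version look fine, the mismatch is more likely in newer metadata (e.g. the pre-tokenizer settings Llama 3.1 conversions rely on) that old loaders silently ignore.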