add llamafile code snippet #1088
base: main
Conversation
(launching the CI)
Co-authored-by: Julien Chaumond <[email protected]>
const LinuxAndMacCommand = () => {
	const snippet = [
		"# Load and run the model:",
		`wget https://huggingface.co/${model.id}/resolve/main/${filepath ?? "{{LLMAFILE_FILE}}"}`,
Is LLMAFILE_FILE correct? Should it be LLAMAFILE_FILE? Where is it defined?
Thanks a lot for raising this @pcuenca. This was the most confusing aspect of this PR: I found {{GGUF_FILE}} as well, but it was not defined anywhere else in the repo, so I would love to get some input on this one. Maybe it's some kind of Jinja pattern; if so, let me know what other changes need to be made.
@mishig25 will be able to help
simplest would be to get one example file ending with .llamafile (if we have the list of files here, i don't remember) – not sure we need to implement the same selector as for GGUF @mishig25 – maybe overkill here
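A minimal sketch of that suggestion, assuming the snippet context exposes a repo file listing via model.siblings (that field name and shape are my assumption, not something shown in this diff):

// Sketch only: pick one example *.llamafile file from the repo listing.
// `model.siblings` and `rfilename` are assumed names, not taken from this PR.
const example = model.siblings?.find((s) => s.rfilename.endsWith(".llamafile"));
const filename = example?.rfilename ?? "{{LLAMAFILE_FILE}}";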
For 🦙📁 the file needs to have .llamafile somewhere in the filename, but it might not need to end with that file extension. An example of this is https://huggingface.co/Mozilla/Meta-Llama-3-70B-Instruct-llamafile/tree/main. For heavy files you might find that the cat[number] part is either at the end or right before the file extension.
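A hedged sketch of a matcher for that naming scheme (the helper name and example filenames are illustrative, not from the repo):

// Sketch: accept ".llamafile" anywhere in the filename, not only as the suffix.
const isLlamafile = (name: string): boolean => name.includes(".llamafile");
isLlamafile("Meta-Llama-3-70B-Instruct.Q4_0.llamafile"); // true
isLlamafile("Meta-Llama-3-70B-Instruct.F16.llamafile.cat0"); // true (split heavy file, illustrative name)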
Ah yes, I didn't realize we were planning to use a similar method as for gguf, makes sense.
Looking at llamafile model repos on the Hub: https://huggingface.co/models?library=llamafile, I only see a few from Mozilla which have more than 2-3 llamafiles. Unless implementing a selector is trivial, I'd recommend that we pick the first file, or just name it xyz.llamafile as we did in the really early GGUF snippets.
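Expressed against the wget line above, that recommendation could look like this sketch (the xyz.llamafile placeholder comes from the comment, not from the PR's final code):

// Sketch: fall back to a literal placeholder name when no filepath is given.
`wget https://huggingface.co/${model.id}/resolve/main/${filepath ?? "xyz.llamafile"}`,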
Yes^ (BTW we only detect *.llamafile files as library=llamafile in the Hub. That's fine IMO, let's keep things simple)
Nice! thanks a lot for picking this up again @not-lain 💪
Fixes #871 and #848. As mentioned, llamafile can work both with files ending in a .llamafile extension and with GGUF ones as well; this PR will support only the .llamafile extension files. More details can be found in this Slack thread (internal).
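For reference, a hedged sketch of the full Linux/macOS command list such a component might emit; the filename is illustrative, and the chmod/run steps follow standard llamafile usage (llamafiles are self-contained executables):

// Sketch of a complete snippet; "model.llamafile" is an illustrative name,
// not a real default in the PR.
const snippet = [
	"# Load and run the model:",
	`wget https://huggingface.co/${model.id}/resolve/main/model.llamafile`,
	"chmod +x model.llamafile",
	"./model.llamafile",
];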