Starting with GPT4All 3.5.0, compatibility has been introduced with the Jinja chat templates found in a large language model's accompanying tokenizer_config.json. Since GPT4All's Jinja parser is C++-based, while some model authors create their templates with Python-based Jinja tooling, there are some compatibility issues left to be addressed. In this issue we collect reports of model chat templates that do not (yet) work with GPT4All.
If you want your model's chat template to be added to this list, please create a separate GitHub issue and then link it to this issue using Markdown syntax.
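For context, a `chat_template` is a Jinja string stored in tokenizer_config.json that turns a list of messages into a single prompt string. The sketch below renders a minimal ChatML-style template with the Python `jinja2` package (the kind of tooling model authors typically use); the template text is illustrative only and not taken from any particular model, so a real model's template may use constructs that GPT4All's C++ parser does not yet accept.

```python
from jinja2 import Environment

# A minimal ChatML-style chat template, similar in shape to the
# chat_template field in a model's tokenizer_config.json.
# (Illustrative only -- not copied from any specific model.)
template_source = (
    "{% for message in messages %}"
    "<|im_start|>{{ message['role'] }}\n{{ message['content'] }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

template = Environment().from_string(template_source)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the conversation into the final prompt string,
# ending with an open assistant turn for the model to complete.
prompt = template.render(messages=messages, add_generation_prompt=True)
print(prompt)
```

A template like this renders identically in Python and C++ implementations; the incompatibilities reported below typically involve less common Jinja features.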
Chat template issues raised on Discord:
MaziyarPanahi/calme-2.1-phi3.5-4b-GGUF (fixed) link
QuantFactory/Hermes-3-Llama-3.2-3B-GGUF (already fixed by openorca substitution): link