Description
There seems to be an issue with the Gemma 2b v1.0 and v1.1 models. Even on a branch of this repository with an up-to-date llama.cpp, no matter which chat format I use, I still see superfluous tokens and poor-quality responses coming back.
I've tried all sorts of chat templates.
The one from the HF page:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
The one used by LM Studio:

```
<start_of_turn>user
USER_PROMPT_HERE<end_of_turn>
<start_of_turn>model
```
No chat template at all:

```
USER_PROMPT_HERE
```
And many more.
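For reference, this is roughly how I'm feeding these templates through llama-cpp-python (the model path and sampling parameters below are placeholders, not my exact setup):

```python
from llama_cpp import Llama

# Placeholder path: any Gemma 2b v1.0 / v1.1 GGUF shows the same behavior for me.
llm = Llama(model_path="./gemma-2b-it.gguf", n_ctx=2048)

# HF-style Gemma template, built by hand and sent as a raw completion.
prompt = (
    "<bos><start_of_turn>user\n"
    "Write a hello world program<end_of_turn>\n"
    "<start_of_turn>model\n"
)

out = llm(prompt, max_tokens=256, temperature=0.0, echo=False)
print(out["choices"][0]["text"])
```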
In the case of no chat template, I see `<eos>` printed, and that's all.
In the other cases, I sometimes see a reasonable generation, but then I also see extra strings printed at the end, such as:

```
</start_of_turn><eos>
```
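Presumably the stray tokens could be masked by passing explicit stop strings, but that would only hide them and wouldn't explain the degraded quality (untested sketch, same placeholder setup as above):

```python
# Workaround only: cut generation at Gemma's turn delimiters so the stray
# tokens don't leak into the output; the underlying problem would remain.
out = llm(prompt, max_tokens=256, stop=["<end_of_turn>", "<eos>"], echo=False)
print(out["choices"][0]["text"])
```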
The issue does not appear to be in llama.cpp. (1) I've tested with llama-cpp-python and llama.cpp directly, and (2) LM Studio does not have these issues, even with the same weights. I also do not have problems with this repository when using Mistral 7b v0.2.
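If anyone wants to reproduce through the higher-level chat API instead of raw prompt strings, something like this should exercise the same path (assuming llama-cpp-python's registered "gemma" chat format is the right handler here):

```python
from llama_cpp import Llama

# Placeholder path again; chat_format="gemma" uses the handler that
# llama-cpp-python registers for Gemma-style turns.
llm = Llama(model_path="./gemma-2b-it.gguf", n_ctx=2048, chat_format="gemma")

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a hello world program"}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```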
Has anyone else seen this happen? Is there something I'm missing?