Conversation

@ggerganov (Member):

cont #10171

The Q*K results in the Metal flash-attention vector kernel were accidentally being accumulated in F16 instead of F32.

```
./llama-cli -m ./models/qwen2.5-1.5b-coder/ggml-model-f16.gguf -s 1 -p "I believe the meaning of life is to" -n 32 -fa

...

I believe the meaning of life is to keep it simple.

@@@@@@@@@@@@@@@@@@@@@@@@@@@@
```
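
For context, here is a minimal standalone sketch (not the Metal kernel touched by this PR) of why the accumulator type matters. It reduces the same sequence of small partial products, standing in for the q[i]*k[i] terms, once into an F16 accumulator and once into an F32 one. The vector length and the fabricated values are made up for illustration, and the example assumes a compiler that provides the `_Float16` extension (recent Clang/GCC):

```cpp
#include <cstdio>

int main() {
    const int n = 4096;                 // illustrative reduction length (e.g. a long KV range)

    _Float16 acc16 = (_Float16) 0.0f;   // F16 accumulator (the bug)
    float    acc32 = 0.0f;              // F32 accumulator (the fix)
    double   ref   = 0.0;               // high-precision reference

    for (int i = 0; i < n; ++i) {
        // small, same-sign partial products standing in for q[i]*k[i]
        const float prod = 0.013f + 0.0001f*(i % 7);

        acc16 = acc16 + (_Float16) prod; // every add is rounded back to F16 (~11-bit significand)
        acc32 += prod;
        ref   += prod;
    }

    printf("F16 accumulator: %8.4f  (rel. error %+.2e)\n", (double) acc16, ((double) acc16 - ref)/ref);
    printf("F32 accumulator: %8.4f  (rel. error %+.2e)\n", (double) acc32, ((double) acc32 - ref)/ref);
    return 0;
}
```

With these values the F16 accumulator stalls once the running sum reaches about 32: past that point each new term is smaller than half an F16 ulp and rounds away entirely, while the F32 accumulator stays within float rounding error of the reference. Accumulating in F32 and converting only the final result avoids this class of error.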

ggerganov merged commit bb38cdd into master on Nov 9, 2024 (49 checks passed).
ggerganov deleted the gg/metal-fa-vec-fix-prec branch on November 9, 2024 at 09:52.
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024