context window size error on simple prompt with small model #2220
Closed
AwaisBinKaleem
started this conversation in Feature requests
Replies: 1 comment
I am using a ZBook Fury 15 G7:
i7 hexa-core
64 GB RAM
NVIDIA Quadro T1000 (4 GB)
Ubuntu 24
latest versions of Ollama and VS Code
The NVIDIA drivers are working, as we can see from their status.
I have set up Ollama with the model qwen3:14b:
qwen3:14b bdbd181c33f2 9.3 GB 13 days ago
and configured it with Kilo Code.
When I give a simple prompt, "print the first 10 prime numbers in Python",
it gives the following error:
Input message is too long for the selected model. Estimated tokens: 7982, Max tokens: 4096. To increase the context window size, see: http://localhost:3000/docs/providers/ollama#preventing-prompt-truncation
Here are the logs:
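The error above suggests the request is being truncated at Ollama's default context window. As a sketch (assuming a stock local Ollama install, where the context size `num_ctx` commonly defaults to 4096 tokens), one way to raise it is to create a derived model with a larger `num_ctx` via a Modelfile; the derived model name `qwen3-14b-8k` here is just an illustrative choice:

```shell
# Write a Modelfile that derives from qwen3:14b with a larger context window.
# 8192 is an example value; it must fit in available VRAM/RAM.
cat > Modelfile <<'EOF'
FROM qwen3:14b
PARAMETER num_ctx 8192
EOF

# Then build the derived model (requires a running Ollama install):
#   ollama create qwen3-14b-8k -f Modelfile
# and point Kilo Code at qwen3-14b-8k instead of qwen3:14b.
```

Note that a larger context window increases memory use, which may matter on a 4 GB GPU; Ollama can spill layers to system RAM at the cost of speed.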