Name and Version
version: 4690 (4078c77)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-server
Command line
llama-server -m <any model>
Problem description & steps to reproduce
The user input textarea becomes extremely sluggish while typing once the conversation above it contains a lot of context.
- Write a very long message (16K+ tokens) and send it.
- Try typing into the textarea: performance is very sluggish. Are we re-rendering a component like ChatScreen on every keystroke and doing expensive work on the previous messages? I can't easily spot where that would happen myself (see the sketch below).
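
To illustrate the pattern I suspect, here is a minimal sketch; it is not the actual webui code, and MessageView/ChatInput are hypothetical names. If the textarea's draft state lives in the same component that renders every prior message, each keystroke re-renders the whole list and re-runs any per-message formatting. Isolating the draft state in its own component and memoizing the per-message rendering avoids that:

```tsx
import React, { memo, useState } from 'react';

type Message = { id: number; role: 'user' | 'assistant'; content: string };

// Stand-in for whatever costly per-message work the real UI does
// (markdown rendering, syntax highlighting, ...). memo() keeps it from
// re-running on keystrokes as long as the message object is unchanged.
const MessageView = memo(function MessageView({ msg }: { msg: Message }) {
  const rendered = msg.content.toUpperCase(); // placeholder for expensive formatting
  return <div className="message">{rendered}</div>;
});

// Keeping the draft state here means typing only re-renders this component,
// not the message list above it.
function ChatInput({ onSend }: { onSend: (text: string) => void }) {
  const [draft, setDraft] = useState('');
  return (
    <textarea
      value={draft}
      onChange={(e) => setDraft(e.target.value)}
      onKeyDown={(e) => {
        if (e.key === 'Enter' && !e.shiftKey) {
          e.preventDefault();
          onSend(draft);
          setDraft('');
        }
      }}
    />
  );
}

export function ChatScreen({ messages }: { messages: Message[] }) {
  return (
    <div>
      {messages.map((m) => (
        <MessageView key={m.id} msg={m} />
      ))}
      <ChatInput onSend={(text) => console.log('send', text)} />
    </div>
  );
}
```

If the real ChatScreen follows the first pattern (draft state and message rendering in one component, no memoization), a change along these lines might restore typing responsiveness with long conversations.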
First Bad Commit
Likely the commit that introduced the new React UI, but I'm not entirely sure.
Relevant log output