I've just come back from vacation, so every part of the chain has been updated: new versions of VS Code, Continue, Ollama, and macOS. Other than updating the software, I have not changed the configuration. Loading the config now takes minutes. To debug, I deleted `.continue`, reinstalled the plugin, and restarted Ollama. Loading the config still takes minutes, and the same happens when changing models. If I keep the same model, the response is "instant". This is my current config:
I ran a tcpdump to see whether the delay is in Ollama or the editor. Based on the tcpdump output, it seems the editor does not send any request to Ollama until some timeout expires. (Not sure whether to blame VS Code or Continue, so I'm grouping them together as "the editor".) The timing in the Continue console in VS Code matches the tcpdump timing: it does not log a "chat" request when I press Enter in the chat box, but only when the data in tcpdump starts to flow. Testing Ollama with curl, it switches between models immediately; no timeout there. I can also reproduce the issue with LM Studio.
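For anyone who wants to reproduce the check outside the editor, something like the following works (a sketch only: the model names are placeholders, and Ollama's default port 11434 and the macOS loopback interface `lo0` are assumed):

```shell
# Watch loopback traffic to Ollama's default port while triggering a
# request from the editor; if packets only appear after the stall, the
# request is being held up before it ever reaches Ollama.
sudo tcpdump -i lo0 -n port 11434

# In another terminal: time a small request against one model, then
# immediately against another, to measure the model switch itself.
# Replace the model names with the ones from your own config.
time curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "hi", "stream": false}' > /dev/null
time curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder", "prompt": "hi", "stream": false}' > /dev/null
```

If the curl timings are fast while the editor stalls, that points at the editor side rather than Ollama.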
I have no idea why, but today the issue disappeared. I did not reboot the Mac; I just closed it after work yesterday, opened it today, and magically the timeout is gone.
We are moving support requests to discussions for better discovery. #6929