The JetBrains plugin doesn't seem to provide any way to target my local models from Ollama. I managed to work around the same limitation in VS Code, but only because I could edit the config for the VS Code Cody extension and point it at the Ollama server API for code-completion/chat inference.
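For reference, this is roughly what I added to my VS Code `settings.json` to route Cody's autocomplete through the local Ollama server (exact keys and the model name may differ depending on the Cody version, so treat this as a sketch of my setup rather than authoritative values):

```jsonc
{
  // Switch Cody's autocomplete provider to the local Ollama backend
  // (key names may vary between Cody releases)
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    // Default local Ollama endpoint
    "url": "http://localhost:11434",
    // Example model; substitute whatever model you have pulled locally
    "model": "codellama:7b-code"
  }
}
```

With that in place, completion requests go to the local Ollama server instead of a remote endpoint. I'm looking for the equivalent setting in the JetBrains plugin.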
Please advise how to point the JetBrains plugin at a local Ollama instance for offline chat and auto-completion.
Thanks in advance!
