Describe the bug
I am running ollama-client on Vivaldi, configured with the LAN IP address of an ASUS Ascent GX10 computer (equivalent to the NVIDIA DGX Spark) running Ollama. ollama-client sees the machine and lists the models correctly, but when I submit a prompt I immediately get: ❌ Unknown error: Failed to fetch
To Reproduce
See above
Expected behavior
I expected the prompt to be submitted and a response to come back.
Screenshots
If applicable, add screenshots to help explain your problem.
Environment (please complete the following information):
- OS: Local machine Windows 11
- Browser: Vivaldi Version 7.0.3495.27
Additional context
Confirmed that Ollama is responding locally on the GX10 and via Open WebUI.
Also confirmed the Ollama API is reachable at that IP address from my web browser, which returns "Ollama is running".
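For reference, the checks above can be reproduced from the command line. This is a sketch assuming Ollama's default port 11434; the IP address and model name are placeholders, not values from my setup:

```shell
# Hypothetical LAN address of the GX10 -- replace with the actual IP.
GX10_IP=192.168.1.50

# Root endpoint: prints "Ollama is running" if the server is reachable.
curl "http://$GX10_IP:11434/"

# Generate endpoint: roughly the request made when a prompt is submitted.
# If this fails while the root check succeeds, the problem is specific
# to the POST request (e.g. blocked by the browser), not reachability.
curl "http://$GX10_IP:11434/api/generate" \
  -d '{"model": "llama3", "prompt": "hello", "stream": false}'
```

Both curl calls succeed from my machine, which is why the in-browser "Failed to fetch" is surprising.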