use ollama with intel gpu. #8372
This feels like a specific issue to triage via Ollama. The best I can do is provide some triaging ideas, based on my research (in Claude):

**Summary of the problem**

This is a connection error when trying to use the ollama-ipex-llm build from VS Code (Continue).

**The core issue: port 58023**

The error shows port 58023 instead of Ollama's standard port 11434. This is highly unusual and suggests a configuration mismatch.
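Before changing anything, it can help to confirm which port the server is actually listening on. A minimal sketch (assuming `curl` is available; the `ss`/`netstat` lines are optional fallbacks):

```bash
# If Ollama is really serving on its default port, this returns
# a small JSON object with the server version.
curl http://localhost:11434/api/version

# If that fails, check which ports are actually being listened on.
# Linux:
ss -tlnp | grep -i ollama
# Windows (cmd/PowerShell):
#   netstat -ano | findstr LISTENING
```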
**Fix: Ollama-IPEX-LLM + Continue connection error**

**Problem:** your error shows port 58023, but Ollama serves its API on port 11434. This is a configuration mismatch.

**Quick fix**

**1. Start Ollama correctly** (see the GPU check sketch after step 3)

Windows:

```cmd
set OLLAMA_HOST=0.0.0.0:11434
set OLLAMA_NUM_GPU=999
ollama serve
```

Linux:

```bash
export OLLAMA_HOST=0.0.0.0:11434
export OLLAMA_NUM_GPU=999
./ollama serve
```

Leave this terminal open.

**2. Fix the Continue config**

Open VS Code → set this in your Continue configuration file:

```json
{
  "models": [{
    "title": "IPEX-LLM",
    "provider": "ollama",
    "model": "AUTODETECT",
    "apiBase": "http://localhost:11434"
  }]
}
```

**Critical:** make sure `apiBase` points to port 11434, not 58023.

**3. Reload VS Code**
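Since the original goal was to get off the CPU, it is also worth confirming that the server started in step 1 is actually placing the model on the Arc GPU. A minimal sketch (assumes a reasonably recent Ollama CLI where `ollama ps` reports a PROCESSOR column; `llama3` is just an example model name — use whatever `ollama list` shows):

```bash
# Load an example model once so there is something to inspect.
ollama run llama3 "hello" > /dev/null

# The PROCESSOR column should show "100% GPU" (or a GPU/CPU split);
# "100% CPU" means the IPEX-LLM backend is not actually in use.
ollama ps
```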
**Verify it works**

In a new terminal:

```bash
curl http://localhost:11434/api/tags
```

This should return JSON with your models.

**Still not working?**
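If that check succeeds but Continue still throws the same error, try reproducing a completion request directly against the API, outside VS Code. A sketch (again assuming an example model name; swap in one from `ollama list`):

```bash
# If this also fails with a dropped connection, the problem is in the
# Ollama/IPEX-LLM server itself (e.g. the runner crashing), not in Continue.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Say hello in one word.",
  "stream": false
}'
```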
I am using an Intel Arc A770. With the normal Ollama download from the website, it uses the CPU. I tried ollama-ipex-llm and it worked on the Intel GPU, but when I try to use it in VS Code it gives me this error:
```
Error parsing Ollama response: Error: POST predict: Post "http://127.0.0.1:58023/completion": read tcp 127.0.0.1:58025->127.0.0.1:58023: wsarecv: An existing connection was forcibly closed by the remote host. {"error":"POST predict: Post "http://127.0.0.1:58023/completion\": read tcp 127.0.0.1:58025-\u003e127.0.0.1:58023: wsarecv: An existing connection was forcibly closed by the remote host."}
```

