Can't fetch models from local Ollama API #5482
Replies: 6 comments
-
Same issue here: I can't curl from the Docker container to the locally hosted Ollama on port 11434. Maybe a Docker network bridge problem? My librechat.yaml is effectively the same as ai212983's. I added the OLLAMA_HOST env var locally, and also added this to docker-compose.yml, to no avail:

environment:
  - OLLAMA_HOST=http://host.docker.internal:11434 # added

Same problem:

{
"code": "ECONNABORTED",
"config": {
"adapter": [
"xhr",
"http",
"fetch"
],
"env": {},
"headers": {
"Accept": "application/json, text/plain, */*",
"Accept-Encoding": "gzip, compress, deflate, br",
"User-Agent": "axios/1.7.7"
},
"maxBodyLength": -1,
"maxContentLength": -1,
"method": "get",
"timeout": 5000,
"transformRequest": [
null
],
"transformResponse": [
null
],
"transitional": {
"clarifyTimeoutError": false,
"forcedJSONParsing": true,
"silentJSONParsing": true
},
"url": "http://host.docker.internal:11434/api/tags",
"xsrfCookieName": "XSRF-TOKEN",
"xsrfHeaderName": "X-XSRF-TOKEN"
},
"level": "error",
"message": "Failed to fetch models from Ollama API. If you are not using Ollama directly, and instead, through some aggregator or reverse proxy that handles fetching via OpenAI spec, ensure the name of the endpoint doesn't start with `ollama` (case-insensitive). timeout of 5000ms exceeded", |
-
@dane-git I don't think this is the same issue. The meaningful part of the error message is at the end of it; in my case it is:
-
Make sure Ollama is configured for network access: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server
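Per that FAQ, if Ollama runs as a systemd service on Linux, the usual way to expose it beyond localhost is to set OLLAMA_HOST for the service and restart it, roughly:

```shell
# From the Ollama FAQ: bind the server to all interfaces.
sudo systemctl edit ollama.service
# In the editor, add under [Service]:
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

On Windows and macOS the same variable is set through the regular environment-variable mechanisms described in the FAQ.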
-
Not sure if you are using WSL. If so, have you tried your WSL 2 IP instead?
You can get the WSL 2 IP with a command like the one below.
That was how I fixed it. I'm using WSL on Windows 11, and I'm not running Ollama as a container; I have it installed as an exe.
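A couple of commonly used ways to get that address (these are general-purpose commands, not necessarily the exact one used above):

```shell
# From Windows PowerShell: print the WSL 2 VM's IP address(es)
wsl hostname -I

# Or from inside the WSL shell itself
hostname -I
```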
-
I'm having this same issue. LibreChat refuses to see, connect to, or have anything to do with Ollama, even though Ollama is set up correctly, exposed for LibreChat to reach, and shows models ready for use. It seems to be a LibreChat issue.
-
I'm running Ollama natively on Windows. What worked for me was commenting out this part in
Remember that your Ollama URL needs to use host.docker.internal!
I think
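For reference, here is a sketch of what an Ollama custom endpoint in librechat.yaml can look like with the host.docker.internal URL, loosely following the LibreChat docs; the model names are placeholders, so swap in whatever ollama list reports:

```yaml
# Sketch of an Ollama custom endpoint for librechat.yaml.
# Per the error message above, a name starting with "ollama" makes
# LibreChat fetch models from the Ollama API directly.
custom:
  - name: "Ollama"
    apiKey: "ollama"
    baseURL: "http://host.docker.internal:11434/v1/"
    models:
      default: ["llama3"]   # placeholder model name
      fetch: true
```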
-
What happened?
With local Ollama set up as a custom endpoint in librechat.yaml, I see the following error in the console:
And the following JSON in the log files:
curl -s http://127.0.0.1:11434/api/tags works fine:
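Since the same request works from the host but times out from LibreChat, it can help to run it from inside the container; a sketch, assuming the default api service name and using Node's built-in fetch because curl may not be present in the image:

```shell
# Query Ollama's /api/tags from inside the LibreChat container.
docker compose exec api node -e "fetch('http://host.docker.internal:11434/api/tags').then(r => r.text()).then(console.log).catch(console.error)"
```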
Steps to Reproduce
1. Set up local Ollama as a custom endpoint in librechat.yaml.
2. Add the endpoint configuration to librechat.yaml:
3. Run docker compose up (non-daemon mode to see errors and messages).
What browsers are you seeing the problem on?
No response
Relevant log output
Screenshots
No response
Code of Conduct