Replies: 16 comments
-
Some extra context: the LangChain Python library is installed on the machine the models are being run from.
-
Getting the same error on Ollama and on LM Studio as well. It looks like the model name defaults to 3.5 in the request and doesn't change even after switching the model. I have both mistral and llama2 locally. The request from the LM Studio log is below:
-
Without a screenshot of the note and the console with debug mode on, it's hard to test on my side. Could you provide the screenshot? LM Studio server mode shouldn't depend on a model name, since you load the model first in its UI.
-
I should add to my post that I am using Ollama to serve the API, not LM Studio.
-
I encountered the same issue, but it turned out that my own mistake was the cause. I'll share my experience here. I'm using Windows PowerShell to start Ollama.
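The rest of that comment is cut off, but a common PowerShell pitfall (an assumption here, not something stated in the thread) is that the cmd-style `set NAME=value` does not set an environment variable in PowerShell, so `ollama serve` never sees `OLLAMA_ORIGINS`. The `$env:` syntax does:

```powershell
# PowerShell: set the environment variable for this session, then start the server
# from the same window so it inherits the value.
# (cmd-style `set OLLAMA_ORIGINS=app://obsidian.md*` would not set an env var here.)
$env:OLLAMA_ORIGINS = "app://obsidian.md*"
ollama serve
```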
-
Trying this out tonight. Thank you |
-
I ran the commands accordingly, and I have already pulled several local models, but Obsidian Copilot says "do not find llama2, please pull it first". What I ran:

`C:\Users\adam>set OLLAMA_ORIGINS=app://obsidian.md*`
`C:\Users\adam>ollama serve`
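Not part of the original comment, but two quick checks that could narrow this down (assuming the default Ollama port 11434 on localhost):

```sh
# List the models the local Ollama server knows about; the model Copilot requests
# (llama2) should correspond to one of these tags (e.g. llama2:latest).
ollama list

# Ask the running server for its model tags over HTTP to confirm it is reachable.
curl http://localhost:11434/api/tags
```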
-
Same |
-
Same error.
-
I got it all working. Are you guys running on Windows WSL?
-
I'm using macOS Sonoma.
-
In case someone is using fish shell too: I fixed this issue by setting the variable with `set -gx OLLAMA_ORIGINS 'app://obsidian.md*'`.
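For anyone on bash or zsh instead, the equivalent (a sketch of the same idea, not taken from the thread) is a plain export before starting the server:

```sh
# bash/zsh equivalent of the fish `set -gx` above: export the variable, then start
# Ollama from the same shell so the server process inherits it.
export OLLAMA_ORIGINS='app://obsidian.md*'
ollama serve
```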
-
Can someone help me? I'm on macOS Sonoma using iTerm2, and none of the answers above seem to work. I still get the LangChain fetch error.
-
I get the same error: Windows 10, PowerShell in Windows Terminal, and the Ollama server itself seems to be working.
-
This worked for me on Linux with systemd: in ollama.service, set OLLAMA_ORIGINS to * instead of app://.
If you're running it as a service and want to run it manually with ollama serve, stop the service first. Anyone know why app:// is recommended? Is it a flatpak thing or a Mac thing? Anyway, try * on its own.
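For reference, one way to make that change when Ollama runs under systemd (a sketch based on this comment; the unit name and whether you need sudo may differ on your system):

```sh
# Open an override file for the service and add the environment setting under [Service]:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl edit ollama.service

# Reload unit files and restart the service so the new origins value takes effect.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```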
-
Describe the bug
When I try to chat, or use the QA chat / indexing, I keep getting a LangChain fetch error.
My setup
The error only shows in the Obsidian UI, preventing any text generation from happening.