Using fastmcp with ollama: qwen, mistral, llama, .... #965
Unanswered
jordimassaguerpla asked this question in Q&A

Hi!

We are implementing an MCP server with FastMCP and exposing it to Open WebUI through the mcpo proxy. Everything works well with GPT or Gemini. However, when we connect it to Qwen or Mistral running under Ollama, it does not work: the model never calls the tools to produce its answers. I see there are integration docs for Gemini, Claude, and GPT, but none for Qwen, Llama, or Mistral: https://github.com/jlowin/fastmcp/tree/main/docs/integrations

Is running models with Ollama not supported by FastMCP?
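
For context, a minimal sketch of the kind of FastMCP server being described looks like this; the `add` tool is purely hypothetical, and mcpo would wrap a server like this to expose it to Open WebUI:

```python
# Minimal FastMCP server sketch; the `add` tool is illustrative only.
from fastmcp import FastMCP

mcp = FastMCP("demo")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```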

Replies: 1 comment

Have you tried https://github.com/jonigl/mcp-client-for-ollama?
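
For anyone landing here later: what an Ollama-side MCP client like that one does can be sketched in a few lines. This is a rough sketch, not the linked project's code; it assumes FastMCP 2.x's `Client` API and Ollama's tool-calling `/api/chat` endpoint, with `server.py` and `qwen2.5` as placeholders:

```python
import asyncio
import json
import urllib.request

from fastmcp import Client


def ollama_chat(payload: dict) -> dict:
    """POST a chat request to a local Ollama instance on the default port."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


async def main() -> None:
    # Connect to the FastMCP server over stdio; "server.py" is a placeholder.
    async with Client("server.py") as mcp:
        # Translate MCP tool metadata into Ollama's OpenAI-style tool schema.
        tools = [
            {
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description or "",
                    "parameters": t.inputSchema,
                },
            }
            for t in await mcp.list_tools()
        ]

        reply = ollama_chat({
            "model": "qwen2.5",  # placeholder; must be a tool-capable model
            "messages": [{"role": "user", "content": "What is 2 + 3?"}],
            "tools": tools,
            "stream": False,
        })

        # If the model emitted tool calls, forward each one to the MCP server.
        for call in reply["message"].get("tool_calls", []):
            fn = call["function"]
            result = await mcp.call_tool(fn["name"], fn["arguments"])
            print(fn["name"], "->", result)


asyncio.run(main())
```

If `tool_calls` comes back empty, the model (or the chat template it was pulled with) does not support tool calling, which would produce exactly the behavior described in the question.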