feat: Allow the AI to decline tool use, and to call tools multiple times until it is satisfied with its answer #45

@jack5github

Description

This has the potential to be a really intuitive way of utilising MCP servers with Ollama, but it is missing the natural flow of knowledge gathering and execution that most LLM applications strive for. It would be great if settings were added that let the model make more decisions in between responses, such as declining to call a tool or calling tools repeatedly, so it can return a more complete answer.
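To illustrate the flow being requested, here is a minimal sketch of such a loop. The function names (`run_with_tools`, `call_model`, `call_tool`), the message shapes, and the round cap are all hypothetical stand-ins, not the actual Ollama or MCP APIs: the point is only that the model can decline a tool call (by returning content) or keep requesting tools until it is satisfied.

```python
# Hypothetical sketch of the requested behaviour: the model may call tools
# zero or more times, and only produces a final answer once satisfied.
# `call_model` and `call_tool` are placeholders for the real Ollama / MCP calls.
MAX_TOOL_ROUNDS = 5  # safety cap so the loop cannot run forever


def run_with_tools(call_model, call_tool, prompt):
    """call_model(messages) -> dict with either a 'tool_call' key or 'content'.
    call_tool(name, args) -> str, fed back into the conversation."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(MAX_TOOL_ROUNDS):
        reply = call_model(messages)
        tool_call = reply.get("tool_call")
        if tool_call is None:
            # Model declined to use a tool (or is done): return its answer.
            return reply["content"]
        # Model asked for a tool: run it and append the result for the next round.
        result = call_tool(tool_call["name"], tool_call["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped after reaching the tool-call limit."
```

A settings-driven version of this loop (e.g. a configurable round cap, or a toggle for whether tools are offered at all) would cover both halves of the request: denying tool use and repeating it.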
