Add remote model support #7

@Papierkorb

Description

Hello, cool project!

I'm running my LLM on a server using the Oobabooga Web UI, which exposes an OpenAI-compatible API so that other services can make use of it. This allows me to use a comparably big LLM for everyday tasks.

I think this project should absolutely keep the ability to run a (small?) local LLM, but it would greatly benefit from the ability to also use a remote one. As the OpenAI API seems to be the current de-facto standard, I suggest supporting it and documenting how to point the client at a custom host URL.
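A minimal sketch of what the remote path could look like, using only the Python standard library. The endpoint path and payload shape follow the OpenAI chat completions API; the function names, the default `"none"` API key, and the example URL are illustrative assumptions, not anything this project already has:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-style chat completion.

    base_url is the user-configured host, e.g. an Oobabooga server
    (hypothetical example: http://localhost:5000).
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload).encode("utf-8")


def send_chat_request(base_url: str, model: str, prompt: str,
                      api_key: str = "none") -> str:
    """Send the request and return the assistant's reply text."""
    url, body = build_chat_request(base_url, model, prompt)
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Many local OpenAI-compatible servers ignore the key,
            # but the header keeps hosted endpoints working too.
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Because the request shape is the same for any OpenAI-compatible backend, switching between a local server and a hosted one would only be a matter of changing `base_url`.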

Additionally, with the number of configuration options increasing, it may be a good idea to store them in a configuration file instead.

Cheers!

Metadata

Labels: enhancement (New feature or request)
