How to set up LibreChat to use llamafile? #2459
Closed
newptcai started this conversation in Help Wanted
Replies: 2 comments
-
You can add it as a custom endpoint in the `librechat.yaml` file. (You might have to add or drop parameters depending on the model, but it worked fine as-is when I tested it.)

```yaml
version: 1.0.5
cache: true
interface:
  privacyPolicy:
    externalUrl: 'https://librechat.ai/privacy-policy'
    openNewTab: true
  termsOfService:
    externalUrl: 'https://librechat.ai/tos'
    openNewTab: true
registration:
  socialLogins: ['github', 'google', 'discord', 'openid', 'facebook']
endpoints:
  custom:
    # llamafile Example
    - name: 'llamafile'
      apiKey: 'no-key'
      baseURL: 'http://host.docker.internal:8080/v1/'
      models:
        default: ['LLaMA_CPP']
        fetch: false
      titleConvo: true
      titleModel: 'LLaMA_CPP'
      modelDisplayLabel: 'llamafile'
```
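Before pointing LibreChat at it, it can help to confirm that the llamafile server actually answers OpenAI-style requests. Below is a minimal sketch using the OpenAI Python SDK, assuming llamafile is listening on localhost:8080 and using the `LLaMA_CPP` model name from the config above; the `no-key` value is a placeholder, since llamafile does not validate the API key but the SDK requires one.

```python
# Sanity check for a local llamafile server (assumes `pip install openai`
# and a llamafile already serving on localhost:8080).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # from the host; inside Docker this is host.docker.internal
    api_key="no-key",  # llamafile ignores the key, but the SDK insists on a value
)

resp = client.chat.completions.create(
    model="LLaMA_CPP",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```

If this prints a completion, the same `baseURL`, `apiKey`, and model name should work in `librechat.yaml`.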
-
It works for me! My librechat.yaml is the following.
I am not sure if I need to set anything else, but everything seems to work well.
-
llamafile is a project that lets you run LLMs locally using just a single file; it is probably the most convenient way to run an LLM locally. Since it provides an OpenAI-compatible API, I wonder whether it can be used with LibreChat, as the web interface that ships with llamafile is quite rudimentary. I believe llamafile is based on llama.cpp.
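Because llamafile embeds llama.cpp's OpenAI-compatible server, that compatibility can be checked with a plain HTTP call before involving LibreChat at all. A minimal sketch, assuming the default port 8080 and the `LLaMA_CPP` model name used in the reply above:

```python
# Probe llamafile's OpenAI-compatible endpoint directly
# (assumes `pip install requests` and a server on localhost:8080).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer no-key"},  # key is not checked by llamafile
    json={
        "model": "LLaMA_CPP",
        "messages": [{"role": "user", "content": "ping"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```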