Replies: 1 comment 1 reply
How are you serving HTTPS? This is likely an NGINX config issue: #3061 (comment)
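Since the failure threshold is exactly 1 MB, this matches NGINX's default `client_max_body_size` of 1 MiB. If the chart is exposed through the ingress-nginx controller, one way to raise the limit is an ingress annotation in the Helm values — a sketch, assuming the chart's ingress template passes annotations through (verify against the chart; the size value is illustrative):

```yaml
# Hypothetical Helm values override for the librechat chart.
# Assumes the ingress-nginx controller; the annotation raises the
# request body limit from NGINX's 1 MiB default.
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "25m"  # allow uploads up to 25 MB
```

If a separate NGINX sits in front of the cluster, the equivalent fix is `client_max_body_size 25m;` in that server's config.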
Hi Community!
Hope you are doing well!
I have deployed the LibreChat Helm chart (https://github.com/bat-bs/helm-charts/tree/main/charts/librechat) to Kubernetes, and everything runs fine.
I'm using the OpenAI model, but when I upload a file larger than 1 MB, I get an error.

When I test with a file smaller than 1 MB, everything works.

Here is my config:
YAML config:

```yaml
# For adding a custom config yaml-file you can set the contents in this var
configYamlContent: |
  includedTools: []
  version: 1.1.5
  endpoints:
    custom:
      - name: 'TestAI'
        # For apiKey and baseURL, you can use environment variables that you define.
        # Known issue: you should not use OPENROUTER_API_KEY as it will then
        # override the openAI endpoint to use OpenRouter as well.
        apiKey: 'xx-xx!'
        baseURL: 'xxx:9099'
        models:
          default: ['custom_rag_1', 'openai_pipeline']
          fetch: true
        titleConvo: true
        titleModel: 'Custom RAG'
        # Recommended: drop the stop parameter from the request, as OpenRouter
        # models use a variety of stop tokens.
        dropParams: ['stop']
        modelDisplayLabel: 'Kindred AI'
  fileConfig:
    serverFileSizeLimit: 20 # Global server file size limit in MB
    avatarSizeLimit: 2 # Limit for user avatar image size in MB
```
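For context, the `fileConfig` docs also describe per-endpoint limits alongside the global `serverFileSizeLimit` — a sketch based on the documented object structure (the endpoint name and values here are illustrative, not from my deployment):

```yaml
# Hypothetical fileConfig fragment with per-endpoint limits (values in MB).
fileConfig:
  serverFileSizeLimit: 20     # global cap per file
  avatarSizeLimit: 2
  endpoints:
    default:
      fileSizeLimit: 20       # max size of a single file
      totalSizeLimit: 50      # max combined size per request
```

Note that these limits are enforced by the app itself; a reverse proxy or ingress in front of it can still reject the request earlier.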
I also tried the settings from https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/file_config, but that did not solve my problem.
Thank you!