[Bug]: [AskController] Error handling request fetch failed #7483
Unanswered · Leon-Sander asked this question in Troubleshooting
What happened?
I have a model deployed via vLLM, with the context size set to 20k.
When I write a larger amount of text in the LibreChat frontend (5k tokens, for example), I get the error below.
I also don't see the request reaching my vLLM container.
With less text there is no problem.
Previous discussions about this problem focused on a proxy or reverse proxy, which I don't think applies here.
In the librechat.yaml:
I tried setting fetch to false in the hope that it would not trigger the fetch, but it still seems to.
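The original snippet was not preserved in the post, but a custom-endpoint section with `models.fetch` disabled typically looks roughly like this (the endpoint name, baseURL, and model name below are placeholders, not the poster's actual values):

```yaml
# Sketch only: name, baseURL, and model are placeholders.
endpoints:
  custom:
    - name: "vllm"
      apiKey: "dummy"
      baseURL: "http://vllm:8000/v1"
      models:
        default: ["my-model"]
        fetch: false  # set to false to avoid fetching the model list
```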
And in the .env, for Ollama as the backend for an embedding model:
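The actual .env lines were also lost from the post; a typical RAG API configuration pointing embeddings at an Ollama backend looks something like this (host and model name are placeholders, not the poster's actual values):

```env
# Sketch only: host and model name are placeholders.
EMBEDDINGS_PROVIDER=ollama
EMBEDDINGS_MODEL=nomic-embed-text
OLLAMA_BASE_URL=http://ollama:11434
```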
Version Information
```
docker images | grep librechat
ghcr.io/danny-avila/librechat-dev               latest  07698c5407cb  2 weeks ago  950MB
ghcr.io/danny-avila/librechat-rag-api-dev       latest  96061ddf7d66  6 weeks ago  7.81GB
ghcr.io/danny-avila/librechat-rag-api-dev-lite  latest  817faebddae9  6 weeks ago  1.31GB
```
Steps to Reproduce
What browsers are you seeing the problem on?
No response
Relevant log output
```
error: [AskController] Error handling request fetch failed
error: [handleAbortError] AI response error; aborting request: fetch failed
```
Screenshots
No response
Code of Conduct