ChatLocalAI (GPT4ALL) Request failed with status code 500 #528
-
Hi @hannahnguyen14, is your Flowise instance cloud-hosted or run locally?
-
Does step 3 mean that I need to install Flowise using Git and/or the Docker Compose method? Like the OP, I just installed using the quick start, so I only have the LocalAI instance in Docker, but not the Flowise instance. According to this discussion, it seems that to run LocalAI open models I need to go down this route. Additionally, if I do need the Docker Compose method, does that come after I perform the Git operations, so that I can then add the .env file?
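For reference, a minimal sketch of what the Git-then-Docker-Compose route looks like, assuming the Flowise repo still ships a docker folder with a .env.example (paths and port values are illustrative):

```sh
# Clone the Flowise repo and switch to its docker folder
git clone https://github.com/FlowiseAI/Flowise.git
cd Flowise/docker

# Create the .env file from the bundled example, then adjust ports and credentials as needed
cp .env.example .env

# Start Flowise with Docker Compose
docker compose up -d
```

Note that when both Flowise and LocalAI run as containers, localhost inside the Flowise container is not the host machine, so the ChatLocalAI Base Path typically needs an address like http://host.docker.internal:8080/v1 instead.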
-
Thanks for the detailed explanation. I tested the API, and the models request returns info with StatusCode 200. When I build a simple chatbot in Flowise, though, I get the same 500 error, and if I remove the http from the Base Path URL, I get another error.
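Presumably the successful check here is the models endpoint from the tutorial; a minimal sketch, assuming LocalAI on its default port:

```sh
# Ask LocalAI for its model list; a 200 with a JSON body confirms the server is reachable
curl http://localhost:8080/v1/models

# A successful response looks roughly like this (model names will vary):
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}
```

As for the Base Path, it expects a full URL including the scheme (e.g. http://localhost:8080/v1), so dropping the http:// leaves an invalid URL rather than a reachable endpoint.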
-
Ok, thank you. Two things:
-
I am trying to use ChatLocalAI as the chat (language) model, but it keeps returning the error "Request failed with status code 500".
I don't know what the issue is here, because I have followed the ChatLocalAI setup tutorial (https://docs.flowiseai.com/chat-models/chatlocalai). I got as far as the Test API step before setting up the chat flows, running the command `curl http://localhost:8080/v1/models`, and it returns successfully, as in the guide. I don't know what to expect when opening http://localhost:8080 in a Google Chrome tab; I have tried it and it returns `{"error":{"code":404,"message":"Cannot GET /","type":""}}`.
Moreover, I have tried ChatLocalAI with 3 different chat flows, like the ones below, and they return the same error. The chat flow with OpenAI embeddings and a Pinecone vector database works fine with other chat models, such as OpenAI and Azure, which I have also tried.
Also, I have tried changing the Base Path to http://localhost:8080/ or adding v1/models, and those return "failed with status code 404", so those paths seem wrong. Changing it to the base path http://localhost:8080 returns the same 500 error.
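One way to narrow down where the 500 originates is to send a completion request straight to LocalAI, bypassing Flowise entirely; a minimal sketch, assuming the default port and the GPT4All model name from the tutorial (substitute whatever `/v1/models` actually lists):

```sh
# Call LocalAI's OpenAI-compatible completion endpoint directly
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ggml-gpt4all-j",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7
  }'
```

If this call also returns a 500, the problem lies in LocalAI or the model file rather than in the Flowise ChatLocalAI node.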
I have tried my best. Please help me out!
