[Enhancement]: n8n Agent Support #7116
Replies: 3 comments 7 replies
-
Hi, I tried this in multiple ways. I can get a valid response that is empty, but I had to do horrible hacks in baseclient.js to get the message through, which was not sustainable for me. I ended up making a tool instead. It might not fit your use case, but you can make an agent with instructions that uses the tool like an MCP.
-
Thanks for your response @oxeone. I got it to work with the custom endpoints:

- Had to use `addParams` to set the session id and `stream` to `false` in the `.env`.
- In the webhook, set authentication to "Header Auth" with credentials of `Authorization: Bearer` (the `N8N_CHAT_KEY` value from `.env`).
- In the primary AI agent, set the user message as:
- Had to create an n8n "Code" node (run once for all items) before the webhook response.

And it works. My initial goal is complete; I will now move on to creating the secondary implementations.
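For reference, a minimal sketch of the kind of transformation such a Code node might perform, wrapping the agent's reply into the OpenAI `chat.completion` shape before the webhook response. The `output` field name and the `model`/`id` values here are assumptions, not the exact node from this setup:

```javascript
// Sketch: wrap an AI agent's plain-text reply into an OpenAI-style
// chat.completion object, as a "run once for all items" Code node might.
function toChatCompletion(agentReply) {
  return {
    id: 'chatcmpl-n8n', // placeholder id
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000),
    model: 'n8n-agent', // placeholder model name
    choices: [
      {
        index: 0,
        message: { role: 'assistant', content: agentReply },
        finish_reason: 'stop',
      },
    ],
  };
}

// Inside an n8n Code node you would return the wrapped items, e.g.:
// return items.map((item) => ({ json: toChatCompletion(item.json.output) }));
```

Adjust the input field (`item.json.output` above) to whatever your agent node actually emits.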
-
Hi @dpnmw! I wanted to let you know that I tried the exact configuration you described, including the `directEndpoint: false`, `forcePrompt: true`, and `stream: false` settings. Here's my current librechat.yaml config:
I have `${N8N_CHAT_KEY}` properly set in my `.env` file. The model does show up in the LibreChat UI, but when I try sending a message, I get the error "Something went wrong. Here's the specific error message we encountered: An error occurred while processing your request. Please contact the Admin." I also tested the same webhook using curl, and it works fine:
This returns a valid JSON response in the expected `chat.completion` format. However, LibreChat never seems to actually send the request: the webhook workflow in n8n is never triggered, and no incoming request is logged. Is there anything else I could check? Maybe a way to enable more verbose logging from LibreChat? Or could something internal be silently blocking the request?
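As a side note on verbose logging, LibreChat's `.env.example` includes debug flags along these lines (the exact names are worth verifying against your own `.env.example` before relying on them):

```bash
# Enable verbose debug logs and mirror them to the console
DEBUG_LOGGING=true
DEBUG_CONSOLE=true
```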
-
What features would you like to see added?
Hi, I'm looking into using my n8n instance to create custom responses for users of the chat. I don't need the API models that are listed, and I'm not looking to train my own model; I just want to provide a support chat for customers.
The issue I've seen is that my n8n webhooks are not OpenAI-compatible APIs.
I want to send raw messages and get raw answers, not OpenAI chat completions.
Thus, using the Custom Endpoint system as-is won't work cleanly for my use case.
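To illustrate the mismatch, here is a sketch of the two payload shapes. The field names on the webhook side are hypothetical, just for contrast:

```javascript
// Hypothetical raw payload an n8n webhook might accept:
const rawPayload = { conversationId: 'abc123', text: 'Where is my order?' };

// What an OpenAI-compatible /chat/completions endpoint expects instead:
const openAiPayload = {
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Where is my order?' }],
};

// A custom endpoint speaks the second shape on both request and response,
// so a plain webhook would need an adapter layer in each direction.
```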
I've modified `messages.js`:

```js
// Map of department-specific n8n webhook URLs, configured via .env
const webhookMap = {
  sales: process.env.N8N_WEBHOOK_SALES,
  support: process.env.N8N_WEBHOOK_SUPPORT,
  hr: process.env.N8N_WEBHOOK_HR,
};
const defaultWebhook = process.env.N8N_WEBHOOK_SUPPORT;

router.post('/:conversationId', validateMessageReq, async (req, res) => {
  try {
    const { conversationId } = req.params;
    const { text, selectedWebhook = 'support' } = req.body;

    // Forward the raw message to the selected n8n webhook
    const webhookUrl = webhookMap[selectedWebhook] || defaultWebhook;
    const response = await fetch(webhookUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ conversationId, text }),
    });
    const data = await response.json();
    res.json(data);
  } catch (error) {
    console.error('Error sending to webhook:', error);
    res.status(500).json({ error: 'Failed to process webhook message.' });
  }
});
```

and added the following to my `.env`:

```bash
# ============================== N8N Webhook URLs ==============================
N8N_WEBHOOK_SALES=https://n8n.tld.com/webhook/sales
N8N_WEBHOOK_SUPPORT=https://n8n.tld.com/webhook/support
N8N_WEBHOOK_HR=https://n8n.tld.com/webhook/hr
```
This doesn't seem to work, so I'd like to know where I went wrong. Is there another way I could achieve my goal without editing `messages.js`?
More details
I'm using LibreChat in Docker, and I ran `docker-compose up --build` after making my edits.
Which components are impacted by your request?
General
Pictures
No response
Code of Conduct