Add Lima-City MCP server for n8n and ChatGPT #12
Conversation
Summary of Changes

Hello @stife, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a new Node.js service designed to act as a Model Context Protocol (MCP) bridge. Its purpose is to connect n8n workflows with ChatGPT and other OpenAI-compatible models, streamlining automation tasks. The service provides unified endpoints for orchestrating AI interactions and workflow execution, making it particularly suitable for deployment on Lima-City VPS environments.

Highlights
Codex Review: Here are some suggestions.
Reply with @codex fix comments to fix any unresolved comments.
```js
if (!targetUrl && (options.webhookSlug || options.workflowId)) {
  targetUrl = this.buildWebhookUrl(options.webhookSlug || options.workflowId);
}

if (!targetUrl && options.workflowId) {
  const endpoint = this.buildRunEndpoint(options.workflowId);
```
[P1] Avoid treating workflowId as a webhook slug
The first branch in triggerWorkflow builds a webhook URL when either webhookSlug or workflowId is provided. If both N8N_WEBHOOK_BASE_URL and workflowId are set, this assigns a URL like /webhook/<id> and skips the subsequent API branch that should call /rest/workflows/:id/run. As a result, requests that only supply workflowId attempt to hit a non‑existent webhook instead of running the workflow via the n8n REST API. Restrict the webhook branch to webhookSlug so that workflowId falls through to the API path.
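The fix the comment describes can be sketched as a small routing helper. This is illustrative only: `resolveTarget` is a hypothetical stand-in for the PR's `triggerWorkflow` branching, and `buildWebhookUrl`/`buildRunEndpoint` are simplified versions of the PR's helpers.

```javascript
// Sketch of the corrected branching: only an explicit webhookSlug selects the
// webhook path, so a bare workflowId falls through to the REST API path.
// These two helpers are simplified assumptions, not the PR's actual code.
const buildWebhookUrl = (slug) => `https://n8n.example.com/webhook/${slug}`;
const buildRunEndpoint = (id) => `/rest/workflows/${id}/run`;

function resolveTarget(options) {
  if (options.webhookSlug) {
    // Webhook trigger: only when a slug is explicitly supplied.
    return { kind: "webhook", url: buildWebhookUrl(options.webhookSlug) };
  }
  if (options.workflowId) {
    // API trigger: run the workflow via the n8n REST API.
    return { kind: "api", url: buildRunEndpoint(options.workflowId) };
  }
  throw new Error("Either webhookSlug or workflowId is required");
}
```

With this shape, `{ workflowId: "42" }` resolves to the API endpoint rather than a non-existent `/webhook/42` URL.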
Code Review
This pull request introduces a well-structured MCP bridge server for connecting n8n and ChatGPT. The code is generally clean, with good separation of concerns into different client modules for external services, and includes helpful features like environment-based configuration and logging. However, I've identified a critical issue in the n8n client that would prevent API-based workflow execution from working correctly. Additionally, there are several high and medium-severity issues related to production readiness, such as the lack of graceful shutdown, process stability on unhandled rejections, and a permissive CORS policy. Addressing these points will significantly improve the robustness and security of the new service.
```js
const response = await this.apiHttp.post(targetUrl, {
  payload,
  query,
});
```
When triggering a workflow via the n8n REST API (viaApi: true), the request body is being sent as { payload, query }. The n8n API expects the workflow's input payload directly as the request body, and any query parameters should be sent as URL search parameters. This will cause the workflow execution to fail or not receive the correct data.
```js
const response = await this.apiHttp.post(targetUrl, payload, {
  params: query,
});
```

```js
app.listen(port, host, () => {
  logger.info("MCP server is running", {
    host,
    port,
  });
});

process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
});

process.on("SIGTERM", () => {
  logger.info("Received SIGTERM, shutting down gracefully");
  process.exit(0);
});

process.on("SIGINT", () => {
  logger.info("Received SIGINT, shutting down gracefully");
  process.exit(0);
});
```
The SIGTERM and SIGINT handlers call process.exit(0) directly, which will terminate the process immediately and could interrupt in-flight requests. A graceful shutdown mechanism should be implemented to allow existing requests to complete before the server exits.
```js
const server = app.listen(port, host, () => {
  logger.info("MCP server is running", {
    host,
    port,
  });
});

process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
});

const shutdown = (signal) => {
  logger.info(`Received ${signal}, shutting down gracefully`);
  server.close(() => {
    logger.info("All connections closed, server shutting down");
    process.exit(0);
  });
};

process.on("SIGTERM", () => shutdown("SIGTERM"));
process.on("SIGINT", () => shutdown("SIGINT"));
```

```json
"license": "MIT"
},
"node_modules/axios": {
  "version": "1.12.2",
```
The version for axios is specified as 1.12.2, which appears to be a typo as this version does not exist. The integrity hash matches axios@1.7.2. This should be corrected to avoid confusion and potential issues with tooling. You can likely fix this by removing node_modules and package-lock.json, and then running npm install again to regenerate the lockfile with the correct version string.
```js
const host = process.env.HOST || "0.0.0.0";
const jsonBodyLimit = process.env.JSON_BODY_LIMIT || "1mb";

app.use(cors());
```
The CORS middleware is configured to allow requests from any origin. For a production service, this is a security risk. It's a best practice to restrict this to a whitelist of known origins, which can be managed via an environment variable.
```js
app.use(cors({ origin: process.env.CORS_ORIGIN }));
```
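For more than one allowed origin, a whitelist can be parsed from the environment and enforced with the `cors` middleware's `origin` callback. This is a sketch under assumptions: the `CORS_ORIGINS` variable name and its comma-separated convention are illustrative, not part of the PR.

```javascript
// Sketch: parse a comma-separated whitelist from CORS_ORIGINS (hypothetical
// variable name) and reject any origin not on the list.
const allowed = (process.env.CORS_ORIGINS || "")
  .split(",")
  .map((origin) => origin.trim())
  .filter(Boolean);

const corsOptions = {
  origin(origin, callback) {
    // Requests without an Origin header (curl, server-to-server) pass through;
    // browser requests must come from a whitelisted origin.
    if (!origin || allowed.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error(`Origin ${origin} not allowed by CORS`));
    }
  },
};
// app.use(cors(corsOptions));
```

With `CORS_ORIGINS=https://app.example.com,https://admin.example.com`, only those two origins would be accepted.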
```js
process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
});
```
The unhandledRejection handler logs the error but allows the process to continue. After an unhandled promise rejection, the application could be in an inconsistent state. It is generally safer to exit the process and let a process manager (like systemd or Docker) restart it cleanly.
```js
process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
  process.exit(1);
});
```
Summary
Testing
https://chatgpt.com/codex/tasks/task_e_68d322ef8118832585ed60adcc65dc76