
Add Lima-City MCP server for n8n and ChatGPT#12

Merged
stife merged 1 commit into main from codex/erstelle-mcp-server-fur-lima-city-vps on Sep 23, 2025

Conversation

@stife (Owner) commented Sep 23, 2025

Summary

  • add a dedicated Node.js MCP bridge service for Lima-City VPS that connects n8n workflows with ChatGPT endpoints
  • implement helper clients, logging utilities, and environment defaults to trigger n8n via webhooks or REST and call OpenAI-compatible models
  • document deployment and configuration, and reference the new service from the project README

Testing

  • timeout 2 node src/index.js

https://chatgpt.com/codex/tasks/task_e_68d322ef8118832585ed60adcc65dc76

@stife stife merged commit e5b2814 into main Sep 23, 2025
@gemini-code-assist

Summary of Changes

Hello @stife, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new Node.js service designed to act as a Model Context Protocol (MCP) bridge. Its purpose is to seamlessly connect n8n workflows with ChatGPT and other OpenAI-compatible models, streamlining automation tasks. The service provides unified endpoints for orchestrating AI interactions and workflow execution, making it particularly suitable for deployment on Lima-City VPS environments.

Highlights

  • New Service Introduction: A dedicated Node.js Model Context Protocol (MCP) bridge service has been added to facilitate integration between n8n workflows and ChatGPT (OpenAI-compatible models).
  • Core Functionality: The service includes helper clients for n8n and OpenAI, logging utilities, and environment variable defaults to enable triggering n8n via webhooks or REST and calling OpenAI-compatible models.
  • Documentation: Comprehensive documentation for deployment, configuration, and usage of the new service is provided, including examples for systemd and Nginx on Lima-City VPS, and is referenced from the main project README.
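The PR's docs contain the actual systemd and Nginx examples. For orientation only, here is a hedged sketch of what such a systemd unit might look like; the unit name, working directory (/opt/mcp-bridge), and environment file path are assumptions, not taken from the PR (only the src/index.js entry point appears in the Testing notes above):

```ini
# Hypothetical systemd unit for the MCP bridge service on a Lima-City VPS.
# Paths and the unit name are illustrative assumptions.
[Unit]
Description=MCP bridge for n8n and ChatGPT
After=network.target

[Service]
WorkingDirectory=/opt/mcp-bridge
ExecStart=/usr/bin/node src/index.js
EnvironmentFile=/opt/mcp-bridge/.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```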


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


Codex Review: Here are some suggestions.

Reply with @codex fix comments to fix any unresolved comments.

About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request for review, mark a draft as ready, or comment "@codex review". If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex fix this CI failure" or "@codex address that feedback".

Comment on lines +116 to +121
if (!targetUrl && (options.webhookSlug || options.workflowId)) {
  targetUrl = this.buildWebhookUrl(options.webhookSlug || options.workflowId);
}

if (!targetUrl && options.workflowId) {
  const endpoint = this.buildRunEndpoint(options.workflowId);


[P1] Avoid treating workflowId as a webhook slug

The first branch in triggerWorkflow builds a webhook URL when either webhookSlug or workflowId is provided. If both N8N_WEBHOOK_BASE_URL and workflowId are set, this assigns a URL like /webhook/<id> and skips the subsequent API branch that should call /rest/workflows/:id/run. As a result, requests that only supply workflowId attempt to hit a non‑existent webhook instead of running the workflow via the n8n REST API. Restrict the webhook branch to webhookSlug so that workflowId falls through to the API path.
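The restriction the comment asks for can be sketched as a pure resolver. This is a hypothetical illustration, not the PR's actual code; resolveTarget and its two URL-builder parameters stand in for the methods on the n8n client:

```javascript
// Hypothetical sketch of the suggested branching: only webhookSlug selects
// the webhook path, so a request carrying just workflowId falls through to
// the REST API branch instead of hitting a non-existent /webhook/<id>.
function resolveTarget(options, buildWebhookUrl, buildRunEndpoint) {
  if (options.webhookSlug) {
    return { url: buildWebhookUrl(options.webhookSlug), viaApi: false };
  }
  if (options.workflowId) {
    return { url: buildRunEndpoint(options.workflowId), viaApi: true };
  }
  return null; // caller should reject the request with a validation error
}
```

With this shape, supplying only workflowId always reaches the REST API path, and the webhook path is used exclusively when a slug is given.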



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a well-structured MCP bridge server for connecting n8n and ChatGPT. The code is generally clean, with good separation of concerns into different client modules for external services, and includes helpful features like environment-based configuration and logging. However, I've identified a critical issue in the n8n client that would prevent API-based workflow execution from working correctly. Additionally, there are several high and medium-severity issues related to production readiness, such as the lack of graceful shutdown, process stability on unhandled rejections, and a permissive CORS policy. Addressing these points will significantly improve the robustness and security of the new service.

Comment on lines +145 to +148
const response = await this.apiHttp.post(targetUrl, {
  payload,
  query,
});


critical

When triggering a workflow via the n8n REST API (viaApi: true), the request body is being sent as { payload, query }. The n8n API expects the workflow's input payload directly as the request body, and any query parameters should be sent as URL search parameters. This will cause the workflow execution to fail or not receive the correct data.

        const response = await this.apiHttp.post(targetUrl, payload, {
          params: query,
        });

Comment on lines +186 to +207
app.listen(port, host, () => {
  logger.info("MCP server is running", {
    host,
    port,
  });
});

process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
});

process.on("SIGTERM", () => {
  logger.info("Received SIGTERM, shutting down gracefully");
  process.exit(0);
});

process.on("SIGINT", () => {
  logger.info("Received SIGINT, shutting down gracefully");
  process.exit(0);
});


high

The SIGTERM and SIGINT handlers call process.exit(0) directly, which will terminate the process immediately and could interrupt in-flight requests. A graceful shutdown mechanism should be implemented to allow existing requests to complete before the server exits.

const server = app.listen(port, host, () => {
  logger.info("MCP server is running", {
    host,
    port,
  });
});

process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
});

const shutdown = (signal) => {
  logger.info(`Received ${signal}, shutting down gracefully`);
  server.close(() => {
    logger.info("All connections closed, server shutting down");
    process.exit(0);
  });
};

process.on("SIGTERM", () => shutdown("SIGTERM"));
process.on("SIGINT", () => shutdown("SIGINT"));

"license": "MIT"
},
"node_modules/axios": {
"version": "1.12.2",


medium

The version for axios is specified as 1.12.2, which appears to be a typo as this version does not exist. The integrity hash matches axios@1.7.2. This should be corrected to avoid confusion and potential issues with tooling. You can likely fix this by removing node_modules and package-lock.json, and then running npm install again to regenerate the lockfile with the correct version string.

const host = process.env.HOST || "0.0.0.0";
const jsonBodyLimit = process.env.JSON_BODY_LIMIT || "1mb";

app.use(cors());


medium

The CORS middleware is configured to allow requests from any origin. For a production service, this is a security risk. It's a best practice to restrict this to a whitelist of known origins, which can be managed via an environment variable.

Suggested change:
- app.use(cors());
+ app.use(cors({ origin: process.env.CORS_ORIGIN }));
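A stricter variant would accept a comma-separated whitelist rather than a single origin. This is a hedged sketch, not code from the PR; the CORS_ORIGIN variable name follows the suggestion above, and isAllowedOrigin is a hypothetical helper:

```javascript
// Hypothetical sketch: parse a comma-separated CORS_ORIGIN whitelist and
// expose a pure check that could back the cors() origin callback.
const allowedOrigins = (process.env.CORS_ORIGIN || "")
  .split(",")
  .map((origin) => origin.trim())
  .filter(Boolean);

// Requests without an Origin header (same-origin, curl) pass through;
// cross-origin browser requests must match the whitelist exactly.
const isAllowedOrigin = (origin) => !origin || allowedOrigins.includes(origin);

// Wiring into the middleware would look roughly like:
// app.use(cors({ origin: (origin, cb) => cb(null, isAllowedOrigin(origin)) }));
```

Keeping the check as a pure function makes the policy easy to unit-test independently of Express.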

Comment on lines +193 to +197
process.on("unhandledRejection", (reason) => {
  logger.error("Unhandled promise rejection", {
    reason: reason instanceof Error ? reason.message : reason,
  });
});


medium

The unhandledRejection handler logs the error but allows the process to continue. After an unhandled promise rejection, the application could be in an inconsistent state. It is generally safer to exit the process and let a process manager (like systemd or Docker) restart it cleanly.

Suggested change:
- process.on("unhandledRejection", (reason) => {
-   logger.error("Unhandled promise rejection", {
-     reason: reason instanceof Error ? reason.message : reason,
-   });
- });
+ process.on("unhandledRejection", (reason) => {
+   logger.error("Unhandled promise rejection", {
+     reason: reason instanceof Error ? reason.message : reason,
+   });
+   process.exit(1);
+ });

