feat(model-service): add OpenAI-compatible wrapper (+ pm2 + env example) and update ignores #254
base: main
Conversation
MCVelasquez45
left a comment
Subject:
Add minimal OpenAI-compatible model service with Express server
Details of Updates:
This pull request introduces a new directory, model-service, containing a lightweight Node.js/Express server that acts as a proxy to OpenAI’s chat/completions API. Key additions include:
- server.js: Defines two endpoints (`/health` and `/generate`). `/health` returns basic status and config info; `/generate` forwards chat requests to OpenAI, handling authentication and error responses gracefully.
- package.json: Declares service dependencies (`express`, `dotenv`, `node-fetch`), sets up scripts for local and PM2 deployment, and specifies the minimum Node.js version.
- ecosystem.config.js: Provides PM2 configuration for process management and optional environment overrides.
- Procfile: Adds Heroku compatibility for deployment.
- .gitignore: Ensures node modules, lock files, environment files, and system artifacts are not tracked.
- .env.example: Documents required environment variables for setup and testing.
Notable Points:
- The implementation is straightforward and readable, following best practices for Express apps.
- Error handling is robust, with clear messages for missing configuration or upstream API problems.
- Flexible design allows for easy environment configuration and model switching.
Approval Comment Example:
Great work on this addition! The new model-service is clean, well-structured, and makes integration with OpenAI’s API easy. The endpoints are documented and error handling is thorough. The deployment configs (.env, Procfile, PM2) are thoughtful, supporting local and cloud workflows. No issues found—approved!
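The forwarding behavior described above can be sketched as a pure helper that builds the upstream request. `buildUpstreamRequest` is an illustrative name, not a function in the actual server.js; the real code may structure this differently:

```javascript
// Hypothetical sketch of the request /generate forwards upstream, based on the
// review description. The helper name and shape are assumptions.
function buildUpstreamRequest(baseUrl, apiKey, model, messages) {
  return {
    url: `${baseUrl}/chat/completions`, // OpenAI-compatible endpoint path
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`, // authentication handled by the proxy
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```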
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
It looks like you're referencing the .gitignore file and mentioning the addition of the following lines:
model-service/.env
model-service/.env
Explanation in context:
These lines tell Git to ignore the .env file located in the model-service directory. The .env file typically contains environment variables, which often include sensitive information like API keys, database credentials, or other configuration details that should not be committed to source control.
Code Review Feedback & Suggestions:
- Duplication: You have the same line twice. You only need one entry for `model-service/.env`. Remove the duplicate to keep the `.gitignore` clean.
- Security: Good practice to ignore `.env` files. This helps prevent accidentally pushing sensitive data to the repository.
- Clarity: If you expect `.env` files in multiple subdirectories, consider using a pattern like `**/.env`. However, if you only want to ignore it in `model-service`, your current approach is correct.
- Team Communication: Remind your team that ignoring `.env` means everyone needs to create their own local version or use a sample file (like `.env.example`) to share the required variables.
Suggested Correction:
Just keep one entry:
model-service/.env
Summary:
Your intent is good for security and standard practice. Just remove the duplicate line for neatness. If you need further help with .env handling or best practices, let me know!
# OpenAI API base URL
OPENAI_BASE_URL=https://api.openai.com/v1

# Replace with your OpenAI API key
OPENAI_API_KEY=sk-REPLACE_ME

# Model name to use for inference
MODEL_NAME=gpt-4o-mini

# Port to run the service on
PORT=7071
The selected line node_modules/ in a .gitignore file tells Git to ignore the entire node_modules directory. This means that all files and subdirectories inside node_modules/ will not be tracked by Git or included in commits.
Context:
node_modules/ is the default directory where the Node.js package manager (npm or yarn) installs project dependencies. It can contain thousands of files and is typically very large. These files are generated from package.json and can always be recreated, so it's best practice not to include them in version control. This keeps the repository size manageable and avoids sharing unnecessary files.
This line defines a process type for deployment using a Procfile, which is commonly used with platforms like Heroku. The selection:
web: node server.js
specifies that the web process should be started by running the command node server.js. This means that when the application is deployed, the platform will launch the web server by executing server.js with Node.js.
The explanation above applies to a line typically found in a Procfile (e.g., `web: node server.js`). However, in the provided diff, the selected line is:

apps: [

This line is part of a Node.js configuration file, specifically for PM2, a process manager for Node.js applications. In this context, `apps: [` starts the definition of an array of application configurations to be managed by PM2. Each object in this array specifies details about how PM2 should run a particular Node.js app (like its name, script, environment variables, etc.). In this file, the array contains the configuration for running the reentry-model-service by executing server.js as a single instance in fork mode.

So, while both are related to starting Node.js services, this line is not for a Procfile or Heroku, but is part of a PM2 configuration specifying which applications PM2 should manage.
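The description above can be sketched as a PM2 ecosystem file. The app name, script, single instance, and fork mode come from the explanation; everything else is an assumption, and the actual ecosystem.config.js in the PR may differ:

```javascript
// Sketch of a PM2 ecosystem file consistent with the review's description.
// Only name, script, instances, and exec_mode are grounded in the text above.
module.exports = {
  apps: [
    {
      name: 'reentry-model-service', // app name PM2 displays in `pm2 list`
      script: 'server.js',           // entry point PM2 launches
      instances: 1,                  // single instance
      exec_mode: 'fork',             // fork mode rather than cluster mode
    },
  ],
};
```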
const express = require('express');
const fetch = require('node-fetch');
const dotenv = require('dotenv');
dotenv.config();
const app = express();
app.use(express.json());
const PORT = process.env.PORT || 7071;
const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');
const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';
const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';

Line-by-Line Breakdown
- Import Express: `const express = require('express');`
  Loads the Express framework, used for creating and managing the HTTP server and API endpoints.
- Import node-fetch: `const fetch = require('node-fetch');`
  Imports the node-fetch library, allowing the app to make HTTP requests, such as forwarding payloads to external APIs (here, OpenAI).
- Import dotenv: `const dotenv = require('dotenv');`
  Loads the dotenv library, which enables loading environment variables from a `.env` file into `process.env`.
- Load Environment Variables: `dotenv.config();`
  Executes dotenv's config method, which reads the `.env` file and populates `process.env` with its values.
- Initialize Express App: `const app = express();`
  Creates an Express application instance, which will be used to set up routes/endpoints.
- Enable JSON Parsing: `app.use(express.json());`
  Sets up middleware to automatically parse incoming request bodies that contain JSON, making it easier to handle API requests.
- Set PORT Variable: `const PORT = process.env.PORT || 7071;`
  Determines the port for the server to listen on, first trying the value from environment variables and defaulting to 7071 if not set.
- Configure OpenAI Base URL: `const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');`
  Sets the base URL for the OpenAI API, defaulting to https://api.openai.com/v1 if no environment variable is provided. The `.replace(/\/+$/, '')` call removes any trailing slashes for consistency.
- Get OpenAI API Key: `const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';`
  Retrieves the OpenAI API key from environment variables, defaulting to an empty string if not set.
- Set Model Name: `const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';`
  Sets the model name to use for API requests, defaulting to gpt-4o-mini if no environment variable is available.
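The constants above feed the `/health` endpoint described earlier, which returns basic status and config info. A minimal sketch of that payload; `healthPayload` is an illustrative helper name, not code from the actual server.js:

```javascript
// Hypothetical sketch of the /health response built from the constants above.
function healthPayload(model, base) {
  return { ok: true, model, base };
}

// In server.js this would back a route such as:
// app.get('/health', (req, res) => res.json(healthPayload(MODEL_NAME, OPENAI_BASE_URL)));
```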
Purpose and Best Practices
- Modularity & Configuration: The use of environment variables for sensitive information (API keys, URLs, port numbers) allows for secure and easy configuration across different environments (development, production, etc.).
- Middleware Usage: The Express JSON middleware ensures that any incoming API requests with JSON bodies are parsed automatically, reducing boilerplate code and potential errors.
- Sanitization: Stripping trailing slashes from the API base URL prevents issues with endpoint construction later in the code.
- Default Values: Providing sensible defaults (like port 7071 and the default model name) ensures the service can start with minimal configuration for development or testing.
Suggestions
- Validation: Consider adding explicit validation for required environment variables (like the API key), logging a warning or error if they're missing.
- Security: Never commit actual `.env` files or secrets to version control.
- Scalability: If more configuration options are added later, consider using a configuration management library or schema validation.
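The validation suggestion above could be sketched as follows; `validateEnv` is a hypothetical helper, not part of the PR:

```javascript
// Hypothetical sketch of startup validation for required environment
// variables, as suggested above. Helper name and behavior are assumptions.
function validateEnv(env, required) {
  // Return the names of required variables that are missing or empty.
  return required.filter((name) => !env[name]);
}

const missing = validateEnv(process.env, ['OPENAI_API_KEY']);
if (missing.length > 0) {
  console.warn(`Missing required environment variables: ${missing.join(', ')}`);
}
```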
Summary:
These lines set up the foundation of an Express-based API service, configuring it for integration with OpenAI’s API, and ensuring that configuration is flexible and secure via environment variables.
PR title
feat(model-service): add OpenAI-compatible wrapper (+ pm2 + env example) and update ignores
PR body
Summary
This PR adds a minimal Node/Express model-service that exposes:
- GET /health → returns { ok, model, base }
- POST /generate → forwards messages to ${OPENAI_BASE_URL}/chat/completions and returns { text }

The wrapper is provider-agnostic (OpenAI, Mistral API, OpenRouter, etc.) via .env.

What changed

- model-service/server.js – Express service with /health and /generate, reads .env via dotenv
- model-service/package.json – scripts: dev, start, pm2; Node ≥ 18 engines
- model-service/.env.example – starter env for provider URL/key/model/port
- model-service/ecosystem.config.js – pm2 process file (npm run pm2)
- model-service/Procfile – proc entry for platforms that use Procfile
- .gitignore – ignore model-service/.env (secrets not committed)

Why
How to run (reviewer quick start)
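The quick-start body is missing from this capture. Based on the PR body above (`.env.example`, the `dev`/`pm2` scripts, default PORT=7071), a hedged sketch of a reviewer smoke test; `MODEL_SERVICE_URL` and `buildGenerateRequest` are illustrative names, not part of the PR:

```javascript
// Assumed reviewer quick start:
// 1. cp model-service/.env.example model-service/.env and set OPENAI_API_KEY
// 2. cd model-service && npm install && npm run dev (or npm run pm2)
// 3. Probe the two endpoints, e.g. with Node 18+'s global fetch:
const BASE = process.env.MODEL_SERVICE_URL || 'http://localhost:7071';

function buildGenerateRequest(messages) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  };
}

// Against a running instance:
// const health = await fetch(`${BASE}/health`).then((r) => r.json());
// const gen = await fetch(`${BASE}/generate`, buildGenerateRequest([
//   { role: 'user', content: 'Hello' },
// ])).then((r) => r.json());
// console.log(health, gen.text);
```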