A full-stack web application that uses an AI language model to analyze user reviews in a DHTMLX Grid. The application determines sentiment, extracts relevant tags, and provides a summary for each review, updating the grid in real-time, row-by-row.
- Real-time, Row-by-Row Analysis: Click "Analyze All Reviews" to see the grid populate with AI-generated data one row at a time, providing a live-updating experience.
- On-the-Fly Editing: Edit a review directly in the grid, and it will be instantly re-analyzed.
- Clear Sentiment Visualization: Uses icons (👍, 👎, 🤔) for an at-a-glance read of each review's sentiment (a small client-side sketch follows this feature list).
- Reliable & Scalable: The backend limits how many requests it sends to the AI service at once, so even large batches are processed without hitting rate limits.
- Configured to work with any OpenAI API-compatible proxy.
- Tested with the `gpt-4.1-nano` model.
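For illustration, here is a minimal client-side sketch of how the sentiment icons and edit-triggered re-analysis could be wired up. The column ids, the sentiment labels, the grid setup, and the single-row `analyze_review` event are assumptions made for this sketch, not necessarily the demo's actual code; only `review_analyzed` is an event name the demo actually uses (see the architecture description further below).

```javascript
// Hypothetical client-side wiring (browser). Assumes the Socket.IO client and
// DHTMLX Suite Grid are loaded globally; ids and the single-row event name
// "analyze_review" are illustrative.
const socket = io("http://localhost:3001");

// Map an AI-reported sentiment label to an at-a-glance icon.
const sentimentIcon = (sentiment) =>
  ({ positive: "👍", negative: "👎", neutral: "🤔" }[sentiment] || "");

const grid = new dhx.Grid("grid_container", {
  columns: [
    { id: "review", header: [{ text: "Review" }], editable: true },
    {
      id: "sentiment",
      header: [{ text: "Sentiment" }],
      // Render the icon instead of the raw sentiment string.
      template: (value) => sentimentIcon(value),
    },
    { id: "tags", header: [{ text: "Tags" }] },
    { id: "summary", header: [{ text: "Summary" }] },
  ],
  data: [],
});

// Re-analyze a row as soon as its review text is edited in the grid.
grid.events.on("afterEditEnd", (value, row, column) => {
  if (column.id === "review") {
    socket.emit("analyze_review", { id: row.id, review: value });
  }
});

// Apply per-row results streamed back from the server.
socket.on("review_analyzed", (result) => {
  grid.data.update(result.id, result);
});
```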
Follow these steps to get the project running on your local machine.
cd grid-ai-demo
npm install
Create a new file named `.env` inside the `grid-ai-demo` directory by copying from `env.sample`. This file holds your secret keys and configuration.
📄 grid-ai-demo/.env
# --- OpenAI API Configuration ---
OPENAI_API_KEY=sk-YourSecretApiKeyGoesHere
OPENAI_BASE_URL=https://api.openai.com/v1
# --- Security Configuration ---
CORS_ALLOWED_ORIGINS=http://localhost:3001,http://127.0.0.1:3001,http://localhost:5500,http://127.0.0.1:5500
- `OPENAI_API_KEY`: (Required) Your secret API key for the AI service.
- `OPENAI_BASE_URL`: The API endpoint for the AI service. Can be changed to use a proxy or a different provider compatible with the OpenAI API.
- `CORS_ALLOWED_ORIGINS`: A crucial security setting. This is a comma-separated list of web addresses allowed to connect to your backend server. For production, you must change this to your public frontend's URL (e.g., `https://myapp.com`).
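As a rough sketch of how the server might consume these variables (assuming an Express + Socket.IO setup and the official `dotenv` and `openai` packages; the exact wiring in the demo may differ):

```javascript
// Hypothetical excerpt of the server bootstrap (e.g. server.js).
require("dotenv").config();
const express = require("express");
const http = require("http");
const { Server } = require("socket.io");
const OpenAI = require("openai");

// OPENAI_BASE_URL lets the same code talk to any OpenAI-compatible endpoint.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_BASE_URL || "https://api.openai.com/v1",
});

// Only origins listed in CORS_ALLOWED_ORIGINS may connect to the backend.
const allowedOrigins = (process.env.CORS_ALLOWED_ORIGINS || "")
  .split(",")
  .map((origin) => origin.trim())
  .filter(Boolean);

const app = express();
const server = http.createServer(app);
const io = new Server(server, { cors: { origin: allowedOrigins } });
```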
In the same `grid-ai-demo` directory, run the start command:
npm start
You should see the following output in your terminal:
Server started on port 3001
Open your favorite web browser and navigate to: http://localhost:3001
You should see the application, ready for analysis!
The application uses a real-time, event-driven architecture to provide a seamless and reliable user experience.
- Initiation: The user clicks the "Analyze All Reviews" button on the frontend.
- Bulk Request: The frontend gathers all reviews that need analysis and sends them to the server as a single list via an `analyze_bulk_reviews` Socket.IO event.
- Concurrent Backend Processing: The Node.js server receives the list. Using the `p-map` library, it processes the reviews concurrently (with a configurable limit, e.g., 5 at a time) by making asynchronous calls to the AI service (a minimal sketch follows this list).
- Streaming Results: As soon as the analysis for any single review is complete, the server immediately sends the result for that specific row back to the client via a `review_analyzed` event, without waiting for the entire batch to finish.
- Instant Grid Update: The frontend receives the analysis for that one row and instantly updates its data in the grid.
- Completion: This process continues, creating a live, row-by-row update effect. Once all reviews have been processed, the server emits a final `bulk_analysis_finished` event to signal that the entire job is done.
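A minimal sketch of that server-side flow, assuming the `io` and `openai` objects from the configuration sketch above, a CommonJS-compatible `p-map` version (v4 or earlier; newer releases are ESM-only), and a hypothetical `analyzeReview` helper that prompts the model and returns `{ sentiment, tags, summary }`:

```javascript
const pMap = require("p-map");

io.on("connection", (socket) => {
  socket.on("analyze_bulk_reviews", async (reviews) => {
    // Process at most 5 reviews at a time so the AI provider's rate limits
    // are respected even for large batches.
    await pMap(
      reviews,
      async (review) => {
        const analysis = await analyzeReview(openai, review.text);
        // Stream each result back as soon as it is ready, one row at a time.
        socket.emit("review_analyzed", { id: review.id, ...analysis });
      },
      { concurrency: 5 }
    );
    // Signal that the whole batch has been processed.
    socket.emit("bulk_analysis_finished");
  });
});
```

Because each `review_analyzed` event carries only one row's data, the grid can update immediately instead of waiting for the slowest review in the batch.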
This application is ready to be deployed on any service that supports Node.js, such as Render, Heroku, or Vercel.
Key deployment steps:
- Do not upload your `.env` file. Use the hosting provider's "Environment Variables" section to set `OPENAI_API_KEY`, `OPENAI_BASE_URL`, and `CORS_ALLOWED_ORIGINS`.
- The `Root Directory` should be left blank (or set to `/`).
- The `Start Command` should be `npm start`.
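Most of these hosts inject the listening port via a `PORT` environment variable, so the server would typically bind like this (a sketch continuing the setup above; the demo's actual entry point may differ):

```javascript
// Respect the host-assigned PORT in production; fall back to 3001 locally.
const PORT = process.env.PORT || 3001;
server.listen(PORT, () => {
  console.log(`Server started on port ${PORT}`);
});
```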
DHTMLX Grid is a commercial library - use it under a valid license or evaluation agreement.