AI Chat Backend

A comprehensive backend service for an AI chat application with JWT authentication, role-based authorization, and MongoDB storage. Built with Node.js, Express, TypeScript, and designed for production use.

Features

  • 🔐 Authentication & Authorization

    • JWT-based authentication with access and refresh tokens
    • Role-based authorization (user, admin, super-admin)
    • Secure password hashing with bcrypt
  • 💬 Chat System

    • Session-based chat storage
    • User messages and AI responses
    • Message history and pagination
    • Pluggable AI service architecture
  • 👥 User Management

    • User registration and login
    • Profile management
    • Admin user management capabilities
  • 🛡️ Security

    • Input validation with Joi
    • Rate limiting for API endpoints
    • Error handling and logging
    • CORS and Helmet security headers
  • 🔧 Developer Experience

    • Full TypeScript support
    • Comprehensive testing suite
    • Docker containerization
    • API documentation and examples

Tech Stack

  • Runtime: Node.js 18+
  • Framework: Express.js
  • Language: TypeScript
  • Database: MongoDB with Mongoose ODM
  • Authentication: JWT (jsonwebtoken)
  • Validation: Joi
  • Testing: Jest + Supertest
  • Containerization: Docker & Docker Compose

Quick Start

Prerequisites

  • Node.js 18+ and npm
  • MongoDB (local or cloud)
  • Git

Local Development

  1. Clone and install dependencies
git clone <repository-url>
cd ai-chat-backend
npm install
  2. Environment setup
cp .env.example .env
# Edit .env with your configuration
  3. Start MongoDB (if running locally)
# Using Docker
docker run -d -p 27017:27017 --name mongodb mongo:7.0

# Or use your local MongoDB installation
  4. Run the application
# Development mode
npm run dev

# Production build and start
npm run build
npm start

The server will be available at http://localhost:4000.

Docker Deployment

  1. Using Docker Compose (Recommended)
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f app

# Stop services
docker-compose down
  2. Manual Docker build
# Build the image
docker build -t ai-chat-backend .

# Run with MongoDB
docker run -d --name ai-chat-app \
  -p 4000:4000 \
  -e MONGO_URI=mongodb://your-mongo-host:27017/ai-chat \
  -e JWT_ACCESS_SECRET=your-secret \
  -e JWT_REFRESH_SECRET=your-refresh-secret \
  ai-chat-backend

Environment Configuration

Create a .env file based on .env.example:

# Server Configuration
PORT=4000
NODE_ENV=development

# Database
MONGO_URI=mongodb://localhost:27017/ai-chat

# JWT Configuration
JWT_ACCESS_SECRET=your-super-secret-access-key-change-this-in-production
JWT_REFRESH_SECRET=your-super-secret-refresh-key-change-this-in-production
ACCESS_TOKEN_TTL=15m
REFRESH_TOKEN_TTL=7d

# Security
BCRYPT_SALT_ROUNDS=12

# AI Service (Optional)
OPENAI_API_KEY=your-openai-api-key-here
AI_MODEL=gpt-3.5-turbo

# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
CHAT_RATE_LIMIT_MAX_REQUESTS=20
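A minimal sketch of how these variables might be loaded into a typed config object. The `loadConfig` helper, its defaults, and the choice of which variables are required are illustrative assumptions, not the repository's actual code:

```typescript
// Hypothetical config loader; names mirror the .env.example above.
interface AppConfig {
  port: number;
  mongoUri: string;
  jwtAccessSecret: string;
  jwtRefreshSecret: string;
  bcryptSaltRounds: number;
}

function loadConfig(env: Record<string, string | undefined>): AppConfig {
  // Fail fast on secrets that have no safe default.
  const required = (key: string): string => {
    const value = env[key];
    if (!value) throw new Error(`Missing required env var: ${key}`);
    return value;
  };

  return {
    port: Number(env.PORT ?? 4000),
    mongoUri: env.MONGO_URI ?? 'mongodb://localhost:27017/ai-chat',
    jwtAccessSecret: required('JWT_ACCESS_SECRET'),
    jwtRefreshSecret: required('JWT_REFRESH_SECRET'),
    bcryptSaltRounds: Number(env.BCRYPT_SALT_ROUNDS ?? 12),
  };
}

// Usage: const config = loadConfig(process.env);
```

Taking the environment as a parameter (rather than reading `process.env` directly) keeps the loader easy to unit-test.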

API Documentation

Authentication Endpoints

Register User

POST /auth/register
Content-Type: application/json

{
  "name": "John Doe",
  "email": "[email protected]",
  "password": "SecurePassword123!"
}

Login

POST /auth/login
Content-Type: application/json

{
  "email": "[email protected]",
  "password": "SecurePassword123!"
}

Refresh Token

POST /auth/refresh
Content-Type: application/json

{
  "refreshToken": "your_refresh_token_here"
}

Logout

POST /auth/logout
Content-Type: application/json

{
  "refreshToken": "your_refresh_token_here"
}
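Clients typically call POST /auth/refresh shortly before the access token expires. A JWT's expiry can be read client-side without verifying the signature by decoding its payload segment. The helpers below are an illustrative sketch, not part of this API; verification still happens server-side:

```typescript
// Decode the (unverified) payload of a JWT to inspect its `exp` claim.
// This only helps a client decide when to call POST /auth/refresh.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split('.')[1];
  if (!payload) throw new Error('Malformed JWT');
  const json = Buffer.from(payload, 'base64url').toString('utf8');
  return JSON.parse(json);
}

function isExpired(
  token: string,
  nowSeconds = Math.floor(Date.now() / 1000)
): boolean {
  const { exp } = decodeJwtPayload(token) as { exp?: number };
  return typeof exp === 'number' && exp <= nowSeconds;
}
```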

User Endpoints

Get Profile

GET /user/profile
Authorization: Bearer your_access_token_here

Update Profile

PUT /user/profile
Authorization: Bearer your_access_token_here
Content-Type: application/json

{
  "name": "Updated Name",
  "password": "NewPassword123!"
}

Chat Endpoints

Create Session

POST /chat/session
Authorization: Bearer your_access_token_here
Content-Type: application/json

{
  "title": "My Chat Session"
}

Get User Sessions

GET /chat/sessions?page=1&limit=10
Authorization: Bearer your_access_token_here

Get Session Messages

GET /chat/session/SESSION_ID?page=1&limit=50
Authorization: Bearer your_access_token_here

Send Message

POST /chat/session/SESSION_ID/message
Authorization: Bearer your_access_token_here
Content-Type: application/json

{
  "content": "Hello, how are you?",
  "metadata": {
    "custom": "data"
  }
}

Delete Session

DELETE /chat/session/SESSION_ID
Authorization: Bearer your_access_token_here
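The `page` and `limit` query parameters above translate into a skip/limit pair for the MongoDB queries. A sketch of how such parameters might be parsed and clamped (the helper name and the cap value are assumptions, not the repository's actual implementation):

```typescript
// Hypothetical parser for ?page=&limit= query parameters.
// Clamps values so a client cannot request an unbounded result set.
interface Pagination {
  page: number;
  limit: number;
  skip: number;
}

function parsePagination(
  query: { page?: string; limit?: string },
  maxLimit = 50
): Pagination {
  const page = Math.max(1, Number.parseInt(query.page ?? '1', 10) || 1);
  const limit = Math.min(
    maxLimit,
    Math.max(1, Number.parseInt(query.limit ?? '10', 10) || 10)
  );
  return { page, limit, skip: (page - 1) * limit };
}
```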

Admin Endpoints

Get All Users (Admin/Super-Admin)

GET /admin/users?page=1&limit=10
Authorization: Bearer your_admin_token_here

Get User Details (Admin/Super-Admin)

GET /admin/user/USER_ID
Authorization: Bearer your_admin_token_here

Delete User (Super-Admin)

DELETE /admin/user/USER_ID
Authorization: Bearer your_super_admin_token_here

Update User Role (Super-Admin)

PUT /admin/user/USER_ID/role
Authorization: Bearer your_super_admin_token_here
Content-Type: application/json

{
  "role": "admin"
}

Get All Sessions (Admin/Super-Admin)

GET /admin/sessions?page=1&limit=10
Authorization: Bearer your_admin_token_here

AI Service Integration

The application includes a pluggable AI service architecture. Currently, it uses a mock AI service for development and testing.

Integrating with OpenAI

  1. Set your OpenAI API key
OPENAI_API_KEY=your-openai-api-key-here
AI_MODEL=gpt-3.5-turbo
  2. Update the AI service (in src/services/aiService.ts)
import OpenAI from 'openai';

// Inside the AiService class in src/services/aiService.ts:
private async generateOpenAiResponse(
  messages: AiMessage[],
  options: AiServiceOptions
): Promise<AiResponse> {
  const openai = new OpenAI({
    apiKey: config.openaiApiKey,
  });

  const response = await openai.chat.completions.create({
    model: options.model || config.aiModel,
    messages: messages,
    max_tokens: options.maxTokens ?? 150,
    temperature: options.temperature ?? 0.7, // ?? preserves an explicit 0
  });

  return {
    content: response.choices[0]?.message?.content ?? 'No response generated',
    metadata: {
      model: options.model || config.aiModel,
      usage: response.usage,
      timestamp: new Date().toISOString()
    }
  };
}

Integrating with Other Providers

The AI service is designed to be provider-agnostic. You can easily integrate with:

  • Anthropic Claude
  • Google Gemini
  • Cohere
  • Hugging Face
  • Self-hosted models
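A provider-agnostic design usually reduces to a small interface that each backend implements. The sketch below is illustrative; the repository's actual types live in src/services/aiService.ts and may differ:

```typescript
interface AiMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

interface AiResponse {
  content: string;
  metadata?: Record<string, unknown>;
}

// Any provider (OpenAI, Claude, a self-hosted model, ...) implements this.
interface AiProvider {
  generate(messages: AiMessage[]): Promise<AiResponse>;
}

// Mock provider, useful for development and tests.
class MockAiProvider implements AiProvider {
  async generate(messages: AiMessage[]): Promise<AiResponse> {
    const last = messages[messages.length - 1];
    return {
      content: `Echo: ${last?.content ?? ''}`,
      metadata: { model: 'mock', timestamp: new Date().toISOString() },
    };
  }
}
```

Swapping providers then means registering a different `AiProvider` implementation; the chat routes never need to change.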

Testing

Run the test suite:

# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run with coverage
npm test -- --coverage

The test suite includes:

  • Authentication flow tests
  • Chat functionality tests
  • Middleware tests (auth, validation, rate limiting)
  • Integration tests for API endpoints

Database Schema

User Model

{
  _id: ObjectId,
  name: string,
  email: string (unique),
  password: string (hashed),
  role: 'user' | 'admin' | 'super-admin',
  refreshTokens: string[],
  createdAt: Date,
  updatedAt: Date
}

Chat Session Model

{
  _id: ObjectId,
  userId: ObjectId (ref: User),
  title: string,
  createdAt: Date,
  lastUpdatedAt: Date,
  metadata: object (optional)
}

Message Model

{
  _id: ObjectId,
  sessionId: ObjectId (ref: ChatSession),
  userId: ObjectId (ref: User),
  sender: 'user' | 'ai' | 'system',
  content: string,
  metadata: object (optional),
  createdAt: Date
}
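These schemas might be mirrored in TypeScript roughly as follows. The interface names, the string representation of ObjectIds, and the `isSender` guard are illustrative; the repository's own model files may differ:

```typescript
type Role = 'user' | 'admin' | 'super-admin';
type Sender = 'user' | 'ai' | 'system';

interface UserDoc {
  name: string;
  email: string;
  password: string; // bcrypt hash, never plain text
  role: Role;
  refreshTokens: string[];
}

interface ChatSessionDoc {
  userId: string; // ObjectId shown as string in this sketch
  title: string;
  metadata?: Record<string, unknown>;
}

interface MessageDoc {
  sessionId: string;
  userId: string;
  sender: Sender;
  content: string;
  metadata?: Record<string, unknown>;
}

// Narrowing guard, e.g. for validating the sender field of incoming data.
function isSender(value: string): value is Sender {
  return value === 'user' || value === 'ai' || value === 'system';
}
```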

Performance Considerations

Database Indexes

  • users.email: Unique index for fast user lookups
  • chatsessions.userId + createdAt: Compound index for user session queries
  • messages.sessionId + createdAt: Compound index for session message queries

Rate Limiting

  • General API: 100 requests per 15 minutes
  • Chat endpoints: 20 requests per 15 minutes
  • Auth endpoints: 5 requests per 15 minutes
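These limits follow a fixed-window scheme: a per-client counter that resets every 15 minutes. The real middleware presumably uses a library such as express-rate-limit; the in-memory class below is only a sketch of the counting logic:

```typescript
// Minimal fixed-window counter: allow up to `max` requests per `windowMs`.
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private windowMs: number, private max: number) {}

  // Returns true if the request identified by `key` should be allowed.
  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}

// e.g. the general API limit above: 100 requests per 900000 ms.
// const apiLimiter = new FixedWindowLimiter(900_000, 100);
```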

Security Features

  • Password hashing with bcrypt (12 rounds)
  • JWT tokens with short TTL (15m access, 7d refresh)
  • Input validation and sanitization
  • CORS and security headers
  • Error handling without information leakage
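Lifetimes like 15m and 7d are the shorthand accepted by jsonwebtoken's `expiresIn` option. To make the notation concrete, here is a hypothetical helper (not part of the repository) converting such strings to milliseconds:

```typescript
// Convert TTL shorthand such as '15m' or '7d' into milliseconds.
const TTL_UNITS_MS: Record<string, number> = {
  s: 1_000,
  m: 60_000,
  h: 3_600_000,
  d: 86_400_000,
};

function ttlToMs(ttl: string): number {
  const match = /^(\d+)([smhd])$/.exec(ttl);
  if (!match) throw new Error(`Unsupported TTL format: ${ttl}`);
  return Number(match[1]) * TTL_UNITS_MS[match[2]];
}

// ttlToMs('15m') → 900000 ms, matching ACCESS_TOKEN_TTL above.
```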

Production Deployment

Environment Setup

  1. Use strong, unique secrets for JWT tokens
  2. Configure MongoDB with authentication
  3. Set up proper logging and monitoring
  4. Use HTTPS in production
  5. Configure rate limiting based on your needs

Scaling Considerations

  1. Database: Use MongoDB Atlas or a properly configured replica set
  2. Caching: Add Redis for session storage and caching
  3. Load Balancing: Use a load balancer for multiple app instances
  4. Monitoring: Implement proper logging and monitoring

Docker Production

# docker-compose.prod.yml
version: '3.8'
services:
  app:
    build: .
    environment:
      NODE_ENV: production
      # Add production environment variables
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Run the test suite
  6. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For support and questions:

  • Create an issue in the repository
  • Check existing issues and documentation
  • Review the test files for usage examples

Roadmap

  • WebSocket support for real-time messaging
  • Message attachments and file uploads
  • Chat session sharing and collaboration
  • Advanced AI model switching
  • Metrics and analytics dashboard
  • Multi-language support
  • Message search functionality
