An intelligent AI-powered question-and-answer system that enables users to interact with an AI assistant in real-time. Built with modern web technologies and powered by Google's Gemini AI model, this system provides a seamless conversational experience with persistent chat history and user authentication.
This full-stack application demonstrates modern web development practices by combining a robust FastAPI backend with a responsive Next.js frontend. The system leverages Google's Gemini AI model to provide intelligent responses while maintaining user context through persistent conversation history.
- **AI-Powered Conversations**: Leverages Google Gemini's free tier for intelligent responses
- **Secure Authentication**: JWT-based user authentication system
- **Real-time Chat Interface**: Interactive chat with immediate AI responses
- **Conversation History**: Persistent storage and retrieval of chat history per user
- **Paginated History View**: Efficient browsing of past conversations
- **User Management**: Complete user registration and login system
- **RESTful API**: Well-structured API endpoints built with FastAPI
- **Database Integration**: PostgreSQL with the SQLAlchemy ORM
- **Modern Frontend**: TypeScript-based React components with Next.js
- **Environment Security**: Secure environment variable management
- **Responsive Design**: Mobile-friendly interface
- FastAPI - High-performance Python web framework
- SQLAlchemy - Database ORM for Python
- PostgreSQL - Robust relational database
- JWT - Secure authentication tokens
- python-dotenv - Environment variable management
- Uvicorn - ASGI server for FastAPI
- Next.js - React framework for production
- TypeScript - Type-safe JavaScript development
- React - Component-based UI library
- Google Gemini Free-Tier - Generative AI language model
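How the backend calls Gemini is not reproduced in this README; a minimal sketch using the `google-generativeai` client might look like the following. The `ask_gemini` helper and the model name are illustrative assumptions, not the project's actual code; the injectable `model` parameter simply makes the function testable without network access.

```python
import os

def ask_gemini(question: str, model=None) -> str:
    """Send a question to Gemini and return the text reply.

    `model` can be injected (e.g. a stub in tests); in production it
    defaults to a real Gemini model configured from GEMINI_API_KEY.
    """
    if model is None:
        import google.generativeai as genai
        genai.configure(api_key=os.environ["GEMINI_API_KEY"])
        model = genai.GenerativeModel("gemini-1.5-flash")  # model choice is an assumption
    response = model.generate_content(question)
    return response.text
```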
Interactive-QA-System/
├── backend/
│   ├── Chat/
│   │   ├── __init__.py
│   │   ├── main.py              # FastAPI application entry point
│   │   ├── models.py            # Chat data models
│   │   ├── database.py          # Database configuration
│   │   ├── crud.py              # Database operations
│   │   └── routes.py            # Chat API endpoints
│   ├── Authentication/
│   │   ├── __init__.py
│   │   ├── models.py            # User data models
│   │   ├── database.py          # Auth database config
│   │   ├── auth.py              # Authentication logic
│   │   └── routes.py            # Auth API endpoints
│   ├── requirements.txt         # Python dependencies
│   └── .env.example             # Environment variables template
├── frontend/
│   ├── pages/
│   │   ├── index.js             # Home page
│   │   ├── login.js             # Login page
│   │   └── chat.js              # Chat interface
│   ├── components/
│   │   ├── ChatInterface.tsx    # Main chat component
│   │   ├── MessageBubble.tsx    # Individual message display
│   │   └── AuthForm.tsx         # Authentication forms
│   ├── styles/                  # CSS styles
│   ├── utils/                   # Utility functions
│   ├── package.json             # Node.js dependencies
│   └── next.config.js           # Next.js configuration
├── README.md                    # Project documentation
├── .gitignore                   # Git ignore rules
└── LICENSE                      # Project license
Before you begin, ensure you have the following installed:
git clone https://github.com/MuchiraIrungu/Technical-assessment.git
cd Technical-assessment
Create and activate a virtual environment:
python -m venv the_env

# On Linux/macOS
source the_env/bin/activate

# On Windows
the_env\Scripts\activate
Install Python dependencies:
pip install -r requirements.txt
Set up environment variables:
cp .env.example .env
Edit the `.env` and `config.py` files with your configuration:

# Database Configuration
DATABASE_URL=postgresql://postgres:your_password@localhost:5432/qa_system

# JWT Configuration
JWT_SECRET_KEY=your_super_secret_jwt_key_here
JWT_ALGORITHM=HS256
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=30

# AI Configuration
GEMINI_API_KEY=your_gemini_api_key_here

# Server Configuration
DEBUG=True
HOST=0.0.0.0
PORT=8001
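With python-dotenv, these variables are loaded into the process environment at startup, after which the backend can read them with `os.getenv`. A sketch of that pattern (the fallback defaults are illustrative; this is not the project's actual config module):

```python
import os

# python-dotenv would first copy .env into the environment:
#   from dotenv import load_dotenv
#   load_dotenv()

# Read each setting with a safe fallback for local development.
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./qa_system.db")
JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY", "change-me")
JWT_ALGORITHM = os.getenv("JWT_ALGORITHM", "HS256")
JWT_ACCESS_TOKEN_EXPIRE_MINUTES = int(os.getenv("JWT_ACCESS_TOKEN_EXPIRE_MINUTES", "30"))
```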
Set up the database:
# Create database (make sure PostgreSQL is running)
createdb qa_system

# Initialize database tables
python -m Chat.database
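The `Chat/database.py` module invoked above isn't reproduced here; a minimal SQLAlchemy setup along these lines would match that step (table and column names are illustrative guesses, not the project's actual schema):

```python
import datetime
import os

from sqlalchemy import Column, DateTime, Integer, Text, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# Fall back to SQLite so the module imports without a running PostgreSQL.
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./qa_system.db")

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine, autoflush=False)
Base = declarative_base()

class Conversation(Base):
    """One question/answer exchange, keyed to a user."""
    __tablename__ = "conversations"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, nullable=False)
    question = Column(Text, nullable=False)
    answer = Column(Text, nullable=False)
    created_at = Column(DateTime, default=datetime.datetime.utcnow)

# `python -m Chat.database` would run this once to create the tables:
# Base.metadata.create_all(bind=engine)
```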
Start the backend server:
uvicorn Chat.main:app --reload --host 0.0.0.0 --port 8001
Navigate to frontend directory:
cd frontend
Install Node.js dependencies:
npm install
Create frontend environment file:
# Create .env.local echo "NEXT_PUBLIC_API_BASE_URL=http://localhost:8001" > .env.local
Start the development server:
npm run dev
- Visit Google AI Studio
- Create a new project or select an existing one
- Generate an API key
- Add the key to your `.env` file
Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8001
- API Documentation: http://localhost:8001/docs
Create an account:
- Navigate to the signup page
- Provide required information
- Verify your account (if email verification is implemented)
Start chatting:
- Log in to your account
- Navigate to the chat interface
- Type your question and press Enter
- View AI responses in real-time
Manage conversations:
- Access chat history from the sidebar
- Use pagination to browse older conversations
- Search through previous chats
- `POST /auth/register` - User registration
- `POST /auth/login` - User login
- `GET /auth/me` - Get current user info
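What `POST /auth/login` ultimately returns is a signed JWT. To show what such a token contains, here is a stdlib-only HS256 sign/verify sketch; the real backend would use a library such as PyJWT or python-jose rather than this hand-rolled version:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str, expire_minutes: int = 30) -> str:
    """Sign an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {**payload, "exp": int(time.time()) + expire_minutes * 60}
    signing_input = (
        f"{_b64url(json.dumps(header).encode())}."
        f"{_b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    """Check the signature and expiry; raise ValueError on failure."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```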
- `POST /chat/ask` - Send a question to the AI
- `GET /chat/history` - Get the user's chat history
- `GET /chat/conversation/{id}` - Get a specific conversation
- `DELETE /chat/conversation/{id}` - Delete a conversation
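The paginated history endpoint presumably accepts page parameters; its core logic reduces to something like the sketch below. The parameter names and response shape are illustrative, and in the real backend the slice would be an SQL OFFSET/LIMIT rather than a Python list slice:

```python
def paginate(items, page: int = 1, page_size: int = 20) -> dict:
    """Slice a conversation list the way GET /chat/history might,
    returning the page plus metadata a frontend pager needs."""
    if page < 1:
        raise ValueError("page numbers start at 1")
    total = len(items)
    start = (page - 1) * page_size
    return {
        "items": items[start:start + page_size],
        "page": page,
        "page_size": page_size,
        "total": total,
        "has_next": start + page_size < total,
    }
```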
# Backend tests
cd backend
pytest tests/ -v
# Frontend tests
cd frontend
npm test
# Run tests with coverage
pytest --cov=Chat tests/
npm run test:coverage

- Test user registration and login
- Verify JWT token functionality
- Test AI response generation
- Check conversation history persistence
- Validate error handling
- JWT tokens with expiration
- Password hashing using bcrypt
- Protected API endpoints
- CORS configuration for frontend
- Sensitive data stored in environment variables
- `.env` files excluded from version control
- API key protection and rotation
- SQL injection prevention with SQLAlchemy
- Database connection encryption
- User data privacy compliance
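SQLAlchemy prevents injection by binding parameters rather than interpolating strings into SQL. The difference is easy to demonstrate with stdlib sqlite3 standing in for the project's PostgreSQL setup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

# Malicious input designed to match every row:
username = "' OR '1'='1"

# Parameterized query: the driver treats the input purely as data.
safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (username,)
).fetchall()

# Naive string interpolation: the classic injection hole that
# SQLAlchemy's bound parameters close.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE username = '{username}'"
).fetchall()
```

The parameterized query returns no rows, while the interpolated one leaks the whole table.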
- Gemini Free Tier: Limited to 60 requests per minute
- Daily Quota: 1000 requests per day
- Response Length: Maximum 2048 tokens per response
- Database queries optimized with indexing
- Frontend implements lazy loading for chat history
- Rate limiting implemented on backend
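The README mentions backend rate limiting without detail; one common approach is a token bucket, sketched here. The class and its wiring are illustrative, not the project's actual implementation; the injectable `clock` exists only so the refill behavior can be tested deterministically.

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per `per` seconds; one way to stay
    under Gemini's free-tier request limits."""

    def __init__(self, rate: int, per: float, clock=time.monotonic):
        self.capacity = rate
        self.tokens = float(rate)
        self.fill_rate = rate / per   # tokens regained per second
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        """Consume one token if available; refill based on elapsed time."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```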
- File Upload Support - Allow users to upload documents for context
- Voice Integration - Speech-to-text and text-to-speech capabilities
- Multi-language Support - Internationalization (i18n)
- Chat Export - Download conversations as PDF/JSON
- Admin Dashboard - User management and system analytics
- Redis Caching - Improve response times
- WebSocket Support - Real-time messaging
- Docker Containerization - Simplified deployment
- CI/CD Pipeline - Automated testing and deployment
- Monitoring & Logging - Application performance monitoring
1. Database Connection Error
# Check if PostgreSQL is running
sudo systemctl status postgresql
# Verify database exists
psql -U postgres -l

2. Gemini API Key Issues
- Verify the API key is correct in `.env`
- Check quota limits in Google Cloud Console
- Ensure billing is enabled (if required)
3. Frontend Build Errors
# Clear cache and reinstall dependencies
rm -rf node_modules package-lock.json
npm install

4. CORS Issues
- Check FastAPI CORS middleware configuration
- Verify frontend URL in allowed origins
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8 for Python code
- Use TypeScript for frontend development
- Write comprehensive tests for new features
- Update documentation for API changes
This project is licensed under the MIT License - see the LICENSE file for details.
Developer: Muchira Irungu
LinkedIn: www.linkedin.com/in/muchira-irungu
- Bug Reports: Create an issue with the bug template
- Feature Requests: Use the feature request template
- Questions: Check existing issues or create a new discussion

If you found this project helpful, please consider giving it a star!
Last updated: 13/08/2025