A modern chatbot application with a React frontend and FastAPI backend, powered by OpenAI's GPT models.
- 🤖 Real-time chat interface with OpenAI integration
- 💬 Maintains conversation context across messages
- 🎨 Modern, responsive UI with gradient design
- 🔄 Loading indicators and smooth animations
- 📱 Mobile-friendly responsive design
- 🧹 Clear chat functionality
- ⚡ Async FastAPI backend for optimal performance
- 📚 Automatic API documentation at /docs
```
Chatbot/
├── backend/
│   ├── app.py               # FastAPI server with async support
│   └── requirements.txt     # Python dependencies
├── frontend/
│   ├── public/
│   │   └── index.html
│   ├── src/
│   │   ├── App.js           # Main React component
│   │   ├── App.css          # Styling
│   │   ├── index.js         # React entry point
│   │   └── index.css        # Global styles
│   └── package.json         # Node dependencies
└── .env.example             # Environment variables template
```
- Python 3.8 or higher
- Node.js 16 or higher
- OpenAI API key
- Navigate to the backend directory:

  ```bash
  cd backend
  ```

- Create and activate a virtual environment (if not already activated):

  ```bash
  # Windows
  ..\myenv\Scripts\activate

  # macOS/Linux
  source ../myenv/bin/activate
  ```

- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create a `.env` file in the root directory:

  ```bash
  # Copy from .env.example
  cp ../.env.example ../.env
  ```

- Edit `.env` and add your OpenAI API key:

  ```
  OPENAI_API_KEY=your_actual_api_key_here
  ```

- Run the FastAPI backend:

  ```bash
  python app.py
  ```

  Or use uvicorn directly:

  ```bash
  uvicorn app:app --reload --port 5000
  ```

  The backend will start on `http://localhost:5000`.

  📚 API Documentation: Visit `http://localhost:5000/docs` for interactive API docs!
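Once the backend is up, you can sanity-check it from a script using only the Python standard library. A minimal sketch, assuming the health-check endpoint lives at `/health` (the exact path is not confirmed here; adjust it to match `backend/app.py`):

```python
import json
import urllib.request
import urllib.error

BACKEND_URL = "http://localhost:5000"

def backend_is_healthy(base_url=BACKEND_URL, timeout=2.0):
    """Return True if GET /health answers with {"status": "healthy"}."""
    try:
        # /health is assumed; change it if the backend uses a different path
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            body = json.load(resp)
        return body.get("status") == "healthy"
    except (urllib.error.URLError, OSError, ValueError):
        # Any connection or parse failure counts as "not healthy"
        return False

if __name__ == "__main__":
    print("backend healthy:", backend_is_healthy())
```

This is handy in CI or as a readiness probe before the frontend starts issuing chat requests.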
- Open a new terminal and navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install Node.js dependencies:

  ```bash
  npm install
  ```

- Start the React development server:

  ```bash
  npm start
  ```

  The frontend will start on `http://localhost:3000`.
- Open your browser and navigate to `http://localhost:3000`
- Type your message in the input field at the bottom
- Press Enter or click the "Send" button
- The AI will respond with context from your conversation history
- Use the "Clear Chat" button to start a new conversation
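The same conversation flow can be driven from a script. The sketch below builds the request body in the shape the API documents and keeps the running history so each turn carries context; the `/chat` path and the helper names are assumptions, not confirmed by `backend/app.py`:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:5000/chat"  # assumed endpoint path

def build_payload(history, user_text):
    """Append the new user turn to the running history and return the request body."""
    history.append({"role": "user", "content": user_text})
    return {"messages": history}

def send_message(history, user_text):
    """POST the full history so the backend can answer with conversation context."""
    req = urllib.request.Request(
        CHAT_URL,
        data=json.dumps(build_payload(history, user_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["message"]
    # Record the assistant's turn so the next call sees it
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history = []
    print(send_message(history, "Hello!"))
    print(send_message(history, "Tell me about AI"))
```

Clearing the chat on the backend side is just starting over with an empty `history` list, which mirrors what the "Clear Chat" button does in the UI.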
Send a message to the chatbot and receive a response.

Request Body:

```json
{
  "messages": [
    { "role": "user", "content": "Hello!" },
    { "role": "assistant", "content": "Hi! How can I help you?" },
    { "role": "user", "content": "Tell me about AI" }
  ]
}
```

Response:

```json
{
  "message": "AI stands for Artificial Intelligence...",
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 100,
    "total_tokens": 150
  }
}
```

Check if the backend is running.
Response:

```json
{
  "status": "healthy"
}
```

By default, the app uses `gpt-4o-mini`. You can change this in `backend/app.py`:
```python
model="gpt-4o-mini",  # Change to "gpt-4", "gpt-3.5-turbo", etc.
```

Adjust these parameters in `backend/app.py`:

```python
temperature=0.7,  # Controls randomness (0.0-2.0)
max_tokens=1000   # Maximum response length
```

The next phase will add:
- Vector database integration (e.g., Pinecone, Weaviate, or Chroma)
- Document ingestion and chunking
- Embedding generation and storage
- Semantic search for relevant context
- Enhanced prompts combining chat history + retrieved documents
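As a taste of the ingestion step, here is a minimal sketch of overlapping word-window chunking in plain Python; the function name and default sizes are illustrative, not part of the current codebase:

```python
def chunk_text(text, chunk_size=100, overlap=20):
    """Split text into windows of `chunk_size` words that overlap by `overlap` words,
    so content near a boundary appears in two chunks and stays retrievable."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap  # how far each window advances
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already reached the end of the text
    return chunks
```

A real pipeline would embed each chunk and store the vectors in one of the databases listed above, then retrieve the nearest chunks at query time to enrich the prompt.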
- ImportError: Make sure all dependencies are installed with `pip install -r requirements.txt`
- OpenAI API Error: Verify your API key is correct in the `.env` file
- Port 5000 in use: Change the port in `app.py`: `uvicorn.run(app, host="0.0.0.0", port=5001)`
- Cannot connect to backend: Ensure the FastAPI backend is running on port 5000
- Module not found: Run `npm install` in the frontend directory
- Port 3000 in use: React will prompt you to use a different port
MIT License - Feel free to use this project for learning or commercial purposes.
Contributions are welcome! Please feel free to submit a Pull Request.