A modern, responsive chat interface built with Django for interacting with OpenAI's Large Language Models (LLMs).
- Modern UI: Beautiful, responsive design with gradient backgrounds and smooth animations
- Real-time Chat: Interactive chat interface with typing indicators
- OpenAI Integration: Powered by GPT-3.5-turbo for intelligent responses
- Mobile Responsive: Works seamlessly on desktop and mobile devices
- Session Management: Maintains chat history with unique session IDs
- Database Storage: Stores all chat messages in an SQLite database
- Admin Interface: Django admin panel for managing chat messages
- RESTful API: Clean API endpoints for sending messages and retrieving history
- Conversation Memory: AI remembers conversation context for coherent responses
The interface features:
- Gradient background with modern card design
- User and AI message bubbles with distinct styling
- Typing indicators during AI responses
- Responsive design for all screen sizes
- Python 3.8 or higher
- pip (Python package installer)
- OpenAI API key (get one at https://platform.openai.com/)
1. Clone or download the project

   ```bash
   cd test-project
   ```

2. Create and activate a virtual environment

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Set up your OpenAI API key

   Create a `.env` file in the project root:

   ```bash
   cp env.example .env
   ```

   Edit `.env` and add your OpenAI API key:

   ```
   OPENAI_API_KEY=your_actual_openai_api_key_here
   ```

5. Run database migrations

   ```bash
   python manage.py makemigrations
   python manage.py migrate
   ```

6. Create a superuser (optional, for admin access)

   ```bash
   python manage.py createsuperuser
   ```

7. Run the development server

   ```bash
   python manage.py runserver
   ```

8. Open your browser and navigate to `http://127.0.0.1:8000/`
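Once the steps above are done, an optional sanity check can confirm everything is in place. This is an illustrative sketch, not part of the project; the `db.sqlite3` filename assumes Django's default SQLite configuration.

```python
# Optional post-install sanity check; run from the project root inside the
# activated virtualenv. The db.sqlite3 name assumes Django's default setup.
import importlib.util
import pathlib

checks = {
    "Django installed (step 3)": importlib.util.find_spec("django") is not None,
    ".env file present (step 4)": pathlib.Path(".env").exists(),
    "SQLite database created (step 5)": pathlib.Path("db.sqlite3").exists(),
}
for name, ok in checks.items():
    print(("OK:      " if ok else "MISSING: ") + name)
```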
- Type your message in the input field
- Press Enter or click Send to submit
- View AI responses in real-time
- Chat history is automatically saved
- Access at `http://127.0.0.1:8000/admin/`
- View and manage all chat messages
- Filter by role, session, or timestamp
- Search through message content
```
test-project/
├── chat/                    # Chat application
│   ├── models.py            # Database models
│   ├── views.py             # View functions and API endpoints
│   ├── urls.py              # URL routing for chat app
│   ├── admin.py             # Admin interface configuration
│   └── templates/chat/      # HTML templates
│       └── chat.html        # Main chat interface
├── chat_project/            # Django project settings
│   ├── settings.py          # Project configuration
│   ├── urls.py              # Main URL routing
│   └── wsgi.py              # WSGI configuration
├── manage.py                # Django management script
├── requirements.txt         # Python dependencies
└── README.md                # This file
```
- URL: `/api/send/`
- Method: POST
- Body: `{"message": "Your message", "conversation_id": "optional_conversation_id"}`
- Response: `{"success": true, "response": "AI response", "conversation_id": "conversation_id"}`

- URL: `/api/history/<conversation_id>/`
- Method: GET
- Response: `{"messages": [{"role": "user", "content": "...", "timestamp": "..."}]}`

- URL: `/api/conversation/create/`
- Method: POST
- Body: `{"title": "Conversation title"}`
- Response: `{"success": true, "conversation_id": "id", "title": "title"}`

- URL: `/api/conversation/<conversation_id>/delete/`
- Method: DELETE
- Response: `{"success": true}`
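For programmatic access, the endpoints above can be exercised with a small client. This is an illustrative sketch using only the standard library; the payload shapes come from the request and response examples above, a dev server must be running at the URL shown, and depending on how the views handle CSRF a token header may also be required.

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1:8000"  # Django dev server address from this README


def build_send_payload(message, conversation_id=None):
    """Body for POST /api/send/; conversation_id is optional."""
    payload = {"message": message}
    if conversation_id is not None:
        payload["conversation_id"] = conversation_id
    return payload


def send_message(message, conversation_id=None):
    """POST /api/send/ and return the parsed JSON response."""
    req = request.Request(
        f"{BASE_URL}/api/send/",
        data=json.dumps(build_send_payload(message, conversation_id)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def get_history(conversation_id):
    """GET /api/history/<conversation_id>/ and return the message list."""
    with request.urlopen(f"{BASE_URL}/api/history/{conversation_id}/") as resp:
        return json.load(resp)["messages"]
```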
This project now includes full OpenAI integration:
- GPT-3.5-turbo: Uses OpenAI's GPT-3.5-turbo chat model by default
- Conversation Memory: AI remembers the full conversation context
- Error Handling: Graceful handling of API errors, rate limits, and authentication issues
- Configurable: Easy to switch models or adjust parameters
The OpenAI integration is configured through environment variables:
```
# Required
OPENAI_API_KEY=your_openai_api_key_here

# Optional (can be set in settings.py)
OPENAI_MODEL=gpt-3.5-turbo   # Default model
OPENAI_MAX_TOKENS=1000       # Maximum response length
OPENAI_TEMPERATURE=0.7       # Response creativity (0.0-1.0)
```

- User sends a message
- System formats the conversation history for OpenAI API
- OpenAI generates a contextual response
- Response is saved to the database
- User receives the AI response
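Step 2 of this flow, formatting stored history into the shape the OpenAI chat API expects, can be sketched as a small framework-free helper (field names are assumptions based on the message structure shown in the history endpoint above):

```python
def format_history(history, new_message):
    """Turn stored messages into the list-of-dicts shape the OpenAI chat
    API expects, appending the user's newest message at the end."""
    messages = [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": new_message})
    return messages


# Example: one stored exchange plus a new question
history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
]
print(format_history(history, "What is Django?"))
```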
You can easily customize the AI behavior by modifying `chat/services.py`:

```python
# Change the model
ai_response = openai_service.get_chat_response(messages_for_api, model="gpt-4")

# Adjust the system prompt
messages.append({
    "role": "system",
    "content": "You are a helpful coding assistant. Provide code examples and explanations."
})
```

To connect with other LLM services (OpenAI, Anthropic, etc.), modify the `send_message` view in `chat/views.py`:
```python
# Example OpenAI integration (openai>=1.0 client interface)
from openai import OpenAI

client = OpenAI(api_key="your-api-key")

# In the send_message view, replace the placeholder response with:
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": user_message}],
)
ai_response = response.choices[0].message.content
```

- Modify CSS in `chat/templates/chat/chat.html`
- Update color schemes, fonts, and layout
- Add custom animations and transitions
```bash
python manage.py test
```

- Follow PEP 8 style guidelines
- Use meaningful variable and function names
- Add docstrings to functions and classes
- Set `DEBUG = False` in `settings.py`
- Configure `ALLOWED_HOSTS` with your domain
- Use a production database (PostgreSQL, MySQL)
- Set up static file serving
- Configure HTTPS
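In `settings.py`, the first two checklist items can be driven by the environment variables listed below. This is a minimal sketch; the `DJANGO_ALLOWED_HOSTS` variable name is an assumption, chosen to match the naming of the others.

```python
import os

# Minimal sketch of environment-driven production settings.
# DJANGO_ALLOWED_HOSTS is an assumed variable name, not from this project.
DEBUG = os.environ.get("DJANGO_DEBUG", "False") == "True"
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "")
ALLOWED_HOSTS = [h for h in os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",") if h]
```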
```bash
export DJANGO_SECRET_KEY="your-secret-key"
export DJANGO_DEBUG="False"
export DATABASE_URL="your-database-url"
```

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is open source and available under the MIT License.
For issues and questions:
- Check the Django documentation
- Review the code comments
- Open an issue in the repository
- User authentication and user-specific chat history
- File upload support
- Real-time WebSocket communication
- Multiple AI model support
- Chat export functionality
- Advanced message formatting
- Chat analytics and insights