
Oneline Chat

A modern, real-time streaming chat application built with a FastAPI (Python) backend and a React + TypeScript frontend.

πŸ—οΈ Project Structure

oneline-chat/
├── backend/          # FastAPI Python backend
│   ├── src/          # Source code
│   ├── tests/        # Test files
│   └── requirements.txt
├── ui/               # React TypeScript frontend
│   ├── src/          # Source code
│   └── package.json
├── docker-compose.yml
└── README.md

✨ Features

Backend (FastAPI + Python)

  • 🔄 Real-time streaming chat via Server-Sent Events (SSE)
  • 🤖 AI integration with OpenAI/Ollama support
  • 🗄️ PostgreSQL database with SQLModel ORM
  • 🔧 Configurable settings via environment variables
  • 📝 Comprehensive logging and error handling
  • 🧪 Full test coverage with pytest

Frontend (React + TypeScript)

  • 💬 Modern chat interface with real-time streaming
  • 🌓 Dark/light mode support
  • 📱 Responsive design with Tailwind CSS
  • ⚙️ Chat settings (model selection, temperature, etc.)
  • 📡 Axios HTTP client with interceptors
  • 🔧 Configurable API endpoints via environment variables

🚀 Quick Start

Prerequisites

  • Node.js 18+
  • Python 3.11+
  • PostgreSQL 14+
  • npm/yarn

Backend Setup

cd backend
pip install -r requirements.txt
python -m oneline_chat.main

Frontend Setup

cd ui
npm install
npm run dev

Environment Variables

Backend (.env in /backend):

DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=your_password
DB_NAME=oneline_chat_app
AI_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=deepseek-r1:8b
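
A backend using Pydantic Settings would typically load these variables into a settings object; as a minimal stdlib-only sketch (the function name is hypothetical, the variable names and defaults come from the list above), the database connection URL could be assembled like this:

```python
import os

def build_db_url() -> str:
    """Assemble a PostgreSQL connection URL from the environment,
    falling back to the defaults shown above."""
    host = os.getenv("DB_HOST", "localhost")
    port = os.getenv("DB_PORT", "5432")
    user = os.getenv("DB_USER", "postgres")
    password = os.getenv("DB_PASSWORD", "")
    name = os.getenv("DB_NAME", "oneline_chat_app")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```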

Frontend (.env in /ui):

VITE_API_BASE_URL=http://localhost:8000
VITE_API_TIMEOUT=30000

🐳 Docker Setup

docker-compose up -d

This will start:

  • PostgreSQL database
  • FastAPI backend
  • React frontend
  • Nginx reverse proxy

📚 API Documentation

Once the backend is running, visit http://localhost:8000/docs for the interactive Swagger UI (FastAPI also serves ReDoc at /redoc).

Main Endpoints

  • POST /api/v1/chat/stream - Streaming chat endpoint
  • POST /api/v1/chat/completions - Non-streaming chat
  • GET /api/v1/chat/history/{chat_id} - Chat history
  • GET /api/v1/models - Available models
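
As a sketch of what a client request to the streaming endpoint might look like (the field names mirror the configurable settings listed under Configuration below, and are assumptions rather than a documented schema):

```python
import json

# Hypothetical request body for POST /api/v1/chat/stream;
# field names (model, messages, temperature, max_tokens, stream)
# follow the common OpenAI-style convention and are assumptions,
# not a documented schema for this API.
payload = {
    "model": "deepseek-r1:8b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "max_tokens": 512,
    "stream": True,
}
body = json.dumps(payload)
```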

πŸ› οΈ Development

Backend Development

cd backend
pip install -r requirements.txt
pytest  # Run tests
python -m oneline_chat.main  # Start server

Frontend Development

cd ui
npm install
npm run dev     # Start dev server
npm run build   # Build for production
npm run lint    # Run linting

πŸ›οΈ Architecture

Backend Stack

  • FastAPI - Modern Python web framework
  • SQLModel - SQLAlchemy + Pydantic for database ORM
  • PostgreSQL - Primary database
  • Pydantic Settings - Configuration management
  • Uvicorn - ASGI server

Frontend Stack

  • React 18 - UI framework
  • TypeScript - Type safety
  • Vite - Build tool and dev server
  • Tailwind CSS - Utility-first styling
  • Axios - HTTP client with interceptors

Communication

  • REST API for standard operations
  • Server-Sent Events (SSE) for real-time streaming
  • JSON for data exchange
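
Server-Sent Events arrive as newline-delimited `data:` lines over a long-lived HTTP response. A minimal stdlib-only parsing sketch (assuming the OpenAI-style convention of one `data:` line per event and a `[DONE]` sentinel, which this API may or may not follow):

```python
from typing import Iterable, Iterator

def iter_sse_data(lines: Iterable[str]) -> Iterator[str]:
    """Yield the payload of each `data:` line from an SSE stream,
    stopping at the conventional [DONE] sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and event/id/comment lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        yield data
```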

🧪 Testing

Backend Tests

cd backend
pytest
pytest --cov  # With coverage

Frontend Tests

cd ui
npm test

📦 Deployment

Production Build

# Backend
cd backend
pip install -r requirements.txt
python -m oneline_chat.main

# Frontend
cd ui
npm install
npm run build
npm run preview

Docker Deployment

docker-compose -f docker-compose.prod.yml up -d

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🔧 Configuration

Chat Modes

  • Single: Single AI agent responds
  • Multiple: Multiple AI agents can participate

Supported Models

  • DeepSeek R1 8B (default)
  • GPT-3.5 Turbo
  • GPT-4
  • Claude 3 Haiku/Sonnet

Settings

  • Temperature: 0.0-2.0 (creativity level)
  • Max Tokens: 50-4000 (response length)
  • Save to DB: Toggle conversation persistence
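
A client could clamp user input to the documented ranges before sending a request; a minimal sketch (the function name is hypothetical, the bounds come from the list above):

```python
def clamp_settings(temperature: float, max_tokens: int) -> tuple[float, int]:
    """Clamp chat settings to the documented ranges:
    temperature 0.0-2.0, max_tokens 50-4000."""
    temperature = min(max(temperature, 0.0), 2.0)
    max_tokens = min(max(max_tokens, 50), 4000)
    return temperature, max_tokens
```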
