A modern, modular AI chat platform with character‑driven conversations.
API‑only backend (no HTML templates or WebSockets) so frontend teams can integrate freely.
- Highlights
- Quickstart
- API Endpoints
- Architecture
- Configuration
- Testing & Coverage
- Roadmap
- Screenshots
- Contributing
- License
- Acknowledgements
## Highlights

- FastAPI, production-ready: modular `api`, `service`, `core`, and `model` layers.
- Character-driven chat: predefined characters via `data/characters.json` (e.g., `coach`).
- LLM stub mode: works offline; echoes responses for rapid dev/testing.
- Clean orchestration: `PlatformService` composes the domain services.
- Thorough tests: pytest with coverage; CI-friendly.
## Quickstart

1. Clone and enter the repo:

   ```bash
   git clone https://github.com/yourusername/ai-character-platform.git
   cd ai-character-platform
   ```

2. Create an environment (Conda or venv):

   ```bash
   # Conda (recommended)
   conda env create -f backend/environment.yml
   conda activate ai-character-backend

   # OR venv + pip
   python -m venv .venv && source .venv/bin/activate
   pip install -r requirements.txt
   pip install -e backend
   ```

3. Configure environment variables (optional):

   ```bash
   cp backend/.env.example backend/.env
   # LLMService runs in stub mode if no GROQ_API_KEY is provided
   ```

4. Run the server:

   ```bash
   uvicorn backend.main:app --reload --host 127.0.0.1 --port 8000
   ```

5. Try the demo client:

   ```bash
   python backend/api_demo.py
   # or point it at a custom server
   BACKEND_BASE_URL=http://127.0.0.1:8000/api/v1 python backend/api_demo.py
   ```
## API Endpoints

Base path: `/api/v1`

- `GET /health` — service health/status
- `POST /chat` — body: `{ "character_id": "coach", "message": "Hello" }`

Example cURL:

```bash
curl -s http://127.0.0.1:8000/api/v1/health | jq .

curl -s -X POST http://127.0.0.1:8000/api/v1/chat \
  -H 'Content-Type: application/json' \
  -d '{"character_id":"coach","message":"Hello there!"}' | jq .
```

Tip: No API key? The LLM stub mode returns an echo‑style reply so you can build end‑to‑end without external dependencies.
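The same calls work from Python. Below is a minimal client sketch using only the standard library; `build_chat_body` and `chat` are illustrative helpers, not part of this repo, and the response shape is simply whatever JSON the server returns:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000/api/v1"  # matches the uvicorn command above


def build_chat_body(character_id: str, message: str) -> bytes:
    """Serialize the POST /chat request body shown above."""
    return json.dumps({"character_id": character_id, "message": message}).encode("utf-8")


def chat(character_id: str, message: str, base_url: str = BASE_URL) -> dict:
    """POST /chat and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/chat",
        data=build_chat_body(character_id, message),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the server running, `chat("coach", "Hello there!")` mirrors the second cURL call above.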
## Architecture

```text
backend/
  api/
    v1/
      routes.py             # FastAPI routes (health, chat)
  core/
    config.py               # Pydantic settings (.env)
    logging.py              # Logging bootstrap
  model/
    schemas.py              # Pydantic domain models
  service/
    character_service.py    # Load/generate characters
    conversation_service.py # Conversation lifecycle & persistence
    llm_service.py          # LLM wrapper (Groq, stub mode)
    platform_service.py     # Orchestrator
  main.py                   # FastAPI app factory (API-only)
data/
  characters.json           # Predefined characters
  conversations/            # Conversation JSON persistence (default)
```

Notes:

- Storage defaults to JSON under `data/`; optional SQLite is upcoming.
- `LLMService` stub mode returns an echo response for local testing.
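To make the orchestration concrete, here is a hypothetical sketch of how `PlatformService` might compose the domain services above. All class internals, method names, and the stub reply format are assumptions for illustration, not the repository's actual code:

```python
from dataclasses import dataclass


@dataclass
class Character:
    """Minimal stand-in for a predefined character from data/characters.json."""
    id: str
    name: str
    system_prompt: str


class CharacterService:
    """Looks up characters by id (assumed interface)."""

    def __init__(self, characters: dict):
        self._characters = characters

    def get(self, character_id: str) -> Character:
        return self._characters[character_id]


class LLMService:
    """Stub mode only: echoes the message back, as described in the notes above."""

    def complete(self, system_prompt: str, message: str) -> str:
        return f"(stub) {message}"


class PlatformService:
    """Orchestrator: resolves the character, then asks the LLM for a reply."""

    def __init__(self, characters: CharacterService, llm: LLMService):
        self._characters = characters
        self._llm = llm

    def chat(self, character_id: str, message: str) -> str:
        character = self._characters.get(character_id)
        return self._llm.complete(character.system_prompt, message)


coach = Character(id="coach", name="Coach", system_prompt="You are a supportive coach.")
platform = PlatformService(CharacterService({"coach": coach}), LLMService())
```

The point is the shape: routes talk only to `PlatformService`, which delegates to the focused services underneath.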
## Configuration

Environment variables (via `backend/.env`):

- `APP_NAME`, `APP_VERSION`, `ENVIRONMENT`
- `HOST`, `PORT`, `RELOAD`
- `GROQ_API_KEY` (optional; omit to use stub mode)

Optional: set `BACKEND_BASE_URL` when running `backend/api_demo.py` against a non‑default host/port.
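The repo's `core/config.py` uses Pydantic settings; purely to illustrate how the variables above might map to typed values, here is a stdlib-only sketch (the default values are assumptions, not the project's actual defaults):

```python
import os


def load_settings(env=None):
    """Read the variables listed above into a typed dict.

    Defaults here are illustrative; the real defaults live in core/config.py.
    """
    env = dict(os.environ if env is None else env)
    return {
        "app_name": env.get("APP_NAME", "AI Character Platform"),
        "app_version": env.get("APP_VERSION", "0.1.0"),
        "environment": env.get("ENVIRONMENT", "development"),
        "host": env.get("HOST", "127.0.0.1"),
        "port": int(env.get("PORT", "8000")),
        "reload": env.get("RELOAD", "false").lower() == "true",
        "groq_api_key": env.get("GROQ_API_KEY"),  # None -> stub mode
    }
```

Note how omitting `GROQ_API_KEY` leaves the value `None`, which is what triggers stub mode.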
## Testing & Coverage

```bash
pytest --cov=backend --cov-report=term-missing backend
```

The suite covers the API, services, and utilities. Stub mode ensures tests run offline.
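Because stub mode is deterministic and needs no network, tests can assert directly on echo replies. A hypothetical pytest-style sketch (`stub_reply` is an illustrative stand-in; the real suite lives under `backend/` and exercises the actual routes and services):

```python
def stub_reply(message: str) -> str:
    """Illustrative stand-in for the LLM stub's echo-style reply."""
    return f"echo: {message}"


def test_stub_reply_contains_message():
    assert "Hello" in stub_reply("Hello")


def test_stub_reply_is_deterministic():
    assert stub_reply("Hi") == stub_reply("Hi")
```

Offline determinism like this is what keeps the suite CI-friendly with no API key configured.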
## Roadmap

- Optional SQLite persistence for conversations/messages
- Rich retrieval + embeddings integration
- Streaming responses
- AuthN/AuthZ hardening
## Screenshots

Placeholder space for frontend integration shots. Add your UI captures here:

- Service health (REST client or Swagger UI)
- Chat workflow from the consuming frontend
## Contributing

Contributions are welcome! Please:

1. Create a feature branch:

   ```bash
   git checkout -b feat/short-name
   ```

2. Run the tests:

   ```bash
   pytest --cov=backend backend
   ```

3. Open a PR with a concise description, plus screenshots/logs when helpful.
## License

MIT