A modern AI chat web application built with Astro.js, Tailwind CSS, and Ollama AI
- 🚀 Astro.js - Modern web framework that ships zero JavaScript by default
- 🎨 Tailwind CSS - Utility-first CSS framework for rapid modern UI development
- 🌓 Dark Mode - Complete dark mode support with system preference detection
- 🤖 Multiple LLM Providers - Support for OpenAI, Anthropic Claude, Google Gemini, Ollama, and more
- ☁️ Cloud AI Integration - Seamless integration with major cloud LLM services
- 🏠 Local AI Support - Ollama and OpenLLM local deployment options
- 💬 Real-time Chat - Smooth AI conversation experience with streaming responses
- ⚙️ Config Management - JSON export/import, drag-drop upload, config backup & sharing
- 🎭 Rich Animations - Pulse, ripple, bounce and various interactive animation effects
- 🔄 Express.js API - Independent RESTful API server with multi-provider switching
- 📱 Responsive Design - Perfect adaptation for desktop and mobile devices
- ⚡ Fast Development - Hot-reload development experience with near-instant rebuilds via Vite
- 🛡️ Type Safety - Complete TypeScript support
- 🎯 Zero Configuration - Out-of-the-box development environment
- 🔒 Privacy First - Support for complete local deployment, data stays local
- Node.js 18+
- npm or yarn
- Ollama (for local AI models)
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd templ
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Install and start Ollama

   ```bash
   # Download and install Ollama (visit https://ollama.ai)
   # Start the Ollama service
   ollama serve
   # Download a model in a new terminal
   ollama pull llama2
   ```

4. Start the development server

   ```bash
   npm run dev
   ```

5. Open your browser

   Visit http://localhost:4321 to get started!
```
📦 templ/
├── 📂 public/              # Static assets
│   └── favicon.svg
├── 📂 src/
│   ├── 📂 components/      # React components
│   │   ├── Dashboard.tsx   # Main dashboard component
│   │   └── ui/             # shadcn/ui components
│   ├── 📂 lib/             # Utility libraries
│   │   ├── config.ts       # Application configuration
│   │   ├── ollama.ts       # Ollama API wrapper
│   │   └── utils.ts        # Utility functions
│   ├── 📂 routes/          # Express routes
│   │   ├── chat.ts         # Chat routes
│   │   └── models.ts       # Model management routes
│   ├── 📂 pages/           # Astro page routes
│   │   ├── 📂 api/         # Astro API endpoints
│   │   │   ├── chat.ts     # Chat API
│   │   │   └── models.ts   # Models list API
│   │   └── index.astro     # Homepage
│   ├── 📂 styles/          # Global styles
│   │   └── globals.css     # Global CSS
│   └── server.ts           # Express server
├── 📂 docs/                # Project documentation
│   ├── 📂 integration/     # Integration guides
│   └── 📂 testing/         # Testing documentation
├── .env.example            # Environment variables example
├── astro.config.mjs        # Astro configuration
├── tailwind.config.mjs     # Tailwind configuration
└── package.json            # Project dependencies
```
| Command | Description |
|---|---|
| `npm run dev` | Start Astro development server (http://localhost:4321) |
| `npm run build` | Build production version to `dist/` |
| `npm run preview` | Preview the built website |
| `npm run server` | Start Express API server (http://localhost:3000) |
| `npm run server:dev` | Start Express in development mode (auto-restart) |
| `npm run server:watch` | Start Express with file watching (auto-restart on changes) |
This project supports two running modes:

**Astro only:**

```bash
npm run dev
```

Then visit http://localhost:4321.

**Astro + Express** - run in two separate terminal windows:

Terminal 1 - Astro Frontend:

```bash
npm run dev
```

Terminal 2 - Express Backend:

```bash
npm run server:dev
```

Then access:

- Astro Frontend: http://localhost:4321
- Express API: http://localhost:3000
- Health Check: http://localhost:3000/health
Copy `.env.example` to `.env` and configure:

```bash
# Express server port
PORT=3000

# CORS configuration
CORS_ORIGIN=http://localhost:4321

# Ollama service address
OLLAMA_HOST=http://localhost:11434
```

Using the app:

- Visit Homepage - View the project introduction and feature overview
- Enter Chat - Start chatting with AI models
- Select Model - Choose among AI models at the top of the chat interface
- Start Conversation - Type a message and press Enter or click the send button
The project supports all models installed via Ollama:
| Model | Size | Features | Download Command |
|---|---|---|---|
| llama2 | 3.8GB | General conversation model | ollama pull llama2 |
| codellama | 3.8GB | Code generation expert | ollama pull codellama |
| mistral | 4.1GB | Efficient multilingual model | ollama pull mistral |
| neural-chat | 4.1GB | Conversation optimized model | ollama pull neural-chat |
| starling-lm | 4.1GB | Instruction following model | ollama pull starling-lm |
💡 Tip: Models must be downloaded before first use; we recommend starting with `llama2`.
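The Size column above corresponds to the raw byte counts Ollama reports for installed models. A small helper can render those for display; `formatModelSize` below is a hypothetical utility, not part of the codebase:

```typescript
// Hypothetical helper (not part of the codebase): render the raw byte counts
// reported for Ollama models as human-readable sizes. Decimal (SI) units are
// used so that 3826793677 bytes matches the ~3.8GB listed for llama2 above.
export function formatModelSize(bytes: number): string {
  const units = ["B", "kB", "MB", "GB", "TB"];
  let value = bytes;
  let unit = 0;
  while (value >= 1000 && unit < units.length - 1) {
    value /= 1000;
    unit++;
  }
  return `${value.toFixed(1)} ${units[unit]}`;
}

console.log(formatModelSize(3826793677)); // "3.8 GB"
```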
These endpoints are integrated into the Astro application, suitable for simple SSR scenarios.
Get list of available Ollama models
Send message to AI model for conversation
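For illustration, here is a minimal sketch of what such a chat endpoint (e.g. `src/pages/api/chat.ts`) might look like. This is a hypothetical reconstruction, not the project's actual source; it assumes Ollama's standard non-streaming `/api/chat` endpoint, and Astro's `APIRoute` typing is omitted so the handler stays framework-agnostic:

```typescript
// Hypothetical sketch of a chat endpoint handler (not the project's actual code).
// In the real file, POST would be typed as an Astro APIRoute.
const OLLAMA_HOST = "http://localhost:11434"; // in practice, read from config/env

export async function POST({ request }: { request: Request }): Promise<Response> {
  const { message, model = "llama2" } = await request.json();

  // Reject empty input, mirroring the error shape documented for the API.
  if (typeof message !== "string" || message.trim() === "") {
    return new Response(
      JSON.stringify({ success: false, error: "Message content cannot be empty" }),
      { status: 400, headers: { "Content-Type": "application/json" } },
    );
  }

  // Forward the message to Ollama's chat endpoint (non-streaming).
  const res = await fetch(`${OLLAMA_HOST}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: message }],
      stream: false,
    }),
  });
  const data = await res.json();

  return new Response(
    JSON.stringify({ success: true, data: data.message?.content, model }),
    { headers: { "Content-Type": "application/json" } },
  );
}
```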
Independent RESTful API server providing more powerful features and streaming response support.
Health check endpoint

Response:

```json
{
  "status": "ok",
  "timestamp": "2025-10-13T12:00:00.000Z",
  "uptime": 3600.5
}
```

Get list of available Ollama models
Response Example:

```json
{
  "success": true,
  "models": [
    {
      "name": "llama2:latest",
      "size": 3826793677,
      "digest": "sha256:...",
      "modified_at": "2024-01-15T12:00:00Z"
    }
  ],
  "count": 1
}
```

Send message to AI model for conversation (non-streaming)
Request Body:

```json
{
  "message": "Explain what Astro.js is",
  "model": "llama2"
}
```

Response Example:
```json
{
  "success": true,
  "data": "Astro.js is a modern static site generator that uses Islands Architecture...",
  "model": "llama2"
}
```

Error Response:
```json
{
  "success": false,
  "error": "Message content cannot be empty"
}
```

Send message to AI model for conversation (streaming response)
Request Body:

```json
{
  "message": "Write a poem about spring",
  "model": "llama2"
}
```

Response Format: Server-Sent Events (SSE)

```
data: {"content":"Spring"}
data: {" content":" is"}
data: {"content":" here"}
data: [DONE]
```
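A client can consume this stream with `fetch` and a reader; the chunk-parsing step is sketched below as a hypothetical helper (not part of the codebase), assuming the `data:`-prefixed format shown above:

```typescript
// Hypothetical helper: extract content tokens from a chunk of raw SSE text.
// Each event line looks like `data: {"content":"..."}`, and `data: [DONE]`
// marks the end of the stream.
export function parseSSEChunk(chunk: string): { tokens: string[]; done: boolean } {
  const tokens: string[] = [];
  let done = false;
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // ignore blank/comment lines
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") {
      done = true;
      break;
    }
    tokens.push(JSON.parse(payload).content);
  }
  return { tokens, done };
}
```

Note that a real client must also buffer partial lines, since a network chunk can end mid-event; this sketch assumes each chunk contains whole lines.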
📚 Complete API Documentation: See Express API Documentation for more details
Customize Ollama settings in src/lib/config.ts:
export const OLLAMA_CONFIG = {
HOST: 'http://localhost:11434', // Ollama server address
DEFAULT_MODEL: 'llama2', // Default model
REQUEST_TIMEOUT: 30000, // Request timeout (30 seconds)
// Supported models list
FALLBACK_MODELS: [
'llama2', 'codellama', 'mistral',
'neural-chat', 'starling-lm'
],
// API endpoint configuration
ENDPOINTS: {
HEALTH: '/api/version',
MODELS: '/api/tags',
CHAT: '/api/chat'
}
};Create .env.local file for personalized configuration:
```bash
# Ollama server address (optional)
OLLAMA_HOST=http://localhost:11434

# Default model (optional)
DEFAULT_MODEL=llama2

# Request timeout (optional)
REQUEST_TIMEOUT=30000
```

Modify the style theme in `tailwind.config.mjs`:
```javascript
export default {
  content: ['./src/**/*.{astro,html,js,jsx,md,mdx,ts,tsx}'],
  theme: {
    extend: {
      colors: {
        primary: '#3B82F6',    // Custom primary color
        secondary: '#10B981',  // Custom secondary color
      }
    },
  },
  plugins: [],
}
```

❌ Ollama Service Connection Failed
Symptoms: The chat interface shows "Connection failed" and the model list cannot be loaded
Solutions:
1. Check the Ollama service status

   ```bash
   ollama serve
   ```

2. Verify the service port (default 11434)

   ```bash
   curl http://localhost:11434/api/version
   ```

3. Check firewall settings and ensure the port is accessible

4. Confirm models are downloaded

   ```bash
   ollama list
   ```
🐌 Slow Model Response
Possible Causes and Solutions:
- Insufficient Memory: Ensure system has enough memory (recommended 8GB+)
- Model Too Large: Try a smaller model (`llama2:7b` vs. `llama2:70b`)
- High CPU Load: Close other CPU-intensive programs
- Disk I/O: Ensure models are stored on SSD
Performance Optimization Tips:
```bash
# Use quantized models (smaller, with similar quality)
ollama pull llama2:7b-q4_0

# Set a concurrency limit
export OLLAMA_NUM_PARALLEL=1
```

🚫 Build Errors
Common Issues:
- Node.js Version: Ensure using Node.js 18+
- Dependency Conflicts: Delete `node_modules` and `package-lock.json`, then reinstall
- TypeScript Errors: Run `npm run astro check` to check types
```bash
# Clean and reinstall
rm -rf node_modules package-lock.json
npm install

# Check Node.js version
node --version  # Should be >= 18.0.0
```

🌐 Port Already in Use
If the default port 4321 is occupied:

```bash
# Start with a different port
npm run dev -- --port 3000
```

Or modify `astro.config.mjs`:

```javascript
export default defineConfig({
  server: { port: 3000 },
  integrations: [tailwind()],
});
```
- Astro.js `^5.14.3` - Modern static site generator
- Tailwind CSS `^3.4.18` - Utility-first CSS framework
- TypeScript - Type-safe JavaScript superset
- Ollama `^0.6.0` - Local large language model runtime
- Express.js `^5.1.0` - Fast, minimalist web framework
- React `^19.0.0` - UI component library
- shadcn/ui - Re-usable components built with Radix UI
- @astrojs/check - Astro project type checking
- @astrojs/react - Astro React integration
- @astrojs/tailwind - Astro Tailwind CSS integration
- tsx - TypeScript execution environment
- nodemon - Auto-restart on file changes
- Vite - Fast frontend build tool (built into Astro)
- ✅ Basic architecture completed
- ✅ Ollama API integration completed
- ✅ Chat interface development completed
- ✅ Responsive design completed
- ✅ Error handling completed
- ✅ TypeScript support completed
- ✅ Streaming response support completed
- ✅ Express.js API server completed
- ✅ React + shadcn/ui integration completed
Contributions are welcome! Please follow these steps:
1. Fork the project
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
- Write code in TypeScript
- Follow ESLint and Prettier rules
- Add appropriate comments and documentation
- Ensure all tests pass
This project is licensed under the MIT License - see the LICENSE file for details
- Astro Team - Excellent static site generator
- Tailwind Labs - Elegant CSS framework
- Ollama Community - Making local AI simple
- All open-source contributors ❤️
⭐ Star Us • 🐛 Report Issues • 💡 Feature Requests
Made with ❤️ by Your Name