Gridsfeed/Steady-Research-Pro


πŸš€ Astro + Tailwind CSS + Ollama AI


A modern AI chat web application built with Astro.js, Tailwind CSS, and Ollama AI

Live Demo β€’ Quick Start β€’ Documentation

✨ Features

  • πŸš€ Astro.js - Modern static site generator with zero JavaScript runtime
  • 🎨 Tailwind CSS - Utility-first CSS framework for rapid modern UI development
  • πŸŒ“ Dark Mode - Complete dark mode support with system preference detection
  • πŸ€– Multi-LLM Providers - Support for OpenAI, Anthropic Claude, Google Gemini, Ollama, and more
  • ☁️ Cloud AI Integration - Seamless integration with major cloud LLM services
  • 🏠 Local AI Support - Ollama and OpenLLM local deployment options
  • πŸ’¬ Real-time Chat - Smooth AI conversation experience with streaming responses
  • βš™οΈ Config Management - JSON export/import, drag-drop upload, config backup & sharing
  • 🎭 Rich Animations - Pulse, ripple, bounce and various interactive animation effects
  • πŸ”„ Express.js API - Independent RESTful API server with multi-provider switching
  • πŸ“± Responsive Design - Perfect adaptation for desktop and mobile devices
  • ⚑ Fast Development - Hot-reload development experience with near-instant rebuilds
  • πŸ›‘οΈ Type Safety - Complete TypeScript support
  • 🎯 Zero Configuration - Out-of-the-box development environment
  • πŸ”’ Privacy First - Support for complete local deployment, data stays local

πŸš€ Quick Start

Prerequisites

  • Node.js 18+
  • npm or yarn
  • Ollama (for local AI models)

Installation Steps

  1. Clone the repository

    git clone <repository-url>
    cd templ
  2. Install dependencies

    npm install
  3. Install and start Ollama

    # Download and install Ollama (visit https://ollama.ai)
    # Start Ollama service
    ollama serve
    
    # Download models in a new terminal
    ollama pull llama2
  4. Start development server

    npm run dev
  5. Open browser

    Visit http://localhost:4321 to get started!
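Once Ollama is running, you can sanity-check the connection from Node before starting the app. This is a minimal sketch assuming Node 18+ (which provides a global fetch); the script path and the isOllamaHealthy helper are hypothetical, not part of the project:

```typescript
// Hypothetical standalone check (e.g. scripts/check-ollama.ts), run with: npx tsx scripts/check-ollama.ts
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://localhost:11434";

// Pure helper: does a /api/version payload look like a healthy Ollama response?
export function isOllamaHealthy(payload: unknown): boolean {
  return (
    typeof payload === "object" &&
    payload !== null &&
    typeof (payload as { version?: unknown }).version === "string"
  );
}

async function main(): Promise<void> {
  try {
    const res = await fetch(`${OLLAMA_HOST}/api/version`);
    const body: unknown = await res.json();
    console.log(
      isOllamaHealthy(body)
        ? `Ollama ${(body as { version: string }).version} is up`
        : "Ollama responded, but with an unexpected payload"
    );
  } catch {
    console.error(`Could not reach Ollama at ${OLLAMA_HOST} - is "ollama serve" running?`);
  }
}

main();
```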

πŸ“ Project Structure

πŸ“¦ templ/
β”œβ”€β”€ πŸ“‚ public/                     # Static assets
β”‚   └── favicon.svg
β”œβ”€β”€ πŸ“‚ src/
β”‚   β”œβ”€β”€ πŸ“‚ components/             # React components
β”‚   β”‚   β”œβ”€β”€ Dashboard.tsx          # Main dashboard component
β”‚   β”‚   └── ui/                    # shadcn/ui components
β”‚   β”œβ”€β”€ πŸ“‚ lib/                    # Utility libraries
β”‚   β”‚   β”œβ”€β”€ config.ts              # Application configuration
β”‚   β”‚   β”œβ”€β”€ ollama.ts              # Ollama API wrapper
β”‚   β”‚   └── utils.ts               # Utility functions
β”‚   β”œβ”€β”€ πŸ“‚ routes/                 # Express routes
β”‚   β”‚   β”œβ”€β”€ chat.ts                # Chat routes
β”‚   β”‚   └── models.ts              # Model management routes
β”‚   β”œβ”€β”€ πŸ“‚ pages/                  # Astro page routes
β”‚   β”‚   β”œβ”€β”€ πŸ“‚ api/                # Astro API endpoints
β”‚   β”‚   β”‚   β”œβ”€β”€ chat.ts            # Chat API
β”‚   β”‚   β”‚   └── models.ts          # Models list API
β”‚   β”‚   └── index.astro            # Homepage
β”‚   β”œβ”€β”€ πŸ“‚ styles/                 # Global styles
β”‚   β”‚   └── globals.css            # Global CSS
β”‚   └── server.ts                  # Express server
β”œβ”€β”€ πŸ“‚ docs/                       # Project documentation
β”‚   β”œβ”€β”€ πŸ“‚ integration/            # Integration guides
β”‚   └── πŸ“‚ testing/                # Testing documentation
β”œβ”€β”€ .env.example                   # Environment variables example
β”œβ”€β”€ astro.config.mjs               # Astro configuration
β”œβ”€β”€ tailwind.config.mjs            # Tailwind configuration
└── package.json                   # Project dependencies

🎯 Usage Guide

Development Commands

| Command | Description |
| --- | --- |
| npm run dev | Start Astro development server (http://localhost:4321) |
| npm run build | Build production version to dist/ |
| npm run preview | Preview the built website |
| npm run server | Start Express API server (http://localhost:3000) |
| npm run server:dev | Start Express in development mode (auto-restart) |
| npm run server:watch | Start Express with file watching (auto-restart on changes) |

Dual Server Architecture

This project supports two running modes:

1. Astro Only (Using Astro API Routes)

npm run dev

Visit http://localhost:4321

2. Astro + Express (Recommended)

Run in two separate terminal windows:

Terminal 1 - Astro Frontend:

npm run dev

Terminal 2 - Express Backend:

npm run server:dev

Then access the frontend at http://localhost:4321 and the API at http://localhost:3000.

Environment Configuration

Copy .env.example to .env and configure:

# Express server port
PORT=3000

# CORS configuration
CORS_ORIGIN=http://localhost:4321

# Ollama service address
OLLAMA_HOST=http://localhost:11434

Feature Usage

  1. Visit Homepage - View project introduction and feature overview
  2. Enter Chat - Start chatting with AI models
  3. Select Model - Choose different AI models at the top of chat interface
  4. Start Conversation - Enter message and press Enter or click send button

Supported AI Models

The project supports all models installed via Ollama:

| Model | Size | Features | Download Command |
| --- | --- | --- | --- |
| llama2 | 3.8GB | General conversation model | ollama pull llama2 |
| codellama | 3.8GB | Code generation expert | ollama pull codellama |
| mistral | 4.1GB | Efficient multilingual model | ollama pull mistral |
| neural-chat | 4.1GB | Conversation-optimized model | ollama pull neural-chat |
| starling-lm | 4.1GB | Instruction-following model | ollama pull starling-lm |

πŸ’‘ Tip: First-time use requires downloading a model; we recommend starting with llama2

πŸ“‘ API Endpoints

Astro API Routes (Port 4321)

These endpoints are integrated into the Astro application, suitable for simple SSR scenarios.

GET /api/models

Get list of available Ollama models

POST /api/chat

Send message to AI model for conversation

Express API Server (Port 3000)

Independent RESTful API server providing more powerful features and streaming response support.

GET /health

Health check endpoint

Response:

{
  "status": "ok",
  "timestamp": "2025-10-13T12:00:00.000Z",
  "uptime": 3600.5
}

GET /api/models

Get list of available Ollama models

Response Example:

{
  "success": true,
  "models": [
    {
      "name": "llama2:latest",
      "size": 3826793677,
      "digest": "sha256:...",
      "modified_at": "2024-01-15T12:00:00Z"
    }
  ],
  "count": 1
}
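The size field above is reported in bytes. A small display helper (hypothetical, not part of the project) can render it in the GB units used by the model table:

```typescript
// Hypothetical display helper: Ollama reports model sizes in bytes.
export function formatModelSize(bytes: number): string {
  // Decimal gigabytes, one decimal place, matching the sizes in the model table.
  return (bytes / 1e9).toFixed(1) + " GB";
}
```

For example, the size 3826793677 from the response above renders as 3.8 GB, matching the llama2 entry in the model table.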

POST /api/chat

Send message to AI model for conversation (non-streaming)

Request Body:

{
  "message": "Explain what Astro.js is",
  "model": "llama2"
}

Response Example:

{
  "success": true,
  "data": "Astro.js is a modern static site generator that uses Islands Architecture...",
  "model": "llama2"
}

Error Response:

{
  "success": false,
  "error": "Message content cannot be empty"
}

POST /api/chat/stream

Send message to AI model for conversation (streaming response)

Request Body:

{
  "message": "Write a poem about spring",
  "model": "llama2"
}

Response Format: Server-Sent Events (SSE)

data: {"content":"Spring"}
data: {"content":" is"}
data: {"content":" here"}
data: [DONE]
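A client can consume this stream with fetch and TextDecoderStream (Node 18+ or a modern browser). The parseSseLine helper and streamChat function below are a hypothetical sketch matching the data: framing shown above:

```typescript
// Hypothetical streaming client for the SSE endpoint documented above.

// Pure helper: interpret one line of the "data: ..." framing shown above.
export function parseSseLine(line: string): { done: boolean; content: string } {
  if (!line.startsWith("data: ")) return { done: false, content: "" };
  const payload = line.slice("data: ".length).trim();
  if (payload === "") return { done: false, content: "" };
  if (payload === "[DONE]") return { done: true, content: "" };
  return { done: false, content: JSON.parse(payload).content ?? "" };
}

export async function streamChat(message: string, model = "llama2"): Promise<void> {
  const res = await fetch("http://localhost:3000/api/chat/stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, model }),
  });
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      const event = parseSseLine(line);
      if (event.done) return;
      process.stdout.write(event.content);
    }
  }
}
```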

πŸ“š Complete API Documentation: See Express API Documentation for more details

βš™οΈ Configuration

Ollama Configuration

Customize Ollama settings in src/lib/config.ts:

export const OLLAMA_CONFIG = {
  HOST: 'http://localhost:11434',     // Ollama server address
  DEFAULT_MODEL: 'llama2',            // Default model
  REQUEST_TIMEOUT: 30000,             // Request timeout (30 seconds)
  
  // Supported models list
  FALLBACK_MODELS: [
    'llama2', 'codellama', 'mistral', 
    'neural-chat', 'starling-lm'
  ],
  
  // API endpoint configuration
  ENDPOINTS: {
    HEALTH: '/api/version',
    MODELS: '/api/tags', 
    CHAT: '/api/chat'
  }
};
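A small helper can assemble full request URLs from this config, keeping endpoint strings in one place. The ollamaUrl function is a hypothetical addition built on the OLLAMA_CONFIG shape above:

```typescript
// Mirrors the OLLAMA_CONFIG shape shown above; ollamaUrl is a hypothetical helper.
const OLLAMA_CONFIG = {
  HOST: "http://localhost:11434",
  ENDPOINTS: {
    HEALTH: "/api/version",
    MODELS: "/api/tags",
    CHAT: "/api/chat",
  },
} as const;

export function ollamaUrl(endpoint: keyof typeof OLLAMA_CONFIG.ENDPOINTS): string {
  return OLLAMA_CONFIG.HOST + OLLAMA_CONFIG.ENDPOINTS[endpoint];
}
```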

Environment Variables

Create .env.local file for personalized configuration:

# Ollama server address (optional)
OLLAMA_HOST=http://localhost:11434

# Default model (optional)
DEFAULT_MODEL=llama2

# Request timeout (optional)
REQUEST_TIMEOUT=30000
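When reading these variables at startup, numeric values arrive as strings and may be unset. A defensive parser sketch (the envInt helper name is hypothetical) might look like:

```typescript
// Hypothetical helper: read a numeric environment variable with a fallback.
export function envInt(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env
): number {
  const raw = env[name];
  const parsed = raw === undefined ? Number.NaN : Number.parseInt(raw, 10);
  return Number.isFinite(parsed) ? parsed : fallback;
}

// Usage mirroring the variables above:
const requestTimeout = envInt("REQUEST_TIMEOUT", 30000);
const ollamaHost = process.env.OLLAMA_HOST ?? "http://localhost:11434";
console.log({ ollamaHost, requestTimeout });
```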

Tailwind CSS Customization

Modify style theme in tailwind.config.mjs:

export default {
  content: ['./src/**/*.{astro,html,js,jsx,md,mdx,ts,tsx}'],
  theme: {
    extend: {
      colors: {
        primary: '#3B82F6',    // Custom primary color
        secondary: '#10B981',   // Custom secondary color
      }
    },
  },
  plugins: [],
}

πŸ”§ Troubleshooting

❌ Ollama Service Connection Failed

Symptoms: Chat interface shows "Connection failed", cannot get model list

Solutions:

  1. Check Ollama service status

    ollama serve
  2. Verify service port (default 11434)

    curl http://localhost:11434/api/version
  3. Check firewall settings, ensure port is accessible

  4. Confirm models are downloaded

    ollama list
🐌 Slow Model Response

Possible Causes and Solutions:

  • Insufficient Memory: Ensure system has enough memory (recommended 8GB+)
  • Model Too Large: Try smaller models (llama2:7b vs llama2:70b)
  • High CPU Load: Close other CPU-intensive programs
  • Disk I/O: Ensure models are stored on SSD

Performance Optimization Tips:

# Use quantized models (smaller but similar performance)
ollama pull llama2:7b-q4_0

# Set concurrency limit
export OLLAMA_NUM_PARALLEL=1
🚫 Build Errors

Common Issues:

  1. Node.js Version: Ensure using Node.js 18+
  2. Dependency Conflicts: Delete node_modules and package-lock.json, reinstall
  3. TypeScript Errors: Run npm run astro check to check types
# Clean and reinstall
rm -rf node_modules package-lock.json
npm install

# Check Node.js version
node --version  # Should be >= 18.0.0
🌐 Port Already in Use

If default port 4321 is occupied:

# Start with different port
npm run dev -- --port 3000

# Or modify astro.config.mjs
export default defineConfig({
  server: { port: 3000 },
  integrations: [tailwind()],
});

πŸ› οΈ Tech Stack

Astro
Astro
Tailwind
Tailwind
TypeScript
TypeScript
Ollama
Ollama

Core Technologies

  • Astro.js ^5.14.3 - Modern static site generator
  • Tailwind CSS ^3.4.18 - Utility-first CSS framework
  • TypeScript - Type-safe JavaScript superset
  • Ollama ^0.6.0 - Local large language model runtime
  • Express.js ^5.1.0 - Fast, minimalist web framework
  • React ^19.0.0 - UI component library
  • shadcn/ui - Re-usable components built with Radix UI

Development Tools

  • @astrojs/check - Astro project type checking
  • @astrojs/react - Astro React integration
  • @astrojs/tailwind - Astro Tailwind CSS integration
  • tsx - TypeScript execution environment
  • nodemon - Auto-restart on file changes
  • Vite - Fast frontend build tool (built into Astro)

πŸ“Š Project Status

  • βœ… Basic architecture completed
  • βœ… Ollama API integration completed
  • βœ… Chat interface development completed
  • βœ… Responsive design completed
  • βœ… Error handling completed
  • βœ… TypeScript support completed
  • βœ… Streaming response support completed
  • βœ… Express.js API server completed
  • βœ… React + shadcn/ui integration completed

🀝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the project
  2. Create feature branch (git checkout -b feature/AmazingFeature)
  3. Commit changes (git commit -m 'Add some AmazingFeature')
  4. Push to branch (git push origin feature/AmazingFeature)
  5. Open Pull Request

Development Guidelines

  • Write code in TypeScript
  • Follow ESLint and Prettier rules
  • Add appropriate comments and documentation
  • Ensure all tests pass

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details

πŸ™ Acknowledgments


⭐ Star Us β€’ πŸ› Report Issues β€’ πŸ’‘ Feature Requests

Made with ❀️ by Your Name
