Groq Chatbot Full-Stack Application

A modern full-stack application for interacting with Groq AI models through a clean, responsive chat interface.

🌟 Features

  • Interactive Chat Interface: User-friendly chat UI similar to ChatGPT and Claude
  • Multiple AI Models: Support for several Groq-hosted models, including LLaMA 3.3, DeepSeek R1 Distill, and Qwen
  • Customizable Parameters: Adjust temperature and token length for different response styles
  • Streaming Support: Option for real-time streaming responses
  • Responsive Design: Works seamlessly on desktop and mobile devices
  • Dark Mode Support: Automatic dark/light theme based on system preferences

πŸ—οΈ Architecture

This project consists of two main components:

  1. Backend API: FastAPI server that proxies requests to the Groq API
  2. Frontend UI: React + TypeScript application with a modern chat interface

🚀 Getting Started

Prerequisites

  • Node.js (v16+)
  • Python (v3.8+)
  • Groq API key (obtain from Groq's website)

Installation

Backend Setup

  1. Clone the repository:

    git clone https://github.com/Abhay-Kanwasi/Groq-powered-Chatbot.git
    cd Groq-powered-Chatbot/backend
  2. Create a virtual environment and install dependencies:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    pip install -r requirements.txt
  3. Create a .env file with your Groq API key:

    GROQ_API_KEY=your_groq_api_key_here
    
  4. Start the backend server:

    uvicorn app:app --reload

The FastAPI backend will run on http://localhost:8000.

Frontend Setup

  1. Navigate to the frontend directory:

    cd ../frontend
  2. Install dependencies:

    npm install
  3. Start the development server:

    npm run dev

The React frontend will run on http://localhost:3000.

πŸ“ Project Structure

groq-chatbot/
├── backend/                 # FastAPI backend
│   ├── app.py               # Main FastAPI application
│   ├── requirements.txt     # Python dependencies
│   └── .env                 # Environment variables
│
├── frontend/                # React frontend
│   ├── src/
│   │   ├── api/             # API integration
│   │   ├── components/      # React components
│   │   ├── store/           # Zustand state management
│   │   ├── types.ts         # TypeScript types
│   │   ├── App.tsx          # Main application component
│   │   └── main.tsx         # Entry point
│   ├── package.json         # Node.js dependencies
│   ├── vite.config.ts       # Vite configuration
│   └── tsconfig.json        # TypeScript configuration
│
└── README.md                # Project documentation

💻 API Endpoints

Backend FastAPI Endpoints

Endpoint   Method   Description
/chat      POST     Send a chat message and get an AI response
/models    GET      Get a list of available AI models
/health    GET      Health check endpoint
Frontend API Integration

The frontend communicates with the backend using Axios for HTTP requests and React Query for data fetching/caching.

🧩 Technologies Used

Backend

  • FastAPI: High-performance web framework for building APIs
  • httpx: Asynchronous HTTP client for Python
  • python-dotenv: Environment variable management

Frontend

  • React: UI library for building user interfaces
  • TypeScript: Typed JavaScript for better developer experience
  • Vite: Next-generation frontend tooling
  • Zustand: Lightweight state management
  • React Query: Data fetching and caching library
  • Tailwind CSS: Utility-first CSS framework
  • shadcn/ui: Reusable UI components
  • Lucide React: Beautiful icons

βš™οΈ Configuration Options

Backend Configuration

  • GROQ_API_KEY: Your Groq API key
  • GROQ_ORGANIZATION_ID: Your Groq organization ID
  • Available models are defined in the MODELS dictionary in app.py
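A plausible shape for that MODELS dictionary, using the model IDs exposed in the settings panel, might be (the display names here are illustrative; the real mapping in app.py may differ):

```python
# Hypothetical MODELS dictionary mapping Groq model IDs to display names.
# IDs match the settings panel; display names are illustrative assumptions.
MODELS = {
    "deepseek-r1-distill-llama-70b": "DeepSeek R1 Distill LLaMA 70B",
    "llama-3.3-70b-versatile": "LLaMA 3.3 70B Versatile",
    "qwen-qwq-32b": "Qwen QwQ 32B",
    "qwen-2.5-coder-32b": "Qwen 2.5 Coder 32B",
}
```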

Frontend Configuration

The chat settings can be adjusted via the settings panel:

  • Model: Select which Groq model to use (available models: deepseek-r1-distill-llama-70b, llama-3.3-70b-versatile, qwen-qwq-32b, qwen-2.5-coder-32b)
  • Temperature: Adjust from 0.0 (more deterministic) to 1.0 (more creative)
  • Max Tokens: Set the maximum length of responses (100-2000)
  • Stream: Toggle streaming mode for real-time responses
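Server-side, the parameter bounds above could be enforced with a small helper like the following (a hypothetical function; the real validation may live in the backend's request model instead):

```python
# Clamp user-supplied settings into the ranges the settings panel exposes:
# temperature 0.0-1.0, max tokens 100-2000.
def clamp_settings(temperature: float, max_tokens: int) -> dict:
    """Return settings clamped to the UI's documented bounds."""
    return {
        "temperature": min(max(temperature, 0.0), 1.0),
        "max_tokens": min(max(max_tokens, 100), 2000),
    }
```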

🔄 Development Workflow

  1. Make changes to the backend or frontend code
  2. Backend changes will automatically reload with uvicorn's --reload flag
  3. Frontend changes will automatically reload with Vite's hot module replacement

πŸ› οΈ Building for Production

Backend

cd backend
pip install -r requirements.txt

Run with a production ASGI server, such as Gunicorn with Uvicorn workers:

gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker

Frontend

cd frontend
npm run build

This creates a dist directory with production-ready static files that can be served by any static file server.

🔒 Security Considerations

  • The backend should validate all inputs
  • In production, configure CORS properly to restrict access
  • Don't expose your Groq API key in client-side code
  • Consider adding rate limiting for the chat endpoint

👥 Contributions

Contributions are welcome! Please feel free to submit a Pull Request.
