🧠 CONVOPRO – A ChatGPT Clone (AI Agent Powered by Ollama + Python + MongoDB)

CONVOPRO is a fully functional ChatGPT-style conversational AI application built with:

  • Ollama for running LLMs locally
  • Python for backend logic
  • Streamlit for the UI
  • MongoDB for storing conversations

Users can choose from multiple models, start new chats, continue previous chats from the sidebar, and store complete conversation history, just like the real ChatGPT interface.


📌 Project Description

CONVOPRO is my first AI agent system, created to replicate the essential features of ChatGPT using locally hosted LLMs through Ollama. It supports:

✔️ Switching between different LLM models
✔️ Real-time streaming responses
✔️ Automatic chat title generation using an LLM
✔️ Storing message history in MongoDB
✔️ Sidebar displaying previous chats
✔️ Clean and lightweight Streamlit UI

This project demonstrates how to build a full-stack AI chatbot that manages conversations, memory, chat titles, and selection among multiple models.


🧰 Tech Stack

Layer                  Technologies
LLM Engine             Ollama (LLaMA 3, Mistral, Phi, etc.)
Backend                Python
Database               MongoDB
Frontend               Streamlit
Environment Handling   python-dotenv
Utilities & Helpers    Custom-built Python modules

🔍 About Ollama

Ollama is a local LLM hosting engine that allows you to run models like:

  • LLaMA 3
  • Mistral
  • Gemma
  • Phi
  • Many others

Using Ollama enables:

  • Offline LLM inference
  • Faster response times
  • Data privacy
  • Customizable models

CONVOPRO uses Ollama through a helper module (llm_factory/get_llm.py) to load and stream responses from the selected model.
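As an illustration, such a streaming helper might look like the sketch below. This is hypothetical: the function names and the use of the official `ollama` Python client are assumptions, not taken from the repo.

```python
# Hypothetical sketch of llm_factory/get_llm.py. Assumes the official
# `ollama` Python client; the repo's actual implementation may differ.

def build_messages(history):
    """Convert (role, content) pairs into the ChatGPT-style message schema."""
    return [{"role": role, "content": content} for role, content in history]

def stream_chat(model, history):
    """Yield response text chunks from a locally hosted Ollama model."""
    import ollama  # lazy import: calling this requires a running Ollama server
    messages = build_messages(history)
    for chunk in ollama.chat(model=model, messages=messages, stream=True):
        yield chunk["message"]["content"]
```

Streaming chunk by chunk is what lets the UI render the reply as it is generated instead of waiting for the full response.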


🟢 About MongoDB

MongoDB is used to store:

  • Each conversation
  • User messages
  • Assistant messages
  • Chat titles
  • Timestamps

The file conversations.py handles all MongoDB operations:

  • Create a new chat
  • Fetch conversation by ID
  • Add messages
  • View all chats in sidebar

This ensures persistent and structured conversation history, just like ChatGPT.
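A minimal sketch of what those MongoDB operations could look like with pymongo follows. The collection name and document shape are illustrative assumptions, not taken from the repo.

```python
# Hypothetical sketch of db/conversations.py. The document shape below
# (title, messages array, timestamps) is an assumption for illustration.
from datetime import datetime, timezone

def make_message(role, content):
    """Build one message document as stored inside a conversation."""
    return {"role": role, "content": content,
            "timestamp": datetime.now(timezone.utc)}

def create_chat(collection, title="New chat"):
    """Insert an empty conversation document and return its id as a string."""
    doc = {"title": title, "messages": [],
           "created_at": datetime.now(timezone.utc)}
    return str(collection.insert_one(doc).inserted_id)

def add_message(collection, chat_id, role, content):
    """Append a message to an existing conversation ($push preserves order)."""
    from bson import ObjectId  # ships with pymongo
    collection.update_one({"_id": ObjectId(chat_id)},
                          {"$push": {"messages": make_message(role, content)}})
```

Embedding the messages array inside each conversation document keeps one chat's full history retrievable with a single query.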


⚙️ How the Project Works (High-Level Flow)

1️⃣ User selects an LLM model

The model list is dynamically fetched from Ollama using:

services/get_models_list.py
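A sketch of how that fetch might work is shown below. The response shape assumed here (a dict with a "models" list of {"name": ...} entries) matches older versions of the `ollama` Python client; the repo's actual code may differ.

```python
# Hypothetical sketch of services/get_models_list.py.

def extract_model_names(listing):
    """Pull model names out of a {"models": [{"name": ...}, ...]} listing."""
    return [m["name"] for m in listing.get("models", [])]

def get_models_list():
    import ollama  # lazy import: calling this requires a running Ollama server
    return extract_model_names(ollama.list())
```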

2️⃣ A new chat window is created

A new chat ID is created and stored in MongoDB:

db/conversations.py

3️⃣ User sends a message

Message is stored in MongoDB and sent to the LLM:

services/chat_utilities.py

4️⃣ Assistant responds

Response is generated by Ollama using:

llm_factory/get_llm.py

Then saved in MongoDB.
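Steps 3 and 4 together form a store-then-stream loop. A backend-agnostic sketch is below; the function names are illustrative, and the database writer and LLM streamer are injected as callables rather than taken from the repo.

```python
def run_turn(save_message, stream_llm, user_text, history):
    """One chat turn: persist the user message, stream the reply chunk by
    chunk, persist the full reply, and return it.

    save_message(role, content) -- writes one message to MongoDB
    stream_llm(messages)        -- yields response chunks from Ollama
    Both are injected so this sketch stays independent of the backends.
    """
    save_message("user", user_text)
    messages = history + [{"role": "user", "content": user_text}]
    parts = []
    for chunk in stream_llm(messages):
        parts.append(chunk)  # in the UI, each chunk would render live
    reply = "".join(parts)
    save_message("assistant", reply)
    return reply
```

Saving the user message before the model responds means the conversation survives even if generation is interrupted.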

5️⃣ Title is generated automatically

After the first message, the title is created using:

services/get_title.py
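One plausible shape for that title generator is sketched here; the prompt wording and helper names are illustrative assumptions.

```python
# Hypothetical sketch of services/get_title.py.

def clean_title(raw, max_len=40):
    """Strip surrounding quotes/whitespace from an LLM title and cap length."""
    title = raw.strip().strip('"').strip()
    return title[:max_len].rstrip()

def generate_title(llm_complete, first_message):
    """Ask the LLM for a short chat title; llm_complete(prompt) returns text."""
    prompt = ("Generate a short title (five words or fewer) for a "
              f"conversation that starts with: {first_message}")
    return clean_title(llm_complete(prompt))
```

The cleanup step matters in practice because models often wrap titles in quotes or add trailing whitespace.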

6️⃣ Sidebar loads previous chats

All stored chat titles are fetched via MongoDB.

7️⃣ Streamlit UI renders everything

In:

main.py

📁 Project File Structure

CONVOPRO
│── config/
│   ├── settings.py          # Manage environment variables
│
│── db/
│   ├── conversations.py     # All MongoDB operations
│   ├── mongo.py             # MongoDB connection
│
│── llm_factory/
│   ├── get_llm.py           # Initialize & configure Ollama models
│
│── services/
│   ├── chat_utilities.py    # LLM response generation logic
│   ├── get_models_list.py   # Fetch available models from Ollama
│   ├── get_title.py         # Auto-generate chat titles
│
│── main.py                  # Streamlit UI (chat interface)
│── requirements.txt         # Project dependencies
│── env_template.txt         # Env variable template
│── .gitignore
│── README.md
│── venv/ (ignored)

🖼️ Screenshots

⭐ Chat Interface

(screenshot)

⭐ Sidebar with Previous Chats

(screenshot)

⭐ Database View

(screenshot)

🚀 Getting Started

1️⃣ Install Dependencies

pip install -r requirements.txt

2️⃣ Start Ollama

Make sure you have Ollama installed:

https://ollama.com/download

Pull your desired model, e.g.:

ollama pull llama3

3️⃣ Add Environment Variables

Create a .env file based on env_template.txt:

MONGO_URI=your_mongo_connection_string
DB_NAME=convopro
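These variables are read at startup; a minimal sketch of how config/settings.py might consume them after python-dotenv has loaded .env (the defaults shown are illustrative assumptions):

```python
import os

def load_settings():
    """Read connection settings from the environment (python-dotenv is
    expected to have loaded .env already). Defaults are illustrative."""
    return {
        "mongo_uri": os.getenv("MONGO_URI", "mongodb://localhost:27017"),
        "db_name": os.getenv("DB_NAME", "convopro"),
    }
```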

4️⃣ Run the App

streamlit run main.py

📌 Future Enhancements

🔹 User authentication
🔹 Model settings (temperature, max tokens)
🔹 Voice input / audio responses
🔹 Export chat history
🔹 Image model support (LLaVA, Florence, etc.)


🤝 Contributing

Pull requests are welcome! Feel free to suggest improvements, new features, or create issues.
