Private, Offline AI Chatbot powered by Ollama + Streamlit + Whisper + LLaVA + RAG
LocalAI-Assistant is a fully private, offline AI assistant that runs entirely on your local machine. It integrates multiple advanced AI models, such as Llama, Mistral, DeepSeek, Phi, TinyLlama, and LLaVA, for a seamless, privacy-focused experience with no cloud dependency.
- Chat with LLMs (Offline) — Converse with AI models locally
- PDF Summarization & Document Q&A — Upload documents and interact with them using AI
- Voice Input & Output — Convert speech to text (Whisper) and text to speech (pyttsx3) — fully offline
- Image Analysis with LLaVA — Understand and analyze image content through AI-powered vision models
- Chat with Documents via RAG — Retrieval-Augmented Generation for querying custom knowledge bases
- Multi-Chat Memory Management — Auto-save chats with options to rename, delete, and restore from a recycle bin
- 100% Local & Private — No internet required, no data leaves your machine
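The "Chat with Documents via RAG" feature can be illustrated with a toy sketch: split a document into chunks, score each chunk against the question by word overlap, and prepend the best chunk to the prompt. This is a simplified stand-in for the project's LangChain pipeline; the function names here are illustrative, not from the codebase.

```python
# Toy illustration of Retrieval-Augmented Generation (RAG).
# The real app uses LangChain; this only shows the core idea:
# retrieve the most relevant chunk, then stuff it into the prompt.

def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split text into word-based chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks: list[str], question: str) -> str:
    """Return the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(context: str, question: str) -> str:
    """Assemble the final prompt sent to the local LLM."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

doc = ("Ollama runs large language models locally. "
       "Streamlit provides the web interface. "
       "Whisper converts speech to text offline.")
chunks = chunk_text(doc, size=8)
question = "How is speech converted to text?"
print(build_prompt(retrieve(chunks, question), question))
```

A production pipeline replaces the word-overlap scorer with embedding similarity over a vector store, but the retrieve-then-prompt shape is the same.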
| Component | Description |
|---|---|
| Ollama | Run LLMs locally (Supports Llama3, Mistral, DeepSeek, Phi, TinyLlama, LLaVA) |
| Streamlit | Web-based user interface for easy interaction |
| LangChain | Enables document-based RAG (Retrieval-Augmented Generation) |
| Whisper (Offline) | Speech-to-Text model for voice input |
| pyttsx3 (Offline) | Text-to-Speech for voice responses |
| LLaVA | Vision-Language model for AI-powered image analysis |
| Local JSON Storage | Chat history, knowledge base, and recycle bin management |
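All of these components talk to the same local Ollama server, which by default listens on port 11434 and exposes a small HTTP API. As a rough sketch of how a single chat turn reaches a model (the model name and prompt below are placeholders, and this standalone snippet is not the app's own code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model already pulled):
#   print(ask("llama3", "Say hello in five words."))
```

Setting `"stream": False` returns the full reply in one JSON object; the app's chat view would instead stream tokens for responsiveness.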
- Download and install Ollama from 👉 https://ollama.com/download
- Run Ollama in the background:

  ```
  ollama serve
  ```

- Clone the repository:

  ```
  git clone https://github.com/your-username/LocalAI-Assistant.git
  cd LocalAI-Assistant
  ```

- Create a virtual environment (recommended):

  ```
  python -m venv offenv
  # Activate it:
  offenv\Scripts\activate      # On Windows
  source offenv/bin/activate   # On Mac/Linux
  ```

- Install the dependencies:

  ```
  pip install -r requirements.txt
  ```

- Pull the models you want to use:

  ```
  ollama pull llama3
  ollama pull mistral
  ollama pull deepseek-coder
  ollama pull phi3
  ollama pull tinyllama
  ollama pull llava
  ```

- Run the app:

  ```
  streamlit run app.py
  ```

- Open http://localhost:8501 in your browser.
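Before launching Streamlit, you can confirm the Ollama server is actually reachable. A small helper sketch (not part of the project) that pings Ollama's `/api/tags` endpoint, which lists the models you have pulled:

```python
import json
import urllib.error
import urllib.request

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server answers on /api/tags."""
    try:
        with urllib.request.urlopen(base_url + "/api/tags", timeout=2) as resp:
            json.loads(resp.read())  # valid JSON => server is healthy
            return True
    except (urllib.error.URLError, OSError, ValueError):
        return False

if ollama_is_up():
    print("Ollama is running. Start the app with: streamlit run app.py")
else:
    print("Ollama is not reachable. Run `ollama serve` first.")
```

If the check fails, the usual cause is that `ollama serve` is not running or another process is occupying port 11434.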