OllamaMulti-RAG 🚀 is a multimodal AI chat app combining Whisper AI for audio, LLaVA for images, and Chroma DB for PDFs, enhanced with Ollama and OpenAI API. 📄 Built for AI enthusiasts, it welcomes contributions—features, bug fixes, or optimizations—to advance practical multimodal AI research and development collaboratively.


🤖 OllamaMulti-RAG: Neura-Nix

An AI-powered multimodal assistant that seamlessly understands Text, Images, PDFs, and Audio — built to boost productivity, empower businesses, and transform everyday workflows.

🌟 Introduction

Neura-Nix is a next-gen multimodal assistant that combines the power of Ollama, OpenAI, Whisper, and Redis into a single streamlined platform.
It enables natural and intelligent interaction across different mediums — chat with documents, analyze images, transcribe audio, and converse in real-time.

Built with Streamlit, Docker, and Redis caching, Neura-Nix is designed for speed, extensibility, and scalability.
Whether you’re an enterprise optimizing workflows or an individual boosting productivity, Neura-Nix adapts seamlessly.


🚀 Key Features

  • 📖 Document Chat (PDF RAG) → Upload PDFs and extract context-aware insights.
  • 🖼️ Image Analysis → Unlock hidden stories from visual data.
  • 🎤 Audio to Text → Record or upload audio and let Whisper transcribe in seconds.
  • 💬 Persistent Chat Sessions → Store, manage, and reload previous conversations.
  • ⚡ Redis Caching → Lightning-fast performance with session-level caching.
  • 🎯 Model Flexibility → Switch easily between Ollama and OpenAI models.
  • 🔐 Secure by Design → Environment variables, .env.sample, and Dockerized deployment.
  • 📊 Optimized Vector Storage → ChromaDB-backed semantic search for documents.
  • 🎨 Modern UI → Built with Streamlit, responsive and minimal.
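To make the "PDF RAG" and "Optimized Vector Storage" features above concrete, here is a minimal, self-contained sketch of the retrieval idea behind a ChromaDB-style semantic search: chunks are stored with embedding vectors, and the most similar chunks to a query vector are returned. This is an illustrative toy (hand-made 3-dimensional "embeddings", hypothetical `top_k` helper), not the project's actual code, which delegates this work to ChromaDB.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, chunks, k=2):
    # Return the texts of the k chunks most similar to the query vector.
    scored = sorted(chunks, key=lambda c: cosine(query_vec, c["embedding"]), reverse=True)
    return [c["text"] for c in scored[:k]]

# Toy corpus: in the real app, embeddings come from a model such as nomic-embed-text.
chunks = [
    {"text": "Redis caches session data.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Whisper transcribes audio.", "embedding": [0.0, 0.9, 0.1]},
    {"text": "ChromaDB stores PDF chunks.", "embedding": [0.8, 0.0, 0.2]},
]
print(top_k([1.0, 0.0, 0.1], chunks, k=2))
```

The retrieved chunks are then passed to the LLM as context, which is what makes answers "context-aware" rather than generic.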

📽️ Demo

Coming soon... 🎬

Important

Use the Docker image or run the project locally via localhost to get started. Before proceeding, please contact me at Mail so I can share the required credentials.

🛠️ Technology Stack

  • Frontend/UI → Streamlit
  • LLMs & Embeddings → Ollama + OpenAI
  • Audio Transcription → Whisper
  • Vector Database → ChromaDB
  • Caching & Session Store → Redis Cloud
  • Containerization → Docker + Docker Compose
  • Web Server → NGINX (reverse proxy & static serving)
  • Database → SQLite (lightweight local DB for session caching)
  • Orchestration → GitHub Actions + Dependabot for CI/CD

Getting Started ⚙️

Prerequisites

Before setting up Neura-Nix, make sure you have the following installed:

  • Python 3.10+ → Required for running the backend (tested on 3.10.12).

  • Docker & Docker Compose → Preferred method for containerized deployment. Install Docker

  • Git → For cloning and managing the repository. Install Git.

  • Redis (Cloud or Local) → Used for caching and optimizing performance.
    Sign up for Redis Cloud or run a local instance.

  • Ollama → Required for running local multimodal models.

  • (Optional) GPU Support → If available, install NVIDIA drivers + CUDA toolkit for accelerated model performance.

Tip: Ensure environment variables (like REDIS_HOST, REDIS_PORT, OPENAI_API_KEY) are properly configured in your .env file before running the project.
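As a starting point, a `.env` file along these lines should work; the exact variable names are taken from the tip above plus the project's `.env.sample`, and all values here are placeholders you must replace with your own:

```shell
# .env — placeholder values; copy .env.sample and fill in your own credentials
REDIS_HOST=your-redis-host.example.com
REDIS_PORT=6379
OPENAI_API_KEY=sk-your-key-here
```

Never commit a populated `.env` file; keep only `.env.sample` under version control.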

Installation 🛠️

You can follow the official setup guide for Linux, Windows, and Docker below to run Neura-Nix {OllamaMulti-RAG} locally.

  • First, read the License and its terms, then proceed.
  • Star ⭐ the Repository
  • Fork the repository (Optional)
  • Project Setup:
  1. Clone the repository:
    git clone https://github.com/UjjwalSaini07/OllamaMulti-RAG.git
  2. Navigate to the project's main directory:
    cd OllamaMulti-RAG

Important

All these cd directory paths are relative to the root directory of the cloned project.

After cloning the repository, choose your preferred installation method.

🔹 Method 1: Docker Compose {Slower Option}

  1. Modify Docker Compose →

    • Remove docker-compose.yml.
    • Rename docker-compose_with_ollama.ymldocker-compose.yml.
  2. Set model save path → Update line 25 in the docker-compose.yml file.

  3. Run Neura-Nix

      docker compose up

    ⚡ If you don’t have a GPU → remove the deploy section from the compose file.

  4. Optional Configurations

    • Edit config.yaml to match your needs.
    • Add custom icons → replace user_image.png and/or bot_image.png inside the chat_icons folder.
  5. Access the app → Open http://localhost:8501 in your browser.

  6. Pull Models → Visit Ollama Library and pull models:

      /pull MODEL_NAME

    ✅ You need:

    • An embedding model → e.g., nomic-embed-text for PDFs.
    • An image-capable model → e.g., llava for image analysis.
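The `/pull` command above is issued from inside the app. If you prefer to pull models directly with the Ollama CLI, the equivalent commands would look roughly like this — the `ollama` service name is an assumption based on the compose file naming, so adjust it to match yours:

```shell
# Pull the two required models inside the running Ollama container
# ("ollama" is the assumed service name in docker-compose.yml).
docker compose exec ollama ollama pull nomic-embed-text   # embedding model for PDFs
docker compose exec ollama ollama pull llava              # image-capable model
```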

🔹 Method 2: Windows (Best Performance)

⚠️ Using Ollama inside Docker on Windows can be slow → prefer local installation.

  1. Install Ollama Desktop.

  2. Update config → In config.yaml, use line 4 (the Windows base_url) and remove line 3 {the default is correct otherwise}.

  3. Start the Neura-Nix Docker container:

      docker compose up
  4. Access → http://localhost:8501.

  5. Pull models as described in Method 1.

🔹 Method 3: Manual Install

  1. Install Ollama.
  2. Create Python venv (tested with Python 3.10.12).
  3. Install dependencies:
      pip install --upgrade pip
      pip install -r requirements.txt
      pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  4. Run setup:
      python3 database_operations.py   # initialize SQLite DB
      streamlit run app.py
  5. Pull models (embedding + multimodal as above).
  6. Optional → update config.yaml and add custom chat icons.

📊 Business Optimization

Neura-Nix is not just a research tool — it’s built to optimize workflows and unlock ROI:

  • 🏢 Enterprise Teams → Automate document review, compliance checks, and data-heavy workflows.
  • 📈 Startups → Accelerate content creation, customer support, and product research.
  • 👨‍💻 Freelancers & Creators → Boost productivity with multimodal AI (chat, docs, media).
  • 🔒 Privacy by Design → Keep your sensitive data secure with local-first deployment.

📚 Documentation

To help you navigate and extend Neura-Nix, we’ve structured the documentation into multiple layers:

Author ✍️

Contact 📞

Feel free to reach out if you have any questions or suggestions!

License 📄

License Credential Check.
You can use this project the way you want. Feel free to credit me if you want to!

Feedback and Contributions 💌

Feedback and contributions are always welcome! Feel free to open an Issue.
