An AI-powered multimodal assistant that seamlessly understands Text, Images, PDFs, and Audio — built to boost productivity, empower businesses, and transform everyday workflows.
Neura-Nix is a next-gen multimodal assistant that combines the power of Ollama, OpenAI, Whisper, and Redis into a single streamlined platform.
It enables natural and intelligent interaction across different mediums — chat with documents, analyze images, transcribe audio, and converse in real-time.
Built with Streamlit, Docker, and Redis caching, Neura-Nix is designed for speed, extensibility, and scalability.
Whether you’re an enterprise optimizing workflows or an individual boosting productivity, Neura-Nix adapts seamlessly.
- 📖 Document Chat (PDF RAG) → Upload PDFs and extract context-aware insights.
- 🖼️ Image Analysis → Unlock hidden stories from visual data.
- 🎤 Audio to Text → Record or upload audio and let Whisper transcribe in seconds.
- 💬 Persistent Chat Sessions → Store, manage, and reload previous conversations.
- ⚡ Redis Caching → Lightning-fast performance with session-level caching.
- 🎯 Model Flexibility → Switch easily between Ollama and OpenAI models.
- 🔐 Secure by Design → Environment variables, `.env.sample`, and Dockerized deployment.
- 📊 Optimized Vector Storage → ChromaDB-backed semantic search for documents.
- 🎨 Modern UI → Built with Streamlit, responsive and minimal.
Coming soon... 🎬
Important
Use the Docker image or run the project locally via `localhost` to get started.
Before proceeding, please contact me at Mail so I can share the credentials, rather than having Docker consume your machine's memory building everything from scratch.
- Frontend/UI → Streamlit
- LLMs & Embeddings → Ollama + OpenAI
- Audio Transcription → Whisper
- Vector Database → ChromaDB
- Caching & Session Store → Redis Cloud
- Containerization → Docker + Docker Compose
- Web Server → NGINX (reverse proxy & static serving)
- Database → SQLite (lightweight local DB for session caching)
- Orchestration → GitHub Actions + Dependabot for CI/CD
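As a rough illustration of how these services could be wired together, here is a minimal Docker Compose sketch; the service names, ports, and image tags are illustrative assumptions, not the project's actual compose file:

```yaml
services:
  app:
    build: .
    ports:
      - "8501:8501"        # Streamlit UI
    env_file: .env          # REDIS_HOST, OPENAI_API_KEY, etc.
    depends_on:
      - redis
  redis:
    image: redis:7-alpine   # caching & session store
    ports:
      - "6379:6379"
```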
Before setting up Neura-Nix, make sure you have the following installed:
- Python 3.10+ → Required for running the backend (tested on `3.10.12`).
- Docker & Docker Compose → Preferred method for containerized deployment. Install Docker.
- Git → For cloning and managing the repository. Install Git.
- Redis (Cloud or Local) → Used for caching and optimizing performance. Sign up for Redis Cloud or run a local instance.
- Ollama → Required for running local multimodal models.
  - Download Ollama Desktop (Windows/macOS)
  - Or install manually on Linux
- (Optional) GPU Support → If available, install NVIDIA drivers + the CUDA toolkit for accelerated model performance.
⚡ Tip: Ensure environment variables (like `REDIS_HOST`, `REDIS_PORT`, `OPENAI_API_KEY`) are properly configured in your `.env` file before running the project.
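A `.env` file covering the variables named in the tip above might look like this; all values are placeholders you must replace with your own:

```env
REDIS_HOST=your-redis-host.example.com
REDIS_PORT=6379
OPENAI_API_KEY=your-openai-api-key
```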
You can follow the official setup guide below for Linux, Windows, and Docker to run Neura-Nix (OllamaMulti-RAG) locally.
- First, read the License and its terms, then proceed.
- Star ⭐ the Repository
- Fork the repository (Optional)
- Project Setup:
  - Clone the repository:
    ```shell
    git clone https://github.com/UjjwalSaini07/OllamaMulti-RAG.git
    ```
  - Navigate to the project main directory:
    ```shell
    cd OllamaMulti-RAG
    ```

Important
All these `cd` directory paths are relative to the root directory of the cloned project.
After cloning the repository, choose your preferred installation method.
- Modify Docker Compose:
  - Remove `docker-compose.yml`.
  - Rename `docker-compose_with_ollama.yml` → `docker-compose.yml`.
- Set model save path → Update line 25 in the `docker-compose.yml` file.
- Run Neura-Nix:
  ```shell
  docker compose up
  ```
  ⚡ If you don’t have a GPU → remove the `deploy` section from the compose file.
- Optional Configurations:
  - Edit `config.yaml` to match your needs.
  - Add custom icons → replace `user_image.png` and/or `bot_image.png` inside the `chat_icons` folder.
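For reference, the GPU reservation that a compose file's `deploy` section typically contains looks like the standard Docker Compose block below; the project's exact block may differ, so treat this as a guide to what you are removing on CPU-only machines:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```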
- Access the app → Open http://0.0.0.0:8501 in your browser.
- Pull Models → Visit the Ollama Library and pull models:
  ```
  /pull MODEL_NAME
  ```
  ✅ You need:
  - An embedding model → e.g., `nomic-embed-text` for PDFs.
  - An image-capable model → e.g., `llava` for image analysis.
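If you prefer pulling models from the host rather than through the in-app `/pull` command, the standard Ollama CLI can do the same job (assuming the `ollama` binary is on your PATH):

```shell
# Pull an embedding model (PDF RAG) and an image-capable model.
# The guard skips the pulls gracefully if the ollama CLI is not installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull nomic-embed-text
  ollama pull llava
fi
```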
- Install Ollama Desktop.
- Update config → In `config.yaml`, use line 4 (Windows) for `base_url` and remove line 3 (the default is correct otherwise).
- Start Neura-Nix by bringing up the Docker container:
  ```shell
  docker compose up
  ```
- Access → http://0.0.0.0:8501.
- Pull models as described in Method 1.
- Install Ollama.
- Create a Python venv (tested with Python `3.10.12`).
- Install dependencies:
  ```shell
  pip install --upgrade pip
  pip install -r requirements.txt
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
  ```
- Run setup:
  ```shell
  python3 database_operations.py  # initialize the SQLite DB
  streamlit run app.py
  ```
- Pull models (embedding + multimodal, as above).
- Optional → update `config.yaml` and add custom chat icons.
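The SQLite initialization performed by `database_operations.py` can be imagined roughly as the sketch below; the `messages` table and its column names are illustrative assumptions, not the project's actual schema:

```python
import sqlite3

def init_db(path: str = "chat_sessions.db") -> None:
    """Create the chat-history table if it does not already exist."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               session_id TEXT NOT NULL,              -- groups messages into a session
               role       TEXT NOT NULL,              -- 'user' or 'assistant'
               content    TEXT NOT NULL,
               created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    con.commit()
    con.close()
```

A schema like this is enough to store, reload, and list previous conversations by `session_id`, which is what the persistent chat sessions feature requires.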
Neura-Nix is not just a research tool — it’s built to optimize workflows and unlock ROI:
- 🏢 Enterprise Teams → Automate document review, compliance checks, and data-heavy workflows.
- 📈 Startups → Accelerate content creation, customer support, and product research.
- 👨💻 Freelancers & Creators → Boost productivity with multimodal AI (chat, docs, media).
- 🔒 Privacy by Design → Keep your sensitive data secure with local-first deployment.
To help you navigate and extend Neura-Nix, we’ve structured the documentation into multiple layers:
- Python Documentation
- Streamlit Documentation
- Ollama Documentation
- OpenAI API Documentation
- Whisper Models (OpenAI)
- Redis Documentation
- ChromaDB Documentation
- Docker Documentation
- Nginx Documentation
- GitHub Actions Documentation
Feel free to reach out if you have any questions or suggestions!
- Raise an issue → Issue
- Github: @Ujjwal Saini
- Mail: Mail ID
License
You can use this project the way you want. Feel free to credit me if you want to!
Feedback and contributions are always welcome! Feel free to open an Issue.