
# 🦙 Local PDF RAG with Ollama & LangChain

A secure, private, and local Retrieval-Augmented Generation (RAG) application. This tool allows you to chat with your PDF documents using local LLMs via Ollama, ensuring that your data never leaves your machine.

## 🚀 Features

- **100% Local**: Uses Ollama to run LLMs and embeddings locally. No API keys or internet connection required after setup.
- **Privacy First**: Your documents are processed and stored on your own machine.
- **RAG Architecture**:
  - **Ingestion**: Loads and cleans PDF text using pypdf.
  - **Chunking**: Splits text into manageable chunks.
  - **Embeddings**: Uses nomic-embed-text for high-quality vector representations.
  - **Vector Store**: Stores embeddings in a local ChromaDB instance.
- **Interactive UI**: Built with Streamlit for a clean chat interface.
- **Multi-Query Retrieval**: Uses the LLM to generate several rephrasings of your question, improving the chances of finding the most relevant passages in the text.
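The ingest-chunk-embed-retrieve flow above can be sketched in plain Python. This is an illustrative toy, not the app's actual code (which uses LangChain, nomic-embed-text, and ChromaDB): chunking here is simple overlapping character windows, and the "embedding" is a hypothetical bag-of-words vector standing in for a real embedding model.

```python
import math
from collections import Counter

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (the chunking step)."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy 'embedding': bag-of-words counts (stand-in for nomic-embed-text)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank stored chunks by similarity to the query (the retrieval step)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("Ollama runs large language models locally. "
       "ChromaDB stores embeddings on disk. "
       "Streamlit renders the chat interface.")
chunks = chunk_text(doc, chunk_size=60, overlap=10)
print(retrieve("Where are embeddings stored?", chunks, k=1))
```

In the real app, `embed` is replaced by the nomic-embed-text model served by Ollama and the ranked lookup is delegated to ChromaDB, but the retrieval logic is the same shape.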

## 🛠️ Tech Stack

- **Language**: Python
- **LLM Runtime**: Ollama
- **Orchestration**: LangChain
- **Embeddings**: nomic-embed-text
- **Vector Store**: ChromaDB
- **PDF Parsing**: pypdf
- **UI**: Streamlit

## 📋 Prerequisites

1. Python 3.9 or higher installed.
2. Ollama installed and running. Download it from https://ollama.com.
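Since the app talks to a local Ollama server, a quick way to confirm prerequisite 2 is to probe Ollama's default HTTP endpoint (port 11434). A minimal check using only the standard library, assuming the default host and port:

```python
import urllib.request
import urllib.error

def ollama_running(host: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at `host` (default port 11434)."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_running())
```

If this prints `False`, start the server (e.g. by launching the Ollama app or running `ollama serve`) before continuing.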

## ⚙️ Setup & Installation

### 1. Clone the repository

```bash
git clone https://github.com/AbRather/Local-PDF-RAG-with-Ollama-LangChain.git
cd Local-PDF-RAG-with-Ollama-LangChain
```

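The remaining setup steps were not captured above; a typical sequence for a Streamlit + Ollama project looks like the following. The requirements filename, the chat model (`llama3`), and the entry-point script (`app.py`) are assumptions — check the repository for the actual names.

```shell
# 2. Create and activate a virtual environment, then install dependencies
#    (requirements.txt is an assumed filename)
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# 3. Pull the models: nomic-embed-text is named in the README;
#    the chat model "llama3" is an assumption
ollama pull nomic-embed-text
ollama pull llama3

# 4. Run the Streamlit app (entry-point filename assumed)
streamlit run app.py
```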