Chat_Dashboard_OpenAI_Gemini_KIMI.ipynb: Multi-LLM Chat Dashboard
A simple, user-friendly chat interface built with Gradio, supporting conversational access to multiple LLM providers:
- OpenAI (GPT-4, GPT-4o, DALL·E 3)
- Kimi (Moonshot AI)
- Google Gemini
Features:
- Select your preferred model from a dropdown menu.
- Switch between OpenAI, Gemini, and Kimi using their API keys.
- No setup complexity: just provide your API keys for each provider in a config.json file (see notebook for an example).
- Proxy support for network flexibility.
- Chat history retention and retry/undo features.
- Supports basic image generation via DALL·E 3 and built-in functions (where available) for Kimi.
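The chat history retention mentioned above typically works by replaying prior turns with every request. A minimal sketch, assuming Gradio-style (user, assistant) history pairs and an OpenAI-style messages format (function and parameter names here are illustrative, not the notebook's exact code):

```python
def history_to_messages(history, system_prompt="You are a helpful assistant."):
    """Convert Gradio-style (user, assistant) history pairs into an
    OpenAI-style messages list, so each request replays the full chat."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        # The latest turn may not have an assistant reply yet.
        if assistant_msg is not None:
            messages.append({"role": "assistant", "content": assistant_msg})
    return messages
```

The same messages list can be sent to any of the three providers, since Kimi and Gemini both offer OpenAI-compatible chat endpoints.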
For details on configuring API keys and using the dashboard, see the instructions in the notebook.
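A minimal config.json might look like the following; the exact key names are an assumption, so check the example in the notebook for the authoritative format:

```json
{
  "openai_api_key": "sk-...",
  "gemini_api_key": "...",
  "kimi_api_key": "...",
  "proxy": "http://127.0.0.1:7890"
}
```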
StandAlone_Load_Vecdb_RAG_CHAT_v4.ipynb: All-in-One RAG Dashboard (Recommended)
This notebook is the recommended entry point for most users. It brings together all steps needed for document-based Retrieval-Augmented Generation (RAG) in a single, easy-to-use Gradio dashboard.
Key Features:
- End-to-end pipeline: From PDF upload, extraction (via GROBID), chunking, embedding, vectorization, and retrieval—all in one place.
- Integrated chat interface: Ask questions about your documents using local LLMs served by LM Studio.
- Multiple retrieval strategies: Choose between standard and advanced retrieval methods (e.g., LangChain MultiQueryRetriever).
- Dashboard UI: User-friendly controls for every step, minimal setup.
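Under the hood, standard retrieval reduces to nearest-neighbour search over the chunk embeddings. A dependency-free sketch using cosine similarity (the notebook's own vector database handles this internally; names here are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, chunk_vecs, chunks, top_k=3):
    """Return the top_k chunks whose embeddings are most similar to the query."""
    scored = sorted(
        zip(chunks, chunk_vecs),
        key=lambda pair: cosine_similarity(query_vec, pair[1]),
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:top_k]]
```

The advanced option (LangChain's MultiQueryRetriever) layers on top of this idea: it asks the LLM to rephrase the question several ways, runs a retrieval like the one above for each rephrasing, and merges the results.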
🚩 Important Requirements:
- LM Studio must be running for local LLM chat and embedding tasks.
- GROBID Docker container must be running for PDF text extraction.
(Start GROBID via Docker before launching the notebook.)
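GROBID's Docker image serves its REST API on port 8070 by default; starting it might look like this (flags and version tag may vary with your setup):

```shell
docker run --rm --init -p 8070:8070 lfoppiano/grobid
```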
This notebook is ideal if you want a one-stop solution without running multiple scripts or notebooks.
See the notebook for detailed inline instructions and troubleshooting tips.
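LM Studio exposes an OpenAI-compatible HTTP API, by default at http://localhost:1234/v1 when its local server is running. A standard-library-only sketch of one chat call (the model name is a placeholder for whatever model you have loaded; this is not the notebook's exact code):

```python
import json
import urllib.request

def build_chat_payload(prompt, model="local-model", temperature=0.2):
    """Build an OpenAI-style chat-completions request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat_with_lm_studio(prompt, base_url="http://localhost:1234/v1"):
    """POST one chat request to a running LM Studio server, return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```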
This notebook performs the following operations:
- Reads all PDFs in a given folder
- Extracts text using GROBID
- Stores the extracted text elements in an SQLite3 database
- Splits the text into recursive chunks
- Embeds the text chunks
- Stores the embeddings in a vector database
- Retrieval Methods
- Standard Retrieval
- LangChain: MultiQueryRetriever
- OpenAI-based chat using LM Studio
- Displays:
- Query
- Prompt Information
  - Answer (shown in the dashboard browser tab)
- Retrieval and QA chain based chat using LM Studio
- Displays results in a new Browser Tab
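The recursive chunking step above can be sketched in plain Python. This is a minimal illustration, not the notebook's actual splitter (which may use LangChain's RecursiveCharacterTextSplitter with its own defaults); chunk size and separators are illustrative:

```python
def recursive_chunk(text, chunk_size=500, separators=("\n\n", "\n", ". ", " ")):
    """Split text into chunks of at most chunk_size characters,
    preferring to break at the coarsest separator that produces a split."""
    if len(text) <= chunk_size:
        return [text] if text else []
    for sep in separators:
        parts = text.split(sep)
        if len(parts) > 1:
            # Greedily pack consecutive parts back together up to chunk_size.
            chunks, current = [], ""
            for part in parts:
                candidate = current + sep + part if current else part
                if len(candidate) <= chunk_size:
                    current = candidate
                else:
                    if current:
                        chunks.append(current)
                    current = part
            if current:
                chunks.append(current)
            # Recurse on any piece that is still too large,
            # which falls through to the next (finer) separator.
            result = []
            for c in chunks:
                result.extend(recursive_chunk(c, chunk_size, separators))
            return result
    # No separator produced a split: hard-cut at chunk_size.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```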
- Additional information is provided in the notebook's Markdown cells, which describe requirements and usage details.
- Install LM Studio
- Follow the official LM Studio installation guide for your operating system.
- Download LLM Model from Hugging Face
- You can download a pre-trained model from the Hugging Face Model Hub; follow the instructions on the site for the model you want to use.
- Install Docker for GROBID
- Make sure Docker is installed on your machine. You can follow the installation instructions from the Docker website.
- After installing Docker, pull the GROBID Docker image by running:
docker pull lfoppiano/grobid
- Clone the repository.
- Install the required dependencies.
- Run the PDFs_RAG_with_LMstudio.ipynb notebook to begin processing your PDFs.
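Assuming a requirements.txt at the repository root (check the repository for the actual file name), the setup steps above might look like:

```shell
git clone <repository-url>   # replace with the actual repository URL
cd <repository-folder>
pip install -r requirements.txt
```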
