Welcome to LocalNeural, a cutting-edge, self-hosted web interface designed to interact with your local LLMs (Large Language Models) via Ollama.
In a world where data privacy is paramount, LocalNeural bridges the gap between powerful AI and complete data sovereignty. It features a stunning "Nothing OS"-inspired UI with Liquid Glassmorphism, offering a premium user experience comparable to top-tier commercial AI platforms, but running entirely on your machine.
Whether you are a developer needing a coding assistant, a writer brainstorming ideas, or a researcher organizing documents, LocalNeural provides the tools you need with no network round-trips and zero data tracking.
- ⚡ Real-Time Streaming: Experience word-by-word streaming responses via WebSockets, just like ChatGPT.
- 🧠 Context-Aware Memory: The AI remembers your conversation history, allowing for deep, multi-turn discussions.
- 📂 Project Workspaces (RAG): Create isolated projects and upload PDF, code, and Markdown files. The AI reads your files and uses them as knowledge to answer your questions.
- 🎨 Stunning UI/UX: A dark-themed, dot-matrix background with frosted glass elements and smooth Material 3 animations.
- 📸 Multimodal Support: Drag and drop images or paste them from your clipboard to have the AI analyze them (requires a vision model such as `llava`).
- 📝 Markdown & Code Highlighting: Beautiful rendering of mathematical formulas, tables, and syntax-highlighted code blocks with one-click copying.
- 🔄 Regenerate & Edit: Made a mistake? Edit your prompt or ask the AI to try again with a single click.
- 💾 Prompt Library: Save your favorite system prompts and inject them into any chat instantly.
- 📥 Export Options: Download your conversations as Markdown, JSON, or HTML/PDF.
Before you begin, ensure you have the following installed on your system:
- Python 3.8+: The backend engine.
- Ollama: The core AI runner. Download it from https://ollama.com.
- Note: Ensure you have pulled a model (e.g., `ollama pull llama3`) and that the Ollama service is running (`ollama serve`).
Follow these simple steps to get LocalNeural running on your machine.
```shell
git clone https://github.com/rkstudio585/LocalNeural.git
cd LocalNeural
```

Create and activate a virtual environment:

```shell
python -m venv venv

# Windows
venv\Scripts\activate

# Mac/Linux
source venv/bin/activate
```

Install the dependencies:

```shell
pip install -r requirements.txt
```

(Dependencies include: flask, flask-socketio, requests, eventlet, pypdf)

Start the server:

```shell
python app.py
```

Open your browser and navigate to: http://localhost:5000
LocalNeural uses a Model-View-Controller (MVC) architecture powered by Ollama, Flask, and WebSockets.
- Frontend (View): Built with HTML5, TailwindCSS, and jQuery. It handles user input and file drops, and renders the Markdown responses. It connects to the backend via Socket.IO for bidirectional, low-latency communication.
- Backend (Controller): `app.py` acts as the brain. It receives prompts, queries the SQLite database for context, and forwards the request to the running Ollama instance.
- Database (Model): A robust SQLite database stores:
  - `sessions`: Chat metadata and settings.
  - `messages`: The actual conversation history.
  - `projects` & `documents`: Uploaded files for the Knowledge Base.
- AI Engine: Ollama runs locally, processing the prompt and streaming tokens back to Flask, which instantly pushes them to your browser.
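The relay step above can be sketched roughly as follows. This is an illustrative, stdlib-only sketch, not the actual `app.py` code: Ollama's `/api/generate` endpoint streams newline-delimited JSON chunks, and the backend reads each chunk and forwards its token to the browser (here we just yield it; the real backend would emit it over Socket.IO).

```python
import json


def stream_tokens(ndjson_lines):
    """Parse Ollama-style NDJSON stream chunks and yield each token.

    Each chunk looks like: {"response": "<token>", "done": false}
    and the final chunk has "done": true.
    """
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        # The real backend would push this over Socket.IO here,
        # e.g. socketio.emit("token", {"text": token}).
        yield chunk.get("response", "")


# Simulated stream as Ollama would send it, line by line:
fake_stream = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": false}',
    '{"done": true}',
]
print("".join(stream_tokens(fake_stream)))  # prints "Hello!"
```

Because each chunk arrives as its own JSON line, the frontend can append tokens as they arrive instead of waiting for the full reply.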
When you create a Project and upload files:
- The backend extracts the text from your PDFs or code files.
- This text is stored in the database linked to the Project ID.
- When you chat inside that project, the system silently injects the file contents into the System Prompt of the AI.
- This allows the AI to "read" your files and answer specific questions about them.
Simply type in the input box and hit Ctrl+Enter to send your message.
- Enter: Creates a new line.
- Click "+ New Project" in the sidebar.
- Give it a name (e.g., "Python Learning").
- Upload relevant PDF text books or Python scripts.
- Click Create. The AI now knows everything inside those files for this specific chat session.
- Drag & Drop an image onto the text area.
- Paste an image directly from your clipboard (Ctrl+V).
- Ask the AI: "What is in this image?" (Ensure you have a vision-capable model selected.)
- Click the Library button in the sidebar.
- Add frequently used prompts (e.g., "Act as a Senior React Developer").
- Click any saved prompt to instantly insert it into your input box.
LocalNeural/
├── app.py                 # Main application entry point (Server)
├── config.py              # Configuration settings
├── database.py            # SQLite database management logic
├── requirements.txt       # Python dependencies
├── README.md              # Project overview
├── LICENSE                # Project MIT License
├── .data/
│   └── neural_memory.db   # SQLite chat memory database
├── utilities/
│   ├── chat_logic.py      # AI title generation & context helpers
│   └── file_parser.py     # Extract text from PDFs/code files
├── static/
│   ├── css/
│   │   └── style.css      # Custom styling & animations
│   ├── images/
│   │   └── logo.svg       # Project SVG logo
│   └── js/
│       ├── main.js        # Core frontend logic & sockets
│       └── chat_extras.js # UI helpers (Regenerate, Copy, etc.)
└── templates/
    ├── base.html          # Main HTML layout
    ├── index.html         # Chat interface
    └── settings.html      # Configuration modal
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project.
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`).
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`).
- Push to the Branch (`git push origin feature/AmazingFeature`).
- Open a Pull Request.
Have a bug report or a feature request? Please open an issue!
Distributed under the MIT License. See LICENSE for more information.
Made with ❤️ by rkstudio585