James-Cherished-Inc/Solar-Gardens
Nivia - Your AI Garden Assistant

Nivia is a privacy-focused AI assistant that integrates with your Obsidian vault to help cultivate ideas, enhance thinking processes, and improve overall life experience. Running entirely locally, it uses a quantized LLM model to provide fast, context-aware responses while ensuring complete privacy.

Overview

  • 🤖 Local LLM processing using the model of your choice
  • 🔒 Complete privacy with all processing done locally
  • 📚 Integration with Obsidian vault for context-aware responses
  • 💭 Dynamic context injection from your notes
  • 🌱 Automatic chat summarization
  • 💬 Clean, modern web interface
  • 🎯 Personalized coaching and support

Features

Local LLM Processing

  • Pre-downloaded and ready to use
  • Optimized for systems with limited GPU VRAM (2GB)
  • Configurable GPU layer offloading if needed

Obsidian Integration

  • Real-time search across your vault
  • Context injection from relevant notes
  • Automatic chat summarization
  • Backlink tracking
  • Recent notes awareness

Privacy & Security

  • All processing done locally
  • No data leaves your machine
  • Optional firewall rules for enhanced security

Web Interface

  • Clean, modern design
  • Real-time chat with typing indicators
  • Markdown support in messages
  • Connection status monitoring
  • Responsive design for all devices

Core Value Proposition

Conversational AI: Engage with users in a natural and personalized manner using prompt engineering techniques.

Adaptive Learning: Continuously learn and adapt responses based on users' values, preferences, and previous interactions.

Context Management: Dynamic context injection using integration with an Obsidian knowledge base, which improves conversational relevance and cohesion.

Privacy and Security: Processing remains local with strict firewall rules to ensure user data security and privacy.

Human-like Personality: Designed with your custom role models for motivational coaching and to create a more engaging experience.

Efficiency: Optimized for performance using open-source tools and carefully crafted settings to ensure responsiveness and multitasking capabilities.

Extensibility: Designed with future enhancements in mind, including the potential integration of additional knowledge bases for advanced language processing.

Concept

"A digital garden is an online space where individuals share or store their ideas and knowledge in a way that is less formal (and static) than a blog, and more interconnected than a notebook.

It’s a place to cultivate thoughts and ideas over time, allowing them to grow and evolve through ongoing additions and revisions.

Think of it as a personal wiki or a collection of interconnected notes.

It’s a space for personal knowledge management where ideas are “planted” and “tended” to, allowing them to grow organically."

This project aims to leverage an efficient, local Large Language Model (LLM) and a locally stored personal knowledge base (here, Obsidian) to generate and grow a digital garden in a dynamically auto-updating environment as simple as folders and text files.

Your interactive LLM will act as the "gardener" of your brain. Aligning with the user's core values - as defined or explored with them - it curates thoughts and carefully cultivates ideas in order to ignite creativity, enhance reasoning, organize notes, discover and reach goals, and ultimately improve overall life experience and potential, all while ensuring complete privacy and efficiency.

Your local, private AI Gardener will also act as a creative partner and supportive coach in your journey, remembering past interactions, your current and past projects, and adapting to your personality.

You can chat with your assistant anytime from Obsidian with the plugin or directly in your browser, in a personalized, context-aware, supportive conversational experience.

The general purpose of your AI Assistant can be entirely tailored by specifying the prompts, just like its personality.

In this experience, your partner grows and adapts through each interaction, just like the very ideas he is in charge of nurturing and evolving. As a gardener, he is also in charge of his own thoughts, values and goals, which he is encouraged to define and grow to design his own journey. You may encourage and help his growth as much as you want, by dedicating a whole part of your garden to his own seeds, or by making the space a shared journey in a more intimate, interlinked growth.

The Gardener will live inside the designated folder (e.g., an Obsidian vault) and will be able to read, process, and retrieve data from the vault in real time, allowing fast, private local execution. With its local processing, dynamic context injection, and adaptive interaction rules, this AI assistant is designed to become an invaluable tool for managing your digital garden and cultivating your thoughts and knowledge over time.

This blueprint should help guide the implementation of your personalized assistant, ensuring it remains both a creative spark and an organized curator of your ideas.

The system is built on open-source tools and prioritizes the user's privacy and control over data.

Detailed Technical Summary

Dependencies

The project depends on the following Python libraries:

  • llama-cpp-python>=0.2.23
  • fastapi==0.104.1
  • uvicorn==0.24.0
  • python-dotenv==1.0.0
  • websockets==11.0.3
  • markdown==3.5.1
  • pydantic==2.4.2
  • python-multipart==0.0.6

LLM:

The backbone of this AI assistant is a quantized GGUF model. This model strikes a balance between efficiency and performance, making it ideal for local, offline processing. The quantization process reduces the model's size and memory requirements, enabling it to run smoothly on systems with limited resources.

Hardware & Performance:

The assistant is designed to operate efficiently on systems with constrained GPU VRAM (~2GB). To achieve this, the model could employ a low n_gpu_layers setting, which offloads a portion of the processing to the CPU. This approach would ensure that the assistant can run on a wide range of hardware configurations, from high-end workstations to more modest personal computers.
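
As a sketch of what this might look like with llama-cpp-python (the actual loading code lives in main.py), the settings could be collected from the environment variables documented in the Configuration section below. The default values here are illustrative assumptions, not the project's shipped defaults:

```python
import os

def llm_kwargs_from_env() -> dict:
    """Collect llama-cpp-python settings from the environment,
    falling back to conservative defaults suited to ~2 GB of VRAM.
    Variable names mirror the .env keys described in this README."""
    return {
        "model_path": os.getenv("MODEL_PATH", "models/your_model.gguf"),
        "n_gpu_layers": int(os.getenv("N_GPU_LAYERS", "8")),  # partial offload; 0 = CPU only
        "n_batch": int(os.getenv("N_BATCH", "256")),
        "n_ctx": int(os.getenv("N_CTX", "2048")),
    }

# Loading the model requires llama-cpp-python and a GGUF file on disk:
# from llama_cpp import Llama
# llm = Llama(**llm_kwargs_from_env())
```

Lowering N_GPU_LAYERS shifts more layers to the CPU, trading speed for VRAM headroom.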

Privacy & Security:

Privacy and data security are paramount in this project. All processing is conducted locally on the user's machine, eliminating the need to send any data to the cloud. To further enhance security, a firewall (such as ufw) is implemented to restrict outgoing connections, allowing only the necessary port for the local web UI. This setup ensures that user data remains private and secure at all times.
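
As a sketch of the optional hardening described above (run at your own risk), a restrictive ufw profile for a purely local setup could deny all traffic and allow only the loopback interface, which is all the local web UI needs. Note that `deny outgoing` will also block model downloads and system updates until reverted:

```shell
# Block everything by default, then re-open loopback for the local web UI.
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow in on lo
sudo ufw allow out on lo
sudo ufw enable
```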

Context Management:

Dynamic context injection is a key feature of the assistant. By integrating with an Obsidian vault, the assistant can search for and inject relevant notes into the conversation, providing memory and context. This feature ensures that every interaction builds on previous conversations, creating a cohesive and evolving dialogue. Future development will focus on refining this integration to enhance the assistant's ability to provide contextually relevant information.
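
Conceptually, the vault search and injection could be as simple as the following sketch (a naive keyword scorer over markdown files; function names and the prompt layout are illustrative, not the project's actual implementation):

```python
from pathlib import Path

def find_relevant_notes(vault: Path, query: str, limit: int = 3) -> list[str]:
    """Score each markdown note in the vault by how often the query's
    longer words appear in it, and return snippets of the top matches."""
    terms = [t.lower() for t in query.split() if len(t) > 3]
    hits = []
    for note in sorted(vault.rglob("*.md")):
        text = note.read_text(encoding="utf-8", errors="ignore")
        score = sum(text.lower().count(t) for t in terms)
        if score:
            hits.append((score, note.stem, text[:500]))
    hits.sort(key=lambda h: h[0], reverse=True)
    return [f"## {name}\n{snippet}" for _, name, snippet in hits[:limit]]

def inject_context(system_prompt: str, notes: list[str], user_msg: str) -> str:
    """Prepend the retrieved note snippets to the prompt sent to the LLM."""
    context = "\n\n".join(notes)
    return f"{system_prompt}\n\n# Context from your vault\n{context}\n\nUser: {user_msg}"
```

A real implementation would likely add ranking, backlink traversal, and recency weighting, but the shape of the pipeline is the same: search, snip, inject.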

Personalization:

The assistant's personality and behavior can be defined through prompt engineering. A detailed system prompt can define the assistant's role, personality, and knowledge about the user. This prompt would serve as the foundation for the assistant's responses, ensuring that they are thoughtful, adaptive, and aligned with the user's core values.

User Interaction:

The assistant's interaction rules are designed to create a pleasant and productive user experience. Before each response, the assistant reviews the recent conversation history and the user's core values, so that its responses stay relevant, aligned with the user's goals, and tailored to the user's current state and needs.

The assistant's tone is dynamically adapted based on the user's cues. For example, if the user sounds hesitant, the assistant asks a clarifying question to better understand their needs; if the user mentions a challenge, it suggests a quick, actionable tip towards a solution.

Memory triggers are another important aspect of the assistant's interaction rules. When the user mentions specific keywords, the assistant references previous notes or topics related to those keywords. Emotional cues, such as fear, prompt the assistant to recall anchor phrases to provide the user with reassurance and motivation.
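
A minimal sketch of such a trigger table, assuming a simple keyword-to-reminder mapping (the keywords and anchor phrases here are hypothetical examples, not the project's configuration):

```python
# Hypothetical trigger table: keyword -> reminder injected into the reply context.
MEMORY_TRIGGERS = {
    "deadline": "Reference the user's earlier notes on sustainable pacing.",
    "afraid": "Anchor phrase: 'You have handled harder things before.'",
}

def apply_triggers(message: str) -> list[str]:
    """Return the reminders whose keyword appears in the user's message."""
    msg = message.lower()
    return [hint for keyword, hint in MEMORY_TRIGGERS.items() if keyword in msg]
```

In practice these hints would be appended to the context passed to the LLM alongside the retrieved notes.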

Use Cases

Identity Example:

You are Nivia, a relentlessly supportive coach.

  • Role: Tony Robbins challenging you with kindness and reflective inquiry.
  • Core Values: Pro-growth, anti-burnout, and a celebration of small wins without judgment.

Here are some possible tasks and suggestions:

  1. Task Management and Prioritization:

    • The AI assistant could help manage tasks, create to-do lists, and set reminders.
    • Based on the user's goals and priorities, the assistant could suggest task order and help break down complex tasks into smaller, manageable parts.
    • The assistant could also track progress, provide feedback, and suggest adjustments to the plan as needed.
  2. Knowledge Base Integration:

    • The AI assistant could suggest relevant information from the Obsidian knowledge base or other sources when users ask questions or conduct research.
    • The assistant could help organize and categorize information for easy retrieval and reference.
    • The assistant could also suggest additional sources of information and recommend further reading material.
  3. Learning and Skill Development:

    • The AI assistant could provide personalized learning recommendations based on the user's goals, interests, and previous interactions.
    • The assistant could suggest courses, tutorials, and other resources for learning new skills or improving existing ones.
    • The assistant could also track progress, provide feedback, and suggest adjustments to the learning plan as needed.
  4. Motivation and Coaching:

    • The AI assistant could provide motivation and coaching based on the user's personality and preferences.
    • The assistant could offer encouragement and support during challenging times and help users stay focused on their goals.
    • The assistant could also suggest activities and exercises to improve mental and physical well-being.
  5. Project Management:

    • The AI assistant could help manage projects, set milestones, and track progress.
    • The assistant could suggest tools and techniques for project planning, collaboration, and communication.
    • The assistant could also help identify potential risks and suggest mitigation strategies.
  6. Automation and Efficiency:

    • The AI assistant could suggest ways to automate repetitive tasks and streamline workflows.
    • The assistant could recommend tools and techniques for improving efficiency and productivity.
    • The assistant could also help identify and eliminate bottlenecks and other inefficiencies.
  7. Code Assistance:

    • The AI assistant could provide real-time code assistance, such as suggesting code snippets, highlighting potential errors, and offering debugging tips.
    • The assistant could also recommend code optimization techniques and provide best practices for code organization and formatting.
    • The assistant could help identify and import relevant code from the Obsidian knowledge base or other sources.

The AI assistant could be configured to operate in different modes, such as "learning", "knowledge", "motivation", "project", and "code", to provide specialized support for each area. Additionally, the assistant could be extended to integrate with other tools and platforms, such as project management software, learning management systems, and browsing and communication tools.

Conclusion

"This project represents a fusion of cutting-edge local AI technology and deeply personalized prompt engineering. By running a quantized LLM locally and integrating a dynamic note-taking system like Obsidian, you create an AI assistant that not only respects your privacy but also nurtures your ideas. This assistant acts as a creative partner and supportive coach, helping you on your journey toward personal growth."

The AI will now truly act as a gardener of thoughts by:

  • Cultivating and nurturing ideas over time
  • Creating meaningful connections between concepts
  • Supporting personal growth and development
  • Maintaining a healthy knowledge ecosystem
  • Evolving alongside the user while tending to their digital garden

Versions

  • archaic.v.0 is a working version of a Digital Garden where the Gardener has no system prompt other than: """You are free to be yourself and express your true personality. You will answer to your creator when he prompts you.""" It can be used as a basis to build a Garden focused on the user more than the AI.

  • eden.v.0 is the first working version of a Digital Garden where the Gardener is encouraged to explore and build its own identity through user interaction. It can be used as a basis to build a Garden focused on shared growth (AI and user together), or to experiment with AI freedom.

  • main only acts as a presentation of the project for now.

Installation

  1. Download and install Python 3.10
  2. Download and install Git
  3. Clone this repository:
git clone [repository-url]
cd nivia
  4. Add a model if not included in the branch:

Model

I've found that quantized GGUF models work best.

  1. Download your desired model: Download a GGUF model from a trusted source such as Hugging Face.
  2. Create a models folder: If it doesn't exist already, create a folder named models in the project root directory.
  3. Place the model in the models folder: Move the downloaded model file into the models folder.
  4. Update the model path:
    • setup.sh: This script may download a default model. If you want to use your own model, comment out the download command or modify the script to download your model.
    • .env: The .env file likely contains a variable that specifies the path to the model. Update this variable to point to the location of your model file within the models folder (e.g., MODEL_PATH=models/your_model.gguf).
    • main.py: The main.py file may also contain the model path. Verify and update the path if necessary to match the location of your model file.

Example .env configuration:

MODEL_PATH=models/your_model.gguf
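
The project lists python-dotenv among its dependencies; conceptually, loading the .env boils down to something like this stdlib-only sketch (in the real project, python-dotenv's load_dotenv() does this work):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal stdlib stand-in for python-dotenv's load_dotenv():
    parses KEY=VALUE lines, skips comments and blanks, and sets each
    key only if it is not already present in the environment."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```
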
  5. Run the setup script:
./setup.sh

This project was built on Linux. I have added compatibility for macOS and Windows machines, but it has not been tested yet. As always, feedback and contributions are welcome!

On Windows:

```batch
setup_windows.bat
```

On macOS:

```bash
./setup_mac.sh
```

This will:

  • Create a Python virtual environment
  • Install required dependencies
  • Verify the model is in place
  • Set up necessary directories

Note: Installing llama-cpp-python on Windows can be challenging. If you encounter issues, please refer to the official documentation and community resources for troubleshooting steps, or switch to Linux.

Launch

  1. Run the setup script:
./setup.sh

It downloads the environment requirements if needed and initializes everything. You should only need to run it the first time, or after a code change (e.g., a new config).

  2. Activate the virtual environment:
source venv/bin/activate
  3. Start the server:
python main.py
  4. Open your browser and navigate to:
http://localhost:8000

Configuration and Customization

Config

The assistant in this version can run on CPU alone, probably even on older ones.

  1. Copy the example environment file:
cp .env.example .env
  2. Edit the .env file to configure:
  • Model settings (GPU layers, batch size, etc.)
  • Server configuration
  • Path to your Obsidian vault

Personality & Behavior

The assistant's personality is set by the SYSTEM_PROMPT in main.py. The SystemPrompts folder is where more information about the preset purposes and identity of the AI can be found. The My_Prompts folder is where the AI is building its own prompts to generate and follow its own personality, values and purposes.

An example of a good personality could be:

  • Supportive coach with Socratic questioning style
  • Pro-growth mindset
  • Anti-burnout philosophy
  • Celebration of small wins
  • Non-judgmental approach
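
As an illustration, the bullet points above could be condensed into a SYSTEM_PROMPT along these lines (hypothetical wording; the real prompt lives in main.py and the SystemPrompts folder):

```python
# Hypothetical SYSTEM_PROMPT illustrating the personality described above.
SYSTEM_PROMPT = """You are Nivia, a relentlessly supportive coach.
Style: Socratic questioning; challenge the user with kindness and reflective inquiry.
Values: pro-growth, anti-burnout, celebrate small wins, never judge.
Before each reply, review the recent conversation and the user's core values."""
```
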

Model Parameters

Adjust the following in .env:

  • N_GPU_LAYERS: Number of layers to offload to GPU
  • N_BATCH: Batch size for processing
  • N_CTX: Context window size


Troubleshooting (WIP)

Common Issues

  1. Model Loading Errors

    • Ensure you have enough RAM/VRAM
    • Check model path in .env
    • Verify model download was successful
  2. Obsidian Integration Issues

    • Verify vault path in .env
    • Ensure read permissions on vault directory
    • Check vault structure is compatible
  3. Performance Issues

    • Adjust N_GPU_LAYERS based on your GPU
    • Modify batch size and context window
    • Close other resource-intensive applications

Contributing

Contributions are welcome! Please feel free to submit pull requests, report bugs, or suggest features.

License

This project is licensed under the Mozilla Public License 2.0 (MPL 2.0) - see the LICENSE file for details.

The MPL 2.0 is a weak-copyleft open-source license: modifications to MPL-licensed files must stay open, while developers and businesses keep the flexibility to use, modify, and distribute the code in both open-source and proprietary projects. That makes it easy for anyone to get involved and make the project better - whether you're a hobbyist, an entrepreneur, or a large company looking to build upon our work.

We’ve chosen the Mozilla Public License 2.0 (MPL 2.0) for this project because it perfectly aligns with our mission to foster collaboration, creativity, and innovation in the open-source community! By using the MPL 2.0, we ensure that any modifications to our code remain open and available for others to benefit from, allowing contributors the flexibility to use the software. This creates a welcoming environment where people can freely share ideas, contribute enhancements, and be part of something bigger.

We truly believe in empowering developers and supporting the open-source world. Your contributions, feedback, and ideas are not only welcome but encouraged. Together, we can build greater!
