
⚠️ ARCHIVED PROJECT - NO LONGER MAINTAINED

Notice: This project has been archived and is no longer under active development. Native memory features in modern AI models and tools (such as Claude Projects, ChatGPT Memory, and enhanced context handling in GitHub Copilot) now largely address this repository's original purpose. It remains available for historical and reference purposes.


AI Memory Persistence

A modular, extensible memory management system for AI agents like GitHub Copilot CLI and OpenAI Codex. This system provides intelligent storage, retrieval, and synchronization of contextual memories across different hosts and environments.

Features

  • 📝 Persistent Memory Storage: Store various types of memories (facts, preferences, conversations, code context, commands, errors, solutions)
  • 🔍 Intelligent Retrieval: Context-aware search with flexible filtering by host, environment, project, tags, and importance
  • 🔄 Cloud Sync: Synchronize memories across devices using GitHub or other cloud providers
  • 🏷️ Rich Context: Each memory includes host, environment, project, user, and custom tags
  • 📊 Access Tracking: Automatically tracks access frequency and patterns
  • 🎯 Importance Scoring: Prioritize memories with importance levels (1-10)
  • 🔌 Modular Design: Easy to integrate with IDE plugins, shell workflows, or custom applications
  • 💻 CLI Interface: Full-featured command-line interface for all operations

Installation

From Source

git clone https://github.com/GhostwheeI/AI-Memory-Persistence.git
cd AI-Memory-Persistence
pip install -e .

Development Installation

pip install -e ".[dev]"

Quick Start

Store a Memory

# Store a simple fact
ai-memory store "Python uses indentation for code blocks" --type fact --importance 8 --tags python programming

# Store a command
ai-memory store "docker-compose up -d" --type command --tags docker deployment --project myapp

# Store a preference
ai-memory store "Use single quotes for strings" --type preference --importance 7 --tags coding-style

Search Memories

# Search by text
ai-memory search "Python"

# Search with filters
ai-memory search --type fact --min-importance 7 --tags programming

# Search within a project
ai-memory search --project myapp --sort created_at

List All Memories

# List all memories
ai-memory list

# List memories for specific environment
ai-memory list --env production

Update a Memory

ai-memory update <memory-id> --content "Updated content" --importance 9

Delete a Memory

ai-memory delete <memory-id>

Sync Across Devices

# Configure sync repository
ai-memory config set sync.repo_path /path/to/your/sync/repo

# Push local memories to remote
ai-memory sync push

# Pull memories from remote
ai-memory sync pull

# Bidirectional sync
ai-memory sync sync

Python API

Basic Usage

from ai_memory import MemoryManager, MemoryType

# Initialize manager
manager = MemoryManager()

# Store a memory
memory = manager.store(
    content="Use pytest for testing Python code",
    memory_type=MemoryType.PREFERENCE,
    importance=8,
    tags=["python", "testing"]
)

# Search memories
from ai_memory.models import SearchQuery

query = SearchQuery(
    text="pytest",
    min_importance=5,
    tags=["python"]
)
results = manager.search(query)

# Get specific memory
memory = manager.get(memory.id)

# Update memory
manager.update(
    memory_id=memory.id,
    content="Use pytest with coverage for testing Python code",
    importance=9
)

# Delete memory
manager.delete(memory.id)

Advanced Context Management

# Create custom context
context = manager.create_context(
    project="my-web-app",
    environment="production",
    tags=["backend", "api"]
)

# Store with custom context
memory = manager.store(
    content="API rate limit is 1000 requests/hour",
    memory_type=MemoryType.FACT,
    context=context,
    importance=9
)

# Search within context
results = manager.list_all(context)

Sync Operations

from ai_memory.sync import SyncManager, GitHubSyncProvider
from pathlib import Path

# Setup sync
provider = GitHubSyncProvider(repo_path=Path("/path/to/sync/repo"))
sync_manager = SyncManager(manager.storage, provider)

# Push to remote
sync_manager.push()

# Pull from remote (with merge)
sync_manager.pull(merge=True)

# Bidirectional sync
sync_manager.sync()

Architecture

Core Components

  1. MemoryManager: Main interface for memory operations
  2. Storage Backend: Pluggable storage system (JSON today; SQLite planned; see the interface sketch after this list)
  3. Sync Manager: Handles synchronization with remote storage
  4. Context System: Tracks host, environment, project, and other metadata
  5. Search Engine: Intelligent retrieval with multiple filters
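
The storage layer is meant to be swappable. As a rough illustration, a backend might expose an interface like the following; the class and method names here are assumptions for the sketch, not the project's actual API:

# Sketch of a pluggable backend interface; names are illustrative,
# not the project's actual API.
import json
from abc import ABC, abstractmethod
from pathlib import Path

class StorageBackend(ABC):
    """Contract any backend (JSON, SQLite, ...) would implement."""

    @abstractmethod
    def save(self, memory_id: str, record: dict) -> None: ...

    @abstractmethod
    def load(self, memory_id: str) -> dict | None: ...

class JSONStorageBackend(StorageBackend):
    """Keeps every record in one JSON file, as the default backend does."""

    def __init__(self, path: Path):
        self.path = path
        self.path.parent.mkdir(parents=True, exist_ok=True)

    def _read_all(self) -> dict:
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def save(self, memory_id: str, record: dict) -> None:
        data = self._read_all()
        data[memory_id] = record
        self.path.write_text(json.dumps(data, indent=2))

    def load(self, memory_id: str) -> dict | None:
        return self._read_all().get(memory_id)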

Memory Types

  • FACT: General facts and information
  • PREFERENCE: User or system preferences
  • CONVERSATION: Conversation history
  • CODE_CONTEXT: Code-related context
  • COMMAND: Shell commands and scripts
  • ERROR: Error messages and diagnostics
  • SOLUTION: Problem solutions
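
In the Python API these correspond to the MemoryType enum. A plausible sketch of its definition follows; the string values are inferred from the CLI's --type flags and are an assumption, not copied from the source:

# Plausible definition of MemoryType; the member values mirror the
# CLI's --type flags and are assumed, not taken from the project source.
from enum import Enum

class MemoryType(Enum):
    FACT = "fact"
    PREFERENCE = "preference"
    CONVERSATION = "conversation"
    CODE_CONTEXT = "code_context"
    COMMAND = "command"
    ERROR = "error"
    SOLUTION = "solution"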

Storage Structure

~/.local/share/ai-memory/
  memories/
    memories.json         # Main storage file

~/.config/ai-memory/
  config.yaml            # Configuration file

Configuration

Configuration is stored in ~/.config/ai-memory/config.yaml:

storage:
  backend: json
  path: ~/.local/share/ai-memory/memories

sync:
  enabled: false
  provider: github
  auto_sync: false
  sync_interval: 3600
  repo_path: null

context:
  host: your-hostname
  environment: default
  user: your-username

retrieval:
  max_results: 10
  min_importance: 3
  context_matching: flexible  # or 'strict'
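
The flexible and strict matching modes are not spelled out above. The sketch below is one plausible interpretation, not the project's actual logic: strict requires every queried context field to match exactly, while flexible treats fields a memory leaves unset as wildcards.

# Illustrative guess at strict vs. flexible context matching; this is
# an assumption, not the project's implementation.
def context_matches(memory_ctx: dict, query_ctx: dict, mode: str = "flexible") -> bool:
    if mode == "strict":
        # Every queried field must be present and equal.
        return all(memory_ctx.get(key) == value for key, value in query_ctx.items())
    # Flexible: fields the memory leaves unset act as wildcards.
    return all(memory_ctx.get(key) in (None, value) for key, value in query_ctx.items())

# A memory with no project recorded still matches under flexible mode:
print(context_matches({"host": "dev-box"}, {"host": "dev-box", "project": "myapp"}))            # True
print(context_matches({"host": "dev-box"}, {"host": "dev-box", "project": "myapp"}, "strict"))  # False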

View/Update Configuration

# View all configuration
ai-memory config show

# Get specific value
ai-memory config get storage.path

# Set value
ai-memory config set sync.auto_sync true

Integration Examples

Shell Integration (Bash/Zsh)

Add to your .bashrc or .zshrc:

# Store successful commands
store_command() {
    ai-memory store "$1" --type command --tags shell
}

# Quick memory search
mem() {
    ai-memory search "$@"
}

IDE Plugin Integration

# Example for VS Code extension
from ai_memory import MemoryManager, MemoryType

manager = MemoryManager()

# Store code snippet context
manager.store(
    content="Function to parse JSON config files",
    memory_type=MemoryType.CODE_CONTEXT,
    tags=["json", "config", "parsing"],
    metadata={"file": "config.py", "line": 42}
)

CI/CD Integration

# Store deployment information
ai-memory store "Deployed version 1.2.3 to production" \
    --type fact \
    --env production \
    --project myapp \
    --tags deployment release \
    --importance 8

Development

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=ai_memory --cov-report=html

# Run specific test file
pytest tests/test_memory_manager.py

Code Formatting

# Format with black
black src/ai_memory tests

# Lint with ruff
ruff check src/ai_memory tests

Roadmap

  • SQLite storage backend for better query performance
  • Vector embeddings for semantic search
  • Multiple sync providers (S3, Dropbox, etc.)
  • Web interface for memory management
  • Real-time sync with WebSocket support
  • Memory deduplication and merging
  • Export/import functionality
  • Privacy controls and encryption
  • Team/shared memory spaces
  • Integration templates for popular IDEs

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License - see the LICENSE file for details

Support

For issues, questions, or suggestions, please open an issue on GitHub.

AI-Memory-Persistence

A GitHub-hosted persistent memory system for GitHub Copilot and OpenAI Codex agents — designed to track, sync, and intelligently manage multi-host environments, settings, and memory state.

🎯 Overview

AI-Memory-Persistence provides a modular, multi-host-aware framework for AI agents to maintain persistent memory across sessions and environments. This enables agents to:

  • Remember context across different development sessions
  • Sync preferences and learned patterns between multiple machines
  • Maintain consistent coding standards and conventions
  • Track project-specific settings and configurations
  • Share knowledge across different AI agent instances

🏗️ Project Structure

persistent-agent-memory/
│
├── memory/                # Stores persistent memory
│   ├── host_template.json # Template memory schema for host-specific data
│   └── global.json        # Shared memory across all hosts
│
├── agents/                # Logic for specific AI agent integrations
│                          # (GitHub Copilot, OpenAI Codex, CLI tools, etc.)
├── scripts/               # CLI and automation scripts
│
├── README.md              # This file
├── .gitignore             # Git ignore patterns
└── LICENSE                # MIT License

📋 Memory Schema

Host Template (memory/host_template.json)

Each host can maintain its own memory file based on the template schema, containing:

  • host_id: Unique identifier for the host/machine
  • metadata: Creation date, last update, agent version
  • environment: OS, hostname, architecture, workspace paths
  • preferences: Coding style, tool preferences, project settings
  • context: Recent tasks, active projects, frequently used commands
  • learned_patterns: Code patterns, user behaviors, project conventions
  • notes: Free-form notes and observations
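
As a concrete illustration, a populated host file might look like the following, built here with Python for readability. The top-level keys come from the list above; the field names nested inside each section are assumptions:

# Hypothetical host memory file matching the schema above; field names
# inside each section are illustrative assumptions.
import json
import os

host_memory = {
    "host_id": "dev-laptop-01",
    "metadata": {
        "created": "2024-01-15T09:00:00",
        "last_updated": "2024-01-20T17:30:00",
        "agent_version": "1.0",
    },
    "environment": {"os": "Linux", "hostname": "dev-laptop-01",
                    "arch": "x86_64", "workspaces": ["~/projects"]},
    "preferences": {"coding_style": "PEP 8, single quotes", "tools": ["pytest", "ruff"]},
    "context": {"recent_tasks": ["Set up CI pipeline"], "active_projects": ["myapp"],
                "frequent_commands": ["git status"]},
    "learned_patterns": {"code_patterns": [], "user_behaviors": [], "project_conventions": []},
    "notes": [],
}

os.makedirs("persistent-agent-memory/memory", exist_ok=True)
with open("persistent-agent-memory/memory/dev-laptop-01.json", "w") as f:
    json.dump(host_memory, f, indent=2)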

Global Memory (memory/global.json)

Shared memory accessible by all hosts and agents:

  • shared_preferences: Default coding standards, common patterns
  • cross_host_context: Synchronized settings, shared knowledge base
  • agents: Registered agents and their configurations
  • notes: Global notes and documentation
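
Similarly, a hypothetical shape for the global file; the structure nested under the top-level keys is an illustrative assumption:

# Hypothetical global.json content covering the keys above; the nested
# structure is an illustrative assumption.
global_memory = {
    "shared_preferences": {"default_style": "PEP 8", "common_patterns": ["prefer type hints"]},
    "cross_host_context": {"synced_settings": {}, "knowledge_base": []},
    "agents": {"github-copilot": {"enabled": True}, "codex": {"enabled": True}},
    "notes": [],
}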

🚀 Getting Started

For AI Agents

  1. Clone this repository to access the memory files
  2. Read the appropriate memory files based on your context:
    • For host-specific data: Create/read memory/<your_host_id>.json based on the template
    • For shared data: Read memory/global.json
  3. Update memory files as you learn new patterns or preferences
  4. Commit changes back to the repository to persist learning
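
Putting the cycle together, a minimal Python sketch of read, update, and commit; the clone path matches the one used in the CLI instructions below, and the specific task string is illustrative:

# Minimal sketch of the read-update-commit cycle described above;
# the clone path and task string are illustrative.
import json
import socket
import subprocess
from datetime import datetime
from pathlib import Path

repo = Path.home() / "ai-memory"
memory_file = repo / "persistent-agent-memory" / "memory" / f"{socket.gethostname()}.json"

# 1. Read the host-specific memory.
memory = json.loads(memory_file.read_text())

# 2. Record what was learned this session.
memory["context"]["recent_tasks"].append("Refactored auth module")
memory["metadata"]["last_updated"] = datetime.now().isoformat()
memory_file.write_text(json.dumps(memory, indent=2))

# 3. Commit the change back so the learning persists.
subprocess.run(["git", "-C", str(repo), "add", "."], check=True)
subprocess.run(["git", "-C", str(repo), "commit", "-m", "Update memory: refactored auth module"], check=True)
subprocess.run(["git", "-C", str(repo), "push"], check=True)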

For Developers

  1. Clone this repository:

    git clone https://github.com/GhostwheeI/AI-Memory-Persistence.git
    cd AI-Memory-Persistence
  2. Set up your host-specific memory:

    cp persistent-agent-memory/memory/host_template.json persistent-agent-memory/memory/$(hostname).json
  3. Customize your memory files with your preferences and environment details

🤖 Agent Integration

The agents/ directory is designed to contain integration code for different AI agent types:

  • GitHub Copilot: Integration helpers for Copilot to read/write memory
  • OpenAI Codex: Codex-specific memory management
  • CLI Tools: Command-line utilities for managing memory
  • Custom Agents: Your own AI agent integrations

🖥️ CLI Usage Instructions

Using with GitHub Copilot CLI

GitHub Copilot CLI (gh copilot) can leverage this memory system to maintain context across your terminal sessions.

Setup

  1. Ensure you have GitHub CLI and Copilot CLI installed:

    gh extension install github/gh-copilot
  2. Clone this repository to a known location:

    git clone https://github.com/GhostwheeI/AI-Memory-Persistence.git ~/ai-memory
  3. Initialize your host memory:

    cp ~/ai-memory/persistent-agent-memory/memory/host_template.json \
       ~/ai-memory/persistent-agent-memory/memory/$(hostname).json

Usage

When using Copilot CLI, you can reference your memory files in prompts:

# Ask Copilot to consider your coding preferences
gh copilot suggest "Write a Python function following my coding style from ~/ai-memory/persistent-agent-memory/memory/$(hostname).json"

# Get command suggestions based on your common commands
gh copilot suggest "Suggest a git command based on my frequently used commands in ~/ai-memory/persistent-agent-memory/memory/$(hostname).json"

# Explain code with context from your project conventions
gh copilot explain "Explain this code considering the patterns in ~/ai-memory/persistent-agent-memory/memory/global.json"

Updating Memory

After completing tasks, update your memory to help Copilot learn:

# Using jq to update memory
jq '.context.recent_tasks += ["Deployed API v2.0"]' \
   ~/ai-memory/persistent-agent-memory/memory/$(hostname).json > temp.json \
   && mv temp.json ~/ai-memory/persistent-agent-memory/memory/$(hostname).json

# Commit and push changes
cd ~/ai-memory
git add .
git commit -m "Update memory: completed API deployment"
git push

Using with OpenAI Codex CLI

OpenAI Codex can be integrated via various CLI tools. Here's how to use this memory system:

Setup with Codex CLI Tools

  1. Install a Codex CLI tool (e.g., openai Python package):

    pip install openai
  2. Set up your OpenAI API key:

    export OPENAI_API_KEY='your-api-key-here'
  3. Clone this repository:

    git clone https://github.com/GhostwheeI/AI-Memory-Persistence.git ~/ai-memory

Using Memory with Codex API

Create a helper script to include memory context in your Codex requests:

#!/usr/bin/env python3
# save as: ~/bin/codex-with-memory

import json
import sys
import os
from openai import OpenAI

client = OpenAI()

# Load memory files
memory_path = os.path.expanduser("~/ai-memory/persistent-agent-memory/memory")
hostname = os.uname().nodename

with open(f"{memory_path}/{hostname}.json", "r") as f:
    host_memory = json.load(f)

with open(f"{memory_path}/global.json", "r") as f:
    global_memory = json.load(f)

# Prepare context from memory
context = f"""
My coding preferences: {json.dumps(host_memory['preferences'], indent=2)}
Recent tasks: {host_memory['context']['recent_tasks'][-5:]}
Project conventions: {json.dumps(global_memory['shared_preferences'], indent=2)}
"""

# Get user prompt
user_prompt = " ".join(sys.argv[1:])

# Make Codex request with memory context
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"Context from persistent memory:\n{context}"},
        {"role": "user", "content": user_prompt}
    ]
)

print(response.choices[0].message.content)

Make it executable:

chmod +x ~/bin/codex-with-memory

Usage Examples

# Generate code with your preferences
codex-with-memory "Write a REST API endpoint for user authentication"

# Get suggestions based on your context
codex-with-memory "What should I work on next based on my recent tasks?"

# Code review with your conventions
codex-with-memory "Review this code: $(cat myfile.py)"

Updating Memory for Codex

Create a helper script to update memory:

#!/usr/bin/env python3
# save as: ~/bin/update-memory

import json
import sys
import os
from datetime import datetime

memory_path = os.path.expanduser("~/ai-memory/persistent-agent-memory/memory")
hostname = os.uname().nodename
memory_file = f"{memory_path}/{hostname}.json"

with open(memory_file, "r") as f:
    memory = json.load(f)

# Update based on command line arguments
if len(sys.argv) < 3:
    sys.exit("Usage: update-memory <task|pattern|note> <value>")

action = sys.argv[1]
value = " ".join(sys.argv[2:])

if action == "task":
    memory['context']['recent_tasks'].append(value)
elif action == "pattern":
    memory['learned_patterns']['code_patterns'].append({"pattern": value, "learned_at": datetime.now().isoformat()})
elif action == "note":
    memory['notes'].append({"content": value, "created_at": datetime.now().isoformat()})
else:
    sys.exit(f"Unknown action: {action} (expected task, pattern, or note)")

memory['metadata']['last_updated'] = datetime.now().isoformat()

with open(memory_file, "w") as f:
    json.dump(memory, f, indent=2)

print(f"Memory updated: {action} = {value}")

Make it executable and record updates:

# Update memory
chmod +x ~/bin/update-memory
update-memory task "Implemented user authentication"
update-memory pattern "Use JWT tokens with 24h expiry"
update-memory note "Remember to update API docs after changes"

📝 Usage Examples

Reading Memory (Python Example)

import json
import socket

# Get current hostname
hostname = socket.gethostname()

# Read host-specific memory
with open(f'persistent-agent-memory/memory/{hostname}.json', 'r') as f:
    host_memory = json.load(f)

# Access preferences
coding_style = host_memory['preferences']['coding_style']

# Read global memory
with open('persistent-agent-memory/memory/global.json', 'r') as f:
    global_memory = json.load(f)

Updating Memory (Python Example)

import json
import socket
from datetime import datetime

# Get current hostname
hostname = socket.gethostname()

# Update host memory
with open(f'persistent-agent-memory/memory/{hostname}.json', 'r+') as f:
    memory = json.load(f)
    memory['metadata']['last_updated'] = datetime.now().isoformat()
    memory['context']['recent_tasks'].append('Updated README')
    f.seek(0)
    json.dump(memory, f, indent=2)
    f.truncate()

🔒 Security & Privacy

  • Sensitive Data: Do not store API keys, passwords, or other sensitive credentials in memory files
  • Personal Information: Be mindful of personal information stored in memory
  • Repository Access: Ensure proper access controls on your repository if using for private projects

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Inspired by the need for persistent context in AI-assisted development
  • Built for GitHub Copilot, OpenAI Codex, and other AI coding assistants
  • Designed with multi-host and multi-agent collaboration in mind

📞 Support

For questions, issues, or suggestions:

  • Open an issue in this repository
  • Check existing issues for similar questions
  • Contribute improvements via Pull Requests

Made with ❤️ for AI-assisted development
