🧠 Artificial Mind Project V0.2

AI Designing and Building AI

Project Directed by: Steven Fisher
Designed by: ChatGPT
Implemented using: Cursor
Powered by: Claude


🎯 Project Overview

This is an advanced Artificial Mind system that represents a unique collaboration between human direction and AI capabilities. The project demonstrates AI designing and building AI, showcasing the potential for recursive intelligence development, where AI systems contribute to their own evolution and to the creation of more sophisticated AI architectures.

Live Demo

A short demo video of the project running: YouTube Demo Video

🌟 Key Philosophy

This project embodies the concept of "AI Building AI" - where artificial intelligence systems are not just tools, but active participants in the design and implementation of more advanced AI systems. It represents a step toward recursive self-improvement and collaborative intelligence development.


πŸ—οΈ System Architecture

The Artificial Mind system consists of several interconnected components that work together to create a comprehensive artificial intelligence platform:

Core Components

  • 🧠 FSM Engine - Finite State Machine for cognitive state management
  • 🎯 HDN (Hierarchical Decision Network) - AI planning and execution system with ethical safeguards
  • ⚖️ Principles API - Ethical decision-making system for AI actions
  • 🎪 Conversational Layer - Natural language interface with chain-of-thought visibility
  • 🔧 Tool System - Extensible tool framework for AI capabilities
  • 📊 Monitor UI - Real-time visualization and control interface with Chain of Thought tab
  • 🧠 Thinking Mode - Real-time AI introspection and transparency
  • 🔌 MCP Knowledge Server - Exposes knowledge bases (Neo4j, Weaviate) as MCP tools for LLM access
  • 🔍 Coherence Monitor - Cross-system consistency checking and cognitive integrity system

Advanced Features

  • Real-time Thought Expression - See inside the AI's reasoning process with Chain of Thought UI
  • MCP Knowledge Integration - Knowledge bases (Neo4j, Weaviate) exposed as MCP tools for LLM access
  • Telegram Bot Integration - Chat with the AGI, run tools, and see thoughts directly from Telegram
  • Database-First Queries - LLM can query knowledge bases before generating responses
  • Ethical Safeguards - Built-in principles checking for all actions
  • Hierarchical Planning - Multi-level task decomposition and execution
  • Natural Language Interface - Conversational AI with full transparency and thought storage
  • Tool Integration - Extensible framework for AI capabilities with composite tool provider
  • Knowledge Growth - Continuous learning and adaptation
  • Focused Learning - System focuses on promising areas and learns from outcomes
  • Intelligent Goal Routing (Jan 2026) - Goals automatically routed to optimal execution paths (knowledge queries, tool calls, reasoning engine, or code generation)
  • Unified Goal Management System (Jan 2026) - All FSM autonomy activities (dream mode, hypothesis testing, coherence monitoring, active learning) post goals to Goal Manager for centralized workflow creation and UI visibility
  • Cognitive Integrity - Coherence monitoring with performance-optimized belief checking and behavior loop deduplication
  • Meta-Learning - System learns about its own learning process
  • Semantic Concept Discovery - LLM-based concept extraction with understanding
  • Intelligent Knowledge Filtering - LLM-based assessment of novelty and value to prevent storing obvious/duplicate knowledge
  • Session Management - Conversation sessions with message previews and thought history
  • Daily Summary Pipeline - Nightly autonomous analysis that produces a human-readable daily summary of system activity
  • Multi-Modal Memory System - Unified working, episodic (Qdrant), and semantic (Neo4j) memory used for planning, reasoning, and learning
  • Memory Consolidation & Compression - Periodic pipeline that compresses redundant episodes, promotes stable patterns to semantic memory, archives stale traces, and extracts skill abstractions from repeated workflows
  • Cross-System Consistency Checking - Global coherence monitor that detects inconsistencies across FSM, HDN, and Self-Model, generating self-reflection tasks to resolve contradictions, policy conflicts, goal drift, and behavior loops
  • Explanation-Grounded Learning Feedback - Post-hoc evaluation of hypothesis accuracy, explanation quality, and reasoning alignment after each goal completion, which then updates inference weighting, confidence scaling, and exploration heuristics to continuously improve reasoning quality
  • Active Learning Loops - Query-driven learning system that identifies high-uncertainty concepts, generates targeted data acquisition plans, and prioritizes experiments that reduce uncertainty fastest, transforming curiosity from opportunistic scanning into structured inquiry (runs even when a domain has no Neo4j concepts yet, by using beliefs/hypotheses/goals already in Redis)

📚 Documentation

🏛️ Architecture & Design

🧠 AI & Reasoning

💬 Interfaces & Communication

⚖️ Ethics & Safety

🔧 Implementation & Development

🐳 Infrastructure & Deployment

📊 Monitoring & Analysis


🚀 Quick Start

🎯 Super Quick Start (5 minutes)

# 1. Clone the repository
git clone https://github.com/stevef1uk/artificial_mind.git
cd artificial_mind

# 2. Restart the entire system (infrastructure + services)
./restart.sh

# 3. Open your browser to http://localhost:8082

The restart.sh script automatically:

  • Stops all application services
  • Restarts infrastructure (Redis, Neo4j, Weaviate, NATS)
  • Starts all application services
  • Provides status check URLs
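
Once restart.sh finishes, a quick way to confirm the stack is up is to hit the health endpoints and the Monitor UI (default ports shown; adjust if you changed them):

# HDN server health
curl http://localhost:8081/health

# FSM server health
curl http://localhost:8083/health

# Monitor UI responds (or just open it in a browser)
curl -I http://localhost:8082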

Alternative (if you prefer manual control):

# Start infrastructure only
docker compose up -d

# Start app services without touching infra (safer on macOS)
./scripts/start_servers.sh --skip-infra

📋 Prerequisites

macOS Users:

  • The Monitor UI builds natively on macOS without CGO dependencies
  • Use make build-monitor or make build-macos to build
  • Ensure Go 1.21+ is installed: go version

βš™οΈ Configuration

  1. Copy the environment template:

    cp env.example .env
  2. Edit the configuration with your settings (see Configuration Guide):

    nano .env

    The .env file contains all configuration including:

    • LLM Provider Settings (OpenAI, Anthropic, Ollama, Mock)
    • Service URLs (Redis, NATS, Neo4j, Weaviate)
    • Database Configuration (Neo4j credentials, Qdrant URL)
    • Docker Resource Limits (Memory, CPU, PIDs)
    • Performance Tuning (Concurrent executions, timeouts)
    • Telegram Configuration (Bot token, allowed users)
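
    As a rough illustration only, a .env might look like the sketch below; the variable names here are hypothetical placeholders, and the authoritative list of keys lives in env.example:

    # Illustrative sketch - real key names are defined in env.example
    LLM_PROVIDER=ollama                  # OpenAI, Anthropic, Ollama, or Mock
    REDIS_URL=redis://localhost:6379
    NATS_URL=nats://localhost:4222
    NEO4J_URI=bolt://localhost:7687
    NEO4J_PASSWORD=changeme
    TELEGRAM_BOT_TOKEN=<your-bot-token>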

🐳 Docker Setup (Development)

For x86_64 Systems (Intel/AMD)

# Start all services with x86_64 optimized images
docker-compose -f docker-compose.x86.yml up -d

# Check status
docker-compose -f docker-compose.x86.yml ps

# View logs
docker-compose -f docker-compose.x86.yml logs -f

For ARM64 Systems (Raspberry Pi, Apple Silicon)

# Start all services with ARM64 images (prefer Compose v2; use docker-compose if v2 is unavailable)
docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs -f

▶️ Running App Services Without Managing Infra

If you already started the infrastructure with Compose, you can start just the Go services, without touching Docker ports, using the --skip-infra flag:

./scripts/start_servers.sh --skip-infra

This avoids killing Docker Desktop proxy processes on macOS and prevents daemon disruptions.

Multi-Architecture Build

# Build for multiple architectures
./scripts/build-multi-arch.sh -r your-registry.com -t latest --push

# Or use Makefile for local builds
make build-x86      # Build for x86_64
make build-arm64    # Build for ARM64
make build-all-archs # Build for both

☸️ Kubernetes Setup (Production)

# Deploy to k3s cluster on ARM Raspberry Pi
kubectl apply -f k3s/namespace.yaml
kubectl apply -f k3s/pvc-*.yaml
kubectl apply -f k3s/redis.yaml -f k3s/weaviate.yaml -f k3s/neo4j.yaml -f k3s/nats.yaml
kubectl apply -f k3s/principles-server.yaml -f k3s/hdn-server.yaml -f k3s/goal-manager.yaml -f k3s/fsm-server.yaml -f k3s/monitor-ui.yaml

# Check deployment
kubectl -n agi get pods,svc

Note: All Kubernetes configurations use kubernetes.io/arch: arm64 node selectors and are optimized for ARM Raspberry Pi hardware with Drone CI execution methods.

See k3s/README.md for detailed Kubernetes deployment instructions.

🔧 Manual Setup (Development)

# Build all components
make build

# Start services individually
./bin/principles-server &
./bin/hdn-server -mode=server &
./bin/goal-manager -agent=agent_1 &
./bin/fsm-server &

🍎 Building Monitor UI on macOS

The Monitor UI can be built natively on macOS:

# Build monitor UI for macOS
make build-monitor

# Or build specifically for macOS (explicit)
make build-macos

# The binary will be created at: bin/monitor-ui

Note: The monitor UI uses pure Go (no CGO dependencies), so it builds cleanly on macOS without any special requirements. Just ensure you have Go 1.21+ installed.

Troubleshooting:

  • If you encounter issues, ensure you're using Go 1.21 or later: go version
  • The monitor UI doesn't require CGO, so it should build without any C compiler dependencies
  • If templates aren't found, ensure you're running from the project root directory
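
Once built, a minimal way to try it (assuming the default setup) is to run the binary from the project root and open the UI in your browser:

# Run the Monitor UI from the project root so templates are found
./bin/monitor-ui

# Then open the UI
open http://localhost:8082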

🧪 Test Your Setup

# Test basic functionality
curl http://localhost:8081/health

# Test chat with thinking mode
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello! Think out loud about what you can do.", "show_thinking": true}'

# Test specific LLM provider
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What LLM provider are you using?", "show_thinking": true}'

πŸ› οΈ Utility Scripts

The project includes several utility scripts for common operations:

Database Management

Clean All Databases (scripts/clean_databases.sh):

# Thoroughly clean all databases (stops services first)
./scripts/clean_databases.sh --confirm

# This script:
# - Stops all services to prevent key recreation
# - Clears Redis (all keys)
# - Clears Neo4j (all nodes and relationships)
# - Clears Weaviate (all collections)
# - Cleans persistent data directories
# - Restarts containers

Clean Reasoning Traces (scripts/clean_reasoning_traces.sh):

# Clean old reasoning traces from Redis (reduces UI clutter)
./scripts/clean_reasoning_traces.sh

# This script:
# - Trims reasoning traces to 10 per key
# - Trims explanations to 5 per key
# - Helps reduce UI spam after database cleanup

Clean All Data (scripts/clean_all.sh):

# Complete cleanup including logs
./scripts/clean_all.sh --confirm

# This script:
# - Clears all log files
# - Clears Redis, Neo4j, and Weaviate
# - More comprehensive than clean_databases.sh

System Management

Restart System (restart.sh):

# Quick restart of entire system (recommended for quick start)
./restart.sh

# This script:
# - Stops all application services
# - Restarts infrastructure (Docker Compose)
# - Starts all application services
# - Provides status check URLs
# - Simplifies the startup process to a single command

Note: The restart.sh script is the simplest way to get started. It handles all the complexity of starting infrastructure and services in the correct order.

Using Make Targets:

# Clean databases via Makefile
make clean-databases

# Full reset (stop → clean → restart)
make reset-all

# Clear Redis only
make clear-redis

# Clear all databases (requires confirmation)
make clear-redis CONFIRM=YES

See Database Cleanup Guide for detailed information.

πŸ” Secure Packaging (Optional)

Create security files for production:

# Create secure directory and keypairs
mkdir -p secure/
openssl genrsa -out secure/customer_private.pem 2048
openssl rsa -in secure/customer_private.pem -pubout -out secure/customer_public.pem
openssl genrsa -out secure/vendor_private.pem 2048
openssl rsa -in secure/vendor_private.pem -pubout -out secure/vendor_public.pem
echo "your-token-content-here" > secure/token.txt

See Secure Packaging Guide for details.


🎯 Key Features

🧠 Thinking Mode (NEW!)

Experience real-time AI introspection with our revolutionary thinking mode:

{
  "message": "Please learn about black holes and explain them to me",
  "show_thinking": true
}

Features:

  • Real-time thought streaming via WebSockets/SSE
  • Multiple thought styles (conversational, technical, streaming)
  • Confidence visualization and decision tracking
  • Tool usage monitoring and execution transparency
  • Educational interface for understanding AI reasoning

📋 Activity Log (NEW!)

See what the system is doing in plain English with the activity log:

# View recent activities
curl http://localhost:8083/activity?limit=20

Features:

  • Human-readable activity log - See state transitions, hypothesis generation, knowledge growth
  • Real-time monitoring - Activities logged as they happen
  • Easy debugging - Understand why the system made certain decisions
  • Learning insights - Track when and how the knowledge base grows
  • Hypothesis tracking - Follow hypothesis generation and testing cycles

See Activity Log Documentation for complete details.

βš–οΈ Ethical AI

  • Pre-execution checking - All actions validated before execution
  • Dynamic rule loading - Update ethical rules without restarting
  • Fail-safe design - Continues operation with safety checks
  • Transparent decision-making - Clear reasoning for all decisions

🎯 Hierarchical Planning

  • Multi-level task decomposition - Break complex tasks into manageable steps
  • Dynamic task analysis - Handles LLM-generated tasks intelligently
  • Context-aware execution - Maintains context across task hierarchies
  • Progress tracking - Real-time monitoring of task execution

💬 Natural Language Interface

  • Conversational AI - Natural language interaction with full transparency
  • Intent recognition - Understands user goals and context
  • Multi-modal communication - Text, structured data, and visual interfaces
  • Session management - Persistent conversation context
  • Telegram Integration - Secure, whitelist-based chat interface with rich formatting and tools

💻 Multi-Language Code Generation & Execution

The system supports intelligent code generation and execution in multiple programming languages:

  • 🐍 Python - Full support with built-in libraries and external package management
  • 🦀 Rust - Compilation and execution with borrow checker error fixing
  • 🐹 Go - Native Go compilation and execution
  • ☕ Java - Java compilation with automatic class name detection
  • 📜 JavaScript/Node.js - Node.js execution with npm package support

Features:

  • Automatic language detection from natural language requests
  • Intelligent code generation using LLM with language-specific prompts
  • Docker-based execution in isolated containers for safety
  • Automatic error fixing - System retries and fixes compilation errors using LLM feedback
  • Language-specific guidance - Specialized error fixing for Rust borrow checker, Go compilation, JavaScript runtime errors
  • Code validation - Multi-step validation with retry mechanism
  • Artifact generation - Saves generated code as project artifacts

Example Usage:

# Request code in any supported language
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_world",
    "description": "Create a Rust program that prints Hello from Rust",
    "language": "rust",
    "force_regenerate": true
  }'

The system will automatically:

  1. Detect the requested language (or infer from description)
  2. Generate appropriate code with language-specific syntax
  3. Compile/execute in a Docker container
  4. Fix any errors automatically using LLM feedback
  5. Return the execution results and generated code

🔌 API Endpoints

Core Services

Service          Port   Description
Principles API   8080   Ethical decision-making
HDN Server       8081   AI planning and execution
Monitor UI       8082   Real-time visualization
FSM Server       8083   Cognitive state management
Telegram Bot     -      Chat interface (polls Telegram API)

Key Endpoints

🧠 Thinking Mode

  • POST /api/v1/chat - Chat with thinking mode enabled
  • GET /api/v1/chat/sessions/{id}/thoughts - Get AI thoughts
  • GET /api/v1/chat/sessions/{id}/thoughts/stream - Stream thoughts in real-time
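
For example, after chatting with show_thinking enabled (see Usage Examples below), the stored thoughts for a session can be fetched with:

# Fetch recorded thoughts for a session (session id comes from your chat request)
curl http://localhost:8081/api/v1/chat/sessions/demo_session/thoughts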

🎯 Task Execution

  • POST /api/v1/interpret/execute - Natural language task execution
  • POST /api/v1/hierarchical/execute - Complex task planning
  • POST /api/v1/docker/execute - Code execution in containers
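
Worked examples for the interpreter and Docker endpoints appear in Usage Examples below. For the hierarchical planner, a request might look like the sketch that follows; the payload fields are illustrative assumptions rather than the confirmed schema, so check the API documentation for the exact field names:

# Sketch only - field names are assumed, not confirmed
curl -X POST http://localhost:8081/api/v1/hierarchical/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "research_summary",
    "description": "Research a topic and produce a structured summary"
  }'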

🔧 Tools & Capabilities

  • GET /api/v1/tools - List available tools
  • POST /api/v1/tools/execute - Execute specific tools
  • GET /api/v1/intelligent/capabilities - AI capabilities
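
For example, the read-only endpoints can be queried directly (assuming the HDN server's default port 8081):

# List available tools
curl http://localhost:8081/api/v1/tools

# Query the AI's current capabilities
curl http://localhost:8081/api/v1/intelligent/capabilities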

📊 FSM Monitoring & Activity Log

  • GET /health - FSM server health check
  • GET /status - Full FSM status and metrics
  • GET /thinking - Current thinking process and state
  • GET /activity?limit=50 - Activity log - See what the system is doing in plain English
  • GET /timeline?hours=24 - State transition timeline
  • GET /hypotheses - Active hypotheses
  • GET /episodes?limit=10 - Recent learning episodes
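
These all live on the FSM server (port 8083 by default), so they can be exercised with plain curl:

# Full FSM status and metrics
curl http://localhost:8083/status

# State transitions over the last 24 hours
curl "http://localhost:8083/timeline?hours=24"

# Currently active hypotheses
curl http://localhost:8083/hypotheses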

🎨 Usage Examples

Basic Chat with Thinking Mode

curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Explain quantum computing in simple terms",
    "show_thinking": true,
    "session_id": "demo_session"
  }'

Real-time Thought Monitoring

# Stream AI thoughts in real-time
curl http://localhost:8081/api/v1/chat/sessions/demo_session/thoughts/stream

Natural Language Task Execution

curl -X POST http://localhost:8081/api/v1/interpret/execute \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Scrape https://example.com and analyze the content"
  }'

Code Generation and Execution

The system supports intelligent code generation and execution in Python, Rust, Go, Java, and JavaScript. Simply describe what you want in natural language, and the system will generate and execute code in the appropriate language.

Python Example:

curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "fibonacci",
    "description": "Create a Python program that calculates the Fibonacci sequence",
    "language": "python"
  }'

Rust Example:

curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_rust",
    "description": "Create a Rust program that prints Hello from Rust",
    "language": "rust"
  }'

Go Example:

curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_go",
    "description": "Create a Go program that prints Hello from Go",
    "language": "go"
  }'

Java Example:

curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_java",
    "description": "Create a Java program that prints Hello from Java",
    "language": "java"
  }'

JavaScript Example:

curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_js",
    "description": "Create a JavaScript program that prints Hello from JavaScript",
    "language": "javascript"
  }'

Direct Code Execution (if you already have code):

curl -X POST http://localhost:8081/api/v1/docker/execute \
  -H "Content-Type: application/json" \
  -d '{
    "code": "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
    "language": "python"
  }'

View System Activity Log

See what the system is doing in real-time:

# Get recent activity (last 20 activities)
curl http://localhost:8083/activity?limit=20

# Watch activity in real-time (updates every 2 seconds)
watch -n 2 'curl -s http://localhost:8083/activity?limit=5 | jq -r ".activities[] | \"\(.timestamp | split(\".\")[0]) \(.message)\""'

# Get activity for specific agent
curl "http://localhost:8083/activity?agent_id=agent_1&limit=50"

Example Response:

{
  "activities": [
    {
      "timestamp": "2024-01-15T10:30:00Z",
      "message": "Moved from 'idle' to 'perceive': Ingesting and validating new data",
      "state": "perceive",
      "category": "state_change",
      "details": "Reason: new_input"
    },
    {
      "timestamp": "2024-01-15T10:30:15Z",
      "message": "🧠 Generating hypotheses from facts and domain knowledge",
      "state": "hypothesize",
      "category": "action",
      "action": "generate_hypotheses"
    },
    {
      "timestamp": "2024-01-15T10:30:30Z",
      "message": "Generated 3 hypotheses in domain 'programming'",
      "state": "hypothesize",
      "category": "hypothesis",
      "details": "Domain: programming, Count: 3"
    }
  ],
  "count": 3,
  "agent_id": "agent_1"
}

Activity Categories:

  • state_change - System moved to a new state
  • action - Important action being executed
  • hypothesis - Hypothesis generation or testing
  • learning - Knowledge base growth
  • decision - Decision-making processes
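
Because the response is JSON, these categories are easy to filter from the command line, for example with jq:

# Show only hypothesis-related activity from the last 50 entries
curl -s "http://localhost:8083/activity?limit=50" | jq '[.activities[] | select(.category == "hypothesis")]'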

See Activity Log Documentation for more details.


🧪 Testing

Integration Tests

make test-integration

Component Tests

make test-principles    # Test ethical decision-making
make test-hdn          # Test AI planning system
make test-thinking     # Test thinking mode features

Performance Tests

make test-performance  # Load and stress testing
make test-metrics      # Performance metrics

🔧 Development

Development Mode

make dev  # Start all services with auto-reload

Building Components

Build All:

make build  # Build all components

Build Individual Components:

make build-principles  # Build Principles Server
make build-hdn         # Build HDN Server
make build-monitor     # Build Monitor UI
make build-fsm         # Build FSM Server
make build-goal        # Build Goal Manager

Cross-Platform Builds:

make build-macos       # Build for macOS (darwin/amd64)
make build-linux       # Build for Linux
make build-windows     # Build for Windows
make build-arm64       # Build for ARM64
make build-x86         # Build for x86_64
make build-all-archs   # Build for multiple architectures

Monitor UI on macOS:

# The monitor UI builds cleanly on macOS without CGO dependencies
make build-monitor

# Or use the explicit macOS target
make build-macos

# Verify the build
./bin/monitor-ui --help

Code Quality

make fmt      # Format code
make lint      # Lint code
make coverage  # Generate coverage report

Adding New Features

  1. Create feature branch
  2. Implement changes
  3. Add tests
  4. Update documentation
  5. Submit pull request

🌟 Innovation Highlights

AI Building AI

This project represents a unique approach where AI systems actively participate in their own development and the creation of more advanced AI architectures.

Real-time Transparency

The thinking mode provides unprecedented insight into AI decision-making processes, enabling trust and understanding.

Ethical by Design

Built-in ethical safeguards ensure all AI actions are evaluated against moral principles before execution.

Hierarchical Intelligence

Multi-level planning and execution capabilities that can handle complex, multi-step tasks intelligently.

Continuous Learning

The system grows and adapts through experience, demonstrating true learning capabilities.

Focused and Successful Learning (NEW!)

The system now includes six major improvements for more focused and successful learning:

  1. Goal Outcome Learning: Tracks which goals succeed/fail and learns from outcomes
  2. Enhanced Goal Scoring: Uses historical success data to prioritize goals
  3. Hypothesis Value Pre-Evaluation: Filters low-value hypotheses before testing
  4. Focused Learning Strategy: Focuses on promising areas (70% focused, 30% exploration)
  5. Meta-Learning System: Learns about its own learning process
  6. Improved Concept Discovery: Uses LLM-based semantic analysis instead of pattern matching

See docs/LEARNING_FOCUS_IMPROVEMENTS.md for detailed information.


🤝 Contributing

We welcome contributions from the AI and research community! This project represents a collaborative effort between human intelligence and artificial intelligence.

How to Contribute

  1. Fork the repository
  2. Create a feature branch
  3. Implement your changes
  4. Add comprehensive tests
  5. Update documentation
  6. Submit a pull request

Areas for Contribution

  • New AI capabilities - Extend the tool system
  • Ethical frameworks - Improve the principles system
  • Interface improvements - Enhance user experience
  • Performance optimization - Improve system efficiency
  • Documentation - Help others understand the system

📄 License

This project is licensed under the MIT License with Attribution Requirement.

Key Points:

  • ✅ Free to use for personal and commercial projects
  • ✅ Free to modify and create derivative works
  • ✅ Free to distribute and sell
  • 📝 Must attribute Steven Fisher as the original author
  • 📝 Must include this license file in derivative works

Attribution Requirements:

When using this software, you must:

  1. Include the original copyright notice: "Copyright (c) 2025 Steven Fisher"
  2. Display "Steven Fisher" in README files, credits, or documentation
  3. Include this LICENSE file in your project
  4. Preserve attribution in any derivative works

See the LICENSE file for complete terms.

This license ensures Steven Fisher receives proper credit while allowing maximum freedom for others to use and build upon this work.


πŸ™ Acknowledgments

  • Steven Fisher - Project Direction and Vision
  • ChatGPT - System Design and Architecture
  • Cursor - Development Environment and Tools
  • Claude - Implementation and Code Generation
  • Open Source Community - Foundational technologies and libraries

"The best way to predict the future is to invent it, and the best way to invent the future is to have AI help us build it."

This project demonstrates that the future of AI development is not just human-led or AI-led, but a collaborative partnership between human creativity and artificial intelligence capabilities.
