Project Directed by: Steven Fisher
Designed by: ChatGPT
Implemented using: Cursor
Powered by: Claude
This is an advanced Artificial Mind system that represents a unique collaboration between human direction and AI capabilities. The project demonstrates AI designing and building AI, showcasing the potential for recursive intelligence development, where AI systems contribute to their own evolution and to the creation of more sophisticated AI architectures.
A short demo video of the project running: YouTube Demo Video
This project embodies the concept of "AI Building AI" - where artificial intelligence systems are not just tools, but active participants in the design and implementation of more advanced AI systems. It represents a step toward recursive self-improvement and collaborative intelligence development.
The Artificial Mind system consists of several interconnected components that work together to create a comprehensive artificial intelligence platform:
- FSM Engine - Finite State Machine for cognitive state management
- HDN (Hierarchical Decision Network) - AI planning and execution system with ethical safeguards
- Principles API - Ethical decision-making system for AI actions
- Conversational Layer - Natural language interface with chain-of-thought visibility
- Tool System - Extensible tool framework for AI capabilities
- Monitor UI - Real-time visualization and control interface with Chain of Thought tab
- Thinking Mode - Real-time AI introspection and transparency
- MCP Knowledge Server - Exposes knowledge bases (Neo4j, Weaviate) as MCP tools for LLM access
- Coherence Monitor - Cross-system consistency checking and cognitive integrity system
- Real-time Thought Expression - See inside the AI's reasoning process with Chain of Thought UI
- MCP Knowledge Integration - Knowledge bases (Neo4j, Weaviate) exposed as MCP tools for LLM access
- Telegram Bot Integration - Chat with the AGI, run tools, and see thoughts directly from Telegram
- Database-First Queries - LLM can query knowledge bases before generating responses
- Ethical Safeguards - Built-in principles checking for all actions
- Hierarchical Planning - Multi-level task decomposition and execution
- Natural Language Interface - Conversational AI with full transparency and thought storage
- Tool Integration - Extensible framework for AI capabilities with composite tool provider
- Knowledge Growth - Continuous learning and adaptation
- Focused Learning - System focuses on promising areas and learns from outcomes
- Intelligent Goal Routing (Jan 2026) - Goals automatically routed to optimal execution paths (knowledge queries, tool calls, reasoning engine, or code generation)
- Unified Goal Management System (Jan 2026) - All FSM autonomy activities (dream mode, hypothesis testing, coherence monitoring, active learning) post goals to Goal Manager for centralized workflow creation and UI visibility
- Cognitive Integrity - Coherence monitoring with performance-optimized belief checking and behavior loop deduplication
- Meta-Learning - System learns about its own learning process
- Semantic Concept Discovery - LLM-based concept extraction with understanding
- Intelligent Knowledge Filtering - LLM-based assessment of novelty and value to prevent storing obvious/duplicate knowledge
- Session Management - Conversation sessions with message previews and thought history
- Daily Summary Pipeline - Nightly autonomous analysis that produces a human-readable daily summary of system activity
- Multi-Modal Memory System - Unified working, episodic (Qdrant), and semantic (Neo4j) memory used for planning, reasoning, and learning
- Memory Consolidation & Compression - Periodic pipeline that compresses redundant episodes, promotes stable patterns to semantic memory, archives stale traces, and extracts skill abstractions from repeated workflows
- Cross-System Consistency Checking - Global coherence monitor that detects inconsistencies across FSM, HDN, and Self-Model, generating self-reflection tasks to resolve contradictions, policy conflicts, goal drift, and behavior loops
- Explanation-Grounded Learning Feedback - Post-hoc evaluation that scores hypothesis accuracy, explanation quality, and reasoning alignment after each goal completion, then updates inference weighting, confidence scaling, and exploration heuristics to continuously improve reasoning quality
- Active Learning Loops - Query-driven learning system that identifies high-uncertainty concepts, generates targeted data acquisition plans, and prioritizes experiments that reduce uncertainty fastest, transforming curiosity from opportunistic scanning into structured inquiry (runs even when a domain has no Neo4j concepts yet, by using beliefs/hypotheses/goals already in Redis)
- System Overview - High-level system architecture
- Architecture Details - Detailed technical architecture
- Solution Architecture Diagram - Visual system design
- HDN Architecture - Hierarchical Decision Network design
- Thinking Mode - Real-time AI introspection and transparency
- Reasoning & Inference - AI reasoning capabilities
- Reasoning Implementation - Technical implementation details
- Knowledge Growth - Continuous learning system
- Domain Knowledge - Knowledge representation and management
- LLM-Based Knowledge Filtering - Intelligent filtering of novel, valuable knowledge
- Memory Consolidation - Periodic memory optimization and compression system
- Testing Memory Consolidation - Guide for testing consolidation locally
- Cross-System Consistency Checking - Coherence monitor for detecting and resolving inconsistencies across systems
- Explanation-Grounded Learning Feedback - Post-hoc evaluation system that closes the loop between reasoning quality and execution outcomes
- Active Learning Loops - Query-driven learning system for identifying high-uncertainty concepts and generating targeted data acquisition plans
- Goal Manager Integration - Unified goal management system for FSM autonomy activities
- Conversational AI Summary - Natural language interface
- Telegram Bot Integration - Using the Telegram bot interface
- Natural Language Interface - Language processing capabilities
- MCP Knowledge Integration - MCP server for knowledge base access
- MCP Initialization Check - Startup verification of MCP connectivity
- API Reference - Complete API documentation
- Principles Integration - Ethical decision-making system
- Content Safety - Safety mechanisms and content filtering
- Dynamic Integration Guide - Dynamic system integration
- Setup Guide - Complete setup instructions for new users
- Configuration Guide - Docker, LLM, and deployment configuration
- Secure Packaging Guide - Binary encryption and security
- Implementation Summary - Development overview
- Integration Guide - System integration instructions
- Refactoring Plan - Code organization and refactoring
- Tool Metrics - Performance monitoring and metrics
- Docker Compose - Local development deployment
- Kubernetes (k3s) - Production Kubernetes deployment
- Docker Resource Config - Container configuration
- Docker Reuse Strategy - Container optimization
- Tool Metrics - Performance monitoring
- Intelligent Execution - Execution monitoring and analysis
```bash
# 1. Clone the repository
git clone https://github.com/yourusername/agi-project.git
cd agi-project

# 2. Restart the entire system (infrastructure + services)
./restart.sh

# 3. Open your browser to http://localhost:8082
```

The `restart.sh` script automatically:
- Stops all application services
- Restarts infrastructure (Redis, Neo4j, Weaviate, NATS)
- Starts all application services
- Provides status check URLs
Alternative (if you prefer manual control):
```bash
# Start infrastructure only
docker compose up -d

# Start app services without touching infra (safer on macOS)
./scripts/start_servers.sh --skip-infra
```

- Docker & Docker Compose - Download here
- Git - Download here
- Go 1.21+ - Download here (required for building services)
- LLM Provider - OpenAI, Anthropic, or local LLM (see Setup Guide)
macOS Users:
- The Monitor UI builds natively on macOS without CGO dependencies
- Use `make build-monitor` or `make build-macos` to build
- Ensure Go 1.21+ is installed: `go version`
- Copy the environment template:

  ```bash
  cp env.example .env
  ```

- Edit the configuration (see Configuration Guide):

  ```bash
  nano .env
  ```
The `.env` file contains all configuration, including:
- LLM Provider Settings (OpenAI, Anthropic, Ollama, Mock)
- Service URLs (Redis, NATS, Neo4j, Weaviate)
- Database Configuration (Neo4j credentials, Qdrant URL)
- Docker Resource Limits (Memory, CPU, PIDs)
- Performance Tuning (Concurrent executions, timeouts)
- Telegram Configuration (Bot token, allowed users)
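As an illustration, a minimal `.env` might look like the following. The variable names below are hypothetical placeholders, not confirmed settings; always check them against `env.example` in the repository:

```ini
# Hypothetical variable names - confirm against env.example
LLM_PROVIDER=openai                  # openai | anthropic | ollama | mock
OPENAI_API_KEY=replace-me
REDIS_URL=redis://localhost:6379
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=replace-me
TELEGRAM_BOT_TOKEN=replace-me
```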
```bash
# Start all services with x86_64 optimized images
docker-compose -f docker-compose.x86.yml up -d

# Check status
docker-compose -f docker-compose.x86.yml ps

# View logs
docker-compose -f docker-compose.x86.yml logs -f
```

```bash
# Start all services with ARM64 images
docker compose up -d  # prefer v2 syntax if available; otherwise use docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f
```

If you already started infrastructure with Compose, you can start just the Go services without touching Docker ports using the new flag:

```bash
./scripts/start_servers.sh --skip-infra
```

This avoids killing Docker Desktop proxy processes on macOS and prevents daemon disruptions.
```bash
# Build for multiple architectures
./scripts/build-multi-arch.sh -r your-registry.com -t latest --push

# Or use Makefile for local builds
make build-x86        # Build for x86_64
make build-arm64      # Build for ARM64
make build-all-archs  # Build for both
```

```bash
# Deploy to k3s cluster on ARM Raspberry Pi
kubectl apply -f k3s/namespace.yaml
kubectl apply -f k3s/pvc-*.yaml
kubectl apply -f k3s/redis.yaml -f k3s/weaviate.yaml -f k3s/neo4j.yaml -f k3s/nats.yaml
kubectl apply -f k3s/principles-server.yaml -f k3s/hdn-server.yaml -f k3s/goal-manager.yaml -f k3s/fsm-server.yaml -f k3s/monitor-ui.yaml

# Check deployment
kubectl -n agi get pods,svc
```

Note: All Kubernetes configurations use `kubernetes.io/arch: arm64` node selectors and are optimized for ARM Raspberry Pi hardware with Drone CI execution methods.
See k3s/README.md for detailed Kubernetes deployment instructions.
```bash
# Build all components
make build

# Start services individually
./bin/principles-server &
./bin/hdn-server -mode=server &
./bin/goal-manager -agent=agent_1 &
./bin/fsm-server &
```

The Monitor UI can be built natively on macOS:

```bash
# Build monitor UI for macOS
make build-monitor

# Or build specifically for macOS (explicit)
make build-macos

# The binary will be created at: bin/monitor-ui
```

Note: The monitor UI uses pure Go (no CGO dependencies), so it builds cleanly on macOS without any special requirements. Just ensure you have Go 1.21+ installed.
Troubleshooting:
- If you encounter issues, ensure you're using Go 1.21 or later: `go version`
- The monitor UI doesn't require CGO, so it builds without any C compiler dependencies
- If templates aren't found, ensure you're running from the project root directory
```bash
# Test basic functionality
curl http://localhost:8081/health

# Test chat with thinking mode
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello! Think out loud about what you can do.", "show_thinking": true}'

# Test specific LLM provider
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What LLM provider are you using?", "show_thinking": true}'
```

The project includes several utility scripts for common operations:
Clean All Databases (`scripts/clean_databases.sh`):

```bash
# Thoroughly clean all databases (stops services first)
./scripts/clean_databases.sh --confirm

# This script:
# - Stops all services to prevent key recreation
# - Clears Redis (all keys)
# - Clears Neo4j (all nodes and relationships)
# - Clears Weaviate (all collections)
# - Cleans persistent data directories
# - Restarts containers
```

Clean Reasoning Traces (`scripts/clean_reasoning_traces.sh`):
```bash
# Clean old reasoning traces from Redis (reduces UI clutter)
./scripts/clean_reasoning_traces.sh

# This script:
# - Trims reasoning traces to 10 per key
# - Trims explanations to 5 per key
# - Helps reduce UI spam after database cleanup
```

Clean All Data (`scripts/clean_all.sh`):
```bash
# Complete cleanup including logs
./scripts/clean_all.sh --confirm

# This script:
# - Clears all log files
# - Clears Redis, Neo4j, and Weaviate
# - More comprehensive than clean_databases.sh
```

Restart System (`restart.sh`):
```bash
# Quick restart of entire system (recommended for quick start)
./restart.sh

# This script:
# - Stops all application services
# - Restarts infrastructure (Docker Compose)
# - Starts all application services
# - Provides status check URLs
# - Simplifies the startup process to a single command
```

Note: The `restart.sh` script is the simplest way to get started. It handles all the complexity of starting infrastructure and services in the correct order.
Using Make Targets:
```bash
# Clean databases via Makefile
make clean-databases

# Full reset (stop → clean → restart)
make reset-all

# Clear Redis only
make clear-redis

# Clear all databases (requires confirmation)
make clear-redis CONFIRM=YES
```

See Database Cleanup Guide for detailed information.
Create security files for production:
```bash
# Create secure directory and keypairs
mkdir -p secure/
openssl genrsa -out secure/customer_private.pem 2048
openssl rsa -in secure/customer_private.pem -pubout -out secure/customer_public.pem
openssl genrsa -out secure/vendor_private.pem 2048
openssl rsa -in secure/vendor_private.pem -pubout -out secure/vendor_public.pem
echo "your-token-content-here" > secure/token.txt
```

See Secure Packaging Guide for details.
Experience real-time AI introspection with our revolutionary thinking mode:
```json
{
  "message": "Please learn about black holes and explain them to me",
  "show_thinking": true
}
```

Features:
- Real-time thought streaming via WebSockets/SSE
- Multiple thought styles (conversational, technical, streaming)
- Confidence visualization and decision tracking
- Tool usage monitoring and execution transparency
- Educational interface for understanding AI reasoning
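The thought stream can be consumed like any Server-Sent Events feed. A minimal parsing sketch, assuming the stream emits standard `data:` lines carrying JSON thought objects (the field names `type`, `text`, and `confidence` here are illustrative, not the confirmed schema):

```python
import json

def parse_sse_thoughts(raw_stream):
    """Parse SSE 'data:' lines into thought dicts (illustrative field names)."""
    thoughts = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:
                thoughts.append(json.loads(payload))
    return thoughts

# Sample of what two streamed thought events might look like
sample = ('data: {"type": "reasoning", "text": "Considering tools", "confidence": 0.8}\n'
          '\n'
          'data: {"type": "decision", "text": "Use web scraper"}')
for t in parse_sse_thoughts(sample):
    print(t["type"], "-", t["text"])
```

In practice you would feed this parser from an HTTP client reading `GET /api/v1/chat/sessions/{id}/thoughts/stream` line by line.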
See what the system is doing in plain English with the activity log:
```bash
# View recent activities
curl http://localhost:8083/activity?limit=20
```

Features:
- Human-readable activity log - See state transitions, hypothesis generation, knowledge growth
- Real-time monitoring - Activities logged as they happen
- Easy debugging - Understand why the system made certain decisions
- Learning insights - Track when and how the knowledge base grows
- Hypothesis tracking - Follow hypothesis generation and testing cycles
See Activity Log Documentation for complete details.
- Pre-execution checking - All actions validated before execution
- Dynamic rule loading - Update ethical rules without restarting
- Fail-safe design - Continues operation with safety checks
- Transparent decision-making - Clear reasoning for all decisions
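The pre-execution pattern described above, checking every action against a rule set before it runs and failing safe with a stated reason, can be sketched as follows. The rule format and names here are hypothetical illustrations, not the actual Principles API schema:

```python
import fnmatch

# Hypothetical blocked-action patterns (not the real rule set)
BLOCKED_PATTERNS = ["delete_*", "exfiltrate_*"]

def check_action(action_name, rules=BLOCKED_PATTERNS):
    """Return (allowed, reason) - transparent reasoning for every decision."""
    for pattern in rules:
        if fnmatch.fnmatch(action_name, pattern):
            return False, f"matches blocked rule '{pattern}'"
    return True, "no blocking rule matched"

print(check_action("scrape_website"))    # allowed, with the reason attached
print(check_action("delete_user_data"))  # denied, naming the matched rule
```

The real system performs this check via the Principles API before the HDN executes any action; dynamic rule loading means the rule set can change without restarting the checker.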
- Multi-level task decomposition - Break complex tasks into manageable steps
- Dynamic task analysis - Handles LLM-generated tasks intelligently
- Context-aware execution - Maintains context across task hierarchies
- Progress tracking - Real-time monitoring of task execution
- Conversational AI - Natural language interaction with full transparency
- Intent recognition - Understands user goals and context
- Multi-modal communication - Text, structured data, and visual interfaces
- Session management - Persistent conversation context
- Telegram Integration - Secure, whitelist-based chat interface with rich formatting and tools
The system supports intelligent code generation and execution in multiple programming languages:
- Python - Full support with built-in libraries and external package management
- Rust - Compilation and execution with borrow checker error fixing
- Go - Native Go compilation and execution
- Java - Java compilation with automatic class name detection
- JavaScript/Node.js - Node.js execution with npm package support
Features:
- Automatic language detection from natural language requests
- Intelligent code generation using LLM with language-specific prompts
- Docker-based execution in isolated containers for safety
- Automatic error fixing - System retries and fixes compilation errors using LLM feedback
- Language-specific guidance - Specialized error fixing for Rust borrow checker, Go compilation, JavaScript runtime errors
- Code validation - Multi-step validation with retry mechanism
- Artifact generation - Saves generated code as project artifacts
Example Usage:
```bash
# Request code in any supported language
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_world",
    "description": "Create a Rust program that prints Hello from Rust",
    "language": "rust",
    "force_regenerate": true
  }'
```

The system will automatically:
- Detect the requested language (or infer from description)
- Generate appropriate code with language-specific syntax
- Compile/execute in a Docker container
- Fix any errors automatically using LLM feedback
- Return the execution results and generated code
| Service | Port | Description |
|---|---|---|
| Principles API | 8080 | Ethical decision-making |
| HDN Server | 8081 | AI planning and execution |
| Monitor UI | 8082 | Real-time visualization |
| FSM Server | 8083 | Cognitive state management |
| Telegram Bot | - | Chat interface (polls Telegram API) |
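The port table above can be turned into a tiny health-check helper. The `/health` path is confirmed for the HDN and FSM servers by the examples in this README and assumed for the other services:

```python
# Service/port mapping from the table above
SERVICES = {
    "principles": 8080,
    "hdn": 8081,
    "monitor-ui": 8082,
    "fsm": 8083,
}

def health_url(name, host="localhost"):
    """Build the (assumed) health endpoint URL for a named service."""
    return f"http://{host}:{SERVICES[name]}/health"

for name in SERVICES:
    print(name, "->", health_url(name))
# e.g. probe with an HTTP client: requests.get(health_url("hdn"), timeout=2)
```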
- `POST /api/v1/chat` - Chat with thinking mode enabled
- `GET /api/v1/chat/sessions/{id}/thoughts` - Get AI thoughts
- `GET /api/v1/chat/sessions/{id}/thoughts/stream` - Stream thoughts in real-time
- `POST /api/v1/interpret/execute` - Natural language task execution
- `POST /api/v1/hierarchical/execute` - Complex task planning
- `POST /api/v1/docker/execute` - Code execution in containers
- `GET /api/v1/tools` - List available tools
- `POST /api/v1/tools/execute` - Execute specific tools
- `GET /api/v1/intelligent/capabilities` - AI capabilities
- `GET /health` - FSM server health check
- `GET /status` - Full FSM status and metrics
- `GET /thinking` - Current thinking process and state
- `GET /activity?limit=50` - Activity log: see what the system is doing in plain English
- `GET /timeline?hours=24` - State transition timeline
- `GET /hypotheses` - Active hypotheses
- `GET /episodes?limit=10` - Recent learning episodes
```bash
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Explain quantum computing in simple terms",
    "show_thinking": true,
    "session_id": "demo_session"
  }'

# Stream AI thoughts in real-time
curl http://localhost:8081/api/v1/chat/sessions/demo_session/thoughts/stream
```

```bash
curl -X POST http://localhost:8081/api/v1/interpret/execute \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Scrape https://example.com and analyze the content"
  }'
```

The system supports intelligent code generation and execution in Python, Rust, Go, Java, and JavaScript. Simply describe what you want in natural language, and the system will generate and execute code in the appropriate language.
Python Example:
```bash
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "fibonacci",
    "description": "Create a Python program that calculates the Fibonacci sequence",
    "language": "python"
  }'
```

Rust Example:

```bash
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_rust",
    "description": "Create a Rust program that prints Hello from Rust",
    "language": "rust"
  }'
```

Go Example:

```bash
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_go",
    "description": "Create a Go program that prints Hello from Go",
    "language": "go"
  }'
```

Java Example:

```bash
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_java",
    "description": "Create a Java program that prints Hello from Java",
    "language": "java"
  }'
```

JavaScript Example:

```bash
curl -X POST http://localhost:8081/api/v1/intelligent/execute \
  -H "Content-Type: application/json" \
  -d '{
    "task_name": "hello_js",
    "description": "Create a JavaScript program that prints Hello from JavaScript",
    "language": "javascript"
  }'
```

Direct Code Execution (if you already have code):

```bash
curl -X POST http://localhost:8081/api/v1/docker/execute \
  -H "Content-Type: application/json" \
  -d '{
    "code": "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
    "language": "python"
  }'
```

See what the system is doing in real-time:
```bash
# Get recent activity (last 20 activities)
curl http://localhost:8083/activity?limit=20

# Watch activity in real-time (updates every 2 seconds)
watch -n 2 'curl -s http://localhost:8083/activity?limit=5 | jq -r ".activities[] | \"\(.timestamp | split(\".\")[0]) \(.message)\""'

# Get activity for specific agent (quote the URL so & is not interpreted by the shell)
curl "http://localhost:8083/activity?agent_id=agent_1&limit=50"
```

Example Response:
```json
{
  "activities": [
    {
      "timestamp": "2024-01-15T10:30:00Z",
      "message": "Moved from 'idle' to 'perceive': Ingesting and validating new data",
      "state": "perceive",
      "category": "state_change",
      "details": "Reason: new_input"
    },
    {
      "timestamp": "2024-01-15T10:30:15Z",
      "message": "Generating hypotheses from facts and domain knowledge",
      "state": "hypothesize",
      "category": "action",
      "action": "generate_hypotheses"
    },
    {
      "timestamp": "2024-01-15T10:30:30Z",
      "message": "Generated 3 hypotheses in domain 'programming'",
      "state": "hypothesize",
      "category": "hypothesis",
      "details": "Domain: programming, Count: 3"
    }
  ],
  "count": 3,
  "agent_id": "agent_1"
}
```

Activity Categories:

- `state_change` - System moved to a new state
- `action` - Important action being executed
- `hypothesis` - Hypothesis generation or testing
- `learning` - Knowledge base growth
- `decision` - Decision-making processes
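These categories make the activity feed easy to summarize client-side. A small sketch that tallies activities by category, using the response shape shown above (only the `category` field is relied on):

```python
from collections import Counter

def category_counts(activities):
    """Count activity log entries per category."""
    return Counter(a["category"] for a in activities)

# Entries shaped like the example response above
activities = [
    {"category": "state_change", "message": "Moved from 'idle' to 'perceive'"},
    {"category": "action", "message": "Generating hypotheses"},
    {"category": "hypothesis", "message": "Generated 3 hypotheses"},
]
print(category_counts(activities))
```

Pointing this at the live `GET /activity` endpoint gives a quick picture of what the system has been spending its time on.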
See Activity Log Documentation for more details.
```bash
make test-integration
```

```bash
make test-principles   # Test ethical decision-making
make test-hdn          # Test AI planning system
make test-thinking     # Test thinking mode features
```

```bash
make test-performance  # Load and stress testing
make test-metrics      # Performance metrics
```

```bash
make dev               # Start all services with auto-reload
```

Build All:

```bash
make build             # Build all components
```

Build Individual Components:
```bash
make build-principles  # Build Principles Server
make build-hdn         # Build HDN Server
make build-monitor     # Build Monitor UI
make build-fsm         # Build FSM Server
make build-goal        # Build Goal Manager
```

Cross-Platform Builds:

```bash
make build-macos      # Build for macOS (darwin/amd64)
make build-linux      # Build for Linux
make build-windows    # Build for Windows
make build-arm64      # Build for ARM64
make build-x86        # Build for x86_64
make build-all-archs  # Build for multiple architectures
```

Monitor UI on macOS:

```bash
# The monitor UI builds cleanly on macOS without CGO dependencies
make build-monitor

# Or use the explicit macOS target
make build-macos

# Verify the build
./bin/monitor-ui --help
```

```bash
make fmt       # Format code
make lint      # Lint code
make coverage  # Generate coverage report
```

- Create feature branch
- Implement changes
- Add tests
- Update documentation
- Submit pull request
This project represents a unique approach where AI systems actively participate in their own development and the creation of more advanced AI architectures.
The thinking mode provides unprecedented insight into AI decision-making processes, enabling trust and understanding.
Built-in ethical safeguards ensure all AI actions are evaluated against moral principles before execution.
Multi-level planning and execution capabilities that can handle complex, multi-step tasks intelligently.
The system grows and adapts through experience, demonstrating true learning capabilities.
The system now includes six major improvements for more focused and successful learning:
- Goal Outcome Learning: Tracks which goals succeed/fail and learns from outcomes
- Enhanced Goal Scoring: Uses historical success data to prioritize goals
- Hypothesis Value Pre-Evaluation: Filters low-value hypotheses before testing
- Focused Learning Strategy: Focuses on promising areas (70% focused, 30% exploration)
- Meta-Learning System: Learns about its own learning process
- Improved Concept Discovery: Uses LLM-based semantic analysis instead of pattern matching
See docs/LEARNING_FOCUS_IMPROVEMENTS.md for detailed information.
We welcome contributions from the AI and research community! This project represents a collaborative effort between human intelligence and artificial intelligence.
- Fork the repository
- Create a feature branch
- Implement your changes
- Add comprehensive tests
- Update documentation
- Submit a pull request
- New AI capabilities - Extend the tool system
- Ethical frameworks - Improve the principles system
- Interface improvements - Enhance user experience
- Performance optimization - Improve system efficiency
- Documentation - Help others understand the system
This project is licensed under the MIT License with Attribution Requirement.
- Free to use for personal and commercial projects
- Free to modify and create derivative works
- Free to distribute and sell
- Must attribute Steven Fisher as the original author
- Must include this license file in derivative works
When using this software, you must:
- Include the original copyright notice: "Copyright (c) 2025 Steven Fisher"
- Display "Steven Fisher" in README files, credits, or documentation
- Include this LICENSE file in your project
- Preserve attribution in any derivative works
See the LICENSE file for complete terms.
This license ensures Steven Fisher receives proper credit while allowing maximum freedom for others to use and build upon this work.
- Steven Fisher - Project Direction and Vision
- ChatGPT - System Design and Architecture
- Cursor - Development Environment and Tools
- Claude - Implementation and Code Generation
- Open Source Community - Foundational technologies and libraries
"The best way to predict the future is to invent it, and the best way to invent the future is to have AI help us build it."
This project demonstrates that the future of AI development is not just human-led or AI-led, but a collaborative partnership between human creativity and artificial intelligence capabilities.