AGENTS.md

IMPORTANT: NEVER delete the specs/ directory or its contents. These specifications are essential project documentation that guide implementation decisions.

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. This application is an AI agent that will run on the Olas Pearl App store. Importantly, it will be deployed in a container on a local machine. Consider this deployment environment when making updates or changes to the code.

Development Commands

Backend (Python/FastAPI)

  • Install dependencies: uv sync (or pip install -e .)
  • Run development server: uv run main.py (uses port 8716 by default)
  • Run production server: uv run uvicorn main:app --host 0.0.0.0 --port 8716
    • Note: The backend uses port 8716 (configured via HEALTH_CHECK_PORT environment variable)
  • Run tests: uv run pytest
  • Run tests with coverage: uv run pytest --cov=. --cov-report=html
  • Run specific test: uv run pytest tests/test_models.py -v
  • Lint code: pre-commit run --files <paths>
    • Only run pre-commit against the files you're working on (avoid --all-files)

Frontend (SvelteKit/TypeScript)

  • Install dependencies: npm install (in frontend/ directory)
  • Run development server: npm run dev
  • Build for production: npm run build
  • Preview production build: npm run preview
  • Type checking: npm run check
  • Generate API client: npm run generate-api (requires backend running on localhost:8716)
  • Run tests: npm run test
  • Run tests in watch mode: npm run test:watch

Docker

  • Build image: docker build -t quorum-ai .
  • Run container: docker run -p 8716:8716 --env-file .env quorum-ai

Docker Compose Services

  • Start all services: docker-compose up -d
  • Start specific services: docker-compose up -d postgres redis or docker-compose up -d backend frontend
  • Stop all services: docker-compose down
  • View service logs: docker-compose logs -f [service_name]
  • Check service health: docker-compose ps
  • Remove volumes: docker-compose down -v
  • Rebuild services: docker-compose up -d --build

Service Configuration

  • PostgreSQL:
    • Port: 5432
    • Default credentials: quorum/quorum
    • Database: quorum
    • Volume: postgres_data
  • Redis:
    • Port: 6379
    • Memory limit: 256MB with LRU eviction
    • Default password: quorum
    • Persistence: AOF enabled
    • Volume: redis_data
  • Backend (FastAPI):
    • Port: 8716
    • Health check: /healthcheck endpoint
    • Depends on: PostgreSQL, Redis
  • Frontend (SvelteKit):
    • Port: 3000
    • Depends on: Backend
    • API Base URL: http://backend:8716

Environment Variables for Docker Services

# PostgreSQL
POSTGRES_USER=quorum          # Database user
POSTGRES_PASSWORD=quorum      # Database password
POSTGRES_DB=quorum           # Database name

# Redis
REDIS_PASSWORD=quorum        # Redis password

Architecture Overview

This is a full-stack DAO proposal summarization and autonomous voting application with a Python FastAPI backend and SvelteKit frontend.

Backend Architecture (backend/)

  • FastAPI application with async/await patterns for high performance
  • Pydantic AI integration with Google Gemini 2.0 Flash via OpenRouter for AI-powered proposal summarization and autonomous voting
  • Snapshot GraphQL API integration for fetching DAO proposal data from Snapshot spaces
  • Service-oriented architecture:
    • snapshot_service.py: Handles Snapshot space and proposal data fetching
    • ai_service.py: Manages AI summarization, risk assessment, and autonomous voting decisions
    • voting_service.py: Handles vote submission to Snapshot
    • agent_run_service.py: Orchestrates autonomous voting workflow (streamlined, no activity tracking)
    • user_preferences_service.py: Manages user voting preferences
    • safe_service.py: Multi-signature wallet operations with optional AttestationTracker integration
    • state_manager.py: Atomic state persistence with checkpoint/rollback support
    • activity_service.py: System activity monitoring and health checks
    • health_status_service.py: Comprehensive health monitoring with Pearl-compliant logging
  • Pydantic models for type-safe data validation (models.py)
  • Configuration management via environment variables (config.py)
  • Pearl-compliant logging to local files for observability and tracing

Frontend Architecture (frontend/)

  • SvelteKit with TypeScript for type safety and Svelte 5 with runes
  • TailwindCSS v4.x for utility-first styling
  • OpenAPI TypeScript generation for type-safe API client (openapi-fetch)
  • Vitest with Testing Library for component testing
  • Vite for build tooling and development server
  • Organization-based routing with dynamic routes (/organizations/[id])

Key Integration Points

  • Backend exposes OpenAPI schema at /openapi.json
  • Frontend generates TypeScript client from OpenAPI schema
  • API documentation available at /docs when backend is running

Key API Endpoints

  • GET /proposals - Proposal search and filtering by Snapshot space
  • GET /proposals/{id} - Get specific proposal by ID
  • POST /proposals/summarize - AI summarization for specific proposals
  • GET /proposals/{id}/top-voters - Top voters for a proposal
  • POST /agent-run - Execute autonomous voting agent
  • GET /health - Health check endpoint

Environment Setup

Required Environment Variables

# AI Provider (Required)
OPENROUTER_API_KEY=your_openrouter_api_key  # For Google Gemini 2.0 Flash via OpenRouter

# Optional configuration
DEBUG=false  # Enable debug mode
HOST=0.0.0.0  # Server host
HEALTH_CHECK_PORT=8716  # Server port (defaults to 8716)

# AttestationTracker Integration (Optional)
ATTESTATION_TRACKER_ADDRESS=0x...  # AttestationTracker contract address on Base network
                                   # When set, uses AttestationTracker as EAS wrapper
                                   # Falls back to direct EAS when not configured

# Note: Observability is handled by Pearl-compliant logging to local files
# Log files are written to ./logs/ directory following Pearl standards
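
The sketch below illustrates how a settings loader might read these variables. It is an assumption for illustration only: the actual backend/config.py likely uses a Pydantic settings class with different names.

```python
import os

# Hypothetical sketch of reading the environment variables above;
# the real config.py may structure this differently.
def load_settings() -> dict:
    return {
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
        "host": os.environ.get("HOST", "0.0.0.0"),
        # Server port defaults to 8716 when HEALTH_CHECK_PORT is unset.
        "port": int(os.environ.get("HEALTH_CHECK_PORT", "8716")),
        # Optional: None means fall back to direct EAS.
        "attestation_tracker": os.environ.get("ATTESTATION_TRACKER_ADDRESS"),
    }
```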

Development Workflow

  1. Quick Start: ./startup.sh (starts both backend and frontend automatically)
  2. Manual Setup:
    • Start backend: cd backend && uv run main.py
    • Generate API client: cd frontend && npm run generate-api
    • Start frontend: cd frontend && npm run dev
  3. Access Applications:
    • Backend API: http://localhost:8716
    • API docs: http://localhost:8716/docs
    • Frontend: the dev server URL printed by npm run dev

Code Style Guidelines

Backend Python

  • Follow FastAPI best practices from .cursor/rules/backend.mdc
  • Use async/await for I/O operations
  • Prefer functional programming over classes
  • Use type hints throughout
  • Keep methods under 60 lines
  • Use early returns for error handling
  • Implement proper error logging
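
As a sketch of these conventions (type hints throughout, early returns for error handling, short single-purpose functions). The Proposal shape and function here are illustrative, not taken from the codebase:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    id: str
    title: str
    state: str

def summarize_title(proposal: Optional[Proposal]) -> str:
    # Early returns keep the happy path unindented.
    if proposal is None:
        return "unknown proposal"
    if proposal.state == "closed":
        return f"[closed] {proposal.title}"
    return proposal.title
```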

Frontend SvelteKit

  • Follow Svelte best practices from .cursor/rules/frontend.mdc
  • Use TailwindCSS classes exclusively for styling
  • Use class: directive instead of ternary operators when possible
  • Prefix event handlers with "handle" (e.g., handleClick)
  • Use const for function definitions
  • Implement accessibility features (aria-label, tabindex, etc.)

Rapid Testing Workflow

Quick Start

# Fastest: Mock mode (no keys needed)
./scripts/quorum mock up
curl http://localhost:8716/self-test

# Full: Fork mode (local blockchain)
./scripts/quorum fork up
curl -X POST http://localhost:8716/agent-run-once
curl http://localhost:8716/verify/count

# Using Makefile
make up              # Start in fork mode
make test            # Run self-test
make verify          # Check attestation count
make down            # Stop services

Testing Commands

  • Start services: ./scripts/quorum [mock|fork|testnet] up
  • Self-test: curl http://localhost:8716/self-test
  • Run agent: curl -X POST http://localhost:8716/agent-run
  • Verify attestations: curl http://localhost:8716/verify/count
  • View logs: ./scripts/quorum fork logs
  • Stop services: ./scripts/quorum fork down

Auditor Verification Steps

  1. Start: make up
  2. Test: make test (should show all checks passing)
  3. Run: make run (triggers agent run)
  4. Verify: make verify (confirms attestation on-chain)
  5. Logs: make logs (Pearl-compliant audit trail)

Testing

  • When writing tests, document each test's meaning and importance: explain what behavior it verifies and why that matters.

Backend Testing

  • Framework: pytest with async support (pytest-asyncio)
  • Coverage: pytest-cov with HTML reporting (--cov=. --cov-report=html)
  • Mocking: pytest-mock and pytest-httpx for external API mocking
  • Test Structure:
    • Test files in tests/ directory following pattern: test_*.py
    • Fixtures in conftest.py for common test data
    • Integration tests for services and APIs
  • Configuration: Strict settings in pyproject.toml
  • Target Coverage: >90% expected
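
A minimal example of a test written in the documented style, with a docstring stating its meaning and importance. The function under test is hypothetical:

```python
def choose_vote(scores: list[float]) -> int:
    """Return the 1-based index of the highest-scoring choice."""
    return scores.index(max(scores)) + 1

def test_choose_vote_prefers_highest_score():
    """Meaning: the agent must cast its vote for the choice its own
    analysis scored highest. Importance: an off-by-one here would make
    the agent vote for a different option than it decided on."""
    assert choose_vote([0.2, 0.7, 0.1]) == 2
```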

Frontend Testing

  • Framework: Vitest with jsdom environment
  • Testing Library: @testing-library/svelte for component testing
  • Setup: test-setup.ts for jest-dom integration
  • Commands: npm run test (run once), npm run test:watch (watch mode)
  • Configuration: vitest.config.ts with SvelteKit integration

Code Clarity

  • Class and method names must be self-documenting, short, and descriptive
  • Remove all hardcoded values - use configuration or constants instead
  • If an expression is complicated, assign the result of the expression, or parts of it, to a temporary variable whose name explains its purpose
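
For example (the proposal dict shape here is hypothetical), explaining variables turn an opaque condition into self-documenting code:

```python
# Before: the condition's purpose is buried in one long expression.
def is_votable_terse(proposal: dict, now: int) -> bool:
    return proposal["start"] <= now < proposal["end"] and proposal["state"] == "active"

# After: temporary variables name each part of the expression.
def is_votable(proposal: dict, now: int) -> bool:
    voting_window_open = proposal["start"] <= now < proposal["end"]
    proposal_is_active = proposal["state"] == "active"
    return voting_window_open and proposal_is_active
```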

Code Organization

  • Eliminate duplicate code through extraction or abstraction
  • If a code fragment can be grouped together, extract it into a method whose name explains its purpose
  • Enforce a maximum method length of 60 lines
  • Decompose complex methods into smaller, single-purpose functions
  • Break down large classes with excessive instance variables (more than 7-10)
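
A sketch of the extract-method guideline, with hypothetical names: a filter-and-sort fragment becomes a function whose name states its purpose.

```python
def active_proposals_newest_first(proposals: list[dict]) -> list[dict]:
    # The extracted fragment: filter to active proposals, newest first.
    active = [p for p in proposals if p["state"] == "active"]
    return sorted(active, key=lambda p: p["created"], reverse=True)

def build_dashboard(proposals: list[dict]) -> dict:
    # The caller now reads as a summary of intent, not mechanics.
    visible = active_proposals_newest_first(proposals)
    return {"count": len(visible), "proposals": visible}
```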

Code Quality

  • Add runtime assertions to critical methods (minimum two per critical method)
  • Assertions should validate key assumptions about state and parameters
  • Consider consolidating scattered minor changes into cohesive classes
  • Code must be easy for a human to read and understand - keep it explicit and clear
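
A sketch of the assertion guideline: a hypothetical critical method with one assertion on its parameters and one on its result.

```python
def allocate_voting_power(total: float, weights: list[float]) -> list[float]:
    # Assertion 1: validate parameters.
    assert total > 0, "total voting power must be positive"
    assert weights and abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    allocation = [total * w for w in weights]
    # Assertion 2: validate the key assumption about the result.
    assert abs(sum(allocation) - total) < 1e-6, "allocation must preserve total"
    return allocation
```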

Design Priorities (in order)

  1. Readability - Code should be immediately understandable
  2. Simplicity - Choose the least complex solution
  3. Maintainability - Optimize for future changes
  4. Performance - Only optimize after the above are satisfied

API Integration Changes (BAC-157)

Migration from Tally to Snapshot

This application has been migrated from using Tally to Snapshot for DAO proposal data. Key changes include:

  • Data Source: All proposal data now comes from Snapshot GraphQL API instead of Tally
  • Models: Updated all Pydantic models to match Snapshot's data structures
  • Service Architecture: Replaced tally_service.py with snapshot_service.py
  • GraphQL Queries: Implemented Snapshot-specific queries for spaces, proposals, and votes
  • Vote Submission: Updated voting mechanism to work with Snapshot's EIP-712 signatures

Snapshot Integration Details

  • API Documentation: https://docs.snapshot.box/tools/api
  • GraphQL Endpoint: https://hub.snapshot.org/graphql
  • Key Services:
    • snapshot_service.py: Fetches spaces, proposals, and votes from Snapshot
    • voting_service.py: Handles EIP-712 signature creation for vote submission
    • ai_service.py: Provides AI-powered summarization of Snapshot proposals

Important Snapshot Concepts

  • Spaces: DAO organizations on Snapshot (replaces Tally's governors)
  • Proposals: Voting items within a space
  • Strategies: Voting power calculation methods specific to each space
  • IPFS: Proposal content is stored on IPFS, requiring special handling

AI Service Updates

The AI service has been updated to work with Snapshot's data structure:

  • Parses IPFS-stored proposal descriptions
  • Handles Snapshot's voting choices format
  • Provides risk assessment based on Snapshot proposal data
  • Supports both single-choice and multiple-choice voting
  • Uses Google Gemini 2.0 Flash model via OpenRouter
  • Dual functionality: proposal summarization and autonomous voting decisions
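
As an illustration of the two choice formats, here is a sketch based on Snapshot's public voting conventions, where single-choice votes are a 1-based integer and approval (multiple-choice) votes are a list of 1-based indices. The function name is hypothetical:

```python
def encode_choice(proposal_type: str, selections: list[int]):
    # Single-choice: Snapshot expects one 1-based integer.
    if proposal_type == "single-choice":
        assert len(selections) == 1, "single-choice takes exactly one selection"
        return selections[0]
    # Approval voting: a list of 1-based choice indices.
    if proposal_type == "approval":
        return sorted(selections)
    raise ValueError(f"unsupported proposal type: {proposal_type}")
```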

Autonomous Voting Agent

The application now includes a comprehensive autonomous voting system:

  • Agent Run Service: Orchestrates the complete voting workflow (streamlined architecture)
  • User Preferences: Persistent configuration for voting strategies
  • Proposal Filtering: Intelligent filtering based on urgency and user preferences
  • Voting Strategies: Balanced, conservative, and aggressive approaches
  • Dry Run Mode: Test decisions without executing actual votes
  • Comprehensive Logging: Full audit trail with Pearl-compliant local file logging

Smart Contract Integration

The application supports optional AttestationTracker integration for on-chain attestation tracking:

AttestationTracker Contract (contracts/src/AttestationTracker.sol)

  • EAS Wrapper: Simplified wrapper around Ethereum Attestation Service (EAS)
  • Attestation Counting: Tracks total attestations made by each multisig wallet
  • Owner Control: Owner-only access for managing the contract
  • Event Emission: AttestationMade(address indexed multisig, bytes32 indexed attestationUID)
  • Gas Efficient: Optimized for minimal gas consumption

Backend Integration

  • SafeService Integration: Routes attestations through AttestationTracker when configured
  • Configuration-Based: Uses ATTESTATION_TRACKER_ADDRESS environment variable
  • Fallback Behavior: Falls back to direct EAS when AttestationTracker not configured
  • Monitoring: Health endpoint includes attestation statistics via get_multisig_info()
  • Helper Functions: attestation_tracker_helpers.py provides utility functions
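
The configuration-based fallback can be sketched as follows. The function name and the direct_eas_address parameter are assumptions for illustration, not the actual SafeService API:

```python
import os

def resolve_attestation_target(direct_eas_address: str) -> tuple:
    tracker = os.environ.get("ATTESTATION_TRACKER_ADDRESS")
    if tracker:
        # Route attestations through the AttestationTracker wrapper.
        return ("attestation_tracker", tracker)
    # Fall back to attesting directly against EAS.
    return ("direct_eas", direct_eas_address)
```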

Key Features Removed

  • QuorumTracker System: Complete removal of on-chain activity classification system
  • ActivityType Enum: Eliminated VOTE_CAST, OPPORTUNITY_CONSIDERED, NO_OPPORTUNITY tracking
  • Activity Registration: No automatic activity tracking calls in agent runs
  • Complex State Management: Simplified to pure attestation counting (removed bit manipulation)

Project Specifications

The specs/ directory contains detailed technical specifications for various components of the application. These specifications provide in-depth implementation details and architectural decisions:

  • AI Service: AI integration, prompt engineering, and autonomous voting logic
  • API: RESTful API design, endpoints, and data contracts
  • Authentication: Authentication mechanisms and security considerations
  • Database: Database schema, migrations, and data modeling
  • Deployment: Deployment strategies and infrastructure requirements
  • Error Handling: Error handling patterns and best practices
  • Logging: Logging standards and Pearl-compliant implementation
  • Testing: Testing strategies, coverage requirements, and best practices

IMPORTANT: NEVER delete the specs/ directory or its contents. These specifications are essential project documentation that guide implementation decisions.

You run in an environment where ast-grep is available. Whenever a search requires syntax‑aware or structural matching, default to ast-grep run --lang <language> -p '<pattern>' or set --lang appropriately, and avoid falling back to text‑only tools like rg or grep unless I explicitly request a plain‑text search. You can run ast-grep --help for more info.