IMPORTANT: NEVER delete the specs/ directory or its contents. These specifications are essential project documentation that guide implementation decisions.
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. This application is an AI agent that will run on the Olas Pearl App store. Importantly, it will be deployed in a container on a local machine; consider this deployment environment when making updates or changes to the code.
- Install dependencies: `uv sync` (or `pip install -e .`)
- Run development server: `uv run main.py` (uses port 8716 by default)
- Run production server: `uv run uvicorn main:app --host 0.0.0.0 --port 8716`
  - Note: The backend uses port 8716 (configured via the `HEALTH_CHECK_PORT` environment variable)
- Run tests: `uv run pytest`
- Run tests with coverage: `uv run pytest --cov=. --cov-report=html`
- Run specific test: `uv run pytest tests/test_models.py -v`
- Lint code: `pre-commit run --all-files` (only run pre-commit against the files you're working on)
- Install dependencies: `npm install` (in the `frontend/` directory)
- Run development server: `npm run dev`
- Build for production: `npm run build`
- Preview production build: `npm run preview`
- Type checking: `npm run check`
- Generate API client: `npm run generate-api` (requires backend running on localhost:8716)
- Run tests: `npm run test`
- Run tests in watch mode: `npm run test:watch`
- Build image: `docker build -t quorum-ai .`
- Run container: `docker run -p 8716:8716 --env-file .env quorum-ai`
- Start all services: `docker-compose up -d`
- Start specific services: `docker-compose up -d postgres redis` or `docker-compose up -d backend frontend`
- Stop all services: `docker-compose down`
- View service logs: `docker-compose logs -f [service_name]`
- Check service health: `docker-compose ps`
- Remove volumes: `docker-compose down -v`
- Rebuild services: `docker-compose up -d --build`
- PostgreSQL:
  - Port: 5432
  - Default credentials: `quorum/quorum`
  - Database: `quorum`
  - Volume: `postgres_data`
- Redis:
  - Port: 6379
  - Memory limit: 256MB with LRU eviction
  - Default password: `quorum`
  - Persistence: AOF enabled
  - Volume: `redis_data`
- Backend (FastAPI):
  - Port: 8716
  - Health check: `/healthcheck` endpoint
  - Depends on: PostgreSQL, Redis
- Frontend (SvelteKit):
  - Port: 3000
  - Depends on: Backend
  - API Base URL: `http://backend:8716`
```
# PostgreSQL
POSTGRES_USER=quorum       # Database user
POSTGRES_PASSWORD=quorum   # Database password
POSTGRES_DB=quorum         # Database name

# Redis
REDIS_PASSWORD=quorum      # Redis password
```

This is a full-stack DAO proposal summarization and autonomous voting application with a Python FastAPI backend and SvelteKit frontend.
- FastAPI application with async/await patterns for high performance
- Pydantic AI integration with Google Gemini 2.0 Flash via OpenRouter for AI-powered proposal summarization and autonomous voting
- Snapshot GraphQL API integration for fetching DAO proposal data from Snapshot spaces
- Service-oriented architecture:
  - `snapshot_service.py`: Handles Snapshot space and proposal data fetching
  - `ai_service.py`: Manages AI summarization, risk assessment, and autonomous voting decisions
  - `voting_service.py`: Handles vote submission to Snapshot
  - `agent_run_service.py`: Orchestrates the autonomous voting workflow (streamlined, no activity tracking)
  - `user_preferences_service.py`: Manages user voting preferences
  - `safe_service.py`: Multi-signature wallet operations with optional AttestationTracker integration
  - `state_manager.py`: Atomic state persistence with checkpoint/rollback support
  - `activity_service.py`: System activity monitoring and health checks
  - `health_status_service.py`: Comprehensive health monitoring with Pearl-compliant logging
- Pydantic models for type-safe data validation (`models.py`)
- Configuration management via environment variables (`config.py`)
- Pearl-compliant logging to local files for observability and tracing
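The checkpoint/rollback idea behind `state_manager.py` can be illustrated with the standard write-then-atomic-rename pattern. This is a hedged sketch: the function name and state shape below are hypothetical, not taken from the repository.

```python
# Hypothetical sketch of atomic state persistence: write the new state
# to a temp file in the same directory, then rename it over the old one.
# os.replace is atomic on POSIX, so readers never see a half-written file,
# and the previous file remains the rollback point until the swap.
import json
import os
import tempfile

def save_state_atomic(path: str, state: dict) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic swap

with tempfile.TemporaryDirectory() as d:
    state_path = os.path.join(d, "state.json")
    save_state_atomic(state_path, {"last_run_id": 42})
    with open(state_path) as f:
        print(json.load(f)["last_run_id"])  # 42
```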
- SvelteKit with TypeScript for type safety and Svelte 5 with runes
- TailwindCSS v4.x for utility-first styling
- OpenAPI TypeScript generation for a type-safe API client (`openapi-fetch`)
- Vitest with Testing Library for component testing
- Vite for build tooling and development server
- Organization-based routing with dynamic routes (`/organizations/[id]`)
- Backend exposes OpenAPI schema at `/openapi.json`
- Frontend generates a TypeScript client from the OpenAPI schema
- API documentation available at `/docs` when the backend is running
- `GET /proposals` - Proposal search and filtering by Snapshot space
- `GET /proposals/{id}` - Get a specific proposal by ID
- `POST /proposals/summarize` - AI summarization for specific proposals
- `GET /proposals/{id}/top-voters` - Top voters for a proposal
- `POST /agent-run` - Execute the autonomous voting agent
- `GET /health` - Health check endpoint
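As a quick illustration of addressing these endpoints, a client might build request URLs like this. Only the port (8716) and path shapes come from this document; the helper itself is hypothetical.

```python
# Hypothetical URL builder for the endpoints listed above.
BASE_URL = "http://localhost:8716"

def endpoint(path: str, **params: str) -> str:
    """Fill path parameters like {id} and prefix the base URL."""
    return BASE_URL + path.format(**params)

print(endpoint("/proposals/{id}/top-voters", id="0xabc"))
# http://localhost:8716/proposals/0xabc/top-voters
```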
```
# AI Provider (Required)
OPENROUTER_API_KEY=your_openrouter_api_key  # For Gemini 2.0 Flash via OpenRouter

# Optional configuration
DEBUG=false               # Enable debug mode
HOST=0.0.0.0              # Server host
HEALTH_CHECK_PORT=8716    # Server port (defaults to 8716)

# AttestationTracker Integration (Optional)
ATTESTATION_TRACKER_ADDRESS=0x...  # AttestationTracker contract address on Base network
# When set, uses AttestationTracker as EAS wrapper
# Falls back to direct EAS when not configured

# Note: Observability is handled by Pearl-compliant logging to local files
# Log files are written to the ./logs/ directory following Pearl standards
```

- Quick Start: `./startup.sh` (starts both backend and frontend automatically)
- Manual Setup:
  - Start backend: `cd backend && uv run main.py`
  - Generate API client: `cd frontend && npm run generate-api`
  - Start frontend: `cd frontend && npm run dev`
- Access Applications:
  - Backend API: http://localhost:8716
  - API docs: http://localhost:8716/docs
  - Frontend: http://localhost:5173
- Follow FastAPI best practices from `.cursor/rules/backend.mdc`
- Use async/await for I/O operations
- Prefer functional programming over classes
- Use type hints throughout
- Keep methods under 60 lines
- Use early returns for error handling
- Implement proper error logging
- Follow Svelte best practices from `.cursor/rules/frontend.mdc`
- Use TailwindCSS classes exclusively for styling
- Use the `class:` directive instead of ternary operators when possible
- Prefix event handlers with "handle" (e.g., `handleClick`)
- Use `const` for function definitions
- Implement accessibility features (aria-label, tabindex, etc.)
```
# Fastest: Mock mode (no keys needed)
./scripts/quorum mock up
curl http://localhost:8716/self-test

# Full: Fork mode (local blockchain)
./scripts/quorum fork up
curl -X POST http://localhost:8716/agent-run-once
curl http://localhost:8716/verify/count

# Using Makefile
make up      # Start in fork mode
make test    # Run self-test
make verify  # Check attestation count
make down    # Stop services
```

- Start services: `./scripts/quorum [mock|fork|testnet] up`
- Self-test: `curl http://localhost:8716/self-test`
- Run agent: `curl -X POST http://localhost:8716/agent-run`
- Verify attestations: `curl http://localhost:8716/verify/count`
- View logs: `./scripts/quorum fork logs`
- Stop services: `./scripts/quorum fork down`
- Start: `make up`
- Test: `make test` (should show all checks passing)
- Run: `make run` (triggers agent run)
- Verify: `make verify` (confirms attestation on-chain)
- Logs: `make logs` (Pearl-compliant audit trail)
- When writing tests, document each test's purpose and importance, explaining what it is trying to verify.
- Framework: pytest with async support (`pytest-asyncio`)
- Coverage: pytest-cov with HTML reporting (`--cov=. --cov-report=html`)
- Mocking: pytest-mock and pytest-httpx for external API mocking
- Test Structure:
  - Test files in the `tests/` directory following the pattern `test_*.py`
  - Fixtures in `conftest.py` for common test data
  - Integration tests for services and APIs
- Configuration: Strict settings in `pyproject.toml`
- Target Coverage: >90% expected
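The "explain each test" guideline might look like this in practice. `filter_active` is a hypothetical stand-in, not a function from this repository.

```python
# Illustrative test whose docstring explains what it verifies and why
# it matters, per the testing guideline above.
def filter_active(proposals: list[dict]) -> list[dict]:
    return [p for p in proposals if p.get("state") == "active"]

def test_filter_active_keeps_only_active():
    """Why this matters: the agent must never vote on closed proposals,
    so the filter must drop everything that is not in the 'active' state."""
    proposals = [{"id": "1", "state": "active"}, {"id": "2", "state": "closed"}]
    assert filter_active(proposals) == [{"id": "1", "state": "active"}]

test_filter_active_keeps_only_active()
print("ok")
```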
- Framework: Vitest with jsdom environment
- Testing Library: @testing-library/svelte for component testing
- Setup: `test-setup.ts` for jest-dom integration
- Commands: `npm run test` (run once), `npm run test:watch` (watch mode)
- Configuration: `vitest.config.ts` with SvelteKit integration
- Class and method names must be self-documenting, short, and descriptive
- Remove all hardcoded values - use configuration or constants instead
- If an expression is complicated, put its result (or parts of it) in a temporary variable whose name explains the purpose of the expression
- Eliminate duplicate code through extraction or abstraction
- If a code fragment can be grouped together, turn it into a method whose name explains the purpose of the method
- Enforce a maximum method length of 60 lines
- Decompose complex methods into smaller, single-purpose functions
- Break down large classes with excessive instance variables (>7-10)
- Add runtime assertions to critical methods (minimum 2 per critical method)
- Assertions should validate key assumptions about state and parameters
- Consider consolidating scattered minor changes into cohesive classes
- Code must be easy for a human to read and understand; make it explicit and clear

Priorities, in order:
1. Readability - Code should be immediately understandable
2. Simplicity - Choose the least complex solution
3. Maintainability - Optimize for future changes
4. Performance - Only optimize after the above are satisfied
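Two of the guidelines above, explaining variables and runtime assertions, might combine like this. All names here are illustrative, not repository code.

```python
# Sketch of the "explaining variable" and "2+ assertions per critical
# method" guidelines; is_urgent and its fields are hypothetical.
def is_urgent(proposal: dict, now: float) -> bool:
    # Assertion 1: validate a key assumption about the parameter shape
    assert "end" in proposal, "proposal must carry an end timestamp"
    # Assertion 2: validate an assumption about the clock
    assert now >= 0, "clock must be a non-negative epoch time"
    # Explaining variables instead of one dense boolean expression
    seconds_remaining = proposal["end"] - now
    closes_within_a_day = seconds_remaining < 86_400
    return closes_within_a_day

print(is_urgent({"end": 1_000_000}, 950_000))  # True
```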
This application has been migrated from Tally to Snapshot for DAO proposal data. Key changes include:
- Data Source: All proposal data now comes from Snapshot GraphQL API instead of Tally
- Models: Updated all Pydantic models to match Snapshot's data structures
- Service Architecture: Replaced `tally_service.py` with `snapshot_service.py`
- GraphQL Queries: Implemented Snapshot-specific queries for spaces, proposals, and votes
- Vote Submission: Updated the voting mechanism to work with Snapshot's EIP-712 signatures
- API Documentation: https://docs.snapshot.box/tools/api
- GraphQL Endpoint: https://hub.snapshot.org/graphql
- Key Services:
  - `snapshot_service.py`: Fetches spaces, proposals, and votes from Snapshot
  - `voting_service.py`: Handles EIP-712 signature creation for vote submission
  - `ai_service.py`: Provides AI-powered summarization of Snapshot proposals
- Spaces: DAO organizations on Snapshot (replaces Tally's governors)
- Proposals: Voting items within a space
- Strategies: Voting power calculation methods specific to each space
- IPFS: Proposal content is stored on IPFS, requiring special handling
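A proposals query against the hub endpoint listed above might be shaped like this. The field and filter names follow Snapshot's public schema as I understand it, but verify them against the GraphQL endpoint before relying on them; the space `aave.eth` is just an example, and this sketch only builds the request body rather than sending it.

```python
# Illustrative Snapshot GraphQL query for recent proposals in a space.
import json

PROPOSALS_QUERY = """
query Proposals($space: String!) {
  proposals(
    first: 5
    where: { space_in: [$space] }
    orderBy: "created"
    orderDirection: desc
  ) {
    id
    title
    choices
    state
  }
}
"""

body = json.dumps({"query": PROPOSALS_QUERY, "variables": {"space": "aave.eth"}})
print("aave.eth" in body)  # True
```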
The AI service has been updated to work with Snapshot's data structure:
- Parses IPFS-stored proposal descriptions
- Handles Snapshot's voting choices format
- Provides risk assessment based on Snapshot proposal data
- Supports both single-choice and multiple-choice voting
- Uses Google Gemini 2.0 Flash model via OpenRouter
- Dual functionality: proposal summarization and autonomous voting decisions
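Handling Snapshot's choices format amounts to mapping a selected option onto a choice index; Snapshot vote messages use 1-based indices, while Python lists are 0-based. The helper below is hypothetical, not code from `ai_service.py`.

```python
# Hedged sketch of mapping a chosen option to a Snapshot vote choice.
def build_vote_choice(proposal: dict, selected: str) -> int:
    choices: list[str] = proposal["choices"]
    # list.index is 0-based; Snapshot expects 1-based choice indices
    return choices.index(selected) + 1

p = {"id": "0xabc", "choices": ["For", "Against", "Abstain"]}
print(build_vote_choice(p, "Against"))  # 2
```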
The application now includes a comprehensive autonomous voting system:
- Agent Run Service: Orchestrates the complete voting workflow (streamlined architecture)
- User Preferences: Persistent configuration for voting strategies
- Proposal Filtering: Intelligent filtering based on urgency and user preferences
- Voting Strategies: Balanced, conservative, and aggressive approaches
- Dry Run Mode: Test decisions without executing actual votes
- Comprehensive Logging: Full audit trail with Pearl-compliant local file logging
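The interaction between voting strategies and dry-run mode can be sketched as follows. The threshold numbers and function are purely illustrative; only the strategy names (balanced, conservative, aggressive) and the dry-run concept come from this document.

```python
# Hypothetical strategy-dependent confidence thresholds.
THRESHOLDS = {"conservative": 0.8, "balanced": 0.6, "aggressive": 0.4}

def should_vote(strategy: str, confidence: float, dry_run: bool = False) -> bool:
    decision = confidence >= THRESHOLDS[strategy]
    if dry_run:
        # Dry-run mode: evaluate the decision but never submit a vote
        return False
    return decision

print(should_vote("balanced", 0.7), should_vote("conservative", 0.7))  # True False
```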
The application supports optional AttestationTracker integration for on-chain attestation tracking:
- EAS Wrapper: Simplified wrapper around Ethereum Attestation Service (EAS)
- Attestation Counting: Tracks total attestations made by each multisig wallet
- Owner Control: Owner-only access for managing the contract
- Event Emission: `AttestationMade(address indexed multisig, bytes32 indexed attestationUID)`
- Gas Efficient: Optimized for minimal gas consumption
- SafeService Integration: Routes attestations through AttestationTracker when configured
- Configuration-Based: Uses the `ATTESTATION_TRACKER_ADDRESS` environment variable
- Fallback Behavior: Falls back to direct EAS when AttestationTracker is not configured
- Monitoring: Health endpoint includes attestation statistics via `get_multisig_info()`
- Helper Functions: `attestation_tracker_helpers.py` provides utility functions
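The configuration-based fallback described above reduces to a simple routing check. Only the `ATTESTATION_TRACKER_ADDRESS` variable name comes from this document; the function and return values are hypothetical.

```python
# Sketch of routing attestations based on optional configuration.
import os

def attestation_route() -> str:
    tracker = os.environ.get("ATTESTATION_TRACKER_ADDRESS")
    # Route through the AttestationTracker wrapper when configured,
    # otherwise fall back to direct EAS calls.
    return "attestation_tracker" if tracker else "direct_eas"

os.environ.pop("ATTESTATION_TRACKER_ADDRESS", None)
print(attestation_route())  # direct_eas
os.environ["ATTESTATION_TRACKER_ADDRESS"] = "0x1234"
print(attestation_route())  # attestation_tracker
```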
- QuorumTracker System: Complete removal of on-chain activity classification system
- ActivityType Enum: Eliminated VOTE_CAST, OPPORTUNITY_CONSIDERED, NO_OPPORTUNITY tracking
- Activity Registration: No automatic activity tracking calls in agent runs
- Complex State Management: Simplified to pure attestation counting (removed bit manipulation)
The specs/ directory contains detailed technical specifications for various components of the application. These specifications provide in-depth implementation details and architectural decisions:
- AI Service: AI integration, prompt engineering, and autonomous voting logic
- API: RESTful API design, endpoints, and data contracts
- Authentication: Authentication mechanisms and security considerations
- Database: Database schema, migrations, and data modeling
- Deployment: Deployment strategies and infrastructure requirements
- Error Handling: Error handling patterns and best practices
- Logging: Logging standards and Pearl-compliant implementation
- Testing: Testing strategies, coverage requirements, and best practices
IMPORTANT: NEVER delete the specs/ directory or its contents. These specifications are essential project documentation that guide implementation decisions.
You run in an environment where ast-grep is available. Whenever a search requires syntax‑aware or structural matching, default to ast-grep run --lang <language> -p '<pattern>' or set --lang appropriately, and avoid falling back to text‑only tools like rg or grep unless I explicitly request a plain‑text search. You can run ast-grep --help for more info.