
Python 3.9+ · MIT License · Ollama · Streamlit

πŸ—οΈ Omniscient Architect

AI-Powered Code Analysis Platform with Local LLM Support

Analyze codebases using local AI models for privacy-first, intelligent code review.
No data leaves your machine. No API costs. Full control.


✨ Features

| Feature | Description |
| --- | --- |
| 🔒 Privacy-First | All analysis runs locally via Ollama; your code never leaves your machine |
| 🤖 Multi-Provider LLM | Support for Ollama, OpenAI, and Anthropic with automatic fallback |
| 📊 Smart Analysis | Security vulnerabilities, architecture patterns, code quality, best practices |
| 🌐 Web UI | Beautiful Streamlit interface for interactive analysis |
| 📦 Modular Architecture | Six independent packages for flexibility and extensibility |
| ⚡ Parallel Execution | Concurrent agent analysis with progress streaming |
| 🐙 GitHub Integration | Analyze repositories directly from GitHub URLs |

🚀 Quick Start

Prerequisites

  • Python 3.9+
  • Ollama installed and running

Installation

```bash
# Clone the repository
git clone https://github.com/moshesham/AI-Omniscient-Architect.git
cd AI-Omniscient-Architect

# Create virtual environment
python -m venv .venv
.venv\Scripts\activate  # Windows
# source .venv/bin/activate  # Linux/macOS

# Install dependencies
pip install -r requirements.txt

# Pull a code-focused model
ollama pull qwen2.5-coder:1.5b
```

Launch the Web UI

```bash
streamlit run web_app.py
```

Open http://localhost:8501 in your browser.


📦 Package Architecture

```
packages/
├── omniscient-core     # Base models, configuration, logging
├── omniscient-llm      # Multi-provider LLM abstraction layer
├── omniscient-agents   # AI analysis agents with orchestration
├── omniscient-tools    # Code complexity, clustering, file scanning
├── omniscient-github   # GitHub API client with rate limiting
└── omniscient-api      # FastAPI REST/GraphQL server
```

Package Overview

| Package | Purpose | Key Components |
| --- | --- | --- |
| omniscient-core | Foundation | FileAnalysis, RepositoryInfo, AnalysisConfig |
| omniscient-llm | LLM Integration | OllamaProvider, OpenAIProvider, ProviderChain |
| omniscient-agents | Analysis | CodeReviewAgent, AnalysisOrchestrator |
| omniscient-tools | Utilities | ComplexityAnalyzer, FileScanner, Clustering |
| omniscient-github | GitHub | GitHubClient, RateLimitHandler |
| omniscient-api | API Server | FastAPI routes, GraphQL schema |
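The automatic fallback behind ProviderChain can be sketched in plain Python. Every name below (ProviderError, FailingProvider, LocalProvider, complete_with_fallback) is illustrative, not the package's actual API; the sketch only shows the try-each-provider-in-order pattern:

```python
class ProviderError(Exception):
    """Raised when a provider cannot serve a request."""

class FailingProvider:
    """Illustrative stand-in for an unreachable remote provider."""
    name = "openai"
    def complete(self, prompt: str) -> str:
        raise ProviderError("connection refused")

class LocalProvider:
    """Illustrative stand-in for a healthy local Ollama provider."""
    name = "ollama"
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] analyzed: {prompt}"

def complete_with_fallback(providers, prompt):
    """Try each provider in order; return the first successful response."""
    errors = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ProviderError as exc:
            errors.append((provider.name, exc))
    raise ProviderError(f"all providers failed: {errors}")

result = complete_with_fallback([FailingProvider(), LocalProvider()], "def f(): pass")
print(result)  # the chain falls back to the local provider
```

The same idea lets a chain prefer one provider and silently fall back to another without the caller changing any code.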

🖥️ Usage

Web Interface (Recommended)

The Streamlit UI provides the easiest way to analyze code:

  1. Check Ollama Status - Verify your LLM is running
  2. Select Model - Choose from available Ollama models
  3. Choose Focus Areas - Security, Architecture, Code Quality, etc.
  4. Analyze - Point to a local directory or GitHub URL

Programmatic Usage

```python
import asyncio
from omniscient_llm import OllamaProvider, LLMClient
from omniscient_agents.llm_agent import CodeReviewAgent
from omniscient_core import FileAnalysis, RepositoryInfo

async def analyze_code():
    # Setup LLM
    provider = OllamaProvider(model="qwen2.5-coder:1.5b")
    client = LLMClient(provider=provider)

    async with client:
        # Create agent
        agent = CodeReviewAgent(
            llm_client=client,
            focus_areas=["security", "architecture"]
        )

        # Prepare files
        files = [
            FileAnalysis(
                path="main.py",
                content="your code here",
                language="Python",
                size=100
            )
        ]

        repo = RepositoryInfo(
            path="./my-project",
            name="my-project",
            branch="main"
        )

        # Run analysis
        result = await agent.analyze(files, repo)
        print(result.summary)

asyncio.run(analyze_code())
```
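The parallel-execution feature can be illustrated with plain asyncio: several agents run concurrently and progress is streamed as each one completes. Agent names and timings below are made up for the sketch; the real orchestration lives in AnalysisOrchestrator:

```python
import asyncio

async def run_agent(name: str, delay: float) -> str:
    # The sleep stands in for an LLM-backed analysis call.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def analyze_concurrently() -> list:
    tasks = [
        asyncio.create_task(run_agent("security", 0.02)),
        asyncio.create_task(run_agent("architecture", 0.01)),
        asyncio.create_task(run_agent("code-quality", 0.03)),
    ]
    results = []
    # as_completed yields tasks in finish order, enabling progress streaming.
    for finished in asyncio.as_completed(tasks):
        results.append(await finished)
        print(f"progress: {len(results)}/{len(tasks)}")
    return results

results = asyncio.run(analyze_concurrently())
```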

LLM CLI Tool

```bash
# Check Ollama status
python -m omniscient_llm status

# List available models
python -m omniscient_llm list

# Pull a new model
python -m omniscient_llm pull codellama:7b-instruct

# Get model recommendations
python -m omniscient_llm recommend --category code
```

API Server

Start the REST API server:

```bash
# Ensure packages are in PYTHONPATH
export PYTHONPATH=$PYTHONPATH:packages/api/src:packages/core/src:packages/rag/src:packages/llm/src

# Run the server
python -m omniscient_api.cli serve
```

The API will be available at http://localhost:8000. Documentation is available at http://localhost:8000/docs.

Authentication

The API is protected by an API Key. Set the OMNISCIENT_API_KEY environment variable to enable authentication.

```bash
export OMNISCIENT_API_KEY="your-secret-key"
```

Pass the key in the X-API-Key header:

```bash
curl -H "X-API-Key: your-secret-key" http://localhost:8000/api/v1/analyze ...
```
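The server-side check amounts to comparing the header against the configured key. A minimal sketch (is_authorized is a hypothetical helper, not the actual omniscient-api code):

```python
import hmac
import os

def is_authorized(headers: dict) -> bool:
    """Sketch of an X-API-Key header check against OMNISCIENT_API_KEY."""
    expected = os.environ.get("OMNISCIENT_API_KEY")
    if not expected:
        return True  # authentication disabled when no key is configured
    supplied = headers.get("X-API-Key", "")
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(supplied, expected)

os.environ["OMNISCIENT_API_KEY"] = "your-secret-key"
print(is_authorized({"X-API-Key": "your-secret-key"}))  # True
print(is_authorized({"X-API-Key": "wrong"}))            # False
```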

🐳 Docker Deployment

```bash
# Development (with hot reload)
docker compose -f docker-compose.dev.yml up --build

# Production
docker compose up --build -d

# Check health
curl http://localhost:8501/_stcore/health
```

Configuration

| Environment Variable | Description | Default |
| --- | --- | --- |
| OLLAMA_HOST | Ollama server URL | http://localhost:11434 |
| OLLAMA_MODEL | Default model | qwen2.5-coder:1.5b |
| MAX_FILES | Max files to analyze | 100 |
| ANALYSIS_DEPTH | quick, standard, or deep | standard |
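Reading these variables with their documented defaults can be sketched as follows (load_config is a hypothetical helper for illustration, not the package's actual function):

```python
import os

def load_config() -> dict:
    """Read the documented environment variables, falling back to defaults."""
    return {
        "ollama_host": os.environ.get("OLLAMA_HOST", "http://localhost:11434"),
        "ollama_model": os.environ.get("OLLAMA_MODEL", "qwen2.5-coder:1.5b"),
        "max_files": int(os.environ.get("MAX_FILES", "100")),
        "analysis_depth": os.environ.get("ANALYSIS_DEPTH", "standard"),
    }

config = load_config()
```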

🔍 Analysis Capabilities

What Gets Analyzed

| Category | Checks |
| --- | --- |
| Security | SQL injection, XSS, hardcoded secrets, CORS misconfig |
| Architecture | Design patterns, separation of concerns, scalability |
| Code Quality | Complexity, duplication, naming conventions |
| Best Practices | Error handling, logging, documentation |
| Performance | Bottlenecks, caching opportunities, async patterns |
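One row of the Security category, hardcoded secrets, can be approximated with a simple regex pass. This is only an illustration of the kind of finding the agents report; the real analysis is LLM-based, not regex-based:

```python
import re

# Matches assignments like password = "..." or api_key = '...'.
SECRET_PATTERN = re.compile(
    r'(?i)\b(password|api_key|secret|token)\s*=\s*["\'][^"\']+["\']'
)

def find_hardcoded_secrets(source: str) -> list:
    """Return 1-based line numbers that look like hardcoded credentials."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if SECRET_PATTERN.search(line)
    ]

sample = 'db_host = "localhost"\npassword = "hunter2"\n'
print(find_hardcoded_secrets(sample))  # [2]
```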

Sample Output

```
📋 Summary:
The codebase has several security concerns that need immediate attention.

📊 Issues Found: 3

🔴 [HIGH] Security
   Hardcoded database credentials found
   📁 File: api/routes/data.py
   📍 Line: 20
   💡 Use environment variables or a secrets manager

🟡 [MEDIUM] Architecture
   Global state can cause race conditions
   📁 File: api/routes/data.py
   💡 Use dependency injection or request-scoped state

🟢 [LOW] Code Quality
   Missing docstrings in public functions
   💡 Add docstrings for better maintainability

💡 Recommendations:
  • Move credentials to environment variables
  • Implement proper dependency injection
  • Add comprehensive documentation
```

🛠️ Development

Project Structure

```
AI-Omniscient-Architect/
├── web_app.py           # Streamlit UI
├── packages/            # Modular packages
├── scripts/             # Test & utility scripts
├── docs/                # Documentation
├── roadmap/             # Development roadmap
├── examples/            # Usage examples
├── Dockerfile           # Container definition
├── docker-compose.yml   # Production compose
└── requirements.txt     # Dependencies
```

Running Tests

```bash
# Test all packages
python scripts/test_packages.py

# Test local analysis
python scripts/test_local_analysis.py

# Test with a specific repo
python scripts/test_datalake_analysis.py
```

Recommended Models

| Model | Size | Best For | Memory |
| --- | --- | --- | --- |
| qwen2.5-coder:1.5b | 1GB | Quick analysis, limited RAM | 2GB |
| codellama:7b-instruct | 4GB | Detailed analysis | 8GB |
| deepseek-coder:6.7b | 4GB | Complex code understanding | 8GB |
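Choosing from the table by available memory can be sketched as a small helper (pick_model and its thresholds mirror the table above but are otherwise illustrative):

```python
from typing import Optional

RECOMMENDED_MODELS = [
    # (model, minimum RAM in GB), largest first
    ("deepseek-coder:6.7b", 8),
    ("codellama:7b-instruct", 8),
    ("qwen2.5-coder:1.5b", 2),
]

def pick_model(available_ram_gb: float) -> Optional[str]:
    """Return the largest recommended model that fits in available RAM."""
    for model, min_ram in RECOMMENDED_MODELS:
        if available_ram_gb >= min_ram:
            return model
    return None

print(pick_model(16))  # deepseek-coder:6.7b
print(pick_model(4))   # qwen2.5-coder:1.5b
```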

📖 Documentation


🤝 Contributing

Contributions are welcome! Areas for enhancement:

  • 🌐 Additional LLM providers
  • 📊 More analysis agents (testing, documentation)
  • 🔌 IDE extensions (VS Code, JetBrains)
  • 📈 Metrics and reporting dashboards
  • 🔄 CI/CD integration templates

📄 License

MIT License - See LICENSE for details.


Built with ❤️ for developers who value privacy and code quality

Report Bug · Request Feature
