AI-Powered Code Analysis Platform with Local LLM Support
Analyze codebases using local AI models for privacy-first, intelligent code review.
No data leaves your machine. No API costs. Full control.
| Feature | Description |
|---|---|
| Privacy-First | All analysis runs locally via Ollama; your code never leaves your machine |
| Multi-Provider LLM | Support for Ollama, OpenAI, and Anthropic with automatic fallback |
| Smart Analysis | Security vulnerabilities, architecture patterns, code quality, best practices |
| Web UI | Interactive Streamlit interface for analysis |
| Modular Architecture | Six independent packages for flexibility and extensibility |
| Parallel Execution | Concurrent agent analysis with progress streaming |
| GitHub Integration | Analyze repositories directly from GitHub URLs |
- Python 3.9+
- Ollama installed and running
# Clone the repository
git clone https://github.com/moshesham/AI-Omniscient-Architect.git
cd AI-Omniscient-Architect
# Create virtual environment
python -m venv .venv
.venv\Scripts\activate # Windows
# source .venv/bin/activate # Linux/Mac
# Install dependencies
pip install -r requirements.txt
# Pull a code-focused model
ollama pull qwen2.5-coder:1.5b
streamlit run web_app.py
Open http://localhost:8501 in your browser.
packages/
├── omniscient-core     # Base models, configuration, logging
├── omniscient-llm      # Multi-provider LLM abstraction layer
├── omniscient-agents   # AI analysis agents with orchestration
├── omniscient-tools    # Code complexity, clustering, file scanning
├── omniscient-github   # GitHub API client with rate limiting
└── omniscient-api      # FastAPI REST/GraphQL server
| Package | Purpose | Key Components |
|---|---|---|
| omniscient-core | Foundation | FileAnalysis, RepositoryInfo, AnalysisConfig |
| omniscient-llm | LLM Integration | OllamaProvider, OpenAIProvider, ProviderChain |
| omniscient-agents | Analysis | CodeReviewAgent, AnalysisOrchestrator |
| omniscient-tools | Utilities | ComplexityAnalyzer, FileScanner, Clustering |
| omniscient-github | GitHub | GitHubClient, RateLimitHandler |
| omniscient-api | API Server | FastAPI routes, GraphQL schema |
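The automatic provider fallback that omniscient-llm's ProviderChain offers can be illustrated with a minimal sketch. Everything below is a stand-in written for illustration, not the project's actual implementation: it simply tries each provider in order and returns the first successful response.

```python
class FallbackChain:
    """Illustrative sketch of provider fallback: try each provider
    in order and return the first successful response."""

    def __init__(self, providers):
        self.providers = providers

    def complete(self, prompt):
        errors = []
        for provider in self.providers:
            try:
                return provider(prompt)
            except Exception as exc:  # a real chain would catch narrower errors
                errors.append(exc)
        raise RuntimeError(f"All providers failed: {errors}")


# Toy providers standing in for OllamaProvider / OpenAIProvider
def flaky_local(prompt):
    raise ConnectionError("Ollama not running")


def remote(prompt):
    return f"analysis of: {prompt}"


chain = FallbackChain([flaky_local, remote])
print(chain.complete("def add(a, b): return a + b"))
```

The key design point is that a failed local provider degrades gracefully to the next one instead of aborting the analysis.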
The Streamlit UI provides the easiest way to analyze code:
- Check Ollama Status - Verify your LLM is running
- Select Model - Choose from available Ollama models
- Choose Focus Areas - Security, Architecture, Code Quality, etc.
- Analyze - Point to a local directory or GitHub URL
import asyncio
from omniscient_llm import OllamaProvider, LLMClient
from omniscient_agents.llm_agent import CodeReviewAgent
from omniscient_core import FileAnalysis, RepositoryInfo

async def analyze_code():
    # Set up the LLM
    provider = OllamaProvider(model="qwen2.5-coder:1.5b")
    client = LLMClient(provider=provider)

    async with client:
        # Create the agent
        agent = CodeReviewAgent(
            llm_client=client,
            focus_areas=["security", "architecture"]
        )

        # Prepare files
        files = [
            FileAnalysis(
                path="main.py",
                content="your code here",
                language="Python",
                size=100
            )
        ]
        repo = RepositoryInfo(
            path="./my-project",
            name="my-project",
            branch="main"
        )

        # Run the analysis
        result = await agent.analyze(files, repo)
        print(result.summary)

asyncio.run(analyze_code())
# Check Ollama status
python -m omniscient_llm status
# List available models
python -m omniscient_llm list
# Pull a new model
python -m omniscient_llm pull codellama:7b-instruct
# Get model recommendations
python -m omniscient_llm recommend --category code
Start the REST API server:
# Ensure packages are in PYTHONPATH
export PYTHONPATH=$PYTHONPATH:packages/api/src:packages/core/src:packages/rag/src:packages/llm/src
# Run the server
python -m omniscient_api.cli serve
The API will be available at http://localhost:8000.
Documentation is available at http://localhost:8000/docs.
The API is protected by an API Key. Set the OMNISCIENT_API_KEY environment variable to enable authentication.
export OMNISCIENT_API_KEY="your-secret-key"
Pass the key in the X-API-Key header:
curl -H "X-API-Key: your-secret-key" http://localhost:8000/api/v1/analyze ...
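The same authenticated call can be built from Python. This is a sketch only: the payload shape is a guess, since the source shows the endpoint and header but not the request schema.

```python
import json
import os
import urllib.request

api_key = os.environ.get("OMNISCIENT_API_KEY", "your-secret-key")

# Hypothetical payload; the real /api/v1/analyze schema may differ
payload = json.dumps({"path": "./my-project"}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:8000/api/v1/analyze",
    data=payload,
    headers={
        "X-API-Key": api_key,  # required when OMNISCIENT_API_KEY is set
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; here we just confirm
# the auth header is attached (urllib stores header names capitalized).
print(request.get_header("X-api-key"))
```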
# Development (with hot reload)
docker compose -f docker-compose.dev.yml up --build
# Production
docker compose up --build -d
# Check health
curl http://localhost:8501/_stcore/health
| Environment Variable | Description | Default |
|---|---|---|
| OLLAMA_HOST | Ollama server URL | http://localhost:11434 |
| OLLAMA_MODEL | Default model | qwen2.5-coder:1.5b |
| MAX_FILES | Max files to analyze | 100 |
| ANALYSIS_DEPTH | Analysis depth (quick, standard, deep) | standard |
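For example, to override the defaults before launching the app (the values below are illustrative):

```shell
# Illustrative overrides; adjust for your setup
export OLLAMA_HOST="http://localhost:11434"
export OLLAMA_MODEL="qwen2.5-coder:1.5b"
export MAX_FILES=200
export ANALYSIS_DEPTH=deep
```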
| Category | Checks |
|---|---|
| Security | SQL injection, XSS, hardcoded secrets, CORS misconfig |
| Architecture | Design patterns, separation of concerns, scalability |
| Code Quality | Complexity, duplication, naming conventions |
| Best Practices | Error handling, logging, documentation |
| Performance | Bottlenecks, caching opportunities, async patterns |
📋 Summary:
The codebase has several security concerns that need immediate attention.
🔍 Issues Found: 3
🔴 [HIGH] Security
Hardcoded database credentials found
📁 File: api/routes/data.py
📍 Line: 20
💡 Use environment variables or a secrets manager
🟡 [MEDIUM] Architecture
Global state can cause race conditions
📁 File: api/routes/data.py
💡 Use dependency injection or request-scoped state
🟢 [LOW] Code Quality
Missing docstrings in public functions
💡 Add docstrings for better maintainability
💡 Recommendations:
• Move credentials to environment variables
• Implement proper dependency injection
• Add comprehensive documentation
AI-Omniscient-Architect/
├── web_app.py           # Streamlit UI
├── packages/            # Modular packages
├── scripts/             # Test & utility scripts
├── docs/                # Documentation
├── roadmap/             # Development roadmap
├── examples/            # Usage examples
├── Dockerfile           # Container definition
├── docker-compose.yml   # Production compose
└── requirements.txt     # Dependencies
# Test all packages
python scripts/test_packages.py
# Test local analysis
python scripts/test_local_analysis.py
# Test with a specific repo
python scripts/test_datalake_analysis.py
| Model | Size | Best For | Memory |
|---|---|---|---|
| qwen2.5-coder:1.5b | 1GB | Quick analysis, limited RAM | 2GB |
| codellama:7b-instruct | 4GB | Detailed analysis | 8GB |
| deepseek-coder:6.7b | 4GB | Complex code understanding | 8GB |
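As a rough helper, the table above can be turned into a small chooser that picks the largest model fitting your available RAM. The function and thresholds below just mirror the table; they are not part of the project.

```python
# Memory requirements (GB) taken from the model table above
MODEL_MEMORY_GB = {
    "qwen2.5-coder:1.5b": 2,
    "codellama:7b-instruct": 8,
    "deepseek-coder:6.7b": 8,
}

def pick_model(available_ram_gb: int) -> str:
    """Return the most capable model that fits in the given RAM,
    falling back to the smallest one."""
    fitting = [m for m, need in MODEL_MEMORY_GB.items() if need <= available_ram_gb]
    # Prefer the model with the highest memory requirement that still fits
    return max(fitting, key=MODEL_MEMORY_GB.get) if fitting else "qwen2.5-coder:1.5b"

print(pick_model(4))
print(pick_model(16))
```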
- Development Roadmap - Current progress and future plans
- Package Documentation - Detailed package docs
- API Reference - REST/GraphQL API docs
Contributions are welcome! Areas for enhancement:
- Additional LLM providers
- More analysis agents (testing, documentation)
- IDE extensions (VS Code, JetBrains)
- Metrics and reporting dashboards
- CI/CD integration templates
MIT License - See LICENSE for details.
Built with ❤️ for developers who value privacy and code quality