# Inceptor: AI-Powered Multi-Level Solution Architecture Generator
Note: This project has been refactored for better maintainability and organization. The core functionality remains the same, but the code is now more modular and easier to extend.
Inceptor is a powerful AI-powered tool that helps you design, generate, and implement complex software architectures using natural language. Built with Ollama's Mistral:7b model, it creates multi-level architecture designs that evolve from high-level concepts to detailed implementation plans.
- **AI-Powered**: Leverages Ollama's Mistral:7b for intelligent architecture generation
- **Multi-Level Design**: Creates 5 distinct architecture levels (LIMBO → DREAM → REALITY → DEEPER → DEEPEST)
- **Context-Aware**: Understands requirements from natural language descriptions
- **Interactive CLI**: Command-line interface with autocomplete and suggestions
- **Structured Output**: Exports to Markdown, JSON, YAML, and more
- **Zero-Setup**: Works out of the box with a local Ollama installation
- **Extensible**: Plugin system for custom generators and templates
- Python 3.8 or higher
- Ollama with Mistral:7b model
- 4GB RAM (minimum)
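Before installing, you can sanity-check the prerequisites. The snippet below is a minimal sketch that assumes `ollama` is already on your PATH; the model-name match is a guess at how `ollama list` labels Mistral:7b.

```shell
# Check the Python version required above (3.8+)
python3 -c 'import sys; assert sys.version_info >= (3, 8), "Python 3.8+ required"; print("Python OK")'

# Check whether a Mistral model is available locally; if not, pull it first
ollama list 2>/dev/null | grep -i mistral || echo "mistral:7b not found; run: ollama pull mistral:7b"
```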
# Install from PyPI
pip install inceptor
# Or install from source
git clone https://github.com/wronai/inceptor.git
cd inceptor
make install # Installs in development mode with all dependencies
# Start Ollama server (if not already running)
ollama serve

# Generate architecture from a description
inceptor "I need a REST API for a todo app with user authentication"
# Start interactive shell
inceptor shell

from inceptor import DreamArchitect, Solution, ArchitectureLevel
# Create an architect instance
architect = DreamArchitect()
# Generate a solution
problem = """
I need a task management system for a small development team.
The team consists of 5 people and uses Python, FastAPI, and PostgreSQL.
The system should have a web interface and REST API.
"""
# Generate solution with 3 levels of detail
solution = architect.inception(problem, max_levels=3)
# Access solution components
print(f"Problem: {solution.problem}")
print(f"Components: {len(solution.architecture.get('limbo', {}).get('components', []))}")
print(f"Tasks: {len(solution.tasks)}")
# Save to JSON
import json
from dataclasses import asdict, is_dataclass
from enum import Enum

def convert_dataclass(obj):
    """Recursively convert dataclasses, enums, and containers to JSON-serializable types."""
    if is_dataclass(obj):
        return {k: convert_dataclass(v) for k, v in asdict(obj).items()}
    elif isinstance(obj, (list, tuple)):
        return [convert_dataclass(x) for x in obj]
    elif isinstance(obj, dict):
        return {k: convert_dataclass(v) for k, v in obj.items()}
    elif isinstance(obj, Enum):
        return obj.name
    return obj

with open("solution.json", "w") as f:
    json.dump(convert_dataclass(solution), f, indent=2, ensure_ascii=False)

After refactoring, the project has a cleaner, more modular structure:
src/inceptor/
├── __init__.py              # Package exports and version
├── inceptor.py              # Compatibility layer
└── core/                    # Core functionality
    ├── __init__.py          # Core package exports
    ├── enums.py             # ArchitectureLevel enum
    ├── models.py            # Solution and Task dataclasses
    ├── context_extractor.py # Context extraction utilities
    ├── ollama_client.py     # Ollama API client
    ├── prompt_templates.py  # Prompt templates for each level
    ├── dream_architect.py   # Main architecture generation logic
    └── utils.py             # Utility functions
Inceptor structures architectures across 5 levels of detail:
| Level | Name | Description | Output |
|---|---|---|---|
| 1 | LIMBO | Problem analysis & decomposition | High-level components |
| 2 | DREAM | Component design & interactions | API contracts, Data flows |
| 3 | REALITY | Implementation details | Code structure, Tech stack |
| 4 | DEEPER | Integration & deployment | CI/CD, Infrastructure |
| 5 | DEEPEST | Optimization & scaling | Performance, Monitoring |
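The level names map naturally onto the `ArchitectureLevel` enum exported by the package (defined in `core/enums.py`). The sketch below is an assumption based on the table, not the project's actual source; the real member values may differ.

```python
from enum import Enum

class ArchitectureLevel(Enum):
    """Hypothetical sketch of the five levels; actual definitions may differ."""
    LIMBO = 1    # Problem analysis & decomposition
    DREAM = 2    # Component design & interactions
    REALITY = 3  # Implementation details
    DEEPER = 4   # Integration & deployment
    DEEPEST = 5  # Optimization & scaling

# max_levels=3 in the Quick Start would then cover LIMBO through REALITY
selected = [lvl for lvl in ArchitectureLevel if lvl.value <= 3]
print([lvl.name for lvl in selected])  # ['LIMBO', 'DREAM', 'REALITY']
```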
- Clone the repository:
git clone https://github.com/wronai/inceptor.git
cd inceptor
- Set up the development environment:
# Install Python dependencies
make install
# Install pre-commit hooks
pre-commit install
# Start Ollama server (in a separate terminal)
ollama serve
# Install development dependencies
make install
# Run tests
make test
# Run tests with coverage
make test-cov
# Check code style
make lint
# Format code
make format
# Build documentation
make docs
# Run documentation server (http://localhost:8001)
make serve-docs
# Build package
make build
# Clean up
make clean
# Run a local example
python -m src.inceptor.inceptor

For full documentation, please visit https://wronai.github.io/inceptor/
Contributions are welcome! Please read our Contributing Guide to get started.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for the powerful AI models
- Mistral AI for the 7B model
- The open-source community for invaluable tools and libraries
# 1. Install MkDocs
pip install mkdocs-material mkdocstrings[python] mkdocs-awesome-pages-plugin
# 2. Create the docs/ structure
mkdir -p docs/{guide,architecture,api,development,examples,about,assets/{css,js,images}}
# 3. Run the development server
mkdocs serve
# 4. Build and deploy
mkdocs build
mkdocs gh-deploy # GitHub Pages

- Home: Installation, Quick Start, Features
- User Guide: Getting Started, CLI Reference, Examples
- Architecture: Multi-Level Design, Prompts, Ollama Integration
- API Reference: Auto-generated from the code
- Development: Contributing, Testing, Release Process
- Examples: Real-world use cases, troubleshooting
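The navigation above could be wired together in a `mkdocs.yml` along these lines. This is a sketch, not the project's actual config, and the page paths are assumptions:

```yaml
site_name: Inceptor
theme:
  name: material
plugins:
  - search
  - awesome-pages
  - mkdocstrings
nav:
  - Home: index.md
  - User Guide: guide/
  - Architecture: architecture/
  - API Reference: api/
  - Development: development/
  - Examples: examples/
```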
- Theme: Material Design with custom colors
- Logo: Inception-inspired rotating animation
- Terminal: Code examples with animations
- Social: GitHub, PyPI, Docker links
- Search: Advanced search with language-aware separators
- Git dates: Automatic creation/modification dates
- Minify: Optimized HTML/CSS/JS
- Privacy: GDPR-compliant
- Tags: Content categorization
Now just add content to the folders under docs/ and you have professional documentation ready for deployment!

Example command to run it:
mkdocs serve # http://localhost:8000