Important
An interactive CLI AI agent (an LLM in a feedback loop) that helps you with coding tasks. I built this project to understand how agents work and how to build them effectively.
Claude helped me here and there, but mostly with UI design.
Note
I have tested this project only on a few long-horizon coding tasks. First I want to make sure everything works well on small tasks before going all in.
- Multi-Provider LLM Support - OpenAI integration and local LLMs via vLLM (coming soon...)
- React/Ink Terminal UI - Beautiful, interactive terminal interface with real-time updates
- Autonomous Code Operations - Read, edit, search, and manage files with AI assistance
- Intelligent Code Search - Semantic search with ChromaDB, grep patterns, and fuzzy file search
- Git Integration - Built-in git operations for status, commit, push, diff, and log
- Memory System - Repository-specific knowledge retention and retrieval
- Task Management - TODO tracking and autonomous task execution via subagents
- Context Engineering - Advanced context window management and optimization
Clone the repository:

```shell
git clone https://github.com/saurabhaloneai/hakken.git
cd hakken
```

Install the Python package:

```shell
uv pip install -e .
```

Install the terminal UI dependencies:

```shell
cd terminal_ui
npm install
cd ..
```

Create a `.env` file in the project root:
```
# Required
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_BASE_URL=https://openrouter.ai/api/v1
OPENAI_MODEL=openai/gpt-oss-20b:free

# The unit is k (thousands of tokens)
MODEL_MAX_TOKENS=250
COMPRESS_THRESHOLD=0.8
```

Run Hakken:

```shell
# Default: launch with the React/Ink UI
hakken

# Show version
hakken --version
```

Hakken provides a comprehensive set of tools for AI-powered code operations:
- read_file - Read file contents with optional line range support
- edit_file - Create and modify files with full content replacement
- search_replace - Precise string-based find and replace
- delete_file - Safe file deletion with confirmation
- list_dir - Directory exploration and file listing
- grep_search - Pattern matching with regex support across files
- file_search - Fuzzy filename search to locate files quickly
- semantic_search - AI-powered semantic code search using ChromaDB embeddings
- run_terminal_cmd - Execute shell commands with real-time output streaming
  - Command validation and security checks
  - Working directory support
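As a rough illustration of the command validation mentioned above, here is a minimal denylist check of the kind a `run_terminal_cmd`-style tool might perform before executing anything. The function name and blocked set are assumptions for the sketch, not Hakken's actual implementation.

```python
import shlex

# Hypothetical denylist; a real agent would use a richer policy.
BLOCKED_COMMANDS = {"rm", "shutdown", "mkfs", "dd"}

def validate_command(command: str) -> bool:
    """Return True if the command passes a simple denylist check."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # reject malformed input (e.g. unbalanced quotes)
    if not tokens:
        return False  # reject empty commands
    # Check the program name (first token) against the denylist
    return tokens[0] not in BLOCKED_COMMANDS
```

A validated command would then be executed with its output streamed back into the conversation for the model to read.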
- git status - Check repository status and changes
- git commit - Create commits with AI-generated or custom messages
- git push - Push changes to remote repository
- git diff - View file differences
- git log - Browse commit history
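Git tools like these can be thin wrappers over the `git` CLI. The sketch below shows one plausible shape, assuming `subprocess` and returning stdout (or stderr on failure) as text for the LLM to read; it is illustrative, not Hakken's actual code.

```python
import subprocess

def run_git(args: list[str], cwd: str = ".") -> str:
    """Run a git subcommand and return its output as text."""
    result = subprocess.run(
        ["git", *args],
        cwd=cwd,
        capture_output=True,
        text=True,
        check=False,  # surface errors as text instead of raising
    )
    return result.stdout if result.returncode == 0 else result.stderr

# e.g. run_git(["status", "--porcelain"]) or run_git(["log", "--oneline", "-5"])
```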
- add_memory - Store repository-specific knowledge and context
- list_memories - Retrieve stored memories and insights
- Persistent knowledge base for better contextual understanding
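A repository-scoped memory store can be as simple as a JSON file next to the repo. This sketch assumes a flat key/value schema and an illustrative file name; Hakken's actual storage format may differ.

```python
import json
from pathlib import Path

# Illustrative file name, not necessarily what Hakken uses.
MEMORY_FILE = Path(".hakken_memory.json")

def add_memory(key: str, content: str, path: Path = MEMORY_FILE) -> None:
    """Store a piece of repository-specific knowledge."""
    memories = json.loads(path.read_text()) if path.exists() else {}
    memories[key] = content
    path.write_text(json.dumps(memories, indent=2))

def list_memories(path: Path = MEMORY_FILE) -> dict[str, str]:
    """Retrieve all stored memories for this repository."""
    return json.loads(path.read_text()) if path.exists() else {}
```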
- todo - Structured TODO list management
- task - Autonomous task execution via subagents
- context_compression - Intelligent conversation history compression
Autonomous subagents can be spawned for:
- Complex multi-step tasks
- Parallel execution of independent operations
- Isolated contexts for specific objectives
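The isolation idea can be sketched as follows: each subagent runs against a fresh message history, and only its final answer is handed back to the parent conversation. The dataclass and function names here are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    """A fresh, isolated message history for one subagent."""
    messages: list[dict] = field(default_factory=list)

def run_subagent(task: str, solve) -> str:
    """Run `solve` (a stand-in for the LLM loop) in its own session so
    sub-task chatter never leaks into the main conversation."""
    session = ChatSession()
    session.messages.append({"role": "user", "content": task})
    answer = solve(session)
    session.messages.append({"role": "assistant", "content": answer})
    # Clean handoff: the parent sees only the extracted answer,
    # not the subagent's full history.
    return answer
```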
Hakken implements several techniques to manage and optimize the LLM context window:
- Automatically compresses conversation history when context usage exceeds a threshold (configurable via `COMPRESS_THRESHOLD`)
- Uses the LLM to generate summaries that preserve key decisions, unresolved issues, and important context
- Retains system messages and recent interactions while summarizing older sessions
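The trigger for compression reduces to a simple ratio check. The sketch below uses a crude word-count stand-in for token counting (a real implementation would use the model's tokenizer), with the `.env` values from above.

```python
# Values from the example .env: MODEL_MAX_TOKENS=250 (unit is k)
COMPRESS_THRESHOLD = 0.8
MODEL_MAX_TOKENS = 250_000

def estimate_tokens(messages: list[dict]) -> int:
    """Rough token estimate; a real agent would use the model's tokenizer."""
    return sum(len(m["content"].split()) for m in messages)

def should_compress(messages: list[dict]) -> bool:
    """True once estimated context usage crosses the compression threshold."""
    return estimate_tokens(messages) / MODEL_MAX_TOKENS >= COMPRESS_THRESHOLD
```

When this returns true, older turns get summarized while system messages and recent interactions are kept verbatim.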
- Automatically clears old tool results after every 10 tool calls (keeps last 5)
- Replaces verbose tool outputs with a placeholder to save context space
- Manual cropping support (top/bottom direction) for fine-grained control
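The pruning rule above ("keep the last 5, clear the rest") might look like this sketch, where older tool results are swapped for a placeholder string. The message shape and placeholder text are illustrative.

```python
PLACEHOLDER = "[tool result cleared to save context]"

def prune_tool_results(messages: list[dict], keep_last: int = 5) -> list[dict]:
    """Replace all but the most recent `keep_last` tool outputs
    with a placeholder, leaving other messages untouched."""
    tool_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    to_clear = set(tool_indices[:-keep_last]) if len(tool_indices) > keep_last else set()
    return [
        {**m, "content": PLACEHOLDER} if i in to_clear else m
        for i, m in enumerate(messages)
    ]
```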
- Subagent tasks run in isolated chat sessions
- Prevents context pollution between main conversation and sub-tasks
- Clean session handoff with response extraction
- Adds Anthropic-style `cache_control` markers to messages
- Enables prompt caching for compatible providers (Anthropic models via OpenRouter)
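As a sketch of what adding those markers involves: Anthropic-style prompt caching expects an `{"cache_control": {"type": "ephemeral"}}` field on a content block. The example below marks the system message; where exactly Hakken places the markers may differ.

```python
def add_cache_control(messages: list[dict]) -> list[dict]:
    """Rewrite system messages into block form with a cache_control
    marker so compatible providers can cache the prompt prefix."""
    marked = []
    for m in messages:
        if m["role"] == "system":
            m = {
                "role": "system",
                "content": [
                    {
                        "type": "text",
                        "text": m["content"],
                        "cache_control": {"type": "ephemeral"},
                    }
                ],
            }
        marked.append(m)
    return marked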
- Structured todo list to track multi-step tasks
- Keeps agent focused on current objectives without losing context
- Provides visibility into progress and remaining work
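A structured todo item needs very little: a description and a status. The fields below are assumptions for illustration, not Hakken's actual schema.

```python
from dataclasses import dataclass

@dataclass
class TodoItem:
    description: str
    status: str = "pending"  # pending | in_progress | done

def progress(todos: list[TodoItem]) -> str:
    """Summarize how much of the task list is complete."""
    done = sum(t.status == "done" for t in todos)
    return f"{done}/{len(todos)} tasks done"
```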
- Trims and validates assistant content before storing
- Handles empty responses with fallback content
- Proper tool-call detection and structured message building
- memory
- local LLM support with vLLM
- improve tool execution
- parallel tool calls
