
🤖 CodeCortex AI

Your Autonomous Terminal-Based AI Pair Programmer

Python 3.12+ OpenRouter MCP License


An intelligent coding agent that lives in your terminal. It reads your files, writes code, runs commands, searches the web, manages tasks — all through natural language. Think Cursor/Copilot, but open-source and running entirely in your CLI.


╭─ CodeCortex AI ──────────────────────────────────────╮
│                                                      │
│  model: openrouter/hunter-alpha                      │
│  cwd: ~/my-project                                   │
│  commands: /help /config /approval /model /exit      │
│                                                      │
╰──────────────────────────────────────────────────────╯

> Fix the authentication bug in auth.py and add unit tests

🔍 Reading auth.py...
🔍 Searching for related test files...
✏️  Editing auth.py — fixed token validation logic
✏️  Creating tests/test_auth.py — 6 test cases
🐚 Running pytest tests/test_auth.py...
✅ All 6 tests passed.

✨ What Makes It Special

🧠 Truly Autonomous

Doesn't just suggest code — it reads, writes, edits, and runs your project. Give it a task and watch it work through multi-step workflows independently.

🔒 Safety First

Built-in dangerous command detection, approval policies, and path-based safety checks. You stay in control — always.

🔌 MCP Protocol Support

Connect to any Model Context Protocol server — extend capabilities with external tools, databases, APIs, and more.

🧩 Sub-Agents

Delegate complex tasks to specialized sub-agents — codebase investigation, code review, and custom workflows with isolated context.

💾 Session Persistence

Save, resume, and checkpoint your coding sessions. Pick up right where you left off, even after closing the terminal.

🔄 Context Compression

Smart automatic context management — compresses conversation history when approaching token limits so you never lose track.
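The compression idea can be sketched in a few lines. This is a hypothetical illustration, not the project's actual `compactor.py` API: when the estimated token count exceeds a budget, older messages collapse into a single summary entry so recent turns stay intact.

```python
# Hypothetical sketch of automatic context compression. When the estimated
# token count exceeds a budget, older messages are collapsed into a single
# summary message; only the most recent turns survive verbatim.

def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token (the project uses tiktoken).
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages, budget=100, keep_recent=2, summarize=None):
    """Return a message list whose estimated size fits the budget."""
    if estimate_tokens(messages) <= budget:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old) if summarize else "Summary of earlier conversation."
    return [{"role": "system", "content": summary}] + recent

history = [{"role": "user", "content": "x" * 400},
           {"role": "assistant", "content": "y" * 400},
           {"role": "user", "content": "latest question"}]
compacted = compact(history, budget=100)
print(len(compacted))  # 3 — the oldest turn is folded into one summary entry
```

In practice the `summarize` step would itself be an LLM call; the point is that recent turns are never rewritten, so the agent keeps precise track of the task at hand.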


🛠️ Built-in Tools

The agent comes loaded with powerful tools out of the box:

| Tool | Type | Description |
|------|------|-------------|
| 📖 `read_file` | Read | Read file contents with line range support |
| ✏️ `write_file` | Write | Create new files or overwrite existing ones |
| 🔧 `edit` | Write | Surgical search-and-replace editing |
| 📁 `list_dir` | Read | Explore directory structures |
| 🔍 `grep` | Read | Search code with regex pattern matching |
| 🌐 `glob` | Read | Find files by name patterns |
| 🐚 `shell` | Shell | Execute any shell command |
| 🌍 `web_search` | Network | Search the web via DuckDuckGo |
| 📥 `web_fetch` | Network | Fetch and read web page content |
| 🧠 `memory` | Memory | Persistent memory across sessions |
| `todo` | Read | Task tracking and management |
| 🔗 `mcp` | MCP | Tools from connected MCP servers |

🚀 Quick Start

Prerequisites

  • Python 3.12+
  • An OpenRouter API key

Option A — Install via pip (recommended)

pip install codecortex-ai
codecortex

That's it! On first run it'll ask for your API key.

Option B — Clone from source

1. Clone the Repository

git clone https://github.com/Tanishq-S-Dev05/CodeCortex-AI.git
cd CodeCortex-AI

2. Create Virtual Environment

python -m venv .venv

# Windows
.venv\Scripts\activate

# macOS / Linux
source .venv/bin/activate

3. Install Dependencies

pip install -r requirements.txt

4. Configure API Key

Create a .env file in the project root:

API_KEY=sk-or-v1-your-openrouter-api-key-here
BASE_URL=https://openrouter.ai/api/v1

5. Launch 🚀

python main.py

That's it — you're in! Start typing natural language commands.


💡 Usage

Interactive Mode (Default)

python main.py

Opens an interactive session where you can have a multi-turn conversation with the agent.

Single Prompt Mode

python main.py "Create a Flask REST API with user authentication"

Runs a single task and exits.

Custom Working Directory

python main.py -c /path/to/your/project

Point the agent at any project directory.
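The three launch modes above boil down to one optional positional argument plus a directory flag. As a self-contained illustration, here is the same CLI surface sketched with stdlib `argparse` (the real `main.py` uses Click, and its option names may differ):

```python
# Simplified sketch of the CLI modes: an optional prompt (single-task mode)
# and a -c/--cwd flag for the working directory. Uses argparse so the
# example runs without dependencies; the actual entry point is Click-based.
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="codecortex")
    p.add_argument("prompt", nargs="?", help="run a single task, then exit")
    p.add_argument("-c", "--cwd", default=".", help="project directory")
    return p

args = build_parser().parse_args(["Create a Flask REST API", "-c", "/tmp/app"])
print(args.prompt, args.cwd)
# With no prompt argument, args.prompt is None and the agent would start
# an interactive multi-turn session instead.
```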


⌨️ Commands

Once inside the agent, use these slash commands:

| Command | Action |
|---------|--------|
| `/help` | Show all available commands |
| `/config` | Display current configuration |
| `/model` | Show/change the active model |
| `/tools` | List all available tools |
| `/mcp` | Show MCP server connections |
| `/approval` | Change approval policy |
| `/stats` | View token usage statistics |
| `/save` | Save current session |
| `/resume` | Resume a saved session |
| `/checkpoint` | Create a session checkpoint |
| `/restore` | Restore from a checkpoint |
| `/exit` | Exit the agent |

🔒 Approval Policies

Control how the agent handles potentially dangerous operations:

| Policy | Behavior | Best For |
|--------|----------|----------|
| `auto-edit` | Auto-approves reads & edits, asks for shell commands | Default — Recommended |
| `auto` | Auto-approves everything except dangerous commands | Trusted environments |
| `on-failure` | Auto-approves, asks on errors | Experienced users |
| `never` | Only allows safe read-only commands | Maximum safety |
| `yolo` | Approves everything (including dangerous!) | ⚠️ Use at your own risk |

Built-in protection: commands like `rm -rf /`, `dd`, `mkfs`, `shutdown`, and `curl | bash` are always blocked, regardless of policy.
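The always-on blocklist can be pictured as a pattern check that runs before any approval policy is consulted. This is a hedged sketch; the patterns below are illustrative, not the project's actual list in `safety/approval.py`:

```python
# Hypothetical sketch of dangerous-command detection: every shell command
# is screened against blocked patterns first; only commands that pass move
# on to the active approval policy.
import re

BLOCKED_PATTERNS = [
    r"\brm\s+-rf\s+/(\s|$)",   # wipe the filesystem root
    r"\bdd\b",                 # raw disk writes
    r"\bmkfs\b",               # reformat a device
    r"\bshutdown\b",           # power off the machine
    r"curl\b.*\|\s*(ba)?sh",   # pipe a remote script straight into a shell
]

def is_blocked(command: str) -> bool:
    return any(re.search(p, command) for p in BLOCKED_PATTERNS)

print(is_blocked("rm -rf /"))     # True  — refused under every policy
print(is_blocked("ls -la src/"))  # False — handled by the normal policy
```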


⚙️ Configuration

.codecortex-ai/config.toml

# Enable/disable the hook system
hooks_enabled = false

# Model settings
[model]
name = "openrouter/hunter-alpha"
temperature = 0

# MCP Servers (optional)
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "./"]

# Custom hooks (optional)
[[hooks]]
name = "pre-tool-check"
trigger = "before_tool"
command = "python ./scripts/validate.py"

Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `API_KEY` | OpenRouter API key | Yes |
| `BASE_URL` | API base URL | Yes |

🏗️ Architecture

codecortex-ai/
├── main.py                  # CLI entry point (Click)
├── .env                     # API credentials (gitignored)
├── requirements.txt         # Python dependencies
│
├── agent/                   # 🧠 Core agent loop
│   ├── agent.py             # Main agentic loop with streaming
│   ├── events.py            # Event system (text, tool calls, errors)
│   ├── session.py           # Session orchestration & initialization
│   └── persistence.py       # Save/resume/checkpoint sessions
│
├── client/                  # 🔌 LLM client (OpenAI SDK)
│   ├── llm_client.py        # AsyncOpenAI wrapper with streaming
│   └── response.py          # Stream event parsing & tool calls
│
├── config/                  # ⚙️ Configuration
│   ├── config.py            # Pydantic models for all settings
│   └── loader.py            # TOML config file discovery & loading
│
├── tools/                   # 🛠️ Tool system
│   ├── base.py              # Tool base class & schemas
│   ├── registry.py          # Tool discovery & invocation
│   ├── builtin/             # Built-in tools (file, shell, web, etc.)
│   └── mcp/                 # MCP server integration (fastmcp)
│
├── context/                 # 📦 Context management
│   ├── context.py           # Message history & token tracking
│   └── compactor.py         # Automatic context compression
│
├── prompts/                 # 📝 System prompts
│   └── system.py            # Identity, security, operational guidelines
│
├── safety/                  # 🔒 Safety & approval
│   └── approval.py          # Policies, dangerous command detection
│
├── hooks/                   # 🪝 Hook system
│   └── hooks.py             # Before/after hooks for agent & tools
│
├── ui/                      # 🎨 Terminal UI
│   └── terminal.py          # Rich-based formatted output
│
├── utils/                   # 🧰 Utilities
│   ├── text.py              # Token counting (tiktoken)
│   └── paths.py             # Path resolution helpers
│
└── .codecortex-ai/           # 📋 Project config
    ├── config.toml           # Agent configuration
    └── tools/                # Custom tool definitions

🔌 MCP Integration

Connect to any Model Context Protocol server to extend the agent's capabilities:

# .codecortex-ai/config.toml

# Local filesystem access
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "./"]

# Database access
[mcp_servers.postgres]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]

# GitHub integration
[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_TOKEN = "ghp_..." }

Supports both stdio and HTTP/SSE transports.


🤖 Sub-Agents

Delegate complex tasks to specialized sub-agents with isolated context:

  • Codebase Investigator — Deep-dive into large codebases, understand architecture
  • Code Reviewer — Analyze code quality, find bugs, suggest improvements
  • Custom Sub-Agents — Define your own with specific tools and constraints

Sub-agents run with their own context window and limited tool access, making them ideal for focused investigations without polluting the main conversation.
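The isolation boils down to two things: a fresh message history and a restricted tool set, with only the final answer returned to the parent conversation. A hypothetical sketch of that shape (names are illustrative):

```python
# Sketch of sub-agent isolation: the sub-agent starts from a fresh message
# history and a limited tool set; only its final result flows back into
# the main conversation, keeping the parent context clean.
def run_subagent(task, tools, run_agent):
    context = [{"role": "user", "content": task}]  # fresh, isolated history
    return run_agent(context, tools)               # only the result escapes

def fake_agent(context, tools):
    # Stand-in for the real agent loop, for demonstration purposes.
    return f"reviewed with {sorted(tools)}: ok"

result = run_subagent("Review src/auth/", {"read_file", "grep"}, fake_agent)
main_history = [{"role": "assistant", "content": result}]  # summary only
print(result)
```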


🧪 Example Workflows

🐛 Bug Fix

> There's a null pointer exception in the payment module. Find and fix it.

🏗️ Feature Development

> Add a /api/users endpoint with CRUD operations, validation, and tests

🔍 Code Review

> Review the recent changes in src/auth/ for security vulnerabilities

📚 Documentation

> Generate API documentation for all routes in the Express app

🔄 Refactoring

> Refactor the database module to use connection pooling

🌐 Research

> Search for best practices for rate limiting in Node.js and implement it

🌟 Supported Models

Works with any model available on OpenRouter, including:

| Model | Context | Price |
|-------|---------|-------|
| `openrouter/hunter-alpha` | 1M tokens | Free |
| `mistralai/devstral-2512:free` | 256K tokens | Free |
| `anthropic/claude-sonnet-4` | 200K tokens | Paid |
| `google/gemini-2.5-pro` | 1M tokens | Paid |
| `openai/gpt-4o` | 128K tokens | Paid |

Switch models anytime with the /model command or in config.toml.


🤝 Contributing

Contributions are welcome! Here's how:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m "Add amazing feature"
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

📄 License

This project is open source and available under the MIT License.


Built with ❤️ for developers who live in the terminal
