
Releases: VladoIvankovic/Codeep

v1.3.0 — Ollama Local AI Support

08 Apr 13:29


New: Ollama Provider

Run AI models fully locally — or on a remote server — without an API key.

  • Select the provider with /provider ollama
  • Configure URL via /settings → Ollama URL (default: http://localhost:11434)
  • Pick from installed models dynamically with /model
  • For remote Ollama: set OLLAMA_HOST=0.0.0.0 on the server
  • Node v24 compatibility: uses node:http transport to avoid undici AggregateError
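
Roughly, the node:http transport can be sketched like this (a sketch only: the helper names are illustrative, /api/tags is Ollama's standard model-listing endpoint, and the default port matches the /settings default):

```typescript
import http from "node:http";

// Build node:http request options from a configured Ollama base URL.
// Defaults match the /settings default of http://localhost:11434.
function ollamaRequestOptions(baseUrl: string, path: string): http.RequestOptions {
  const url = new URL(path, baseUrl);
  return {
    hostname: url.hostname,
    port: url.port ? Number(url.port) : 11434,
    path: url.pathname,
    method: "GET",
  };
}

// List installed models via GET /api/tags (what /model reads from).
// Using node:http directly sidesteps the undici AggregateError on Node v24.
function listModels(baseUrl: string): Promise<string[]> {
  return new Promise((resolve, reject) => {
    const req = http.request(ollamaRequestOptions(baseUrl, "/api/tags"), (res) => {
      let body = "";
      res.on("data", (chunk) => (body += chunk));
      res.on("end", () => {
        try {
          resolve(JSON.parse(body).models.map((m: { name: string }) => m.name));
        } catch (err) {
          reject(err);
        }
      });
    });
    req.on("error", reject);
    req.end();
  });
}
```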

New: /memory command

Annotate project context from the terminal — stored in .codeep/intelligence.json.

  • /memory <note> — add a note
  • /memory list — show all notes
  • /memory remove <n> — remove by index
  • /memory clear — remove all notes
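
The four subcommands map to simple list operations. A minimal sketch (the exact schema of .codeep/intelligence.json and the 1-based index are assumptions here, not the actual implementation):

```typescript
// Illustrative shape of the notes portion of .codeep/intelligence.json.
type Intelligence = { notes: string[] };

function addNote(intel: Intelligence, note: string): void {
  intel.notes.push(note); // /memory <note>
}

function listNotes(intel: Intelligence): string[] {
  return [...intel.notes]; // /memory list
}

function removeNote(intel: Intelligence, n: number): void {
  intel.notes.splice(n - 1, 1); // /memory remove <n> (1-based, an assumption)
}

function clearNotes(intel: Intelligence): void {
  intel.notes = []; // /memory clear
}
```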

Agent Confirmation/Permission (ACP)

Granular per-tool confirmation controls in /settings:

  • Confirm: delete_file — ON by default
  • Confirm: execute_command — ON by default
  • Confirm: write_file / edit_file — OFF by default (opt-in)

API Endpoint Detection

Automatic detection of API endpoints on project scan (Next.js App/Pages Router, Express, Laravel, Django). Results are included in the agent system prompt.

Dashboard improvements

  • Confirmation dialog before archiving a project
  • "View all N →" link when more than 10 projects exist
  • Pending / Done task tabs in the tasks section

v1.2.160

07 Apr 20:54


What's new

/memory command

Add custom notes to project intelligence directly from the CLI:

/memory Always use pnpm, never npm
/memory Main entry point is src/renderer/main.ts

Notes are included in every AI and agent conversation for this project.

Agent now uses project intelligence

The agent system prompt now includes the full intelligence.json context — frameworks, architecture, entry points, API endpoints, and custom notes. Previously only chat mode had access to this data.

Configurable tool confirmations

In dangerous confirmation mode, you can now choose exactly which tools require approval via /settings:

  • Confirm: delete_file — ON by default
  • Confirm: execute_command — ON by default
  • Confirm: write_file / edit_file — OFF by default

API endpoint detection

/scan now detects API routes in your project:

  • Next.js App Router (app/**/route.ts)
  • Next.js Pages Router (pages/api/**)
  • Express / Fastify (app.get('/path'))
  • Laravel (Route::get('/path'))
  • Django (urls.py)
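
For the content-based frameworks, the matching can be sketched with patterns like these (the regexes are illustrative, not Codeep's actual rules; Next.js routes are detected by file path rather than file contents):

```typescript
// Illustrative route patterns of the kind /scan looks for.
const routePatterns: RegExp[] = [
  // Express / Fastify: app.get('/users'), router.post("/login"), ...
  /\b(?:app|router|fastify)\.(get|post|put|patch|delete)\(\s*['"]([^'"]+)['"]/,
  // Laravel: Route::get('/users', ...)
  /\bRoute::(get|post|put|patch|delete)\(\s*['"]([^'"]+)['"]/,
];

function detectRoute(line: string): { method: string; path: string } | null {
  for (const pattern of routePatterns) {
    const m = pattern.exec(line);
    if (m) return { method: m[1].toUpperCase(), path: m[2] };
  }
  return null;
}
```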

Dashboard improvements

  • Confirmation dialog before archiving a project
  • "View all N →" link when you have more than 10 projects
  • Pending / Done tabs for tasks with counts

v1.2.152

04 Apr 14:52


What's new

Security

  • Fixed unauthenticated access to /api/tasks — now requires x-sync-token header
  • Added rate limiting to all API endpoints (stats, tasks, progress, sync, keys, cleanup)

New features

  • Token budget warning — agent warns at 80% and 95% of the model's context window, using accurate per-model context sizes
  • /sync command — sync learning preferences and profiles across machines
  • Auto-sync on startup — learning preferences are automatically pulled from cloud if newer than local
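
The warning thresholds are a simple ratio check. A sketch with illustrative names and messages (not Codeep's actual output):

```typescript
// Emit a warning string once token usage crosses 80% or 95% of the
// model's context window; return null below the first threshold.
function budgetWarning(usedTokens: number, contextWindow: number): string | null {
  const ratio = usedTokens / contextWindow;
  if (ratio >= 0.95) return `critical: ${Math.round(ratio * 100)}% of context window used`;
  if (ratio >= 0.8) return `warning: ${Math.round(ratio * 100)}% of context window used`;
  return null;
}
```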

Reliability

  • Retry logic for all cloud sync calls (exponential backoff, up to 2 retries on network errors and 5xx)
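
The backoff described above can be sketched as follows (function name and base delay are illustrative; the retry count matches these notes):

```typescript
// Retry a cloud sync call up to `maxRetries` times on failure,
// doubling the delay each attempt (exponential backoff).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt === maxRetries) break;
      // Backoff schedule: baseDelayMs, 2 * baseDelayMs, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastErr;
}
```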

Developer experience

  • Debug logging now writes to ~/.codeep/logs/ — use CODEEP_DEBUG=1 to enable, tail -f to follow without breaking the UI
  • Updated TypeScript 5.3 → 6.0 and minimum Node.js 18 → 20

Data & accuracy

  • Fixed model context window sizes (Claude Opus/Sonnet: 200k → 1M, DeepSeek: 64k → 128k, MiniMax corrected)
  • Updated model pricing across all providers

Bug fixes

  • Fixed 23 failing tests

v1.2.135

01 Apr 20:45


What's new

OpenAI GPT-5.4 support

  • Added GPT-5.4, GPT-5.4 Mini, and GPT-5.4 Nano models
  • Fixed max_completion_tokens compatibility (GPT-5.4+ requirement)

Updated provider model lists

  • Z.AI — GLM-5.1 (default), GLM-5 Turbo, GLM-5
  • MiniMax — MiniMax M2.7
  • OpenAI — GPT-5.4 (default), GPT-5.4 Mini, GPT-5.4 Nano
  • Anthropic — Claude Opus (default), Claude Sonnet, Claude Haiku
  • Google — Gemini 3.1 Pro (default), Gemini 3 Flash

Higher default limits

  • Agent iterations: raised to 10,000 (was 100)
  • Agent duration: raised to 480 min / 8h (was 20 min)
  • Max tokens: raised to 32,768 (was 8,192) — better for Opus and large responses
  • Progress bar no longer capped at 500 iterations

Bug fixes

  • API error messages now show actual error details instead of generic "API error"
  • Fixed project type detection (PHP/HTML projects no longer show "Unknown")

v1.2.130

01 Apr 12:58


Fix project type detection using scanned file extensions

- Run extension fallback for both Unknown and generic types
- Skip non-code extensions (svg, png, etc.) when finding dominant type
- Add HTML/CSS, TypeScript, JavaScript to extension type map
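
The fallback can be sketched like this (the skip list and extension-to-type map are partial illustrations, not the full tables):

```typescript
// Ignore asset/metadata extensions when looking for the dominant type.
const nonCode = new Set(["svg", "png", "jpg", "gif", "ico", "md", "json", "lock"]);
const typeByExt: Record<string, string> = {
  ts: "TypeScript",
  js: "JavaScript",
  php: "PHP",
  html: "HTML/CSS",
  css: "HTML/CSS",
};

// Count scanned file extensions and map the most frequent code
// extension to a project type; fall back to "Unknown".
function dominantType(files: string[]): string {
  const counts = new Map<string, number>();
  for (const file of files) {
    const ext = file.split(".").pop() ?? "";
    if (nonCode.has(ext)) continue;
    counts.set(ext, (counts.get(ext) ?? 0) + 1);
  }
  let best = "";
  let bestCount = 0;
  for (const [ext, count] of counts) {
    if (count > bestCount) {
      best = ext;
      bestCount = count;
    }
  }
  return typeByExt[best] ?? "Unknown";
}
```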

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

v1.2.129

01 Apr 12:45


Use scanned file extensions as fallback for project type detection

If config-file detection returns Unknown, count file extensions from
the directory scan (depth 3) and use the dominant extension to determine type.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

v1.2.128

01 Apr 12:35


Fix PHP project not recognized by isProjectDirectory

v1.2.127

01 Apr 12:12


Detect PHP projects by scanning subdirectories too

v1.2.126

01 Apr 12:02


Add PHP and other project types to type detection

Also added Python (pyproject.toml), Elixir, Dart/Flutter, C/C++.
PHP fallback: scan root dir for .php files if no composer.json.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

v1.2.125

31 Mar 14:25


Make context compression silent