
v1.3.0 — Ollama Local AI Support


@VladoIvankovic VladoIvankovic released this 08 Apr 13:29
· 68 commits to main since this release


New: Ollama Provider

Run AI models fully locally — or on a remote server — without an API key.

  • Select the provider with /provider ollama
  • Configure URL via /settings → Ollama URL (default: http://localhost:11434)
  • Pick from installed models dynamically with /model
  • For remote Ollama: set OLLAMA_HOST=0.0.0.0 on the server
  • Node v24 compatibility: uses node:http transport to avoid undici AggregateError

New: /memory command

Annotate project context from the terminal — stored in .codeep/intelligence.json.

  • /memory <note> — add a note
  • /memory list — show all notes
  • /memory remove <n> — remove by index
  • /memory clear — remove all notes
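The release notes don't document the file's schema; a plausible shape for .codeep/intelligence.json, assuming notes are stored as a simple array, might look like:

```json
{
  "notes": [
    "Auth lives in src/auth; tokens are JWTs signed with RS256",
    "Never touch migrations/ by hand — use the CLI"
  ]
}
```

Treat this as an illustration only; inspect the generated file in your own project for the real layout.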

Agent Confirmation/Permission (ACP)

Granular per-tool confirmation controls in /settings:

  • Confirm: delete_file — ON by default
  • Confirm: execute_command — ON by default
  • Confirm: write_file / edit_file — OFF by default (opt-in)
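The setting names and defaults above can be sketched as a small confirmation gate. The defaults mirror the release notes; the function name and settings shape are assumptions for illustration, not codeep's actual API:

```javascript
// Per-tool confirmation defaults, as described in the release notes.
const DEFAULT_CONFIRM = {
  delete_file: true,      // ON by default
  execute_command: true,  // ON by default
  write_file: false,      // opt-in
  edit_file: false,       // opt-in
};

// Decide whether a tool call should prompt the user first.
// User settings override the defaults; unknown tools run unconfirmed.
function needsConfirmation(tool, settings = {}) {
  if (tool in settings) return settings[tool];
  return DEFAULT_CONFIRM[tool] ?? false;
}

console.log(needsConfirmation('delete_file'));                      // true
console.log(needsConfirmation('write_file'));                       // false
console.log(needsConfirmation('write_file', { write_file: true })); // true
```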

API Endpoint Detection

Automatic detection of API endpoints on project scan (Next.js App/Pages Router, Express, Laravel, Django). Results are included in the agent system prompt.
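A toy version of this kind of detection, for Express sources only: scan for app.&lt;verb&gt;('/path', ...) calls. The real scanner covers the other frameworks listed above; this regex is only an illustration of the idea, not codeep's implementation:

```javascript
// Extract HTTP method + path pairs from Express-style route registrations.
function detectExpressEndpoints(source) {
  const routes = [];
  const re = /\bapp\.(get|post|put|patch|delete)\(\s*['"`]([^'"`]+)['"`]/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    routes.push({ method: m[1].toUpperCase(), path: m[2] });
  }
  return routes;
}

const src = `
app.get('/api/users', listUsers);
app.post('/api/users', createUser);
`;
console.log(detectExpressEndpoints(src));
// [ { method: 'GET', path: '/api/users' }, { method: 'POST', path: '/api/users' } ]
```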

Dashboard improvements

  • Archive confirmation dialog before archiving projects
  • "View all N →" link when more than 10 projects exist
  • Pending / Done task tabs in the tasks section