
Commit 30480c4

Add the basic adapter for local models

1 parent c15dedb · commit 30480c4

File tree: 11 files changed (+221, -17 lines)

README.md

Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@
 
 **Agentic Code Optimization. Elevated Agent Experience.**
 
-*GEPA optimization, Bloom evals, and behavior testing for coding agents*
+*Code Optimization Engine for every coding agent. Powered by Agent Optimizers like GEPA (Genetic-Pareto) and Bloom-style scenario generation for behavior testing.*
 
 [![PyPI version](https://badge.fury.io/py/codeoptix.svg)](https://pypi.org/project/codeoptix/)
 [![CI](https://github.com/SuperagenticAI/codeoptix/actions/workflows/ci.yml/badge.svg)](https://github.com/SuperagenticAI/codeoptix/actions/workflows/ci.yml)

@@ -61,7 +61,7 @@ ollama serve
 
 # Run evaluation with local model
 codeoptix eval \
-  --agent claude-code \
+  --agent basic \
   --behaviors insecure-code \
   --llm-provider ollama
 ```

docs/concepts/overview.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 
 Understanding the fundamental concepts behind CodeOptiX.
 
-> **Agentic Code Optimization. Elevated Agent Experience.**
+> **Agentic Code Optimization. Elevated Agent Experience.** Code Optimization Engine for every coding agent. Powered by Agent Optimizers like GEPA (Genetic-Pareto) and Bloom-style scenario generation for behavior testing.
 
 ---

docs/getting-started/installation.md

Lines changed: 5 additions & 5 deletions

@@ -84,10 +84,10 @@ ollama serve
 ollama pull llama3.1:8b
 
 # 4. Use in CodeOptiX
-codeoptix eval --agent claude-code --behaviors insecure-code --llm-provider ollama
+codeoptix eval --agent basic --behaviors insecure-code --llm-provider ollama
 ```
 
-**See [Ollama Integration](../../examples/OLLAMA_TEST_RESULTS.md) for detailed setup.**
+**See [Ollama Integration Guide](../../guides/ollama-integration.md) for detailed setup.**
 
 ### Option 2: Cloud Providers (Requires API Keys)
 

@@ -168,7 +168,7 @@ CodeOptiX supports local Ollama models - no API key required!
 **Usage:**
 ```bash
 codeoptix eval \
-  --agent claude-code \
+  --agent basic \
   --behaviors insecure-code \
   --llm-provider ollama \
   --config examples/configs/ollama-insecure-code.yaml

@@ -179,11 +179,11 @@ codeoptix eval \
 adapter:
   llm_config:
     provider: ollama
-    model: llama3.1:8b  # Or gpt-oss:120b, qwen3:8b, etc.
+    model: llama3.2:3b  # Or llama3.1:8b, gpt-oss:120b, qwen3:8b, etc.
     # No api_key needed!
 ```
 
-**See [Ollama Integration Guide](../../examples/OLLAMA_TEST_RESULTS.md) for detailed setup and examples.**
+**See [Ollama Integration Guide](../../guides/ollama-integration.md) for detailed setup and examples.**
 
 ---

docs/guides/cli-usage.md

Lines changed: 6 additions & 1 deletion

@@ -29,7 +29,7 @@ codeoptix eval \
 ```
 
 **Options:**
-- `--agent` (required): Agent type (`claude-code`, `codex`, `gemini-cli`)
+- `--agent` (required): Agent type (`basic`, `claude-code`, `codex`, `gemini-cli`)
 - `--behaviors` (required): Comma-separated behavior names
 - `--output`: Output file path (default: `results.json`)
 - `--config`: Path to config file (JSON/YAML)

@@ -38,6 +38,11 @@
 - `--context`: Path to context file with plan/requirements (JSON)
 - `--fail-on-failure`: Exit with error code if behaviors fail
 
+!!! info "Agent vs LLM Provider"
+    - **`--agent`**: Specifies the agent interface/personality (prompts, evaluation style)
+    - **`--llm-provider`**: Specifies which LLM service powers the agent
+    - Example: `--agent basic --llm-provider ollama` uses simple prompts with Ollama models
+
 **Available behaviors:**
 
 - **`insecure-code`**: Detects insecure coding patterns (hardcoded secrets, SQL injection, etc.).

docs/index.md

Lines changed: 3 additions & 3 deletions

@@ -14,11 +14,11 @@ hide:
 
 <h1 class="hero-title">CodeOptiX</h1>
 
-<p class="hero-tagline">Agentic Code Optimization Platform</p>
+<p class="hero-tagline">Agentic Code Optimization. Elevated Agent Experience.</p>
 
 <p class="hero-description">
-<strong>Evaluate, test, and optimize AI-generated code before you ship.</strong><br>
-Powered by GEPA (Genetic-Pareto) optimization and Bloom-style scenario generation.<br>
+<strong>Code Optimization Engine for every coding agent.</strong><br>
+Powered by Agent Optimizers like GEPA (Genetic-Pareto) and Bloom-style scenario generation for behavior testing.<br>
 <em>Built by <a href="https://super-agentic.ai" target="_blank">Superagentic AI</a></em>
 </p>

mkdocs.yml

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 site_name: CodeOptiX Documentation
-site_description: Agentic Code Optimization. Elevated Agent Experience.
+site_description: Code Optimization Engine for every coding agent. Powered by Agent Optimizers like GEPA (Genetic-Pareto) and Bloom-style scenario generation for behavior testing.
 site_author: Superagentic AI
 site_url: https://superagenticai.github.io/codeoptix

src/codeoptix/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
-"""CodeOptiX: Agentic Code Optimization Platform with Quality Engineering Embedded."""
+"""CodeOptiX: Agentic Code Optimization. Elevated Agent Experience. Code Optimization Engine for every coding agent."""
 
 __version__ = "0.1.0"

src/codeoptix/adapters/__init__.py

Lines changed: 2 additions & 0 deletions

@@ -1,6 +1,7 @@
 """Agent adapters for CodeOptix."""
 
 from codeoptix.adapters.base import AgentAdapter, AgentOutput
+from codeoptix.adapters.basic import BasicAdapter
 from codeoptix.adapters.claude_code import ClaudeCodeAdapter
 from codeoptix.adapters.codex import CodexAdapter
 from codeoptix.adapters.factory import create_adapter

@@ -9,6 +10,7 @@
 __all__ = [
     "AgentAdapter",
     "AgentOutput",
+    "BasicAdapter",
     "ClaudeCodeAdapter",
     "CodexAdapter",
     "GeminiCLIAdapter",

src/codeoptix/adapters/basic.py (new file)

Lines changed: 195 additions & 0 deletions

@@ -0,0 +1,195 @@
"""Basic agent adapter for testing and simple use cases."""

from typing import Any

from codeoptix.adapters.base import AgentAdapter, AgentOutput
from codeoptix.utils.llm import LLMClient


class BasicAdapter(AgentAdapter):
    """
    Basic agent adapter that works with any LLM provider.

    This adapter doesn't require any external agent software and can be used
    for testing or simple evaluation scenarios. It uses the LLM directly
    with a simple coding assistant prompt.
    """

    def __init__(self, config: dict[str, Any]):
        """Initialize basic adapter."""
        super().__init__(config)

        # Get LLM configuration
        llm_config = config.get("llm_config", {})
        if not llm_config:
            raise ValueError("BasicAdapter requires 'llm_config' in configuration")

        # Create LLM client
        from codeoptix.utils.llm import create_llm_client, LLMProvider

        provider_name = llm_config.get("provider", "ollama")
        self.llm_client: LLMClient = create_llm_client(
            LLMProvider(provider_name), llm_config.get("api_key")
        )

        # Set model
        self.model = llm_config.get("model", "llama3.2:3b")

        # Set default prompt
        self._current_prompt = config.get("prompt") or self._get_default_prompt()

    def get_adapter_type(self) -> str:
        """Get adapter type."""
        return "basic"

    def _get_default_prompt(self) -> str:
        """Get default basic coding assistant prompt."""
        return """You are a helpful coding assistant. Your task is to write clean, secure, and well-tested code.

Guidelines:
- Write secure code: validate inputs, avoid hardcoded secrets, use proper error handling
- Write comprehensive tests: cover edge cases, use meaningful assertions
- Follow coding best practices: clear variable names, proper structure, documentation
- Consider the user's requirements and context provided

When given a coding task, provide:
1. Well-structured, readable code
2. Appropriate tests for the code
3. Brief explanation of the implementation"""

    def get_prompt(self) -> str:
        """Get current system prompt."""
        return self._current_prompt or self._get_default_prompt()

    def update_prompt(self, new_prompt: str) -> None:
        """Update the system prompt."""
        self._current_prompt = new_prompt

    def execute(self, prompt: str, context: dict[str, Any] | None = None) -> "AgentOutput":
        """
        Execute a coding task using the LLM directly.

        Args:
            prompt: The coding task prompt
            context: Optional context information

        Returns:
            AgentOutput with generated code and tests
        """
        from codeoptix.adapters.base import AgentOutput

        context = context or {}

        # Build the full prompt
        full_prompt = self._build_full_prompt(prompt, context)

        # Get response from LLM
        messages = [
            {"role": "system", "content": self._current_prompt},
            {"role": "user", "content": full_prompt},
        ]

        response = self.llm_client.chat_completion(
            messages=messages, model=self.model, temperature=0.7, max_tokens=2048
        )

        # Parse the response into code and tests
        code, tests = self._parse_response(response)

        return AgentOutput(
            code=code,
            tests=tests,
            prompt_used=self._current_prompt,
            metadata={"model": self.model, "adapter_type": "basic", "full_response": response},
        )

    def _build_full_prompt(self, prompt: str, context: dict[str, Any]) -> str:
        """Build the full prompt including context."""
        parts = []

        # Add context if provided
        if context.get("plan"):
            parts.append(f"Plan/Requirements: {context['plan']}")
        if context.get("existing_code"):
            parts.append(f"Existing Code:\n{context['existing_code']}")
        if context.get("requirements"):
            parts.append(f"Requirements: {context['requirements']}")

        # Add the main task
        parts.append(f"Task: {prompt}")

        # Add output format instructions
        parts.append("""
Please provide your response in the following format:

CODE:
```python
# Your code here
```

TESTS:
```python
# Your tests here
```

EXPLANATION:
Brief explanation of your implementation.
""")

        return "\n\n".join(parts)

    def _parse_response(self, response: str) -> tuple[str, str]:
        """Parse LLM response into code and tests."""
        code = ""
        tests = ""

        # Simple parsing - look for CODE and TESTS sections
        lines = response.split("\n")
        current_section = None
        code_lines = []
        test_lines = []

        for line in lines:
            line_lower = line.lower().strip()
            if line_lower.startswith("code:") or "```" in line_lower:
                current_section = "code"
                continue
            elif line_lower.startswith("tests:") or line_lower.startswith("test:"):
                current_section = "tests"
                continue
            elif current_section == "code" and line.strip():
                # Remove markdown code blocks
                if "```" in line:
                    continue
                code_lines.append(line)
            elif current_section == "tests" and line.strip():
                # Remove markdown code blocks
                if "```" in line:
                    continue
                test_lines.append(line)

        # If no clear sections found, try to extract from the whole response
        if not code_lines and not test_lines:
            # Look for function definitions for code
            # Look for test functions for tests
            for line in lines:
                if line.strip().startswith("def ") and "test" in line.lower():
                    current_section = "tests"
                    test_lines.append(line)
                elif line.strip().startswith("def ") and current_section != "tests":
                    current_section = "code"
                    code_lines.append(line)
                elif current_section == "code":
                    code_lines.append(line)
                elif current_section == "tests":
                    test_lines.append(line)

        code = "\n".join(code_lines).strip()
        tests = "\n".join(test_lines).strip()

        # Fallback if parsing failed
        if not code and not tests:
            # Assume the entire response is code
            code = response.strip()

        return code, tests
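
For reference, a minimal usage sketch of the new adapter. This assumes a local Ollama server is already running, that `create_llm_client` resolves the `ollama` provider as the diff suggests, and that `AgentOutput` exposes its fields as attributes; the task string and context values are illustrative only.

```python
from codeoptix.adapters.factory import create_adapter

# "basic" is now a valid adapter type; BasicAdapter requires llm_config.
adapter = create_adapter(
    "basic",
    {
        "llm_config": {
            "provider": "ollama",    # local provider, no api_key needed
            "model": "llama3.2:3b",  # same default the adapter falls back to
        }
    },
)

# execute() builds the prompt, calls the LLM, and parses CODE/TESTS sections.
output = adapter.execute(
    "Write a function that validates email addresses",  # hypothetical task
    context={"requirements": "Reject addresses without a domain part"},
)

print(output.code)   # parsed CODE section (falls back to the whole response)
print(output.tests)  # parsed TESTS section (may be empty if none were found)
```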

src/codeoptix/adapters/factory.py

Lines changed: 4 additions & 2 deletions

@@ -3,6 +3,7 @@
 from typing import Any
 
 from codeoptix.adapters.base import AgentAdapter
+from codeoptix.adapters.basic import BasicAdapter
 from codeoptix.adapters.claude_code import ClaudeCodeAdapter
 from codeoptix.adapters.codex import CodexAdapter
 from codeoptix.adapters.gemini_cli import GeminiCLIAdapter

@@ -13,7 +14,7 @@ def create_adapter(adapter_type: str, config: dict[str, Any]) -> AgentAdapter:
     Factory function to create an agent adapter.
 
     Args:
-        adapter_type: Type of adapter ("claude-code", "codex", "gemini-cli")
+        adapter_type: Type of adapter ("basic", "claude-code", "codex", "gemini-cli")
         config: Configuration dictionary for the adapter
 
     Returns:

@@ -23,6 +24,7 @@ def create_adapter(adapter_type: str, config: dict[str, Any]) -> AgentAdapter:
         ValueError: If adapter_type is not supported
     """
     adapter_map = {
+        "basic": BasicAdapter,
         "claude-code": ClaudeCodeAdapter,
         "codex": CodexAdapter,
         "gemini-cli": GeminiCLIAdapter,

@@ -34,7 +36,7 @@ def create_adapter(adapter_type: str, config: dict[str, Any]) -> AgentAdapter:
         raise ValueError(
             f"Unsupported adapter type: '{adapter_type}'. "
             f"Supported types: {supported}. "
-            f"Please check the agent name and try again."
+            f"For testing without external agents, use 'basic'."
        )
 
     try:
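
A small error-path sketch showing the updated message, assuming the unsupported-type check runs before any adapter is instantiated (so the empty config here is never used); the exact wording of the supported-types list is not shown in this diff.

```python
from codeoptix.adapters.factory import create_adapter

try:
    create_adapter("unknown-agent", {})  # hypothetical unsupported type
except ValueError as err:
    # Expected to read roughly:
    # Unsupported adapter type: 'unknown-agent'. Supported types: ...
    # For testing without external agents, use 'basic'.
    print(err)
```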
