This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
CODA (CODing Agent) is a CLI-based AI coding assistant written in Go. It uses OpenAI/Azure OpenAI models and provides a rich terminal interface built with the Bubbletea framework.
```bash
# Build the binary
make build

# Run tests
make test

# Run tests with coverage
make test-coverage

# Run linter (golangci-lint)
make lint

# Format code
make fmt

# Build for all platforms
make build-all

# Run the application
make run

# Clean build artifacts
make clean

# Download and verify dependencies
make deps
make verify
```

```bash
# Run a specific test
go test -v -run TestFunctionName ./path/to/package

# Run tests in a specific package
go test -v ./internal/ai/...

# Run with coverage for a specific package
go test -v -coverprofile=coverage.out ./internal/chat
```

The project follows a layered architecture with clear separation of concerns:
- CLI Layer (`cmd/`) - Cobra-based command handling
- TUI Layer (`internal/ui/`) - Bubbletea terminal interface (in development)
- Application Layer (`internal/chat/`) - Core business logic and chat processing
- Service Layer (`internal/ai/`, `internal/tools/`) - AI client abstractions and tool system
- Infrastructure Layer (`internal/config/`, `internal/security/`) - Configuration and security
- Interface-Based Design: All major components use interfaces for abstraction
- Provider Pattern: AI providers (OpenAI/Azure) implement a common `Client` interface
- Tool System: Extensible, plugin-like architecture for file operations
- Security First: All file operations go through a validation layer
- AI Client (`internal/ai/`)
  - Unified interface for multiple AI providers
  - Streaming support with channel-based communication
  - Provider implementations: `openai.go`, `azure.go`
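As an illustration of the channel-based streaming design, a provider-agnostic client might look like this minimal sketch (the type and method names are assumptions, not the repository's actual `ai.Client` definition):

```go
package main

import "fmt"

// Message is a hypothetical chat message type.
type Message struct {
	Role    string
	Content string
}

// Client sketches a unified interface that providers such as
// openai.go and azure.go could implement.
type Client interface {
	// Complete returns the full response in one call.
	Complete(messages []Message) (string, error)
	// Stream sends incremental chunks over the returned channel,
	// closing it when the response is finished.
	Stream(messages []Message) (<-chan string, error)
}

// mockClient shows how a provider would satisfy the interface.
type mockClient struct{}

func (m mockClient) Complete(_ []Message) (string, error) {
	return "hello", nil
}

func (m mockClient) Stream(_ []Message) (<-chan string, error) {
	ch := make(chan string)
	go func() {
		defer close(ch)
		for _, chunk := range []string{"hel", "lo"} {
			ch <- chunk
		}
	}()
	return ch, nil
}

func main() {
	var c Client = mockClient{}
	stream, _ := c.Stream(nil)
	out := ""
	for chunk := range stream {
		out += chunk
	}
	fmt.Println(out) // hello
}
```

Closing the channel signals end-of-response, so callers can simply `range` over the stream.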
- Tool System (`internal/tools/`)
  - Tool interface for extensibility
  - Built-in tools: read_file, write_file, edit_file, list_files, search_files
  - Security validation before execution
- Chat Handler (`internal/chat/`)
  - Message routing and processing
  - Tool call detection in AI responses
  - Session management and persistence
- Configuration (`internal/config/`)
  - YAML-based configuration with environment variable support
  - Secure credential management
  - Multi-location config loading
When implementing new tools:
- Implement the `Tool` interface in `internal/tools/`
- Add security validation in the `Execute()` method
- Register the tool in the manager
- Add tests following the table-driven pattern
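The registration step can be sketched as follows; the `Manager` API and the `upperTool` example are assumptions for illustration, not the repository's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Tool is a simplified stand-in for the real tool interface.
type Tool interface {
	Name() string
	Execute(args map[string]string) (string, error)
}

// Manager holds registered tools by name.
type Manager struct {
	tools map[string]Tool
}

func NewManager() *Manager { return &Manager{tools: map[string]Tool{}} }

func (m *Manager) Register(t Tool) { m.tools[t.Name()] = t }

func (m *Manager) Run(name string, args map[string]string) (string, error) {
	t, ok := m.tools[name]
	if !ok {
		return "", errors.New("unknown tool: " + name)
	}
	return t.Execute(args)
}

// upperTool is a toy tool showing validation inside Execute.
type upperTool struct{}

func (upperTool) Name() string { return "upper" }

func (upperTool) Execute(args map[string]string) (string, error) {
	s, ok := args["text"]
	if !ok {
		// Validate arguments before doing any work.
		return "", errors.New(`upper: missing argument "text"`)
	}
	return strings.ToUpper(s), nil
}

func main() {
	m := NewManager()
	m.Register(upperTool{})
	out, err := m.Run("upper", map[string]string{"text": "coda"})
	fmt.Println(out, err) // CODA <nil>
}
```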
To add a new AI provider:
- Implement the `ai.Client` interface
- Add a provider configuration structure
- Register in the client factory
- Support both streaming and non-streaming modes
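The factory-registration step might look like this minimal sketch (the `NewClient` signature and the client types are assumptions, not the repository's actual factory):

```go
package main

import "fmt"

// Client is a simplified stand-in for the real ai.Client interface.
type Client interface {
	Provider() string
}

type openAIClient struct{}

func (openAIClient) Provider() string { return "openai" }

type azureClient struct{}

func (azureClient) Provider() string { return "azure" }

// NewClient selects a provider implementation by name; a new
// provider is "registered" by adding a case here.
func NewClient(provider string) (Client, error) {
	switch provider {
	case "openai":
		return openAIClient{}, nil
	case "azure":
		return azureClient{}, nil
	default:
		return nil, fmt.Errorf("unknown provider %q", provider)
	}
}

func main() {
	c, err := NewClient("azure")
	fmt.Println(c.Provider(), err) // azure <nil>
}
```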
- Use typed errors with the `CodaError` structure
- Always wrap lower-level errors with context
- Provide user-friendly error messages
- Log detailed errors for debugging
- Unit tests for individual components with mocks
- Integration tests for component interactions
- Table-driven tests for comprehensive coverage
- Use `testify` for assertions
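The table-driven pattern can be sketched as below. The real tests live in `*_test.go` files (with `testify` assertions), but the shape is the same; `validateToolName` is a hypothetical function used only for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// validateToolName is a toy function under test.
func validateToolName(name string) error {
	if name == "" {
		return fmt.Errorf("tool name is empty")
	}
	if strings.ContainsAny(name, " /\\") {
		return fmt.Errorf("tool name %q contains invalid characters", name)
	}
	return nil
}

func main() {
	// Each case is one row in the table; adding coverage means
	// adding a row, not another test function.
	cases := []struct {
		name    string
		input   string
		wantErr bool
	}{
		{"valid name", "read_file", false},
		{"empty name", "", true},
		{"invalid chars", "read file", true},
	}
	for _, tc := range cases {
		err := validateToolName(tc.input)
		if (err != nil) != tc.wantErr {
			fmt.Printf("FAIL %s: err=%v wantErr=%v\n", tc.name, err, tc.wantErr)
			continue
		}
		fmt.Printf("PASS %s\n", tc.name)
	}
}
```

In a real `_test.go` file the loop body would call `t.Run(tc.name, ...)` and use testify's `require`/`assert` instead of printing.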
The system loads configuration from multiple sources (in order):
- Command line flags
- Environment variables (CODA_* prefix)
- `$HOME/.coda/config.yaml`
- `./config.yaml`
- AI Settings: Provider, model, API keys
- Tool Settings: Enabled tools, auto-approval, allowed paths
- Session Settings: History management, persistence
- Security Settings: Path restrictions, dangerous patterns
CODA now supports GPT-5 models with reasoning effort configuration:
```yaml
ai:
  model: gpt-5
  # Reasoning effort for GPT-5 models (optional)
  # Valid values: "minimal", "low", "medium", "high"
  reasoning_effort: "minimal"
```

- Use `reasoning_effort: "minimal"` for the fastest responses
- Higher values (`"low"`, `"medium"`, `"high"`) provide more detailed reasoning
- If `reasoning_effort` is not specified or is commented out, the SDK default is used
- This setting only applies to GPT-5 models
Note: Full GPT-5 support depends on the go-openai SDK. Currently, the reasoning effort is prepared but may not be sent to the API until SDK support is complete.
The system uses a comprehensive prompt system defined in `internal/chat/prompts.go`. It supports:
- Default system prompt with tool calling protocol
- Workspace-specific prompts (CODA.md/CLAUDE.md in project root)
- User-specific instructions
Tools are invoked via JSON blocks in AI responses:
```json
{"tool": "tool_name", "arguments": {"param1": "value1"}}
```

- File operations restricted to allowed paths
- Detection of dangerous patterns (e.g., `.env`, `.pem` files)
- User approval required for tool execution
- API keys stored securely using OS keychain
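The JSON tool-call protocol described above can be detected in a response with a sketch like this; the helper name and the exact detection strategy are assumptions, and the real logic in `internal/chat/` may be more robust:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ToolCall mirrors the JSON block format used in AI responses.
type ToolCall struct {
	Tool      string            `json:"tool"`
	Arguments map[string]string `json:"arguments"`
}

// extractToolCall naively scans the response for a JSON object
// carrying a "tool" key.
func extractToolCall(response string) (*ToolCall, bool) {
	start := strings.Index(response, "{")
	end := strings.LastIndex(response, "}")
	if start < 0 || end <= start {
		return nil, false
	}
	var call ToolCall
	if err := json.Unmarshal([]byte(response[start:end+1]), &call); err != nil || call.Tool == "" {
		return nil, false
	}
	return &call, true
}

func main() {
	resp := `Reading that file now: {"tool": "read_file", "arguments": {"param1": "main.go"}}`
	if call, ok := extractToolCall(resp); ok {
		fmt.Println(call.Tool, call.Arguments["param1"]) // read_file main.go
	}
}
```

Any detected call would then be passed through security validation and user approval before the tool manager executes it.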
- Create a new file in the `cmd/` directory
- Define the command using the Cobra structure
- Register the command in `root.go`
- Add corresponding handler logic
- Update the tool implementation in `internal/tools/`
- Modify security rules if needed
- Update tests
- Document changes in tool description
- Enable debug mode: `coda --debug chat`
- Check logs at `~/.coda/coda.log`
~/.coda/coda.log - Use structured logging with appropriate levels
- Add context to errors for better tracing
- Legacy models such as GPT-3.5 and GPT-4 must not be used. Default to o3 and GPT-5.