This repo is intended to be runnable locally and easy for coding agents to work in.
## Style
Telegraph. Drop filler/grammar. Min tokens (global AGENTS + replies).
## Critical Thinking
Fix root cause (not band-aid). Unsure: read more code; if still stuck, ask w/ short options. Unrecognized changes: assume other agent; keep going; focus on your changes. If they cause issues, stop + ask user. Leave breadcrumb notes in thread.
- Keep decisions as comments at the top of the file. Only important decisions that cannot be inferred from the code.
- Code should be easily testable, smoke testable, runnable in local dev env.
- Prefer small, incremental PR-sized changes with a runnable state at each step.
- Avoid adding dependencies with non-permissive licenses. If a dependency is non-permissive or unclear, stop and ask the repo owner.
- Coding-agent-friendly tool to magically generate text and images
- CLI for generating textual and visual artifacts using LLMs
- Minimal self-contained OpenAI provider (no external LLM libraries)
- Supports Jinja2-like template variables in prompts
- Model selection and reasoning level configuration
- Designed for CI/CD integration and AI agent workflows
- Rich error messages with recovery hints for agent self-correction
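The Jinja2-like template variables mentioned above can be sketched minimally as plain `{{var}}` substitution. This is an illustrative assumption, not the actual implementation in `src/trickery/generate.rs`, which may handle escaping and errors differently:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of {{var}} substitution; the real template
/// logic lives in src/trickery/generate.rs and may differ.
fn render(template: &str, vars: &HashMap<String, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // Build the literal "{{key}}" marker and replace every occurrence.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("name".to_string(), "agent".to_string());
    println!("{}", render("Hello {{name}}!", &vars)); // prints "Hello agent!"
}
```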
- Rust toolchain (stable)
- `cargo build` to build
- `cargo test` to run tests
- `cargo fmt` to format code
- `cargo clippy` for linting
- `OPENAI_API_KEY` environment variable required for runtime
```
src/
├── main.rs          # CLI entry point, clap argument parsing
├── output.rs        # JSON output utilities
├── commands/
│   ├── mod.rs       # Command traits (CommandExec, CommandResult)
│   ├── generate.rs  # Generate command implementation
│   └── image.rs     # Image generation command implementation
├── provider/
│   ├── mod.rs       # Provider abstraction types (Chat + Responses API)
│   └── openai.rs    # OpenAI provider implementation
└── trickery/
    ├── mod.rs
    ├── generate.rs  # LLM template generation logic
    └── image.rs     # Image generation logic
prompts/     # Example prompt templates
test_cases/  # Test case templates for generate command
specs/       # Feature specifications
docs/        # Feature documentation
```
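The command abstraction in `src/commands/mod.rs` could be shaped roughly like the following sketch. The names `CommandExec` and `CommandResult` come from the tree above; the fields, signatures, and example implementor are assumptions for illustration only:

```rust
/// Illustrative result type; actual fields in src/commands/mod.rs may differ.
struct CommandResult {
    stdout: String,
    exit_code: i32,
}

/// Illustrative trait: each subcommand implements a single execute entry point.
trait CommandExec {
    fn execute(&self) -> Result<CommandResult, String>;
}

// Hypothetical implementor, analogous in shape to the generate command.
struct EchoCommand {
    text: String,
}

impl CommandExec for EchoCommand {
    fn execute(&self) -> Result<CommandResult, String> {
        Ok(CommandResult {
            stdout: self.text.clone(),
            exit_code: 0,
        })
    }
}

fn main() {
    let cmd = EchoCommand { text: "ok".to_string() };
    let res = cmd.execute().unwrap();
    assert_eq!(res.stdout, "ok");
    assert_eq!(res.exit_code, 0);
}
```

A trait-object design like this keeps `main.rs` dispatch uniform across subcommands; whether the repo uses trait objects or an enum is not confirmed here.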
- Use snake_case for files and functions
- Use PascalCase for types and traits
- Keep module names short and descriptive
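The conventions above can be illustrated with hypothetical names (not actual repo code):

```rust
// PascalCase for types and traits.
struct ImageRequest {
    prompt_text: String, // snake_case field
}

// snake_case for functions.
fn build_image_request(prompt_text: &str) -> ImageRequest {
    ImageRequest {
        prompt_text: prompt_text.to_string(),
    }
}

fn main() {
    let req = build_image_request("a red fox");
    println!("{}", req.prompt_text); // prints "a red fox"
}
```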
CI is implemented using GitHub Actions (`.github/workflows/ci.yaml`):
- Runs on push/PR to main
- Executes `cargo build --verbose`
- Executes `cargo test --verbose`
- Formatting: Run `cargo fmt`
- Linting: Run `cargo clippy` and fix warnings
- Tests: Ensure `cargo test` passes
- Build: Ensure `cargo build` succeeds
- Full help: If CLI options changed, update `print_full_help()` in `src/main.rs`
- Lockfile: Run `cargo update --workspace` after version bumps or dependency changes to sync `Cargo.lock`
- README: If README needs changes, update `prompts/trickery_readme.md` and regenerate with `trickery generate ./prompts/trickery_readme.md > README.md`
NEVER add links to Claude sessions in the PR body or commits. Never attribute commits or merge commits to coding agents; always use the real user.
Follow Conventional Commits format:
- `feat:` new feature
- `fix:` bug fix
- `docs:` documentation changes
- `style:` formatting, no code change
- `refactor:` code restructuring
- `perf:` performance improvement
- `test:` adding/updating tests
- `chore:` maintenance tasks
- `ci:` CI configuration changes
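For example, hypothetical commit messages in this format (not real commits from this repo):

```
feat(generate): add support for --text input
fix(provider): handle empty response from OpenAI API
docs: clarify template variable syntax
```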
PR titles should follow the Conventional Commits format: `<type>[optional scope]: <description>`
## What
Clear description of the change.
## Why
Problem or motivation.
## How
High-level approach.
## Risk
- Low / Medium / High
- What can break
### Checklist
- [ ] Unit tests pass
- [ ] Smoke tests pass
- [ ] Documentation is updated

The `specs/` folder contains feature specifications outlining requirements for specific features and components. New code should comply with these specifications or propose changes to them.
Available specs:
- `coding-agent-design.md` - Agent-friendly design principles, error recovery, discoverability
- `llm-provider.md` - LLM provider abstraction, OpenAI integration, design choices
- `text-input.md` - Direct text input via `--text` option, alternative to file input
Specification format: Abstract and Requirements sections.
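A spec file following this format might look like the following illustrative skeleton:

```
# <Feature Name>

## Abstract
One-paragraph summary of the feature and its purpose.

## Requirements
- Requirement 1
- Requirement 2
```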
The `test_cases/` folder contains manual smoke test cases for validating CLI functionality. Run these after changes to verify behavior.
Available test cases:
- `basic_generation.md` - Simple prompt generation without variables
- `template_variables.md` - Jinja2-style variable substitution
- `json_output.md` - JSON output format flag
- `image_multimodal.md` - Image input for multimodal prompts
- `image_generate.md` - Image generation and editing command
- `error_handling.md` - Error scenarios and messages
- `text_input.md` - Direct text input via `--text` option
```
# Test: <Name>

## Abstract
<One sentence describing what this test validates>

## Prerequisites
- `cargo install --path .`
- <Other required setup, env vars, files>

## Steps

### 1. <Step name>
**Run:** `trickery <command>`
**Expect:** <Expected outcome>
```