This repository contains AI agent skills for Flux CD, Kubernetes, and GitOps. Skills follow the Agent Skills Open Standard.
```
skills/{skill-name}/
├── SKILL.md                      # Agent instructions (frontmatter + workflow)
├── scripts/                      # Helper scripts (bash, awk-only where possible)
├── references/                   # On-demand reference docs (checklists, API summaries)
├── assets/schemas/               # Bundled OpenAPI schemas for Flux CRDs
└── evals/evals.json              # Evaluation scenarios with expected behavior
tests/{skill-name}/               # Test fixtures for offline evaluation
.claude-plugin/marketplace.json   # Skill registry for distribution
Makefile                          # Schema downloads and test targets
```
- Read the skill's `SKILL.md` to understand its workflow and allowed tools
- Read all files in `references/` and `scripts/` before making changes
- Keep `SKILL.md` under ~15KB; heavy reference material belongs in `references/`
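As a sketch, a minimal `SKILL.md` frontmatter might look like the following (the `description` text and the tool list are illustrative assumptions, not copied from an existing skill):

```yaml
---
name: gitops-repo-audit
description: Audit a Flux GitOps repository for drift, suspended resources, and misconfigurations.
allowed-tools: Read, Grep, Glob, Bash
---
```

The workflow phases and any edge-case notes follow below the frontmatter in the markdown body.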
Each skill has an `evals/evals.json` file with evaluation scenarios. When asked to run them:
- Read `evals/evals.json` to get the list of eval prompts and their expectations
- For each eval, spawn a sub-agent with this prompt template:

  ```
  You are a [skill role]. Follow the instructions in `skills/{skill-name}/SKILL.md` exactly: read it first, then follow the workflow phases.

  Your task: [eval prompt from evals.json]

  Important:
  - Read `skills/{skill-name}/SKILL.md` first to understand the full workflow
  - Run the bundled scripts as instructed by the SKILL.md
  - Read the reference files as instructed by the SKILL.md
  - Read the actual YAML files in the test fixture
  - Produce the full structured report as specified in the workflow
  ```

- Score each eval output against the `expectations` array; each expectation is a pass/fail check
- Report results as a scorecard: eval id, pass/fail counts, and any missed expectations
The sub-agent should not be told the expectations — it must produce the correct output by following the skill workflow alone.
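As a sketch, an `evals.json` entry might be shaped like the following (the field names `evals`, `id`, and `prompt` are illustrative assumptions based on the workflow above, not the repository's actual schema):

```json
{
  "evals": [
    {
      "id": "detect-suspended-kustomization",
      "prompt": "Audit the fixture repo and report any suspended resources.",
      "expectations": [
        "Report flags the suspended Kustomization",
        "Report includes the resource namespace and name"
      ]
    }
  ]
}
```

Each string in `expectations` is scored as an independent pass/fail check against the sub-agent's report.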
- Create `skills/{skill-name}/` with the structure above
- Write `SKILL.md` with frontmatter (`name`, `description`, `allowed-tools`) and a phased workflow
- Use the existing skills as templates: read `skills/gitops-repo-audit/SKILL.md` and `skills/gitops-cluster-debug/SKILL.md` to match the conventions:
  - Workflows are explicit and step-by-step, not open-ended
  - Reference docs are actionable checklists and lookup tables, not tutorials
  - An edge-cases section prevents false positives on common patterns
  - Scripts output structured data (JSON) and avoid dependencies beyond awk
  - OpenAPI schemas are for validating generated YAML, not for general reference
- Add evaluation scenarios in `evals/evals.json` with specific expectations
- Add test fixtures in `tests/{skill-name}/` covering distinct scenarios
- Register the skill in `.claude-plugin/marketplace.json` under `plugins[0].skills`
- If the skill uses schemas, add its schema directory to `SCHEMAS_DIRS` in the `Makefile`
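For instance, a helper that follows the structured-output and awk-only conventions might look like this sketch (`count_kinds` is a hypothetical name, not a script bundled in this repo):

```shell
#!/usr/bin/env bash
# Hypothetical helper sketch: summarize the Kubernetes kinds found in
# manifests on stdin as a JSON object, using awk only so the script stays
# within the "no dependencies beyond awk" convention.
count_kinds() {
  awk '
    /^kind:/ { counts[$2]++ }        # tally each top-level kind
    END {
      printf "{"
      sep = ""
      for (k in counts) {
        printf "%s\"%s\":%d", sep, k, counts[k]
        sep = ","
      }
      print "}"
    }
  '
}

# Example: two manifests piped through the helper (key order is unspecified)
printf 'kind: Kustomization\n---\nkind: HelmRelease\n' | count_kinds
```

Emitting JSON keeps the output easy for the agent (or a scoring harness) to parse, while plain awk keeps the script portable across fixture environments.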