Ready-to-use pipeline agents built on top of the Code-Genesis engine.
Code-Genesis is the AI pipeline engine that powers everything here — it handles orchestration, parallel execution, agent contexts, and step control. This repo is purely a collection of pipelines and agents you can drop in and run.
```
code-genesis-fabric/
├── samples/                      # Learning pipelines — start here
│   ├── hello-world.yml           # Minimal 3-step pipeline (plan → build → review)
│   ├── fail-if-step.yml          # Guard rails: stop the pipeline on LLM-detected failure
│   ├── foreach-parallel.yml      # Nested foreach + parallel branches per item
│   ├── parallel-foreach.yml      # Concurrent iteration over a collection
│   └── contexts/                 # Reusable agent context definitions
│       └── planner/
│
└── snyk-issues-report/           # Production-ready Snyk security report pipeline
    ├── snyk-issues-report.yml    # Discovers tagged projects → fetches issues → exports CSV
    └── contexts/                 # Agent contexts used by this pipeline
        ├── snyk-discoverer/
        ├── snyk-validator/
        ├── snyk-issue-fetcher/
        ├── snyk-csv-builder/
        └── snyk-file-writer/
```
Minimal pipelines that demonstrate the core Code-Genesis primitives.
| File | What it shows |
|---|---|
| `hello-world.yml` | Basic sequential steps with `context`, `prompt`, and `output_key` |
| `fail-if-step.yml` | Using `fail_if` + `fail_message` so the LLM can abort the pipeline |
| `foreach-parallel.yml` | `foreach` iteration with nested `parallel` branches inside each item |
| `parallel-foreach.yml` | `parallel_foreach` with `max_concurrency` and `fail_fast` |
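To make the primitives concrete, here is an illustrative sketch of a minimal sequential pipeline in the style of `hello-world.yml`. The field names (`steps`, `context`, `prompt`, `output_key`) are taken from the table above, but the overall schema is assumed — check the actual sample for the exact layout.

```yaml
# Hypothetical sketch only — not copied from hello-world.yml.
name: hello-world
steps:
  - id: plan
    context: planner              # reusable agent context from samples/contexts/
    prompt: "Plan a tiny CLI tool."
    output_key: plan              # later steps can reference this output
  - id: build
    prompt: "Implement the plan: {{plan}}"
    output_key: build_result
  - id: review
    prompt: "Review the implementation: {{build_result}}"
    output_key: review_notes
```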
Generates a CSV report of open critical- and high-severity vulnerabilities across all Snyk projects that share a given tag.
Required MCP: mcp-snyks
This pipeline will not work without the mcp-snyks MCP server running. All Snyk API calls are made exclusively through it. Register it in your Claude MCP configuration using HTTP transport:
```json
{
  "mcpServers": {
    "mcp-snyks": {
      "type": "http",
      "url": "http://localhost:<PORT>/mcp"
    }
  }
}
```

See mcp-snyks for setup instructions, the port it listens on, and required environment variables (Snyk API token, etc.).
How it works:
1. **Discover tagged projects** — finds all projects matching `tag_key` + `tag_value`
2. **Validate project list** — aborts early if no projects are found (`fail_if` guard)
3. **Fetch issues (parallel)** — `parallel_foreach` across all projects (10 concurrent)
4. **Build CSV report** — flattens, categorizes, and sorts all issues
5. **Save to disk** — writes the final CSV file
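The parallel fetch step above can be sketched in pipeline YAML. This is an assumed shape built from the primitives listed in `samples/` (`parallel_foreach`, `max_concurrency`, `fail_fast`), not an excerpt from the real `snyk-issues-report.yml`:

```yaml
# Hypothetical sketch of the parallel fan-out — key names follow the
# sample primitives; the real pipeline file may differ.
- id: fetch_issues
  parallel_foreach: "{{projects_found}}"   # one branch per discovered project
  max_concurrency: 10                      # matches the "10 concurrent" above
  fail_fast: true
  steps:
    - context: snyk-issue-fetcher
      prompt: "Fetch open critical/high issues for project {{item}}"
      output_key: raw_issues
```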
Inputs:
| Input | Description | Default |
|---|---|---|
| `org` | Snyk organization name, slug, or GUID | `my-org` |
| `tag_key` | Tag key to filter projects | `team` |
| `tag_value` | Tag value to match | `backend` |
| `output_file` | Destination path for the CSV | `snyk-issues-report.csv` |
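As a sketch, the defaults above might be declared like this in the pipeline file. The key names mirror the Inputs table; the `inputs:`/`default:` schema itself is an assumption:

```yaml
# Hypothetical input declarations — schema assumed, names from the table.
inputs:
  org:
    default: my-org
  tag_key:
    default: team
  tag_value:
    default: backend
  output_file:
    default: snyk-issues-report.csv
```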
Outputs:
| Output | Description |
|---|---|
| `report_saved` | Confirmation with issue counts and the saved file path |
| `csv_content` | Full CSV text (header + rows + summary block) |
| `raw_issues` | Raw per-project issue arrays before aggregation |
| `projects_found` | Projects that matched the tag filter |
- code-genesis — the engine that runs these pipelines