build: Bootstrap agent-ready infrastructure #211

Open
dahlem wants to merge 1 commit into main from agentready-bootstrap

Conversation

@dahlem dahlem commented Feb 12, 2026

Summary

  • Bootstrap agent-ready infrastructure via agentready bootstrap
  • Add assessment report from agentready assess
  • Add CI workflows, issue/PR templates, pre-commit config, and other repo hygiene files

Files added/modified

  • .agentready/ — assessment reports and configuration
  • .github/workflows/ — CI workflows (agentready assessment, security, tests)
  • .github/ISSUE_TEMPLATE/ — issue templates
  • .github/PULL_REQUEST_TEMPLATE.md — PR template
  • .github/CODEOWNERS — code ownership
  • .github/dependabot.yml — dependency update config
  • .pre-commit-config.yaml — pre-commit hooks
  • CODE_OF_CONDUCT.md — code of conduct (if added)

Test plan

  • Verify CI workflows pass on the PR
  • Review agentready assessment report
  • Confirm no unintended file changes

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions

🤖 AgentReady Assessment Report

Repository: guardrails-detectors
Path: /home/runner/work/guardrails-detectors/guardrails-detectors
Branch: HEAD | Commit: 90fc9a1d
Assessed: February 12, 2026 at 4:58 PM
AgentReady Version: 2.27.0
Run by: runner@runnervmjduv7


📊 Summary

| Metric | Value |
| --- | --- |
| Overall Score | 48.1/100 🥉 Bronze (Tier Definitions) |
| Attributes Assessed | 21/25 |
| Attributes Not Assessed | 4 |
| Assessment Duration | 0.6s |

Languages Detected

  • Python: 34 files
  • JSON: 22 files
  • YAML: 18 files
  • Markdown: 16 files

Repository Stats

  • Total Files: 119
  • Total Lines: 43,235

🎯 Priority Improvements

Focus on these high-impact fixes first:

  1. CLAUDE.md Configuration Files (Tier 1) - +10.0 points potential
    • Create CLAUDE.md or AGENTS.md with project-specific configuration for AI coding assistants
  2. Dependency Pinning for Reproducibility (Tier 1) - +10.0 points potential
    • Add lock file for dependency reproducibility
  3. Standard Project Layouts (Tier 1) - +10.0 points potential
    • Organize code into standard directories (src/, tests/, docs/)
  4. Type Annotations (Tier 1) - +10.0 points potential
    • Add type annotations to function signatures
  5. Test Coverage Requirements (Tier 2) - +3.0 points potential
    • Configure test coverage with ≥80% threshold

📋 Detailed Findings

Findings sorted by priority (Tier 1 failures first, then Tier 2, etc.)

T1 CLAUDE.md Configuration Files ❌ 0/100

📝 Remediation Steps

Measured: missing (Threshold: present)

Evidence:

  • CLAUDE.md not found in repository root
  • AGENTS.md not found (alternative)

Create CLAUDE.md or AGENTS.md with project-specific configuration for AI coding assistants

  1. Choose one of three approaches:
  2. Option 1: Create standalone CLAUDE.md (>50 bytes) with project context
  3. Option 2: Create AGENTS.md and symlink CLAUDE.md to it (cross-tool compatibility)
  4. Option 3: Create AGENTS.md and reference it with @AGENTS.md in minimal CLAUDE.md
  5. Add project overview and purpose
  6. Document key architectural patterns
  7. Specify coding standards and conventions
  8. Include build/test/deployment commands
  9. Add any project-specific context that helps AI assistants

Commands:

# Option 1: Standalone CLAUDE.md
touch CLAUDE.md
# Add content describing your project

# Option 2: Symlink CLAUDE.md to AGENTS.md
touch AGENTS.md
# Add content to AGENTS.md
ln -s AGENTS.md CLAUDE.md

# Option 3: @ reference in CLAUDE.md
echo '@AGENTS.md' > CLAUDE.md
touch AGENTS.md
# Add content to AGENTS.md

Examples:

# Standalone CLAUDE.md (Option 1)

## Overview
Brief description of what this project does.

## Architecture
Key patterns and structure.

## Development
# Install dependencies
npm install

# Run tests
npm test

# Build
npm run build

## Coding Standards

  • Use TypeScript strict mode
  • Follow ESLint configuration
  • Write tests for new features

# CLAUDE.md with @ reference (Option 3)

@AGENTS.md

# AGENTS.md (shared by multiple tools)

## Project Overview

This project implements a REST API for user management.

## Architecture

  • Layered architecture: controllers, services, repositories
  • PostgreSQL database with SQLAlchemy ORM
  • FastAPI web framework

## Development Workflow

# Setup
python -m venv .venv
source .venv/bin/activate
pip install -e .

# Run tests
pytest

# Start server
uvicorn app.main:app --reload

## Code Conventions

  • Use type hints for all functions
  • Follow PEP 8 style guide
  • Write docstrings for public APIs
  • Maintain >80% test coverage


T1 Dependency Pinning for Reproducibility ❌ 0/100

📝 Remediation Steps

Measured: none (Threshold: lock file with pinned versions)

Evidence:

  • No dependency lock files found

Add a lock file for dependency reproducibility

  1. For npm: run npm install (generates package-lock.json)
  2. For Python: use pip freeze > requirements.txt or Poetry
  3. For Ruby: run bundle install (generates Gemfile.lock)

Commands:

npm install  # npm
pip freeze > requirements.txt  # Python
poetry lock  # Python with Poetry
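For a quick local check, the pinning requirement can be verified with a small script. This is a sketch, not part of the assessment tooling: it assumes a pip-style requirements file, and the file path and package names in the demo are illustrative.

```shell
# Sketch: flag requirement lines that are not pinned to an exact version.
# check_pinned FILE lists unpinned lines, or reports that all are pinned.
check_pinned() {
  if grep -vE '^[[:space:]]*(#|$)' "$1" | grep -v '=='; then
    echo "unpinned dependencies found in $1"
  else
    echo "all pinned in $1"
  fi
}

# Demo against a throwaway file (names are illustrative):
printf 'requests==2.31.0\nfastapi>=0.100\n' > /tmp/reqs-demo.txt
check_pinned /tmp/reqs-demo.txt
```

Here the demo prints the offending `fastapi>=0.100` line, since a range specifier is not a pin.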

T1 Standard Project Layouts ❌ 50/100

📝 Remediation Steps

Measured: 1/2 directories (Threshold: 2/2 directories)

Evidence:

  • Found 1/2 standard directories
  • src/: ✗
  • tests/: ✓

Organize code into standard directories (src/, tests/, docs/)

  1. Create src/ directory for source code
  2. Create tests/ directory for test files
  3. Create docs/ directory for documentation
  4. Move source code into src/
  5. Move tests into tests/

Commands:

mkdir -p src tests docs
# Move source files to src/
# Move test files to tests/
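The steps above can be sketched as a short script. The package name `mypackage` is a placeholder for the repository's actual flat package; inside a git repository, prefer `git mv` so history is preserved.

```shell
# Sketch: create the standard layout, then move a hypothetical flat
# Python package ("mypackage" is illustrative) under src/.
mkdir -p src tests docs
if [ -d mypackage ]; then
  mv mypackage src/   # use `git mv mypackage src/` in a real repo
fi
ls -d src tests docs
```

Remember to update packaging configuration (e.g. a `[tool.setuptools]` or src-layout setting in pyproject.toml) after the move.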

T1 Type Annotations ❌ 58/100

📝 Remediation Steps

Measured: 46.1% (Threshold: ≥80%)

Evidence:

  • Typed functions: 113/245
  • Coverage: 46.1%

Add type annotations to function signatures

  1. For Python: Add type hints to function parameters and return types
  2. For TypeScript: Enable strict mode in tsconfig.json
  3. Use mypy or pyright for Python type checking
  4. Use tsc --strict for TypeScript
  5. Add type annotations gradually to existing code

Commands:

# Python
pip install mypy
mypy --strict src/

# TypeScript
npm install --save-dev typescript
echo '{"compilerOptions": {"strict": true}}' > tsconfig.json

Examples:

# Python - Before
def calculate(x, y):
    return x + y

# Python - After
def calculate(x: float, y: float) -> float:
    return x + y

// TypeScript - tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}

T1 Dependency Security & Vulnerability Scanning ✅ 35/100

T1 README Structure ✅ 100/100

T2 Test Coverage Requirements ❌ 0/100

📝 Remediation Steps

Measured: not configured (Threshold: configured with >80% threshold)

Evidence:

  • No coverage configuration found

Configure test coverage with ≥80% threshold

  1. Install coverage tool (pytest-cov for Python, jest for JavaScript)
  2. Configure coverage threshold in project config
  3. Add coverage reporting to CI/CD pipeline
  4. Run coverage locally before committing

Commands:

# Python
pip install pytest-cov
pytest --cov=src --cov-report=term-missing --cov-fail-under=80

# JavaScript
npm install --save-dev jest
npm test -- --coverage --coverageThreshold='{"global":{"lines":80}}'

Examples:

# Python - pyproject.toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing"

[tool.coverage.report]
fail_under = 80

// JavaScript - package.json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "lines": 80,
        "statements": 80,
        "functions": 80,
        "branches": 80
      }
    }
  }
}

T2 Conventional Commit Messages ❌ 0/100

📝 Remediation Steps

Measured: not configured (Threshold: configured)

Evidence:

  • No commitlint or husky configuration

Configure conventional commits with commitlint

  1. Install commitlint
  2. Configure husky for commit-msg hook

Commands:

npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
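A minimal configuration sketch for the two steps above. The file locations follow commitlint and husky conventions (`commitlint.config.js`, hooks under `.husky/`); adapt the hook body to your setup.

```shell
# Sketch: minimal commitlint config extending the conventional preset.
cat > commitlint.config.js <<'EOF'
module.exports = { extends: ['@commitlint/config-conventional'] };
EOF

# Sketch: husky commit-msg hook that lints the commit message file.
mkdir -p .husky
cat > .husky/commit-msg <<'EOF'
npx --no -- commitlint --edit "$1"
EOF
chmod +x .husky/commit-msg
```

After `npx husky init` (or equivalent wiring in package.json), commits with non-conventional messages are rejected locally.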

T2 One-Command Build/Setup ❌ 30/100

📝 Remediation Steps

Measured: multi-step setup (Threshold: single command)

Evidence:

  • No clear setup command found in README
  • No Makefile or setup script found
  • Setup instructions in prominent location

Create single-command setup for development environment

  1. Choose setup automation tool (Makefile, setup script, or package manager)
  2. Create setup command that handles all dependencies
  3. Document setup command prominently in README (Quick Start section)
  4. Ensure setup is idempotent (safe to run multiple times)
  5. Test setup on fresh clone to verify it works

Commands:

# Example Makefile
cat > Makefile << 'EOF'
.PHONY: setup
setup:
	python -m venv venv
	. venv/bin/activate && pip install -r requirements.txt
	pre-commit install
	cp .env.example .env
	@echo 'Setup complete! Run make test to verify.'
EOF

Examples:

# Quick Start section in README

## Quick Start

make setup  # One command to set up development environment
make test   # Run tests to verify setup

T2 Inline Documentation ❌ 41/100

📝 Remediation Steps

Measured: 33.0% (Threshold: ≥80%)

Evidence:

  • Documented items: 103/312
  • Coverage: 33.0%
  • Many public functions/classes lack docstrings

Add docstrings to public functions and classes

  1. Identify functions/classes without docstrings
  2. Add PEP 257 compliant docstrings for Python
  3. Add JSDoc comments for JavaScript/TypeScript
  4. Include: description, parameters, return values, exceptions
  5. Add examples for complex functions
  6. Run pydocstyle to validate docstring format

Commands:

# Install pydocstyle
pip install pydocstyle

# Check docstring coverage
pydocstyle src/

# Generate documentation
pip install sphinx
sphinx-apidoc -o docs/ src/

Examples:

# Python - Good docstring
def calculate_discount(price: float, discount_percent: float) -> float:
    """Calculate discounted price.

    Args:
        price: Original price in USD
        discount_percent: Discount percentage (0-100)

    Returns:
        Discounted price

    Raises:
        ValueError: If discount_percent not in 0-100 range

    Example:
        >>> calculate_discount(100.0, 20.0)
        80.0
    """
    if not 0 <= discount_percent <= 100:
        raise ValueError("Discount must be 0-100")
    return price * (1 - discount_percent / 100)

// JavaScript - Good JSDoc
/**
 * Calculate discounted price
 *
 * @param {number} price - Original price in USD
 * @param {number} discountPercent - Discount percentage (0-100)
 * @returns {number} Discounted price
 * @throws {Error} If discountPercent not in 0-100 range
 * @example
 * calculateDiscount(100.0, 20.0)
 * // Returns: 80.0
 */
function calculateDiscount(price, discountPercent) {
    if (discountPercent < 0 || discountPercent > 100) {
        throw new Error("Discount must be 0-100");
    }
    return price * (1 - discountPercent / 100);
}

T2 Concise Documentation ❌ 64/100

📝 Remediation Steps

Measured: 53 lines, 8 headings, 6 bullets (Threshold: <500 lines, structured format)

Evidence:

  • README length: 53 lines (excellent)
  • Heading density: 15.1 per 100 lines (target: 3-5)
  • Only 6 bullet points (prefer bullets over prose)

Make documentation more concise and structured

  1. Break long README into multiple documents (docs/ directory)
  2. Add clear Markdown headings (##, ###) for structure
  3. Convert prose paragraphs to bullet points where possible
  4. Add table of contents for documents >100 lines
  5. Use code blocks instead of describing commands in prose
  6. Move detailed content to wiki or docs/, keep README focused

Commands:

# Check README length
wc -l README.md

# Count headings
grep -c '^#' README.md

Examples:

# Good: Concise with structure

## Quick Start
pip install -e .
agentready assess .

## Features

  • Fast repository scanning
  • HTML and Markdown reports
  • 25 agent-ready attributes

## Documentation

See docs/ for detailed guides.

# Bad: Verbose prose

This project is a tool that helps you assess your repository
against best practices for AI-assisted development. It works by
scanning your codebase and checking for various attributes that
make repositories more effective when working with AI coding
assistants like Claude Code...

[Many more paragraphs of prose...]



T2 .gitignore Completeness ❌ 67/100

📝 Remediation Steps

Measured: 8/12 patterns (Threshold: ≥70% of language-specific patterns)

Evidence:

  • .gitignore found (274 bytes)
  • Pattern coverage: 8/12 (67%)
  • Missing 4 recommended patterns

Add missing language-specific ignore patterns

  1. Review GitHub's gitignore templates for your language
  2. Add the 4 missing patterns
  3. Ensure editor/IDE patterns are included

Examples:

Missing patterns:

*.egg-info/
*.swp
*.swo
.idea/
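The four missing patterns can be appended idempotently with a short loop (a sketch; the pattern list comes from the evidence above, and the file is created if it does not yet exist):

```shell
# Sketch: append each missing pattern to .gitignore only if it is not
# already present as an exact line; safe to run repeatedly.
for p in '*.egg-info/' '*.swp' '*.swo' '.idea/'; do
  grep -qxF "$p" .gitignore 2>/dev/null || echo "$p" >> .gitignore
done
```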

T2 File Size Limits ✅ 90/100

T2 Separation of Concerns ✅ 99/100

T2 Pre-commit Hooks & CI/CD Linting ✅ 100/100

T3 Architecture Decision Records (ADRs) ❌ 0/100

📝 Remediation Steps

Measured: no ADR directory (Threshold: ADR directory with decisions)

Evidence:

  • No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)

Create an Architecture Decision Records (ADRs) directory and document key decisions

  1. Create docs/adr/ directory in repository root
  2. Use the Michael Nygard ADR template or MADR format
  3. Document each significant architectural decision
  4. Number ADRs sequentially (0001-*.md, 0002-*.md)
  5. Include Status, Context, Decision, and Consequences sections
  6. Update ADR status when decisions are revised (Superseded, Deprecated)

Commands:

# Create ADR directory
mkdir -p docs/adr

# Create first ADR using template
cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
# 1. Use Architecture Decision Records

Date: 2025-11-22

## Status
Accepted

## Context
We need to record architectural decisions made in this project.

## Decision
We will use Architecture Decision Records (ADRs) as described by Michael Nygard.

## Consequences
- Decisions are documented with context
- Future contributors understand rationale
- ADRs are lightweight and version-controlled
EOF

Examples:

# Example ADR Structure

# 2. Use PostgreSQL for Database

Date: 2025-11-22

## Status
Accepted

## Context
We need a relational database for complex queries and ACID transactions.
Team has PostgreSQL experience. Need full-text search capabilities.

## Decision
Use PostgreSQL 15+ as primary database.

## Consequences
- Positive: Robust ACID, full-text search, team familiarity
- Negative: Higher resource usage than SQLite
- Neutral: Need to manage migrations, backups

T3 CI/CD Pipeline Visibility ✅ 80/100

T3 Cyclomatic Complexity Thresholds ✅ 100/100

T3 Issue & Pull Request Templates ✅ 100/100

T3 Semantic Naming ✅ 100/100

T3 Structured Logging ⊘ Not assessed

T3 OpenAPI/Swagger Specifications ⊘ Not assessed

T4 Code Smell Elimination ❌ 0/100

📝 Remediation Steps

Measured: none (Threshold: ≥60% of applicable linters configured)

Evidence:

  • No linters configured

Configure 4 missing linter(s)

  1. Configure pylint for Python code smell detection
  2. Configure ruff for fast Python linting
  3. Add actionlint for GitHub Actions workflow validation
  4. Configure markdownlint for documentation quality

Commands:

pip install pylint && pylint --generate-rcfile > .pylintrc
pip install ruff  # configure via a [tool.ruff] section in pyproject.toml, or a ruff.toml
npm install --save-dev markdownlint-cli && touch .markdownlint.json

Examples:

# .pylintrc example
[MASTER]
max-line-length=100

[MESSAGES CONTROL]
disable=C0111

# .eslintrc.json example
{
  "extends": "eslint:recommended",
  "rules": {
    "no-console": "warn"
  }
}

T4 Branch Protection Rules ⊘ Not assessed

T4 Container/Virtualization Setup ⊘ Not assessed


📝 Assessment Metadata

  • AgentReady Version: v2.27.0
  • Research Version: v1.0.1
  • Repository Snapshot: 90fc9a1
  • Assessment Duration: 0.6s
  • Assessed By: runner@runnervmjduv7
  • Assessment Date: February 12, 2026 at 4:58 PM

🤖 Generated with Claude Code
