Readability


Documentation readability analyzer - GitHub Action and CLI tool for measuring content quality metrics.

Features

  • Flesch Reading Ease - How easy is your content to read?
  • Grade Level Scores - Flesch-Kincaid, Gunning Fog, Coleman-Liau, SMOG, ARI
  • Word & Sentence Metrics - Counts, averages, and complexity indicators
  • MkDocs Admonitions - Detect and require !!! note, !!! warning, etc.
  • Multiple Output Formats - Table, Markdown, JSON, Summary, Report
  • Threshold Enforcement - Fail CI when quality drops
  • Job Summary - Automatic GitHub Actions job summary with formatted report
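
The admonition detection mentioned above looks for MkDocs-style markers such as `!!! note`. The tool's actual parser is not shown in this README, so the following Go sketch uses a regex of my own as an illustrative assumption:

```go
package main

import (
	"fmt"
	"regexp"
)

// admonitionRe matches MkDocs-style admonition openers at the start of a
// line, e.g. `!!! note` or `!!! warning "Custom title"`. This pattern is an
// illustrative assumption, not the tool's actual implementation.
var admonitionRe = regexp.MustCompile(`(?m)^!!!\s+(\w+)`)

// countAdmonitions returns how many admonition blocks open in src.
func countAdmonitions(src string) int {
	return len(admonitionRe.FindAllString(src, -1))
}

func main() {
	doc := "# Title\n\n!!! note\n    Remember this.\n\n!!! warning \"Careful\"\n    Danger.\n"
	fmt.Println(countAdmonitions(doc))
}
```

A check like `min_admonitions: 1` would then simply compare this count against the configured threshold per file.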

Quick Start

GitHub Action

```yaml
- uses: adaptive-enforcement-lab/readability@v1
  with:
    path: docs/
    format: markdown
    check: true
    max-grade: 12
```

CLI

```bash
# Install
go install github.com/adaptive-enforcement-lab/readability/cmd/readability@latest

# Analyze a directory
readability docs/

# Check with thresholds
readability --check --max-grade 12 docs/

# Output as JSON
readability --format json docs/

# Use a config file
readability --config .readability.yml docs/
```

Docker

```bash
# Pull the image
docker pull ghcr.io/adaptive-enforcement-lab/readability:latest

# Analyze local docs
docker run --rm -v "$(pwd):/workspace" ghcr.io/adaptive-enforcement-lab/readability:latest /workspace/docs

# With thresholds
docker run --rm -v "$(pwd):/workspace" ghcr.io/adaptive-enforcement-lab/readability:latest \
  --check --max-grade 12 /workspace/docs

# Verify image signature
cosign verify ghcr.io/adaptive-enforcement-lab/readability:latest \
  --certificate-identity-regexp 'https://github.com/adaptive-enforcement-lab/readability/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com
```

Metrics

| Metric | Range | Interpretation |
|---|---|---|
| Flesch Reading Ease | 0-100 | Higher = easier (60-70 is standard) |
| Flesch-Kincaid Grade | 0-18+ | US grade level needed to understand |
| Gunning Fog Index | 0-20+ | Years of education needed |
| SMOG Index | 0-20+ | Years of education needed |
| Coleman-Liau Index | 0-20+ | US grade level |
| ARI | 0-20+ | US grade level |
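
These scores follow published formulas. As a reference for interpreting the table, here is a minimal Go sketch of three of them; the tool's own word, sentence, and syllable counting is not shown here, so the inputs are assumed to be pre-computed counts:

```go
package main

import "fmt"

// Published readability formulas. These are the standard definitions,
// not necessarily the exact code this tool uses.

// FleschReadingEase: higher scores mean easier text (60-70 is plain English).
func FleschReadingEase(words, sentences, syllables float64) float64 {
	return 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
}

// FleschKincaidGrade estimates the US school grade needed to follow the text.
func FleschKincaidGrade(words, sentences, syllables float64) float64 {
	return 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
}

// GunningFog estimates years of formal education needed; complexWords are
// words of three or more syllables.
func GunningFog(words, sentences, complexWords float64) float64 {
	return 0.4 * ((words / sentences) + 100*(complexWords/words))
}

func main() {
	// A hypothetical document: 100 words, 8 sentences, 140 syllables,
	// 12 complex words.
	fmt.Printf("ease:  %.1f\n", FleschReadingEase(100, 8, 140))
	fmt.Printf("grade: %.1f\n", FleschKincaidGrade(100, 8, 140))
	fmt.Printf("fog:   %.1f\n", GunningFog(100, 8, 12))
}
```

Note that the formulas diverge on short inputs, which is why the config below supports a `min_words` floor for skipping small files.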

Configuration

Create .readability.yml in your repo:

```yaml
# yaml-language-server: $schema=https://readability.adaptive-enforcement-lab.com/latest/schemas/config.json
---
thresholds:
  max_grade: 12       # Maximum Flesch-Kincaid grade level
  max_ari: 12         # Maximum Automated Readability Index
  max_fog: 12         # Maximum Gunning Fog index
  min_ease: 30        # Minimum Flesch Reading Ease (0-100 scale)
  max_lines: 500      # Maximum lines of prose per file
  min_words: 100      # Skip files with fewer words (formulas unreliable)
  min_admonitions: 1  # Require at least one MkDocs admonition

overrides:
  # Exclude files from threshold checks (still analyzed, never fail)
  - path: docs/includes/
    exclude: true                 # Skip snippet files

  - path: CHANGELOG.md
    exclude: true                 # Changelogs often have extreme scores

  # Custom thresholds for specific paths
  - path: docs/api/
    thresholds:
      max_grade: 14           # Allow more complexity for API docs
      max_lines: 1000         # API docs can be longer
      min_admonitions: -1     # Disable admonition requirement
```

Your editor will provide autocomplete and validation as you type. See the Configuration Guide for IDE setup.

IDE Support

The configuration file includes a JSON Schema that enables:

  • Real-time validation - Catch errors while editing
  • Autocomplete - IntelliSense for all config options
  • Inline documentation - Hover tooltips for field descriptions
  • Type checking - Prevent invalid values before commit

Supported Editors: VS Code, JetBrains IDEs (IntelliJ, WebStorm, etc.), Neovim with yaml-language-server, and any editor with YAML language server support.

Validate your config:

```bash
# Validate configuration file
readability --validate-config

# Or use check-jsonschema directly
pipx install check-jsonschema
check-jsonschema --schemafile docs/schemas/config.json .readability.yml
```

Excluding Files

The exclude field allows you to skip threshold checks for specific files while still analyzing them. Excluded files:

  • Appear in output with status "excluded"
  • Never fail --check mode (exit code 0)
  • Include full metrics (lines, words, readability scores)
  • Count in summary as "Excluded: N"

Use cases:

  • Snippet/include files (abbreviations, glossaries)
  • Auto-generated content (changelogs, API references)
  • Files with intentionally extreme readability (legal text, specifications)

Example:

```yaml
overrides:
  - path: docs/includes/
    exclude: true

  - path: CHANGELOG.md
    exclude: true
```

Note: You cannot use both exclude: true and custom thresholds in the same override. They are mutually exclusive.

Action Inputs

| Input | Description | Default |
|---|---|---|
| `path` | Path to analyze (file or directory) | `docs/` |
| `format` | Output format (`table`, `markdown`, `json`, `summary`, `report`) | `markdown` |
| `config` | Path to config file | (auto-detect) |
| `check` | Fail on threshold violations | `false` |
| `max-grade` | Maximum Flesch-Kincaid grade level | (from config) |
| `max-ari` | Maximum ARI score | (from config) |
| `max-lines` | Maximum lines per file | (from config) |
| `summary` | Write formatted report to job summary | `true` |
| `summary-title` | Title for the job summary section | `Documentation Readability Report` |
| `version` | Version of readability to use | `latest` |

Action Outputs

| Output | Description |
|---|---|
| `report` | Analysis report in JSON format |
| `passed` | Whether all thresholds were met (`true`/`false`) |
| `files-analyzed` | Number of files analyzed |
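
Downstream workflow steps can branch on these outputs. A sketch of one way to do this (the step id `readability` and the follow-up step are illustrative, not part of the action):

```yaml
- uses: adaptive-enforcement-lab/readability@v1
  id: readability
  with:
    path: docs/
    check: false

- name: Report readability result
  if: steps.readability.outputs.passed == 'false'
  run: echo "Analyzed ${{ steps.readability.outputs.files-analyzed }} files; thresholds not met."
```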

CLI Flags

| Flag | Description |
|---|---|
| `--format`, `-f` | Output format: `table`, `json`, `markdown`, `summary`, `report`, `diagnostic` |
| `--verbose`, `-v` | Show all metrics |
| `--check` | Check against thresholds (exit 1 on failure) |
| `--config`, `-c` | Path to config file |
| `--validate-config` | Validate configuration file and exit |
| `--max-grade` | Maximum Flesch-Kincaid grade level |
| `--max-ari` | Maximum ARI score |
| `--max-lines` | Maximum lines per file (0 to disable) |
| `--min-admonitions` | Minimum MkDocs-style admonitions required (-1 to disable) |

License

MIT
