
Contributing to Semantic Anchors

What are Semantic Anchors?

Semantic anchors are well-defined terms, methodologies, and frameworks that serve as reference points when communicating with Large Language Models (LLMs). They act as shared vocabulary that triggers specific, contextually rich knowledge domains within an LLM’s training data.

Example: When you mention "TDD, London School" to an LLM, it activates knowledge about mock-heavy testing, outside-in development, and the work of Steve Freeman and Nat Pryce - much richer than simply saying "use mocks in testing."

Quality Criteria

Before proposing a new semantic anchor, ensure it meets these four criteria:


✓ Precise

The anchor references a specific, established body of knowledge with clear boundaries.

Good: "SOLID Principles" - five specific design principles (SRP, OCP, LSP, ISP, DIP)

Bad: "Good design" - vague and subjective

✓ Rich

The anchor activates multiple interconnected concepts, not just a single instruction.

Good: "Domain-Driven Design" - activates bounded contexts, ubiquitous language, aggregates, value objects, entities, repositories, etc.

Bad: "Use meaningful names" - single instruction with no conceptual depth

✓ Consistent

Different users invoking the anchor should get similar conceptual activation from the LLM.

Good: "Test-Driven Development" - widely documented methodology with consistent understanding

Bad: "Modern testing" - different interpretations by different people

✓ Attributable

The anchor can be traced to key proponents, publications, or documented standards.

Good: "Hexagonal Architecture" (Alistair Cockburn, 2005)

Bad: "Best practices" - no specific source or authority

Testing Your Semantic Anchor

Before proposing, test the anchor with this prompt in an LLM:

What concepts do you associate with '<your semantic anchor name>'?

Evaluate the response:

  • Recognition: Does the LLM recognize the term?

  • Accuracy: Is the explanation correct?

  • Depth: Does it cover multiple related concepts?

  • Specificity: Is the scope well-defined?
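The four evaluation questions above form a simple pass/fail checklist. A minimal sketch of how a contributor might record the result before proposing (the class and field names here are illustrative, not part of the project's tooling):

```python
from dataclasses import dataclass


@dataclass
class AnchorEvaluation:
    """Records the four checks from 'Testing Your Semantic Anchor'."""

    recognition: bool   # does the LLM recognize the term?
    accuracy: bool      # is the explanation correct?
    depth: bool         # does it cover multiple related concepts?
    specificity: bool   # is the scope well-defined?

    def is_viable(self) -> bool:
        # An anchor is only worth proposing if all four checks pass.
        return all([self.recognition, self.accuracy,
                    self.depth, self.specificity])


result = AnchorEvaluation(recognition=True, accuracy=True,
                          depth=True, specificity=False)
print(result.is_viable())  # prints False: a vague scope fails the check
```

A single failing dimension is enough to reject: an anchor the LLM recognizes but explains inaccurately, or one with no well-defined scope, does not meet the bar.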

Developer Setup

Prerequisites

  • Git

  • Python 3.12+ (for pre-commit hooks)

  • Node.js 20+ (for website development, optional)

Installing Pre-Commit Hooks

Required for all contributors!

Run the installation script:

./pre-commit-install.sh

This installs:

  • AsciiDoc Linter - validates anchor file syntax automatically

  • pre-commit framework - runs checks before each commit

  • Standard hooks - trailing whitespace, YAML/JSON validation
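The hooks listed above correspond to a `.pre-commit-config.yaml` at the repository root. A hypothetical sketch of such a configuration (the revision, the local-hook `entry` command, and the file pattern are illustrative, not the project's actual settings; the standard hook ids come from the upstream pre-commit-hooks repository):

```yaml
repos:
  # Standard hooks: trailing whitespace, YAML/JSON validation
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0              # revision illustrative
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: check-json
  # AsciiDoc linter for anchor files (hook id as used in this guide)
  - repo: local
    hooks:
      - id: asciidoc-linter
        name: AsciiDoc Linter
        entry: asciidoc-linter   # command illustrative
        language: system
        files: \.adoc$
```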

Manual Hook Execution

Run all hooks on all files:

pre-commit run --all-files

Run specific hook:

pre-commit run asciidoc-linter --all-files

How to Propose a New Anchor

We use an automated workflow with GitHub Copilot to validate and enrich proposals:

Step 1: Create an Issue

Click the "Propose New Anchor" button on the website or create an issue using our proposal template.

All you need to provide:

  • The term or concept name

  • (Optional) Why you think it would be valuable

Step 2: Copilot Validation

GitHub Copilot automatically:

  1. Tests the anchor against the four quality criteria

  2. Either accepts or rejects the proposal

  3. If rejected: Explains why it doesn’t meet criteria

  4. If accepted: Enriches the issue with detailed information

Step 3: Copilot Creates the Anchor

Once accepted and enriched, Copilot is assigned to:

  1. Create the AsciiDoc file in docs/anchors/

  2. Add all required metadata (categories, roles, proponents, tags)

  3. Submit a Pull Request

  4. Maintainers review and merge

Step 4: Published

After merge, the new anchor appears on the website within minutes via automated deployment!

Anchor File Format

Each anchor is stored as an AsciiDoc file with metadata attributes:

= TDD, London School
:categories: testing-quality
:roles: software-developer, qa-engineer, architect
:related: tdd-chicago-school, hexagonal-architecture
:proponents: Steve Freeman, Nat Pryce
:tags: testing, tdd, mocking, outside-in

[%collapsible]
====
*Full Name*: Test-Driven Development, London School

*Also known as*: Mockist TDD, Outside-In TDD

*Core Concepts*:
* Mock-heavy testing
* Outside-in development
* Interaction-based testing

*Key Proponents*: Steve Freeman, Nat Pryce ("Growing Object-Oriented Software, Guided by Tests")

*When to Use*:
* Complex systems with many collaborating objects
* When designing APIs and interfaces
* Distributed systems where integration is costly
====

Required Metadata:

  • :categories: - One or more category IDs (see website for list)

  • :roles: - One or more professional roles that use this anchor

  • :proponents: - Key people, publications, or standards

  • :tags: - Keywords for search (optional but recommended)

  • :related: - Related anchor IDs (optional)
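Because every attribute follows the same `:name: value` line format, checking a proposed anchor file for the required metadata is straightforward. A minimal sketch (the function names are illustrative; this is not the project's actual AsciiDoc linter):

```python
import re

# Required attributes per this guide; :tags: and :related: are optional.
REQUIRED = {"categories", "roles", "proponents"}


def parse_attributes(adoc_text: str) -> dict[str, str]:
    """Collect ':name: value' attribute lines from an anchor file."""
    attrs = {}
    for line in adoc_text.splitlines():
        m = re.match(r"^:([\w-]+):\s*(.*)$", line)
        if m:
            attrs[m.group(1)] = m.group(2).strip()
    return attrs


def missing_required(adoc_text: str) -> set[str]:
    """Return the set of required attributes absent from the file."""
    return REQUIRED - parse_attributes(adoc_text).keys()


sample = """= TDD, London School
:categories: testing-quality
:roles: software-developer
:tags: testing, tdd
"""
print(missing_required(sample))  # prints {'proponents'}
```

Adding a `:proponents:` line to the sample would make `missing_required` return an empty set, i.e. the file passes the metadata check.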

Counter-Examples

These are NOT semantic anchors:

"TLDR"

Underspecified instruction, no defined structure

"ELI5"

Vague target level, no pedagogical framework

"Keep it short"

Pure instruction, no conceptual depth

"Best practices"

No specific body of knowledge, not attributable

"Modern approach"

Too vague, not consistent across users

Categories

Anchors are organized into 12 MECE (Mutually Exclusive, Collectively Exhaustive) categories:

  1. Communication & Presentation

  2. Design Principles & Patterns

  3. Development Workflow

  4. Dialogue & Interaction Patterns

  5. Documentation Practices

  6. Meta (repository and catalog concepts)

  7. Problem-Solving Methodologies

  8. Requirements Engineering

  9. Software Architecture

  10. Statistical Methods & Process Monitoring

  11. Strategic Planning & Decision Making

  12. Testing & Quality Practices

See the website for full category descriptions.

Professional Roles

Anchors are tagged with professional roles to help filter relevant content:

  1. Software Developer / Engineer

  2. Software Architect

  3. QA Engineer / Tester

  4. DevOps Engineer

  5. Product Owner / Product Manager

  6. Business Analyst / Requirements Engineer

  7. Technical Writer / Documentation Specialist

  8. UX Designer / Researcher

  9. Data Scientist / Statistician

  10. Consultant / Coach

  11. Team Lead / Engineering Manager

  12. Educator / Trainer

PR Review Policy

Review Requirements

All pull requests to main require at least one approving review before merging.

Sampling Review (~20%)

For active periods with many contributions, maintainers apply a 20% sampling review:

  • At least 1 in 5 PRs receives a thorough, line-by-line review

  • All other PRs receive a high-level review (structure, quality criteria, CI status)

  • AI-generated PRs (GitHub Copilot) always receive human review

Automated Checks (Required to Pass)

Every PR must pass all of the following before merge:

  • E2E Tests — all 28+ Playwright tests green

  • Lint & Format Check — ESLint + Prettier (no errors)

  • Dependency Audit — npm audit --audit-level=high clean

  • CodeQL — no high/critical security findings

  • AsciiDoc Linter — anchor files conform to format (pre-commit hook)

What Reviewers Check

For new semantic anchors:

  1. Quality criteria met (Precise, Rich, Consistent, Attributable)

  2. All required metadata attributes present (:categories:, :roles:, :proponents:)

  3. AsciiDoc format correct ([%collapsible] block, proper attribute syntax)

  4. Anchor tested with LLM prompt (see Testing Your Semantic Anchor)

For code changes:

  1. No regressions in existing tests

  2. No new high/critical security vulnerabilities

  3. Follows ESLint/Prettier code style

AI-Assisted Reviews

This project uses CodeRabbit for automated AI code review on all PRs. CodeRabbit reviews are advisory — human maintainer approval is still required.

Code of Conduct

  • Be respectful and constructive in discussions

  • Propose anchors in good faith

  • Respect maintainer decisions on quality criteria

  • Focus on established, documented methodologies

  • Give credit to original proponents

Questions?

License

By contributing, you agree that your contributions will be licensed under the same license as this project (see LICENSE file).


Ready to propose? Click here: Propose New Semantic Anchor