This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This repository is a curated catalog of semantic anchors - well-defined terms, methodologies, and frameworks that serve as reference points when communicating with Large Language Models (LLMs). Semantic anchors act as shared vocabulary that triggers specific, contextually rich knowledge domains within an LLM's training data.
Current Status: The repository is undergoing a major redesign to become an interactive, bilingual website with treemap visualization, role-based filtering, and automated contribution workflow.
Development takes place in the fork until PRD implementation is complete.
Git Remotes:
- `origin` → `raifdmueller/Semantic-Anchors` (fork for development)
- `upstream` → `LLM-Coding/Semantic-Anchors` (original repository)
Workflow:
- All feature branches are created in the fork (`origin`)
- PRs are created within the fork for review
- After PRD implementation is complete, changes will be merged back to upstream
- Use `git push origin <branch>` for all development work
- Use `git fetch upstream` to sync with the original repository if needed
Issue References:
- Issues are tracked in upstream repository: https://github.com/LLM-Coding/Semantic-Anchors/issues
- Reference issues in commits: `feat: implement X (#42)`
When using agent teams, use Git Worktrees to enable true parallel development:
Setup:
```shell
# Create worktree directory
mkdir -p ~/projects/Semantic-Anchors-worktrees

# Create worktree for each feature branch
git worktree add ../Semantic-Anchors-worktrees/feature-<name> -b feature/<name>
```

Directory Structure:
```
~/projects/
├── Semantic-Anchors/                # Main worktree (main branch)
└── Semantic-Anchors-worktrees/      # Feature branch worktrees
    ├── feature-mece-analysis/       # Teammate 1
    ├── feature-role-mapping/        # Teammate 2
    ├── feature-split-readme/        # Teammate 3
    └── feature-metadata-script/     # Teammate 4
```
Team Workflow:
- Create worktrees for each feature branch
- Spawn teammates with the `working_directory` parameter pointing to their worktree
- Each teammate works independently in their directory
- Teammates commit and push to their feature branches
- Create PRs from feature branches to main
- After merge, clean up with `git worktree remove <path>`
Advantages:
- No branch switching conflicts
- True parallel development
- Each teammate has isolated workspace
- Shared Git history (.git directory)
- Clean separation of concerns
All project documentation is located in the docs/ directory:
- `docs/PRD.md`: Product Requirements Document with user personas, user stories, timeline
- `docs/specs/`: Detailed specifications
  - `01_use_cases.adoc`: 10 use cases with PlantUML diagrams
  - `02_api_specification.adoc`: Data models and API structure
  - `03_acceptance_criteria.adoc`: Gherkin scenarios (in German)
  - `adrs/`: Architecture Decision Records with Pugh matrices
- `docs/arc42/`: Complete architecture documentation (12 chapters)
- `PROJECT_STATUS.md`: Current implementation status and roadmap
Always review these documents before starting work on implementation issues.
Current structure:

```
Semantic-Anchors/
├── docs/                  # All documentation
│   ├── PRD.md             # Product Requirements
│   ├── specs/             # Specifications & ADRs
│   └── arc42/             # Architecture (arc42 template)
├── README.adoc            # Current catalog (will be split in Phase 1)
├── PROJECT_STATUS.md      # Implementation tracking
├── CLAUDE.md              # This file
└── LICENSE
```
Target structure (after Phases 1 and 2):

```
Semantic-Anchors/
├── docs/
│   ├── anchors/              # Individual anchor files (60+)
│   │   ├── _template.adoc    # Template for new anchors
│   │   ├── tdd-london-school.adoc
│   │   └── ...
│   ├── categories/           # Category include files
│   │   ├── testing-quality.adoc
│   │   └── ...
│   ├── roles/                # Role include files
│   │   ├── software-developer.adoc
│   │   └── ...
│   ├── metadata/             # Generated metadata
│   │   ├── categories.yml
│   │   ├── roles.yml
│   │   └── anchors.yml
│   └── ... (specs, arc42)
├── website/                  # Vite-based static site (Phase 2)
│   ├── src/
│   ├── public/
│   └── package.json
├── scripts/                  # Build scripts
│   ├── extract-metadata.js
│   └── generate-includes.js
└── ...
```
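The category include files under `docs/categories/` can be generated from the anchor metadata. A minimal sketch of what a generator such as `scripts/generate-includes.js` might do — the real script's contents are not shown in this file, so the function name, record shape, and relative paths here are assumptions:

```javascript
// Build the body of a category include file (e.g. docs/categories/testing-quality.adoc)
// from a list of anchor metadata records. Record shape and paths are illustrative.
function buildCategoryInclude(categoryTitle, anchors) {
  const lines = [`== ${categoryTitle}`, ""];
  for (const anchor of anchors) {
    // AsciiDoc include directive pointing at the per-anchor file
    lines.push(`include::../anchors/${anchor.id}.adoc[]`, "");
  }
  return lines.join("\n");
}

const testingAnchors = [
  { id: "tdd-london-school" },
  { id: "tdd-chicago-school" },
];
console.log(buildCategoryInclude("Testing & Quality Practices", testingAnchors));
```

Generating these files at build time keeps the per-anchor files as the single source of truth, in line with ADR-004.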
All content is written in AsciiDoc format (.adoc), not Markdown. Key AsciiDoc patterns:
- Section headers: `==` Level 1, `===` Level 2, `====` Level 3
- Collapsible sections: `[%collapsible]` followed by `====` delimiters
- Anchors: `[[anchor-id]]` before a section header
- Internal links: `<<anchor-id,Link Text>>`
- Bullet lists: `*` for unordered, `.` for ordered (not `-` like Markdown)
- Bold text: `*bold*` (not `**bold**`)
- Code blocks: `[source]` or `[source,language]` followed by `----` delimiters
Each semantic anchor will be in its own file with AsciiDoc attributes for metadata:
```asciidoc
= TDD, London School
:categories: testing-quality
:roles: software-developer, qa-engineer, architect
:related: tdd-chicago-school, hexagonal-architecture
:proponents: Steve Freeman, Nat Pryce
:tags: testing, tdd, mocking, outside-in

[%collapsible]
====
*Full Name*: Test-Driven Development, London School

*Also known as*: Mockist TDD, Outside-In TDD

*Core Concepts*:

* Mock-heavy testing
* Outside-in development
* Interaction-based testing

*Key Proponents*: Steve Freeman, Nat Pryce ("Growing Object-Oriented Software, Guided by Tests")

*When to Use*:

* Complex systems with many collaborating objects
* When designing APIs and interfaces
* Distributed systems where integration is costly
====
```

Key Metadata Attributes:
- `:categories:` - Category IDs (comma-separated, required)
- `:roles:` - Role IDs (comma-separated, required)
- `:related:` - Related anchor IDs (comma-separated, optional)
- `:proponents:` - Key proponents (comma-separated, required)
- `:tags:` - Search keywords (comma-separated, optional)
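An attribute header like the one above can be parsed with a few lines of JavaScript. This is a hedged sketch of what an extractor in the spirit of `scripts/extract-metadata.js` could do — the actual script's internals are not shown in this file, so the function name and output shape are assumptions:

```javascript
// Parse AsciiDoc attribute lines (":name: value") from an anchor file's header
// and split the comma-separated metadata attributes into arrays.
const LIST_ATTRIBUTES = ["categories", "roles", "related", "proponents", "tags"];

function extractMetadata(adocSource) {
  const meta = { title: null };
  for (const line of adocSource.split("\n")) {
    const titleMatch = line.match(/^= (.+)$/); // document title: "= TDD, London School"
    if (titleMatch) {
      meta.title = titleMatch[1].trim();
      continue;
    }
    const attrMatch = line.match(/^:([\w-]+):\s*(.*)$/); // ":categories: testing-quality"
    if (attrMatch) {
      const [, name, value] = attrMatch;
      meta[name] = LIST_ATTRIBUTES.includes(name)
        ? value.split(",").map((v) => v.trim()).filter(Boolean)
        : value.trim();
    }
  }
  return meta;
}

const sample = [
  "= TDD, London School",
  ":categories: testing-quality",
  ":roles: software-developer, qa-engineer, architect",
  ":tags: testing, tdd, mocking, outside-in",
].join("\n");

console.log(extractMetadata(sample));
```

The extracted records can then feed the generated YAML files under `docs/metadata/`.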
Before adding a new anchor, verify it meets these criteria:
- Precise: References a specific, established body of knowledge with clear boundaries
- Rich: Activates multiple interconnected concepts, not just a single instruction
- Consistent: Different users invoking it get similar conceptual activation
- Attributable: Can be traced to key proponents, publications, or documented standards
Counter-examples (NOT semantic anchors):
- "TLDR" - underspecified, no defined structure
- "ELI5" - vague target level, no pedagogical framework
- "Keep it short/simple" - pure instruction, no conceptual depth
Before adding, test with this prompt:
What concepts do you associate with '<semantic anchor name>'?
Evaluate: Recognition, Accuracy, Depth, Specificity
Current categories (subject to MECE analysis in Issue #36):
- Testing & Quality Practices
- Architecture & Design
- Design Principles & Patterns
- Requirements Engineering
- Documentation
- Communication & Presentation
- Decision Making & Strategy
- Development Practices
- Statistical Methods & Process Monitoring
- Interaction & Reasoning Patterns
Current roles:
- Software Developer / Engineer
- Software Architect
- QA Engineer / Tester
- DevOps Engineer
- Product Owner / Product Manager
- Business Analyst / Requirements Engineer
- Technical Writer / Documentation Specialist
- UX Designer / Researcher
- Data Scientist / Statistician
- Consultant / Coach
- Team Lead / Engineering Manager
- Educator / Trainer
Phase 1:
- MECE analysis of categories
- Role mapping for all anchors
- Split README.adoc into individual files
- Create metadata extraction script
- Generate category/role include files
Key Issues: #35 (Epic), #36-40
Phase 2:
- Set up Vite project
- Implement treemap visualization (Apache ECharts)
- Role-based filtering
- Search functionality
- AsciiDoc rendering (asciidoctor.js)
- i18n (EN/DE)
- Dark/Light theme
Key Issues: #41 (Epic), #42-48
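An ECharts treemap series consumes nested `{ name, value, children }` nodes. The features above can be sketched as a small data-shaping step that groups anchors by category into that structure — the grouping logic and field names here are assumptions about the eventual data model, not the project's actual code:

```javascript
// Group anchor records by category into the nested data structure that an
// ECharts treemap series ({ type: "treemap", data: [...] }) consumes.
function toTreemapData(anchors) {
  const byCategory = new Map();
  for (const anchor of anchors) {
    for (const category of anchor.categories) {
      if (!byCategory.has(category)) byCategory.set(category, []);
      byCategory.get(category).push({ name: anchor.title, value: 1 });
    }
  }
  return [...byCategory.entries()].map(([name, children]) => ({
    name,
    value: children.length, // category size drives its area in the treemap
    children,
  }));
}

const data = toTreemapData([
  { title: "TDD, London School", categories: ["testing-quality"] },
  { title: "Mutation Testing", categories: ["testing-quality"] },
  { title: "Hexagonal Architecture", categories: ["architecture-design"] },
]);
console.log(JSON.stringify(data, null, 2));
```

Role-based filtering would then filter the anchor list before this grouping step, so the treemap re-renders only the matching subset.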
Phase 3:
- GitHub Actions deployment workflow
- Issue templates for contributions
- CONTRIBUTING.md guide
- Automated testing (E2E, Lighthouse)
Key Issues: #49 (Epic), #50-53
Post-Phase 3:
- GitHub Copilot validation workflow
- Advanced search
- Analytics (privacy-first)
Current (Pre-Phase 1):
- Create feature branch: `git checkout -b add-semantic-anchor-<name>`
- Edit `README.adoc` to add the new anchor
- Follow the standard format
- Commit: `feat: Add <Anchor Name> to <Category> section`
- Create PR for review
When adding new anchors, also update the AgentSkill catalog: `skill/semantic-anchor-translator/references/catalog.md`
The AgentSkill enables AI agents (Claude Code, Codex, Cursor, etc.) to recognize and suggest semantic anchors. If the catalog is not updated, new anchors won't be discoverable by agents using the skill.
Checklist for new anchors:
- Add anchor to `docs/anchors/<name>.adoc`
- Update `skill/semantic-anchor-translator/references/catalog.md` with the new entry
Future (Post-Phase 3):
- User creates GitHub Issue via template
- GitHub Actions validates with Copilot
- Maintainer assigns to Copilot
- Copilot generates AsciiDoc file
- PR created automatically
- Maintainer reviews and merges
All decisions documented with Pugh matrices in docs/specs/adrs/:
- ADR-001: Vite as Static Site Generator (+88 points vs. Astro baseline)
- ADR-002: AsciiDoc Attributes for Metadata (+51 points vs. separate YAML)
- ADR-003: Apache ECharts for Treemap (+77 points vs. D3.js)
- ADR-004: One File per Anchor (+105 points vs. files per category)
- Build Tool: Vite (fast, modern, no framework lock-in)
- Visualization: Apache ECharts (treemap with built-in theming)
- Content Rendering: asciidoctor.js (AsciiDoc → HTML in browser)
- Styling: Tailwind CSS (utility-first, responsive, dark mode)
- i18n: Custom JSON-based (lightweight)
- Hosting: GitHub Pages
- CI/CD: GitHub Actions
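A "custom JSON-based" i18n layer can be as small as a keyed lookup with an English fallback. A minimal sketch — the message keys and catalog structure are illustrative, not the project's actual messages:

```javascript
// Minimal JSON-based translation lookup with English fallback.
const messages = {
  en: { "nav.search": "Search", "nav.roles": "Filter by role" },
  de: { "nav.search": "Suche" }, // missing keys fall back to English
};

function t(key, lang = "en") {
  return messages[lang]?.[key] ?? messages.en[key] ?? key;
}

console.log(t("nav.search", "de")); // "Suche"
console.log(t("nav.roles", "de")); // falls back to "Filter by role"
```

Because the UI is bilingual (EN/DE) but anchor content stays English, only UI strings need entries in the catalog.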
When creating specifications for this project:
Use cases:
- Use PlantUML Activity Diagrams for each use case
- Structure: Akteure (actors), Vorbedingungen (preconditions), Ablauf (flow), Nachbedingungen (postconditions), Fehlerszenarien (error scenarios)

API specification:
- OpenAPI-style in AsciiDoc format
- Data models as JSON schemas
- Group endpoints: Navigation, Content Access, Manipulation, Meta-Information

Acceptance criteria:
- Gherkin format (Given-When-Then)
- Group by feature/use case
- Language: German

ADRs:
- Follow Nygard format: Status, Context, Decision, Consequences
- Include Pugh Matrix comparing alternatives
- Store in `docs/specs/adrs/`
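A Pugh Matrix scores each alternative against a baseline (which scores 0 on every criterion) and sums the weighted results. A small sketch of that computation — the criteria, weights, and scores below are made up for illustration and are not taken from the actual ADRs:

```javascript
// Sum the weighted scores of one alternative relative to the baseline.
// Per criterion: -1 = worse than baseline, 0 = same, +1 = better.
function pughScore(weights, scores) {
  return Object.entries(scores).reduce(
    (total, [criterion, score]) => total + (weights[criterion] ?? 0) * score,
    0
  );
}

// Hypothetical example: comparing a candidate tool against a baseline
const weights = { performance: 5, simplicity: 3, ecosystem: 4 };
const candidate = { performance: 1, simplicity: 1, ecosystem: 0 };
console.log(pughScore(weights, candidate)); // 5*1 + 3*1 + 4*0 = 8
```

The point totals quoted in ADR-001 through ADR-004 (e.g. "+88 points vs. Astro baseline") are the result of this kind of weighted sum.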
```shell
# Development
npm run dev              # Start Vite dev server

# Build
npm run extract-metadata # Extract metadata from .adoc files
npm run validate         # Validate anchors
npm run generate-json    # Generate JSON for website
npm run build            # Build website for production

# Testing
npm run test             # Unit tests
npm run test:e2e         # E2E tests with Playwright
npm run test:a11y        # Accessibility tests

# Deployment
npm run deploy           # Deploy to GitHub Pages (via GH Actions)
```

All implementation work is tracked in GitHub Issues. See PROJECT_STATUS.md for current status.
Issue Labels:
- `epic` - High-level phases
- `phase-1`, `phase-2`, `phase-3` - Phase assignment
- `new-anchor` - Proposals for new semantic anchors
- `bug`, `enhancement`, `documentation`
Workflow:
- Check PROJECT_STATUS.md for current phase
- Pick an issue from current phase
- Work on feature branch
- Reference issue in commits: `feat: implement X (#42)`
- Create PR linking to issue
- Page load < 2s on 3G
- Treemap rendering < 500ms
- Search response < 300ms
- WCAG 2.1 Level AA compliance
- Keyboard navigation
- Screen reader support
- Lighthouse Performance > 90
- Lighthouse Accessibility: 100
- Lighthouse SEO > 90
- AsciiDoc is mandatory - Do not convert to Markdown
- MECE categories - After Phase 1, all categories must be MECE-compliant
- Bilingual UI - Website UI in EN/DE, but anchor content stays English
- Privacy-first - No tracking without consent
- One file per anchor - Improves Git history and parallel contributions
- Test before adding - Always validate semantic anchors with LLM first
- PRD: `docs/PRD.md`
- Architecture: `docs/arc42/arc42.adoc`
- Project Status: `PROJECT_STATUS.md`
- GitHub Issues: https://github.com/LLM-Coding/Semantic-Anchors/issues
Generated by /risk-assess on 2026-02-13
Mitigation Tracking: Risk Radar Issues
Documentation: Vibe Coding Risk Radar
Build scripts:

| Dimension | Score | Level | Evidence |
|---|---|---|---|
| Code Type | 1 | Build Scripts / Tests | extract-metadata.js, split-readme.js, update-anchor-categories.js, translate-anchors.py |
| Language | 2 | Dynamically typed | 3 .js files, 1 .py file |
| Deployment | 1 | Internal tool | CI/CD automation, build-time scripts |
| Data Sensitivity | 0 | Public data | Processes public AsciiDoc documentation |
| Blast Radius | 0 | Cosmetic / Tech debt | Build failures, incorrect metadata generation |
Tier: 2 — determined by Language = 2
| Measure | Status | Details | Issue |
|---|---|---|---|
| Linter & Formatter | ❌ Pending | No ESLint or Prettier config detected | #81 |
| Type Checking | ❌ N/A | JavaScript without TypeScript | — |
| Pre-Commit Hooks | ❌ Pending | No husky or pre-commit framework | #82 |
| Dependency Check | ❌ Pending | No npm audit in CI workflows | #83 |
| CI Build & Unit Tests | ✅ In place | GitHub Actions (.github/workflows/test.yml) | — |
| Measure | Status | Details | Issue |
|---|---|---|---|
| SAST | ❌ Pending | No Semgrep or CodeQL detected | #84 |
| AI Code Review | ❌ Pending | No CodeRabbit or Copilot Review | #86 |
| Property-Based Tests | ❌ Pending | No fast-check or hypothesis | #85 |
| SonarQube Quality Gate | ❌ Pending | No SonarQube config | #87 |
| Sampling Review (~20%) | — | PR review process (assumed) | #88 |
Website frontend:

| Dimension | Score | Level | Evidence |
|---|---|---|---|
| Code Type | 0 | UI / CSS / Docs | Frontend components with DOM manipulation (header.js, card-grid.js, anchor-modal.js) |
| Language | 2 | Dynamically typed | 17 .js files (JavaScript) |
| Deployment | 2 | Public-facing app | GitHub Pages deployment, public documentation website |
| Data Sensitivity | 0 | Public data | Public semantic anchor documentation |
| Blast Radius | 0 | Cosmetic / Tech debt | UI glitches, broken features |
Tier: 2 — determined by Language = 2, Deployment = 2
| Measure | Status | Details | Issue |
|---|---|---|---|
| Linter & Formatter | ❌ Pending | No ESLint or Prettier config detected | #81 |
| Type Checking | ❌ N/A | JavaScript without TypeScript | — |
| Pre-Commit Hooks | ❌ Pending | No husky or pre-commit framework | #82 |
| Dependency Check | ❌ Pending | No npm audit in CI workflows | #83 |
| CI Build & Unit Tests | ✅ In place | GitHub Actions with Playwright E2E tests and Lighthouse CI (.github/workflows/test.yml) | — |
| Measure | Status | Details | Issue |
|---|---|---|---|
| SAST | ❌ Pending | No Semgrep or CodeQL detected | #84 |
| AI Code Review | ❌ Pending | No CodeRabbit or Copilot Review | #86 |
| Property-Based Tests | ❌ Pending | No fast-check or hypothesis | #85 |
| SonarQube Quality Gate | ❌ Pending | No SonarQube config | #87 |
| Sampling Review (~20%) | — | PR review process (assumed) | #88 |