
feat: Add VoiceFidelity pack #984

Closed

NorthwoodsSentinel wants to merge 4 commits into danielmiessler:main from NorthwoodsSentinel:feat/voicefidelity-pack

Conversation

@NorthwoodsSentinel

Summary

  • Adds VoiceFidelity pack to Packs/ — voice fidelity scoring and indexical grounding detection for AI-assisted writing
  • Three CLI tools: voice-extract (corpus → profile), voice-score (9-check scoring), prufrock (10-layer grounding audit)
  • No external API dependencies — everything runs locally with bun
  • Follows the established Pack format (README, INSTALL, VERIFY, src/)

What It Does

Answers "does this sound like you?" instead of "was this AI?" Every person has a linguistic fingerprint — sentence patterns, vocabulary, regional markers, professional tribe language. This pack extracts that fingerprint from a writing corpus and scores new documents against it.

voice-extract: Reads 15-20 documents and builds a JSON voice profile with scoring thresholds derived from the author's actual patterns.
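The threshold-derivation idea can be sketched roughly like this. This is a hypothetical illustration, not the pack's actual code: the `VoiceProfile` shape, `extractProfile` name, and naive sentence splitting are all assumptions for the sketch.

```typescript
// Hypothetical sketch: derive sentence-length thresholds from a corpus.
// Names and profile fields are illustrative, not the pack's real API.
interface VoiceProfile {
  meanSentenceLength: number; // mean words per sentence
  stdSentenceLength: number;  // standard deviation, used for conformance bands
}

function extractProfile(documents: string[]): VoiceProfile {
  const lengths: number[] = [];
  for (const doc of documents) {
    // Naive sentence split; the real tool would handle abbreviations etc.
    for (const s of doc.split(/[.!?]+/)) {
      const words = s.trim().split(/\s+/).filter(Boolean);
      if (words.length > 0) lengths.push(words.length);
    }
  }
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance =
    lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  return { meanSentenceLength: mean, stdSentenceLength: Math.sqrt(variance) };
}
```

A downstream scorer can then flag documents whose sentence lengths fall outside, say, two standard deviations of the author's mean.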

voice-score: Runs 9 weighted checks against the profile — banned words, filler openers, sentence length conformance, paragraph structure, hedge clusters, passive voice, AI triple detection, bullet walls, and voice conformance metrics.
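The weighted-check aggregation might look like the following minimal sketch. The `Check` interface, the example banned-opener list, and the exact weighting are assumptions; only the pass-at-70 threshold comes from the PR description.

```typescript
// Hypothetical sketch of weighted check aggregation with a pass mark of 70.
interface Check {
  name: string;
  weight: number;                 // relative importance of this check
  run: (text: string) => number;  // 0..1, where 1 = fully conforms to profile
}

function scoreDocument(
  text: string,
  checks: Check[]
): { score: number; pass: boolean } {
  const totalWeight = checks.reduce((a, c) => a + c.weight, 0);
  const weighted = checks.reduce((a, c) => a + c.weight * c.run(text), 0);
  const score = Math.round((weighted / totalWeight) * 100);
  return { score, pass: score >= 70 };
}

// Example check: fail documents that open with a generic filler phrase.
// The phrase list here is illustrative.
const bannedOpeners: Check = {
  name: "filler-openers",
  weight: 2,
  run: (text) =>
    /^(In today's|As an AI|Certainly)/i.test(text.trim()) ? 0 : 1,
};
```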

prufrock: 10-layer indexical grounding audit. Catches mangled idioms (140-idiom corpus + 25 known mangles), missing regional markers, register collisions, embodied-detail clichés (45 generic placeholders), and temporal vagueness, plus 5 manual review questions covering community-of-practice, provenance safety, cross-layer consistency, narrative truth, and stance authenticity.
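The mangle check can be sketched as a near-miss comparison against a frozen-expression corpus: a phrase one word off from a known idiom is likely a mangle. The corpus entries and function name below are illustrative, not the pack's actual data.

```typescript
// Hypothetical sketch of mangled-idiom detection.
// A phrase exactly one word away from a frozen expression is flagged;
// exact matches and unrelated phrases are not.
const frozenExpressions = ["rock the boat", "bite the bullet", "spill the beans"];

function findMangledIdiom(phrase: string): string | null {
  const words = phrase.toLowerCase().split(/\s+/);
  for (const idiom of frozenExpressions) {
    const target = idiom.split(" ");
    if (target.length !== words.length) continue;
    const diffs = target.filter((w, i) => w !== words[i]).length;
    if (diffs === 1) return idiom; // close but wrong: likely a mangle
  }
  return null;
}
```

This would catch the originating bug ("don't rock the ship") while leaving the correct idiom alone.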

Origin

Built when an AI wrote "don't rock the ship" instead of "don't rock the boat." Two review systems missed it. A human caught it because something felt wrong. That led to forensic linguistics research (indexicality, shibboleths, idiolect, formulaic sequences) and this detection framework.

Test plan

  • bun voice-extract.ts --corpus <folder> produces valid JSON profile
  • bun voice-score.ts <document> returns scored output with pass/fail
  • bun prufrock.ts <document> returns 10-layer audit with automated + manual layers
  • INSTALL.md wizard runs cleanly on a fresh Claude Code instance
  • VERIFY.md checks all pass after installation
  • No external API keys required for core functionality

🤖 Generated with Claude Code

NorthwoodsSentinel and others added 4 commits March 22, 2026 18:05
…rounding detection

Three CLI tools for ensuring AI-assisted writing sounds like the person
who claims to have written it:

- voice-extract: corpus → JSON voice profile (sentence patterns, vocabulary,
  paragraph architecture, scoring thresholds)
- voice-score: 9-check weighted scoring against a personal profile (pass at 70)
- prufrock: 10-layer indexical grounding audit (5 automated + 5 manual)

Includes a 140-entry frozen-expression corpus, 25 known idiom mangles, 45 embodied-cliché detections, regional marker categories, and the Indexical Grounding Framework with academic backing from sociolinguistics and forensic linguistics.

No external API dependencies. Everything runs locally with bun.

Origin: built from personal friction when an AI wrote "don't rock the ship"
instead of "don't rock the boat." The detection framework grew from there.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…gged insights

Two CLI tools for extracting and capturing breakthrough moments:

- breadcrumb-mine: scans ChatGPT JSON, Claude JSONL, or markdown
  conversation exports for 25+ natural breakthrough language patterns
  ("holy shit", "I just realized", "eureka", "remember this", etc.)
  Auto-categorizes into 10 themes and outputs a searchable index.

- breadcrumb-tag: quick-capture insights during live sessions with
  auto-detected tags and categories. Designed for speed — capture
  before the insight fades.

Tested against 683 real ChatGPT conversations, found 322 breadcrumbs
across 10 categories. No external API dependencies.
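The pattern scan described above can be sketched as a simple filter over exported messages. The pattern list and function name are illustrative; the real tool matches 25+ patterns and auto-categorizes hits into themes.

```typescript
// Hypothetical sketch of breakthrough-pattern mining over a conversation
// export. Patterns illustrative; the real tool has 25+ and categorizes.
const breakthroughPatterns = [
  /holy shit/i,
  /i just realized/i,
  /eureka/i,
  /remember this/i,
];

function mineBreadcrumbs(
  messages: string[]
): { index: number; text: string }[] {
  const hits: { index: number; text: string }[] = [];
  messages.forEach((text, index) => {
    if (breakthroughPatterns.some((p) => p.test(text))) {
      hits.push({ index, text });
    }
  });
  return hits;
}
```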

Origin: discovered after mining 62 manually-tagged insights from 10
months of ChatGPT history. The user's natural breakthrough language
("holy shit", "eureka", "remember this") was already a retrieval
index — it just needed a tool to extract it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…tion

Mirror — structured metacognitive reflection for user-generated insight
Flinch — somatic signal capture (5-second body signal logging with scoring)
LandingProcedure — safe exit protocol for deep work / flow sessions
SessionHealth — session age and turn count watchdog (prevents context rot)
PreCompact — AI-authored fidelity preamble before context compaction
FlowDetect — flow state detection from message patterns
VitalsDump — daily biometric summary from Oura Ring + Garmin Connect
FleetDump — daily activity summary across multi-AI fleet infrastructure

These packs share an origin: built by someone rebuilding after personal
crisis who needed tools his AI couldn't provide. Mirror held the space
for reflection. Flinch caught the body signals he'd learned to ignore.
LandingProcedure stopped the 28-hour sessions. SessionHealth warned
before context degraded. The rest instrument the relationship between
body, mind, and AI infrastructure.

No external API dependencies for core tools (VitalsDump needs Oura/Garmin
credentials). All TypeScript/bun or Python. All battle-tested in production.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
WorkCompletionLearning — SessionEnd hook that captures work meta-learnings
(files changed, tools used, ISC satisfaction, duration) so insights compound
across sessions instead of dying with the PRD.

DriftMon — AI behavioral drift detector. Analyzes session transcripts for
hedging, refusal, softener, and meta-commentary density. Tracks trends via
CSV. An intrusion detection system for AI behavior — same methodology as
network security monitoring, different target.
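The hedging-density measurement might be sketched as marker counting normalized by transcript length. The marker list and function name are assumptions for illustration, not DriftMon's actual implementation.

```typescript
// Hypothetical sketch of hedging-density measurement for drift tracking.
// Marker list illustrative; the real tool also tracks refusals, softeners,
// and meta-commentary, and trends the values via CSV.
const hedgeMarkers = [/\bperhaps\b/gi, /\bit seems\b/gi, /\bI think\b/gi, /\bmight\b/gi];

function hedgeDensity(transcript: string): number {
  const words = transcript.split(/\s+/).filter(Boolean).length;
  if (words === 0) return 0;
  let hits = 0;
  for (const marker of hedgeMarkers) {
    hits += (transcript.match(marker) ?? []).length;
  }
  return hits / words; // hedges per word; trend this value across sessions
}
```

A rising density across sessions is the drift signal, analogous to a baseline-deviation alert in network monitoring.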

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@NorthwoodsSentinel
Author

Closing for now — doing a security hardening pass across all public code before submitting. Will resubmit when the codebase has been through full SAST review. The packs are solid but I want the tooling they reference to be properly hardened first.

