A .speq file is the architectural contract of your project. You write it once. Every AI session reads it before writing a single line of code. Same spec, two machines, two agents: architecturally identical output.
Natural language is inherently ambiguous. The same words mean different things across models, sessions, and prompts. A .speq file has a grammar, a parser, and a validator. It cannot be misread.
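To make the parse-and-validate idea concrete, here is an illustrative sketch only: the keys, grammar, and validator below are invented for this example and are not the actual .speq format. It shows why a spec with a fixed grammar cannot be read two different ways.

```python
# Hypothetical sketch -- the real .speq grammar may differ entirely.
# The point: a spec with a grammar is parsed and validated, never "interpreted".

REQUIRED_KEYS = {"naming", "layers", "error_handling"}  # invented for illustration


def parse(text: str) -> dict:
    """Parse 'key: value' lines; reject anything the grammar does not allow."""
    spec = {}
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are the only free-form content
        if ":" not in line:
            raise SyntaxError(f"line {lineno}: not 'key: value'")
        key, value = (part.strip() for part in line.split(":", 1))
        spec[key] = value
    return spec


def validate(spec: dict) -> None:
    """Every architectural decision must be declared, or validation fails."""
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        raise ValueError(f"undeclared decisions: {sorted(missing)}")


spec = parse("""
# project contract (hypothetical)
naming: snake_case
layers: api -> service -> repo
error_handling: exceptions, never sentinel values
""")
validate(spec)  # passes; a spec missing a required key would raise
```

A spec that omits a required decision fails validation loudly instead of leaving the gap for the AI to fill arbitrarily.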
Vibe coding generates entropy. The AI fills every undeclared decision (naming, layering, security) arbitrarily, and differently in every session. SpeQ collapses that decision space upfront.
AGENTS.md and similar files work well, but they are natural language by design, and natural language is not opinionated. On large codebases especially, they drift, get interpreted differently across models and sessions, and cannot serve as a real source of truth.
The root cause of AI inconsistency is not the model. It is the missing source of truth.
examples/: annotated .speq files covering different domains and complexity levels.
The AI behavioral contract for working with .speq files. Load it alongside your spec at the start of any session. An AI that has read the SKILL and the spec has no room for interpretation errors.
This project is in early development. The format already works, but there is meaningful work left on semantic precision and completeness. Criticism and contributions are very welcome. Read CONTRIBUTING.md to get involved.