# Cognitive Shadow Theorem
[License: AGPL v3](LICENSE) · [Text license: CC BY‑NC 4.0](LICENSE_TEXT) · [DOI: 10.5281/zenodo.20096839](https://doi.org/10.5281/zenodo.20096839)
**A formally verified framework proving fundamental limits for the complete digitization of conscious experience.**
The core result — the **Cognitive Shadow Theorem** — demonstrates that any formal system claiming to fully capture a conscious agent necessarily leaves a non‑formalizable residue: the **cognitive shadow**.
The project includes:
- **Mechanized proofs** in Coq (CIC) for all central theorems.
- **Dynamic extensions** governing resource‑bounded interfaces, predictability horizons, and metacognitive collapse.
- **Safety protocols** for systems that interact with cognitive shadows.
- **Empirical roadmap** for experimental validation (EEG calibration, clinical protocols, cross‑population studies).
- **Model checking** of the complete agent/shadow dynamics in TLA⁺ with invariant proofs.
- **Calibration tools** (Python) for fitting key parameters from real‑world data.
---
## 🔐 Intellectual Property & Licensing
This project is **dual‑licensed**:
| Component | License | Note |
|-----------|---------|------|
| All source code (Coq, TLA⁺, Python, Rust) | **GNU Affero GPL v3.0** ([LICENSE](LICENSE)) | *Commercial use **requires a separate paid license**. Contact the author.* |
| All text, documentation, protocols, preprints | **CC BY‑NC 4.0** ([LICENSE_TEXT](LICENSE_TEXT)) | *Commercial use of the **descriptions** is prohibited.* |
**For commercial or proprietary use** (e.g., integration into a medical device, BCI platform, or AI safety infrastructure), please contact the author for an alternative commercial license agreement.
**Patent pending** on certain technical implementations.
---
## 📂 Repository Structure

```
cognitive-shadow-theorem/
├── preprints/              # LaTeX sources of all preprints and appendices
│   ├── The_Cognitive_Shadow_Theorem.tex
│   ├── Chemical_Phenomenology_of_Consciousness.tex
│   ├── ... (other preprints)
│   └── supplementary/      # Appendices S1–S7
├── src/                    # Coq formalization (28 modules)
├── tla-spec/               # TLA⁺ specification and model checking
│   ├── CognitiveShadow_Model.tla
│   ├── CognitiveShadow_Model.cfg
│   └── (tla2tools.jar is downloaded automatically)
├── scripts/                # Open‑source calibration tools & tests
│   ├── fit_delta_min.py
│   ├── fit_ferroelectric_params.py
│   ├── requirements.txt
│   ├── Dockerfile
│   └── tests/              # pytest suite
├── guides/                 # Practical guides (AI hygiene)
├── ru/                     # Russian translations (for reference)
├── docs/                   # Technical documentation of Coq modules
├── _CoqProject             # Coq project file
├── coq_build.log           # Full Coq compilation log (successful)
├── tlc_report.txt          # TLC output (65 states, invariants OK)
├── docker-compose.yml      # Reproducible Docker environment
├── README.md
├── LICENSE                 # AGPL v3 (code)
├── LICENSE_TEXT            # CC BY‑NC 4.0 (texts)
└── NOTICE                  # Patent & commercial licensing notice
```
> **Note:** The public repository contains the research‑grade core and calibration tools.
> **Production‑grade modules** (full real‑time pipeline, BIDS converter with HSM integration, ZK‑audit workflows, edge‑device optimizations) are **available under commercial license** only. Contact the author for access.
---
## 🚀 Quick Start
### 1. Verify the Coq proofs
```bash
git clone https://github.com/Kalera77/cognitive-shadow-theorem.git
cd cognitive-shadow-theorem
docker-compose run --rm coq
```

This compiles all Coq modules inside a Docker container and prints `✅ All modules compiled successfully!`.

### 2. Run the TLA⁺ model checker

```bash
docker-compose run --rm tla
```

The TLC model checker verifies the finite model of the cognitive architecture with all invariants.
Expected output: `Model checking completed. No error has been found. 65 states generated, 61 distinct states found.`
See `tlc_report.txt` for the full report.
### 3. Read the preprints

Preprints are located in `preprints/`.
The main theoretical paper is `preprints/The_Cognitive_Shadow_Theorem.pdf`.
Supplementary appendices (detailed Coq listings, experimental protocols, calibration procedures, empirical foundations) are in `preprints/supplementary/`.
### 4. Run the Python calibration tools

Build the Docker image for the Python tools and run the test suite:

```bash
cd scripts
docker build -t shadow-tests .
docker run --rm -v ${PWD}:/scripts -w /scripts shadow-tests pytest tests/ -v
```

All tests pass.

Individual scripts can also be run directly with Python after installing the dependencies:

```bash
pip install -r scripts/requirements.txt
python scripts/fit_delta_min.py --input data/fitting_input_001.csv
```

---

## 🧠 Motivation

The question of consciousness was long treated as purely philosophical. This is no longer the case.
Contemporary neuroscience now provides quantitative, reproducible correlates of conscious states:
studies of metacognitive efficiency (meta‑d'), disorders of consciousness (coma, VS, MCS, LIS),
and dynamic functional connectivity (EEG, fMRI, iEEG) are moving the field beyond behaviourism.
At the same time, formal approaches — from integrated information theory to the free‑energy principle —
are attempting to capture consciousness in mathematical language.
The present work contributes to this landscape by offering a formally verified, axiomatically transparent framework that makes falsifiable predictions and connects directly with the empirical roadmap laid out in the accompanying preprints. The Cognitive Shadow Theorem is not armchair speculation: it is an engineering‑ready account of the limits that any computational model of a conscious agent must respect.
---

## 🤖 Relevance to Algorithm Theory and AI Safety

Even setting aside the debates about human consciousness, the formal construction of the project — a system with a proven impossibility of complete self‑capture — is directly applicable to algorithm theory and AI safety.
The theorem rests not on biology, but on the properties of formal models of an agent that attempts to fully describe itself. For AI, this yields the following practical implications:
- **Limit of self‑predictability.** One cannot build a debugger or monitor that, for every internal state of an agent, accurately predicts its next action, provided the agent itself uses the same model.
- **An unavoidable "grey zone" in verification.** Complete formal verification of a self‑referential agent is impossible in principle; the theorem gives a precise mathematical characterization of this limit.
- **Protocols for handling cognitive shadows.** The project's applied component proposes not to "crack open" the shadow but to work with it correctly, as an accepted boundary.
- **FORCED_REPORT for AI.** An analogue of the clinical protocol: a mandatory report request in critical states. For AI, this could mean enforced generation of explanations before a dangerous action, even when full interpretation is impossible.
- **Predictability constraints** for systems that interact with cognitive shadows (agent cooperation or competition).
- **Metacognitive collapse.** A situation in which an AI, attempting to fully reflect on its own process, enters a logical deadlock. The system can be equipped with a "circuit breaker" that detects such a state and transitions into a safe mode.
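The self‑predictability limit above can be illustrated with a toy sketch. This is not code from the repository and every name in it is illustrative; it only shows the diagonal argument in miniature: when a monitor and an agent share the same deterministic model, the agent can always invert the forecast, so the monitor is wrong on every state.

```python
# Toy illustration (not from the repository) of the self-predictability limit:
# a monitor and an agent share one deterministic model of the agent's next
# binary action; the agent inverts the forecast, defeating the monitor.

def shared_model(state: int) -> int:
    """Any deterministic model mapping states to binary actions {0, 1}."""
    return state % 2

def monitor_forecast(state: int) -> int:
    """The monitor's prediction, computed with the shared model."""
    return shared_model(state)

def agent_action(state: int) -> int:
    """The agent consults the same model and does the opposite."""
    return 1 - shared_model(state)

# The monitor fails on every one of the 8 states checked here.
mismatches = sum(monitor_forecast(s) != agent_action(s) for s in range(8))
print(mismatches)  # 8
```

The argument does not depend on the particular model: any deterministic `shared_model` accessible to both parties yields the same mismatch on every state.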
Decoupled from live EEG recordings, the project's empirical roadmap reduces to a methodology for calibrating cognitive‑shadow parameters from the behavior of any complex system. This provides tools for:
- Measuring an agent's "degree of self‑prediction incompleteness" from its action logs.
- Assessing how closely an architecture approaches the formalizability boundary and predicting its stability near that boundary.
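One way to make the first measurement concrete is sketched below. This is a hypothetical helper, not the repository's API: it treats an action log as pairs of (self‑predicted action, actual action) and reports the fraction of steps on which the agent's own forecast failed.

```python
# Hypothetical sketch (function name and log format are assumptions, not the
# repository's API): estimate a "self-prediction incompleteness" score as the
# fraction of logged steps where the agent's forecast of its own next action
# disagreed with the action it actually took.

def incompleteness_score(log):
    """log: iterable of (predicted_action, actual_action) pairs."""
    pairs = list(log)
    if not pairs:
        return 0.0  # an empty log carries no evidence of incompleteness
    mismatches = sum(1 for predicted, actual in pairs if predicted != actual)
    return mismatches / len(pairs)

action_log = [("left", "left"), ("left", "right"),
              ("stop", "stop"), ("go", "stop")]
print(incompleteness_score(action_log))  # 0.5
```

A score of 0 would mean the agent fully predicts itself on the logged trace; the theorem implies that for a self‑referential agent this cannot hold over all states.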
The project offers a formally verified lemma on the limits of self‑knowledge in agents, a suite of dynamic theorems about interaction and predictability, and ready‑to‑use software components (Coq, TLA⁺) for modeling and checking these boundaries. This is a potentially useful mathematical apparatus for Explainable AI (XAI), AI Safety, and multi‑agent systems theory.
---

## ✉️ Contact

**Valery S. Kalinin**
Independent Researcher, System Engineering Research
Email: kalera77@gmail.com
GitHub Issues: [github.com/Kalera77/cognitive-shadow-theorem/issues](https://github.com/Kalera77/cognitive-shadow-theorem/issues)
## 🤝 Contributing

This is an open research project. Contributions, suggestions, and collaborations are warmly welcomed.
Please open an issue or submit a pull request; for major changes, consider opening a discussion first.

> **Note:** External contributors must agree to the project's Contributor License Agreement (CLA) to protect the integrity of the project's IP.
## 📖 Citation

If you use this work, please cite the main preprint:

```bibtex
@techreport{kalinin2026cognitive-shadow,
  title       = {The Cognitive Shadow Theorem: Formal Limits of Consciousness Digitization with Mechanized Proofs},
  author      = {Kalinin, Valery S.},
  year        = {2026},
  institution = {System Engineering Research},
  url         = {https://github.com/Kalera77/cognitive-shadow-theorem},
  note        = {Preprint, April 2026}
}
```

A `CITATION.cff` file is also provided for GitHub's automated citation support.