Releases: crasofuentes-hub/qisa-consensus-engine
QISA arXiv package (paper-v0.2.4)
arXiv package for QISA Consensus Engine.
Includes:
- main.tex + refs.bib
- ARXIV_COMMENTS.txt (paste into arXiv Comments field)
- COVER_LETTER.txt (short cover letter template)
QISA Consensus Engine v0.2.4
Release v0.2.4
Highlights:
- Trusted Publishing to PyPI via GitHub Actions (OIDC)
- Deterministic traces + external verification tools
- Adversarial benchmarks (paper-grade)
QISA Consensus Engine v0.2.3
Release v0.2.3
Highlights:
- Trusted Publishing to PyPI via GitHub Actions (OIDC)
- Deterministic, externally verifiable traces
- Adversarial benchmarks methodology
v0.2.2
Adversarial benchmarks snapshot + CI artifacts. See benchmarks workflow for reproducibility.
QISA Consensus Engine v0.2.1
Release v0.2.1
Highlights:
- Non-convergence policy + quality metrics
- Paper-grade tools docs (trace export/verify)
- Adversarial benchmarks + methodology
Repro:
- pip install qisa-consensus-engine==0.2.1
- docker build -t qisa-consensus-engine:dev .
- docker run --rm -v ${PWD}:/app qisa-consensus-engine:dev python benchmarks/bench_adversarial.py
v0.2.0 — Non-Convergence Policy + Quality Metrics
v0.2.0 — Non-Convergence Policy + Quality Metrics (Normalized API)
Technical summary
This release raises QISA to a reference-grade standard for auditing and reproducibility: it defines an explicit non-convergence policy, normalizes the quality-metrics API, and closes coverage gaps under a strict CI gate.
Main changes
- Explicit non-convergence policy
- Defined, traceable behavior when the fixpoint does not stabilize within max_steps.
- Clear stop-reason signaling and consistent traceability for downstream analysis.
- Quality metrics: normalized API
- compute_quality_metrics accepts normalized data sources:
- precedence: trace → records → decisions (+ optional final_state)
- Deterministic base metrics:
- disagreement_entropy
- x_target_variance
- decision_count
- final_state_keys (if final_state is provided)
- Backward compatible: calling with no arguments returns {}.
- Engineering reliability
- CI coverage gate enforced (fail-under 95%).
- Expanded test suite covering previously unexercised paths and stabilizing contracts (including traces and tamper validation).
- Current status: tests green + local coverage ~99%.
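To make the precedence and fallback behavior concrete, here is a minimal sketch of what the normalized entry point could look like. This is illustrative only: the field names `decision` and `x_target` inside each record, and the exact signature, are assumptions, not the package's actual schema.

```python
import math
from collections import Counter

def compute_quality_metrics(trace=None, records=None, decisions=None, final_state=None):
    """Sketch of the normalized metrics API.

    Source precedence: trace -> records -> decisions. Calling with no
    arguments returns {} (the backward-compatible behavior).
    """
    source = trace or records or decisions
    if source is None:
        return {}

    # Each entry is assumed to carry a decision label and a numeric x_target.
    labels = [entry["decision"] for entry in source]
    targets = [entry["x_target"] for entry in source]
    total = len(source)

    # disagreement_entropy: Shannon entropy of the decision distribution.
    counts = Counter(labels)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

    # x_target_variance: population variance of the numeric targets.
    mean = sum(targets) / total
    variance = sum((x - mean) ** 2 for x in targets) / total

    metrics = {
        "disagreement_entropy": entropy,
        "x_target_variance": variance,
        "decision_count": total,
    }
    if final_state is not None:
        # Sorted keys keep the output deterministic and hash-friendly.
        metrics["final_state_keys"] = sorted(final_state)
    return metrics
```

The deterministic ordering (sorted keys, population statistics) is what makes the output safe to hash and compare across runs.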
Determinism and auditability
- Stable ordering for serializations/metrics (suitable for hashing and reproducible comparison).
- verify_trace is compatible with implementations that raise an exception or return bool/None (tests cover both).
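The dual contract for verify_trace (raise on failure vs. return bool/None) can be absorbed by a small adapter. This is an illustrative pattern, not a helper shipped with the package:

```python
def trace_is_valid(verify_trace, trace):
    """Normalize verify_trace outcomes to a single boolean.

    Handles implementations that raise on failure, return False on
    failure, or return True/None on success (illustrative adapter).
    """
    try:
        result = verify_trace(trace)
    except Exception:
        return False
    # None conventionally means "no complaint", i.e. the trace verified.
    return result is not False
```

A test suite written against this adapter covers both verification styles without branching.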
Compatibility
- Backward-compatible changes; the main adjustment is the normalization of the metrics API for consistent consumption.
How to validate (local)
- python -m ruff check .
- python -m pytest -q
- python -m coverage run --source=src -m pytest -q
- python -m coverage report -m
v0.1.3
v0.1.3
- CI green
- Coverage artifact published
- Property-based tests: determinism + tamper-evidence
v0.1.2 — Metadata sync (Zenodo)
What changed:
- Zenodo metadata finalized via .zenodo.json
- README updated with DOI badge + citation
- CITATION.cff updated with Zenodo DOI and version alignment
Why:
Ensure Zenodo archives the repository with canonical metadata and consistent citation info.
Reproducibility:
- python -m pip install -e .[dev]
- python -m pytest
- python -m bench.run_bench > bench/results_scenario_v1.json
- python -m bench.make_results_table
v0.1.1 — Zenodo archival trigger
No code changes. This release exists to trigger Zenodo archival and DOI issuance.
v0.1.0 — Deterministic consensus core (auditable + reproducible)
QISA Consensus Engine — v0.1.0
Scope
First reference release of a deterministic, auditable consensus engine with explicit fixpoint semantics.
Core guarantees
- Deterministic convergence (fixpoint + idempotence)
- Tamper-evident trace hashing (hash chain per step)
- Verifiable trace export and verification
- Reproducible benchmark harness with pinned results
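The per-step hash chain behind the tamper-evidence guarantee can be sketched as follows. Field names and the genesis value here are illustrative assumptions, not the engine's actual trace schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed starting hash for the chain

def chain_trace(steps):
    """Build a tamper-evident trace: each record hashes its payload
    together with the previous record's hash (illustrative sketch)."""
    prev = GENESIS
    trace = []
    for step in steps:
        payload = json.dumps(step, sort_keys=True)  # stable ordering for determinism
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        trace.append({"step": step, "hash": digest})
        prev = digest
    return trace

def verify_chain(trace):
    """Recompute the chain; editing any step invalidates every later hash."""
    prev = GENESIS
    for record in trace:
        payload = json.dumps(record["step"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True
```

Because each hash commits to its predecessor, verification is a single linear pass and any modification is detectable without trusting the producer.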
What is included
- Multi-perspective opinions + deterministic consensus operator
- Trace export (trace_to_json) and verification (verify_trace)
- Benchmark harness (bench.run_bench) with baselines
- Auto-generated comparison table from pinned JSON results
- Tests covering convergence, idempotence, tamper-evidence, and benchmarks
What this is NOT
- Not an LLM
- Not stochastic
- Not dependent on external services
Reproducibility
python -m bench.run_bench > bench/results_scenario_v1.json
python -m bench.make_results_table
Status
Reference-grade core suitable for audited decision systems and for integration under LLM or non-LLM perspectives.