Add m014 Authority Validator Governance mechanism spec #24

Open

CShear wants to merge 2 commits into regen-network:main from CShear:feat/m014-poa-governance
Conversation

@CShear CShear (Contributor) commented Feb 18, 2026

Summary

  • Adds full mechanism spec for m014 (Authority Validator Governance) — transition from PoS to PoA with curated, compensated validator set
  • 3 composition categories: infrastructure builders, trusted ReFi partners, ecological data stewards (min 5 each)
  • 3-factor performance scoring: uptime (0.4), governance participation (0.3), ecosystem contribution (0.3)
  • Validator lifecycle: CANDIDATE → APPROVED → ACTIVE → PROBATION → REMOVED / TERM_EXPIRED
  • Fee-based compensation via M013 validator fund: equal base share + 10% performance bonus pool
  • Includes SPEC.md (13 sections), JSON schemas, reference implementation with self-test, replay datasets with PoS→PoA transition scenarios
  • Updates MECHANISM_CONSUMERS.md with m014 consumer mappings
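The 3-factor scoring described above can be sketched as a small weighted-sum function. This is a hypothetical illustration, not the shipped `m014_score.js`: the weights come from the summary, and the null-factor renormalization is an assumption based on the replay dataset schema discussed later in the review.

```javascript
// Hypothetical sketch of the 3-factor weighted score with
// renormalization over whichever factors are present.
// Weights are taken from the PR summary; null handling is assumed.
const WEIGHTS = {
  uptime: 0.4,
  governance_participation: 0.3,
  ecosystem_contribution: 0.3,
};

function scoreValidator(factors) {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    const value = factors[key];
    if (value === null || value === undefined) continue; // skip missing factors
    weightedSum += weight * value;
    totalWeight += weight;
  }
  // Renormalize so a validator is not penalized for a missing factor.
  return totalWeight > 0 ? weightedSum / totalWeight : 0;
}

// All three factors present: a plain weighted sum.
scoreValidator({ uptime: 0.99, governance_participation: 0.8, ecosystem_contribution: 0.7 });
// → 0.846

// Missing governance factor: renormalized over 0.4 + 0.3 = 0.7.
scoreValidator({ uptime: 1.0, governance_participation: null, ecosystem_contribution: 0.5 });
// → 0.55 / 0.7 ≈ 0.7857
```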

Test plan

  • node mechanisms/m014-authority-validator-governance/reference-impl/m014_score.js — self-test PASS (5 vectors, probation flags verified)
  • npm run verify — PASS
  • m010 regression — PASS
  • Review composition criteria against WG consensus
  • Validate compensation model against M013 fee projections

🤖 Generated with Claude Code

Adds full mechanism spec for m014 following the established pattern:
- SPEC.md with 13 sections covering PoS-to-PoA transition, curated validator
  set with 3 composition categories (infrastructure builders, trusted ReFi
  partners, ecological data stewards), and fee-based compensation
- 3-factor performance scoring: uptime (0.4), governance participation (0.3),
  ecosystem contribution (0.3) with probation flags
- Validator lifecycle: CANDIDATE → APPROVED → ACTIVE → PROBATION → REMOVED
- Compensation model: equal base share + 10% performance bonus pool from M013
- JSON schemas for validator lifecycle, performance score, and KPI metrics
- Reference implementation with self-test (5 test vectors across categories)
- Replay datasets including PoS→PoA transition scenarios
- Updates MECHANISM_CONSUMERS.md with m014 consumer mappings

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist (Contributor)

Summary of Changes

Hello @CShear, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the foundational specification and supporting artifacts for mechanism m014, which outlines a strategic shift in the network's validator model. The change aims to foster a more robust and mission-aligned validator set by moving away from capital-weighted security to a system that rewards active contribution and performance. This new framework defines how validators will be selected, evaluated, compensated, and managed throughout their lifecycle, ensuring network stability and incentivizing ecological impact.

Highlights

  • Transition to Proof of Authority (PoA): Introduced a comprehensive mechanism specification (m014) for transitioning the network's validator governance from Proof of Stake (PoS) to a curated, compensated Proof of Authority (PoA) model, aiming to address security vulnerabilities, misaligned incentives, and value disconnection.
  • Validator Performance Scoring: Defined a 3-factor weighted performance scoring system for validators, based on uptime (0.4), governance participation (0.3), and ecosystem contribution (0.3), with thresholds for review and probation.
  • Curated Validator Set & Lifecycle: Established a validator composition requirement with minimums across three categories (infrastructure builders, trusted ReFi partners, ecological data stewards) and a detailed lifecycle state machine (CANDIDATE, APPROVED, ACTIVE, PROBATION, REMOVED, TERM_EXPIRED).
  • Fee-Based Compensation Model: Implemented a compensation model sourced from the M013 validator fund, comprising an equal base share for all active validators and a 10% performance bonus pool distributed proportionally to performance scores.
  • Reference Implementation and Data: Provided a canonical reference implementation in JavaScript for computing validator scores and Key Performance Indicator (KPI) metrics, along with JSON schemas for outputs and deterministic replay datasets for testing PoS-to-PoA transition scenarios.
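The compensation split in the highlights above can be sketched as a small allocation function. This is an illustrative sketch under the stated 90/10 split; the function and field names are not taken from the repository.

```javascript
// Illustrative sketch of the M013-funded split: 90% of the fund is
// divided equally among active validators, and the remaining 10%
// forms a bonus pool distributed pro rata by performance score.
// Names here are hypothetical, not from the m014 reference impl.
function allocateCompensation(fundBalance, validators) {
  const baseShare = (fundBalance * 0.9) / validators.length;
  const bonusPool = fundBalance * 0.1;
  const totalScore = validators.reduce((sum, v) => sum + v.score, 0);
  return validators.map((v) => ({
    address: v.address,
    base: baseShare,
    bonus: totalScore > 0 ? bonusPool * (v.score / totalScore) : 0,
  }));
}

// Example: a 1000-unit fund split across two active validators.
const payouts = allocateCompensation(1000, [
  { address: "val-a", score: 0.9 },
  { address: "val-b", score: 0.6 },
]);
// payouts[0]: base 450, bonus 60; payouts[1]: base 450, bonus 40
```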
Changelog
  • README.md
    • Added a link to the new m014 Authority Validator Governance mechanism in the main README index.
  • docs/MECHANISM_CONSUMERS.md
    • Appended a new section detailing m014, including its canonical spec, output schemas, replay datasets, and known consumers like AGENT-004 and M013 integration.
  • mechanisms/m014-authority-validator-governance/README.md
    • Added a new README file providing an overview of m014, its outputs, what it doesn't cover in v0, how to reference its components, and details on replay datasets and schemas.
  • mechanisms/m014-authority-validator-governance/SPEC.md
    • Added the core specification document for m014, detailing the problem statement, target actors, signal definition, evidence inputs, scoring function, validator lifecycle, economic linkage, on-chain vs. off-chain boundaries, attack model, governance parameters, security invariants, and open questions.
  • mechanisms/m014-authority-validator-governance/datasets/README.md
    • Added a new README file explaining the purpose and usage of the deterministic replay datasets for m014, including how they are used to compute KPIs without the full MCP.
  • mechanisms/m014-authority-validator-governance/datasets/fixtures/v0_sample.json
    • Added a new JSON fixture file containing a sample of 5 active authority validators with varied performance profiles for m014 KPI computation.
  • mechanisms/m014-authority-validator-governance/datasets/fixtures/v0_transition_sample.json
    • Added a new JSON fixture file simulating a PoS-to-PoA transition snapshot, featuring validators in various lifecycle states (active, probation, candidate, removed, term_expired).
  • mechanisms/m014-authority-validator-governance/datasets/schema.json
    • Added a new JSON schema defining the structure for m014 replay datasets, including validator details and performance factors.
  • mechanisms/m014-authority-validator-governance/reference-impl/README.md
    • Added a new README file for the m014 reference implementation, outlining the inputs and outputs for score and KPI computations, and instructions for self-testing.
  • mechanisms/m014-authority-validator-governance/reference-impl/m014_kpi.js
    • Added a new JavaScript file containing the computeM014KPI function, which calculates aggregate KPI metrics for the authority validator set based on provided validator data.
  • mechanisms/m014-authority-validator-governance/reference-impl/m014_score.js
    • Added a new JavaScript file containing the computeM014Score function, which calculates a composite performance score for individual validators, along with a self-test harness.
  • mechanisms/m014-authority-validator-governance/reference-impl/test_vectors/vector_v0_sample.expected.json
    • Added a new JSON file containing the expected output for the m014 score self-test, based on the sample input.
  • mechanisms/m014-authority-validator-governance/reference-impl/test_vectors/vector_v0_sample.input.json
    • Added a new JSON file containing sample input data for the m014 score self-test, featuring various validator performance profiles.
  • mechanisms/m014-authority-validator-governance/schemas/README.md
    • Added a new README file describing the canonical JSON schemas for m014 outputs, including validator, performance, and KPI schemas.
  • mechanisms/m014-authority-validator-governance/schemas/m014_kpi.schema.json
    • Added a new JSON schema defining the structure for m014 KPI output blocks, including validator counts, performance averages, and compensation details.
  • mechanisms/m014-authority-validator-governance/schemas/m014_performance.schema.json
    • Added a new JSON schema defining the structure for m014 performance score outputs, including composite score, confidence, factor breakdown, and flags.
  • mechanisms/m014-authority-validator-governance/schemas/m014_validator.schema.json
    • Added a new JSON schema defining the structure for an individual m014 authority validator item, including address, category, status, and term details.
  • package.json
    • Updated the package configuration to specify 'type': 'module', enabling ES module syntax in JavaScript files.
Activity
  • The pull request author, CShear, has provided a detailed summary and test plan in the pull request description.
  • The author has confirmed that the m014_score.js self-test passes with 5 vectors and verified probation flags.
  • The author has confirmed that npm run verify passes.
  • The author has confirmed that m010 regression passes.
  • The author has noted remaining items for review: composition criteria against WG consensus and compensation model validation against M013 fee projections.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces the "m014 Authority Validator Governance" mechanism, with comprehensive changes including specification, JSON schemas, replay datasets, and a reference implementation. The review identified a critical security concern in the reference implementation regarding an unconditionally called selfTest() function, which could lead to Denial of Service. Furthermore, inconsistencies were found in the specification's compensation formulas, requiring explicit definition to prevent ambiguity, and the validator schema's structure, which needs alignment with project data standards. A bug in KPI calculation and redundancy in the scoring script were also noted.

Comment on `m014_kpi.js`:

```javascript
validators_by_category.ecological_data_stewards >= 5;

// Byzantine tolerance: active_count > 3f + 1
const active_count = validators_by_status.active;
```
Severity: high

The active_count is calculated using only validators with status: "active", excluding those on probation. This is inconsistent with activeVals, which correctly includes validators on probation for performance scoring. Validators on probation are still part of the active set and should be included in calculations for Byzantine tolerance and compensation. This bug can lead to incorrect KPI reporting and compensation logic; active_count should be based on the length of activeVals.

Suggested change:

```diff
-const active_count = validators_by_status.active;
+const active_count = activeVals.length;
```

Comment on `m014_score.js`:

```javascript
if (fail > 0) process.exit(1);
}

selfTest();
```
Severity: medium (security)

The selfTest() function is called unconditionally at the top level of the module. This is a security concern: it can lead to denial of service (DoS) if test dependencies are missing or tests fail, crashing the application or terminating the Node.js process. It also introduces unexpected behavior, extra output, and unnecessary overhead on every module import.

Suggested change:

```diff
-selfTest();
+// selfTest(); // Remove or guard this call
```

Comment on lines +71 to +73:

```
base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count / period
performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores) / period
total_compensation = base_compensation + performance_bonus
```
Severity: medium

The compensation allocation formulas include a / period term at the end. This is ambiguous and inconsistent with the reference implementation in m014_kpi.js. If validator_fund_balance represents the total funds available for the period, dividing by period again is incorrect. Suggest removing / period for clarity and to align with the implementation.

Suggested change:

```diff
-base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count / period
-performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores) / period
+base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count
+performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores)
 total_compensation = base_compensation + performance_bonus
```
References
  1. When specifying transition logic between different mechanisms (e.g., multipliers), explicitly define the function (e.g., max()) and the state-based conditions (phase-gating) that govern the transition to prevent implementation ambiguity.

Comment on lines +64 to +71:

```javascript
const performance_score = totalWeight > 0
  ? Number((weightedSum / totalWeight * 1.0).toFixed(4))
  : 0.0;

// Normalize to original weight basis when all factors present
const normalizedScore = availableCount === FACTOR_KEYS.length
  ? Number(weightedSum.toFixed(4))
  : performance_score;
```
Severity: medium

The logic to calculate performance_score and then normalizedScore is redundant and confusing. When all factors are present, totalWeight is 1, and performance_score is already the same as weightedSum. The conditional assignment to normalizedScore is unnecessary. This can be simplified for better readability and maintainability.

Suggested change:

```javascript
const performance_score = totalWeight > 0
  ? weightedSum / totalWeight
  : 0.0;

// When all factors are present, this is the direct weighted sum.
// When some are missing, it's re-normalized based on available weights.
const normalizedScore = Number(performance_score.toFixed(4));
```

Comment on lines +55 to +78:

```json
"performance": {
  "type": "object",
  "additionalProperties": false,
  "description": "Most recent performance evaluation",
  "properties": {
    "uptime": {
      "type": "number",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "blocks_signed / blocks_expected"
    },
    "governance_participation": {
      "type": "number",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "votes_cast / proposals_available"
    },
    "ecosystem_contribution": {
      "type": "number",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "AGENT-004 assessed contribution score"
    }
  }
```
Severity: medium

This schema defines a validator structure that is inconsistent with datasets/schema.json and the fixture data.

  1. It places performance factors under an optional performance object, whereas other schemas use a factors object at the top level.
  2. It defines the factor values (e.g., uptime) as type: "number", not allowing null. This contradicts the replay dataset schema and the reference implementation, which correctly handle null values for missing factors.

Aligning these schemas would improve consistency and reduce confusion.

References
  1. When defining data structures or type definitions, ensure they are aligned with the project's central data standards repository (regen-data-standards) to maintain coherence.

…tion formula, schema nulls

- KPI active_count now includes probation validators (they're in active set)
- Guard selfTest() with process.argv check to prevent DoS on module import
- Remove ambiguous `/ period` from compensation formula in SPEC.md
- Allow null for performance factor types in validator schema
- Simplify redundant score normalization logic

Fixes Gemini review on PR regen-network#24.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@glandua glandua (Contributor) commented Mar 18, 2026

Review: M014 Authority Validator Governance (PoA Transition)

Thorough and well-structured mechanism spec. The overall architecture is sound — the 3-category composition model, lifecycle state machine, fee-based compensation, and multi-factor scoring all make sense for Regen's specific situation.


1. Technical Accuracy — Cosmos SDK x/poa

The single most important issue: the spec references a custom x/authority module (Section 8) but should reference Cosmos SDK's native x/poa module shipping in SDK v0.54 (Q2 2026).

The Cosmos Labs x/poa module provides:

  • Curated validator set managed by a designated authority (governance or multisig)
  • SetPower / MsgUpdateValidators to add/remove validators atomically
  • Integration with CometBFT's validator set updates
  • Compatibility with existing x/staking for transition

Recommendations:

  1. Rename x/authority → x/poa throughout SPEC.md section 8
  2. Note that base layer uses SDK's native x/poa, with application-specific extensions (performance tracking, category metadata, term management) layered as a companion module
  3. Reference SDK v0.54 timeline explicitly — aligns with Q3-Q4 2026 pilot target

Validator lifecycle states are reasonable but missing an emergency path: no direct ACTIVE → REMOVED transition for key compromise or malicious behavior. Currently requires going through PROBATION (30-day wait), which is too slow for security incidents.
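One way to encode the requested emergency path is an explicit transition table with a guarded direct removal. The sketch below is hypothetical: state names come from the spec, but the emergency flag is an assumed extension, not part of the current SPEC.md.

```javascript
// Hypothetical transition table adding a direct ACTIVE → REMOVED
// emergency path (assumed extension; not in the current spec).
const TRANSITIONS = {
  CANDIDATE: ["APPROVED", "REMOVED"],
  APPROVED: ["ACTIVE"],
  ACTIVE: ["PROBATION", "TERM_EXPIRED", "REMOVED"],
  PROBATION: ["ACTIVE", "REMOVED"],
  REMOVED: [],
  TERM_EXPIRED: [],
};

function canTransition(from, to, { emergency = false } = {}) {
  const allowed = TRANSITIONS[from] ?? [];
  if (!allowed.includes(to)) return false;
  // Direct removal from ACTIVE is reserved for emergencies
  // (key compromise, malicious behavior); the normal path is PROBATION.
  if (from === "ACTIVE" && to === "REMOVED") return emergency;
  return true;
}

canTransition("ACTIVE", "REMOVED");                      // false
canTransition("ACTIVE", "REMOVED", { emergency: true }); // true
canTransition("PROBATION", "ACTIVE");                    // true (rehabilitation)
```

Gating the emergency path behind an explicit flag keeps the slow PROBATION route as the default while documenting who can bypass it.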

The bonded_minimum_stake guard on APPROVED → ACTIVE is underspecified. In a PoA system where staking is being disabled, clarify what "bonded" means post-PoA and whether it's slashable.

Byzantine tolerance math is correct (15 validators: f=4, tolerates 4 Byzantine nodes). However, the reference impl check active_count >= 3 * max_byzantine_f + 1 is tautologically true given how max_byzantine_f is derived. Should validate against a minimum required tolerance level instead.
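The tautology is easy to see in code. A minimal sketch of the current check next to the suggested fix follows; MIN_BYZANTINE_F is an assumed governance parameter, not a value from the spec.

```javascript
// The implemented check derives f from the active count, which makes
// it true by construction: with f = floor((n - 1) / 3), the inequality
// n >= 3f + 1 holds for every positive n.
function maxByzantineF(activeCount) {
  return Math.floor((activeCount - 1) / 3);
}

// Tautological: passes for any positive active count.
function tautologicalCheck(activeCount) {
  return activeCount >= 3 * maxByzantineF(activeCount) + 1;
}

// Suggested replacement: validate against a required minimum tolerance.
// MIN_BYZANTINE_F is an assumed governance parameter.
const MIN_BYZANTINE_F = 4;
function meetsMinimumTolerance(activeCount) {
  return maxByzantineF(activeCount) >= MIN_BYZANTINE_F;
}

tautologicalCheck(2);      // true — even a 2-validator set "passes"
meetsMinimumTolerance(15); // true  (f = 4)
meetsMinimumTolerance(12); // false (f = 3)
```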

2. Completeness

  • Composition model (15-21, min 5 per category): Well-defined and sensible.
  • Gap — category assignment governance: Who decides which category a validator belongs to? How are disputes resolved?
  • Gap — compensation frequency: "Quarterly review" but distribution cadence unspecified (monthly? quarterly? per-epoch?).
  • Gap — probation compensation: "Reduce compensation" but by how much? 50%? 0%?
  • Term structure: 12-month terms with streamlined re-application for incumbents — good design.

3. Consistency with M013/M012

  • M013 → M014: Validator Fund as sole compensation source, no inflationary fallback. Correctly specified. Note dependency on OQ-M013-1 (Model A: 40% validator vs Model B: 15-25%) — directly determines compensation adequacy.
  • M012 dependency: M012's regrowth algorithm depends on module-level states (INACTIVE/TRANSITION/ACTIVE/EQUILIBRIUM) that are distinct from M014's validator lifecycle states but are NOT defined in M014. Add a "Module State" section defining these 4 states and transitions.

4. Reference Implementation

  • m014_score.js: Correct. Renormalization for missing factors works properly. Test vectors verify (the epsilon vector, which has a null governance_participation, scores 0.8263).
  • Issue — self-test bundled with library code: Top-level Node.js imports execute unconditionally. Move self-test to separate file.
  • m014_kpi.js: Probation validators included in active set for category counting — design decision worth documenting explicitly.
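A common fix for the bundled self-test is an entry-point guard. The sketch below assumes ES modules (the changelog notes package.json sets "type": "module"); it is illustrative, not the exact guard the follow-up commit added.

```javascript
// Sketch: run selfTest() only when this file is the Node.js entry
// point, never when it is imported as a library module.
import { pathToFileURL } from "node:url";

function selfTest() {
  // ... run test vectors; exit non-zero on failure ...
  console.log("self-test PASS");
}

// Compare this module's URL against the script Node was invoked with.
const isMain =
  process.argv[1] !== undefined &&
  import.meta.url === pathToFileURL(process.argv[1]).href;

if (isMain) selfTest();
```

With this guard, `node m014_score.js` still prints the self-test result, while `import { computeM014Score } from "./m014_score.js"` has no side effects.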

5. Open Questions

All 5 well-framed. Key additions:

  • OQ-M014-3 (seed set): Add: "Can the pilot launch with fewer than 15 validators during Phase 1 (below composition minimums) with an explicit ramp-up plan?"
  • OQ-M014-5 (unbonding): Expand to cover: unbonding period duration, grace period, accrued rewards during unbonding, migration path to M015 stability tiers, and notification timeline for delegators.

6. Additional Issues

  • Missing: emergency procedures. Who has authority to act if 2-3 validators go down simultaneously? What's the SLA?
  • Missing: slashing semantics. Explicitly state whether PoA replaces slashing with removal + reputation loss.
  • Fixture data uses real organization names (RND-Core, ReFiDAO-Node, Toucan-Validator, Kolektivo-Expired) in v0_transition_sample.json. These imply agreement to be validators. Use generic names or add a disclaimer.
  • AGENT-004 dependency: Performance bonus (weight 0.3 for ecosystem_contribution) depends on AGENT-004's scoring methodology, which is unspecified. Flag as v1 blocker.

Priority Fixes Before Merge

  1. Replace x/authority with x/poa (Cosmos SDK v0.54) — most significant technical gap
  2. Add module-level states (INACTIVE/TRANSITION/ACTIVE/EQUILIBRIUM) that M012 depends on
  3. Add direct ACTIVE → REMOVED emergency path in state machine
  4. Rename organization-specific monikers in fixtures to generic names
  5. Clarify bonded_minimum_stake semantics in PoA context

Verdict

Strong v0 spec that faithfully extracts and formalizes M014. Reference implementation is correct, schemas well-structured, transition fixtures thoughtful. The x/authority → x/poa rename is the most impactful change — it grounds the spec in real tooling that Cosmos Labs is shipping.


Review generated with Claude Code, grounded in KOI, Cosmos Labs documentation, and Regen forum community discussions.
