
Conversation

Contributor

@moshababo moshababo commented Oct 12, 2025

Implement BDN signature aggregation scheme

Closes #29

This PR implements the BDN aggregation scheme for BLS signatures, fixing certificate verification failures in the F3 lightclient.

Due to the lack of a Rust implementation of the Blake2X XOF (specifically the 32-bit Blake2Xs variant), this PR uses the
pending PR RustCrypto/hashes#704.

Changes

  • Implemented BDN coefficient computation using the Blake2Xs XOF
  • Added coefficient weighting to both signature and public-key aggregation (see the formula sketch after this list)
  • Updated the Verifier::verify_aggregate() API to accept the full power table and signer indices (BDN
    requires all keys for correct coefficient computation)
  • Removed the temporary workaround that was silencing verification errors
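
For orientation, the weighting the scheme applies can be summarized as follows (our notation, distilled from the code and review discussion below; S is the set of signer indices and each coefficient c_i is a 128-bit value read from the XOF over the full key list L = pk_1 || pk_2 || ... || pk_n):

    c_i        = \mathrm{Blake2Xs}(L)_i
    apk        = \sum_{i \in S} (c_i + 1) \cdot pk_i
    \sigma_agg = \sum_{i \in S} (c_i + 1) \cdot \sigma_i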

Testing

Verified correctness using the certificates_validation example:

  • calibrationnet (testnet): ~160ms per certificate (4/20 signers)
  • filecoin (mainnet): ~8s per certificate (900-1000/1560 signers)

TODO

  • Optimize verification performance
  • Add deterministic unit tests
  • Add test coverage for signature aggregation (although not used by the lightclient)

Summary by CodeRabbit

  • New Features

    • Deterministic per-key coefficients for signature aggregation and index-based selective aggregation via a power table.
  • Refactor

    • Aggregation now precomputes per-key terms and uses signer indices for verification and aggregation flows.
  • Documentation

    • Verifier docs updated to describe power_table and signer_indices usage.
  • Tests

    • Tests and mocks updated to match the new verification and aggregation interfaces.

@moshababo moshababo requested a review from a team as a code owner October 12, 2025 20:14
@moshababo moshababo requested review from akaladarshi and elmattic and removed request for a team October 12, 2025 20:14

coderabbitai bot commented Oct 12, 2025

Walkthrough

Adds a BDN aggregation implementation: introduces Blake2x dependency, computes per-key coefficients and terms during BDNAggregation construction, changes signature and public-key aggregation to use indices, updates verifier API and errors, adapts callers and tests, and adds three config dictionary entries.

Changes

Cohort / File(s) / Summary

  • Workspace manifest (Cargo.toml): Adds blake2 workspace dependency from Git (branch blake2x-pr, feature blake2x); tightens bls-signatures to { version = "0.15", default-features = false, features = ["blst"] }.
  • blssig crate manifest (blssig/Cargo.toml): Adds blake2.workspace = true; removes bls12_381.workspace = true; adds blstrs = "0.7" and group = "0.13".
  • BDN implementation (blssig/src/bdn/mod.rs): Implements BDN: computes per-key coefficients (Blake2xs) and precomputed terms, stores coefficients and terms in BDNAggregation, changes new, aggregate_sigs, and aggregate_pub_keys to use coefficients/indices, adds calc_coefficients and calc_terms, and expands errors/validation.
  • Verifier API & errors (blssig/src/verifier/mod.rs, gpbft/src/api.rs): Adds BLSError variants (EmptySigners, InvalidScalar, SignerIndexOutOfRange); changes Verifier::verify_aggregate to accept power_table: &[PubKey] and signer_indices: &[u64] (sketched below); updates verify control flow to use indices and aggregate by precomputed terms.
  • Call-site integration (certs/src/lib.rs): Adapts the verification call to the new verifier signature by building pub_keys from the power table and passing signer indices; updates tests/mocks accordingly and propagates errors via Result.
  • Config dictionary (.config/rust-f3.dic): Adds entries: coef_i, sig_i, pub_key_i.
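
A minimal sketch of the updated verifier API referenced above; the real trait lives in gpbft/src/api.rs, and the error type is shown as BLSError purely for illustration (the trait's actual naming and error plumbing may differ):

pub trait Verifier {
    /// Verifies `agg_sig` over `payload` for the signers selected by
    /// `signer_indices` out of the full `power_table`. BDN needs every key
    /// in the table to derive coefficients, not only the signers' keys.
    fn verify_aggregate(
        &self,
        payload: &[u8],
        agg_sig: &[u8],
        power_table: &[PubKey],
        signer_indices: &[u64],
    ) -> Result<(), BLSError>;
}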

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller as Caller
  participant Certs as certs::verify_signature
  participant Ver as blssig::BLSVerifier
  participant BDN as blssig::bdn::BDNAggregation

  Caller->>Certs: verify_signature(payload, agg_sig, power_table, signer_indices)
  Certs->>Ver: verify_aggregate(payload, agg_sig, power_table, signer_indices)
  activate Ver
  Ver->>BDN: BDNAggregation::new(power_table)
  activate BDN
  Note right of BDN #D1FFD6: Compute coefficients via Blake2xs\nCompute terms = (pk*coef)+pk
  BDN-->>Ver: BDNAggregation{pub_keys, coefficients, terms}
  Ver->>BDN: aggregate_pub_keys(signer_indices)
  BDN-->>Ver: aggregated_pub_key
  Ver->>BDN: aggregate_sigs(sigs_from_agg)
  BDN-->>Ver: aggregated_signature
  Ver-->>Certs: Ok / Err(BLSError)
  deactivate Ver

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Potential focal points for review:

  • Correctness of Blake2x usage and deterministic coefficient derivation in calc_coefficients.
  • Scalar conversion and error handling (InvalidScalar) when mapping XOF output to Scalars.
  • Index bounds checks and signer index error surfaces in aggregate_pub_keys.
  • Dependency changes: compatibility between blstrs, group, and workspace blake2.

Suggested reviewers

  • elmattic
  • sudo-shashank

Poem

A nibble of Blake in a moonlit glade,
Coefficients counted where public keys played.
Terms hop together, signatures blend,
Indices guide where the proofs will send.
Rabbit thumps once — aggregation snugly made. 🐇✨

Pre-merge checks

✅ Passed checks (4 passed)
Check name / Status / Explanation

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title "Implement BDN signature aggregation scheme" accurately and specifically describes the main change in the pull request. The entire changeset is focused on implementing the Boneh–Drijvers–Neven (BDN) signature aggregation scheme, including adding BDN coefficient computation using Blake2Xs, updating aggregation logic with coefficient weighting, and modifying the Verifier API to support the new scheme. The title is concise, clear, and appropriately specific for a developer scanning the history.
  • Out of Scope Changes Check (✅ Passed): All code changes are directly related to implementing the BDN signature aggregation scheme. The Cargo.toml and dependency updates add the necessary components (blake2 with Blake2Xs support, blstrs, group). The core implementation in blssig/src/bdn/mod.rs introduces coefficient calculation and precomputed terms with the required error handling. The verifier, certificates, and API trait changes are all necessary adaptations to the new BDN-based verification signature and behavior. The dictionary additions (coef_i, sig_i, pub_key_i) are incidental support for the new terminology. No extraneous or unrelated modifications are present.
  • Docstring Coverage (✅ Passed): Docstring coverage is 80.00%, which meets the required 80.00% threshold.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f2f103f and 5df78a7.

📒 Files selected for processing (3)
  • Cargo.toml (1 hunks)
  • blssig/Cargo.toml (1 hunks)
  • blssig/src/bdn/mod.rs (3 hunks)
🔇 Additional comments (6)
blssig/Cargo.toml (1)

13-16: Dependency additions align with BDN implementation needs.

The new dependencies support the BDN aggregation scheme:

  • blake2 for Blake2xs XOF (coefficient derivation)
  • blstrs for BLS12-381 scalar/group operations
  • group for Group trait
Cargo.toml (1)

20-22: LGTM: BLST backend configuration is appropriate.

Disabling default features and explicitly enabling blst is a good practice for controlled dependency surface.

blssig/src/bdn/mod.rs (4)

4-10: Clear documentation with authoritative references.

The module docs effectively explain the BDN scheme's purpose and cite relevant papers.


26-40: Constructor correctly precomputes BDN coefficients and terms.

The eager computation of coefficients and terms in the constructor is appropriate for the BDN scheme, avoiding repeated computation during aggregation.
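
A minimal sketch of that constructor flow, assuming the helper and field names shown elsewhere in this review (calc_coefficients, calc_terms, EmptySigners); the PR's exact signature may differ:

use bls_signatures::PublicKey;

impl BDNAggregation {
    pub fn new(pub_keys: Vec<PublicKey>) -> Result<Self, BLSError> {
        if pub_keys.is_empty() {
            return Err(BLSError::EmptySigners);
        }
        // One deterministic coefficient per key, derived from the whole
        // key list via the Blake2Xs XOF.
        let coefficients = Self::calc_coefficients(&pub_keys)?;
        // Precompute terms_i = (coef_i + 1) * pub_key_i once, so later
        // aggregation over signer indices is just a sum of stored points.
        let terms = Self::calc_terms(&pub_keys, &coefficients);
        Ok(Self { pub_keys, coefficients, terms })
    }
}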


42-65: Signature aggregation correctly implements BDN formula.

The implementation correctly computes sum((coef_i + 1) * sig_i) using projective coordinates for efficiency, with appropriate length validation.
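
A minimal sketch of that weighting, not the PR's exact aggregate_sigs code; it assumes min-pk BLS (signatures in G2) and blstrs types, and the LengthMismatch variant name is hypothetical:

use blstrs::{G2Projective, Scalar};
use group::Group;

fn weighted_sig_sum(sigs: &[G2Projective], coefficients: &[Scalar]) -> Result<G2Projective, BLSError> {
    if sigs.len() != coefficients.len() {
        return Err(BLSError::LengthMismatch); // hypothetical variant
    }
    let mut agg = G2Projective::identity();
    for (sig, coef) in sigs.iter().zip(coefficients.iter()) {
        // (coef_i + 1) * sig_i, computed as coef_i * sig_i + sig_i so the
        // whole loop stays in projective coordinates.
        agg += *sig * coef + sig;
    }
    Ok(agg)
}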


121-130: Term precomputation correctly implements (coef_i + 1) * pub_key_i.

The implementation correctly precomputes weighted public key terms for efficient index-based aggregation.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (6)
blssig/src/bdn/mod.rs (2)

31-39: Precompute terms once; OK. Consider storing projective terms for perf.

Logic is correct. For speed (1k+ keys), store G1Projective terms to avoid repeated decode per aggregation.


120-129: Minor: preallocate and store projective for speed.

Allocate with capacity and consider storing G1Projective to avoid re-decode in aggregate.

-    pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
-        let mut terms = vec![];
+    pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+        let mut terms = Vec::with_capacity(pub_keys.len());
         for (i, pub_key) in pub_keys.iter().enumerate() {
             let pub_key_point: G1Projective = (*pub_key).into();
             let pub_c = pub_key_point * coefficients[i];
             let term = pub_c + pub_key_point;
             terms.push(term.into());
         }
         terms
     }
certs/src/lib.rs (1)

296-305: Correct: pass full power table and signer indices.

This matches the updated Verifier API and BDN requirements. Minor: this clones all keys; unavoidable given &[PubKey] API.
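
A minimal sketch of that call-site shape (field names such as entries and pub_key are assumptions, not certs/src/lib.rs verbatim):

let table_keys: Vec<PubKey> = power_table
    .entries
    .iter()
    .map(|entry| entry.pub_key.clone())
    .collect();
verifier.verify_aggregate(&payload, &cert.signature, &table_keys, &signer_indices)?;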

blssig/src/verifier/mod.rs (3)

36-39: Pre-validate signer_indices and handle u64→usize conversion to avoid panics and wasted work

signer_indices is &[u64] but indices for slices/Vecs are usize; very large u64s can overflow on cast. Also, early range checks avoid building BDNAggregation when indices are out of range and return precise errors.

Apply near Line 180:

       if signer_indices.is_empty() {
           return Err(BLSError::EmptySigners);
       }
+
+      // Early validation of indices to avoid overflow/panic and wasted work
+      let n = power_table.len();
+      for &idx in signer_indices {
+          // Guard u64 -> usize conversion on 32-bit targets and ensure in-bounds
+          let idx_usize: usize = idx
+              .try_into()
+              .map_err(|_| BLSError::SignerIndexOutOfRange(usize::MAX))?;
+          if idx_usize >= n {
+              return Err(BLSError::SignerIndexOutOfRange(idx_usize));
+          }
+      }

Optional: if BDNAggregation::aggregate_pub_keys already performs these checks safely, you can keep one source of truth but still consider the early check to short-circuit and return the intended BLSError variant faster.

Also applies to: 171-179


182-188: Minor: pre-allocate capacity for typed_pub_keys

Small perf win: avoid reallocations when mirroring power_table.

-        let mut typed_pub_keys = vec![];
+        let mut typed_pub_keys = Vec::with_capacity(power_table.len());

191-193: Performance: consider caching BDNAggregation per power table

BDN setup over the full power table is likely the hot path. Cache precomputed state keyed by a stable hash of power_table (e.g., concatenated bytes or a Merkle root) to amortize cost across certificates within the same epoch/validator set. Keep using the point cache for deserialization; this would add another layer for BDN precomputation.
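
A minimal sketch of such a cache, not part of the PR; it assumes BDNAggregation::new(Vec<PublicKey>) -> Result<_, BLSError> as sketched earlier and fingerprints the table with Blake2s-256 over the compressed keys:

use std::{collections::HashMap, sync::Arc};
use blake2::{Blake2s256, Digest};
use bls_signatures::PublicKey;

struct BdnCache {
    entries: HashMap<[u8; 32], Arc<BDNAggregation>>,
}

impl BdnCache {
    fn get_or_build(&mut self, power_table: &[PublicKey]) -> Result<Arc<BDNAggregation>, BLSError> {
        // Fingerprint the power table; any stable digest of the key list works.
        let mut hasher = Blake2s256::new();
        for pk in power_table {
            hasher.update(pk.as_bytes());
        }
        let key: [u8; 32] = hasher.finalize().into();
        if let Some(agg) = self.entries.get(&key) {
            return Ok(Arc::clone(agg));
        }
        let agg = Arc::new(BDNAggregation::new(power_table.to_vec())?);
        self.entries.insert(key, Arc::clone(&agg));
        Ok(agg)
    }
}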

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4520e4c and f2f103f.

📒 Files selected for processing (7)
  • .config/rust-f3.dic (1 hunks)
  • Cargo.toml (1 hunks)
  • blssig/Cargo.toml (1 hunks)
  • blssig/src/bdn/mod.rs (3 hunks)
  • blssig/src/verifier/mod.rs (4 hunks)
  • certs/src/lib.rs (2 hunks)
  • gpbft/src/api.rs (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
gpbft/src/api.rs (3)
blssig/src/verifier/mod.rs (1)
  • verify_aggregate (167-194)
certs/src/lib.rs (1)
  • verify_aggregate (712-720)
lightclient/src/lib.rs (1)
  • power_table (39-42)
blssig/src/verifier/mod.rs (1)
lightclient/src/lib.rs (1)
  • power_table (39-42)
certs/src/lib.rs (1)
lightclient/src/lib.rs (1)
  • power_table (39-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: All lint checks
🔇 Additional comments (6)
.config/rust-f3.dic (1)

45-47: LGTM!

The addition of coef_i, sig_i, and pub_key_i to the dictionary is appropriate for the BDN aggregation implementation, which introduces per-key coefficients and indexed signatures/public keys.

blssig/Cargo.toml (1)

13-13: Workspace dep OK; ensure features come through and pin upstream rev at root.

Enabling blake2 via workspace is fine. Since the root sets features, nothing else needed here. No action in this file.

blssig/src/bdn/mod.rs (1)

41-59: Weighted signature aggregation may not interop; verify expected input.

If upstream producers aggregate unweighted signatures (common BDN flow), scaling sigs by (c_i+1) will produce a different aggregate. Keep this method if you also control aggregation, but ensure verify paths assume unweighted sigs (as implemented elsewhere).

gpbft/src/api.rs (1)

36-46: Docs and signature update LGTM.

Clearer API: requires power_table plus signer_indices for BDN. Good.

certs/src/lib.rs (1)

712-718: MockVerifier updated correctly.

Signature matches trait; tests should compile.

blssig/src/verifier/mod.rs (1)

20-22: Good addition: explicit EmptySigners error

Clearer failure mode for empty signer sets. LGTM.

In .config/rust-f3.dic:

coef_i
sig_i
pub_key_i
+

⚠️ Potential issue | 🟡 Minor

Remove the stray + character.

The + character does not belong in a spell-check dictionary file. Dictionary files typically contain one term per line without special characters.

Apply this diff to remove it:

-+
📝 Committable suggestion

Suggested change
-+
# (line 48 removed; no content remains here)
🤖 Prompt for AI Agents
In .config/rust-f3.dic around line 48, remove the stray "+" character that was
accidentally added; open the file, delete the lone "+" on line 48 so the
dictionary contains only valid entries (one term per line), save the file, and
ensure no other stray punctuation remains.

Comment on lines +66 to 83 (blssig/src/bdn/mod.rs)

     /// Aggregates public keys indices using BDN aggregation with coefficients.
     /// Computes: sum((coef_i + 1) * pub_key_i)
     pub fn aggregate_pub_keys(&self, indices: &[u64]) -> Result<PublicKey, BLSError> {
         // Sum of pre-computed terms (which are already (coef_i + 1) * pub_key_i)
         let mut agg_point = G1Projective::identity();
-        for pub_key in &self.pub_keys {
-            let pub_key_point: G1Projective = (*pub_key).into();
-            agg_point += pub_key_point;
+        for &idx in indices {
+            let idx = idx as usize;
+            if idx >= self.terms.len() {
+                return Err(BLSError::SignerIndexOutOfRange(idx));
+            }
+            let term_point: G1Projective = self.terms[idx].into();
+            agg_point += term_point;
         }

         // Convert back to PublicKey
         let agg_pub_key: PublicKey = agg_point.into();
         Ok(agg_pub_key)
     }

⚠️ Potential issue | 🟡 Minor

Avoid lossy u64→usize cast; check bounds in u64 first.

On 32‑bit targets a large u64 could truncate. Compare in u64, then cast.

Apply:

-        for &idx in indices {
-            let idx = idx as usize;
-            if idx >= self.terms.len() {
-                return Err(BLSError::SignerIndexOutOfRange(idx));
-            }
+        for &idx_u64 in indices {
+            if idx_u64 >= self.terms.len() as u64 {
+                return Err(BLSError::SignerIndexOutOfRange(idx_u64 as usize));
+            }
+            let idx = idx_u64 as usize;
             let term_point: G1Projective = self.terms[idx].into();
             agg_point += term_point;
         }
📝 Committable suggestion

Suggested change
    pub fn aggregate_pub_keys(&self, indices: &[u64]) -> Result<PublicKey, BLSError> {
        // Sum of pre-computed terms (which are already (coef_i + 1) * pub_key_i)
        let mut agg_point = G1Projective::identity();
        for &idx_u64 in indices {
            if idx_u64 >= self.terms.len() as u64 {
                return Err(BLSError::SignerIndexOutOfRange(idx_u64 as usize));
            }
            let idx = idx_u64 as usize;
            let term_point: G1Projective = self.terms[idx].into();
            agg_point += term_point;
        }

        // Convert back to PublicKey
        let agg_pub_key: PublicKey = agg_point.into();
        Ok(agg_pub_key)
    }
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 66 to 83, avoid the lossy u64→usize cast
by validating the u64 index against the terms length first: compute let len =
self.terms.len() as u64, check if idx >= len and return the
SignerIndexOutOfRange error if so, then cast idx to usize (let idx_usize = idx
as usize) and use idx_usize for indexing and error construction; this ensures no
truncation on 32‑bit targets while preserving the existing error semantics.

Comment on lines 85 to 118 (blssig/src/bdn/mod.rs)
    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
        let mut hasher = Blake2xs::new(0xFFFF);

        // Hash all public keys
        for pub_key in pub_keys {
            let bytes = pub_key.as_bytes();
            hasher.update(&bytes);
        }

        // Read 16 bytes per public key
        let mut reader = hasher.finalize_xof();
        let mut output = vec![0u8; pub_keys.len() * 16];
        reader.read(&mut output);

        // Convert every consecutive 16 bytes chunk to a scalar
        let mut coefficients = Vec::with_capacity(pub_keys.len());
        for i in 0..pub_keys.len() {
            let chunk = &output[i * 16..(i + 1) * 16];

            // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
            let mut bytes_32 = [0u8; 32];
            bytes_32[..16].copy_from_slice(chunk);

            // BLS12-381 scalars expects little-endian byte representation
            let scalar = Scalar::from_bytes(&bytes_32);
            if scalar.is_some().into() {
                coefficients.push(scalar.unwrap());
            } else {
                return Err(BLSError::InvalidScalar);
            }
        }

        Ok(coefficients)
    }

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Coefficient derivation: tighten XOF usage, domain separation, and scalar mapping.

  • Use exact out_len, not 0xFFFF.
  • Add a domain separation tag.
  • Consider including the message per BDN variant (compatibility dependent).
  • Derive scalars uniformly via from_bytes_wide with 64 bytes per coeff (128‑bit is OK but sub‑optimal).

Apply:

-    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
-        let mut hasher = Blake2xs::new(0xFFFF);
+    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
+        // 64 bytes per coefficient; uniform reduction via from_bytes_wide
+        let out_len = pub_keys.len() * 64;
+        let mut hasher = Blake2xs::new(out_len);
 
-        // Hash all public keys
+        // Domain separation + all public keys (compressed)
+        hasher.update(b"F3-BDN-COEFF-V1");
         for pub_key in pub_keys {
             let bytes = pub_key.as_bytes();
             hasher.update(&bytes);
         }
 
-        // Read 16 bytes per public key
         let mut reader = hasher.finalize_xof();
-        let mut output = vec![0u8; pub_keys.len() * 16];
+        let mut output = vec![0u8; out_len];
         reader.read(&mut output);
 
-        // Convert every consecutive 16 bytes chunk to a scalar
         let mut coefficients = Vec::with_capacity(pub_keys.len());
-        for i in 0..pub_keys.len() {
-            let chunk = &output[i * 16..(i + 1) * 16];
-
-            // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
-            let mut bytes_32 = [0u8; 32];
-            bytes_32[..16].copy_from_slice(chunk);
-
-            // BLS12-381 scalars expects little-endian byte representation
-            let scalar = Scalar::from_bytes(&bytes_32);
-            if scalar.is_some().into() {
-                coefficients.push(scalar.unwrap());
-            } else {
-                return Err(BLSError::InvalidScalar);
-            }
+        for i in 0..pub_keys.len() {
+            let chunk = &output[i * 64..(i + 1) * 64];
+            let mut wide = [0u8; 64];
+            wide.copy_from_slice(chunk);
+            let scalar = Scalar::from_bytes_wide(&wide); // uniform mod r
+            coefficients.push(scalar);
         }
 
         Ok(coefficients)
     }

Verification needed: does go-f3 include message in the coefficient hash (L vs L||msg)? Aligning is critical for cross‑impl compatibility.


🌐 Web query:

In go-f3's BDN multi-signature implementation, are coefficients derived from H(L) only, or H(L || msg)? Provide file/line refs if possible.

💡 Result:

Short answer: coefficients are derived from H(L) only (i.e., from the public-key list), not H(L || msg).

Evidence:

  • kyber's BDN docs state that AggregateSignatures "aggregates the signatures using a coefficient for each one of them where c = H(pk)" (the coefficient is a hash over the public key / key list). [1]
  • go-f3 uses a BDN-style BLS implementation in its blssig module (BDN aggregation is used for multisig). [2][3]

References:

  1. kyber bdn package docs — AggregateSignatures: "aggregates the signatures using a coefficient for each one of them where c = H(pk)". (pkg: github.com/.../kyber/sign/bdn). [1]
  2. go-f3 repository — blssig module (uses BDN/BDN-style aggregation). (go-f3/blssig). [2]
  3. Filecoin / bls/BDN references in Filecoin docs (BLS aggregation / BDN multisig). [3]

If you want I can open the exact kyber bdn.go source and paste the exact file+line reference.


Tighten XOF usage and scalar mapping; maintain BDN compatibility

  • Use exact out_len = pub_keys.len() * 64 and initialize Blake2xs with that.
  • Prepend a clear domain-separation tag (e.g. b"F3-BDN-COEFF-V1") before hashing keys.
  • Read 64 bytes per key and derive scalars via Scalar::from_bytes_wide for uniform reduction.
  • Derive coefficients from H(L) only (no message) to match go-f3’s BDN (c = H(pk list) only).
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 85-118, tighten XOF usage and scalar
mapping: initialize the Blake2xs XOF to produce out_len = pub_keys.len() * 64
bytes (not 0xFFFF), prepend a domain-separation tag (e.g. b"F3-BDN-COEFF-V1") to
the hasher before updating with each pubkey, finalize the XOF and read 64 bytes
per public key into the output buffer, and for each 64-byte chunk derive the
coefficient using Scalar::from_bytes_wide (instead of padding 16→32 and
from_bytes) so reduction is uniform; return InvalidScalar on any mapping failure
and ensure only H(L) (the hashed list with the domain tag) is used to derive
coefficients (no extra message).

Cargo.toml Outdated
anyhow = "1"
base32 = "0.5.1"
base64 = "0.22"
blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }

⚠️ Potential issue | 🟠 Major

Pin the Git dependency to a commit for reproducible builds and supply‑chain safety.

Tracking a branch is brittle and risky. Pin to a specific commit (rev) and add a short comment with the PR link.

Apply:

-blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
+blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>", features = ["blake2x"] } # blake2x PR: https://github.com/RustCrypto/hashes/pull/704

Also consider moving this under [patch.crates-io] to scope the override explicitly (a sketch follows the suggestion below).

📝 Committable suggestion


Suggested change
-blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
+blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>", features = ["blake2x"] } # blake2x PR: https://github.com/RustCrypto/hashes/pull/704
🤖 Prompt for AI Agents
In Cargo.toml at line 17, the blake2 dependency currently tracks a branch which
is non‑reproducible; replace the branch pin with a specific commit by using rev
= "<commit-sha>" instead of branch, add a short inline comment with the upstream
PR/PR URL that introduced the change, and consider moving this entry under a
[patch.crates-io] section to scope the override to the registry; update the
lockfile afterwards (cargo update -p blake2 --precise <version> or cargo
generate-lockfile) to ensure reproducible builds.
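
For the [patch.crates-io] idea above, a sketch of the shape it could take; the rev is a placeholder to be filled from RustCrypto/hashes#704, and the declared version is an assumption that must match what the fork publishes:

[dependencies]
blake2 = { version = "0.10", features = ["blake2x"] }  # version is an assumption

[patch.crates-io]
# Placeholder rev; pin to the actual commit from RustCrypto/hashes#704
blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>" }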



Development

Successfully merging this pull request may close these issues.

Implement BDN aggregation scheme

2 participants