6 changes: 5 additions & 1 deletion .config/rust-f3.dic
@@ -41,4 +41,8 @@ G1
G2
JSON
CBOR
TODO
TODO
coef_i
sig_i
pub_key_i
+
⚠️ Potential issue | 🟡 Minor

Remove the stray + character.

The + character does not belong in a spell-check dictionary file. Dictionary files typically contain one term per line without special characters.

Apply this diff to remove it:

-+
🤖 Prompt for AI Agents
In .config/rust-f3.dic around line 48, remove the stray "+" character that was
accidentally added; open the file, delete the lone "+" on line 48 so the
dictionary contains only valid entries (one term per line), save the file, and
ensure no other stray punctuation remains.

1 change: 1 addition & 0 deletions Cargo.toml
@@ -14,6 +14,7 @@ ahash = "0.8"
anyhow = "1"
base32 = "0.5.1"
base64 = "0.22"
blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
⚠️ Potential issue | 🟠 Major

Pin the Git dependency to a commit for reproducible builds and supply‑chain safety.

Tracking a branch is brittle and risky. Pin to a specific commit (rev) and add a short comment with the PR link.

Apply:

-blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
+blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>", features = ["blake2x"] } # blake2x PR: https://github.com/RustCrypto/hashes/pull/704

Also consider moving this under [patch.crates-io] to scope the override explicitly.
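
For illustration, a minimal sketch of the scoped-override layout (hypothetical: the version number and commit SHA are placeholders, not values verified against the fork):

# Registry dependency stays in [dependencies]; the fork override is scoped here.
[dependencies]
blake2 = { version = "0.10", features = ["blake2x"] }  # version illustrative

[patch.crates-io]
blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>" }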

🤖 Prompt for AI Agents
In Cargo.toml at line 17, the blake2 dependency currently tracks a branch which
is non‑reproducible; replace the branch pin with a specific commit by using rev
= "<commit-sha>" instead of branch, add a short inline comment with the upstream
PR URL that introduced the change, and consider moving this entry under a
[patch.crates-io] section to scope the override to the registry; update the
lockfile afterwards (cargo update -p blake2 --precise <version> or cargo
generate-lockfile) to ensure reproducible builds.

bls-signatures = { version = "0.15" }
bls12_381 = "0.8"
cid = { version = "0.10.1", features = ["std"] }
1 change: 1 addition & 0 deletions blssig/Cargo.toml
@@ -10,6 +10,7 @@ edition.workspace = true
rust-version.workspace = true

[dependencies]
blake2.workspace = true
bls-signatures.workspace = true
bls12_381.workspace = true
filecoin-f3-gpbft = { path = "../gpbft", version = "0.1.0" }
104 changes: 86 additions & 18 deletions blssig/src/bdn/mod.rs
@@ -2,16 +2,24 @@
// SPDX-License-Identifier: Apache-2.0, MIT

//! BDN (Boneh-Drijvers-Neven) signature aggregation scheme, for preventing rogue public-key attacks.
//! Those attacks could allow an attacker to forge a public-key and then make a verifiable
//! signature for an aggregation of signatures. It fixes the situation by adding coefficients to the aggregate.
//!
//! NOTE: currently uses standard BLS aggregation without coefficient weighting, hence returns incorrect values compared to go-f3.
//! See the papers:
//! `https://eprint.iacr.org/2018/483.pdf`
//! `https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html`
//!
use crate::verifier::BLSError;
use bls_signatures::{PublicKey, Signature};
use bls12_381::{G1Projective, G2Affine, G2Projective};
use blake2::Blake2xs;
use blake2::digest::{ExtendableOutput, Update, XofReader};
use bls_signatures::{PublicKey, Serialize, Signature};
use bls12_381::{G1Projective, G2Projective, Scalar};

/// BDN aggregation context for managing signature and public key aggregation
pub struct BDNAggregation {
pub_keys: Vec<PublicKey>,
pub(crate) pub_keys: Vec<PublicKey>,
pub(crate) coefficients: Vec<Scalar>,
pub(crate) terms: Vec<PublicKey>,
}

impl BDNAggregation {
@@ -20,11 +28,18 @@ impl BDNAggregation {
return Err(BLSError::EmptyPublicKeys);
}

Ok(Self { pub_keys })
let coefficients = Self::calc_coefficients(&pub_keys)?;
let terms = Self::calc_terms(&pub_keys, &coefficients);

Ok(Self {
pub_keys,
coefficients,
terms,
})
}

/// Aggregates signatures using standard BLS aggregation
/// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
/// Aggregates signatures using BDN aggregation with coefficients.
/// Computes: sum((coef_i + 1) * sig_i)
pub fn aggregate_sigs(&self, sigs: Vec<Signature>) -> Result<Signature, BLSError> {
if sigs.len() != self.pub_keys.len() {
return Err(BLSError::LengthMismatch {
@@ -33,30 +48,83 @@
});
}

// Standard BLS aggregation
let mut agg_point = G2Projective::identity();
for sig in sigs {
let sig: G2Affine = sig.into();
agg_point += sig;
for (i, sig) in sigs.iter().enumerate() {
let coef = self.coefficients[i];
let sig_point: G2Projective = (*sig).into();
let sig_c = sig_point * coef;
let sig_c = sig_c + sig_point;

agg_point += sig_c;
}

// Convert back to Signature
let agg_sig: Signature = agg_point.into();
Ok(agg_sig)
}

/// Aggregates public keys using standard BLS aggregation
/// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
pub fn aggregate_pub_keys(&self) -> Result<PublicKey, BLSError> {
// Standard BLS aggregation
/// Aggregates public keys indices using BDN aggregation with coefficients.
/// Computes: sum((coef_i + 1) * pub_key_i)
pub fn aggregate_pub_keys(&self, indices: &[u64]) -> Result<PublicKey, BLSError> {
// Sum of pre-computed terms (which are already (coef_i + 1) * pub_key_i)
let mut agg_point = G1Projective::identity();
for pub_key in &self.pub_keys {
let pub_key_point: G1Projective = (*pub_key).into();
agg_point += pub_key_point;
for &idx in indices {
let idx = idx as usize;
if idx >= self.terms.len() {
return Err(BLSError::SignerIndexOutOfRange(idx));
}
let term_point: G1Projective = self.terms[idx].into();
agg_point += term_point;
}

// Convert back to PublicKey
let agg_pub_key: PublicKey = agg_point.into();
Ok(agg_pub_key)
}
Comment on lines +67 to 84
⚠️ Potential issue | 🟡 Minor

Avoid lossy u64→usize cast; check bounds in u64 first.

On 32‑bit targets a large u64 could truncate. Compare in u64, then cast.

Apply:

-        for &idx in indices {
-            let idx = idx as usize;
-            if idx >= self.terms.len() {
-                return Err(BLSError::SignerIndexOutOfRange(idx));
-            }
+        for &idx_u64 in indices {
+            if idx_u64 >= self.terms.len() as u64 {
+                return Err(BLSError::SignerIndexOutOfRange(idx_u64 as usize));
+            }
+            let idx = idx_u64 as usize;
             let term_point: G1Projective = self.terms[idx].into();
             agg_point += term_point;
         }
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 66 to 83, avoid the lossy u64→usize cast
by validating the u64 index against the terms length first: compute let len =
self.terms.len() as u64, check if idx >= len and return the
SignerIndexOutOfRange error if so, then cast idx to usize (let idx_usize = idx
as usize) and use idx_usize for indexing and error construction; this ensures no
truncation on 32‑bit targets while preserving the existing error semantics.
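
An equivalent alternative to the compare-then-cast pattern above is usize::try_from, which makes the narrowing explicit (a sketch only, not part of the suggestion above; the usize::MAX sentinel for the unrepresentable case is a simplification):

for &idx_u64 in indices {
    // try_from fails only when the u64 cannot fit in usize (32-bit targets),
    // which necessarily means the index is also out of range of `terms`.
    let idx = usize::try_from(idx_u64)
        .map_err(|_| BLSError::SignerIndexOutOfRange(usize::MAX))?;
    if idx >= self.terms.len() {
        return Err(BLSError::SignerIndexOutOfRange(idx));
    }
    let term_point: G1Projective = self.terms[idx].into();
    agg_point += term_point;
}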


pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
let mut hasher = Blake2xs::new(0xFFFF);

// Hash all public keys
for pub_key in pub_keys {
let bytes = pub_key.as_bytes();
hasher.update(&bytes);
}

// Read 16 bytes per public key
let mut reader = hasher.finalize_xof();
let mut output = vec![0u8; pub_keys.len() * 16];
reader.read(&mut output);

// Convert every consecutive 16 bytes chunk to a scalar
let mut coefficients = Vec::with_capacity(pub_keys.len());
for i in 0..pub_keys.len() {
let chunk = &output[i * 16..(i + 1) * 16];

// Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
let mut bytes_32 = [0u8; 32];
bytes_32[..16].copy_from_slice(chunk);

// BLS12-381 scalars expects little-endian byte representation
let scalar = Scalar::from_bytes(&bytes_32);
if scalar.is_some().into() {
coefficients.push(scalar.unwrap());
} else {
return Err(BLSError::InvalidScalar);
}
}

Ok(coefficients)
}
Comment on lines 86 to 119
🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Coefficient derivation: tighten XOF usage, domain separation, and scalar mapping.

  • Use exact out_len, not 0xFFFF.
  • Add a domain separation tag.
  • Consider including the message per BDN variant (compatibility dependent).
  • Derive scalars uniformly via from_bytes_wide with 64 bytes per coeff (128‑bit is OK but sub‑optimal).

Apply:

-    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
-        let mut hasher = Blake2xs::new(0xFFFF);
+    pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
+        // 64 bytes per coefficient; uniform reduction via from_bytes_wide
+        let out_len = pub_keys.len() * 64;
+        let mut hasher = Blake2xs::new(out_len);
 
-        // Hash all public keys
+        // Domain separation + all public keys (compressed)
+        hasher.update(b"F3-BDN-COEFF-V1");
         for pub_key in pub_keys {
             let bytes = pub_key.as_bytes();
             hasher.update(&bytes);
         }
 
-        // Read 16 bytes per public key
         let mut reader = hasher.finalize_xof();
-        let mut output = vec![0u8; pub_keys.len() * 16];
+        let mut output = vec![0u8; out_len];
         reader.read(&mut output);
 
-        // Convert every consecutive 16 bytes chunk to a scalar
         let mut coefficients = Vec::with_capacity(pub_keys.len());
-        for i in 0..pub_keys.len() {
-            let chunk = &output[i * 16..(i + 1) * 16];
-
-            // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
-            let mut bytes_32 = [0u8; 32];
-            bytes_32[..16].copy_from_slice(chunk);
-
-            // BLS12-381 scalars expects little-endian byte representation
-            let scalar = Scalar::from_bytes(&bytes_32);
-            if scalar.is_some().into() {
-                coefficients.push(scalar.unwrap());
-            } else {
-                return Err(BLSError::InvalidScalar);
-            }
+        for i in 0..pub_keys.len() {
+            let chunk = &output[i * 64..(i + 1) * 64];
+            let mut wide = [0u8; 64];
+            wide.copy_from_slice(chunk);
+            let scalar = Scalar::from_bytes_wide(&wide); // uniform mod r
+            coefficients.push(scalar);
         }
 
         Ok(coefficients)
     }

Verification needed: does go-f3 include message in the coefficient hash (L vs L||msg)? Aligning is critical for cross‑impl compatibility.


🌐 Web query:

In go-f3's BDN multi-signature implementation, are coefficients derived from H(L) only, or H(L || msg)? Provide file/line refs if possible.

💡 Result:

Short answer: coefficients are derived from H(L) only (i.e., from the public-key list), not H(L || msg).

Evidence:

  • kyber's BDN docs state that AggregateSignatures uses c = H(pk) (coefficient = hash of the public key / key list). [1]
  • go-f3 uses a BDN-style BLS implementation in its blssig module (BDN aggregation is used for multisig). [2][3]

References:

  1. kyber bdn package docs — AggregateSignatures: "aggregates the signatures using a coefficient for each one of them where c = H(pk)". (pkg: github.com/.../kyber/sign/bdn). [1]
  2. go-f3 repository — blssig module (uses BDN/BDN-style aggregation). (go-f3/blssig). [2]
  3. Filecoin / bls/BDN references in Filecoin docs (BLS aggregation / BDN multisig). [3]

If you want I can open the exact kyber bdn.go source and paste the exact file+line reference.


Tighten XOF usage and scalar mapping; maintain BDN compatibility

  • Use exact out_len = pub_keys.len() * 64 and initialize Blake2xs with that.
  • Prepend a clear domain-separation tag (e.g. b"F3-BDN-COEFF-V1") before hashing keys.
  • Read 64 bytes per key and derive scalars via Scalar::from_bytes_wide for uniform reduction.
  • Derive coefficients from H(L) only (no message) to match go-f3’s BDN (c = H(pk list) only).
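
In formula form (a hedged restatement of the recommendation, writing r for the BLS12-381 scalar field order, tag for the domain-separation string, and int_le for the little-endian byte-to-integer conversion used by Scalar::from_bytes_wide):

    c_i = int_le( Blake2Xs(tag || pk_1 || ... || pk_n)[64(i-1) .. 64i] ) mod r,   for i = 1..n

Each coefficient is then a statistically uniform scalar, rather than a 128-bit value padded to 32 bytes.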
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 85-118, tighten XOF usage and scalar
mapping: initialize the Blake2xs XOF to produce out_len = pub_keys.len() * 64
bytes (not 0xFFFF), prepend a domain-separation tag (e.g. b"F3-BDN-COEFF-V1") to
the hasher before updating with each pubkey, finalize the XOF and read 64 bytes
per public key into the output buffer, and for each 64-byte chunk derive the
coefficient using Scalar::from_bytes_wide (instead of padding 16→32 and
from_bytes) so reduction is uniform; return InvalidScalar on any mapping failure
and ensure only H(L) (the hashed list with the domain tag) is used to derive
coefficients (no extra message).


pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
let mut terms = vec![];
for (i, pub_key) in pub_keys.iter().enumerate() {
let pub_key_point: G1Projective = (*pub_key).into();
let pub_c = pub_key_point * coefficients[i];
let term = pub_c + pub_key_point;
terms.push(term.into());
}
terms
}
}
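
Taken together, the new API can be exercised roughly as follows. This is a hedged sketch, not code from the PR: it assumes the bls-signatures 0.15 key APIs (PrivateKey::generate, sign, public_key) and rand's thread_rng, and it elides the final pairing check.

use bls_signatures::{PrivateKey, PublicKey, Signature};

fn bdn_roundtrip() -> Result<(), BLSError> {
    let mut rng = rand::thread_rng();
    // Four hypothetical power-table participants.
    let sks: Vec<PrivateKey> = (0..4).map(|_| PrivateKey::generate(&mut rng)).collect();
    let pub_keys: Vec<PublicKey> = sks.iter().map(|sk| sk.public_key()).collect();
    let sigs: Vec<Signature> = sks.iter().map(|sk| sk.sign(b"payload")).collect();

    // new() precomputes the coefficients and the (coef_i + 1) * pub_key_i terms.
    let bdn = BDNAggregation::new(pub_keys)?;
    let agg_sig = bdn.aggregate_sigs(sigs)?; // sum((coef_i + 1) * sig_i)
    let agg_pk = bdn.aggregate_pub_keys(&[0, 1, 2, 3])?; // sum over signer indices
    // agg_sig is expected to verify against agg_pk for b"payload" via the usual
    // BLS check (see BLSVerifier::verify_single in verifier/mod.rs).
    let _ = (agg_sig, agg_pk);
    Ok(())
}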
18 changes: 14 additions & 4 deletions blssig/src/verifier/mod.rs
@@ -17,6 +17,8 @@ mod tests;
pub enum BLSError {
#[error("empty public keys provided")]
EmptyPublicKeys,
#[error("empty signers provided")]
EmptySigners,
#[error("empty signatures provided")]
EmptySignatures,
#[error("invalid public key length: expected {BLS_PUBLIC_KEY_LENGTH} bytes, got {0}")]
@@ -31,6 +33,10 @@ pub enum BLSError {
SignatureVerificationFailed,
#[error("mismatched number of public keys and signatures: {pub_keys} != {sigs}")]
LengthMismatch { pub_keys: usize, sigs: usize },
#[error("invalid scalar value")]
InvalidScalar,
#[error("signer index {0} is out of range")]
SignerIndexOutOfRange(usize),
}

/// BLS signature verifier using BDN aggregation scheme
@@ -162,14 +168,18 @@ impl Verifier for BLSVerifier {
&self,
payload: &[u8],
agg_sig: &[u8],
signers: &[PubKey],
power_table: &[PubKey],
signer_indices: &[u64],
) -> Result<(), Self::Error> {
if signers.is_empty() {
if power_table.is_empty() {
return Err(BLSError::EmptyPublicKeys);
}
if signer_indices.is_empty() {
return Err(BLSError::EmptySigners);
}

let mut typed_pub_keys = vec![];
for pub_key in signers {
for pub_key in power_table {
if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH {
return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
}
Expand All @@ -178,7 +188,7 @@ impl Verifier for BLSVerifier {
}

let bdn = BDNAggregation::new(typed_pub_keys)?;
let agg_pub_key = bdn.aggregate_pub_keys()?;
let agg_pub_key = bdn.aggregate_pub_keys(signer_indices)?;
let agg_pub_key_bytes = PubKey(agg_pub_key.as_bytes().to_vec());
self.verify_single(&agg_pub_key_bytes, payload, agg_sig)
}
23 changes: 8 additions & 15 deletions certs/src/lib.rs
@@ -293,27 +293,19 @@ fn verify_signature(
// Encode the payload for signing
let payload_bytes = payload.serialize_for_signing(&network.to_string());

// Extract public keys for signers
let signers_pk: Vec<PubKey> = signers
// Extract all public keys from power table
let pub_keys: Vec<PubKey> = power_table
.iter()
.map(|&index| power_table[index as usize].pub_key.clone())
.map(|entry| entry.pub_key.clone())
.collect();

// Verify the aggregate signature
let res = verifier
.verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
verifier
.verify_aggregate(&payload_bytes, &cert.signature, &pub_keys, &signers)
.map_err(|e| CertsError::SignatureVerificationFailed {
instance: cert.gpbft_instance,
error: e.to_string(),
});

// Temporarily silencing verification errors
// The current BDN implementation uses standard BLS aggregation, causing verification to fail.
// This logging allows development to continue.
// TODO: Remove this workaround once BDN aggregation scheme is implemented
if let Err(err) = res {
println!("WARN: {}", err);
}
})?;

Ok(())
}
@@ -721,7 +713,8 @@ mod tests {
&self,
payload: &[u8],
agg_sig: &[u8],
signers: &[PubKey],
power_table: &[PubKey],
signer_indices: &[u64],
) -> std::result::Result<(), Self::Error> {
Ok(())
}
9 changes: 7 additions & 2 deletions gpbft/src/api.rs
@@ -33,19 +33,24 @@ pub trait Verifier: Send + Sync {

/// Verifies an aggregate signature
///
/// The aggregation requires all public keys from the power table in order
/// to compute coefficients correctly, even when only a subset actually signed.
///
/// This method must be safe for concurrent use.
///
/// # Arguments
/// * `payload` - The payload that was signed
/// * `agg_sig` - The aggregate signature to verify
/// * `signers` - The public keys of the signers
/// * `power_table` - All public keys from the power table
/// * `signer_indices` - Indices of the signers in the power table
///
/// # Returns
/// A Result indicating success or failure with an error message
fn verify_aggregate(
&self,
payload: &[u8],
agg_sig: &[u8],
signers: &[PubKey],
power_table: &[PubKey],
signer_indices: &[u64],
) -> Result<(), Self::Error>;
}
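
A hedged caller-side sketch of the revised trait method (names and the two-signer subset are illustrative, not taken from the codebase):

// The whole power table is passed so the verifier can recompute the BDN
// coefficients from H(L); `signer_indices` then selects which weighted keys
// enter the aggregate.
fn check_aggregate<V: Verifier>(
    verifier: &V,
    payload: &[u8],
    agg_sig: &[u8],
    power_table: &[PubKey],
) -> Result<(), V::Error> {
    let signer_indices: &[u64] = &[0, 3]; // hypothetical signing subset
    verifier.verify_aggregate(payload, agg_sig, power_table, signer_indices)
}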