diff --git a/bip-frost-signing.md b/bip-frost-signing.md new file mode 100644 index 0000000000..68a8bcc578 --- /dev/null +++ b/bip-frost-signing.md @@ -0,0 +1,822 @@ +``` +BIP: ? +Title: FROST Signing Protocol for BIP340 Signatures +Author: Sivaram Dhakshinamoorthy +Comments-URI: +Status: Draft +Type: Standards Track +Assigned: ? +License: CC0-1.0 +Discussion: 2024-07-31: https://groups.google.com/g/bitcoindev/c/PeMp2HQl-H4/m/AcJtK0aKAwAJ +Requires: 32, 340, 341 +``` + +## Abstract + +This document proposes a standard for the Flexible Round-Optimized Schnorr Threshold (FROST) signing protocol. The standard is compatible with [BIP340][bip340] public keys and signatures. It supports *tweaking*, which allows deriving [BIP32][bip32] child keys from the threshold public key and creating [BIP341][bip341] Taproot outputs with key and script paths. + +## Copyright + +This document is made available under [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/). +The accompanying source code is licensed under the [MIT license](https://opensource.org/license/mit). + +## Motivation + + + +The FROST signature scheme enables threshold Schnorr signatures. In a *t*-of-*n* threshold configuration, any *t*[^t-edge-cases] participants can cooperatively produce a Schnorr signature that is indistinguishable from a signature produced by a single signer. FROST signatures are unforgeable as long as fewer than *t* participants are corrupted. The signing protocol remains functional provided that at least *t* honest participants retain access to their secret key shares. + +[^t-edge-cases]: While *t = n* and *t = 1* are in principle supported, simpler alternatives are available in these cases. In the case *t = n*, using a dedicated *n*-of-*n* multi-signature scheme such as MuSig2 (see [BIP327][bip327]) instead of FROST avoids the need for an interactive DKG. 
The case *t = 1* can be realized by letting one signer generate an ordinary [BIP340][bip340] key pair and transmitting the key pair to every other signer, who can check its consistency and then simply use the ordinary [BIP340][bip340] signing algorithm. Signers still need to ensure that they agree on a key pair. + +The IRTF has published [RFC 9591][rfc9591], which specifies the FROST signing protocol for several elliptic curve and hash function combinations, including secp256k1 with SHA-256, the cryptographic primitives used in Bitcoin. However, the signatures produced by RFC 9591 are incompatible with BIP340 Schnorr signatures due to the X-only public keys introduced in BIP340. Additionally, RFC 9591 does not specify key tweaking mechanisms, which are essential for Bitcoin applications such as [BIP32][bip32] key derivation and [BIP341][bip341] Taproot. This document addresses these limitations by specifying a BIP340-compatible variant of the FROST signing protocol that supports key tweaking. + +Following the initial publication of the FROST protocol[[KG20][frost1]], several optimized variants have been proposed to improve computational efficiency and reduce bandwidth: FROST2[[CKM21][frost2]], FROST2-BTZ[[BTZ21][stronger-security-frost]], and FROST3[[RRJSS][roast], [CGRS23][olaf]]. Among these variants, FROST3 is the most efficient to date. + +This document specifies the FROST3 variant[^frost3-security]. The FROST3 signing protocol shares substantial similarities with the MuSig2 signing protocol specified in [BIP327][bip327]. Accordingly, this specification adopts several design principles from BIP327, including support for key tweaking, partial signature verification, and identifiable abort mechanisms. We note that significant portions of this document have been directly adapted from BIP327 due to the similarities in the signing protocols. Key generation for FROST signing is out of scope for this document. 
+ +[^frost3-security]: The FROST3 signing scheme has been proven existentially unforgeable for both trusted dealer and distributed key generation setups. When using a trusted dealer for key generation, security reduces to the standard One-More Discrete Logarithm (OMDL) assumption. When instantiated with a distributed key generation protocol such as SimplPedPoP, security reduces to the Algebraic One-More Discrete Logarithm (AOMDL) assumption. + +## Overview + +Implementers must make sure to understand this section thoroughly to avoid subtle mistakes that may lead to catastrophic failure. + +### Optionality of Features + +The goal of this proposal is to support a wide range of possible application scenarios. +Given a specific application scenario, some features may be unnecessary or undesirable, and implementers can choose not to support them. +Such optional features include: + +- Applying plain tweaks after x-only tweaks. +- Applying tweaks at all. +- Dealing with messages that are not exactly 32 bytes. +- Identifying a disruptive signer after aborting (aborting itself remains mandatory). + +If applicable, the corresponding algorithms should simply fail when encountering inputs unsupported by a particular implementation. (For example, the signing algorithm may fail when given a message which is not 32 bytes.) +Similarly, the test vectors that exercise the unimplemented features should be re-interpreted to expect an error, or be skipped if appropriate. + +### Key Material and Setup + + +A FROST key generation protocol configures a group of *n* participants with a *threshold public key* (representing a *t*-of-*n* threshold policy). +The corresponding *threshold secret key* is Shamir secret-shared among all *n* participants, where each participant holds a distinct long-term *secret share*. +This ensures that any subset of at least *t* participants can jointly run the FROST signing protocol to produce a signature under the *threshold secret key*. 
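The sharing structure just described can be illustrated concretely. The following self-contained Python sketch (a toy illustration with hypothetical function names, not a specified algorithm; naive and not constant time) shows how a dealer could Shamir-share a secret over the secp256k1 group order so that any *t* of *n* shares reconstruct it. Participant identifiers are zero-indexed as in this document, so the share for id *i* is the polynomial evaluated at *i + 1*:

```python
import secrets

# secp256k1 group order
ORD = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def deal_shares(thresh_sk, t, n):
    # Random degree-(t-1) polynomial with constant term thresh_sk.
    coeffs = [thresh_sk] + [secrets.randbelow(ORD) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % ORD
        return acc
    # Share for zero-indexed id i is the evaluation at x = i + 1.
    return [poly(i + 1) for i in range(n)]

def reconstruct(ids, shares):
    # Lagrange interpolation at x = 0, with the same +1 shift on ids.
    secret = 0
    for i, share in zip(ids, shares):
        num, deno = 1, 1
        for j in ids:
            if j != i:
                num = num * (j + 1) % ORD
                deno = deno * (j - i) % ORD
        secret = (secret + share * num * pow(deno, -1, ORD)) % ORD
    return secret

sk = secrets.randbelow(ORD)
shares = deal_shares(sk, t=2, n=3)
assert reconstruct([0, 2], [shares[0], shares[2]]) == sk
```

Note that the interpolating coefficients here match the *DeriveInterpolatingValue* algorithm specified later in this document.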
+ +Key generation for FROST signing is out of scope for this document. Implementations can use either a trusted dealer setup, as specified in [Appendix C of RFC 9591](https://www.rfc-editor.org/rfc/rfc9591.html#name-trusted-dealer-key-generati), or a distributed key generation (DKG) protocol such as [ChillDKG](https://github.com/BlockstreamResearch/bip-frost-dkg). The appropriate choice depends on the implementation's trust model and operational requirements. + +This protocol distinguishes between two public key formats: *plain public keys* are 33-byte compressed public keys traditionally used in Bitcoin, while *X-only public keys* are 32-byte keys defined in [BIP340][bip340]. +Key generation protocols produce *public shares* and *threshold public keys* in the plain format. During signing, we conditionally negate *secret shares* to ensure the resulting threshold-signature verifies under the corresponding *X-only threshold public key*. + +> [!WARNING] +> Key generation protocols must commit the *threshold public key* to an unspendable script path as recommended in [BIP341](https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-23). This prevents a malicious party from embedding a hidden script path during key generation that would allow them to bypass the *t*-of-*n* threshold policy. + +#### Protocol Parties and Network Setup + +There are *u* (where *t ≤ u ≤ n < 2^32*) participants and one coordinator initiating the FROST signing protocol. +Each participant has a point-to-point communication link to the coordinator (but participants do not have direct communication links to each other). + +If there is no dedicated coordinator, one of the participants can act as the coordinator. 
+ +#### Signing Inputs and Outputs + +Each signing session requires two inputs: a participant's long-term *secret share* (individual to each participant, not shared with the coordinator) and a [Signers Context](#signers-context)[^signers-ctx-struct] data structure (common to all participants and the coordinator). + +[^signers-ctx-struct]: The Signers Context represents the public data of signing participants: their identifiers (*id1..u*) and public shares (*pubshare1..u*). +Implementations may represent this as simply as two separate lists passed to signing APIs. +The threshold public key *thresh_pk* can be stored for efficiency or recomputed when needed using *DeriveThreshPubkey*. +Similarly, the values *n* and *t* are used only for validation, and can be omitted if validation is not performed. + +This signing protocol is compatible with any key generation protocol that produces valid FROST keys. +Valid keys satisfy: (1) each *secret share* is a Shamir share of the *threshold secret key*, and (2) each *public share* equals the scalar multiplication *secshare \* G*. +Implementations may **optionally** validate key compatibility for a signing session using the *ValidateSignersCtx* function. +For comprehensive validation of the entire key material, *ValidateSignersCtx* can be run with all *n* participants as the signing set (i.e., with *u = n*). + +> [!IMPORTANT] +> Passing *ValidateSignersCtx* ensures functional compatibility with the signing protocol but does not guarantee the security of the key generation protocol itself. + +The output of the FROST signing protocol is a BIP340 Schnorr signature that verifies under the *threshold public key* as if it were produced by a single signer using the *threshold secret key*. + +### General Signing Flow + +The coordinator and signing participants must be determined before initiating the signing protocol. +The signing participants' information is stored in a [Signers Context](#signers-context) data structure. 
+The *threshold public key* may optionally be tweaked by initializing a [Tweak Context](#tweak-context) at this stage. + +Whenever the signing participants want to sign a message, the basic order of operations to create a threshold-signature is as follows: + +**First broadcast round:** +Signers begin the signing session by running *NonceGen* to compute their *secnonce* and *pubnonce*.[^nonce-serialization-detail] +Each signer sends their *pubnonce* to the coordinator, who aggregates them using *NonceAgg* to produce an aggregate nonce and sends it back to all signers. + +[^nonce-serialization-detail]: We treat the *secnonce* and *pubnonce* as grammatically singular even though they include serializations of two scalars and two elliptic curve points, respectively. +This treatment may be confusing for readers familiar with the FROST paper. +However, serialization is a technical detail that is irrelevant for users of FROST interfaces. + +**Second broadcast round:** +At this point, every signer has the required data to sign, which, in the algorithms specified below, is stored in a data structure called [Session Context](#session-context). +Every signer computes a partial signature by running *Sign* with their long-term *secret share*, *secnonce* and the session context. +Then, the signers broadcast their partial signatures to the coordinator, who runs *PartialSigAgg* to produce the final signature. +If all parties behaved honestly, the result passes [BIP340][bip340] verification. + +![Frost signing flow](./bip-frost-signing/docs/frost-signing-flow.png) + +A malicious coordinator can cause the signing session to fail but cannot compromise the unforgeability of the scheme. Even when colluding with up to *t-1* signers, a malicious coordinator cannot forge a signature. + +> [!TIP] +> The *Sign* algorithm must **not** be executed twice with the same *secnonce*. 
+> Otherwise, it is possible to extract the participant's secret share from the two partial signatures output by the two executions of *Sign*. +> To avoid accidental reuse of *secnonce*, an implementation may securely erase the *secnonce* argument by overwriting it with 64 zero bytes after it has been read by *Sign*. +> A *secnonce* consisting of only zero bytes is invalid for *Sign* and will cause it to fail. + +To simplify the specification of the algorithms, some intermediary values are unnecessarily recomputed from scratch, e.g., when executing *GetSessionValues* multiple times. +Actual implementations can cache these values. +As a result, the [Session Context](#session-context) may look very different in implementations or may not exist at all. +However, computation of *GetSessionValues* and storage of the result must be protected against modification from an untrusted third party. +This party would have complete control over the threshold public key and message to be signed. + +### Nonce Generation + +*NonceGen* must have access to a high-quality random generator to draw an unbiased, uniformly random value *rand'*. +In contrast to BIP340 signing, the values *k1* and *k2* **must not be derived deterministically** from the session parameters because deriving nonces deterministically allows for a [complete key-recovery attack in multi-party discrete logarithm-based signatures](https://medium.com/blockstream/musig-dn-schnorr-multisignatures-with-verifiably-deterministic-nonces-27424b5df9d6#e3b6). + + +The optional arguments to *NonceGen* enable a defense-in-depth mechanism that may prevent secret share exposure if *rand'* is accidentally not drawn uniformly at random. +If the value *rand'* was identical in two *NonceGen* invocations, but any other argument was different, the *secnonce* would still be guaranteed to be different as well (with overwhelming probability), and thus accidentally using the same *secnonce* for *Sign* in both sessions would be avoided. 
+Therefore, it is recommended to provide the optional arguments *secshare*, *pubshare*, *thresh_pk*, and *m* if these session parameters are already determined during nonce generation. +The auxiliary input *extra_in* can contain additional contextual data that has a chance of changing between *NonceGen* runs, +e.g., a supposedly unique session id (taken from the application), a session counter wide enough not to repeat in practice, any nonces by other signers (if already known), or the serialization of a data structure containing multiple of the above. +However, the protection provided by the optional arguments should only be viewed as a last resort. +In most conceivable scenarios, the assumption that the arguments are different between two executions of *NonceGen* is relatively strong, particularly when facing an active adversary. + +In some applications, the coordinator may enable preprocessing of nonce generation to reduce signing latency. +Participants run *NonceGen* to generate a batch of *pubnonce* values before the message or Signers Context[^preprocess-round1] is known, which are stored with the coordinator (e.g., on a centralized server). +During this preprocessing phase, only the available arguments are provided to *NonceGen*. +When a signing session begins, the coordinator selects and aggregates *pubnonces* of the signing participants, enabling them to run *Sign* immediately once the message is determined. +This way, the final signature is created quicker and with fewer round trips. +However, applications that use this method presumably store the nonces for a longer time and must therefore be even more careful not to reuse them. +Moreover, this method is not compatible with the defense-in-depth mechanism described in the previous paragraph. 
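As an illustration of this preprocessing pattern, the sketch below uses a hypothetical API: `nonce_gen` stands in for the *NonceGen* algorithm specified later, and its Python signature is an assumption. It pre-generates a pool of nonces with a session counter and fresh randomness mixed into *extra_in*, and hands each one out at most once:

```python
import secrets

class NoncePool:
    """Pre-generates nonces before the message is known (sketch only)."""

    def __init__(self, nonce_gen, secshare, pubshare, pool_size=16):
        self.entries = []
        for counter in range(pool_size):
            # A counter plus fresh randomness as extra_in serves as a
            # last-resort uniqueness defense (see the text above).
            extra_in = counter.to_bytes(8, "big") + secrets.token_bytes(16)
            # thresh_pk and m are not yet known, so they are omitted.
            secnonce, pubnonce = nonce_gen(
                secshare=secshare, pubshare=pubshare,
                thresh_pk=None, m=None, extra_in=extra_in)
            self.entries.append((secnonce, pubnonce))

    def pubnonces(self):
        # Only the public halves would be stored with the coordinator.
        return [pn for _, pn in self.entries]

    def take(self):
        # Pop destructively so a secnonce can never be handed out twice.
        return self.entries.pop()
```

The destructive `take` reflects the warning above: stored nonces must never be reused, so each entry is removed from the pool the moment it is consumed.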
+ + +[^preprocess-round1]: When the *NonceGen* round is preprocessed, the Signers Context can be extended to include the *pubnonces* of the signing participants, as these are generated and stored before the signing session begins. + +FROST signers are typically stateful: they generate *secnonce*, store it, and later use it to produce a partial signature after receiving the aggregated nonce. +However, stateless signing is possible when one signer receives the aggregate nonce of all *other* signers before generating their own nonce. +In coordinator-based setups, the coordinator facilitates this by collecting pubnonces from the other signers, computing their aggregate (*aggothernonce*), and providing it to the stateless signer. +The stateless signer then runs *NonceGen*, *NonceAgg*, and *Sign* in sequence, sending its *pubnonce* and partial signature simultaneously to the coordinator, who computes the final aggregate nonce for all participants. +In coordinator-less setups, any one signer can achieve stateless operation by generating their nonce after seeing all other signers' *pubnonces*. +Stateless signers may want to consider signing deterministically (see [Modifications to Nonce Generation](#modifications-to-nonce-generation)) to remove the reliance on the random number generator in the *NonceGen* algorithm. + + +### Identifying Disruptive Signers + +The signing protocol makes it possible to identify malicious signers who send invalid contributions to a signing session in order to make the signing session abort and prevent the honest signers from obtaining a valid signature. +This property is called "identifiable aborts" and ensures that honest parties can assign blame to malicious signers who cause an abort in the signing protocol. + +Aborts are identifiable for an honest party if the following conditions hold in a signing session: + +- The contributions received from all signers have not been tampered with (e.g., because they were sent over authenticated connections). 
+- Nonce aggregation is performed honestly (e.g., because the honest signer performs nonce aggregation on its own or because the coordinator is trusted). +- The partial signatures received from all signers are verified using the algorithm *PartialSigVerify*. + +If these conditions hold and an honest party (signer or coordinator) runs an algorithm that fails due to invalid protocol contributions from malicious signers, then the algorithm run by the honest party will output the participant identifier of exactly one malicious signer. +Additionally, if the honest parties agree on the contributions sent by all signers in the signing session, all the honest parties who run the aborting algorithm will identify the same malicious signer. + +#### Further Remarks + +Some of the algorithms specified below may also assign blame to a malicious coordinator. +While this is possible for some particular misbehavior of the coordinator, it is not guaranteed that a malicious coordinator can be identified. +More specifically, a malicious coordinator (whose existence violates the second condition above) can always make signing abort and wrongly hold honest signers accountable for the abort (e.g., by claiming to have received an invalid contribution from a particular honest signer). + +The only purpose of the algorithm *PartialSigVerify* is to ensure identifiable aborts, and it is not necessary to use it when identifiable aborts are not desired. +In particular, partial signatures are *not* signatures. +An adversary can forge a partial signature, i.e., create a partial signature without knowing the secret share for that particular participant public share.[^partialsig-forgery] +However, if *PartialSigVerify* succeeds for all partial signatures then *PartialSigAgg* will return a valid Schnorr signature. + +[^partialsig-forgery]: Assume a malicious participant intends to forge a partial signature for the participant with public share *P*. 
It participates in the signing session pretending to be two distinct signers: one with the public share *P* and the other with its own public share. The adversary then sets the nonce for the second signer in such a way that allows it to generate a partial signature for *P*. As a side effect, it cannot generate a valid partial signature for its own public share. An explanation of the steps required to create a partial signature forgery can be found in [this document](https://gist.github.com/siv2r/0eab97bae9b7186ef2a4919e49d3b426). + +### Tweaking the Threshold Public Key + +The threshold public key can be *tweaked*, which modifies the key as defined in the [Tweaking Definition](#tweaking-definition) subsection. +In order to apply a tweak, the Tweak Context output by *TweakCtxInit* is provided to the *ApplyTweak* algorithm with the *is_xonly_t* argument set to false for plain tweaking and true for X-only tweaking. +The resulting Tweak Context can be used to apply another tweak with *ApplyTweak* or obtain the threshold public key with *GetXonlyPubkey* or *GetPlainPubkey*. + +The purpose of supporting tweaking is to ensure compatibility with existing uses of tweaking, i.e., that the result of signing is a valid signature for the tweaked public key. +The FROST signing algorithms take arbitrary tweaks as input but accepting arbitrary tweaks may negatively affect the security of the scheme.[^arbitrary-tweaks] +Instead, signers should obtain the tweaks according to other specifications. +This typically involves deriving the tweaks from a hash of the threshold public key and some other information. +Depending on the specific scheme that is used for tweaking, either the plain or the X-only threshold public key is required. +For example, to do [BIP32][bip32] derivation, you call *GetPlainPubkey* to be able to compute the tweak, whereas [BIP341][bip341] TapTweaks require X-only public keys that are obtained with *GetXonlyPubkey*. 
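For instance, the 32-byte tweak for a [BIP341][bip341] key-path-only output, which also implements the unspendable-script-path commitment recommended earlier, is the tagged hash of the X-only threshold public key. The following runnable Python sketch (the function names are illustrative, not part of this specification) computes that tweak; its output would then be passed to *ApplyTweak* with *is_xonly_t* set to true:

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    # BIP340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + data).digest()

def taptweak_keypath_only(xonly_thresh_pk: bytes) -> bytes:
    # BIP341 TapTweak without a script tree: the tweak commits to the
    # key alone, which rules out any hidden script path.
    assert len(xonly_thresh_pk) == 32
    return tagged_hash("TapTweak", xonly_thresh_pk)
```

A BIP32 tweak would instead be derived from the plain (33-byte) public key per that specification, and applied with *is_xonly_t* set to false.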
+ +[^arbitrary-tweaks]: It is an open question whether allowing arbitrary tweaks from an adversary affects the unforgeability of FROST. + +The tweak mode provided to *ApplyTweak* depends on the application: +Plain tweaking can be used to derive child public keys from a threshold public key using [BIP32][bip32]. +On the other hand, X-only tweaking is required for Taproot tweaking per [BIP341][bip341]. +A Taproot-tweaked public key commits to a *script path*, allowing users to create transaction outputs that are spendable either with a FROST threshold-signature or by providing inputs that satisfy the script path. +Script path spends require a control block that contains a parity bit for the tweaked X-only public key. + +The bit can be obtained with *GetPlainPubkey(tweak_ctx)[0] & 1*. + +## Algorithms + +The following specification of the algorithms has been written with a focus on clarity. As a result, the specified algorithms are not always optimal in terms of computation and space. In particular, some values are recomputed but can be cached in actual implementations (see [General Signing Flow](#general-signing-flow)). + +### Notation + +The algorithms are defined over the **[secp256k1](https://www.secg.org/sec2-v2.pdf) group and its associated scalar field**. We note that adapting this proposal to other elliptic curves is not straightforward and can result in an insecure scheme. + +#### Cryptographic Types and Operations + +We rely on the following types and conventions throughout this document: + +- **Types:** Points on the curve are represented by the object *GE*, and scalars are represented by *Scalar*. +- **Naming:** Points are denoted using uppercase letters (e.g., *P*, *Q*), while scalars are denoted using lowercase letters (e.g., *r*, *s*). +- **Mathematical Context:** Points are group elements under elliptic curve addition. The group includes all points on the secp256k1 curve plus the point at infinity (the identity element). 
+- **Arithmetic:** The operators +, -, and · are overloaded depending on their operands: + - **Scalar Arithmetic:** When applied to two *Scalar* operands, +, -, and · denote integer addition, subtraction, and multiplication modulo the group order. + - **Point Addition:** When applied to two *GE* operands, + denotes the elliptic curve [group addition operation](https://en.wikipedia.org/wiki/Elliptic_curve#The_group_law). + - **Scalar Multiplication:** The notation r · P denotes [scalar multiplication](https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication) (the repeated addition of point P, r times). + +The reference code vendors the secp256k1lab library to handle underlying arithmetic, serialization, deserialization, and auxiliary functions. To improve the readability of this specification, we utilize simplified notation aliases for the library's internal methods, as mapped below: + + +| Notation | secp256k1lab | Description | +| --- | --- | --- | +| *p* | *FE.SIZE* | The field size | +| *ord* | *GE.ORDER* | The group order | +| *G* | *G* | The secp256k1 generator point | +| *inf_point* | *GE()* | The infinity point | +| *is_infinity(P)* | *P.infinity()* | Returns whether *P* is the point at infinity | +| *x(P)* | *P.x* | Returns the x-coordinate of a non-infinity point *P*, in the range *[0, p-1]* | +| *y(P)* | *P.y* | Returns the y-coordinate of a non-infinity point *P*, in the range *[0, p-1]* | +| *has_even_y(P)* | *P.has_even_y()* | Returns whether *P* has an even y-coordinate | +| *with_even_y(P)* | - | Returns the version of point *P* that has an even y-coordinate. If *P* already has an even y-coordinate (or is infinity), it is returned unchanged. Otherwise, its negation *-P* is returned | +| *xbytes(P)* | *P.to_bytes_xonly()* | Returns the 32-byte x-only serialization of a non-infinity point *P* | +| *cbytes(P)* | *P.to_bytes_compressed()* | Returns the 33-byte compressed serialization of a non-infinity point *P* | +| *cbytes_ext(P)* | *P.to_bytes_compressed_with_infinity()* | Returns the 33-byte compressed serialization of a point *P*. If *P* is the point at infinity, it is encoded as a 33-byte array of zeros. | +| *lift_x(x)*[^liftx-soln] | *GE.lift_x(x)* | Decodes a 32-byte x-only serialization *x* into a non-infinity point P. The resulting point always has an even y-coordinate. | +| *cpoint(b)* | *GE.from_bytes_compressed(b)* | Decodes a 33-byte compressed serialization *b* into a non-infinity point | +| *cpoint_ext(b)* | *GE.from_bytes_compressed_with_infinity(b)* | Decodes a 33-byte compressed serialization *b* into a point. If *b* is a 33-byte array of zeros, it returns the point at infinity | +| *scalar_to_bytes(s)* | *s.to_bytes()* | Returns the 32-byte serialization of a scalar *s* | +| *scalar_from_bytes_checked(b)* | *Scalar.from_bytes_checked(b)* | Deserializes a 32-byte array *b* to a scalar, fails if the value is ≥ *ord* | +| *scalar_from_bytes_nonzero_checked(b)* | *Scalar.from_bytes_nonzero_checked(b)* | Deserializes a 32-byte array *b* to a scalar, fails if the value is zero or ≥ *ord* | +| *scalar_from_bytes_wrapping(b)* | *Scalar.from_bytes_wrapping(b)* | Deserializes a 32-byte array *b* to a scalar, reducing the value modulo *ord* | +| *hashtag(x)* | *tagged_hash(x)* | Computes a 32-byte domain-separated hash of the byte array *x*. The output is *SHA256(SHA256(tag) \|\| SHA256(tag) \|\| x)*, where *tag* is a UTF-8 encoded string unique to the context | +| *random_bytes(n)* | - | Returns *n* bytes, sampled uniformly at random using a cryptographically secure pseudorandom number generator (CSPRNG) | +| *xor_bytes(a, b)* | *xor_bytes(a, b)* | Returns byte-wise xor of *a* and *b* | + + +[^liftx-soln]: Given a candidate X coordinate *x* in the range *0..p-1*, there exist either exactly two or exactly zero valid Y coordinates. If no valid Y coordinate exists, then *x* is not a valid X coordinate either, i.e., no point *P* exists for which *x(P) = x*. The valid Y coordinates for a given candidate *x* are the square roots of *c = x^3 + 7 mod p* and they can be computed as *y = ±c^((p+1)/4) mod p* (see [Quadratic residue](https://en.wikipedia.org/wiki/Quadratic_residue#Prime_or_prime_power_modulus)) if they exist, which can be checked by squaring and comparing with *c*. + +#### Auxiliary and Byte-string Operations + +The following helper functions and notation are used for operations on standard integers and byte arrays, independent of curve arithmetic. Note that like Scalars, these variables are denoted by lowercase letters (e.g., *x*, *n*); the intended type is implied by context. + +| Notation | Description | +| --- | --- | +| *\|\|* | Refers to byte array concatenation | +| *len(x)* | Returns the length of the byte array *x* in bytes | +| *x[i:j]* | Returns the sub-array of the byte array *x* starting at index *i* (inclusive) and ending at *j* (exclusive). 
The result has length *j - i* | +| *empty_bytestring* | A constant representing an empty byte array whose length is 0 | +| *bytes(n, x)* | Returns the big-endian *n*-byte encoding of the integer *x* | +| *count(x, lst)* | Returns the number of times the element *x* occurs in the list *lst* | +| *has_duplicates(lst)* | Returns *True* if any element in *lst* appears more than once, *False* otherwise | +| *sorted(lst)* | Returns a new list containing the elements of *lst* arranged in ascending order | +| *(a, b, ...)* | Refers to a tuple containing the listed elements | + +> [!NOTE] +> In the following algorithms, all scalar arithmetic is understood to be modulo the group order. For example, *a · b* implicitly means *a · b mod ord* + +### Key Material and Setup + +#### Signers Context + +The Signers Context is a data structure consisting of the following elements: + +- The total number *n* of participants involved in key generation: an integer with *2 ≤ n < 2^32* +- The threshold number *t* of participants required to issue a signature: an integer with *1 ≤ t ≤ n* +- The number *u* of signing participants: an integer with *t ≤ u ≤ n* +- The list of participant identifiers *id1..u*: *u* distinct integers, each with *0 ≤ idi ≤ n - 1* +- The list of participant public shares *pubshare1..u*: *u* 33-byte arrays, each a compressed serialized point +- The threshold public key *thresh_pk*: a 33-byte array, compressed serialized point + +We write "Let *(n, t, u, id1..u, pubshare1..u, thresh_pk) = signers_ctx*" to assign names to the elements of Signers Context. + +Algorithm *ValidateSignersCtx(signers_ctx)*: + +- Inputs: + - The *signers_ctx*: a [Signers Context](#signers-context) data structure +- *(n, t, u, id1..u, pubshare1..u, thresh_pk) = signers_ctx* +- Fail if not *1 ≤ t ≤ n* +- Fail if not *t ≤ u ≤ n* +- For *i = 1 .. 
u*: + - Fail if not *0 ≤ idi ≤ n - 1* + - Fail if *cpoint(pubsharei)* fails +- Fail if *has_duplicates(id1..u)* +- Fail if *DeriveThreshPubkey(id1..u, pubshare1..u) ≠ thresh_pk* +- No return + +Internal Algorithm *DeriveThreshPubkey(id1..u, pubshare1..u)*[^derive-thresh-no-validate-inputs]: + +- *Q = inf_point* +- For *i = 1..u*: + - Let *P = cpoint(pubsharei)*; fail if that fails + - *λ = DeriveInterpolatingValue(id1..u, idi)* + - *Q = Q + λ · P* +- Return *cbytes(Q)* + +[^derive-thresh-no-validate-inputs]: *DeriveThreshPubkey* does not check that its inputs are in range. This validation is performed by *ValidateSignersCtx*, which is its only caller. + +Internal Algorithm *DeriveInterpolatingValue(id1..u, my_id):* + +- Fail if *my_id* not in *id1..u* +- Fail if *has_duplicates(id1..u)* +- Let *num = Scalar(1)* +- Let *deno = Scalar(1)* +- For *i = 1..u*: + - If *idi ≠ my_id*: + - Let *num = num · Scalar(idi + 1)[^lagrange-shift]  (mod ord)* + - Let *deno = deno · Scalar(idi - my_id)  (mod ord)* +- *λ = num · deno^(-1)  (mod ord)* +- Return *λ* + +[^lagrange-shift]: The standard Lagrange interpolation coefficient uses the formula *idi / (idi - my_id)* for each term in the product, where ids are in the range *1..n*. However, since participant identifiers in this protocol are zero-indexed (range *0..n-1*), we shift them by adding 1. This transforms each term to *(idi+1) / (idi - my_id)*. + +### Tweaking the Threshold Public Key + +#### Tweak Context + +The Tweak Context is a data structure consisting of the following elements: + +- The point *Q* representing the potentially tweaked threshold public key: a *GE* +- The accumulated tweak *tacc*: a *Scalar* +- The value *gacc*: *Scalar(1)* or *Scalar(-1)* + +We write "Let *(Q, gacc, tacc) = tweak_ctx*" to assign names to the elements of a Tweak Context. 
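To build intuition for this bookkeeping, the following self-contained Python sketch (naive, non-constant-time affine arithmetic; for illustration only, with hypothetical function names) mirrors the *ApplyTweak* algorithm specified below and checks that a Tweak Context always satisfies the invariant *Q = gacc · Q0 + tacc · G*, where *Q0* is the untweaked threshold public key:

```python
# secp256k1 parameters
P = 2**256 - 2**32 - 977
ORD = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    # Affine group addition; None represents the point at infinity.
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(k, pt):
    # Double-and-add scalar multiplication.
    r = None
    while k:
        if k & 1:
            r = point_add(r, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return r

def apply_tweak(tweak_ctx, t, is_xonly_t):
    # Mirrors ApplyTweak; scalars are plain integers mod ORD.
    Q, gacc, tacc = tweak_ctx
    g = ORD - 1 if is_xonly_t and Q[1] % 2 == 1 else 1
    Q_new = point_add(point_mul(g, Q), point_mul(t % ORD, G))
    assert Q_new is not None
    return (Q_new, g * gacc % ORD, (t + g * tacc) % ORD)

# Start from a toy untweaked key and apply a plain then an X-only tweak.
Q0 = point_mul(7, G)          # stand-in threshold public key
ctx = (Q0, 1, 0)              # as produced by TweakCtxInit
for tweak, is_xonly in [(11, False), (13, True)]:
    ctx = apply_tweak(ctx, tweak, is_xonly)
Q, gacc, tacc = ctx
assert Q == point_add(point_mul(gacc, Q0), point_mul(tacc, G))
```

The final assertion holds regardless of how many tweaks are applied or whether the negation branch is taken, which is exactly what makes signing under a tweaked key possible.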
+ +Algorithm *TweakCtxInit(thresh_pk):* + +- Input: + - The threshold public key *thresh_pk*: a 33-byte array, compressed serialized point +- Let *Q = cpoint(thresh_pk)*; fail if that fails +- Fail if *is_infinity(Q)* +- Let *gacc = Scalar(1)* +- Let *tacc = Scalar(0)* +- Return *tweak_ctx = (Q, gacc, tacc)* + +Algorithm *GetXonlyPubkey(tweak_ctx)*: + +- Inputs: + - The *tweak_ctx*: a [Tweak Context](#tweak-context) data structure +- Let *(Q, _, _) = tweak_ctx* +- Return *xbytes(Q)* + +Algorithm *GetPlainPubkey(tweak_ctx)*: + +- Inputs: + - The *tweak_ctx*: a [Tweak Context](#tweak-context) data structure +- Let *(Q, _, _) = tweak_ctx* +- Return *cbytes(Q)* + +#### Applying Tweaks + +Algorithm *ApplyTweak(tweak_ctx, tweak, is_xonly_t)*: + +- Inputs: + - The *tweak_ctx*: a [Tweak Context](#tweak-context) data structure + - The *tweak*: a 32-byte array, serialized scalar + - The tweak mode *is_xonly_t*: a boolean +- Let *(Q, gacc, tacc) = tweak_ctx* +- If *is_xonly_t* and not *has_even_y(Q)*: + - Let *g = Scalar(-1)* +- Else: + - Let *g = Scalar(1)* +- Let *t = scalar_from_bytes_nonzero_checked(tweak)*; fail if that fails +- Let *Q' = g · Q + t · G* +- Fail if *is_infinity(Q')* +- Let *gacc' = g · gacc  (mod ord)* +- Let *tacc' = t + g · tacc  (mod ord)* +- Return *tweak_ctx' = (Q', gacc', tacc')* + +### Nonce Generation + +Algorithm *NonceGen(secshare, pubshare, thresh_pk, m, extra_in)*: + +- Inputs: + - The participant secret signing share *secshare*: a 32-byte array, serialized scalar (optional argument) + - The participant public share *pubshare*: a 33-byte array, compressed serialized point (optional argument) + - The x-only threshold public key *thresh_pk*: a 32-byte array, X-only serialized point (optional argument) + - The message *m*: a byte array (optional argument)[^max-msg-len] + - The auxiliary input *extra_in*: a byte array with *0 ≤ len(extra_in) ≤ 2^32-1* (optional argument) +- Let *rand' = random_bytes(32)* +- If the optional argument *secshare* is 
present: + - Let *rand = xor_bytes(secshare, hashFROST/aux(rand'))*[^sk-xor-rand] +- Else: + - Let *rand = rand'* +- If the optional argument *pubshare* is not present: + - Let *pubshare* = *empty_bytestring* +- If the optional argument *thresh_pk* is not present: + - Let *thresh_pk* = *empty_bytestring* +- If the optional argument *m* is not present: + - Let *m_prefixed = bytes(1, 0)* +- Else: + - Let *m_prefixed = bytes(1, 1) || bytes(8, len(m)) || m* +- If the optional argument *extra_in* is not present: + - Let *extra_in = empty_bytestring* +- Let *ki = scalar_from_bytes_wrapping(hashFROST/nonce(rand || bytes(1, len(pubshare)) || pubshare || bytes(1, len(thresh_pk)) || thresh_pk || m_prefixed || bytes(4, len(extra_in)) || extra_in || bytes(1, i - 1)))* for *i = 1,2* +- Fail if *k1 = Scalar(0)* or *k2 = Scalar(0)* +- Let *R\*,1 = k1 · G*, *R\*,2 = k2 · G* +- Let *pubnonce = cbytes(R\*,1) || cbytes(R\*,2)* +- Let *secnonce = bytes(32, k1) || bytes(32, k2)*[^secnonce-ser] +- Return *(secnonce, pubnonce)* + +[^sk-xor-rand]: The random data is hashed (with a unique tag) as a precaution against situations where the randomness may be correlated with the secret signing share itself. It is xored with the secret share (rather than combined with it in a hash) to reduce the number of operations exposed to the actual secret share. + +[^secnonce-ser]: The algorithms as specified here assume that the *secnonce* is stored as a 64-byte array using the serialization *secnonce = bytes(32, k1) || bytes(32, k2)*. The same format is used in the reference implementation and in the test vectors. However, since the *secnonce* is (obviously) not meant to be sent over the wire, compatibility between implementations is not a concern, and this method of storing the *secnonce* is merely a suggestion. 
The *secnonce* is effectively a local data structure of the signer which comprises the value pair *(k1, k2)*, and implementations may choose any suitable method to carry it from *NonceGen* (first communication round) to *Sign* (second communication round). In particular, implementations may choose to hide the *secnonce* in internal state without exposing it in an API explicitly, e.g., in an effort to prevent callers from reusing a *secnonce* accidentally.
+
+[^max-msg-len]: In theory, the allowed message size is restricted because SHA256 accepts byte strings only up to a size of 2^61-1 bytes (and because of the 8-byte length encoding).
+
+### Nonce Aggregation
+
+Algorithm *NonceAgg(pubnonce1..u, id1..u)*:
+
+- Inputs:
+  - The number *u* of signing participants: an integer with *t ≤ u ≤ n*
+  - The list of participant public nonces *pubnonce1..u*: *u* 66-byte arrays, each an output of *NonceGen*
+  - The list of participant identifiers *id1..u*: *u* integers, each with *0 ≤ idi ≤ n-1*
+- For *j = 1 .. 2*:
+  - For *i = 1 .. u*:
+    - Let *Ri,j = cpoint(pubnoncei[(j-1)\*33:j\*33])*; fail if that fails and blame signer *idi* for invalid *pubnonce*
+  - Let *Rj = R1,j + R2,j + ... + Ru,j*
+- Return *aggnonce = cbytes_ext(R1) || cbytes_ext(R2)*
+
+### Session Context
+
+The Session Context is a data structure consisting of the following elements:
+
+- The *signers_ctx*: a [Signers Context](#signers-context) data structure
+- The aggregate public nonce *aggnonce*: a 66-byte array, output of *NonceAgg*
+- The number *v* of tweaks with *0 ≤ v < 2^32*
+- The list of tweaks *tweak1..v*: *v* 32-byte arrays, each a serialized scalar
+- The list of tweak modes *is_xonly_t1..v*: *v* booleans
+- The message *m*: a byte array[^max-msg-len]
+
+We write "Let *(signers_ctx, aggnonce, v, tweak1..v, is_xonly_t1..v, m) = session_ctx*" to assign names to the elements of a Session Context.
+
+Algorithm *GetSessionValues(session_ctx)*:
+
+- Let *(signers_ctx, aggnonce, v, tweak1..v, is_xonly_t1..v, m) = session_ctx*
+- *ValidateSignersCtx(signers_ctx)*; fail if that fails
+- Let *(_, _, u, id1..u, pubshare1..u, thresh_pk) = signers_ctx*
+- Let *tweak_ctx0 = TweakCtxInit(thresh_pk)*; fail if that fails
+- For *i = 1 .. v*:
+  - Let *tweak_ctxi = ApplyTweak(tweak_ctxi-1, tweaki, is_xonly_ti)*; fail if that fails
+- Let *(Q, gacc, tacc) = tweak_ctxv*
+- Let *ser_ids = SerializeIds(id1..u)*
+- Let *b = scalar_from_bytes_wrapping(hashFROST/noncecoef(ser_ids || aggnonce || xbytes(Q) || m))*
+- Fail if *b = Scalar(0)*
+- Let *R1 = cpoint_ext(aggnonce[0:33]), R2 = cpoint_ext(aggnonce[33:66])*; fail if that fails and blame the coordinator for invalid *aggnonce*.
+- Let *R' = R1 + b · R2*
+- If *is_infinity(R')*:
+  - Let final nonce *R = G* (see [Dealing with Infinity in Nonce Aggregation](#dealing-with-infinity-in-nonce-aggregation))
+- Else:
+  - Let final nonce *R = R'*
+- Let *e = scalar_from_bytes_wrapping(hashBIP0340/challenge(xbytes(R) || xbytes(Q) || m))*
+- Fail if *e = Scalar(0)*
+- Return *(Q, gacc, tacc, id1..u, pubshare1..u, b, R, e)*
+
+Internal Algorithm *SerializeIds(id1..u)*:
+
+- *res = empty_bytestring*
+- For *id* in *sorted(id1..u)*:
+  - *res = res || bytes(4, id)*
+- Return *res*
+
+### Signing
+
+Algorithm *Sign(secnonce, secshare, my_id, session_ctx)*:
+
+- Inputs:
+  - The secret nonce *secnonce* that has never been used as input to *Sign* before: a 64-byte array[^secnonce-ser]
+  - The participant secret signing share *secshare*: a 32-byte array, serialized scalar
+  - The participant identifier *my_id*: an integer with *0 ≤ my_id ≤ n-1*
+  - The *session_ctx*: a [Session Context](#session-context) data structure
+- Let *(Q, gacc, _, id1..u, pubshare1..u, b, R, e) = GetSessionValues(session_ctx)*; fail if that fails
+- Let *k1' = scalar_from_bytes_nonzero_checked(secnonce[0:32])*; fail if that fails
+- Let *k2' =
scalar_from_bytes_nonzero_checked(secnonce[32:64])*; fail if that fails +- Let *k1 = k1', k2 = k2'* if *has_even_y(R)*, otherwise let *k1 = -k1', k2 = -k2'* +- Let *d' = scalar_from_bytes_nonzero_checked(secshare)*; fail if that fails +- Let *pubshare = cbytes(d' · G)* +- Fail if *pubshare* not in *pubshare1..u* +- Fail if *my_id* not in *id1..u* +- Let *λ = DeriveInterpolatingValue(id1..u, my_id)*; fail if that fails +- Let *g = Scalar(1)* if *has_even_y(Q)*, otherwise let *g = Scalar(-1)* +- Let *d = g · gacc · d'  (mod ord)* (See [Negation of Secret Share When Signing](#negation-of-the-secret-share-when-signing)) +- Let *s = k1 + b · k2 + e · λ · d  (mod ord)* +- Let *psig = scalar_to_bytes(s)* +- Let *pubnonce = cbytes(k1' · G) || cbytes(k2' · G)* +- If *PartialSigVerifyInternal(psig, my_id, pubnonce, pubshare, session_ctx)* (see below) returns failure, fail[^why-verify-partialsig] +- Return partial signature *psig* + +[^why-verify-partialsig]: Verifying the signature before leaving the signer prevents random or adversarially provoked computation errors. This prevents publishing invalid signatures which may leak information about the secret share. It is recommended but can be omitted if the computation cost is prohibitive. 
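The interpolating value *λ* used by *Sign* is a standard Lagrange coefficient for interpolation at the secret's evaluation point. The following is an illustrative sketch of *DeriveInterpolatingValue* (whose normative definition is given elsewhere in this document), under the assumption that a 0-based participant identifier *id* corresponds to the Shamir evaluation point *id + 1*, with the shared secret at point 0 (the RFC 9591 convention, shifted by one):

```python
# Sketch of DeriveInterpolatingValue; assumes identifier id maps to
# evaluation point id + 1 and the secret sits at point 0.
ORD = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def derive_interpolating_value(ids, my_id):
    # ids must be distinct and contain my_id; otherwise the algorithm fails.
    assert my_id in ids and len(set(ids)) == len(ids)
    num, den = 1, 1
    for j in ids:
        if j == my_id:
            continue
        num = num * (j + 1) % ORD                 # product of other points
        den = den * ((j + 1) - (my_id + 1)) % ORD  # product of differences
    # Divide via the modular inverse of the denominator.
    return num * pow(den, ORD - 2, ORD) % ORD
```

As a sanity check, the coefficients recover the constant term of any polynomial of degree less than the number of signers: *Σ λ_i · f(id_i + 1) = f(0)*.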
+
+### Partial Signature Verification
+
+Algorithm *PartialSigVerify(psig, pubnonce1..u, signers_ctx, tweak1..v, is_xonly_t1..v, m, i)*:
+
+- Inputs:
+  - The partial signature *psig*: a 32-byte array, serialized scalar
+  - The list of public nonces *pubnonce1..u*: *u* 66-byte arrays, each an output of *NonceGen*
+  - The *signers_ctx*: a [Signers Context](#signers-context) data structure
+  - The number *v* of tweaks with *0 ≤ v < 2^32*
+  - The list of tweaks *tweak1..v*: *v* 32-byte arrays, each a serialized scalar
+  - The list of tweak modes *is_xonly_t1..v*: *v* booleans
+  - The message *m*: a byte array[^max-msg-len]
+  - The index *i* of the signer in the list of public nonces, where *0 < i ≤ u*
+- Let *(_, _, u, id1..u, pubshare1..u, _) = signers_ctx*
+- Let *aggnonce = NonceAgg(pubnonce1..u, id1..u)*; fail if that fails
+- Let *session_ctx = (signers_ctx, aggnonce, v, tweak1..v, is_xonly_t1..v, m)*
+- Run *PartialSigVerifyInternal(psig, idi, pubnoncei, pubsharei, session_ctx)*
+- Return success iff no failure occurred before reaching this point.
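Why a correctly computed partial signature passes this check can be confirmed on toy scalars, working "in the exponent": every point *X · G* is represented by its discrete logarithm *X mod ord*. This is a non-normative sanity check of the algebra only (the function names are hypothetical), and it assumes *has_even_y(R)* and no tweaks, so none of the negations apply:

```python
# Toy scalar-space check of s·G = R* + e·λ·d·G for an honest signer.
ORD = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def sign_scalar(k1, k2, b, e, lam, gv_gacc, d_raw):
    # Sign computes s = k1 + b·k2 + e·λ·d with d = g·gacc·d' (mod ord).
    d = gv_gacc * d_raw % ORD
    return (k1 + b * k2 + e * lam * d) % ORD

def verify_scalar(s, k1, k2, b, e, lam, gv_gacc, d_raw):
    # The verifier reconstructs the effective nonce R* = (k1 + b·k2)·G and
    # the adjusted pubshare g'·P = (g·gacc·d')·G, here as plain scalars.
    eff_nonce = (k1 + b * k2) % ORD
    return s == (eff_nonce + e * lam * gv_gacc * d_raw) % ORD
```

Because both sides are the same linear combination of *k1*, *k2*, and *d'*, the check succeeds exactly when the signer used the session's nonce coefficient *b*, challenge *e*, and (possibly negated) secret share.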
+ +Internal Algorithm *PartialSigVerifyInternal(psig, my_id, pubnonce, pubshare, session_ctx)*: + +- Let *(Q, gacc, _, id1..u, pubshare1..u, b, R, e) = GetSessionValues(session_ctx)*; fail if that fails +- Let *s = scalar_from_bytes_nonzero_checked(psig)*; fail if that fails +- Fail if *pubshare* not in *pubshare1..u* +- Fail if *my_id* not in *id1..u* +- Let *R\*,1 = cpoint(pubnonce[0:33]), R\*,2 = cpoint(pubnonce[33:66])* +- Let *Re\*' = R\*,1 + b · R\*,2* +- Let effective nonce *Re\* = Re\*'* if *has_even_y(R)*, otherwise let *Re\* = -Re\*'* +- Let *P = cpoint(pubshare)*; fail if that fails +- Let *λ = DeriveInterpolatingValue(id1..u, my_id)*[^lambda-cant-fail] +- Let *g = Scalar(1)* if *has_even_y(Q)*, otherwise let *g = Scalar(-1)* +- Let *g' = g · gacc  (mod ord)* (See [Negation of Pubshare When Partially Verifying](#negation-of-the-pubshare-when-partially-verifying)) +- Fail if *s · G ≠ Re\* + e · λ · g' · P* +- Return success iff no failure occurred before reaching this point. + +[^lambda-cant-fail]: *DeriveInterpolatingValue(id1..u, my_id)* cannot fail when called from *PartialSigVerifyInternal* as *PartialSigVerify* picks *my_id* from *id1..u* + +### Partial Signature Aggregation + +Algorithm *PartialSigAgg(psig1..u, id1..u, session_ctx)*: + +- Inputs: + - The number *u* of signatures with *t ≤ u ≤ n* + - The list of partial signatures *psig1..u*: *u* 32-byte arrays, each an output of *Sign* + - The list of participant identifiers *id1..u*: *u* distinct integers, each with *0 ≤ idi ≤ n-1* + - The *session_ctx*: a [Session Context](#session-context) data structure +- Let *(Q, _, tacc, _, _, _, R, e) = GetSessionValues(session_ctx)*; fail if that fails +- For *i = 1 .. u*: + - Let *si = scalar_from_bytes_nonzero_checked(psigi)*; fail if that fails and blame signer *idi* for invalid partial signature. +- Let *g = Scalar(1)* if *has_even_y(Q)*, otherwise let *g = Scalar(-1)* +- Let *s = s1 + ... 
+ su + e · g · tacc  (mod ord)* +- Return *sig = xbytes(R) || scalar_to_bytes(s)* + +### Test Vectors & Reference Code + +We provide a naive, highly inefficient, and non-constant time [pure Python 3 reference implementation of the threshold public key tweaking, nonce generation, partial signing, and partial signature verification algorithms](./bip-frost-signing/python/frost_ref/). + +Standalone JSON test vectors are also available in the [same directory](./bip-frost-signing/python/vectors/), to facilitate porting the test vectors into other implementations. + +> [!CAUTION] +> The reference implementation is for demonstration purposes only and not to be used in production environments. + +## Remarks on Security and Correctness + +### Modifications to Nonce Generation + +Implementers must avoid modifying the *NonceGen* algorithm without being fully aware of the implications. +We provide two modifications to *NonceGen* that are secure when applied correctly and may be useful in special circumstances, summarized in the following table. + +| | needs secure randomness | needs secure counter | needs to keep state securely | needs aggregate nonce of all other signers (only possible for one signer) | +| --- | --- | --- | --- | --- | +| **NonceGen** | ✓ | | ✓ | | +| **CounterNonceGen** | | ✓ | ✓ | | +| **DeterministicSign** | | | | ✓ | + +First, on systems where obtaining uniformly random values is much harder than maintaining a global atomic counter, it can be beneficial to modify *NonceGen*. +The resulting algorithm *CounterNonceGen* does not draw *rand'* uniformly at random but instead sets *rand'* to the value of an atomic counter that is incremented whenever it is read. +With this modification, the secret share *secshare* of the signer generating the nonce is **not** an optional argument and must be provided to *NonceGen*. 
+The security of the resulting scheme then depends on the requirement that reading the counter must never yield the same counter value in two *NonceGen* invocations with the same *secshare*. + +Second, if there is a unique signer who generates their nonce last (i.e., after receiving the aggregate nonce from all other signers), it is possible to modify nonce generation for this single signer to not require high-quality randomness. +Such a nonce generation algorithm *DeterministicSign* is specified below. +Note that the only optional argument is *rand*, which can be omitted if randomness is entirely unavailable. +*DeterministicSign* requires the argument *aggothernonce* which should be set to the output of *NonceAgg* run on the *pubnonce* value of **all** other signers (but can be provided by an untrusted party). +Hence, using *DeterministicSign* is only possible for the last signer to generate a nonce and makes the signer stateless, similar to the stateless signer described in the [Nonce Generation](#nonce-generation) section. 
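The counter modification can be sketched as follows. This is an illustrative fragment, not part of the specification: the class name is hypothetical, and a real deployment must persist the counter atomically (e.g., write it to non-volatile storage before returning) so that a crash cannot cause a value to be handed out twice for the same *secshare*.

```python
# Sketch of the rand' source for CounterNonceGen: an atomic counter
# replaces the CSPRNG. State must be kept securely and persistently.
import threading

class NonceCounter:
    def __init__(self, start=0):
        self._lock = threading.Lock()
        self._value = start

    def next_rand(self):
        # Read-and-increment atomically, then encode as the 32-byte rand'.
        with self._lock:
            value = self._value
            self._value += 1  # in practice: persist before returning
        return value.to_bytes(32, "big")
```

Each invocation yields a distinct *rand'*, which (combined with hashing *rand'* together with the mandatory *secshare* in *NonceGen*) is what the security of the scheme rests on.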
+ + +#### Deterministic and Stateless Signing for a Single Signer + +Algorithm *DeterministicSign(secshare, my_id, aggothernonce, signers_ctx, tweak1..v, is_xonly_t1..v, m, rand)*: + +- Inputs: + - The participant secret signing share *secshare*: a 32-byte array, serialized scalar + - The participant identifier *my_id*: an integer with *0 ≤ my_id ≤ n-1* + - The aggregate public nonce *aggothernonce* (see [above](#modifications-to-nonce-generation)): a 66-byte array, output of *NonceAgg* + - The *signers_ctx*: a [Signers Context](#signers-context) data structure + - The number *v* of tweaks with *0 ≤ v < 2^32* + - The list of tweaks *tweak1..v*: *v* 32-byte arrays, each a serialized scalar + - The list of tweak methods *is_xonly_t1..v*: *v* booleans + - The message *m*: a byte array[^max-msg-len] + - The auxiliary randomness *rand*: a 32-byte array, serialized scalar (optional argument) +- If the optional argument *rand* is present: + - Let *secshare' = xor_bytes(secshare, hashFROST/aux(rand))* +- Else: + - Let *secshare' = secshare* +- Let *(_, _, u, id1..u, pubshare1..u, thresh_pk) = signers_ctx* +- Let *tweak_ctx0 = TweakCtxInit(thresh_pk)*; fail if that fails +- For *i = 1 .. 
v*: + - Let *tweak_ctxi = ApplyTweak(tweak_ctxi-1, tweaki, is_xonly_ti)*; fail if that fails +- Let *tweaked_tpk = GetXonlyPubkey(tweak_ctxv)* +- Let *ki = scalar_from_bytes_wrapping(hashFROST/deterministic/nonce(secshare' || aggothernonce || tweaked_tpk || bytes(8, len(m)) || m || bytes(1, i - 1)))* for *i = 1,2* +- Fail if *k1 = Scalar(0)* or *k2 = Scalar(0)* +- Let *R\*,1 = k1 · G, R\*,2 = k2 · G* +- Let *pubnonce = cbytes(R\*,1) || cbytes(R\*,2)* +- Let *d = scalar_from_bytes_nonzero_checked(secshare')*; fail if that fails +- Let *my_pubshare = cbytes(d · G)* +- Fail if *my_pubshare* is not present in *pubshare1..u* +- Let *secnonce = scalar_to_bytes(k1) || scalar_to_bytes(k2)* +- Let *aggnonce = NonceAgg((pubnonce, aggothernonce), (my_id, COORDINATOR_ID))*[^coordinator-id-sentinel]; fail if that fails and blame coordinator for invalid *aggothernonce*. +- Let *session_ctx = (signers_ctx, aggnonce, v, tweak1..v, is_xonly_t1..v, m)* +- Return (pubnonce, Sign(secnonce, secshare, my_id, session_ctx)) + +[^coordinator-id-sentinel]: *COORDINATOR_ID* is a sentinel value (not an actual participant identifier) used to track the source of *aggothernonce* for error attribution. If *NonceAgg* fails, the coordinator is blamed for providing an invalid *aggothernonce*. In the reference implementation, *COORDINATOR_ID* is represented as *None*. + +### Tweaking Definition + +Two modes of tweaking the threshold public key are supported. They correspond to the following algorithms: + +Algorithm *ApplyPlainTweak(P, t)*: + +- Inputs: + - *P*: a point + - The tweak *t*: a scalar +- Return *P + t · G* + +Algorithm *ApplyXonlyTweak(P, t)*: + +- Inputs: + - *P*: a point + - The tweak *t*: a scalar +- Return *with_even_y(P) + t · G* + + +### Negation of the Secret Share when Signing + +> [!NOTE] +> In the following equations, all scalar arithmetic is understood to be modulo the group order, as specified in the [Notation](#notation) section. 
+ +During the signing process, the *[Sign](#signing)* algorithm might have to negate the secret share in order to produce a partial signature for an X-only threshold public key, which may be tweaked *v* times (X-only or plain). + +The following elliptic curve points arise as intermediate steps when creating a signature: + +- The values *Pi* (pubshare), *di'* (secret share), and *Q0* (threshold public key) are produced by a FROST key generation protocol. We have +
+    Pi = di'·G
+    Q0 = λid1·P1 + λid2·P2 + ... + λidu·Pu
+  
+ Here, *λidi* denotes the interpolating value for the *i*-th signing participant in the [Signers Context](#signers-context). + +- *Qi* is the tweaked threshold public key after the *i*-th execution of *ApplyTweak* for *1 ≤ i ≤ v*. It holds that +
+    Qi = f(i-1) + ti·G for i = 1, ..., v where
+      f(i-1) := with_even_y(Qi-1) if is_xonly_ti and
+      f(i-1) := Qi-1 otherwise.
+  
+- *with_even_y(Qv)* is the final result of the threshold public key tweaking operations. It corresponds to the output of *GetXonlyPubkey* applied on the final Tweak Context.
+
+The signer's goal is to produce a partial signature corresponding to the final result of threshold pubkey derivation and tweaking, i.e., the X-only public key *with_even_y(Qv)*.
+
+For *1 ≤ i ≤ v*, we denote the value *g* computed in the *i*-th execution of *ApplyTweak* by *gi-1*. Therefore, *gi-1* equals *Scalar(-1)* if and only if *is_xonly_ti* is true and *Qi-1* has an odd Y coordinate. In other words, *gi-1* indicates whether *Qi-1* needed to be negated to apply an X-only tweak:
+
+  f(i-1) = gi-1·Qi-1 for 1 ≤ i ≤ v
+
+Furthermore, the *Sign* and *PartialSigVerify* algorithms set the value *g* depending on whether *Qv* needed to be negated to produce the (X-only) final output. For consistency, this value *g* is referred to as *gv* in this section.
+
+  with_even_y(Qv) = gv·Qv
+
+ +So, the (X-only) final public key is +
+  with_even_y(Qv)
+    = gv·Qv
+    = gv·(f(v-1) + tv·G)
+    = gv·(gv-1·(f(v-2) + tv-1·G) + tv·G)
+    = gv·gv-1·f(v-2) + gv·(tv + gv-1·tv-1)·G
+    = gv·gv-1·f(v-2) + (sumi=v-1..v ti · prodj=i..v gj)·G
+    = gv·gv-1· ... ·g1·f(0) + (sumi=1..v ti · prodj=i..v gj)·G
+    = gv· ... ·g0·Q0 + gv·taccv·G
+
+where tacci is computed by TweakCtxInit and ApplyTweak as follows: +
+  tacc0 = 0  (mod ord)
+  tacci = ti + gi-1·tacci-1  (mod ord) for i=1..v
+
+ for which it holds that +
+  gv·taccv = sumi=1..v ti · prodj=i..v gj  (mod ord)
+
+ +*TweakCtxInit* and *ApplyTweak* compute +
+  gacc0 = 1  (mod ord)
+  gacci = gi-1 · gacci-1  (mod ord) for i=1..v
+
+So we can rewrite above equation for the final public key as +
+  with_even_y(Qv) = gv · gaccv · Q0 + gv · taccv · G
+
+ +Then we have +
+  with_even_y(Qv) - gv·taccv·G
+    = gv·gaccv·Q0
+    = gv·gaccv·(λid1·P1 + ... + λidu·Pu)
+    = gv·gaccv·(λid1·d1'·G + ... + λidu·du'·G)
+    = sumj=1..u(gv·gaccv·λidj·dj')·G
+
+ +Intuitively, *gacci* tracks accumulated sign flipping and *tacci* tracks the accumulated tweak value after applying the first *i* individual tweaks. Additionally, *gv* indicates whether *Qv* needed to be negated to produce the final X-only result. Thus, participant *i* multiplies their secret share *di'* with *gv·gaccv* in the [*Sign*](#signing) algorithm. + +#### Negation of the Pubshare when Partially Verifying + +As explained in [Negation Of The Secret Share When Signing](#negation-of-the-secret-share-when-signing) the signer uses a possibly negated secret share +
+  d = gv·gaccv·d'  (mod ord)
+
+when producing a partial signature to ensure that the aggregate signature will correspond to a threshold public key with even Y coordinate. + +The [*PartialSigVerifyInternal*](#partial-signature-verification) algorithm is supposed to check +
+  s·G = Re* + e·λ·d·G
+
+ +The verifier doesn't have access to *d · G* but can construct it using the participant *pubshare* as follows: +
+d·G
+  = gv · gaccv · d' · G
+  = gv · gaccv · cpoint(pubshare)
+
+Note that the threshold public key and list of tweaks are inputs to partial signature verification, so the verifier can also construct *gv* and *gaccv*. + +### Dealing with Infinity in Nonce Aggregation + +If the coordinator provides *aggnonce = bytes(33,0) || bytes(33,0)*, either the coordinator is dishonest or there is at least one dishonest signer (except with negligible probability). +If signing aborted in this case, it would be impossible to determine who is dishonest. +Therefore, signing continues so that the culprit is revealed when collecting and verifying partial signatures. + +However, the final nonce *R* of a BIP340 Schnorr signature cannot be the point at infinity. +If we would nonetheless allow the final nonce to be the point at infinity, then the scheme would lose the following property: +if *PartialSigVerify* succeeds for all partial signatures, then *PartialSigAgg* will return a valid Schnorr signature. +Since this is a valuable feature, we modify [FROST3 signing][roast] to avoid producing an invalid Schnorr signature while still allowing detection of the dishonest signer: In *GetSessionValues*, if the final nonce *R* would be the point at infinity, set it to the generator instead (an arbitrary choice). + +This modification to *GetSessionValues* does not affect the unforgeability of the scheme. +Given a successful adversary against the unforgeability game (EUF-CMA) for the modified scheme, a reduction can win the unforgeability game for the original scheme by simulating the modification towards the adversary: +When the adversary provides *aggnonce' = bytes(33, 0) || bytes(33, 0)*, the reduction sets *aggnonce = cbytes_ext(G) || bytes(33, 0)*. +For any other *aggnonce'*, the reduction sets *aggnonce = aggnonce'*. +(The case that the adversary provides an *aggnonce' ≠ bytes(33, 0) || bytes(33, 0)* but nevertheless *R'* in *GetSessionValues* is the point at infinity happens only with negligible probability.) 
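The simulation step of this reduction is pure byte-string manipulation and can be sketched as below. The function name is hypothetical; the constant is the compressed encoding of the secp256k1 generator *G*, so the substituted *aggnonce* deserializes to *R1 = G* and *R2* = point at infinity, matching the modified rule *R = G*.

```python
# Sketch of the reduction's aggnonce substitution: map the all-zero
# aggnonce (infinity || infinity) to cbytes_ext(G) || bytes(33, 0),
# and pass every other aggnonce through unchanged.
CBYTES_G = bytes.fromhex(
    "0279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798"
)

def simulate_aggnonce(aggnonce_prime: bytes) -> bytes:
    assert len(aggnonce_prime) == 66
    if aggnonce_prime == bytes(66):
        return CBYTES_G + bytes(33)
    return aggnonce_prime
```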
+ +## Backwards Compatibility + +This document proposes a standard for the FROST threshold signature scheme that is compatible with [BIP340][bip340]. FROST is *not* compatible with ECDSA signatures traditionally used in Bitcoin. + +## Changelog + +- *0.3.6* (2026-01-28): Add MIT license file for reference code and other auxiliary files. +- *0.3.5* (2026-01-25): Update secp256k1lab to latest version, remove stub file, and fix formatting in the BIP text. +- *0.3.4* (2026-01-01): Add an example file to the reference code. +- *0.3.3* (2025-12-29): Replace the lengthy Introduction section with a concise Motivation section. +- *0.3.2* (2025-12-20): Use 2-of-3 keys in test vectors. +- *0.3.1* (2025-12-17): Update the Algorithms section to use secp256k1lab methods and types. +- *0.3.0* (2025-12-15): Introduces the following changes: + - Introduce *SignersContext* and define key material compatibility with *ValidateSignersCtx*. + - Rewrite the signing protocol assuming a coordinator, add sequence diagram, and warn key generation protocols to output Taproot-safe *threshold public key*. + - Remove *GetSessionInterpolatingValue*, *SessionHasSignerPubshare*, *ValidatePubshares*, and *ValidateThreshPubkey* algorithms + - Revert back to initializing *TweakCtxInit* with threshold public key instead of *pubshares* +- *0.2.3* (2025-11-25): Sync terminologies with the ChillDKG BIP. +- *0.2.2* (2025-11-11): Remove key generation test vectors as key generation is out of scope for this specification. +- *0.2.1* (2025-11-10): Vendor secp256k1lab library to provide *Scalar* and *GE* primitives. Restructure reference implementation into a Python package layout. +- *0.2.0* (2025-04-11): Includes minor fixes and the following major changes: + - Initialize *TweakCtxInit* using individual *pubshares* instead of the threshold public key. + - Add Python script to automate generation of test vectors. + - Represent participant identifiers as 4-byte integers in the range *0..n - 1* (inclusive). 
+- *0.1.0* (2024-07-31): Publication of draft BIP on the bitcoin-dev mailing list + +## Acknowledgments + +We thank Jonas Nick, Tim Ruffing, Jesse Posner, and Sebastian Falbesoner for their contributions to this document. + + +[bip32]: https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki +[bip340]: https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki +[bip341]: https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki + +[bip327]: https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki +[frost1]: https://eprint.iacr.org/2020/852 +[frost2]: https://eprint.iacr.org/2021/1375 +[stronger-security-frost]: https://eprint.iacr.org/2022/833 +[olaf]: https://eprint.iacr.org/2023/899 +[roast]: https://eprint.iacr.org/2022/550 + +[rfc9591]: https://www.rfc-editor.org/rfc/rfc9591.html diff --git a/bip-frost-signing/.markdownlint.json b/bip-frost-signing/.markdownlint.json new file mode 100644 index 0000000000..2c21fe5555 --- /dev/null +++ b/bip-frost-signing/.markdownlint.json @@ -0,0 +1,10 @@ +{ + "first-line-heading": false, + "line-length": false, + "no-inline-html": { "allowed_elements": ["sub", "sup", "pre"] }, + "no-duplicate-heading": { "siblings_only": true }, + "emphasis-style": { "style": "asterisk" }, + "strong-style": { "style": "asterisk" }, + "ul-style": { "style": "dash" }, + "fenced-code-language": false +} \ No newline at end of file diff --git a/bip-frost-signing/COPYING b/bip-frost-signing/COPYING new file mode 100644 index 0000000000..e4998218e3 --- /dev/null +++ b/bip-frost-signing/COPYING @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2024-2026 Sivaram Dhakshinamoorthy + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit 
persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. \ No newline at end of file diff --git a/bip-frost-signing/all.sh b/bip-frost-signing/all.sh new file mode 100755 index 0000000000..82643c3502 --- /dev/null +++ b/bip-frost-signing/all.sh @@ -0,0 +1,20 @@ +#!/bin/sh + +set -euo pipefail + +check_availability() { + command -v "$1" > /dev/null 2>&1 || { + echo >&2 "$1 is required but it's not installed. Aborting."; + exit 1; + } +} + +check_availability markdownlint-cli2 +check_availability typos + +markdownlint-cli2 ../bip-frost-signing.md --config ./.markdownlint.json || true +typos ../bip-frost-signing.md . || true + +cd python || exit 1 +./tests.sh +./example.py \ No newline at end of file diff --git a/bip-frost-signing/docs/frost-signing-flow.png b/bip-frost-signing/docs/frost-signing-flow.png new file mode 100644 index 0000000000..f0fb210930 Binary files /dev/null and b/bip-frost-signing/docs/frost-signing-flow.png differ diff --git a/bip-frost-signing/python/.ruff.toml b/bip-frost-signing/python/.ruff.toml new file mode 100644 index 0000000000..d8cd9cc98f --- /dev/null +++ b/bip-frost-signing/python/.ruff.toml @@ -0,0 +1,3 @@ +[format] +# Exclude vendored package. 
+exclude = ["secp256k1lab/*"] \ No newline at end of file diff --git a/bip-frost-signing/python/example.py b/bip-frost-signing/python/example.py new file mode 100755 index 0000000000..da0cd6c2db --- /dev/null +++ b/bip-frost-signing/python/example.py @@ -0,0 +1,320 @@ +#!/usr/bin/env python3 + +"""Example of a full FROST signing session.""" + +from typing import List, Tuple +import asyncio +import argparse +import secrets + +# Import frost_ref first to set up secp256k1lab path +from frost_ref import ( + nonce_gen, + nonce_agg, + sign, + partial_sig_agg, + partial_sig_verify, + SignersContext, + SessionContext, + PlainPk, +) +from frost_ref.signing import ( + thresh_pubkey_and_tweak, + get_xonly_pk, + partial_sig_verify_internal, +) + +from secp256k1lab.bip340 import schnorr_verify +from trusted_dealer import trusted_dealer_keygen + + +# +# Network mocks to simulate full FROST signing sessions +# + + +class CoordinatorChannels: + def __init__(self, n): + self.n = n + self.queues = [asyncio.Queue() for _ in range(n)] + self.participant_queues = None + + def set_participant_queues(self, participant_queues): + self.participant_queues = participant_queues + + def send_to(self, i, m): + assert self.participant_queues is not None + self.participant_queues[i].put_nowait(m) + + def send_all(self, m): + assert self.participant_queues is not None + for i in range(self.n): + self.participant_queues[i].put_nowait(m) + + async def receive_from(self, i: int) -> bytes: + return await self.queues[i].get() + + +class ParticipantChannel: + def __init__(self, coord_queue): + self.queue = asyncio.Queue() + self.coord_queue = coord_queue + + def send(self, m): + self.coord_queue.put_nowait(m) + + async def receive(self): + return await self.queue.get() + + +# +# Helper functions +# + + +def generate_frost_keys( + n: int, t: int +) -> Tuple[PlainPk, List[int], List[bytes], List[PlainPk]]: + """Generate t-of-n FROST keys using trusted dealer. 
+ + Returns: + thresh_pk: Threshold public key (33-byte compressed) + ids: List of signer IDs (0-indexed: 0, 1, ..., n-1) + secshares: List of secret shares (32-byte scalars) + pubshares: List of public shares (33-byte compressed) + """ + thresh_pk, secshares, pubshares = trusted_dealer_keygen( + secrets.token_bytes(32), n, t + ) + + assert len(secshares) == n + ids = list(range(len(secshares))) # ids are 0..n-1 + + return thresh_pk, ids, secshares, pubshares + + +# +# Protocol parties +# + + +async def participant( + chan: ParticipantChannel, + secshare: bytes, + pubshare: PlainPk, + my_id: int, + signers_ctx: SignersContext, + tweaks: List[bytes], + is_xonly: List[bool], + msg: bytes, +) -> Tuple[bytes, bytes]: + """ + Participant in FROST signing protocol. + + Returns: + (psig, final_sig): Partial signature and final BIP340 signature + """ + # Get tweaked threshold pubkey + tweak_ctx = thresh_pubkey_and_tweak(signers_ctx.thresh_pk, tweaks, is_xonly) + tweaked_thresh_pk = get_xonly_pk(tweak_ctx) + + # Round 1: Nonce generation + secnonce, pubnonce = nonce_gen(secshare, pubshare, tweaked_thresh_pk, msg, None) + chan.send(pubnonce) + aggnonce = await chan.receive() + + # Round 2: Signing + session_ctx = SessionContext(aggnonce, signers_ctx, tweaks, is_xonly, msg) + psig = sign(secnonce, secshare, my_id, session_ctx) + assert partial_sig_verify_internal(psig, my_id, pubnonce, pubshare, session_ctx), ( + "Partial signature verification failed" + ) + chan.send(psig) + + # Receive final signature + final_sig = await chan.receive() + return (psig, final_sig) + + +async def coordinator( + chans: CoordinatorChannels, + signers_ctx: SignersContext, + tweaks: List[bytes], + is_xonly: List[bool], + msg: bytes, +) -> bytes: + """ + Coordinator in FROST signing protocol. 
+
+    Returns:
+        final_sig: Final BIP340 signature (64 bytes)
+    """
+    # Determine the signers
+    signer_ids = signers_ctx.ids
+    num_signers = len(signer_ids)
+
+    # Round 1: Collect pubnonces
+    pubnonces = []
+    for i in range(num_signers):
+        pubnonce = await chans.receive_from(i)
+        pubnonces.append(pubnonce)
+
+    # Aggregate nonces
+    aggnonce = nonce_agg(pubnonces, signer_ids)
+    chans.send_all(aggnonce)
+
+    # Round 2: Collect partial signatures
+    session_ctx = SessionContext(aggnonce, signers_ctx, tweaks, is_xonly, msg)
+    psigs = []
+    for i in range(num_signers):
+        psig = await chans.receive_from(i)
+        assert partial_sig_verify(
+            psig, pubnonces, signers_ctx, tweaks, is_xonly, msg, i
+        ), f"Partial signature verification failed for signer {i}"
+        psigs.append(psig)
+
+    # Aggregate partial signatures
+    final_sig = partial_sig_agg(psigs, signer_ids, session_ctx)
+    chans.send_all(final_sig)
+
+    return final_sig
+
+
+#
+# Signing Session
+#
+
+
+def simulate_frost_signing(
+    secshares: List[bytes],
+    signers_ctx: SignersContext,
+    msg: bytes,
+    tweaks: List[bytes],
+    is_xonly: List[bool],
+) -> Tuple[bytes, List[bytes]]:
+    """Run a full FROST signing session.
+ + Returns: + (final_sig, psigs): Final signature and list of partial signatures + """ + # Extract signer set from signers_ctx + signer_ids = signers_ctx.ids + pubshares = signers_ctx.pubshares + num_signers = len(signer_ids) + + async def session(): + # Set up channels + coord_chans = CoordinatorChannels(num_signers) + participant_chans = [ + ParticipantChannel(coord_chans.queues[i]) for i in range(num_signers) + ] + coord_chans.set_participant_queues( + [participant_chans[i].queue for i in range(num_signers)] + ) + + # Create coroutines + coroutines = [coordinator(coord_chans, signers_ctx, tweaks, is_xonly, msg)] + [ + participant( + participant_chans[i], + secshares[i], + pubshares[i], + signer_ids[i], + signers_ctx, + tweaks, + is_xonly, + msg, + ) + for i in range(num_signers) + ] + + return await asyncio.gather(*coroutines) + + results = asyncio.run(session()) + final_sig = results[0] + psigs = [r[0] for r in results[1:]] # Extract psigs from participant results + return final_sig, psigs + + +def main(): + parser = argparse.ArgumentParser(description="FROST Signing example") + parser.add_argument( + "t", nargs="?", type=int, default=2, help="Threshold [default=2]" + ) + parser.add_argument( + "n", nargs="?", type=int, default=3, help="Participants [default=3]" + ) + args = parser.parse_args() + + t, n = args.t, args.n + assert 2 <= t <= n, "Threshold t must satisfy 2 <= t <= n" + + print("====== FROST Signing example session ======") + print(f"Using n = {n} participants and a threshold of t = {t}.") + print() + + # 1. Generate FROST keys + thresh_pk, all_ids, all_secshares, all_pubshares = generate_frost_keys(n, t) + + print("=== Key Configuration ===") + print(f"Threshold public key: {thresh_pk.hex()}") + print() + print("=== Public shares ===") + for i, pubshare in enumerate(all_pubshares): + print(f" Participant {all_ids[i]}: {pubshare.hex()}") + print() + + # 2. 
Select first t signers + signer_indices = list(range(t)) + signer_ids = [all_ids[i] for i in signer_indices] + signer_secshares = [all_secshares[i] for i in signer_indices] + signer_pubshares = [all_pubshares[i] for i in signer_indices] + + # 3. Initialize the signers context + print("=== Signing Set ===") + print(f"Selected signers: {signer_ids}") + print() + signers_ctx = SignersContext(n, t, signer_ids, signer_pubshares, thresh_pk) + + # 4. Create message and tweaks + msg = secrets.token_bytes(32) + + # Apply both plain (BIP32-style) and xonly (BIP341-style) tweaks + tweaks = [secrets.token_bytes(32), secrets.token_bytes(32)] + is_xonly = [False, True] # First: plain (BIP32), Second: xonly (BIP341) + + tweak_ctx = thresh_pubkey_and_tweak(thresh_pk, tweaks, is_xonly) + tweaked_thresh_pk = get_xonly_pk(tweak_ctx) + + print("=== Message and Tweaks ===") + print(f"Message: {msg.hex()}") + print(f"Tweak 1 (plain/BIP32): {tweaks[0].hex()}") + print(f"Tweak 2 (xonly/BIP341): {tweaks[1].hex()}") + print(f"Tweaked threshold public key: {tweaked_thresh_pk.hex()}") + print() + + # 5. Run signing protocol + final_sig, psigs = simulate_frost_signing( + signer_secshares, + signers_ctx, + msg, + tweaks, + is_xonly, + ) + + print("=== Participants Partial Signatures ===") + for i, psig in enumerate(psigs): + print(f" Participant {signer_ids[i]}: {psig.hex()}") + print() + + print("=== Final Signature ===") + print(f"BIP340 signature: {final_sig.hex()}") + print() + + # 6. 
Verify signature + assert schnorr_verify(msg, tweaked_thresh_pk, final_sig) + print("=== Verification ===") + print("Signature verified successfully!") + + +if __name__ == "__main__": + main() diff --git a/bip-frost-signing/python/frost_ref/__init__.py b/bip-frost-signing/python/frost_ref/__init__.py new file mode 100644 index 0000000000..d6a00bb380 --- /dev/null +++ b/bip-frost-signing/python/frost_ref/__init__.py @@ -0,0 +1,43 @@ +from pathlib import Path +import sys + +# Add the vendored copy of secp256k1lab to path. +sys.path.append(str(Path(__file__).parent / "../secp256k1lab/src")) + +from .signing import ( + # Functions + validate_signers_ctx, + nonce_gen, + nonce_agg, + sign, + deterministic_sign, + partial_sig_verify, + partial_sig_agg, + # Exceptions + InvalidContributionError, + # Types + PlainPk, + XonlyPk, + SignersContext, + TweakContext, + SessionContext, +) + +__all__ = [ + # Functions + "validate_signers_ctx", + "nonce_gen", + "nonce_agg", + "sign", + "deterministic_sign", + "partial_sig_verify", + "partial_sig_agg", + # Exceptions + "InvalidContributionError", + # Types + "PlainPk", + "XonlyPk", + "SignersContext", + "TweakContext", + "SessionContext", +] diff --git a/bip-frost-signing/python/frost_ref/signing.py b/bip-frost-signing/python/frost_ref/signing.py new file mode 100644 index 0000000000..709ed10386 --- /dev/null +++ b/bip-frost-signing/python/frost_ref/signing.py @@ -0,0 +1,499 @@ +# BIP FROST Signing reference implementation +# +# It's worth noting that many functions, types, and exceptions were directly +# copied or modified from the MuSig2 (BIP 327) reference code, found at: +# https://github.com/bitcoin/bips/blob/master/bip-0327/reference.py +# +# WARNING: This implementation is for demonstration purposes only and _not_ to +# be used in production environments. The code is vulnerable to timing attacks, +# for example. 
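A note on the share-combination math that `signing.py` builds on: `derive_interpolating_value` computes the Lagrange coefficient λ_i evaluated at x = 0, with participant id *i* mapped to the x-coordinate *i* + 1. The following standalone sketch uses plain integers modulo the secp256k1 group order and a toy 2-of-3 Shamir sharing to show why multiplying each share by its coefficient and summing recovers the shared secret (the names below are illustrative only and are not part of the reference code):

```python
# Illustrative sketch of Shamir-share recovery with the same id -> x = id + 1
# convention as derive_interpolating_value; plain ints, secp256k1 group order.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def interpolating_value(ids, my_id):
    # Lagrange coefficient for participant my_id, evaluated at x = 0.
    num, deno = 1, 1
    for curr_id in ids:
        if curr_id == my_id:
            continue
        num = num * (curr_id + 1) % N
        deno = deno * (curr_id - my_id) % N
    return num * pow(deno, -1, N) % N

# Toy 2-of-3 sharing of secret s with polynomial f(x) = s + a1*x.
s, a1 = 7, 11
shares = {i: (s + a1 * (i + 1)) % N for i in range(3)}  # id i holds f(i + 1)

for subset in ([0, 1], [0, 2], [1, 2]):
    recovered = sum(interpolating_value(subset, i) * shares[i] for i in subset) % N
    assert recovered == s
```

The reference code performs the same `num`/`deno` accumulation, but over the constant-time-unfriendly `Scalar` type rather than plain integers.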
+
+from typing import List, Optional, Tuple, NewType, NamedTuple, Sequence, Literal
+import secrets
+
+from secp256k1lab.secp256k1 import G, GE, Scalar
+from secp256k1lab.util import int_from_bytes, tagged_hash, xor_bytes
+
+PlainPk = NewType("PlainPk", bytes)
+XonlyPk = NewType("XonlyPk", bytes)
+ContribKind = Literal[
+    "aggothernonce", "aggnonce", "psig", "pubkey", "pubnonce", "pubshare"
+]
+
+# There are two types of exceptions that can be raised by this implementation:
+# - ValueError for indicating that an input doesn't conform to some function
+#   precondition (e.g. an input array is the wrong length, a serialized
+#   representation doesn't have the correct format).
+# - InvalidContributionError for indicating that a signer (or the
+#   coordinator) is misbehaving in the protocol.
+#
+# Assertions are used to (1) satisfy the type-checking system, and (2) check for
+# inconvenient events that can't happen except with negligible probability (e.g.
+# output of a hash function is 0) and can't be manually triggered by any
+# signer.
+
+
+# This exception is raised if a party (signer or coordinator) sends invalid
+# values. Actual implementations should not crash when receiving invalid
+# contributions. Instead, they should hold the offending party accountable.
+class InvalidContributionError(Exception):
+    def __init__(self, signer_id: Optional[int], contrib: ContribKind) -> None:
+        # participant identifier of the signer who sent the invalid value
+        self.id = signer_id
+        # contrib is one of the ContribKind values listed above.
+ self.contrib = contrib + + +def derive_interpolating_value(ids: List[int], my_id: int) -> Scalar: + assert my_id in ids + assert 0 <= my_id < 2**32 + assert len(set(ids)) == len(ids) + num = Scalar(1) + deno = Scalar(1) + for curr_id in ids: + if curr_id == my_id: + continue + num *= Scalar(curr_id + 1) + deno *= Scalar(curr_id - my_id) + return num / deno + + +def derive_thresh_pubkey(ids: List[int], pubshares: List[PlainPk]) -> PlainPk: + Q = GE() + for my_id, pubshare in zip(ids, pubshares): + try: + X_i = GE.from_bytes_compressed(pubshare) + except ValueError: + raise InvalidContributionError(my_id, "pubshare") + lam_i = derive_interpolating_value(ids, my_id) + Q = Q + lam_i * X_i + # Q is not the point at infinity except with negligible probability. + assert not Q.infinity + return PlainPk(Q.to_bytes_compressed()) + + +# REVIEW: should we remove n and t from this struct? +class SignersContext(NamedTuple): + n: int + t: int + ids: List[int] + pubshares: List[PlainPk] + thresh_pk: PlainPk + + +def validate_signers_ctx(signers_ctx: SignersContext) -> None: + n, t, ids, pubshares, thresh_pk = signers_ctx + assert t <= n + if not t <= len(ids) <= n: + raise ValueError("The number of signers must be between t and n.") + if len(pubshares) != len(ids): + raise ValueError("The pubshares and ids arrays must have the same length.") + for i, pubshare in zip(ids, pubshares): + if not 0 <= i <= n - 1: + raise ValueError(f"The participant identifier {i} is out of range.") + try: + _ = GE.from_bytes_compressed(pubshare) + except ValueError: + raise InvalidContributionError(i, "pubshare") + if len(set(ids)) != len(ids): + raise ValueError("The participant identifier list contains duplicate elements.") + if derive_thresh_pubkey(ids, pubshares) != thresh_pk: + raise ValueError("The provided key material is incorrect.") + + +class TweakContext(NamedTuple): + Q: GE + gacc: Scalar + tacc: Scalar + + +def get_xonly_pk(tweak_ctx: TweakContext) -> XonlyPk: + Q, _, _ = tweak_ctx + 
return XonlyPk(Q.to_bytes_xonly()) + + +def get_plain_pk(tweak_ctx: TweakContext) -> PlainPk: + Q, _, _ = tweak_ctx + return PlainPk(Q.to_bytes_compressed()) + + +def tweak_ctx_init(thresh_pk: PlainPk) -> TweakContext: + Q = GE.from_bytes_compressed(thresh_pk) + gacc = Scalar(1) + tacc = Scalar(0) + return TweakContext(Q, gacc, tacc) + + +def apply_tweak(tweak_ctx: TweakContext, tweak: bytes, is_xonly: bool) -> TweakContext: + if len(tweak) != 32: + raise ValueError("The tweak must be a 32-byte array.") + Q, gacc, tacc = tweak_ctx + if is_xonly and not Q.has_even_y(): + g = Scalar(-1) + else: + g = Scalar(1) + try: + twk = Scalar.from_bytes_checked(tweak) + except ValueError: + raise ValueError("The tweak must be less than n.") + Q_ = g * Q + twk * G + if Q_.infinity: + raise ValueError("The result of tweaking cannot be infinity.") + gacc_ = g * gacc + tacc_ = twk + g * tacc + return TweakContext(Q_, gacc_, tacc_) + + +def nonce_hash( + rand: bytes, + pubshare: PlainPk, + thresh_pk: XonlyPk, + i: int, + msg_prefixed: bytes, + extra_in: bytes, +) -> bytes: + buf = b"" + buf += rand + buf += len(pubshare).to_bytes(1, "big") + buf += pubshare + buf += len(thresh_pk).to_bytes(1, "big") + buf += thresh_pk + buf += msg_prefixed + buf += len(extra_in).to_bytes(4, "big") + buf += extra_in + buf += i.to_bytes(1, "big") + return tagged_hash("FROST/nonce", buf) + + +def nonce_gen_internal( + rand_: bytes, + secshare: Optional[bytes], + pubshare: Optional[PlainPk], + thresh_pk: Optional[XonlyPk], + msg: Optional[bytes], + extra_in: Optional[bytes], +) -> Tuple[bytearray, bytes]: + if secshare is not None: + rand = xor_bytes(secshare, tagged_hash("FROST/aux", rand_)) + else: + rand = rand_ + if pubshare is None: + pubshare = PlainPk(b"") + if thresh_pk is None: + thresh_pk = XonlyPk(b"") + if msg is None: + msg_prefixed = b"\x00" + else: + msg_prefixed = b"\x01" + msg_prefixed += len(msg).to_bytes(8, "big") + msg_prefixed += msg + if extra_in is None: + extra_in = b"" + k_1 = 
Scalar.from_bytes_wrapping(
+        nonce_hash(rand, pubshare, thresh_pk, 0, msg_prefixed, extra_in)
+    )
+    k_2 = Scalar.from_bytes_wrapping(
+        nonce_hash(rand, pubshare, thresh_pk, 1, msg_prefixed, extra_in)
+    )
+    # k_1 == 0 or k_2 == 0 cannot occur except with negligible probability.
+    assert k_1 != 0
+    assert k_2 != 0
+    R1_partial = k_1 * G
+    R2_partial = k_2 * G
+    assert not R1_partial.infinity
+    assert not R2_partial.infinity
+    pubnonce = R1_partial.to_bytes_compressed() + R2_partial.to_bytes_compressed()
+    # use mutable `bytearray` since secnonce needs to be replaced with zeros during signing.
+    secnonce = bytearray(k_1.to_bytes() + k_2.to_bytes())
+    return secnonce, pubnonce
+
+
+# think: can msg & extra_in be of any length here?
+# think: why doesn't musig2 ref code check for `pk` length here?
+# REVIEW: Why should thresh_pk be XOnlyPk here? Shouldn't it be PlainPk?
+def nonce_gen(
+    secshare: Optional[bytes],
+    pubshare: Optional[PlainPk],
+    thresh_pk: Optional[XonlyPk],
+    msg: Optional[bytes],
+    extra_in: Optional[bytes],
+) -> Tuple[bytearray, bytes]:
+    if secshare is not None and len(secshare) != 32:
+        raise ValueError("The optional byte array secshare must have length 32.")
+    if pubshare is not None and len(pubshare) != 33:
+        raise ValueError("The optional byte array pubshare must have length 33.")
+    if thresh_pk is not None and len(thresh_pk) != 32:
+        raise ValueError("The optional byte array thresh_pk must have length 32.")
+    # bench: will adding an individual_pk(secshare) == pubshare check increase the execution time significantly?
+    rand_ = secrets.token_bytes(32)
+    return nonce_gen_internal(rand_, secshare, pubshare, thresh_pk, msg, extra_in)
+
+
+# REVIEW should we raise value errors for:
+# (1) duplicate ids
+# (2) 0 <= id < max_participants < 2^32
+# in each function that takes `ids` as argument?
+
+
+# `ids` is typed as Sequence[Optional[int]] so that callers can pass either
+# List[int] or List[Optional[int]] without triggering mypy invariance errors.
+# Sequence is read-only and covariant. +def nonce_agg(pubnonces: List[bytes], ids: Sequence[Optional[int]]) -> bytes: + if len(pubnonces) != len(ids): + raise ValueError("The pubnonces and ids arrays must have the same length.") + aggnonce = b"" + for j in (1, 2): + R_j = GE() + for my_id, pubnonce in zip(ids, pubnonces): + try: + R_ij = GE.from_bytes_compressed(pubnonce[(j - 1) * 33 : j * 33]) + except ValueError: + raise InvalidContributionError(my_id, "pubnonce") + R_j = R_j + R_ij + aggnonce += R_j.to_bytes_compressed_with_infinity() + return aggnonce + + +class SessionContext(NamedTuple): + aggnonce: bytes + signers_ctx: SignersContext + tweaks: List[bytes] + is_xonly: List[bool] + msg: bytes + + +def thresh_pubkey_and_tweak( + thresh_pk: PlainPk, tweaks: List[bytes], is_xonly: List[bool] +) -> TweakContext: + if len(tweaks) != len(is_xonly): + raise ValueError("The tweaks and is_xonly arrays must have the same length.") + tweak_ctx = tweak_ctx_init(thresh_pk) + v = len(tweaks) + for i in range(v): + tweak_ctx = apply_tweak(tweak_ctx, tweaks[i], is_xonly[i]) + return tweak_ctx + + +def get_session_values( + session_ctx: SessionContext, +) -> Tuple[GE, Scalar, Scalar, List[int], List[PlainPk], Scalar, GE, Scalar]: + (aggnonce, signers_ctx, tweaks, is_xonly, msg) = session_ctx + validate_signers_ctx(signers_ctx) + _, _, ids, pubshares, thresh_pk = signers_ctx + Q, gacc, tacc = thresh_pubkey_and_tweak(thresh_pk, tweaks, is_xonly) + # sort the ids before serializing because ROAST paper considers them as a set + ser_ids = serialize_ids(ids) + b = Scalar.from_bytes_wrapping( + tagged_hash("FROST/noncecoef", ser_ids + aggnonce + Q.to_bytes_xonly() + msg) + ) + assert b != 0 + try: + R1 = GE.from_bytes_compressed_with_infinity(aggnonce[0:33]) + R2 = GE.from_bytes_compressed_with_infinity(aggnonce[33:66]) + except ValueError: + # coordinator sent invalid aggnonce + raise InvalidContributionError(None, "aggnonce") + R_ = R1 + b * R2 + R = R_ if not R_.infinity else G + 
assert not R.infinity
+    e = Scalar.from_bytes_wrapping(
+        tagged_hash("BIP0340/challenge", R.to_bytes_xonly() + Q.to_bytes_xonly() + msg)
+    )
+    assert e != 0
+    return (Q, gacc, tacc, ids, pubshares, b, R, e)
+
+
+def serialize_ids(ids: List[int]) -> bytes:
+    # REVIEW assert for ids not being unsigned values?
+    sorted_ids = sorted(ids)
+    ser_ids = b"".join(i.to_bytes(4, byteorder="big", signed=False) for i in sorted_ids)
+    return ser_ids
+
+
+def sign(
+    secnonce: bytearray, secshare: bytes, my_id: int, session_ctx: SessionContext
+) -> bytes:
+    (Q, gacc, _, ids, pubshares, b, R, e) = get_session_values(session_ctx)
+    try:
+        k_1_ = Scalar.from_bytes_nonzero_checked(bytes(secnonce[0:32]))
+    except ValueError:
+        raise ValueError("first secnonce value is out of range.")
+    try:
+        k_2_ = Scalar.from_bytes_nonzero_checked(bytes(secnonce[32:64]))
+    except ValueError:
+        raise ValueError("second secnonce value is out of range.")
+    # Overwrite the secnonce argument with zeros such that subsequent calls of
+    # sign with the same secnonce raise a ValueError.
+    secnonce[:] = bytearray(b"\x00" * 64)
+    k_1 = k_1_ if R.has_even_y() else -k_1_
+    k_2 = k_2_ if R.has_even_y() else -k_2_
+    d_ = int_from_bytes(secshare)
+    if not 0 < d_ < GE.ORDER:
+        raise ValueError("The signer's secret share value is out of range.")
+    P = d_ * G
+    assert not P.infinity
+    my_pubshare = P.to_bytes_compressed()
+    # REVIEW: do we actually need this check? MuSig2 embeds pk in secnonce to prevent
+    # Wagner's attack related to tweaked pubkeys, but here we don't have that issue.
+    # If we don't need to worry about that attack, we can remove pubshare from
+    # get_session_values return values.
+    if my_pubshare not in pubshares:
+        raise ValueError(
+            "The signer's pubshare must be included in the list of pubshares."
+        )
+    # REVIEW: do we actually need this check?
+    if my_id not in ids:
+        raise ValueError(
+            "The signer's id must be present in the participant identifier list."
+
+        )
+    a = derive_interpolating_value(ids, my_id)
+    g = Scalar(1) if Q.has_even_y() else Scalar(-1)
+    d = g * gacc * d_
+    s = k_1 + b * k_2 + e * a * d
+    psig = s.to_bytes()
+    R1_partial = k_1_ * G
+    R2_partial = k_2_ * G
+    assert not R1_partial.infinity
+    assert not R2_partial.infinity
+    pubnonce = R1_partial.to_bytes_compressed() + R2_partial.to_bytes_compressed()
+    # Optional correctness check. The result of signing should pass signature verification.
+    assert partial_sig_verify_internal(psig, my_id, pubnonce, my_pubshare, session_ctx)
+    return psig
+
+
+# REVIEW should we hash the signer set (or pubshares) too? Otherwise the same nonce will be generated even if the signer set changes
+def det_nonce_hash(
+    secshare_: bytes, aggothernonce: bytes, tweaked_tpk: bytes, msg: bytes, i: int
+) -> bytes:
+    buf = b""
+    buf += secshare_
+    buf += aggothernonce
+    buf += tweaked_tpk
+    buf += len(msg).to_bytes(8, "big")
+    buf += msg
+    buf += i.to_bytes(1, "big")
+    return tagged_hash("FROST/deterministic/nonce", buf)
+
+
+COORDINATOR_ID = None
+
+
+def deterministic_sign(
+    secshare: bytes,
+    my_id: int,
+    aggothernonce: bytes,
+    signers_ctx: SignersContext,
+    tweaks: List[bytes],
+    is_xonly: List[bool],
+    msg: bytes,
+    rand: Optional[bytes],
+) -> Tuple[bytes, bytes]:
+    if rand is not None:
+        secshare_ = xor_bytes(secshare, tagged_hash("FROST/aux", rand))
+    else:
+        secshare_ = secshare
+    # REVIEW: do we need to add any check for ids & pubshares (in signers_ctx context) here?
+    validate_signers_ctx(signers_ctx)
+    _, _, _, _, thresh_pk = signers_ctx
+    tweaked_tpk = get_xonly_pk(thresh_pubkey_and_tweak(thresh_pk, tweaks, is_xonly))
+
+    k_1 = Scalar.from_bytes_wrapping(
+        det_nonce_hash(secshare_, aggothernonce, tweaked_tpk, msg, 0)
+    )
+    k_2 = Scalar.from_bytes_wrapping(
+        det_nonce_hash(secshare_, aggothernonce, tweaked_tpk, msg, 1)
+    )
+    # k_1 == 0 or k_2 == 0 cannot occur except with negligible probability.
+ assert k_1 != 0 + assert k_2 != 0 + + R1_partial = k_1 * G + R2_partial = k_2 * G + assert not R1_partial.infinity + assert not R2_partial.infinity + pubnonce = R1_partial.to_bytes_compressed() + R2_partial.to_bytes_compressed() + secnonce = bytearray(k_1.to_bytes() + k_2.to_bytes()) + try: + aggnonce = nonce_agg([pubnonce, aggothernonce], [my_id, COORDINATOR_ID]) + except Exception: + # Since `pubnonce` can never be invalid, blame coordinator's pubnonce. + # REVIEW: should we introduce an unknown participant or coordinator error? + raise InvalidContributionError(COORDINATOR_ID, "aggothernonce") + session_ctx = SessionContext(aggnonce, signers_ctx, tweaks, is_xonly, msg) + psig = sign(secnonce, secshare, my_id, session_ctx) + return (pubnonce, psig) + + +def partial_sig_verify( + psig: bytes, + pubnonces: List[bytes], + signers_ctx: SignersContext, + tweaks: List[bytes], + is_xonly: List[bool], + msg: bytes, + i: int, +) -> bool: + validate_signers_ctx(signers_ctx) + _, _, ids, pubshares, _ = signers_ctx + if len(pubnonces) != len(ids): + raise ValueError("The pubnonces and ids arrays must have the same length.") + if len(tweaks) != len(is_xonly): + raise ValueError("The tweaks and is_xonly arrays must have the same length.") + aggnonce = nonce_agg(pubnonces, ids) + session_ctx = SessionContext(aggnonce, signers_ctx, tweaks, is_xonly, msg) + return partial_sig_verify_internal( + psig, ids[i], pubnonces[i], pubshares[i], session_ctx + ) + + +# REVIEW: catch `cpoint` ValueError and return false +def partial_sig_verify_internal( + psig: bytes, + my_id: int, + pubnonce: bytes, + pubshare: bytes, + session_ctx: SessionContext, +) -> bool: + (Q, gacc, _, ids, pubshares, b, R, e) = get_session_values(session_ctx) + try: + s = Scalar.from_bytes_nonzero_checked(psig) + except ValueError: + return False + if pubshare not in pubshares: + return False + if my_id not in ids: + return False + try: + R1_partial = GE.from_bytes_compressed(pubnonce[0:33]) + R2_partial = 
GE.from_bytes_compressed(pubnonce[33:66]) + except ValueError: + return False + Re_s_ = R1_partial + b * R2_partial + Re_s = Re_s_ if R.has_even_y() else -Re_s_ + try: + P = GE.from_bytes_compressed(pubshare) + except ValueError: + return False + a = derive_interpolating_value(ids, my_id) + g = Scalar(1) if Q.has_even_y() else Scalar(-1) + g_ = g * gacc + return s * G == Re_s + (e * a * g_) * P + + +def partial_sig_agg( + psigs: List[bytes], ids: List[int], session_ctx: SessionContext +) -> bytes: + assert COORDINATOR_ID not in ids + if len(psigs) != len(ids): + raise ValueError("The psigs and ids arrays must have the same length.") + (Q, _, tacc, _, _, _, R, e) = get_session_values(session_ctx) + s = Scalar(0) + for my_id, psig in zip(ids, psigs): + try: + s_i = Scalar.from_bytes_checked(psig) + except ValueError: + raise InvalidContributionError(my_id, "psig") + s = s + s_i + g = Scalar(1) if Q.has_even_y() else Scalar(-1) + s = s + e * g * tacc + return R.to_bytes_xonly() + s.to_bytes() diff --git a/bip-frost-signing/python/gen_vectors.py b/bip-frost-signing/python/gen_vectors.py new file mode 100755 index 0000000000..912e11e9c1 --- /dev/null +++ b/bip-frost-signing/python/gen_vectors.py @@ -0,0 +1,1466 @@ +#!/usr/bin/env python3 + +import json +import os +import shutil +import sys +from typing import Dict, List, Sequence, Union +import secrets +import pprint + +from frost_ref import ( + InvalidContributionError, + SessionContext, + SignersContext, + deterministic_sign, + nonce_agg, + partial_sig_agg, + partial_sig_verify, + sign, +) +from frost_ref.signing import nonce_gen_internal +from secp256k1lab.secp256k1 import GE, Scalar +from secp256k1lab.keys import pubkey_gen_plain +from trusted_dealer import trusted_dealer_keygen + + +def bytes_to_hex(data: bytes) -> str: + return data.hex().upper() + + +def bytes_list_to_hex(lst: Sequence[bytes]) -> List[str]: + return [l_i.hex().upper() for l_i in lst] + + +def hex_list_to_bytes(lst: List[str]) -> List[bytes]: + 
return [bytes.fromhex(l_i) for l_i in lst] + + +def int_list_to_bytes(lst: List[int]) -> List[bytes]: + return [Scalar(x).to_bytes() for x in lst] + + +ErrorInfo = Dict[str, Union[int, str, None, "ErrorInfo"]] + + +def exception_asdict(e: Exception) -> dict: + error_info: ErrorInfo = {"type": e.__class__.__name__} + + for key, value in e.__dict__.items(): + if isinstance(value, (str, int, type(None))): + error_info[key] = value + elif isinstance(value, bytes): + error_info[key] = bytes_to_hex(value) + else: + raise NotImplementedError( + f"Conversion for type {type(value).__name__} is not implemented" + ) + + # If the last argument is not found in the instance’s attributes and + # is a string, treat it as an extra message. + if e.args and isinstance(e.args[-1], str) and e.args[-1] not in e.__dict__.values(): + error_info.setdefault("message", e.args[-1]) + return error_info + + +def expect_exception(try_fn, expected_exception): + try: + try_fn() + except expected_exception as e: + return exception_asdict(e) + except Exception as e: + raise AssertionError(f"Wrong exception raised: {type(e).__name__}") + else: + raise AssertionError("Expected exception") + + +COMMON_RAND = bytes.fromhex( + "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F" +) + +COMMON_MSGS = [ + bytes.fromhex( + "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF" + ), # 32-byte message + bytes.fromhex(""), # Empty message + bytes.fromhex( + "2626262626262626262626262626262626262626262626262626262626262626262626262626" + ), # 38-byte message +] + +COMMON_TWEAKS = hex_list_to_bytes( + [ + "E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB", + "AE2EA797CC0FE72AC5B97B97F3C6957D7E4199A167A58EB08BCAFFDA70AC0455", + "F52ECBC565B3D8BEA2DFD5B75A4F457E54369809322E4120831626F290FA87E0", + "1969AD73CC177FA0B4FCED6DF1F7BF9907E665FDE9BA196A74FED0A3CF5AEF9D", + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", # Invalid (exceeds group size) + ] +) + 
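The last `COMMON_TWEAKS` entry is flagged invalid because `apply_tweak` requires a tweak strictly below the group order, and that entry is exactly the secp256k1 group order *n*. A standalone illustration with plain integers (the hard-coded constant is the well-known secp256k1 order, written out here for the example rather than imported from the reference code):

```python
# Standalone illustration: the "invalid" tweak above is exactly the secp256k1
# group order n, so it fails the tweak < n requirement; valid entries are below n.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

invalid_tweak = bytes.fromhex(
    "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"
)
assert int.from_bytes(invalid_tweak, "big") == N  # equal to n -> rejected

valid_tweak = bytes.fromhex(
    "E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB"
)
assert 0 < int.from_bytes(valid_tweak, "big") < N  # below n -> accepted
```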
+SIG_AGG_TWEAKS = hex_list_to_bytes( + [ + "B511DA492182A91B0FFB9A98020D55F260AE86D7ECBD0399C7383D59A5F2AF7C", + "A815FE049EE3C5AAB66310477FBC8BCCCAC2F3395F59F921C364ACD78A2F48DC", + "75448A87274B056468B977BE06EB1E9F657577B7320B0A3376EA51FD420D18A8", + ] +) + +INVALID_PUBSHARE = bytes.fromhex( + "020000000000000000000000000000000000000000000000000000000000000007" +) + + +def write_test_vectors(filename, vectors): + output_file = os.path.join("vectors", filename) + with open(output_file, "w") as f: + json.dump(vectors, f, indent=4) + + +def get_common_setup(): + t, n, thresh_pk_ge, secshares, pubshares = frost_keygen_fixed() + return ( + n, + t, + thresh_pk_ge.to_bytes_compressed(), + thresh_pk_ge.to_bytes_xonly(), + list(range(n)), + secshares, + pubshares, + ) + + +def generate_all_nonces(rand, secshares, pubshares, xonly_thresh_pk, msg=None): + secnonces = [] + pubnonces = [] + for i in range(len(secshares)): + sec, pub = nonce_gen_internal( + rand, secshares[i], pubshares[i], xonly_thresh_pk, msg, None + ) + secnonces.append(sec) + pubnonces.append(pub) + return secnonces, pubnonces + + +def frost_keygen_fixed(): + n = 3 + t = 2 + thresh_pubkey_bytes = bytes.fromhex( + "03B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237" + ) + thresh_pubkey_ge = GE.from_bytes_compressed(thresh_pubkey_bytes) + secshares = hex_list_to_bytes( + [ + "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "62A04F63F105A40FCF25634AA645D77AAC692641916E4DFC8C1EEC83CAB5BEBA", + "F86DAF82883042BC4DB8EE93C2E079AF3D1A9A6DCD24935ED8BE959F9274FCC4", + ] + ) + pubshares = hex_list_to_bytes( + [ + "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "02EC6444271D791A1DA95300329DB2268611B9C60E193DABFDEE0AA816AE512583", + "03113F810F612567D9552F46AF9BDA21A67D52060F95BD4A723F4B60B1820D3676", + ] + ) + return (t, n, thresh_pubkey_ge, secshares, pubshares) + + +# NOTE: This function is used only once to generate a long-term key for 
frost_keygen_fixed(). It is intentionally not called anywhere else. It will be used in case we decide to change the long-term key, in future. +def frost_keygen_random(): + random_scalar = Scalar.from_bytes_nonzero_checked(secrets.token_bytes(32)) + threshold_seckey = random_scalar.to_bytes() + threshold_pubkey = pubkey_gen_plain(threshold_seckey) + output_tpk, secshares, pubshares = trusted_dealer_keygen(random_scalar, 3, 2) + assert threshold_pubkey == output_tpk + + print(f"threshold secret key: {threshold_seckey.hex().upper()}") + print(f"threshold public key: {threshold_pubkey.hex().upper()}") + print("secret shares:") + pprint.pprint(bytes_list_to_hex(secshares)) + print("public shares:") + pprint.pprint(bytes_list_to_hex(pubshares)) + + +def generate_nonce_gen_vectors(): + vectors = {"test_cases": []} + + _, _, thresh_pk_ge, secshares, pubshares = frost_keygen_fixed() + extra_in = bytes.fromhex( + "0808080808080808080808080808080808080808080808080808080808080808" + ) + xonly_thresh_pk = thresh_pk_ge.to_bytes_xonly() + + # --- Valid Test Case 1 --- + msg = bytes.fromhex( + "0101010101010101010101010101010101010101010101010101010101010101" + ) + secnonce, pubnonce = nonce_gen_internal( + COMMON_RAND, secshares[0], pubshares[0], xonly_thresh_pk, msg, extra_in + ) + vectors["test_cases"].append( + { + "rand_": bytes_to_hex(COMMON_RAND), + "secshare": bytes_to_hex(secshares[0]), + "pubshare": bytes_to_hex(pubshares[0]), + "threshold_pubkey": bytes_to_hex(xonly_thresh_pk), + "msg": bytes_to_hex(msg), + "extra_in": bytes_to_hex(extra_in), + "expected_secnonce": bytes_to_hex(secnonce), + "expected_pubnonce": bytes_to_hex(pubnonce), + "comment": "", + } + ) + # --- Valid Test Case 2 --- + secnonce, pubnonce = nonce_gen_internal( + COMMON_RAND, + secshares[0], + pubshares[0], + xonly_thresh_pk, + COMMON_MSGS[1], + extra_in, + ) + vectors["test_cases"].append( + { + "rand_": bytes_to_hex(COMMON_RAND), + "secshare": bytes_to_hex(secshares[0]), + "pubshare": 
bytes_to_hex(pubshares[0]), + "threshold_pubkey": bytes_to_hex(xonly_thresh_pk), + "msg": bytes_to_hex(COMMON_MSGS[1]), + "extra_in": bytes_to_hex(extra_in), + "expected_secnonce": bytes_to_hex(secnonce), + "expected_pubnonce": bytes_to_hex(pubnonce), + "comment": "Empty Message", + } + ) + # --- Valid Test Case 3 --- + secnonce, pubnonce = nonce_gen_internal( + COMMON_RAND, + secshares[0], + pubshares[0], + xonly_thresh_pk, + COMMON_MSGS[2], + extra_in, + ) + vectors["test_cases"].append( + { + "rand_": bytes_to_hex(COMMON_RAND), + "secshare": bytes_to_hex(secshares[0]), + "pubshare": bytes_to_hex(pubshares[0]), + "threshold_pubkey": bytes_to_hex(xonly_thresh_pk), + "msg": bytes_to_hex(COMMON_MSGS[2]), + "extra_in": bytes_to_hex(extra_in), + "expected_secnonce": bytes_to_hex(secnonce), + "expected_pubnonce": bytes_to_hex(pubnonce), + "comment": "38-byte message", + } + ) + # --- Valid Test Case 4 --- + secnonce, pubnonce = nonce_gen_internal(COMMON_RAND, None, None, None, None, None) + vectors["test_cases"].append( + { + "rand_": bytes_to_hex(COMMON_RAND), + "secshare": None, + "pubshare": None, + "threshold_pubkey": None, + "msg": None, + "extra_in": None, + "expected_secnonce": bytes_to_hex(secnonce), + "expected_pubnonce": bytes_to_hex(pubnonce), + "comment": "Every optional parameter is absent", + } + ) + + write_test_vectors("nonce_gen_vectors.json", vectors) + + +# REVIEW: we can simply use the pubnonces directly in the valid & error +# test cases, instead of referencing their indices +def generate_nonce_agg_vectors(): + vectors = dict() + + # Special pubnonce indices for test cases + INVALID_TAG_IDX = 4 # Pubnonce with wrong tag 0x04 + INVALID_XCOORD_IDX = 5 # Pubnonce with invalid X coordinate + INVALID_EXCEEDS_FIELD_IDX = 6 # Pubnonce X exceeds field size + + pubnonces = hex_list_to_bytes( + [ + "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E66603BA47FBC1834437B3212E89A84D8425E7BF12E0245D98262268EBDCB385D50641", + 
"03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833", + "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E6660279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798", + "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60379BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798", + "04FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833", + "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B831", + "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A602FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30", + ] + ) + vectors["pubnonces"] = bytes_list_to_hex(pubnonces) + + vectors["valid_test_cases"] = [] + # --- Valid Test Case 1 --- + pubnonce_indices = [0, 1] + curr_pubnonces = [pubnonces[i] for i in pubnonce_indices] + pids = [0, 1] + aggnonce = nonce_agg(curr_pubnonces, pids) + vectors["valid_test_cases"].append( + { + "pubnonce_indices": pubnonce_indices, + "participant_identifiers": pids, + "expected_aggnonce": bytes_to_hex(aggnonce), + } + ) + # --- Valid Test Case 2 --- + pubnonce_indices = [2, 3] + curr_pubnonces = [pubnonces[i] for i in pubnonce_indices] + pids = [0, 1] + aggnonce = nonce_agg(curr_pubnonces, pids) + vectors["valid_test_cases"].append( + { + "pubnonce_indices": pubnonce_indices, + "participant_identifiers": pids, + "expected_aggnonce": bytes_to_hex(aggnonce), + "comment": "Sum of second points encoded in the nonces is point at infinity which is serialized as 33 zero bytes", + } + ) + + vectors["error_test_cases"] = [] + # --- Error Test Case 1 --- + pubnonce_indices = [0, INVALID_TAG_IDX] + curr_pubnonces = [pubnonces[i] for i in pubnonce_indices] + pids = [0, 1] + error = expect_exception( + lambda: nonce_agg(curr_pubnonces, pids), 
InvalidContributionError
+    )
+    vectors["error_test_cases"].append(
+        {
+            "pubnonce_indices": pubnonce_indices,
+            "participant_identifiers": pids,
+            "error": error,
+            "comment": "Public nonce from signer 1 is invalid due to wrong tag, 0x04, in the first half",
+        }
+    )
+    # --- Error Test Case 2 ---
+    pubnonce_indices = [INVALID_XCOORD_IDX, 1]
+    curr_pubnonces = [pubnonces[i] for i in pubnonce_indices]
+    pids = [0, 1]
+    error = expect_exception(
+        lambda: nonce_agg(curr_pubnonces, pids), InvalidContributionError
+    )
+    vectors["error_test_cases"].append(
+        {
+            "pubnonce_indices": pubnonce_indices,
+            "participant_identifiers": pids,
+            "error": error,
+            "comment": "Public nonce from signer 0 is invalid because the second half does not correspond to an X coordinate",
+        }
+    )
+    # --- Error Test Case 3 ---
+    pubnonce_indices = [INVALID_EXCEEDS_FIELD_IDX, 1]
+    curr_pubnonces = [pubnonces[i] for i in pubnonce_indices]
+    pids = [0, 1]
+    error = expect_exception(
+        lambda: nonce_agg(curr_pubnonces, pids), InvalidContributionError
+    )
+    vectors["error_test_cases"].append(
+        {
+            "pubnonce_indices": pubnonce_indices,
+            "participant_identifiers": pids,
+            "error": error,
+            "comment": "Public nonce from signer 0 is invalid because second half exceeds field size",
+        }
+    )
+
+    write_test_vectors("nonce_agg_vectors.json", vectors)
+
+
+# TODO: Remove `pubnonces` param from these vectors. It's not used.
+def generate_sign_verify_vectors(): + vectors = dict() + + n, t, thresh_pk, xonly_thresh_pk, ids, secshares, pubshares = get_common_setup() + secshare_p0 = secshares[0] + + # Special indices for test cases + INVALID_PUBSHARE_IDX = 3 # Invalid pubshare (appended to list) + INV_PUBNONCE_IDX = 4 # Inverse pubnonce (for infinity test) + SECNONCE_ZERO_IDX = 1 # All-zero secnonce (nonce reuse) + AGGNONCE_INF_IDX = 3 # Aggnonce with both halves as infinity + AGGNONCE_INVALID_TAG_IDX = 4 # Invalid tag 0x04 + AGGNONCE_INVALID_XCOORD_IDX = 5 # Invalid X coordinate + AGGNONCE_INVALID_EXCEEDS_FIELD_IDX = 6 # X exceeds field size + + vectors["n"] = n + vectors["t"] = t + vectors["threshold_pubkey"] = bytes_to_hex(thresh_pk) + vectors["secshare_p0"] = bytes_to_hex(secshare_p0) + vectors["identifiers"] = ids + pubshares.append(INVALID_PUBSHARE) + vectors["pubshares"] = bytes_list_to_hex(pubshares) + + secnonces, pubnonces = generate_all_nonces( + COMMON_RAND, secshares, pubshares, xonly_thresh_pk + ) + secnonces_p0 = [ + secnonces[0], + bytes.fromhex( + "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" + ), # all zero + ] + vectors["secnonces_p0"] = bytes_list_to_hex(secnonces_p0) + # compute -(pubnonce[0] + pubnonce[1]) + tmp = nonce_agg(pubnonces[:2], ids[:2]) + R1 = GE.from_bytes_compressed_with_infinity(tmp[0:33]) + R2 = GE.from_bytes_compressed_with_infinity(tmp[33:66]) + neg_R1 = -R1 + neg_R2 = -R2 + inv_pubnonce = ( + neg_R1.to_bytes_compressed_with_infinity() + + neg_R2.to_bytes_compressed_with_infinity() + ) + invalid_pubnonce = bytes.fromhex( + "0200000000000000000000000000000000000000000000000000000000000000090287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480" + ) + pubnonces += [invalid_pubnonce, inv_pubnonce] + vectors["pubnonces"] = bytes_list_to_hex(pubnonces) + + # aggnonces indices represent the following + # 0 - 2 -> valid aggnonces for the three indices group below 
+ # 3 -> valid aggnonce with both halves as inf points + # 4 -> wrong parity tag + # 5 -> invalid x coordinate in second half + # 6 -> second half exceeds field size + indices_grp = [[0, 1], [0, 2], [0, 1, 2]] + aggnonces = [ + nonce_agg([pubnonces[i] for i in indices], [ids[i] for i in indices]) + for indices in indices_grp + ] + # aggnonce with inf points + aggnonces.append( + nonce_agg( + [ + pubnonces[0], + pubnonces[1], + pubnonces[-1], + ], # pubnonces[-1] is inv_pubnonce + [ids[0], ids[1], ids[2]], + ) + ) + # invalid aggnonces + aggnonces += [ + bytes.fromhex( + "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9" + ), # wrong parity tag 04 + bytes.fromhex( + "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61020000000000000000000000000000000000000000000000000000000000000009" + ), # invalid x coordinate in second half + bytes.fromhex( + "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD6102FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30" + ), # second half exceeds field size + ] + vectors["aggnonces"] = bytes_list_to_hex(aggnonces) + + vectors["msgs"] = bytes_list_to_hex(COMMON_MSGS) + + vectors["valid_test_cases"] = [] + # --- Valid Test Cases --- + # Every List[int] & int below represents indices + # REVIEW: add secnonce here (easy readability), than using `secnonce_p0` list as common prefix + valid_cases = [ + { + "ids": [0, 1], + "pubshares": [0, 1], + "pubnonces": [0, 1], + "aggnonce": 0, + "msg": 0, + "signer": 0, + "comment": "Signing with minimum number of participants", + }, + { + "ids": [1, 0], + "pubshares": [1, 0], + "pubnonces": [1, 0], + "aggnonce": 0, + "msg": 0, + "signer": 1, + "comment": "Partial-signature doesn't change if the order of signers set changes (without changing secnonces)", + }, + { + "ids": [0, 2], + "pubshares": [0, 2], + "pubnonces": [0, 2], + "aggnonce": 1, + "msg": 0, + "signer": 0, + 
"comment": "Partial-signature changes if the members of signers set changes", + }, + { + "ids": [0, 1, 2], + "pubshares": [0, 1, 2], + "pubnonces": [0, 1, 2], + "aggnonce": 2, + "msg": 0, + "signer": 0, + "comment": "Signing with max number of participants", + }, + { + "ids": [0, 1, 2], + "pubshares": [0, 1, 2], + "pubnonces": [0, 1, INV_PUBNONCE_IDX], + "aggnonce": AGGNONCE_INF_IDX, + "msg": 0, + "signer": 0, + "comment": "Both halves of aggregate nonce correspond to point at infinity", + }, + { + "ids": [0, 1], + "pubshares": [0, 1], + "pubnonces": [0, 1], + "aggnonce": 0, + "msg": 1, + "signer": 0, + "comment": "Empty message", + }, + { + "ids": [0, 1], + "pubshares": [0, 1], + "pubnonces": [0, 1], + "aggnonce": 0, + "msg": 2, + "signer": 0, + "comment": "Message longer than 32 bytes (38-byte msg)", + }, + ] + for case in valid_cases: + curr_ids = [ids[i] for i in case["ids"]] + curr_pubshares = [pubshares[i] for i in case["pubshares"]] + curr_pubnonces = [pubnonces[i] for i in case["pubnonces"]] + curr_aggnonce = aggnonces[case["aggnonce"]] + curr_msg = COMMON_MSGS[case["msg"]] + my_id = curr_ids[case["signer"]] + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + session_ctx = SessionContext(curr_aggnonce, curr_signers, [], [], curr_msg) + expected_psig = sign( + bytearray(secnonces_p0[0]), secshare_p0, my_id, session_ctx + ) + vectors["valid_test_cases"].append( + { + "id_indices": case["ids"], + "pubshare_indices": case["pubshares"], + "pubnonce_indices": case["pubnonces"], + "aggnonce_index": case["aggnonce"], + "msg_index": case["msg"], + "signer_index": case["signer"], + "expected": bytes_to_hex(expected_psig), + "comment": case["comment"], + } + ) + # TODO: verify the signatures here + + vectors["sign_error_test_cases"] = [] + # --- Sign Error Test Cases --- + error_cases = [ + { + "ids": [2, 1], + "pubshares": [0, 1], + "aggnonce": 0, + "msg": 0, + "signer_idx": None, + "signer_id": 0, + "secnonce": 0, + "error": "value", + 
"comment": "The signer's id is not in the participant identifier list",
+        },
+        {
+            "ids": [0, 1, 1],
+            "pubshares": [0, 1, 1],
+            "aggnonce": 0,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "value",
+            "comment": "The participant identifier list contains duplicate elements",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [2, 1],
+            "aggnonce": 0,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "value",
+            "comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares.",
+        },
+        {
+            "ids": [0, 1, 2],
+            "pubshares": [0, 1],
+            "aggnonce": 0,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "value",
+            "comment": "The participant identifier count exceeds the participant public share count",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, INVALID_PUBSHARE_IDX],
+            "aggnonce": 0,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "invalid_contrib",
+            "comment": "Signer 1 provided an invalid participant public share",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "aggnonce": AGGNONCE_INVALID_TAG_IDX,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "invalid_contrib",
+            "comment": "Aggregate nonce is invalid due to wrong tag, 0x04, in the first half",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "aggnonce": AGGNONCE_INVALID_XCOORD_IDX,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "invalid_contrib",
+            "comment": "Aggregate nonce is invalid because the second half does not correspond to an X coordinate",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "aggnonce": AGGNONCE_INVALID_EXCEEDS_FIELD_IDX,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce": 0,
+            "error": "invalid_contrib",
+            "comment": "Aggregate nonce is invalid because second half exceeds field size",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "aggnonce": 0,
+            "msg": 0,
+            "signer_idx": 0,
+            "secnonce":
SECNONCE_ZERO_IDX, + "error": "value", + "comment": "Secnonce is invalid which may indicate nonce reuse", + }, + ] + for case in error_cases: + curr_ids = [ids[i] for i in case["ids"]] + curr_pubshares = [pubshares[i] for i in case["pubshares"]] + curr_aggnonce = aggnonces[case["aggnonce"]] + curr_msg = COMMON_MSGS[case["msg"]] + if case["signer_idx"] is None: + my_id = case["signer_id"] + else: + my_id = curr_ids[case["signer_idx"]] + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + session_ctx = SessionContext(curr_aggnonce, curr_signers, [], [], curr_msg) + curr_secnonce = bytearray(secnonces_p0[case["secnonce"]]) + expected_error = ( + ValueError if case["error"] == "value" else InvalidContributionError + ) + error = expect_exception( + lambda: sign(curr_secnonce, secshare_p0, my_id, session_ctx), expected_error + ) + vectors["sign_error_test_cases"].append( + { + "id_indices": case["ids"], + "pubshare_indices": case["pubshares"], + "aggnonce_index": case["aggnonce"], + "msg_index": case["msg"], + "signer_index": case["signer_idx"], + **( + {"signer_id": case["signer_id"]} + if case["signer_idx"] is None + else {} + ), + "secnonce_index": case["secnonce"], + "error": error, + "comment": case["comment"], + } + ) + + # REVIEW: In the following vectors, pubshare_indices are not required, + # just aggnonce value would do. But we should include `secshare` and + # `secnonce` indices tho. 
+ vectors["verify_fail_test_cases"] = [] + # --- Verify Fail Test Cases --- + id_indices = [0, 1] + pubshare_indices = [0, 1] + pubnonce_indices = [0, 1] + aggnonce_idx = 0 + msg_idx = 0 + signer_idx = 0 + + curr_ids = [ids[i] for i in id_indices] + curr_pubshares = [pubshares[i] for i in pubshare_indices] + curr_aggnonce = aggnonces[aggnonce_idx] + curr_msg = COMMON_MSGS[msg_idx] + my_id = curr_ids[signer_idx] + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + session_ctx = SessionContext(curr_aggnonce, curr_signers, [], [], curr_msg) + curr_secnonce = bytearray(secnonces_p0[0]) + psig = sign(curr_secnonce, secshare_p0, my_id, session_ctx) + # --- Verify Fail Test Cases 1 --- + psig_scalar = Scalar.from_bytes_checked(psig) + neg_psig = (-psig_scalar).to_bytes() + vectors["verify_fail_test_cases"].append( + { + "psig": bytes_to_hex(neg_psig), + "id_indices": id_indices, + "pubshare_indices": pubshare_indices, + "pubnonce_indices": pubnonce_indices, + "msg_index": msg_idx, + "signer_index": signer_idx, + "comment": "Wrong signature (which is equal to the negation of valid signature)", + } + ) + # --- Verify Fail Test Cases 2 --- + vectors["verify_fail_test_cases"].append( + { + "psig": bytes_to_hex(psig), + "id_indices": id_indices, + "pubshare_indices": pubshare_indices, + "pubnonce_indices": pubnonce_indices, + "msg_index": msg_idx, + "signer_index": signer_idx + 1, + "comment": "Wrong signer index", + } + ) + # --- Verify Fail Test Cases 3 --- + vectors["verify_fail_test_cases"].append( + { + "psig": bytes_to_hex(psig), + "id_indices": id_indices, + "pubshare_indices": [2] + pubshare_indices[1:], + "pubnonce_indices": pubnonce_indices, + "msg_index": msg_idx, + "signer_index": signer_idx, + "comment": "The signer's pubshare is not in the list of pubshares", + } + ) + # --- Verify Fail Test Cases 4 --- + vectors["verify_fail_test_cases"].append( + { + "psig": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", + 
"id_indices": id_indices, + "pubshare_indices": pubshare_indices, + "pubnonce_indices": pubnonce_indices, + "msg_index": msg_idx, + "signer_index": signer_idx, + "comment": "Signature value is out of range", + } + ) + + vectors["verify_error_test_cases"] = [] + # --- Verify Error Test Cases --- + verify_error_cases = [ + { + "ids": [0, 1], + "pubshares": [0, 1], + "pubnonces": [3, 1], + "msg": 0, + "signer": 0, + "error": "invalid_contrib", + "comment": "Invalid pubnonce", + }, + { + "ids": [0, 1], + "pubshares": [INVALID_PUBSHARE_IDX, 1], + "pubnonces": [0, 1], + "msg": 0, + "signer": 0, + "error": "invalid_contrib", + "comment": "Invalid pubshare", + }, + { + "ids": [0, 1], + "pubshares": [0, 1], + "pubnonces": [0, 1, 2], + "msg": 0, + "signer": 0, + "error": "value", + "comment": "public nonces count is greater than ids and pubshares", + }, + ] + for case in verify_error_cases: + curr_ids = [ids[i] for i in case["ids"]] + curr_pubshares = [pubshares[i] for i in case["pubshares"]] + curr_pubnonces = [pubnonces[i] for i in case["pubnonces"]] + msg = case["msg"] + signer_idx = case["signer"] + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + expected_error = ( + ValueError if case["error"] == "value" else InvalidContributionError + ) + error = expect_exception( + # reuse the valid `psig` generated at the start of "verify fail test cases" + lambda: partial_sig_verify( + psig, curr_pubnonces, curr_signers, [], [], msg, signer_idx + ), + expected_error, + ) + vectors["verify_error_test_cases"].append( + { + "psig": bytes_to_hex(psig), + "id_indices": case["ids"], + "pubshare_indices": case["pubshares"], + "pubnonce_indices": case["pubnonces"], + "msg_index": case["msg"], + "signer_index": case["signer"], + "error": error, + "comment": case["comment"], + } + ) + + write_test_vectors("sign_verify_vectors.json", vectors) + + +def generate_tweak_vectors(): + vectors = dict() + + n, t, thresh_pk, xonly_thresh_pk, ids, secshares, pubshares = 
get_common_setup() + secshare_p0 = secshares[0] + + # Special indices for test cases + INVALID_TWEAK_IDX = 4 # Tweak exceeds secp256k1 group order + + vectors["n"] = n + vectors["t"] = t + vectors["threshold_pubkey"] = bytes_to_hex(thresh_pk) + vectors["secshare_p0"] = bytes_to_hex(secshare_p0) + vectors["identifiers"] = ids + pubshares_with_invalid = pubshares + [INVALID_PUBSHARE] + vectors["pubshares"] = bytes_list_to_hex(pubshares_with_invalid) + + secnonces, pubnonces = generate_all_nonces( + COMMON_RAND, secshares, pubshares, xonly_thresh_pk + ) + vectors["secnonce_p0"] = bytes_to_hex(secnonces[0]) + vectors["pubnonces"] = bytes_list_to_hex(pubnonces) + + # create valid aggnonces + indices_grp = [[0, 1], [0, 1, 2]] + aggnonces = [ + nonce_agg([pubnonces[i] for i in indices], [ids[i] for i in indices]) + for indices in indices_grp + ] + # aggnonce with inf points + aggnonces.append( + nonce_agg( + [pubnonces[0], pubnonces[1], pubnonces[-1]], + [ids[0], ids[1], ids[2]], + ) + ) + vectors["aggnonces"] = bytes_list_to_hex(aggnonces) + + vectors["tweaks"] = bytes_list_to_hex(COMMON_TWEAKS) + vectors["msg"] = bytes_to_hex(COMMON_MSGS[0]) + + vectors["valid_test_cases"] = [] + # --- Valid Test Cases --- + valid_cases = [ + {"tweaks_indices": [], "is_xonly": [], "comment": "No tweak"}, + {"tweaks_indices": [0], "is_xonly": [True], "comment": "A single x-only tweak"}, + {"tweaks_indices": [0], "is_xonly": [False], "comment": "A single plain tweak"}, + { + "tweaks_indices": [0, 1], + "is_xonly": [False, True], + "comment": "A plain tweak followed by an x-only tweak", + }, + { + "tweaks_indices": [0, 1, 2, 3], + "is_xonly": [True, False, True, False], + "comment": "Four tweaks: x-only, plain, x-only, plain. 
If an implementation prohibits applying plain tweaks after x-only tweaks, it can skip this test vector or return an error", + }, + { + "tweaks_indices": [0, 1, 2, 3], + "is_xonly": [False, False, True, True], + "comment": "Four tweaks: plain, plain, x-only, x-only", + }, + { + "tweaks_indices": [0, 1, 2, 3], + "is_xonly": [False, False, True, True], + "indices": [0, 1, 2], + "aggnonce_idx": 1, + "comment": "Tweaking with max number of participants. The expected value (partial sig) must match the previous test vector", + }, + ] + for case in valid_cases: + indices = case.get("indices", [0, 1]) + curr_ids = [ids[i] for i in indices] + curr_pubshares = [pubshares_with_invalid[i] for i in indices] + aggnonce_idx = case.get("aggnonce_idx", 0) + curr_aggnonce = aggnonces[aggnonce_idx] + curr_tweaks = [COMMON_TWEAKS[i] for i in case["tweaks_indices"]] + curr_tweak_modes = case["is_xonly"] + signer_idx = 0 + my_id = curr_ids[signer_idx] + + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + session_ctx = SessionContext( + curr_aggnonce, curr_signers, curr_tweaks, curr_tweak_modes, COMMON_MSGS[0] + ) + psig = sign(bytearray(secnonces[0]), secshare_p0, my_id, session_ctx) + + vectors["valid_test_cases"].append( + { + "id_indices": indices, + "pubshare_indices": indices, + "pubnonce_indices": indices, + "tweak_indices": case["tweaks_indices"], + "aggnonce_index": aggnonce_idx, + "is_xonly": curr_tweak_modes, + "signer_index": signer_idx, + "expected": bytes_to_hex(psig), + "comment": case["comment"], + } + ) + + vectors["error_test_cases"] = [] + # --- Error Test Cases --- + error_cases = [ + { + "tweaks_indices": [INVALID_TWEAK_IDX], + "is_xonly": [False], + "comment": "Tweak is invalid because it exceeds group size", + }, + { + "tweaks_indices": [0, 1, 2, 3], + "is_xonly": [True, False], + "comment": "Tweaks count doesn't match the tweak modes count", + }, + ] + for case in error_cases: + indices = [0, 1] + curr_ids = [ids[i] for i in indices] + 
curr_pubshares = [pubshares_with_invalid[i] for i in indices] + aggnonce_idx = 0 + curr_aggnonce = aggnonces[aggnonce_idx] + curr_tweaks = [COMMON_TWEAKS[i] for i in case["tweaks_indices"]] + curr_tweak_modes = case["is_xonly"] + signer_idx = 0 + my_id = curr_ids[signer_idx] + + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + session_ctx = SessionContext( + curr_aggnonce, curr_signers, curr_tweaks, curr_tweak_modes, COMMON_MSGS[0] + ) + error = expect_exception( + lambda: sign(bytearray(secnonces[0]), secshare_p0, my_id, session_ctx), + ValueError, + ) + vectors["error_test_cases"].append( + { + "id_indices": indices, + "pubshare_indices": indices, + "tweak_indices": case["tweaks_indices"], + "aggnonce_index": 0, + "is_xonly": curr_tweak_modes, + "signer_index": signer_idx, + "error": error, + "comment": case["comment"], + } + ) + + write_test_vectors("tweak_vectors.json", vectors) + + +def generate_det_sign_vectors(): + vectors = dict() + + n, t, thresh_pk, xonly_thresh_pk, ids, secshares, pubshares = get_common_setup() + secshare_p0 = secshares[0] + + # Special indices for test cases + INVALID_PUBSHARE_IDX = 3 # Invalid pubshare (appended to list) + INVALID_TWEAK_IDX = 1 # Invalid tweak (COMMON_TWEAKS[4]) + RAND_NONE_IDX = 1 # No auxiliary randomness (None) + RAND_MAX_IDX = 2 # Max auxiliary randomness (0xFF...FF) + + vectors["n"] = n + vectors["t"] = t + vectors["threshold_pubkey"] = bytes_to_hex(thresh_pk) + vectors["secshare_p0"] = bytes_to_hex(secshare_p0) + vectors["identifiers"] = ids + pubshares.append(INVALID_PUBSHARE) + vectors["pubshares"] = bytes_list_to_hex(pubshares) + + vectors["msgs"] = bytes_list_to_hex(COMMON_MSGS) + assert len(COMMON_MSGS[2]) == 38 + + rands = [ + bytes.fromhex( + "0000000000000000000000000000000000000000000000000000000000000000" + ), + None, + bytes.fromhex( + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" + ), + ] + + tweaks = [ + [COMMON_TWEAKS[0]], + [COMMON_TWEAKS[4]], + ] + + 
vectors["valid_test_cases"] = [] + # --- Valid Test Cases --- + valid_cases = [ + { + "indices": [0, 1], + "signer": 0, + "msg": 0, + "rand": 0, + "comment": "Signing with minimum number of participants", + }, + { + "indices": [1, 0], + "signer": 1, + "msg": 0, + "rand": 0, + "comment": "Partial-signature shouldn't change if the order of signers set changes. Note: The deterministic sign will generate the same secnonces due to unchanged parameters", + }, + { + "indices": [0, 2], + "signer": 0, + "msg": 0, + "rand": 0, + "comment": "Partial-signature changes if the members of signers set changes", + }, + { + "indices": [0, 1], + "signer": 0, + "msg": 0, + "rand": RAND_NONE_IDX, + "comment": "Signing without auxiliary randomness", + }, + { + "indices": [0, 1], + "signer": 0, + "msg": 0, + "rand": RAND_MAX_IDX, + "comment": "Signing with max auxiliary randomness", + }, + { + "indices": [0, 1, 2], + "signer": 0, + "msg": 0, + "rand": 0, + "comment": "Signing with maximum number of participants", + }, + { + "indices": [0, 1], + "signer": 0, + "msg": 1, + "rand": 0, + "comment": "Empty message", + }, + { + "indices": [0, 1], + "signer": 0, + "msg": 2, + "rand": 0, + "comment": "Message longer than 32 bytes (38-byte msg)", + }, + { + "indices": [0, 1], + "signer": 0, + "msg": 0, + "rand": 0, + "tweaks": 0, + "is_xonly": [True], + "comment": "Signing with tweaks", + }, + ] + for case in valid_cases: + curr_ids = [ids[i] for i in case["indices"]] + curr_pubshares = [pubshares[i] for i in case["indices"]] + curr_msg = COMMON_MSGS[case["msg"]] + curr_rand = rands[case["rand"]] + signer_index = case["signer"] + my_id = curr_ids[signer_index] + tweaks_idx = case.get("tweaks", None) + curr_tweaks = [] if tweaks_idx is None else tweaks[tweaks_idx] + curr_tweak_modes = case.get("is_xonly", []) + + # generate `aggothernonce` + other_ids = curr_ids[:signer_index] + curr_ids[signer_index + 1 :] + other_pubnonces = [] + for i in case["indices"]: + if i == signer_index: + continue + tmp 
= b"" if curr_rand is None else curr_rand + _, pub = nonce_gen_internal( + tmp, secshares[i], pubshares[i], xonly_thresh_pk, curr_msg, None + ) + other_pubnonces.append(pub) + curr_aggothernonce = nonce_agg(other_pubnonces, other_ids) + + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + expected = deterministic_sign( + secshare_p0, + my_id, + curr_aggothernonce, + curr_signers, + curr_tweaks, + curr_tweak_modes, + curr_msg, + curr_rand, + ) + + vectors["valid_test_cases"].append( + { + "rand": bytes_to_hex(curr_rand) if curr_rand is not None else curr_rand, + "aggothernonce": bytes_to_hex(curr_aggothernonce), + "id_indices": case["indices"], + "pubshare_indices": case["indices"], + "tweaks": bytes_list_to_hex(curr_tweaks), + "is_xonly": curr_tweak_modes, + "msg_index": case["msg"], + "signer_index": signer_index, + "expected": bytes_list_to_hex(list(expected)), + "comment": case["comment"], + } + ) + + vectors["error_test_cases"] = [] + # --- Error Test Cases --- + error_cases = [ + { + "ids": [2, 1], + "pubshares": [0, 1], + "signer_idx": None, + "signer_id": 0, + "msg": 0, + "rand": 0, + "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", + "error": "value", + "comment": "The signer's id is not in the participant identifier list", + }, + { + "ids": [0, 1, 1], + "pubshares": [0, 1, 1], + "signer_idx": 0, + "msg": 0, + "rand": 0, + "error": "value", + "comment": "The participant identifier list contains duplicate elements", + }, + { + "ids": [0, 1], + "pubshares": [2, 1], + "signer_idx": 0, + "msg": 0, + "rand": 0, + "error": "value", + "comment": "The signer's pubshare is not in the list of pubshares. 
This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares.",
+        },
+        {
+            "ids": [0, 1, 2],
+            "pubshares": [0, 1],
+            "signer_idx": 0,
+            "msg": 0,
+            "rand": 0,
+            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+            "error": "value",
+            "comment": "The participant identifier count exceeds the participant public share count",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, INVALID_PUBSHARE_IDX],
+            "signer_idx": 0,
+            "msg": 0,
+            "rand": 0,
+            "error": "invalid_contrib",
+            "comment": "Signer 1 provided an invalid participant public share",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "signer_idx": 0,
+            "msg": 0,
+            "rand": 0,
+            "aggothernonce": "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9",
+            "error": "invalid_contrib",
+            "comment": "aggothernonce is invalid due to wrong tag, 0x04, in the first half",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "signer_idx": 0,
+            "msg": 0,
+            "rand": 0,
+            "aggothernonce": "0000000000000000000000000000000000000000000000000000000000000000000287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480",
+            "error": "invalid_contrib",
+            "comment": "aggothernonce is invalid because first half corresponds to point at infinity",
+        },
+        {
+            "ids": [0, 1],
+            "pubshares": [0, 1],
+            "signer_idx": 0,
+            "msg": 0,
+            "rand": 0,
+            "tweaks": INVALID_TWEAK_IDX,
+            "is_xonly": [False],
+            "error": "value",
+            "comment": "Tweak is invalid because it exceeds group size",
+        },
+    ]
+    for case in error_cases:
+        curr_ids = [ids[i] for i in case["ids"]]
+        curr_pubshares = [pubshares[i] for i in case["pubshares"]]
+        curr_msg = COMMON_MSGS[case["msg"]]
+        curr_rand = rands[case["rand"]]
+        signer_index = case["signer_idx"]
+        if case["signer_idx"] is None:
+            my_id = case["signer_id"]
else: + my_id = curr_ids[case["signer_idx"]] + tweaks_idx = case.get("tweaks", None) + curr_tweaks = [] if tweaks_idx is None else tweaks[tweaks_idx] + curr_tweak_modes = case.get("is_xonly", []) + + # generate `aggothernonce` + is_aggothernonce = case.get("aggothernonce", None) + if is_aggothernonce is None: + if signer_index is None: + other_ids = curr_ids[1:] + else: + other_ids = curr_ids[:signer_index] + curr_ids[signer_index + 1 :] + other_pubnonces = [] + for i in case["ids"]: + if i == signer_index: + continue + tmp = b"" if curr_rand is None else curr_rand + _, pub = nonce_gen_internal( + tmp, secshares[i], pubshares[i], xonly_thresh_pk, curr_msg, None + ) + other_pubnonces.append(pub) + curr_aggothernonce = nonce_agg(other_pubnonces, other_ids) + else: + curr_aggothernonce = bytes.fromhex(is_aggothernonce) + + expected_exception = ( + ValueError if case["error"] == "value" else InvalidContributionError + ) + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + error = expect_exception( + lambda: deterministic_sign( + secshare_p0, + my_id, + curr_aggothernonce, + curr_signers, + curr_tweaks, + curr_tweak_modes, + curr_msg, + curr_rand, + ), + expected_exception, + ) + + vectors["error_test_cases"].append( + { + "rand": bytes_to_hex(curr_rand) if curr_rand is not None else curr_rand, + "aggothernonce": bytes_to_hex(curr_aggothernonce), + "id_indices": case["ids"], + "pubshare_indices": case["pubshares"], + "tweaks": bytes_list_to_hex(curr_tweaks), + "is_xonly": curr_tweak_modes, + "msg_index": case["msg"], + "signer_index": signer_index, + **( + {"signer_id": case["signer_id"]} + if case["signer_idx"] is None + else {} + ), + "error": error, + "comment": case["comment"], + } + ) + + write_test_vectors("det_sign_vectors.json", vectors) + + +def generate_sig_agg_vectors(): + vectors = dict() + + n, t, thresh_pk, xonly_thresh_pk, ids, secshares, pubshares = get_common_setup() + + vectors["n"] = n + vectors["t"] = t + 
vectors["threshold_pubkey"] = bytes_to_hex(thresh_pk)
+    vectors["identifiers"] = ids
+    vectors["pubshares"] = bytes_list_to_hex(pubshares)
+
+    secnonces, pubnonces = generate_all_nonces(
+        COMMON_RAND, secshares, pubshares, xonly_thresh_pk
+    )
+    vectors["pubnonces"] = bytes_list_to_hex(pubnonces)
+
+    vectors["tweaks"] = bytes_list_to_hex(SIG_AGG_TWEAKS)
+
+    msg = bytes.fromhex(
+        "599C67EA410D005B9DA90817CF03ED3B1C868E4DA4EDF00A5880B0082C237869"
+    )
+    vectors["msg"] = bytes_to_hex(msg)
+
+    vectors["valid_test_cases"] = []
+    # --- Valid Test Cases ---
+    valid_cases = [
+        {
+            "indices": [0, 1],
+            "comment": "Signing with minimum number of participants",
+        },
+        {
+            "indices": [1, 0],
+            "comment": "Order of the signer set shouldn't affect the aggregate signature. The expected value must match the previous test vector.",
+        },
+        {
+            "indices": [0, 1],
+            "tweaks": [0, 1, 2],
+            "is_xonly": [True, False, False],
+            "comment": "Signing with tweaked threshold public key",
+        },
+        {
+            "indices": [0, 1, 2],
+            "comment": "Signing with max number of participants",
+        },
+    ]
+    for case in valid_cases:
+        curr_ids = [ids[i] for i in case["indices"]]
+        curr_pubshares = [pubshares[i] for i in case["indices"]]
+        curr_pubnonces = [pubnonces[i] for i in case["indices"]]
+        curr_aggnonce = nonce_agg(curr_pubnonces, curr_ids)
+        curr_msg = msg
+        tweak_indices = case.get("tweaks", [])
+        curr_tweaks = [SIG_AGG_TWEAKS[i] for i in tweak_indices]
+        curr_tweak_modes = case.get("is_xonly", [])
+        psigs = []
+        curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk)
+        session_ctx = SessionContext(
+            curr_aggnonce,
+            curr_signers,
+            curr_tweaks,
+            curr_tweak_modes,
+            curr_msg,
+        )
+        for i in case["indices"]:
+            my_id = ids[i]
+            sig = sign(bytearray(secnonces[i]), secshares[i], my_id, session_ctx)
+            psigs.append(sig)
+        # TODO: verify the signatures here
+        bip340_sig = partial_sig_agg(psigs, curr_ids, session_ctx)
+        vectors["valid_test_cases"].append(
+            {
"id_indices": case["indices"], + "pubshare_indices": case["indices"], + "pubnonce_indices": case["indices"], + "aggnonce": bytes_to_hex(curr_aggnonce), + "tweak_indices": tweak_indices, + "is_xonly": curr_tweak_modes, + "psigs": bytes_list_to_hex(psigs), + "expected": bytes_to_hex(bip340_sig), + "comment": case["comment"], + } + ) + + vectors["error_test_cases"] = [] + # --- Error Test Cases --- + error_cases = [ + { + "indices": [0, 1], + "error": "invalid_contrib", + "comment": "Partial signature is invalid because it exceeds group size", + }, + { + "indices": [0, 1], + "error": "value", + "comment": "Partial signature count doesn't match the signer set count", + }, + ] + for j, case in enumerate(error_cases): + curr_ids = [ids[i] for i in case["indices"]] + curr_pubshares = [pubshares[i] for i in case["indices"]] + curr_pubnonces = [pubnonces[i] for i in case["indices"]] + curr_aggnonce = nonce_agg(curr_pubnonces, curr_ids) + curr_msg = msg + psigs = [] + curr_signers = SignersContext(n, t, curr_ids, curr_pubshares, thresh_pk) + session_ctx = SessionContext(curr_aggnonce, curr_signers, [], [], curr_msg) + for i in case["indices"]: + my_id = ids[i] + sig = sign(bytearray(secnonces[i]), secshares[i], my_id, session_ctx) + psigs.append(sig) + # TODO: verify the signatures here + + if j == 0: + invalid_psig = bytes.fromhex( + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141" + ) + psigs[1] = invalid_psig + if j == 1: + psigs.pop() + + expected_exception = ( + ValueError if case["error"] == "value" else InvalidContributionError + ) + error = expect_exception( + lambda: partial_sig_agg(psigs, curr_ids, session_ctx), expected_exception + ) + vectors["error_test_cases"].append( + { + "id_indices": case["indices"], + "pubshare_indices": case["indices"], + "pubnonce_indices": case["indices"], + "aggnonce": bytes_to_hex(curr_aggnonce), + "tweak_indices": [], + "is_xonly": [], + "psigs": bytes_list_to_hex(psigs), + "error": error, + "comment": 
case["comment"], + } + ) + + write_test_vectors("sig_agg_vectors.json", vectors) + + +def create_vectors_directory(): + if os.path.exists("vectors"): + shutil.rmtree("vectors") + os.makedirs("vectors") + + +def run_gen_vectors(test_name, test_func): + max_len = 30 + test_name = test_name.ljust(max_len, ".") + print(f"Running {test_name}...", end="", flush=True) + try: + test_func() + print("Done!") + except Exception as e: + print(f"Failed :'(\nError: {e}") + + +def main(): + create_vectors_directory() + + run_gen_vectors("generate_nonce_gen_vectors", generate_nonce_gen_vectors) + run_gen_vectors("generate_nonce_agg_vectors", generate_nonce_agg_vectors) + run_gen_vectors("generate_sign_verify_vectors", generate_sign_verify_vectors) + run_gen_vectors("generate_tweak_vectors", generate_tweak_vectors) + run_gen_vectors("generate_sig_agg_vectors", generate_sig_agg_vectors) + run_gen_vectors("generate_det_sign_vectors", generate_det_sign_vectors) + print("Test vectors generated successfully") + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/bip-frost-signing/python/mypy.ini b/bip-frost-signing/python/mypy.ini new file mode 100644 index 0000000000..08f1e086d1 --- /dev/null +++ b/bip-frost-signing/python/mypy.ini @@ -0,0 +1,4 @@ +[mypy] +# Include path to vendored copy of secp256k1lab, in order to +# avoid "import-not-found" errors in mypy's `--strict` mode +mypy_path = $MYPY_CONFIG_FILE_DIR/secp256k1lab/src \ No newline at end of file diff --git a/bip-frost-signing/python/secp256k1lab/.github/workflows/main.yml b/bip-frost-signing/python/secp256k1lab/.github/workflows/main.yml new file mode 100644 index 0000000000..fb05230b3c --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/.github/workflows/main.yml @@ -0,0 +1,34 @@ +name: Tests +on: [push, pull_request] +jobs: + ruff: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Install the latest version of uv + uses: astral-sh/setup-uv@v5 + - run: uvx ruff check . 
+ mypy: + runs-on: ubuntu-latest + strategy: + matrix: + python-version: ["3.11", "3.12", "3.13", "3.14"] + steps: + - uses: actions/checkout@v4 + - name: Install the latest version of uv, setup Python ${{ matrix.python-version }} + uses: astral-sh/setup-uv@v5 + with: + python-version: ${{ matrix.python-version }} + - run: uvx mypy . + unittest: + runs-on: ubuntu-latest + strategy: + matrix: + python-version: ["3.11", "3.12", "3.13", "3.14"] + steps: + - uses: actions/checkout@v4 + - name: Setup Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + - run: python3 -m unittest diff --git a/bip-frost-signing/python/secp256k1lab/.gitignore b/bip-frost-signing/python/secp256k1lab/.gitignore new file mode 100644 index 0000000000..505a3b1ca2 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/.gitignore @@ -0,0 +1,10 @@ +# Python-generated files +__pycache__/ +*.py[oc] +build/ +dist/ +wheels/ +*.egg-info + +# Virtual environments +.venv diff --git a/bip-frost-signing/python/secp256k1lab/.python-version b/bip-frost-signing/python/secp256k1lab/.python-version new file mode 100644 index 0000000000..2c0733315e --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/.python-version @@ -0,0 +1 @@ +3.11 diff --git a/bip-frost-signing/python/secp256k1lab/CHANGELOG.md b/bip-frost-signing/python/secp256k1lab/CHANGELOG.md new file mode 100644 index 0000000000..4c756d3695 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/CHANGELOG.md @@ -0,0 +1,25 @@ +# Changelog + +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [Unreleased] + +#### Added + - Added new methods `Scalar.from_int_nonzero_checked` and `Scalar.from_bytes_nonzero_checked` + that ensure a constructed scalar is in the range `0 < s < N` (i.e. 
is non-zero and within the + group order) and throw a `ValueError` otherwise. This is e.g. useful for ensuring that newly + generated secret keys or nonces are valid without having to do the non-zero check manually. + The already existing methods `Scalar.from_int_checked` and `Scalar.from_bytes_checked` error + on overflow, but not on zero, i.e. they only ensure `0 <= s < N`. + + - Added a new method `GE.from_bytes_compressed_with_infinity` to parse a compressed + public key (33 bytes) to a group element, where the all-zeros bytestring maps to the + point at infinity. This is the counterpart to the already existing serialization + method `GE.to_bytes_compressed_with_infinity`. + +## [1.0.0] - 2025-03-31 + +Initial release. diff --git a/bip-frost-signing/python/secp256k1lab/COPYING b/bip-frost-signing/python/secp256k1lab/COPYING new file mode 100644 index 0000000000..e8f2163641 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/COPYING @@ -0,0 +1,23 @@ +The MIT License (MIT) + +Copyright (c) 2009-2024 The Bitcoin Core developers +Copyright (c) 2009-2024 Bitcoin Developers +Copyright (c) 2025- The secp256k1lab Developers + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. diff --git a/bip-frost-signing/python/secp256k1lab/README.md b/bip-frost-signing/python/secp256k1lab/README.md new file mode 100644 index 0000000000..dbc9dbd04c --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/README.md @@ -0,0 +1,13 @@ +secp256k1lab +============ + +![Dependencies: None](https://img.shields.io/badge/dependencies-none-success) + +An INSECURE implementation of the secp256k1 elliptic curve and related cryptographic schemes written in Python, intended for prototyping, experimentation and education. + +Features: +* Low-level secp256k1 field and group arithmetic. +* Schnorr signing/verification and key generation according to [BIP-340](https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki). +* ECDH key exchange. + +WARNING: The code in this library is slow and trivially vulnerable to side channel attacks. 
diff --git a/bip-frost-signing/python/secp256k1lab/pyproject.toml b/bip-frost-signing/python/secp256k1lab/pyproject.toml new file mode 100644 index 0000000000..68b927b384 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/pyproject.toml @@ -0,0 +1,34 @@ +[project] +name = "secp256k1lab" +version = "1.0.0" +description = "An INSECURE implementation of the secp256k1 elliptic curve and related cryptographic schemes, intended for prototyping, experimentation and education" +readme = "README.md" +authors = [ + { name = "Pieter Wuille", email = "pieter@wuille.net" }, + { name = "Tim Ruffing", email = "me@real-or-random.org" }, + { name = "Jonas Nick", email = "jonasd.nick@gmail.com" }, + { name = "Sebastian Falbesoner", email = "sebastian.falbesoner@gmail.com" } +] +maintainers = [ + { name = "Tim Ruffing", email = "me@real-or-random.org" }, + { name = "Jonas Nick", email = "jonasd.nick@gmail.com" }, + { name = "Sebastian Falbesoner", email = "sebastian.falbesoner@gmail.com" } +] +requires-python = ">=3.11" +license = "MIT" +license-files = ["COPYING"] +keywords = ["secp256k1", "elliptic curves", "cryptography", "Bitcoin"] +classifiers = [ + "Development Status :: 5 - Production/Stable", + "Intended Audience :: Developers", + "Intended Audience :: Education", + "Intended Audience :: Science/Research", + "License :: OSI Approved :: MIT License", + "Programming Language :: Python", + "Topic :: Security :: Cryptography", +] +dependencies = [] + +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/__init__.py b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/bip340.py b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/bip340.py new file mode 100644 index 0000000000..ba839d16e1 --- /dev/null +++ 
b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/bip340.py @@ -0,0 +1,73 @@ +# The following functions are based on the BIP 340 reference implementation: +# https://github.com/bitcoin/bips/blob/master/bip-0340/reference.py + +from .secp256k1 import FE, GE, G +from .util import int_from_bytes, bytes_from_int, xor_bytes, tagged_hash + + +def pubkey_gen(seckey: bytes) -> bytes: + d0 = int_from_bytes(seckey) + if not (1 <= d0 <= GE.ORDER - 1): + raise ValueError("The secret key must be an integer in the range 1..n-1.") + P = d0 * G + assert not P.infinity + return P.to_bytes_xonly() + + +def schnorr_sign( + msg: bytes, seckey: bytes, aux_rand: bytes, tag_prefix: str = "BIP0340" +) -> bytes: + d0 = int_from_bytes(seckey) + if not (1 <= d0 <= GE.ORDER - 1): + raise ValueError("The secret key must be an integer in the range 1..n-1.") + if len(aux_rand) != 32: + raise ValueError("aux_rand must be 32 bytes instead of %i." % len(aux_rand)) + P = d0 * G + assert not P.infinity + d = d0 if P.has_even_y() else GE.ORDER - d0 + t = xor_bytes(bytes_from_int(d), tagged_hash(tag_prefix + "/aux", aux_rand)) + k0 = ( + int_from_bytes(tagged_hash(tag_prefix + "/nonce", t + P.to_bytes_xonly() + msg)) + % GE.ORDER + ) + if k0 == 0: + raise RuntimeError("Failure. 
This happens only with negligible probability.") + R = k0 * G + assert not R.infinity + k = k0 if R.has_even_y() else GE.ORDER - k0 + e = ( + int_from_bytes( + tagged_hash( + tag_prefix + "/challenge", R.to_bytes_xonly() + P.to_bytes_xonly() + msg + ) + ) + % GE.ORDER + ) + sig = R.to_bytes_xonly() + bytes_from_int((k + e * d) % GE.ORDER) + assert schnorr_verify(msg, P.to_bytes_xonly(), sig, tag_prefix=tag_prefix) + return sig + + +def schnorr_verify( + msg: bytes, pubkey: bytes, sig: bytes, tag_prefix: str = "BIP0340" +) -> bool: + if len(pubkey) != 32: + raise ValueError("The public key must be a 32-byte array.") + if len(sig) != 64: + raise ValueError("The signature must be a 64-byte array.") + try: + P = GE.from_bytes_xonly(pubkey) + except ValueError: + return False + r = int_from_bytes(sig[0:32]) + s = int_from_bytes(sig[32:64]) + if (r >= FE.SIZE) or (s >= GE.ORDER): + return False + e = ( + int_from_bytes(tagged_hash(tag_prefix + "/challenge", sig[0:32] + pubkey + msg)) + % GE.ORDER + ) + R = s * G - e * P + if R.infinity or (not R.has_even_y()) or (R.x != r): + return False + return True diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/ecdh.py b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/ecdh.py new file mode 100644 index 0000000000..73f47fa1a7 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/ecdh.py @@ -0,0 +1,16 @@ +import hashlib + +from .secp256k1 import GE, Scalar + + +def ecdh_compressed_in_raw_out(seckey: bytes, pubkey: bytes) -> GE: + """TODO""" + shared_secret = Scalar.from_bytes_checked(seckey) * GE.from_bytes_compressed(pubkey) + assert not shared_secret.infinity # prime-order group + return shared_secret + + +def ecdh_libsecp256k1(seckey: bytes, pubkey: bytes) -> bytes: + """TODO""" + shared_secret = ecdh_compressed_in_raw_out(seckey, pubkey) + return hashlib.sha256(shared_secret.to_bytes_compressed()).digest() diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/keys.py 
b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/keys.py new file mode 100644 index 0000000000..3e28897e99 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/keys.py @@ -0,0 +1,15 @@ +from .secp256k1 import GE, G +from .util import int_from_bytes + +# The following function is based on the BIP 327 reference implementation +# https://github.com/bitcoin/bips/blob/master/bip-0327/reference.py + + +# Return the plain public key corresponding to a given secret key +def pubkey_gen_plain(seckey: bytes) -> bytes: + d0 = int_from_bytes(seckey) + if not (1 <= d0 <= GE.ORDER - 1): + raise ValueError("The secret key must be an integer in the range 1..n-1.") + P = d0 * G + assert not P.infinity + return P.to_bytes_compressed() diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/py.typed b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/py.typed new file mode 100644 index 0000000000..e69de29bb2 diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/secp256k1.py b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/secp256k1.py new file mode 100644 index 0000000000..0526878d91 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/secp256k1.py @@ -0,0 +1,483 @@ +# Copyright (c) 2022-2023 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. + +"""Test-only implementation of low-level secp256k1 field and group arithmetic + +It is designed for ease of understanding, not performance. + +WARNING: This code is slow and trivially vulnerable to side channel attacks. Do not use for +anything but tests. 
+ +Exports: +* FE: class for secp256k1 field elements +* GE: class for secp256k1 group elements +* G: the secp256k1 generator point +""" + +from __future__ import annotations +from typing import Self + +# TODO Docstrings of methods still say "field element" +class APrimeFE: + """Objects of this class represent elements of a prime field. + + They are represented internally in numerator / denominator form, in order to delay inversions. + """ + + # The size of the field (also its modulus and characteristic). + SIZE: int + + def __init__(self, a: int | Self = 0, b: int | Self = 1) -> None: + """Initialize a field element a/b; both a and b can be ints or field elements.""" + if isinstance(a, type(self)): + num = a._num + den = a._den + else: + assert isinstance(a, int) + num = a % self.SIZE + den = 1 + if isinstance(b, type(self)): + den = (den * b._num) % self.SIZE + num = (num * b._den) % self.SIZE + else: + assert isinstance(b, int) + den = (den * b) % self.SIZE + assert den != 0 + if num == 0: + den = 1 + self._num: int = num + self._den: int = den + + def __add__(self, a: int | Self) -> Self: + """Compute the sum of two field elements (second may be int).""" + if isinstance(a, type(self)): + return type(self)(self._num * a._den + self._den * a._num, self._den * a._den) + if isinstance(a, int): + return type(self)(self._num + self._den * a, self._den) + return NotImplemented + + def __radd__(self, a: int) -> Self: + """Compute the sum of an integer and a field element.""" + return type(self)(a) + self + + @classmethod + def sum(cls, *es: Self) -> Self: + """Compute the sum of field elements. + + sum(a, b, c, ...) 
is identical to (0 + a + b + c + ...).""" + return sum(es, start=cls(0)) + + def __sub__(self, a: int | Self) -> Self: + """Compute the difference of two field elements (second may be int).""" + if isinstance(a, type(self)): + return type(self)(self._num * a._den - self._den * a._num, self._den * a._den) + if isinstance(a, int): + return type(self)(self._num - self._den * a, self._den) + return NotImplemented + + def __rsub__(self, a: int) -> Self: + """Compute the difference of an integer and a field element.""" + return type(self)(a) - self + + def __mul__(self, a: int | Self) -> Self: + """Compute the product of two field elements (second may be int).""" + if isinstance(a, type(self)): + return type(self)(self._num * a._num, self._den * a._den) + if isinstance(a, int): + return type(self)(self._num * a, self._den) + return NotImplemented + + def __rmul__(self, a: int) -> Self: + """Compute the product of an integer with a field element.""" + return type(self)(a) * self + + def __truediv__(self, a: int | Self) -> Self: + """Compute the ratio of two field elements (second may be int).""" + if isinstance(a, type(self)) or isinstance(a, int): + return type(self)(self, a) + return NotImplemented + + def __pow__(self, a: int) -> Self: + """Raise a field element to an integer power.""" + return type(self)(pow(self._num, a, self.SIZE), pow(self._den, a, self.SIZE)) + + def __neg__(self) -> Self: + """Negate a field element.""" + return type(self)(-self._num, self._den) + + def __int__(self) -> int: + """Convert a field element to an integer in range 0..SIZE-1. 
The result is cached.""" + if self._den != 1: + self._num = (self._num * pow(self._den, -1, self.SIZE)) % self.SIZE + self._den = 1 + return self._num + + def sqrt(self) -> Self | None: + """Compute the square root of a field element if it exists (None otherwise).""" + raise NotImplementedError + + def is_square(self) -> bool: + """Determine if this field element has a square root.""" + # A more efficient algorithm is possible here (Jacobi symbol). + return self.sqrt() is not None + + def is_even(self) -> bool: + """Determine whether this field element, represented as integer in 0..SIZE-1, is even.""" + return int(self) & 1 == 0 + + def __eq__(self, a: object) -> bool: + """Check whether two field elements are equal (second may be an int).""" + if isinstance(a, type(self)): + return (self._num * a._den - self._den * a._num) % self.SIZE == 0 + elif isinstance(a, int): + return (self._num - self._den * a) % self.SIZE == 0 + return False # for other types + + def to_bytes(self) -> bytes: + """Convert a field element to a 32-byte array (BE byte order).""" + return int(self).to_bytes(32, 'big') + + @classmethod + def from_int_checked(cls, v: int) -> Self: + """Convert an integer to a field element (no overflow allowed).""" + if v >= cls.SIZE: + raise ValueError + return cls(v) + + @classmethod + def from_int_wrapping(cls, v: int) -> Self: + """Convert an integer to a field element (reduced modulo SIZE).""" + return cls(v % cls.SIZE) + + @classmethod + def from_bytes_checked(cls, b: bytes) -> Self: + """Convert a 32-byte array to a field element (BE byte order, no overflow allowed).""" + v = int.from_bytes(b, 'big') + return cls.from_int_checked(v) + + @classmethod + def from_bytes_wrapping(cls, b: bytes) -> Self: + """Convert a 32-byte array to a field element (BE byte order, reduced modulo SIZE).""" + v = int.from_bytes(b, 'big') + return cls.from_int_wrapping(v) + + def __str__(self) -> str: + """Convert this field element to a 64 character hex string.""" + return 
f"{int(self):064x}" + + def __repr__(self) -> str: + """Get a string representation of this field element.""" + return f"{type(self).__qualname__}(0x{int(self):x})" + + +class FE(APrimeFE): + SIZE = 2**256 - 2**32 - 977 + + def sqrt(self) -> Self | None: + # Due to the fact that our modulus p is of the form (p % 4) == 3, the Tonelli-Shanks + # algorithm (https://en.wikipedia.org/wiki/Tonelli-Shanks_algorithm) is simply + # raising the argument to the power (p + 1) / 4. + + # To see why: (p-1) % 2 = 0, so 2 divides the order of the multiplicative group, + # and thus only half of the non-zero field elements are squares. An element a is + # a (nonzero) square when Euler's criterion, a^((p-1)/2) = 1 (mod p), holds. We're + # looking for x such that x^2 = a (mod p). Given a^((p-1)/2) = 1, that is equivalent + # to x^2 = a^(1 + (p-1)/2) mod p. As (1 + (p-1)/2) is even, this is equivalent to + # x = a^((1 + (p-1)/2)/2) mod p, or x = a^((p+1)/4) mod p. + v = int(self) + s = pow(v, (self.SIZE + 1) // 4, self.SIZE) + if s**2 % self.SIZE == v: + return type(self)(s) + return None + + +class Scalar(APrimeFE): + """TODO Docstring""" + SIZE = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141 + + @classmethod + def from_int_nonzero_checked(cls, v: int) -> Self: + """Convert an integer to a scalar (no zero or overflow allowed).""" + if not (0 < v < cls.SIZE): + raise ValueError + return cls(v) + + @classmethod + def from_bytes_nonzero_checked(cls, b: bytes) -> Self: + """Convert a 32-byte array to a scalar (BE byte order, no zero or overflow allowed).""" + v = int.from_bytes(b, 'big') + return cls.from_int_nonzero_checked(v) + + +class GE: + """Objects of this class represent secp256k1 group elements (curve points or infinity) + + GE objects are immutable. 
+ + Normal points on the curve have fields: + * x: the x coordinate (a field element) + * y: the y coordinate (a field element, satisfying y^2 = x^3 + 7) + * infinity: False + + The point at infinity has field: + * infinity: True + """ + + # TODO The following two class attributes should probably be just getters as + # classmethods to enforce immutability. Unfortunately Python makes it hard + # to create "classproperties". `G` could then also be just a classmethod. + + # Order of the group (number of points on the curve, plus 1 for infinity) + ORDER = Scalar.SIZE + + # Number of valid distinct x coordinates on the curve. + ORDER_HALF = ORDER // 2 + + @property + def infinity(self) -> bool: + """Whether the group element is the point at infinity.""" + return self._infinity + + @property + def x(self) -> FE: + """The x coordinate (a field element) of a non-infinite group element.""" + assert not self.infinity + return self._x + + @property + def y(self) -> FE: + """The y coordinate (a field element) of a non-infinite group element.""" + assert not self.infinity + return self._y + + def __init__(self, x: int | FE | None = None, y: int | FE | None = None) -> None: + """Initialize a group element with specified x and y coordinates, or infinity.""" + if x is None: + # Initialize as infinity. + assert y is None + self._infinity = True + else: + # Initialize as point on the curve (and check that it is). + assert x is not None + assert y is not None + fx = FE(x) + fy = FE(y) + assert fy**2 == fx**3 + 7 + self._infinity = False + self._x = fx + self._y = fy + + def __add__(self, a: GE) -> GE: + """Add two group elements together.""" + # Deal with infinity: a + infinity == infinity + a == a. + if self.infinity: + return a + if a.infinity: + return self + if self.x == a.x: + if self.y != a.y: + # A point added to its own negation is infinity. + assert self.y + a.y == 0 + return GE() + else: + # For identical inputs, use the tangent (doubling formula). 
+ lam = (3 * self.x**2) / (2 * self.y) + else: + # For distinct inputs, use the line through both points (adding formula). + lam = (self.y - a.y) / (self.x - a.x) + # Determine point opposite to the intersection of that line with the curve. + x = lam**2 - (self.x + a.x) + y = lam * (self.x - x) - self.y + return GE(x, y) + + @staticmethod + def sum(*ps: GE) -> GE: + """Compute the sum of group elements. + + GE.sum(a, b, c, ...) is identical to (GE() + a + b + c + ...).""" + return sum(ps, start=GE()) + + @staticmethod + def batch_mul(*aps: tuple[Scalar, GE]) -> GE: + """Compute a (batch) scalar group element multiplication. + + GE.batch_mul((a1, p1), (a2, p2), (a3, p3)) is identical to a1*p1 + a2*p2 + a3*p3, + but more efficient.""" + # Reduce all the scalars modulo order first (so we can deal with negatives etc). + naps = [(int(a), p) for a, p in aps] + # Start with point at infinity. + r = GE() + # Iterate over all bit positions, from high to low. + for i in range(255, -1, -1): + # Double what we have so far. + r = r + r + # Then add the points for which the corresponding scalar bit is set. 
for (a, p) in naps: + if (a >> i) & 1: + r += p + return r + + def __rmul__(self, a: int | Scalar) -> GE: + """Multiply an integer or scalar with a group element.""" + if self == G: + return FAST_G.mul(Scalar(a)) + return GE.batch_mul((Scalar(a), self)) + + def __neg__(self) -> GE: + """Compute the negation of a group element.""" + if self.infinity: + return self + return GE(self.x, -self.y) + + def __sub__(self, a: GE) -> GE: + """Subtract a group element from another.""" + return self + (-a) + + def __eq__(self, a: object) -> bool: + """Check if two group elements are equal.""" + if not isinstance(a, type(self)): + return False + return (self - a).infinity + + def has_even_y(self) -> bool: + """Determine whether a non-infinity group element has an even y coordinate.""" + assert not self.infinity + return self.y.is_even() + + def to_bytes_compressed(self) -> bytes: + """Convert a non-infinite group element to 33-byte compressed encoding.""" + assert not self.infinity + return bytes([3 - self.y.is_even()]) + self.x.to_bytes() + + def to_bytes_compressed_with_infinity(self) -> bytes: + """Convert a group element to 33-byte compressed encoding, mapping infinity to zeros.""" + if self.infinity: + return 33 * b"\x00" + return self.to_bytes_compressed() + + def to_bytes_uncompressed(self) -> bytes: + """Convert a non-infinite group element to 65-byte uncompressed encoding.""" + assert not self.infinity + return b'\x04' + self.x.to_bytes() + self.y.to_bytes() + + def to_bytes_xonly(self) -> bytes: + """Convert (the x coordinate of) a non-infinite group element to 32-byte xonly encoding.""" + assert not self.infinity + return self.x.to_bytes() + + @staticmethod + def lift_x(x: int | FE) -> GE: + """Return group element with specified field element as x coordinate (and even y).""" + y = (FE(x)**3 + 7).sqrt() + if y is None: + raise ValueError + if not y.is_even(): + y = -y + return GE(x, y) + + @staticmethod + def from_bytes_compressed(b: bytes) -> GE: + """Convert a compressed encoding to a group element.""" + assert len(b) == 33 + if b[0] != 2 and b[0] != 3: + raise ValueError + x = FE.from_bytes_checked(b[1:]) + r = GE.lift_x(x) + if b[0] == 3: + r = -r + return r + + @staticmethod + def from_bytes_compressed_with_infinity(b: bytes) -> GE: + """Convert a compressed encoding to a group element, mapping zeros to infinity.""" + if b == 33 * b"\x00": + return GE() + else: + return GE.from_bytes_compressed(b) + + @staticmethod + def from_bytes_uncompressed(b: bytes) -> GE: + """Convert an uncompressed encoding to a group element.""" + assert len(b) == 65 + if b[0] != 4: + raise ValueError + x = FE.from_bytes_checked(b[1:33]) + y = FE.from_bytes_checked(b[33:]) + if y**2 != x**3 + 7: + raise ValueError + return GE(x, y) + + @staticmethod + def from_bytes(b: bytes) -> GE: + """Convert a compressed or uncompressed encoding to a group element.""" + assert len(b) in (33, 65) + if len(b) == 33: + return GE.from_bytes_compressed(b) + else: + return GE.from_bytes_uncompressed(b) + + @staticmethod + def from_bytes_xonly(b: bytes) -> GE: + """Convert a point given in xonly encoding to a group element.""" + assert len(b) == 32 + x = FE.from_bytes_checked(b) + r = GE.lift_x(x) + return r + + @staticmethod + def is_valid_x(x: int | FE) -> bool: + """Determine whether the provided field element is a valid X coordinate.""" + return (FE(x)**3 + 7).is_square() + + def __str__(self) -> str: + """Convert this group element to a string.""" + if self.infinity: + return "(inf)" + return f"({self.x},{self.y})" + + def __repr__(self) -> str: + """Get a string representation for this group element.""" + if self.infinity: + return "GE()" + return f"GE(0x{int(self.x):x},0x{int(self.y):x})" + + def __hash__(self) -> int: + """Compute a non-cryptographic hash of the group element.""" + if self.infinity: + return 0 # 0 is not a valid x coordinate + return int(self.x) + + +# The secp256k1 generator point +G = GE.lift_x(0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798) 
+ + +class FastGEMul: + """Table for fast multiplication with a constant group element. + + Speed up scalar multiplication with a fixed point P by using a precomputed lookup table with + its powers of 2: + + table = [P, 2*P, 4*P, (2^3)*P, (2^4)*P, ..., (2^255)*P] + + During multiplication, the points corresponding to each bit set in the scalar are added up, + i.e. on average ~128 point additions take place. + """ + + def __init__(self, p: GE) -> None: + self.table: list[GE] = [p] # table[i] = (2^i) * p + for _ in range(255): + p = p + p + self.table.append(p) + + def mul(self, a: Scalar | int) -> GE: + result = GE() + a_ = int(a) + for bit in range(a_.bit_length()): + if a_ & (1 << bit): + result += self.table[bit] + return result + +# Precomputed table with multiples of G for fast multiplication +FAST_G = FastGEMul(G) diff --git a/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/util.py b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/util.py new file mode 100644 index 0000000000..d8c744b795 --- /dev/null +++ b/bip-frost-signing/python/secp256k1lab/src/secp256k1lab/util.py @@ -0,0 +1,24 @@ +import hashlib + + +# This implementation can be sped up by storing the midstate after hashing +# tag_hash instead of rehashing it all the time. 
+def tagged_hash(tag: str, msg: bytes) -> bytes:
+    tag_hash = hashlib.sha256(tag.encode()).digest()
+    return hashlib.sha256(tag_hash + tag_hash + msg).digest()
+
+
+def bytes_from_int(x: int) -> bytes:
+    return x.to_bytes(32, byteorder="big")
+
+
+def xor_bytes(b0: bytes, b1: bytes) -> bytes:
+    return bytes(x ^ y for (x, y) in zip(b0, b1))
+
+
+def int_from_bytes(b: bytes) -> int:
+    return int.from_bytes(b, byteorder="big")
+
+
+def hash_sha256(b: bytes) -> bytes:
+    return hashlib.sha256(b).digest()
diff --git a/bip-frost-signing/python/secp256k1lab/test/__init__.py b/bip-frost-signing/python/secp256k1lab/test/__init__.py
new file mode 100644
index 0000000000..862ed6e21c
--- /dev/null
+++ b/bip-frost-signing/python/secp256k1lab/test/__init__.py
@@ -0,0 +1,5 @@
+from pathlib import Path
+import sys
+
+# Ensure secp256k1lab is found and can be imported directly
+sys.path.insert(0, str(Path(__file__).parent / "../src/"))
diff --git a/bip-frost-signing/python/secp256k1lab/test/test_bip340.py b/bip-frost-signing/python/secp256k1lab/test/test_bip340.py
new file mode 100644
index 0000000000..7fafad54bd
--- /dev/null
+++ b/bip-frost-signing/python/secp256k1lab/test/test_bip340.py
@@ -0,0 +1,51 @@
+import csv
+from pathlib import Path
+from random import randbytes
+import unittest
+
+from secp256k1lab.bip340 import pubkey_gen, schnorr_sign, schnorr_verify
+
+
+class BIP340Tests(unittest.TestCase):
+    """Test schnorr signatures (BIP 340)."""
+
+    def test_correctness(self):
+        seckey = randbytes(32)
+        pubkey_xonly = pubkey_gen(seckey)
+        aux_rand = randbytes(32)
+        message = b'this is some arbitrary message'
+        signature = schnorr_sign(message, seckey, aux_rand)
+        success = schnorr_verify(message, pubkey_xonly, signature)
+        self.assertTrue(success)
+
+    def test_vectors(self):
+        # Test against vectors from the BIPs repository
+        # [https://github.com/bitcoin/bips/blob/master/bip-0340/test-vectors.csv]
+        vectors_file = Path(__file__).parent / "vectors" / "bip340.csv"
+        with open(vectors_file, encoding='utf8') as csvfile:
+            reader = csv.DictReader(csvfile)
+            for row in reader:
+                with self.subTest(i=int(row['index'])):
+                    self.subtest_vectors_case(row)
+
+    def subtest_vectors_case(self, row):
+        seckey = bytes.fromhex(row['secret key'])
+        pubkey_xonly = bytes.fromhex(row['public key'])
+        aux_rand = bytes.fromhex(row['aux_rand'])
+        msg = bytes.fromhex(row['message'])
+        sig = bytes.fromhex(row['signature'])
+        result_str = row['verification result']
+        comment = row['comment']
+
+        result = result_str == 'TRUE'
+        assert result or result_str == 'FALSE'
+        if seckey != b'':
+            pubkey_xonly_actual = pubkey_gen(seckey)
+            self.assertEqual(pubkey_xonly.hex(), pubkey_xonly_actual.hex(), f"BIP340 test vector ({comment}): pubkey mismatch")
+            sig_actual = schnorr_sign(msg, seckey, aux_rand)
+            self.assertEqual(sig.hex(), sig_actual.hex(), f"BIP340 test vector ({comment}): sig mismatch")
+        result_actual = schnorr_verify(msg, pubkey_xonly, sig)
+        if result:
+            self.assertEqual(result, result_actual, f"BIP340 test vector ({comment}): verification failed unexpectedly")
+        else:
+            self.assertEqual(result, result_actual, f"BIP340 test vector ({comment}): verification succeeded unexpectedly")
diff --git a/bip-frost-signing/python/secp256k1lab/test/test_ecdh.py b/bip-frost-signing/python/secp256k1lab/test/test_ecdh.py
new file mode 100644
index 0000000000..63c9da7a1b
--- /dev/null
+++ b/bip-frost-signing/python/secp256k1lab/test/test_ecdh.py
@@ -0,0 +1,18 @@
+from random import randbytes
+import unittest
+
+from secp256k1lab.ecdh import ecdh_libsecp256k1
+from secp256k1lab.keys import pubkey_gen_plain
+
+
+class ECDHTests(unittest.TestCase):
+    """Test ECDH module."""
+
+    def test_correctness(self):
+        seckey_alice = randbytes(32)
+        pubkey_alice = pubkey_gen_plain(seckey_alice)
+        seckey_bob = randbytes(32)
+        pubkey_bob = pubkey_gen_plain(seckey_bob)
+        shared_secret1 = ecdh_libsecp256k1(seckey_alice, pubkey_bob)
+        shared_secret2 = ecdh_libsecp256k1(seckey_bob, pubkey_alice)
+        self.assertEqual(shared_secret1, shared_secret2)
diff --git a/bip-frost-signing/python/secp256k1lab/test/test_secp256k1.py b/bip-frost-signing/python/secp256k1lab/test/test_secp256k1.py
new file mode 100644
index 0000000000..c6aee19a0a
--- /dev/null
+++ b/bip-frost-signing/python/secp256k1lab/test/test_secp256k1.py
@@ -0,0 +1,180 @@
+"""Test low-level secp256k1 field and group arithmetic classes."""
+from random import randint
+import unittest
+
+from secp256k1lab.secp256k1 import FE, G, GE, Scalar
+
+
+class PrimeFieldTests(unittest.TestCase):
+    def test_fe_constructors(self):
+        P = FE.SIZE
+        random_fe_valid = randint(0, P-1)
+        random_fe_overflowing = randint(P, 2**256-1)
+
+        # wrapping constructors
+        for init_value in [0, P-1, P, P+1, random_fe_valid, random_fe_overflowing]:
+            fe1 = FE(init_value)
+            fe2 = FE.from_int_wrapping(init_value)
+            fe3 = FE.from_bytes_wrapping(init_value.to_bytes(32, 'big'))
+            reduced_value = init_value % P
+            self.assertEqual(int(fe1), reduced_value)
+            self.assertEqual(int(fe1), int(fe2))
+            self.assertEqual(int(fe2), int(fe3))
+
+        # checking constructors (should throw on overflow)
+        for valid_value in [0, P-1, random_fe_valid]:
+            fe1 = FE.from_int_checked(valid_value)
+            fe2 = FE.from_bytes_checked(valid_value.to_bytes(32, 'big'))
+            self.assertEqual(int(fe1), valid_value)
+            self.assertEqual(int(fe1), int(fe2))
+
+        for overflow_value in [P, P+1, random_fe_overflowing]:
+            with self.assertRaises(ValueError):
+                _ = FE.from_int_checked(overflow_value)
+            with self.assertRaises(ValueError):
+                _ = FE.from_bytes_checked(overflow_value.to_bytes(32, 'big'))
+
+    def test_scalar_constructors(self):
+        N = Scalar.SIZE
+        random_scalar_valid = randint(0, N-1)
+        random_scalar_overflowing = randint(N, 2**256-1)
+
+        # wrapping constructors
+        for init_value in [0, N-1, N, N+1, random_scalar_valid, random_scalar_overflowing]:
+            s1 = Scalar(init_value)
+            s2 = Scalar.from_int_wrapping(init_value)
+            s3 = Scalar.from_bytes_wrapping(init_value.to_bytes(32, 'big'))
+            reduced_value = init_value % N
+            self.assertEqual(int(s1), reduced_value)
+            self.assertEqual(int(s1), int(s2))
+            self.assertEqual(int(s2), int(s3))
+
+        # checking constructors (should throw on overflow)
+        for valid_value in [0, N-1, random_scalar_valid]:
+            s1 = Scalar.from_int_checked(valid_value)
+            s2 = Scalar.from_bytes_checked(valid_value.to_bytes(32, 'big'))
+            self.assertEqual(int(s1), valid_value)
+            self.assertEqual(int(s1), int(s2))
+
+        for overflow_value in [N, N+1, random_scalar_overflowing]:
+            with self.assertRaises(ValueError):
+                _ = Scalar.from_int_checked(overflow_value)
+            with self.assertRaises(ValueError):
+                _ = Scalar.from_bytes_checked(overflow_value.to_bytes(32, 'big'))
+
+        # non-zero checking constructors (should throw on zero or overflow, only for Scalar)
+        random_nonzero_scalar_valid = randint(1, N-1)
+        for valid_value in [1, N-1, random_nonzero_scalar_valid]:
+            s1 = Scalar.from_int_nonzero_checked(valid_value)
+            s2 = Scalar.from_bytes_nonzero_checked(valid_value.to_bytes(32, 'big'))
+            self.assertEqual(int(s1), valid_value)
+            self.assertEqual(int(s1), int(s2))
+
+        for invalid_value in [0, N, random_scalar_overflowing]:
+            with self.assertRaises(ValueError):
+                _ = Scalar.from_int_nonzero_checked(invalid_value)
+            with self.assertRaises(ValueError):
+                _ = Scalar.from_bytes_nonzero_checked(invalid_value.to_bytes(32, 'big'))
+
+
+class GeSerializationTests(unittest.TestCase):
+    @classmethod
+    def setUpClass(cls):
+        cls.point_at_infinity = GE()
+        cls.group_elements_on_curve = [
+            # generator point
+            G,
+            # Bitcoin genesis block public key
+            GE(0x678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb6,
+               0x49f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f),
+        ]
+        # generate a few random points, to likely cover both even/odd y polarity
+        cls.group_elements_on_curve.extend([randint(1, Scalar.SIZE-1) * G for _ in range(8)])
+        # generate x coordinates that don't have a valid point on the curve
+        # (note that ~50% of all x coordinates are valid, so finding one needs two loop iterations on average)
+        cls.x_coords_not_on_curve = []
+        while len(cls.x_coords_not_on_curve) < 8:
+            x = randint(0, FE.SIZE-1)
+            if not GE.is_valid_x(x):
+                cls.x_coords_not_on_curve.append(x)
+
+        cls.group_elements = [cls.point_at_infinity] + cls.group_elements_on_curve
+
+    def test_infinity_raises(self):
+        with self.assertRaises(AssertionError):
+            _ = self.point_at_infinity.to_bytes_uncompressed()
+        with self.assertRaises(AssertionError):
+            _ = self.point_at_infinity.to_bytes_compressed()
+        with self.assertRaises(AssertionError):
+            _ = self.point_at_infinity.to_bytes_xonly()
+
+    def test_not_on_curve_raises(self):
+        # for compressed and x-only GE deserialization, test with invalid x coordinate
+        for x in self.x_coords_not_on_curve:
+            x_bytes = x.to_bytes(32, 'big')
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_compressed(b'\x02' + x_bytes)
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_compressed(b'\x03' + x_bytes)
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_compressed_with_infinity(b'\x02' + x_bytes)
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_compressed_with_infinity(b'\x03' + x_bytes)
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_xonly(x_bytes)
+
+        # for uncompressed GE serialization, test by invalidating either coordinate
+        for ge in self.group_elements_on_curve:
+            valid_x = ge.x
+            valid_y = ge.y
+            invalid_x = ge.x + 1
+            invalid_y = ge.y + 1
+
+            # valid cases (if point (x,y) is on the curve, then point (x,-y) is on the curve as well)
+            _ = GE.from_bytes_uncompressed(b'\x04' + valid_x.to_bytes() + valid_y.to_bytes())
+            _ = GE.from_bytes_uncompressed(b'\x04' + valid_x.to_bytes() + (-valid_y).to_bytes())
+            # invalid cases (curve equation y**2 = x**3 + 7 doesn't hold)
+            self.assertNotEqual(invalid_y**2, valid_x**3 + 7)
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_uncompressed(b'\x04' + valid_x.to_bytes() + invalid_y.to_bytes())
+            self.assertNotEqual(valid_y**2, invalid_x**3 + 7)
+            with self.assertRaises(ValueError):
+                _ = GE.from_bytes_uncompressed(b'\x04' + invalid_x.to_bytes() + valid_y.to_bytes())
+
+    def test_affine(self):
+        # GE serialization and parsing round-trip (variants that only support serializing points on the curve)
+        for ge_orig in self.group_elements_on_curve:
+            # uncompressed serialization: 65 bytes, starts with 0x04
+            ge_ser = ge_orig.to_bytes_uncompressed()
+            self.assertEqual(len(ge_ser), 65)
+            self.assertEqual(ge_ser[0], 0x04)
+            ge_deser = GE.from_bytes_uncompressed(ge_ser)
+            self.assertEqual(ge_deser, ge_orig)
+
+            # compressed serialization: 33 bytes, starts with 0x02 (if y is even) or 0x03 (if y is odd)
+            ge_ser = ge_orig.to_bytes_compressed()
+            self.assertEqual(len(ge_ser), 33)
+            self.assertEqual(ge_ser[0], 0x02 if ge_orig.has_even_y() else 0x03)
+            ge_deser = GE.from_bytes_compressed(ge_ser)
+            self.assertEqual(ge_deser, ge_orig)
+
+            # x-only serialization: 32 bytes
+            ge_ser = ge_orig.to_bytes_xonly()
+            self.assertEqual(len(ge_ser), 32)
+            ge_deser = GE.from_bytes_xonly(ge_ser)
+            if not ge_orig.has_even_y():  # x-only implies even y, so flip if necessary
+                ge_deser = -ge_deser
+            self.assertEqual(ge_deser, ge_orig)
+
+    def test_affine_with_infinity(self):
+        # GE serialization and parsing round-trip (variants that also support serializing the point at infinity)
+        for ge_orig in self.group_elements:
+            # compressed serialization: 33 bytes, all-zeros for point at infinity
+            ge_ser = ge_orig.to_bytes_compressed_with_infinity()
+            self.assertEqual(len(ge_ser), 33)
+            if ge_orig.infinity:
+                self.assertEqual(ge_ser, b'\x00'*33)
+            else:
+                self.assertEqual(ge_ser[0], 0x02 if ge_orig.has_even_y() else 0x03)
+            ge_deser = GE.from_bytes_compressed_with_infinity(ge_ser)
+            self.assertEqual(ge_deser, ge_orig)
diff --git a/bip-frost-signing/python/secp256k1lab/test/vectors/bip340.csv b/bip-frost-signing/python/secp256k1lab/test/vectors/bip340.csv
new file mode 100644
index 0000000000..aa317a3b3d
--- /dev/null
+++ b/bip-frost-signing/python/secp256k1lab/test/vectors/bip340.csv
@@ -0,0 +1,20 @@
+index,secret key,public key,aux_rand,message,signature,verification result,comment
+0,0000000000000000000000000000000000000000000000000000000000000003,F9308A019258C31049344F85F89D5229B531C845836F99B08601F113BCE036F9,0000000000000000000000000000000000000000000000000000000000000000,0000000000000000000000000000000000000000000000000000000000000000,E907831F80848D1069A5371B402410364BDF1C5F8307B0084C55F1CE2DCA821525F66A4A85EA8B71E482A74F382D2CE5EBEEE8FDB2172F477DF4900D310536C0,TRUE,
+1,B7E151628AED2A6ABF7158809CF4F3C762E7160F38B4DA56A784D9045190CFEF,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,0000000000000000000000000000000000000000000000000000000000000001,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6896BD60EEAE296DB48A229FF71DFE071BDE413E6D43F917DC8DCF8C78DE33418906D11AC976ABCCB20B091292BFF4EA897EFCB639EA871CFA95F6DE339E4B0A,TRUE,
+2,C90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B14E5C9,DD308AFEC5777E13121FA72B9CC1B7CC0139715309B086C960E18FD969774EB8,C87AA53824B4D7AE2EB035A2B5BBBCCC080E76CDC6D1692C4B0B62D798E6D906,7E2D58D8B3BCDF1ABADEC7829054F90DDA9805AAB56C77333024B9D0A508B75C,5831AAEED7B44BB74E5EAB94BA9D4294C49BCF2A60728D8B4C200F50DD313C1BAB745879A5AD954A72C45A91C3A51D3C7ADEA98D82F8481E0E1E03674A6F3FB7,TRUE,
+3,0B432B2677937381AEF05BB02A66ECD012773062CF3FA2549E44F58ED2401710,25D1DFF95105F5253C4022F628A996AD3A0D95FBF21D468A1B33F8C160D8F517,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,7EB0509757E246F19449885651611CB965ECC1A187DD51B64FDA1EDC9637D5EC97582B9CB13DB3933705B32BA982AF5AF25FD78881EBB32771FC5922EFC66EA3,TRUE,test fails if msg is reduced modulo p or n
+4,,D69C3509BB99E412E68B0FE8544E72837DFA30746D8BE2AA65975F29D22DC7B9,,4DF3C3F68FCC83B27E9D42C90431A72499F17875C81A599B566C9889B9696703,00000000000000000000003B78CE563F89A0ED9414F5AA28AD0D96D6795F9C6376AFB1548AF603B3EB45C9F8207DEE1060CB71C04E80F593060B07D28308D7F4,TRUE,
+5,,EEFDEA4CDB677750A420FEE807EACF21EB9898AE79B9768766E4FAA04A2D4A34,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E17776969E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,public key not on the curve
+6,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,FFF97BD5755EEEA420453A14355235D382F6472F8568A18B2F057A14602975563CC27944640AC607CD107AE10923D9EF7A73C643E166BE5EBEAFA34B1AC553E2,FALSE,has_even_y(R) is false
+7,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,1FA62E331EDBC21C394792D2AB1100A7B432B013DF3F6FF4F99FCB33E0E1515F28890B3EDB6E7189B630448B515CE4F8622A954CFE545735AAEA5134FCCDB2BD,FALSE,negated message
+8,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E177769961764B3AA9B2FFCB6EF947B6887A226E8D7C93E00C5ED0C1834FF0D0C2E6DA6,FALSE,negated s value
+9,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,0000000000000000000000000000000000000000000000000000000000000000123DDA8328AF9C23A94C1FEECFD123BA4FB73476F0D594DCB65C6425BD186051,FALSE,sG - eP is infinite. Test fails in single verification if has_even_y(inf) is defined as true and x(inf) as 0
+10,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,00000000000000000000000000000000000000000000000000000000000000017615FBAF5AE28864013C099742DEADB4DBA87F11AC6754F93780D5A1837CF197,FALSE,sG - eP is infinite. Test fails in single verification if has_even_y(inf) is defined as true and x(inf) as 1
+11,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,4A298DACAE57395A15D0795DDBFD1DCB564DA82B0F269BC70A74F8220429BA1D69E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,sig[0:32] is not an X coordinate on the curve
+12,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F69E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,sig[0:32] is equal to field size
+13,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E177769FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141,FALSE,sig[32:64] is equal to curve order
+14,,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E17776969E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,public key is not a valid X coordinate because it exceeds the field size
+15,0340034003400340034003400340034003400340034003400340034003400340,778CAA53B4393AC467774D09497A87224BF9FAB6F6E68B23086497324D6FD117,0000000000000000000000000000000000000000000000000000000000000000,,71535DB165ECD9FBBC046E5FFAEA61186BB6AD436732FCCC25291A55895464CF6069CE26BF03466228F19A3A62DB8A649F2D560FAC652827D1AF0574E427AB63,TRUE,message of size 0 (added 2022-12)
+16,0340034003400340034003400340034003400340034003400340034003400340,778CAA53B4393AC467774D09497A87224BF9FAB6F6E68B23086497324D6FD117,0000000000000000000000000000000000000000000000000000000000000000,11,08A20A0AFEF64124649232E0693C583AB1B9934AE63B4C3511F3AE1134C6A303EA3173BFEA6683BD101FA5AA5DBC1996FE7CACFC5A577D33EC14564CEC2BACBF,TRUE,message of size 1 (added 2022-12)
+17,0340034003400340034003400340034003400340034003400340034003400340,778CAA53B4393AC467774D09497A87224BF9FAB6F6E68B23086497324D6FD117,0000000000000000000000000000000000000000000000000000000000000000,0102030405060708090A0B0C0D0E0F1011,5130F39A4059B43BC7CAC09A19ECE52B5D8699D1A71E3C52DA9AFDB6B50AC370C4A482B77BF960F8681540E25B6771ECE1E5A37FD80E5A51897C5566A97EA5A5,TRUE,message of size 17 (added 2022-12)
+18,0340034003400340034003400340034003400340034003400340034003400340,778CAA53B4393AC467774D09497A87224BF9FAB6F6E68B23086497324D6FD117,0000000000000000000000000000000000000000000000000000000000000000,99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999,403B12B0D8555A344175EA7EC746566303321E5DBFA8BE6F091635163ECA79A8585ED3E3170807E7C03B720FC54C7B23897FCBA0E9D0B4A06894CFD249F22367,TRUE,message of size 100 (added 2022-12)
diff --git a/bip-frost-signing/python/tests.py b/bip-frost-signing/python/tests.py
new file mode 100755
index 0000000000..442bf492a1
--- /dev/null
+++ b/bip-frost-signing/python/tests.py
@@ -0,0 +1,669 @@
+#!/usr/bin/env python3
+
+import json
+import os
+import secrets
+import sys
+import time
+from typing import List, Optional, Tuple
+
+from frost_ref.signing import (
+    COORDINATOR_ID,
+    InvalidContributionError,
+    PlainPk,
+    SessionContext,
+    SignersContext,
+    XonlyPk,
+    deterministic_sign,
+    get_xonly_pk,
+    thresh_pubkey_and_tweak,
+    nonce_agg,
+    nonce_gen,
+    nonce_gen_internal,
+    partial_sig_agg,
+    partial_sig_verify,
+    partial_sig_verify_internal,
+    sign,
+)
+from secp256k1lab.keys import pubkey_gen_plain
+from secp256k1lab.secp256k1 import G, Scalar
+from secp256k1lab.bip340 import schnorr_verify
+from secp256k1lab.util import int_from_bytes
+from trusted_dealer import trusted_dealer_keygen
+
+
+def fromhex_all(hex_values):
+    return [bytes.fromhex(value) for value in hex_values]
+
+
+# Check that calling `try_fn` raises `exception`. If `exception` is raised,
+# examine it with `except_fn`.
+def assert_raises(exception, try_fn, except_fn):
+    raised = False
+    try:
+        try_fn()
+    except exception as e:
+        raised = True
+        assert except_fn(e)
+    except BaseException:
+        raise AssertionError("Wrong exception raised in a test.")
+    if not raised:
+        raise AssertionError(
+            "Exception was _not_ raised in a test where it was required."
+        )
+
+
+def get_error_details(test_case):
+    error = test_case["error"]
+    if error["type"] == "InvalidContributionError":
+        exception = InvalidContributionError
+        if "contrib" in error:
+
+            def except_fn(e):
+                return e.id == error["id"] and e.contrib == error["contrib"]
+        else:
+
+            def except_fn(e):
+                return e.id == error["id"]
+    elif error["type"] == "ValueError":
+        exception = ValueError
+
+        def except_fn(e):
+            return str(e) == error["message"]
+    else:
+        raise RuntimeError(f"Invalid error type: {error['type']}")
+    return exception, except_fn
+
+
+def generate_frost_keys(
+    n: int, t: int
+) -> Tuple[PlainPk, List[int], List[bytes], List[PlainPk]]:
+    if not (2 <= t <= n):
+        raise ValueError("values must satisfy: 2 <= t <= n")
+
+    thresh_pk, secshares, pubshares = trusted_dealer_keygen(
+        secrets.token_bytes(32), n, t
+    )
+
+    # IDs are 0-indexed: the index in the list IS the participant ID
+    assert len(secshares) == n
+    identifiers = list(range(len(secshares)))
+
+    return (thresh_pk, identifiers, secshares, pubshares)
+
+
+def test_nonce_gen_vectors():
+    with open(os.path.join(sys.path[0], "vectors", "nonce_gen_vectors.json")) as f:
+        test_data = json.load(f)
+
+    for test_case in test_data["test_cases"]:
+
+        def get_value(key) -> bytes:
+            return bytes.fromhex(test_case[key])
+
+        def get_value_maybe(key) -> Optional[bytes]:
+            if test_case[key] is not None:
+                return get_value(key)
+            else:
+                return None
+
+        rand_ = get_value("rand_")
+        secshare = get_value_maybe("secshare")
+        pubshare = get_value_maybe("pubshare")
+        if pubshare is not None:
+            pubshare = PlainPk(pubshare)
+        thresh_pk = get_value_maybe("threshold_pubkey")
+        if thresh_pk is not None:
+            thresh_pk = XonlyPk(thresh_pk)
+        msg = get_value_maybe("msg")
+        extra_in = get_value_maybe("extra_in")
+        expected_secnonce = get_value("expected_secnonce")
+        expected_pubnonce = get_value("expected_pubnonce")
+
+        assert nonce_gen_internal(
+            rand_, secshare, pubshare, thresh_pk, msg, extra_in
+        ) == (expected_secnonce, expected_pubnonce)
+
+
+def test_nonce_agg_vectors():
+    with open(os.path.join(sys.path[0], "vectors", "nonce_agg_vectors.json")) as f:
+        test_data = json.load(f)
+
+    pubnonces_list = fromhex_all(test_data["pubnonces"])
+    valid_test_cases = test_data["valid_test_cases"]
+    error_test_cases = test_data["error_test_cases"]
+
+    for test_case in valid_test_cases:
+        # todo: assert the t <= len(pubnonces, ids) <= n
+        # todo: assert the values of ids too? 1 <= id <= n?
+        pubnonces = [pubnonces_list[i] for i in test_case["pubnonce_indices"]]
+        ids = test_case["participant_identifiers"]
+        expected_aggnonce = bytes.fromhex(test_case["expected_aggnonce"])
+        assert nonce_agg(pubnonces, ids) == expected_aggnonce
+
+    for test_case in error_test_cases:
+        exception, except_fn = get_error_details(test_case)
+        pubnonces = [pubnonces_list[i] for i in test_case["pubnonce_indices"]]
+        ids = test_case["participant_identifiers"]
+        assert_raises(exception, lambda: nonce_agg(pubnonces, ids), except_fn)
+
+
+# todo: include vectors from the frost draft too
+# todo: add a test where thresh_pk is even (might need to modify json file)
+def test_sign_verify_vectors():
+    with open(os.path.join(sys.path[0], "vectors", "sign_verify_vectors.json")) as f:
+        test_data = json.load(f)
+
+    n = test_data["n"]
+    t = test_data["t"]
+    secshare_p0 = bytes.fromhex(test_data["secshare_p0"])
+    ids = test_data["identifiers"]
+    pubshares = fromhex_all(test_data["pubshares"])
+    thresh_pk = bytes.fromhex(test_data["threshold_pubkey"])
+    # The public key corresponding to the first participant (secshare_p0) is at index 0
+    assert pubshares[0] == PlainPk(pubkey_gen_plain(secshare_p0))
+
+    secnonces_p0 = fromhex_all(test_data["secnonces_p0"])
+    pubnonces = fromhex_all(test_data["pubnonces"])
+    # The public nonce corresponding to the first participant (secnonces_p0[0]) is at index 0
+    k_1 = int_from_bytes(secnonces_p0[0][0:32])
+    k_2 = int_from_bytes(secnonces_p0[0][32:64])
+    R_s1 = k_1 * G
+    R_s2 = k_2 * G
+    assert not R_s1.infinity and not R_s2.infinity
+    assert pubnonces[0] == R_s1.to_bytes_compressed() + R_s2.to_bytes_compressed()
+
+    aggnonces = fromhex_all(test_data["aggnonces"])
+    msgs = fromhex_all(test_data["msgs"])
+
+    valid_test_cases = test_data["valid_test_cases"]
+    sign_error_test_cases = test_data["sign_error_test_cases"]
+    verify_fail_test_cases = test_data["verify_fail_test_cases"]
+    verify_error_test_cases = test_data["verify_error_test_cases"]
+
+    for test_case in valid_test_cases:
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+        # Make sure that pubnonces and aggnonce in the test vector are consistent
+        assert nonce_agg(pubnonces_tmp, ids_tmp) == aggnonce_tmp
+        msg = msgs[test_case["msg_index"]]
+        signer_index = test_case["signer_index"]
+        my_id = ids_tmp[signer_index]
+        expected = bytes.fromhex(test_case["expected"])
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        session_ctx = SessionContext(aggnonce_tmp, signers_tmp, [], [], msg)
+        # WARNING: An actual implementation should _not_ copy the secnonce.
+        # Reusing the secnonce, as we do here for testing purposes, can leak the
+        # secret key.
+        secnonce_tmp = bytearray(secnonces_p0[0])
+        assert sign(secnonce_tmp, secshare_p0, my_id, session_ctx) == expected
+        assert partial_sig_verify(
+            expected, pubnonces_tmp, signers_tmp, [], [], msg, signer_index
+        )
+
+    for test_case in sign_error_test_cases:
+        exception, except_fn = get_error_details(test_case)
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+        msg = msgs[test_case["msg_index"]]
+        signer_index = test_case["signer_index"]
+        my_id = (
+            test_case["signer_id"] if signer_index is None else ids_tmp[signer_index]
+        )
+        secnonce_tmp = bytearray(secnonces_p0[test_case["secnonce_index"]])
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        session_ctx = SessionContext(aggnonce_tmp, signers_tmp, [], [], msg)
+        assert_raises(
+            exception,
+            lambda: sign(secnonce_tmp, secshare_p0, my_id, session_ctx),
+            except_fn,
+        )
+
+    for test_case in verify_fail_test_cases:
+        psig = bytes.fromhex(test_case["psig"])
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+        msg = msgs[test_case["msg_index"]]
+        signer_index = test_case["signer_index"]
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        aggnonce_tmp = nonce_agg(pubnonces_tmp, ids_tmp)
+        session_ctx = SessionContext(aggnonce_tmp, signers_tmp, [], [], msg)
+        assert not partial_sig_verify_internal(
+            psig,
+            ids_tmp[signer_index],
+            pubnonces_tmp[signer_index],
+            pubshares_tmp[signer_index],
+            session_ctx,
+        )
+
+    for test_case in verify_error_test_cases:
+        exception, except_fn = get_error_details(test_case)
+
+        psig = bytes.fromhex(test_case["psig"])
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+        msg = msgs[test_case["msg_index"]]
+        signer_index = test_case["signer_index"]
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        assert_raises(
+            exception,
+            lambda: partial_sig_verify(
+                psig, pubnonces_tmp, signers_tmp, [], [], msg, signer_index
+            ),
+            except_fn,
+        )
+
+
+def test_tweak_vectors():
+    with open(os.path.join(sys.path[0], "vectors", "tweak_vectors.json")) as f:
+        test_data = json.load(f)
+
+    n = test_data["n"]
+    t = test_data["t"]
+    secshare_p0 = bytes.fromhex(test_data["secshare_p0"])
+    ids = test_data["identifiers"]
+    pubshares = fromhex_all(test_data["pubshares"])
+    # The public key corresponding to the first participant (secshare_p0) is at index 0
+    assert pubshares[0] == PlainPk(pubkey_gen_plain(secshare_p0))
+    thresh_pk = bytes.fromhex(test_data["threshold_pubkey"])
+
+    secnonce_p0 = bytearray(bytes.fromhex(test_data["secnonce_p0"]))
+    pubnonces = fromhex_all(test_data["pubnonces"])
+    # The public nonce corresponding to the first participant (secnonce_p0) is at index 0
+    k_1 = Scalar.from_bytes_checked(secnonce_p0[0:32])
+    k_2 = Scalar.from_bytes_checked(secnonce_p0[32:64])
+    R_s1 = k_1 * G
+    R_s2 = k_2 * G
+    assert not R_s1.infinity and not R_s2.infinity
+    assert pubnonces[0] == R_s1.to_bytes_compressed() + R_s2.to_bytes_compressed()
+
+    aggnonces = fromhex_all(test_data["aggnonces"])
+    tweaks = fromhex_all(test_data["tweaks"])
+
+    msg = bytes.fromhex(test_data["msg"])
+
+    valid_test_cases = test_data["valid_test_cases"]
+    error_test_cases = test_data["error_test_cases"]
+
+    for test_case in valid_test_cases:
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+        # Make sure that pubnonces and aggnonce in the test vector are consistent
+        assert nonce_agg(pubnonces_tmp, ids_tmp) == aggnonce_tmp
+        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+        tweak_modes_tmp = test_case["is_xonly"]
+        signer_index = test_case["signer_index"]
+        my_id = ids_tmp[signer_index]
+        expected = bytes.fromhex(test_case["expected"])
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        session_ctx = SessionContext(
+            aggnonce_tmp, signers_tmp, tweaks_tmp, tweak_modes_tmp, msg
+        )
+        # WARNING: An actual implementation should _not_ copy the secnonce.
+        # Reusing the secnonce, as we do here for testing purposes, can leak the
+        # secret key.
+        secnonce_tmp = bytearray(secnonce_p0)
+        assert sign(secnonce_tmp, secshare_p0, my_id, session_ctx) == expected
+        assert partial_sig_verify(
+            expected,
+            pubnonces_tmp,
+            signers_tmp,
+            tweaks_tmp,
+            tweak_modes_tmp,
+            msg,
+            signer_index,
+        )
+
+    for test_case in error_test_cases:
+        exception, except_fn = get_error_details(test_case)
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+        tweak_modes_tmp = test_case["is_xonly"]
+        signer_index = test_case["signer_index"]
+        my_id = ids_tmp[signer_index]
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        session_ctx = SessionContext(
+            aggnonce_tmp, signers_tmp, tweaks_tmp, tweak_modes_tmp, msg
+        )
+        assert_raises(
+            exception,
+            lambda: sign(secnonce_p0, secshare_p0, my_id, session_ctx),
+            except_fn,
+        )
+
+
+def test_det_sign_vectors():
+    with open(os.path.join(sys.path[0], "vectors", "det_sign_vectors.json")) as f:
+        test_data = json.load(f)
+
+    n = test_data["n"]
+    t = test_data["t"]
+    secshare_p0 = bytes.fromhex(test_data["secshare_p0"])
+    ids = test_data["identifiers"]
+    pubshares = fromhex_all(test_data["pubshares"])
+    # The public key corresponding to the first participant (secshare_p0) is at index 0
+    assert pubshares[0] == PlainPk(pubkey_gen_plain(secshare_p0))
+
+    thresh_pk = bytes.fromhex(test_data["threshold_pubkey"])
+    msgs = fromhex_all(test_data["msgs"])
+
+    valid_test_cases = test_data["valid_test_cases"]
+    error_test_cases = test_data["error_test_cases"]
+
+    for test_case in valid_test_cases:
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        aggothernonce = bytes.fromhex(test_case["aggothernonce"])
+        tweaks = fromhex_all(test_case["tweaks"])
+        is_xonly = test_case["is_xonly"]
+        msg = msgs[test_case["msg_index"]]
+        signer_index = test_case["signer_index"]
+        my_id = ids_tmp[signer_index]
+        rand = (
+            bytes.fromhex(test_case["rand"]) if test_case["rand"] is not None else None
+        )
+        expected = fromhex_all(test_case["expected"])
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        pubnonce, psig = deterministic_sign(
+            secshare_p0,
+            my_id,
+            aggothernonce,
+            signers_tmp,
+            tweaks,
+            is_xonly,
+            msg,
+            rand,
+        )
+        assert pubnonce == expected[0]
+        assert psig == expected[1]
+
+        pubnonces = [aggothernonce, pubnonce]
+        aggnonce_tmp = nonce_agg(pubnonces, [COORDINATOR_ID, my_id])
+        session_ctx = SessionContext(aggnonce_tmp, signers_tmp, tweaks, is_xonly, msg)
+        assert partial_sig_verify_internal(
+            psig, my_id, pubnonce, pubshares_tmp[signer_index], session_ctx
+        )
+
+    for test_case in error_test_cases:
+        exception, except_fn = get_error_details(test_case)
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        aggothernonce = bytes.fromhex(test_case["aggothernonce"])
+        tweaks = fromhex_all(test_case["tweaks"])
+        is_xonly = test_case["is_xonly"]
+        msg = msgs[test_case["msg_index"]]
+        signer_index = test_case["signer_index"]
+        my_id = (
+            test_case["signer_id"] if signer_index is None else ids_tmp[signer_index]
+        )
+        rand = (
+            bytes.fromhex(test_case["rand"]) if test_case["rand"] is not None else None
+        )
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+
+        def try_fn():
+            return deterministic_sign(
+                secshare_p0,
+                my_id,
+                aggothernonce,
+                signers_tmp,
+                tweaks,
+                is_xonly,
+                msg,
+                rand,
+            )
+
+        assert_raises(exception, try_fn, except_fn)
+
+
+def test_sig_agg_vectors():
+    with open(os.path.join(sys.path[0], "vectors", "sig_agg_vectors.json")) as f:
+        test_data = json.load(f)
+
+    n = test_data["n"]
+    t = test_data["t"]
+    ids = test_data["identifiers"]
+    pubshares = fromhex_all(test_data["pubshares"])
+    thresh_pk = bytes.fromhex(test_data["threshold_pubkey"])
+    # These nonces are only required if the tested API takes the individual
+    # nonces and not the aggregate nonce.
+    pubnonces = fromhex_all(test_data["pubnonces"])
+
+    tweaks = fromhex_all(test_data["tweaks"])
+    msg = bytes.fromhex(test_data["msg"])
+
+    valid_test_cases = test_data["valid_test_cases"]
+    error_test_cases = test_data["error_test_cases"]
+
+    for test_case in valid_test_cases:
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+        aggnonce_tmp = bytes.fromhex(test_case["aggnonce"])
+        # Make sure that pubnonces and aggnonce in the test vector are consistent
+        assert aggnonce_tmp == nonce_agg(pubnonces_tmp, ids_tmp)
+
+        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+        tweak_modes_tmp = test_case["is_xonly"]
+        psigs_tmp = fromhex_all(test_case["psigs"])
+        expected = bytes.fromhex(test_case["expected"])
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        session_ctx = SessionContext(
+            aggnonce_tmp, signers_tmp, tweaks_tmp, tweak_modes_tmp, msg
+        )
+        # Make sure that the partial signatures in the test vector are
+        # consistent. Since the tested API takes only the aggnonce (not the
+        # pubnonces list), this check could be skipped.
+        for i in range(len(ids_tmp)):
+            assert partial_sig_verify(
+                psigs_tmp[i],
+                pubnonces_tmp,
+                signers_tmp,
+                tweaks_tmp,
+                tweak_modes_tmp,
+                msg,
+                i,
+            )
+
+        bip340sig = partial_sig_agg(psigs_tmp, ids_tmp, session_ctx)
+        assert bip340sig == expected
+        tweaked_thresh_pk = get_xonly_pk(
+            thresh_pubkey_and_tweak(thresh_pk, tweaks_tmp, tweak_modes_tmp)
+        )
+        assert schnorr_verify(msg, tweaked_thresh_pk, bip340sig)
+
+    for test_case in error_test_cases:
+        exception, except_fn = get_error_details(test_case)
+
+        ids_tmp = [ids[i] for i in test_case["id_indices"]]
+        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+        aggnonce_tmp = bytes.fromhex(test_case["aggnonce"])
+
+        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+        tweak_modes_tmp = test_case["is_xonly"]
+        psigs_tmp = fromhex_all(test_case["psigs"])
+
+        signers_tmp = SignersContext(n, t, ids_tmp, pubshares_tmp, thresh_pk)
+        session_ctx = SessionContext(
+            aggnonce_tmp, signers_tmp, tweaks_tmp, tweak_modes_tmp, msg
+        )
+        assert_raises(
+            exception,
+            lambda: partial_sig_agg(psigs_tmp, ids_tmp, session_ctx),
+            except_fn,
+        )
+
+
+def test_sign_and_verify_random(iterations: int) -> None:
+    for itr in range(iterations):
+        secure_rng = secrets.SystemRandom()
+        # randomly choose a number: 2 <= number <= 10
+        n = secure_rng.randrange(2, 11)
+        # randomly choose a number: 2 <= number <= n
+        t = secure_rng.randrange(2, n + 1)
+
+        thresh_pk, ids, secshares, pubshares = generate_frost_keys(n, t)
+        assert len(ids) == len(secshares) == len(pubshares) == n
+
+        # randomly choose the signer set, with len: t <= len <= n
+        signer_count = secure_rng.randrange(t, n + 1)
+        signer_indices = secure_rng.sample(range(n), signer_count)
+        assert (
+            len(set(signer_indices)) == signer_count
+        )  # signer set must not contain duplicate ids
+
+        signer_ids = [ids[i] for i in signer_indices]
+        signer_pubshares = [pubshares[i] for i in signer_indices]
+        # NOTE: secret values must never be copied! We do it here only to
+        # improve the code readability.
+        signer_secshares = [secshares[i] for i in signer_indices]
+
+        signers_ctx = SignersContext(n, t, signer_ids, signer_pubshares, thresh_pk)
+
+        # In this example, the message and threshold pubkey are known
+        # before nonce generation, so they can be passed into the nonce
+        # generation function as a defense-in-depth measure to protect
+        # against nonce reuse.
+        #
+        # If these values are not known when nonce_gen is called, empty
+        # byte arrays can be passed in for the corresponding arguments
+        # instead.
+        msg = secrets.token_bytes(32)
+        v = secrets.randbelow(4)
+        tweaks = [secrets.token_bytes(32) for _ in range(v)]
+        tweak_modes = [secrets.choice([False, True]) for _ in range(v)]
+        tweaked_thresh_pk = get_xonly_pk(
+            thresh_pubkey_and_tweak(thresh_pk, tweaks, tweak_modes)
+        )
+
+        signer_secnonces = []
+        signer_pubnonces = []
+        for i in range(signer_count - 1):
+            # Use a clock for extra_in
+            timestamp = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
+            secnonce_i, pubnonce_i = nonce_gen(
+                signer_secshares[i],
+                signer_pubshares[i],
+                tweaked_thresh_pk,
+                msg,
+                timestamp.to_bytes(8, "big"),
+            )
+            signer_secnonces.append(secnonce_i)
+            signer_pubnonces.append(pubnonce_i)
+
+        # On even iterations use the regular signing algorithm for the final
+        # signer, otherwise use the deterministic signing algorithm
+        if itr % 2 == 0:
+            timestamp = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
+            secnonce_final, pubnonce_final = nonce_gen(
+                signer_secshares[-1],
+                signer_pubshares[-1],
+                tweaked_thresh_pk,
+                msg,
+                timestamp.to_bytes(8, "big"),
+            )
+            signer_secnonces.append(secnonce_final)
+        else:
+            aggothernonce = nonce_agg(signer_pubnonces, signer_ids[:-1])
+            rand = secrets.token_bytes(32)
+            pubnonce_final, psig_final = deterministic_sign(
+                signer_secshares[-1],
+                signer_ids[-1],
+                aggothernonce,
+                signers_ctx,
+                tweaks,
+                tweak_modes,
+                msg,
+                rand,
+            )
+
+        signer_pubnonces.append(pubnonce_final)
+        aggnonce = nonce_agg(signer_pubnonces, signer_ids)
+        session_ctx = SessionContext(aggnonce, signers_ctx, tweaks, tweak_modes, msg)
+
+        signer_psigs = []
+        for i in range(signer_count):
+            if itr % 2 != 0 and i == signer_count - 1:
+                psig_i = psig_final  # the last signer has already signed deterministically
+            else:
+                psig_i = sign(
+                    signer_secnonces[i], signer_secshares[i], signer_ids[i], session_ctx
+                )
+            assert partial_sig_verify(
+                psig_i,
+                signer_pubnonces,
+                signers_ctx,
+                tweaks,
+                tweak_modes,
+                msg,
+                i,
+            )
+            signer_psigs.append(psig_i)
+
+        # An exception is thrown if a secnonce is accidentally reused
+        assert_raises(
+            ValueError,
+            lambda: sign(
+                signer_secnonces[0], signer_secshares[0], signer_ids[0], session_ctx
+            ),
+            lambda e: True,
+        )
+
+        # Wrong signer index
+        assert not partial_sig_verify(
+            signer_psigs[0],
+            signer_pubnonces,
+            signers_ctx,
+            tweaks,
+            tweak_modes,
+            msg,
+            1,
+        )
+        # Wrong message
+        assert not partial_sig_verify(
+            signer_psigs[0],
+            signer_pubnonces,
+            signers_ctx,
+            tweaks,
+            tweak_modes,
+            secrets.token_bytes(32),
+            0,
+        )
+
+        bip340sig = partial_sig_agg(signer_psigs, signer_ids, session_ctx)
+        assert schnorr_verify(msg, tweaked_thresh_pk, bip340sig)
+
+
+def run_test(test_name, test_func):
+    max_len = 30
+    test_name = test_name.ljust(max_len, ".")
+    print(f"Running {test_name}...", end="", flush=True)
+    try:
+        test_func()
+        print("Passed!")
+    except Exception as e:
+        print(f"Failed :'(\nError: {e}")
+
+
+if __name__ == "__main__":
+    run_test("test_nonce_gen_vectors", test_nonce_gen_vectors)
+    run_test("test_nonce_agg_vectors", test_nonce_agg_vectors)
+    run_test("test_sign_verify_vectors", test_sign_verify_vectors)
+    run_test("test_tweak_vectors", test_tweak_vectors)
+    run_test("test_det_sign_vectors", test_det_sign_vectors)
+    run_test("test_sig_agg_vectors", test_sig_agg_vectors)
+
run_test("test_sign_and_verify_random", lambda: test_sign_and_verify_random(6)) diff --git a/bip-frost-signing/python/tests.sh b/bip-frost-signing/python/tests.sh new file mode 100755 index 0000000000..db15516100 --- /dev/null +++ b/bip-frost-signing/python/tests.sh @@ -0,0 +1,24 @@ +#!/bin/sh +set -e + +check_availability() { + command -v "$1" > /dev/null 2>&1 || { + echo >&2 "$1 is required but it's not installed. Aborting."; + exit 1; + } +} + +check_availability mypy +check_availability ruff + +cd "$(dirname "$0")" + +# Keep going if a linter fails +ruff check --quiet || true +ruff format --diff --quiet || true +mypy --no-error-summary . || true +# Be more strict in the reference code +mypy --no-error-summary --strict --untyped-calls-exclude=secp256k1lab -p frost_ref --follow-imports=silent || true + +./gen_vectors.py +./tests.py diff --git a/bip-frost-signing/python/trusted_dealer.py b/bip-frost-signing/python/trusted_dealer.py new file mode 100644 index 0000000000..08eb1d63e4 --- /dev/null +++ b/bip-frost-signing/python/trusted_dealer.py @@ -0,0 +1,140 @@ +# TODO: remove this file, and use trusted dealer BIP's reference code instead, after it gets published. + +# Implementation of the Trusted Dealer Key Generation approach for FROST mentioned +# in https://datatracker.ietf.org/doc/draft-irtf-cfrg-frost/15/ (Appendix D). 
+#
+# It's worth noting that this isn't the only method compatible with BIP FROST Signing;
+# there are alternative key generation methods available, such as BIP-FROST-DKG:
+# https://github.com/BlockstreamResearch/bip-frost-dkg
+
+from typing import Tuple, List
+import unittest
+import secrets
+
+from secp256k1lab.secp256k1 import G, GE, Scalar
+from frost_ref.signing import derive_interpolating_value
+from frost_ref import PlainPk
+
+
+# Evaluates the polynomial using Horner's method, assuming coeffs[0] corresponds
+# to the coefficient of the highest-degree term
+def polynomial_evaluate(coeffs: List[Scalar], x: Scalar) -> Scalar:
+    res = Scalar(0)
+    for coeff in coeffs:
+        res = res * x + coeff
+    return res
+
+
+def secret_share_combine(shares: List[Scalar], ids: List[int]) -> Scalar:
+    assert len(shares) == len(ids)
+    secret = Scalar(0)
+    for share, my_id in zip(shares, ids):
+        lam = derive_interpolating_value(ids, my_id)
+        secret += share * lam
+    return secret
+
+
+def secret_share_shard(secret: Scalar, coeffs: List[Scalar], n: int) -> List[Scalar]:
+    coeffs = coeffs + [secret]
+
+    secshares = []
+    # ids are 0-indexed (0..n-1), but the polynomial is evaluated at x = id + 1
+    # because p(0) = secret
+    for i in range(n):
+        x_i = Scalar(i + 1)
+        y_i = polynomial_evaluate(coeffs, x_i)
+        assert y_i != 0
+        secshares.append(y_i)
+    return secshares
+
+
+def trusted_dealer_keygen(
+    thresh_sk_: bytes, n: int, t: int
+) -> Tuple[PlainPk, List[bytes], List[PlainPk]]:
+    assert 2 <= t <= n
+
+    thresh_sk = Scalar.from_bytes_nonzero_checked(thresh_sk_)
+    # Key generation protocols are allowed to generate plain public keys (i.e., non-xonly)
+    thresh_pk_ = thresh_sk * G
+    assert not thresh_pk_.infinity
+    thresh_pk = PlainPk(thresh_pk_.to_bytes_compressed())
+
+    coeffs = []
+    for _ in range(t - 1):
+        coeffs.append(Scalar.from_bytes_nonzero_checked(secrets.token_bytes(32)))
+
+    secshares_ = secret_share_shard(thresh_sk, coeffs, n)
+    secshares = [x.to_bytes() for x in secshares_]
+
+    pubshares_ =
[x * G for x in secshares_] + pubshares = [PlainPk(X.to_bytes_compressed()) for X in pubshares_] + + return (thresh_pk, secshares, pubshares) + + +# Test vector from RFC draft. +# section F.5 of https://datatracker.ietf.org/doc/draft-irtf-cfrg-frost/15/ +class Tests(unittest.TestCase): + def setUp(self) -> None: + self.n = 3 + self.t = 2 + self.poly = [ + Scalar(0xFBF85EADAE3058EA14F19148BB72B45E4399C0B16028ACAF0395C9B03C823579), + Scalar(0x0D004150D27C3BF2A42F312683D35FAC7394B1E9E318249C1BFE7F0795A83114), + ] + # id[i] = i + 1, where i is the index in this list + self.secshares = [ + Scalar(0x08F89FFE80AC94DCB920C26F3F46140BFC7F95B493F8310F5FC1EA2B01F4254C), + Scalar(0x04F0FEAC2EDCEDC6CE1253B7FAB8C86B856A797F44D83D82A385554E6E401984), + Scalar(0x00E95D59DD0D46B0E303E500B62B7CCB0E555D49F5B849F5E748C071DA8C0DBC), + ] + self.secret = 0x0D004150D27C3BF2A42F312683D35FAC7394B1E9E318249C1BFE7F0795A83114 + + def test_polynomial_evaluate(self) -> None: + coeffs = self.poly.copy() + expected_secret = self.secret + + self.assertEqual(int(polynomial_evaluate(coeffs, Scalar(0))), expected_secret) + + def test_secret_share_combine(self) -> None: + secshares = self.secshares.copy() + expected_secret = self.secret + + # ids 0 and 1 + self.assertEqual( + secret_share_combine([secshares[0], secshares[1]], [0, 1]), expected_secret + ) + # ids 1 and 2 + self.assertEqual( + secret_share_combine([secshares[1], secshares[2]], [1, 2]), expected_secret + ) + # ids 0 and 2 + self.assertEqual( + secret_share_combine([secshares[0], secshares[2]], [0, 2]), expected_secret + ) + # all ids + self.assertEqual(secret_share_combine(secshares, [0, 1, 2]), expected_secret) + + def test_trusted_dealer_keygen(self) -> None: + thresh_sk_ = secrets.token_bytes(32) + n = 5 + t = 3 + thresh_pk_, secshares_, pubshares_ = trusted_dealer_keygen(thresh_sk_, n, t) + + thresh_sk = Scalar.from_bytes_nonzero_checked(thresh_sk_) + thresh_pk = GE.from_bytes_compressed(thresh_pk_) + secshares = 
[Scalar.from_bytes_nonzero_checked(s) for s in secshares_] + pubshares = [GE.from_bytes_compressed(p) for p in pubshares_] + + self.assertEqual(thresh_pk, thresh_sk * G) + + self.assertEqual(secret_share_combine(secshares, list(range(n))), thresh_sk) + self.assertEqual(len(secshares), n) + self.assertEqual(len(pubshares), n) + for i in range(len(pubshares)): + with self.subTest(i=i): + self.assertEqual(pubshares[i], secshares[i] * G) + + +if __name__ == "__main__": + unittest.main() diff --git a/bip-frost-signing/python/vectors/det_sign_vectors.json b/bip-frost-signing/python/vectors/det_sign_vectors.json new file mode 100644 index 0000000000..19df817d02 --- /dev/null +++ b/bip-frost-signing/python/vectors/det_sign_vectors.json @@ -0,0 +1,400 @@ +{ + "n": 3, + "t": 2, + "threshold_pubkey": "03B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "secshare_p0": "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "identifiers": [ + 0, + 1, + 2 + ], + "pubshares": [ + "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "02EC6444271D791A1DA95300329DB2268611B9C60E193DABFDEE0AA816AE512583", + "03113F810F612567D9552F46AF9BDA21A67D52060F95BD4A723F4B60B1820D3676", + "020000000000000000000000000000000000000000000000000000000000000007" + ], + "msgs": [ + "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF", + "", + "2626262626262626262626262626262626262626262626262626262626262626262626262626" + ], + "valid_test_cases": [ + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "0353BC2314D46C813AF81317AF1BDF99816B6444E416BB8D3DC04ACB2F5388D1AC02B13BC644F720223B547DB344C94E0F5E769B674D8A9C3F5E86A5231A5B9C3297", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "expected": [ + 
"02ECF1B7C1C3675E9EC605D95EAFF24CD7A4A0F8DAD89F8A9B6050F78F0C33522103B4368C46B3A9DA59EC52BB1A4B27A0446A302A046E593723E111FECEDE04CC30",
+                "572A5EED305B5533A4EC73D40B4645C5559BF2C137D57F1EBA974DA84B13B8D7"
+            ],
+            "comment": "Signing with minimum number of participants"
+        },
+        {
+            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+            "aggothernonce": "02A8B5F064871F3BBB06D325F5B4A2B51487E0AE24F14E2A121C39B9F7CBDE7474038161382177105511164E63DD2C73138EDB271CF11B922DBA54CA4A9B365EDB55",
+            "id_indices": [
+                1,
+                0
+            ],
+            "pubshare_indices": [
+                1,
+                0
+            ],
+            "tweaks": [],
+            "is_xonly": [],
+            "msg_index": 0,
+            "signer_index": 1,
+            "expected": [
+                "03D1F0BE59DFDC6E9BA09C830FB60B95CA154904F4919D080CF085A86F383EC66E024D3002A880C72187F62A041E01C0A356284C82BA2688DF2CB58B66DF28F75295",
+                "3BEEB13926DAAF6AA6F042FB20B2C33D5887EA8127B94197D8213474DAA1EF49"
+            ],
+            "comment": "The partial signature shouldn't change if the order of the signer set changes. Note: The deterministic sign will generate the same secnonces due to unchanged parameters"
+        },
+        {
+            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+            "aggothernonce": "022260912C9999C9EC5A3B8E493F5DEB76DCA3E772345905E12C24D281612DA403022508B1A355D6E94A5BB239441B38971716031EF05DE9AF952BAB69799621AE52",
+            "id_indices": [
+                0,
+                2
+            ],
+            "pubshare_indices": [
+                0,
+                2
+            ],
+            "tweaks": [],
+            "is_xonly": [],
+            "msg_index": 0,
+            "signer_index": 0,
+            "expected": [
+                "03C71C9C0B81BE5F3501553182252266CC935E5D2A900C376CCEC8EDF78BB3F8A903585BF5D830984908F0F3B090CBE8C32B3CCE9719F941B27D7E128F475F22EE14",
+                "62B7FC46552F3E3F3F7B07596FE9E73817A1576F4D0E850015334CE340434CC3"
+            ],
+            "comment": "The partial signature changes if the membership of the signer set changes"
+        },
+        {
+            "rand": null,
+            "aggothernonce": "0220E5B590F7058B5E88593C8635411767B416EB53378AEE7E40CD1D35329AD2C302164F00762CDFFF9138C43884661CC93FF53D61B4BFD3CD2BD7265F41775A4F0F",
+            "id_indices": [
+                0,
+                1
+            ],
+            "pubshare_indices": [
+                0,
+                1
+
], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "expected": [ + "036BF6DCC0A80061CC2CCA7A8BB20448534BEFDCCB9A2E23714F5AB1112CA3098A03D68C52E4D90D3951B19D3BC3BB7411066F4AAE6F4B91BEC1B811C1CDD38E1D9F", + "4F464D1B1F3EF7CABFBC446BBB1D226F82AC56B7DC1DAD4EF7A641E0A8F378FD" + ], + "comment": "Signing without auxiliary randomness" + }, + { + "rand": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF", + "aggothernonce": "0258091883DA3CCA616F8BEC80AAE2E397E3572DFFC3C261EDC695E5FCA0BE00180251EC60E426128FA2370F13A6235040120CCDB0CA97C36BC63693C44FC5B73E44", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "expected": [ + "03C2493BB6AB51760DBFBF1A08FCB6F82472552D4343637296DF6392D2F203821903B75F2339C21FB67AB0AA5351A15EB0A1CC5F7BCD3417CF06B6870AABAACAE9CF", + "77CE17195D39B6CBEF533E5EA416F173C294450AD8283153C67F6AAD87BFF0EB" + ], + "comment": "Signing with max auxiliary randomness" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "03B7F17DD6A2495898D19BDEA05C46F45B2C14BCA17F61C3DDC1D3C087CD748AB002CF7D031F5075C164D6A9D713F03B56422FD3472BDC8E0E6BB3ED6B6ED9C529AE", + "id_indices": [ + 0, + 1, + 2 + ], + "pubshare_indices": [ + 0, + 1, + 2 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "expected": [ + "0244C0BAA16B62D53047051B95111D89060FA23052C99CD4521B1222D98B267C2002B04FEDE26A30C4648657FC32B334FFA4FB2162133A52E4BB5373FC3B6B6FFA93", + "41F558B7335F18ED09F1B0F04324AC84FDD97DE9E63B5090542B9566BBBAD78E" + ], + "comment": "Signing with maximum number of participants" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "02E67053C8D9A6754D8243EF13B722B4909F3037FEAB23600D019E1C307D0BA2E20381207D01C5D6F4E2E654D2CBD23C9B6E8C38407FECAC41571E0EA2B307634004", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": 
[ + 0, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 1, + "signer_index": 0, + "expected": [ + "030B18187D6DA62CE6B6E48ACD472A4CD9285BE7671F11B80D8F838EFE55A2C9660218FC5EED7224B5EA4284ED913D44A634817608721EC1F75B615375B88B1F3293", + "DFDBC51E48278E34AFC436D9EF1944657AB49D349945841442380C54C36F9FA7" + ], + "comment": "Empty message" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "03C71201804A346CCAF2733EB9701302FA50F4C99B053D23C20363ECE2E05DEE97027D4DBE90EC16ACA95DBC921F4BD3FBDBCF1D9F3D01627F6077BE6BD7DAEDB627", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 2, + "signer_index": 0, + "expected": [ + "028BB5C9114C5991CF2DAE6CE6D6AA1274783BDFB30ED89AF0D64B8E4061015087021E5712184A72688ED52D54D02FCB3E28A695C2DAEC6BE9FE9AD894D958AF530A", + "853BF60CA2EB5D15B54D3B6986B763A9E84270CD68F0F8B9167C4AD801C0B86A" + ], + "comment": "Message longer than 32 bytes (38-byte msg)" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "0353BC2314D46C813AF81317AF1BDF99816B6444E416BB8D3DC04ACB2F5388D1AC02B13BC644F720223B547DB344C94E0F5E769B674D8A9C3F5E86A5231A5B9C3297", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweaks": [ + "E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB" + ], + "is_xonly": [ + true + ], + "msg_index": 0, + "signer_index": 0, + "expected": [ + "03161BD943B39F07744FD2BE702132A051D194DD5F7B34CD7FC925FA70F35F56D8022F4597F6E0A96BB1DCA84EF68CC91AEC0EAB7A51EF0E9EE75079B36819FA7204", + "496B94BA355C529E60F728DB22F63C9AF64B8E87252AF6A22F7EC99AC788AAC9" + ], + "comment": "Signing with tweaks" + } + ], + "error_test_cases": [ + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": 
"02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", + "id_indices": [ + 2, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": null, + "signer_id": 0, + "error": { + "type": "ValueError", + "message": "The provided key material is incorrect." + }, + "comment": "The signer's id is not in the participant identifier list" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "02C1D8D5D95E15E4B46B49AF7A309520B0D07E5386995B99A572440C658FE443DF028A89044AE2FFC00131089E7B1EB15FE8DF52282F44D5EE2FA25F0DF437082407", + "id_indices": [ + 0, + 1, + 1 + ], + "pubshare_indices": [ + 0, + 1, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "ValueError", + "message": "The participant identifier list contains duplicate elements." + }, + "comment": "The participant identifier list contains duplicate elements" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "0353BC2314D46C813AF81317AF1BDF99816B6444E416BB8D3DC04ACB2F5388D1AC02B13BC644F720223B547DB344C94E0F5E769B674D8A9C3F5E86A5231A5B9C3297", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 2, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "ValueError", + "message": "The provided key material is incorrect." + }, + "comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares." 
+        },
+        {
+            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+            "id_indices": [
+                0,
+                1,
+                2
+            ],
+            "pubshare_indices": [
+                0,
+                1
+            ],
+            "tweaks": [],
+            "is_xonly": [],
+            "msg_index": 0,
+            "signer_index": 0,
+            "error": {
+                "type": "ValueError",
+                "message": "The pubshares and ids arrays must have the same length."
+            },
+            "comment": "The participant identifier count exceeds the participant public share count"
+        },
+        {
+            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+            "aggothernonce": "0353BC2314D46C813AF81317AF1BDF99816B6444E416BB8D3DC04ACB2F5388D1AC02B13BC644F720223B547DB344C94E0F5E769B674D8A9C3F5E86A5231A5B9C3297",
+            "id_indices": [
+                0,
+                1
+            ],
+            "pubshare_indices": [
+                0,
+                3
+            ],
+            "tweaks": [],
+            "is_xonly": [],
+            "msg_index": 0,
+            "signer_index": 0,
+            "error": {
+                "type": "InvalidContributionError",
+                "id": 1,
+                "contrib": "pubshare"
+            },
+            "comment": "Signer 1 provided an invalid participant public share"
+        },
+        {
+            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+            "aggothernonce": "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9",
+            "id_indices": [
+                0,
+                1
+            ],
+            "pubshare_indices": [
+                0,
+                1
+            ],
+            "tweaks": [],
+            "is_xonly": [],
+            "msg_index": 0,
+            "signer_index": 0,
+            "error": {
+                "type": "InvalidContributionError",
+                "id": null,
+                "contrib": "aggothernonce"
+            },
+            "comment": "aggothernonce is invalid due to the wrong tag, 0x04, in the first half"
+        },
+        {
+            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+            "aggothernonce": "0000000000000000000000000000000000000000000000000000000000000000000287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480",
+            "id_indices": [
+                0,
+                1
+            ],
+            "pubshare_indices":
[ + 0, + 1 + ], + "tweaks": [], + "is_xonly": [], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "InvalidContributionError", + "id": null, + "contrib": "aggothernonce" + }, + "comment": "aggothernonce is invalid because first half corresponds to point at infinity" + }, + { + "rand": "0000000000000000000000000000000000000000000000000000000000000000", + "aggothernonce": "0353BC2314D46C813AF81317AF1BDF99816B6444E416BB8D3DC04ACB2F5388D1AC02B13BC644F720223B547DB344C94E0F5E769B674D8A9C3F5E86A5231A5B9C3297", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweaks": [ + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141" + ], + "is_xonly": [ + false + ], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "ValueError", + "message": "The tweak must be less than n." + }, + "comment": "Tweak is invalid because it exceeds group size" + } + ] +} \ No newline at end of file diff --git a/bip-frost-signing/python/vectors/nonce_agg_vectors.json b/bip-frost-signing/python/vectors/nonce_agg_vectors.json new file mode 100644 index 0000000000..02064e8351 --- /dev/null +++ b/bip-frost-signing/python/vectors/nonce_agg_vectors.json @@ -0,0 +1,86 @@ +{ + "pubnonces": [ + "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E66603BA47FBC1834437B3212E89A84D8425E7BF12E0245D98262268EBDCB385D50641", + "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833", + "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E6660279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798", + "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60379BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798", + "04FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833", + 
"03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B831",
+        "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A602FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30"
+    ],
+    "valid_test_cases": [
+        {
+            "pubnonce_indices": [
+                0,
+                1
+            ],
+            "participant_identifiers": [
+                0,
+                1
+            ],
+            "expected_aggnonce": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B024725377345BDE0E9C33AF3C43C0A29A9249F2F2956FA8CFEB55C8573D0262DC8"
+        },
+        {
+            "pubnonce_indices": [
+                2,
+                3
+            ],
+            "participant_identifiers": [
+                0,
+                1
+            ],
+            "expected_aggnonce": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B000000000000000000000000000000000000000000000000000000000000000000",
+            "comment": "Sum of the second points encoded in the nonces is the point at infinity, which is serialized as 33 zero bytes"
+        }
+    ],
+    "error_test_cases": [
+        {
+            "pubnonce_indices": [
+                0,
+                4
+            ],
+            "participant_identifiers": [
+                0,
+                1
+            ],
+            "error": {
+                "type": "InvalidContributionError",
+                "id": 1,
+                "contrib": "pubnonce"
+            },
+            "comment": "Public nonce from signer 1 is invalid due to the wrong tag, 0x04, in the first half"
+        },
+        {
+            "pubnonce_indices": [
+                5,
+                1
+            ],
+            "participant_identifiers": [
+                0,
+                1
+            ],
+            "error": {
+                "type": "InvalidContributionError",
+                "id": 0,
+                "contrib": "pubnonce"
+            },
+            "comment": "Public nonce from signer 0 is invalid because the second half does not correspond to an X coordinate"
+        },
+        {
+            "pubnonce_indices": [
+                6,
+                1
+            ],
+            "participant_identifiers": [
+                0,
+                1
+            ],
+            "error": {
+                "type": "InvalidContributionError",
+                "id": 0,
+                "contrib": "pubnonce"
+            },
+            "comment": "Public nonce from signer 0 is invalid because the second half exceeds the field size"
+        }
+    ]
+}
\ No newline at end of file
diff --git a/bip-frost-signing/python/vectors/nonce_gen_vectors.json b/bip-frost-signing/python/vectors/nonce_gen_vectors.json
new file mode 100644
index
0000000000..066b34295b --- /dev/null +++ b/bip-frost-signing/python/vectors/nonce_gen_vectors.json @@ -0,0 +1,48 @@ +{ + "test_cases": [ + { + "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F", + "secshare": "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "pubshare": "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "threshold_pubkey": "B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "msg": "0101010101010101010101010101010101010101010101010101010101010101", + "extra_in": "0808080808080808080808080808080808080808080808080808080808080808", + "expected_secnonce": "0CE17C117FFA8F4BC63E5A01250216F3AE329A212DDAB8ACC146E7EBA5580E7AC176FB8455FEA8C93C29E2A1F572E2998CC316C8685534EFDC9656193D7B6E1B", + "expected_pubnonce": "023F8BD569195D02FA2F70CD9B0BA611E32E0CE984A1BBB0E739F4B4AFBB08CCE902D692CE44EBEE93F9EC8E1503F7DE0D4A926178321A5DAF9AB41E1EA89E8F88E8", + "comment": "" + }, + { + "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F", + "secshare": "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "pubshare": "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "threshold_pubkey": "B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "msg": "", + "extra_in": "0808080808080808080808080808080808080808080808080808080808080808", + "expected_secnonce": "2889110F994409F9F7AB5EFEDAC0F0D20C0083A3BD174F80E1C67B86127A4DE46600C968FFA50DE5B5D9EDE17081930C018E9D937A16C673889587817D4B796F", + "expected_pubnonce": "03B98FAFF2B60C56DC584AF1E7EEBDE6B300B514557EA627208608BA69E386C94E021A423749CA1C3B06E063C7F9DA9A894C9CC2D8B7C927F6605CEC7ED638B127F2", + "comment": "Empty Message" + }, + { + "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F", + "secshare": "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "pubshare": 
"022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "threshold_pubkey": "B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "msg": "2626262626262626262626262626262626262626262626262626262626262626262626262626", + "extra_in": "0808080808080808080808080808080808080808080808080808080808080808", + "expected_secnonce": "A9841123FC252ED4AA0239B0138B973B18300B6F527F29CF450EAE53BF3E4A96A7B2E49AB947C13B79103C29C12B2B991DBDAE5A04485AE2F1F56FCD3B6AA35E", + "expected_pubnonce": "02EE71AAF852F44EEBD7FAB088F37A8D5904D421F1DB0428E64E58FE67E1CD76FA021DDE4E68AAC676E36CBD98C0F86822A77EAE67B904A76BD41864898985F4FB1C", + "comment": "38-byte message" + }, + { + "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F", + "secshare": null, + "pubshare": null, + "threshold_pubkey": null, + "msg": null, + "extra_in": null, + "expected_secnonce": "E8E239B64F9A4D2B03508C029EEFC8156A3AD899FD58B15759C93C7DA745C3550FABE3F7CDD361407B97C1353056310D1610D478633C5DDE04DEC4917591D2E5", + "expected_pubnonce": "0399059E50AF7B23F89E1ED7B17A7B24F2D746C663057F6C3B696A416C99C7A1070383C53B9CF236EADF8BDFEB1C3E9A188A1A84190687CD67916DF9BC60CD2D80EC", + "comment": "Every optional parameter is absent" + } + ] +} \ No newline at end of file diff --git a/bip-frost-signing/python/vectors/sig_agg_vectors.json b/bip-frost-signing/python/vectors/sig_agg_vectors.json new file mode 100644 index 0000000000..4ce35ddf85 --- /dev/null +++ b/bip-frost-signing/python/vectors/sig_agg_vectors.json @@ -0,0 +1,186 @@ +{ + "n": 3, + "t": 2, + "threshold_pubkey": "03B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "identifiers": [ + 0, + 1, + 2 + ], + "pubshares": [ + "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "02EC6444271D791A1DA95300329DB2268611B9C60E193DABFDEE0AA816AE512583", + "03113F810F612567D9552F46AF9BDA21A67D52060F95BD4A723F4B60B1820D3676" + ], + "pubnonces": [ + 
"0330935948101543C50AF2FA7A7A4F7073CEB73290CA141497EF06E0269363162D0358785EB5CD7C1626CAB55C59B484E1B3147FA4EB919224ECB04BAB1271022A5C", + "0244D225137BC9390069C9D4D230B6D0942A1A3D72678B638B81F3416B6FEA719C02B1C7E637FD51FE2BC2C91CB6ACA0EA6A8BB30A33A0589D369687EAA33BFC5FA8", + "0332DAA54E451217D6F14747B72634D1E9E21B247C8E92397ABFEE296BD714772403FE1674C2B2B8076D641CEC4B2E6DF054C3D60AA77352A55233B40AC12046C312" + ], + "tweaks": [ + "B511DA492182A91B0FFB9A98020D55F260AE86D7ECBD0399C7383D59A5F2AF7C", + "A815FE049EE3C5AAB66310477FBC8BCCCAC2F3395F59F921C364ACD78A2F48DC", + "75448A87274B056468B977BE06EB1E9F657577B7320B0A3376EA51FD420D18A8" + ], + "msg": "599C67EA410D005B9DA90817CF03ED3B1C868E4DA4EDF00A5880B0082C237869", + "valid_test_cases": [ + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "aggnonce": "022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910", + "tweak_indices": [], + "is_xonly": [], + "psigs": [ + "09CA21FA4AE22BBB49EE5ABC091519C8695FB74FC3D39C1437019EF6FE6C7AB4", + "D0750AE90E8475A6E984AE9159E0C4510851672D6D5C26260F643448FF0AC3EB" + ], + "expected": "90495647B268390A2ABA2518AEEEF699620C783EF21DF2800BB3C38CAE6755C7DA3F2CE35966A1623373094D62F5DE1971B11E7D312FC23A4665D33FFD773E9F", + "comment": "Signing with minimum number of participants" + }, + { + "id_indices": [ + 1, + 0 + ], + "pubshare_indices": [ + 1, + 0 + ], + "pubnonce_indices": [ + 1, + 0 + ], + "aggnonce": "022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910", + "tweak_indices": [], + "is_xonly": [], + "psigs": [ + "D0750AE90E8475A6E984AE9159E0C4510851672D6D5C26260F643448FF0AC3EB", + "09CA21FA4AE22BBB49EE5ABC091519C8695FB74FC3D39C1437019EF6FE6C7AB4" + ], + "expected": 
"90495647B268390A2ABA2518AEEEF699620C783EF21DF2800BB3C38CAE6755C7DA3F2CE35966A1623373094D62F5DE1971B11E7D312FC23A4665D33FFD773E9F",
+            "comment": "The order of the signer set shouldn't affect the aggregate signature. The expected value must match the previous test vector."
+        },
+        {
+            "id_indices": [
+                0,
+                1
+            ],
+            "pubshare_indices": [
+                0,
+                1
+            ],
+            "pubnonce_indices": [
+                0,
+                1
+            ],
+            "aggnonce": "022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910",
+            "tweak_indices": [
+                0,
+                1,
+                2
+            ],
+            "is_xonly": [
+                true,
+                false,
+                false
+            ],
+            "psigs": [
+                "19689B0B3F32C7696FF2C90CF1E6DC8311F96EC7D6307EF56085C78C3FF6C828",
+                "B7290A65BE76B17C8756D77EF0395D5F219CDF7EA8D63886132C9011938F241C"
+            ],
+            "expected": "7430808B048AD9EAA6240B047B790CF24DB878DC3AC31C1B528289ED3AA70C067CA7AA1E418135268A336415D4B634E1F845A981F21B69FD3ADDF7D81D41FAB8",
+            "comment": "Signing with tweaked threshold public key"
+        },
+        {
+            "id_indices": [
+                0,
+                1,
+                2
+            ],
+            "pubshare_indices": [
+                0,
+                1,
+                2
+            ],
+            "pubnonce_indices": [
+                0,
+                1,
+                2
+            ],
+            "aggnonce": "0282E58B733AB1B74D1C54B960D668E4298C3EF9F406D44249FB30C403568A1B14039B38604D0FD33E07C3EB81BBDAE39A38A82A1D9E325112A24F0F5480582C9CEE",
+            "tweak_indices": [],
+            "is_xonly": [],
+            "psigs": [
+                "13036985653E36780C30AFB253D9E506EA2FD5E30439C507D337DA4062898066",
+                "C665E007CC6D1294105580FC46764C8F49D1A44DE9D574F5C3DF72E6636B383E",
+                "F5352AB8BD2C34F466ACF16046A222DAFB45FBF6327AC4E8EE6C10C8FED9EF33"
+            ],
+            "expected": "553247A24AF9A15C4D495968C0467FE6651F3584FDD671CBAE05BBC850577DD8CE9E7445EED77E008333220EE0F254727498994071415EAAC5B0FF62F4986696",
+            "comment": "Signing with max number of participants and tweaked threshold public key"
+        }
+    ],
+    "error_test_cases": [
+        {
+            "id_indices": [
+                0,
+                1
+            ],
+            "pubshare_indices": [
+                0,
+                1
+            ],
+            "pubnonce_indices": [
+                0,
+                1
+            ],
+            "aggnonce":
"022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910", + "tweak_indices": [], + "is_xonly": [], + "psigs": [ + "09CA21FA4AE22BBB49EE5ABC091519C8695FB74FC3D39C1437019EF6FE6C7AB4", + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141" + ], + "error": { + "type": "InvalidContributionError", + "id": 1, + "contrib": "psig" + }, + "comment": "Partial signature is invalid because it exceeds group size" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "aggnonce": "022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910", + "tweak_indices": [], + "is_xonly": [], + "psigs": [ + "09CA21FA4AE22BBB49EE5ABC091519C8695FB74FC3D39C1437019EF6FE6C7AB4" + ], + "error": { + "type": "ValueError", + "message": "The psigs and ids arrays must have the same length." + }, + "comment": "Partial signature count doesn't match the signer set count" + } + ] +} \ No newline at end of file diff --git a/bip-frost-signing/python/vectors/sign_verify_vectors.json b/bip-frost-signing/python/vectors/sign_verify_vectors.json new file mode 100644 index 0000000000..3fca0693fd --- /dev/null +++ b/bip-frost-signing/python/vectors/sign_verify_vectors.json @@ -0,0 +1,509 @@ +{ + "n": 3, + "t": 2, + "threshold_pubkey": "03B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "secshare_p0": "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "identifiers": [ + 0, + 1, + 2 + ], + "pubshares": [ + "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "02EC6444271D791A1DA95300329DB2268611B9C60E193DABFDEE0AA816AE512583", + "03113F810F612567D9552F46AF9BDA21A67D52060F95BD4A723F4B60B1820D3676", + "020000000000000000000000000000000000000000000000000000000000000007" + ], + "secnonces_p0": [ + 
"DB26CEB14C1CF111274574860A4667E3305B9C8D47E48861562445CF2E7D2277D17751A6F6972FD753CF2B2784CF5193ADBEA4DA066526D0A9984E9C1C07179F", + "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" + ], + "pubnonces": [ + "0330935948101543C50AF2FA7A7A4F7073CEB73290CA141497EF06E0269363162D0358785EB5CD7C1626CAB55C59B484E1B3147FA4EB919224ECB04BAB1271022A5C", + "0244D225137BC9390069C9D4D230B6D0942A1A3D72678B638B81F3416B6FEA719C02B1C7E637FD51FE2BC2C91CB6ACA0EA6A8BB30A33A0589D369687EAA33BFC5FA8", + "0332DAA54E451217D6F14747B72634D1E9E21B247C8E92397ABFEE296BD714772403FE1674C2B2B8076D641CEC4B2E6DF054C3D60AA77352A55233B40AC12046C312", + "0200000000000000000000000000000000000000000000000000000000000000090287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480", + "032AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3503178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910" + ], + "aggnonces": [ + "022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910", + "020C800916841C67A0261D280120246C2DB4DB930FD6963633DC9B59F026B0EA4B03E5178080C6B5F53C3EE78297D518969CF21DDD7FDAF7C861F7BDD2C48532A6CA", + "0282E58B733AB1B74D1C54B960D668E4298C3EF9F406D44249FB30C403568A1B14039B38604D0FD33E07C3EB81BBDAE39A38A82A1D9E325112A24F0F5480582C9CEE", + "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", + "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9", + "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61020000000000000000000000000000000000000000000000000000000000000009", + "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD6102FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30" + ], + "msgs": [ + 
"F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF", + "", + "2626262626262626262626262626262626262626262626262626262626262626262626262626" + ], + "valid_test_cases": [ + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 0, + "expected": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "comment": "Signing with minimum number of participants" + }, + { + "id_indices": [ + 1, + 0 + ], + "pubshare_indices": [ + 1, + 0 + ], + "pubnonce_indices": [ + 1, + 0 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 1, + "expected": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "comment": "Partial signature doesn't change if the order of the signer set changes (without changing secnonces)" + }, + { + "id_indices": [ + 0, + 2 + ], + "pubshare_indices": [ + 0, + 2 + ], + "pubnonce_indices": [ + 0, + 2 + ], + "aggnonce_index": 1, + "msg_index": 0, + "signer_index": 0, + "expected": "39C84732FA3A9EBAF568AB269A18632E8CD62EEE3FE39F0799071FEFB71CAFC2", + "comment": "Partial signature changes if the members of the signer set change" + }, + { + "id_indices": [ + 0, + 1, + 2 + ], + "pubshare_indices": [ + 0, + 1, + 2 + ], + "pubnonce_indices": [ + 0, + 1, + 2 + ], + "aggnonce_index": 2, + "msg_index": 0, + "signer_index": 0, + "expected": "304337D1F0BEF7F3FAFB8F5D165F77D6F511CEA3E0375F9B7EF5579A2314004E", + "comment": "Signing with max number of participants" + }, + { + "id_indices": [ + 0, + 1, + 2 + ], + "pubshare_indices": [ + 0, + 1, + 2 + ], + "pubnonce_indices": [ + 0, + 1, + 4 + ], + "aggnonce_index": 3, + "msg_index": 0, + "signer_index": 0, + "expected": "F6A823D47F626D7106D65F7F83AAB8B4C4E2B7DC1CC036A66C747786659FED8D", + "comment": "Both halves of aggregate nonce correspond to point at infinity" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": 
[ + 0, + 1 + ], + "aggnonce_index": 0, + "msg_index": 1, + "signer_index": 0, + "expected": "679F7A5282BC979090AA7167499C27F64BDB562A791AE60DEBAF40F01E7CED40", + "comment": "Empty message" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "aggnonce_index": 0, + "msg_index": 2, + "signer_index": 0, + "expected": "FBE3883C9C8D3CA5010A649689CBB54BC429A58CBD4CD2B3347629558536EBD6", + "comment": "Message longer than 32 bytes (38-byte msg)" + } + ], + "sign_error_test_cases": [ + { + "id_indices": [ + 2, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": null, + "signer_id": 0, + "secnonce_index": 0, + "error": { + "type": "ValueError", + "message": "The provided key material is incorrect." + }, + "comment": "The signer's id is not in the participant identifier list" + }, + { + "id_indices": [ + 0, + 1, + 1 + ], + "pubshare_indices": [ + 0, + 1, + 1 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "ValueError", + "message": "The participant identifier list contains duplicate elements." + }, + "comment": "The participant identifier list contains duplicate elements" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 2, + 1 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "ValueError", + "message": "The provided key material is incorrect." + }, + "comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares." 
+ }, + { + "id_indices": [ + 0, + 1, + 2 + ], + "pubshare_indices": [ + 0, + 1 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "ValueError", + "message": "The pubshares and ids arrays must have the same length." + }, + "comment": "The participant identifiers count exceeds the participant public shares count" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 3 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "InvalidContributionError", + "id": 1, + "contrib": "pubshare" + }, + "comment": "Signer 1 provided an invalid participant public share" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "aggnonce_index": 4, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "InvalidContributionError", + "id": null, + "contrib": "aggnonce" + }, + "comment": "Aggregate nonce is invalid due to the wrong tag, 0x04, in the first half" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "aggnonce_index": 5, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "InvalidContributionError", + "id": null, + "contrib": "aggnonce" + }, + "comment": "Aggregate nonce is invalid because the second half does not correspond to an X coordinate" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "aggnonce_index": 6, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 0, + "error": { + "type": "InvalidContributionError", + "id": null, + "contrib": "aggnonce" + }, + "comment": "Aggregate nonce is invalid because the second half exceeds the field size" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "aggnonce_index": 0, + "msg_index": 0, + "signer_index": 0, + "secnonce_index": 1, + "error": { + "type": "ValueError", + "message": "first secnonce value is out of range." 
+ }, + "comment": "Secnonce is invalid, which may indicate nonce reuse" + } + ], + "verify_fail_test_cases": [ + { + "psig": "86210398630BE64583B750706AD94A29AA0438D55443C16DE1C18FEECA25EE0D", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "msg_index": 0, + "signer_index": 0, + "comment": "Wrong signature (which is equal to the negation of the valid signature)" + }, + { + "psig": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "msg_index": 0, + "signer_index": 1, + "comment": "Wrong signer index" + }, + { + "psig": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 2, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "msg_index": 0, + "signer_index": 0, + "comment": "The signer's pubshare is not in the list of pubshares" + }, + { + "psig": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "msg_index": 0, + "signer_index": 0, + "comment": "Signature value is out of range" + } + ], + "verify_error_test_cases": [ + { + "psig": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 3, + 1 + ], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "InvalidContributionError", + "id": 0, + "contrib": "pubnonce" + }, + "comment": "Invalid pubnonce" + }, + { + "psig": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 3, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "InvalidContributionError", + "id": 0, + "contrib": "pubshare" + }, + 
"comment": "Invalid pubshare" + }, + { + "psig": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1, + 2 + ], + "msg_index": 0, + "signer_index": 0, + "error": { + "type": "ValueError", + "message": "The pubnonces and ids arrays must have the same length." + }, + "comment": "The pubnonces count is greater than the ids and pubshares count" + } + ] +} \ No newline at end of file diff --git a/bip-frost-signing/python/vectors/tweak_vectors.json b/bip-frost-signing/python/vectors/tweak_vectors.json new file mode 100644 index 0000000000..87e6d89ad9 --- /dev/null +++ b/bip-frost-signing/python/vectors/tweak_vectors.json @@ -0,0 +1,277 @@ +{ + "n": 3, + "t": 2, + "threshold_pubkey": "03B02645D79ABFC494338139410F9D7F0A72BE86C952D6BDE1A66447B8A8D69237", + "secshare_p0": "CCD2EF4559DB05635091D80189AB3544D6668EFC0500A8D5FF51A1F4D32CC1F1", + "identifiers": [ + 0, + 1, + 2 + ], + "pubshares": [ + "022B02109FBCFB4DA3F53C7393B22E72A2A51C4AFBF0C01AAF44F73843CFB4B74B", + "02EC6444271D791A1DA95300329DB2268611B9C60E193DABFDEE0AA816AE512583", + "03113F810F612567D9552F46AF9BDA21A67D52060F95BD4A723F4B60B1820D3676", + "020000000000000000000000000000000000000000000000000000000000000007" + ], + "secnonce_p0": "DB26CEB14C1CF111274574860A4667E3305B9C8D47E48861562445CF2E7D2277D17751A6F6972FD753CF2B2784CF5193ADBEA4DA066526D0A9984E9C1C07179F", + "pubnonces": [ + "0330935948101543C50AF2FA7A7A4F7073CEB73290CA141497EF06E0269363162D0358785EB5CD7C1626CAB55C59B484E1B3147FA4EB919224ECB04BAB1271022A5C", + "0244D225137BC9390069C9D4D230B6D0942A1A3D72678B638B81F3416B6FEA719C02B1C7E637FD51FE2BC2C91CB6ACA0EA6A8BB30A33A0589D369687EAA33BFC5FA8", + "0332DAA54E451217D6F14747B72634D1E9E21B247C8E92397ABFEE296BD714772403FE1674C2B2B8076D641CEC4B2E6DF054C3D60AA77352A55233B40AC12046C312" + ], + "aggnonces": [ + 
"022AAC6A4960DE35FC36D8E2DC06255C5CB7FD28250DFD84EBF1AC943B1EA22C3502178AD06BB0490BAD857446FEF55C15FD9FF4329F4EE2F23CA8B7CA0598014910", + "0282E58B733AB1B74D1C54B960D668E4298C3EF9F406D44249FB30C403568A1B14039B38604D0FD33E07C3EB81BBDAE39A38A82A1D9E325112A24F0F5480582C9CEE", + "0282E58B733AB1B74D1C54B960D668E4298C3EF9F406D44249FB30C403568A1B14039B38604D0FD33E07C3EB81BBDAE39A38A82A1D9E325112A24F0F5480582C9CEE" + ], + "tweaks": [ + "E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB", + "AE2EA797CC0FE72AC5B97B97F3C6957D7E4199A167A58EB08BCAFFDA70AC0455", + "F52ECBC565B3D8BEA2DFD5B75A4F457E54369809322E4120831626F290FA87E0", + "1969AD73CC177FA0B4FCED6DF1F7BF9907E665FDE9BA196A74FED0A3CF5AEF9D", + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141" + ], + "msg": "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF", + "valid_test_cases": [ + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "tweak_indices": [], + "aggnonce_index": 0, + "is_xonly": [], + "signer_index": 0, + "expected": "79DEFC679CF419BA7C48AF8F9526B5D510AAA4115B04DECDDE10CE9E06105334", + "comment": "No tweak" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "tweak_indices": [ + 0 + ], + "aggnonce_index": 0, + "is_xonly": [ + true + ], + "signer_index": 0, + "expected": "EED33B9C85BF813CDBE4404DE8E5ACBD8FBF34D53768B45EB21CB8B653F67EA9", + "comment": "A single x-only tweak" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "tweak_indices": [ + 0 + ], + "aggnonce_index": 0, + "is_xonly": [ + false + ], + "signer_index": 0, + "expected": "65AA345505968E08C2040BC4F726AA5A382A2E578E8E580719E44B2768975B97", + "comment": "A single plain tweak" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "tweak_indices": 
[ + 0, + 1 + ], + "aggnonce_index": 0, + "is_xonly": [ + false, + true + ], + "signer_index": 0, + "expected": "10EB2D7E0E1BC46B42C54E739DECCA0BA8CB819D11AF5986CB47B8AF2FA00C91", + "comment": "A plain tweak followed by an x-only tweak" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "tweak_indices": [ + 0, + 1, + 2, + 3 + ], + "aggnonce_index": 0, + "is_xonly": [ + true, + false, + true, + false + ], + "signer_index": 0, + "expected": "BF9CD5C5558915E44DDBE6399EBFE824224AB23D331173BE90664ACFA7E2C6EF", + "comment": "Four tweaks: x-only, plain, x-only, plain. If an implementation prohibits applying plain tweaks after x-only tweaks, it can skip this test vector or return an error" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "pubnonce_indices": [ + 0, + 1 + ], + "tweak_indices": [ + 0, + 1, + 2, + 3 + ], + "aggnonce_index": 0, + "is_xonly": [ + false, + false, + true, + true + ], + "signer_index": 0, + "expected": "8789A8C4198D125220F693C837CA5896101C4F46D84769E38D284BDD462FDB25", + "comment": "Four tweaks: plain, plain, x-only, x-only" + }, + { + "id_indices": [ + 0, + 1, + 2 + ], + "pubshare_indices": [ + 0, + 1, + 2 + ], + "pubnonce_indices": [ + 0, + 1, + 2 + ], + "tweak_indices": [ + 0, + 1, + 2, + 3 + ], + "aggnonce_index": 1, + "is_xonly": [ + false, + false, + true, + true + ], + "signer_index": 0, + "expected": "F114B99483FCAF269A955479F60D1AE63CF74FA15330374D2DF8523930B1558A", + "comment": "Tweaking with max number of participants" + } + ], + "error_test_cases": [ + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweak_indices": [ + 4 + ], + "aggnonce_index": 0, + "is_xonly": [ + false + ], + "signer_index": 0, + "error": { + "type": "ValueError", + "message": "The tweak must be less than n." 
+ }, + "comment": "Tweak is invalid because it exceeds group size" + }, + { + "id_indices": [ + 0, + 1 + ], + "pubshare_indices": [ + 0, + 1 + ], + "tweak_indices": [ + 0, + 1, + 2, + 3 + ], + "aggnonce_index": 0, + "is_xonly": [ + true, + false + ], + "signer_index": 0, + "error": { + "type": "ValueError", + "message": "The tweaks and is_xonly arrays must have the same length." + }, + "comment": "Tweaks count doesn't match the tweak modes count" + } + ] +} \ No newline at end of file