- The main advantage Async Verifiable Secret Sharing DKG has over the other well-known DKGs like
- Pedersen DKG, Feldman's VSS and its variants is that it is fully asynchronous and thus does
+ The main advantage Async Verifiable Secret Sharing (AVSS) DKG has over the other well-known DKGs like
+ Pedersen DKG, Feldman's VSS, and its variants is that it is fully asynchronous. This means it does
not require a complaint phase, provided we allow for a small zero-knowledge proof.
This results in a simpler implementation (with constant communication rounds even during
malicious scenarios), but at the expense of message size.
- In brief, this scheme generates a random bivariate polynomial (i.e. 2D surface) and creates
- horizontal (X) and vertical (Y) slices at the appropriate indices as sharings. We then get
- sub-sharings (points) on these horizontal and vertical sharings at the appropriate indices and
- echo them to other nodes. As a node, the polynomial reconstructed from the sub-sharings
- received from other nodes should match up with the initial sharing that the node received from
- the dealer, and even if they do not, the node can always interpolate the correct sharing via
- these echoed sub-sharings. This eliminates the dealer complaint phase. We then we restrict
- ourselves to just the horizontal (X) domain such that our final sharings are still on that of
- a univariate polynomial, which is what a typical DKG does.
+ In brief, this scheme generates a random bivariate polynomial (a 2D surface) and creates horizontal (X) and
+ vertical (Y) slices at the appropriate indices as shares. From these slices, nodes derive sub-shares (evaluation points) and exchange them with other nodes.
+
+ Each node reconstructs a polynomial from the sub-shares received from other nodes and verifies that it is
+ consistent with the initial share it received from the originating node. If inconsistencies are detected, the node can interpolate the correct share using the exchanged sub-shares, which eliminates the need for a separate
+ complaint phase.
+
+ Finally, the scheme restricts itself to the horizontal (X) domain so that the resulting shares lie on a univariate polynomial, which matches the structure used by standard DKG protocols.
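To make the slicing-and-echo idea concrete, here is a minimal Python sketch. The field modulus, degree, node indices, and helper names are illustrative assumptions, not the scheme's actual parameters; it only shows why a node can rebuild a slice from echoed sub-shares.

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice, not the scheme's)
t, n = 1, 4    # degree t in each variable; nodes are indexed 1..n

random.seed(1)
# Dealer's random bivariate polynomial B(x, y); B(0, 0) is the secret.
coeffs = [[random.randrange(P) for _ in range(t + 1)] for _ in range(t + 1)]

def B(x, y):
    return sum(c * pow(x, i, P) * pow(y, j, P)
               for i, row in enumerate(coeffs)
               for j, c in enumerate(row)) % P

def lagrange_at(points, x0):
    """Interpolate the unique degree-t polynomial through `points` at x0."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Node 3's vertical slice is x -> B(x, 3). Even if the dealer's copy of that
# slice is missing or bad, node 3 can rebuild any point on it from t + 1
# echoed sub-shares B(i, 3) received from other nodes i.
echoed = [(i, B(i, 3)) for i in range(1, t + 2)]
assert lagrange_at(echoed, 5) == B(5, 3)
```

Because any `t + 1` honest echoes pin down the slice exactly, a dealer who sends an inconsistent slice is caught (or simply corrected) without a complaint round.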
@@ -70,20 +64,20 @@ varient of [Async Verifiable Secret Sharing](https://eprint.iacr.org/2002/134.pd
-### Proactive Secret Sharing (PSS)
+### Proactive Secret Sharing
-Proactive secret sharing allows participants to “refresh” shares, so that all participants receive
-new shares, but the secret remains unchanged. This allows the secret sharing to be secure against
+Proactive Secret Sharing (PSS) allows participants to “refresh” shares, so that all participants receive
+new shares, while the secret remains unchanged. This allows the secret sharing to be secure against
mobile adversaries who may be able to compromise all participants over the lifetime of the secret
-(eg. adversary hacks a random participant’s server every month).
+(for example, an adversary hacks a random participant’s server every month).
Simply copying shares across epochs is a bad idea, since a single node operator operating in two
-separate epochs would get access to two shares, and it also makes it not possible to increase or
-decrease the number of operators in each epoch. Hence, we use PSS to migrate shares across epochs.
+separate epochs would get access to two shares, and it also makes it impossible to increase or
+decrease the number of operators in each epoch. This is why we use PSS to migrate shares across epochs.
We refer the user to a [Proactive Secret Sharing Scheme](https://eprint.iacr.org/2002/134.pdf) that
supports dynamic sets of participants, which we use for share refresh. In brief, the key idea is
-that we create polynomial sharings of the existing key shares and add these polynomials in a
+that we create polynomial sharings of the existing key shares and add these polynomials in a
specific way such that the coefficient of the master polynomial is the Lagrange interpolation of the
existing key shares. Much like how DKGs are the sum of several secret sharings, where the master
secret is the sum of all of the secrets from each of the N-parallel secret sharing protocols, we can
@@ -93,8 +87,8 @@ nodes, with their "secret" as their share. The resulting shares of shares, if ad
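The refresh idea above can be sketched in a few lines of Python. The field modulus, node indices, and helper names are illustrative assumptions; the sketch only demonstrates that summing re-sharings weighted by Lagrange coefficients preserves the master secret while producing fresh shares for a new node set.

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative)
random.seed(2)

def lagrange_coeff(xs, i, x0=0):
    """Lagrange coefficient of the point at xs[i] when interpolating at x0."""
    num = den = 1
    for j, xj in enumerate(xs):
        if j != i:
            num = num * (x0 - xj) % P
            den = den * (xs[i] - xj) % P
    return num * pow(den, P - 2, P) % P

def share(secret, t, xs):
    """Shamir-share `secret` with a fresh random degree-t polynomial."""
    poly = [secret] + [random.randrange(P) for _ in range(t)]
    return {x: sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P
            for x in xs}

# Old epoch: a 2-of-3 sharing of the master secret among nodes 1, 2, 3.
secret = random.randrange(P)
old = share(secret, 1, [1, 2, 3])

# Refresh: a threshold of old nodes each re-share (lambda_i * share_i);
# the new nodes 4, 5, 6 sum the sub-shares they receive.
old_xs, new_xs = [1, 2], [4, 5, 6]
subs = {x: share(lagrange_coeff(old_xs, k) * old[x] % P, 1, new_xs)
        for k, x in enumerate(old_xs)}
new = {y: sum(subs[x][y] for x in old_xs) % P for y in new_xs}

# The summed polynomial's constant term is the Lagrange interpolation of
# the old shares, i.e. the unchanged master secret.
pair = new_xs[:2]
rec = sum(new[y] * lagrange_coeff(pair, k) for k, y in enumerate(pair)) % P
assert rec == secret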
### Epochs
-Torus nodes operate within a certain time period, called an epoch. Nodes within the same epoch are
-part of the same BFT (Byzantine Fault Tolerance) network and hold key shares that are compatible
+The underlying node network operates in chunks of time, called epochs. Nodes within the same epoch are
+part of the same BFT (Byzantine Fault Tolerant) network and hold key shares that are compatible
with each others' key shares. Nodes within different epochs do not. The main purpose of epochs is to
ensure that node operators can be removed and added, and to minimize the impact of loss of key
shares or node failures over time.
diff --git a/embedded-wallets/infrastructure/mpc-architecture.mdx b/embedded-wallets/infrastructure/mpc-architecture.mdx
index 6358be2cfc6..2c100bb99e8 100644
--- a/embedded-wallets/infrastructure/mpc-architecture.mdx
+++ b/embedded-wallets/infrastructure/mpc-architecture.mdx
@@ -1,8 +1,8 @@
---
-title: Web3Auth MPC Architecture
+title: Embedded Wallets MPC Architecture
sidebar_label: MPC Architecture
-description: 'MPC Architecture - Web3Auth Wallet Infrastructure | Embedded Wallets'
+description: 'MPC Architecture - Embedded Wallets Wallet Infrastructure | Web3Auth'
---
import ExpandingSharesFlow from '@site/static/img/embedded-wallets/infrastructure/expanding-shares-tss-flow.png'
@@ -11,43 +11,54 @@ import KeyUsageFlow from '@site/static/img/embedded-wallets/infrastructure/key-u
import TkeyMpcFlowDark from '@site/static/img/embedded-wallets/flow-diagrams/tkey-mpc-flow-dark.png'
import TkeyMpcFlowLight from '@site/static/img/embedded-wallets/flow-diagrams/tkey-mpc-flow-light.png'
-This document provides an in-depth exploration of the technical architecture of the MPC-based SDKs, this includes the MPC Core Kit SDKs.
+This document provides an in-depth exploration of the technical architecture of the Multi-Party Computation (MPC)-based SDKs, including the MPC Core Kit SDK.
-The only difference between the SSS-based SDKs and MPC SDKs are that during usage/login MPC SDKs do not reconstruct user private keys.
+
-## Overview of Cryptographic and Blockchain Support (compatibility and implementations)
+The only difference between the [Shamir Secret Sharing](./glossary.mdx#shamir-secret-sharing) (SSS)-based SDKs and MPC SDKs is that during usage/login MPC SDKs do not reconstruct user private keys.
-Web3Auth supports most popular blockchains & elliptic curves out there. In particular, out of the box the infrastucture supports all chains on:
+## Overview of cryptographic and blockchain support (compatibility and implementations)
-- `secp256k1` | Ethereum (EVM) chains, Bitcoin, Polygon & other L2s, etc...
+Embedded Wallets supports the most popular blockchains and elliptic curves:
+
+- `secp256k1` | Ethereum (EVM) chains (including Base, Linea, Polygon, and other L2s), Bitcoin
- `ed25519` | Solana, Polkadot, NEAR
-For other elliptic curve/chain support, feel free to [ask/request](https://web3auth.io/contact-us.html) as we may already support them.
+:::tip
+
+For enquiries regarding additional elliptic curve/chain support, [ask/request](https://web3auth.io/contact-us.html) as
+we may already support them.
+
+:::
-### Distributed Key Generation & Pro-active Secret Sharing Schemes Used
+### Distributed key generation and Proactive Secret Sharing schemes
-There are many schemes and variants for DKGs and PSSs out there, we in particular use an asynchronous variant, [Kate12](https://eprint.iacr.org/2012/377.pdf), derived from Asynchronous Verifiable Secret Sharing (AVSS), [Cachin02](https://eprint.iacr.org/2002/134.pdf).
+There are many schemes and variants for [Distributed Key Generation](./glossary.mdx#distributed-key-generation) (DKG)
+and [Proactive Secret Sharing](./glossary.mdx#proactive-secret-sharing) schemes (PSSs). Embedded Wallets uses an asynchronous variant, [Kate12](https://eprint.iacr.org/2012/377.pdf), derived from Asynchronous Verifiable Secret
+Sharing (AVSS) [Cachin02](https://eprint.iacr.org/2002/134.pdf).
-### TSS & Signature Schemes Used
+### Threshold Signature Schemes
-TSS schemes often vary in their approach to creating shared cryptographic material in a distributed manner. We support the popular EDDSA, and [DKLS19](https://eprint.iacr.org/2019/523.pdf). In result supporting the `ecdsa` signature standard on both elliptic curves.
+[Threshold Signature Schemes](./glossary.mdx#threshold-signature-schemes) (TSS) vary in their approach to creating shared cryptographic material in a distributed manner. Embedded Wallets supports threshold EdDSA and threshold ECDSA using the [DKLS19](https://eprint.iacr.org/2019/523.pdf) protocol, covering both the Ed25519 and secp256k1 elliptic curves.
-It's worth noting that the TSS signing is largely decoupled from Web3Auth's infrastucture to allow us to be agnostic to TSS implementation. These include other signature schemes, which arguably are much more convienent. Notably, but non-exhaustively, Web3Auth supports:
+Note that TSS signing is largely decoupled from Embedded Wallets' infrastructure, allowing the implementation to be agnostic to the underlying TSS protocol. This design enables support for multiple signature schemes, including:
- [ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) or its ElGamal variants
- EDDSA or its [Schnorr variants](https://en.wikipedia.org/wiki/Schnorr_signature)
-- BLS, Stark (coming soon)
+- BLS
+- Stark (coming soon)
-| :memo: For other signature or elliptic curve or chain support, feel free to [ask or request](https://web3auth.io/contact-us.html) if we support them |
-| ---------------------------------------------------------------------------------------------------------------------------------------------------- |
+## User key overview
-## User Key Overview
+Embedded Wallets uses MPC to manage user wallets in a distributed fashion, leveraging various factors or shares
+managed by users, including their devices, private inputs, backup locations, and cloud service providers. As long
+as a user can access 2 out of n (2/n) of these shares, they can access their key. This secure key, generated using
+DKG, is called the $TSSKey$.
-Web3Auth uses MPC to manage user wallets in a distributed fashion, leveraging various factors or shares managed by users, including their devices, private inputs, backup locations, and cloud service providers. As long as a user can access 2 out of n (2/n) of these shares, they can access their key. This distributedly secure key is called the $TSSKey$.
+One of the disadvantages of a DKG-generated key is the loss of efficient encryption/decryption capabilities. As such,
+the $TSSKey$ is supported by another cryptographic key whose main purpose is to manage metadata pertaining to the user's account, the **metadataKey**. User metadata is strictly supplementary, helping to facilitate and govern user flows. Importantly, metadata does not leak information about the shares of the $TSSKey$ being used to sign transactions.
-One of the lost functionalities of a distributedly secure key is the loss of efficient encryption/decryption capabilities. As such the **TSSKey** is supported by another cryptographic key that's main purpose is to manage metadata pertaining to the user's account, the **metadataKey**. User metadata is strictly supplementary and only helps to facilitate and govern user flows. In particular, metadata does not leak information about the shares of the **TSSKey** being used to sign transactions.
-
-### TSS Key
+### Threshold signature scheme key
@@ -55,57 +66,68 @@ One of the lost functionalities of a distributedly secure key is the loss of eff
-The user's setup starts by distributedly key generating (DKG) a 2 out of 3 (2/3) sharing, $f_0(x) = a_0 + a_1x$, with three shares: $f_0(1), f_0(z_1), f_0(z_2)$ where $z_1,z_2 \in \mathbb{Z}_q$.
+The user's setup uses DKG to instantiate a 2 out of 3 (2/3) sharing, $f_0(x) = a_0 + a_1x$, with three
+shares: $f_0(1), f_0(z_1), f_0(z_2)$ where $z_1,z_2 \in \mathbb{Z}_q$.
-1. **$f_0(z_1)$ "ShareA" is managed by Web3Auth infrastructure**: This share is kept and managed by OAuth authentication flows in a distributed security model.
-2. **$f_0(1)$ "ShareB" is stored on the user's device**: Implementation is device and system specific. For example, on mobile devices, the share could be stored in device storage secured via biometrics.
-3. **$f_0(z_2)$ "ShareC" is a backup share**: An extra share to be kept by the user, possibly kept on a seperate device, downloaded or based on user input with enough entropy (eg. password, security questions, hardware device etc.).
+1. $f_0(z_1)$ **ShareA** is managed by the Embedded Wallets infrastructure: This share is kept and managed by authentication flows (for example, OAuth login from an existing account) in a distributed security model.
+2. $f_0(1)$ **ShareB** is stored on the user's device: Implementation is device and system specific. For example, on mobile devices, the share could be stored in device storage secured via biometrics.
+3. $f_0(z_2)$ **ShareC** is a backup share: An extra share to be kept by the user, possibly kept on a separate
+device (such as a hardware device), downloaded, or based on user input with enough entropy (such as a password or
+security questions).
-### The Metadata Key
+### The metadata key
-This key's storage process mirrors that of the TSSKey, with the primary difference being that the metadataKey is always reconstructed and used for encryption/decryption tasks. It's based on the fundamental Shamir’s Secret Sharing scheme and initially generated on the user's front-end.
+This key's storage process mirrors that of the $TSSKey$, with the primary difference being that the metadataKey is
+always reconstructed and used for encryption/decryption tasks. It's based on the fundamental [SSS](./glossary.mdx#shamir-secret-sharing) scheme and initially generated on the user's frontend.
## Other components
-### Factor Keys
+### Factor keys
-Factor keys enable refreshing, setting up multiple keys, deletion, and rotation capabilities on the TSSKey. They are randomly generated across various user-controlled locations or factors, such as their phone, chrome extension, cloud, or assisting third parties. Primarily used for data encryption/decryption, these keys provide a constant secret in different locations as shares to the TSSKey and/or metadataKey may rotate. They represent a storage point with a public address that we can encrypt data blobs for.
+Factor keys enable refreshing, setting up multiple keys, deletion, and rotation capabilities on the $TSSKey$. They are randomly generated across various user-controlled locations or factors, such as their phone, Chrome extension, cloud, or assisting third parties. Primarily used for data encryption/decryption, these keys provide a constant secret in different locations as shares to the $TSSKey$ and/or metadataKey may rotate. They represent a storage point with a public address that Embedded Wallets can encrypt data blobs for.
-### User Metadata
+### User metadata
-User metadata is strictly supplementary and only helps to facilitate and govern user flows. In particular, metadata does not leak information about the shares of the private key being used to sign transactions.
+User metadata is strictly supplementary and only helps to facilitate and govern user flows. As noted above, metadata
+does not leak information about the shares of the private key being used to sign transactions.
-Metadata uses an encrypted storage layer that serves as a persistent data store for storing encrypted information about the user’s keys (eg. public key, preferences, device information, thresholds etc). This information is stored in a replicated fashion across the set of nodes that are involved in facilitating the user login.
+Metadata uses an encrypted storage layer that serves as a persistent data store for storing encrypted information
+about the user’s keys (for example, public key, preferences, device information, and thresholds). This information is stored in a replicated fashion across the set of nodes that are involved in facilitating the user login.
-During operation, when the user has threshold shares, they can read and write to metadata. Writing to metadata requires encrypting the data and signing it with the shares / private key.
+During operation, when the user achieves threshold shares, they can read and write to metadata. Writing to metadata requires encrypting the data and signing it with the shares / private key.
## Flows
-This segment goes through some of the interactions on a deeper level.
+Embedded Wallets flows manage how authentication, device storage, and backup factors work together during wallet
+creation, signing, and recovery. Three key elements are involved:
-:::note Components
+- **ShareA:** Embedded Wallets' infrastructure provides a user-specific share/factor based on some form of attestation
+from the user. This attestation could come in the form of an OAuth login from an existing account, a traditional email account login, or even biometrics. The infrastructure also serves as a persistent data store for storing encrypted metadata: the
+"metadata layer" in the following diagrams.
-- **Web3Auth Infrastructure:** Web3Auth infrasturcture provides a user-specific share/factor based on some form of attestation from the user. This attestation could come in the form of an OAuth login from an existing account, a traditional email account login, or even biometrics. It also serves as a persistent data store for storing encrypted metadata, we also call this the metadata layer in following diagrams.
+- **ShareB:** The $TSSKey$ relies on user devices to store shares. The base flow accommodates a single device, but users can use multiple devices to increase the threshold once they have an initial setup. Access to device storage on the user's device is implementation specific. For example, for native applications on mobile, they can make use of the device
+keychain.
-- **User device:** tKey is dependent on user devices to store shares. The base flow accomodates a single device, but users can use multiple devices to increase the threshold once they have an initial setup. Access to device storage on the user's device is implementation specific. For example, for native applications on mobile, they can make use of the device keychain.
+- **ShareC**, the **Backup factor/share:** This is generally _not_ used during normal operation, and is intended for
+use in key recovery / share refresh if the user loses his/her device or shares.
-- **Backup factor/share:** This is generally _not_ used during normal operation, and is intended for use in key recovery / share refresh if the user loses his/her device or shares.
+### Key handling on user login
-:::
+Key handling begins in response to a user-triggered action, such as logging in. At this stage, the system attempts to retrieve any existing encrypted metadata associated with the user.
-### Key initialization
+If metadata is found, the user is an existing user. The metadata is decrypted using the nodes’ $encKey$, and the stored information is used to validate the user and load the existing secret-sharing parameters. No new key material is generated in this path.
-A key is initialized upon a user-triggered action (eg. login to nodes). We then attempt to retrieve associated metadata for the user. If none exists, the user is a new one and we generate a corresponding SSS 2/3 polynomial with its respective key and shares. If it exists, we decrypt the metadata using the nodes $encKey$ and read the metadata to verify user information and associated secret sharing parameters.
+If no metadata is found, the user is treated as a new user, and a new key is initialized. In this case, a 2-of-3 Shamir’s Secret Sharing (SSS) polynomial is generated, producing a private key and its corresponding shares.
We select a polynomial $f(z)$ over $Z_q$ where: $$f(z) = a_1z + \sigma$$
- $f(0) = \sigma$ denotes the private key scalar to be used by the user
-- $a_1$ is a coefficient to $z$
-- $f(z_1),f(z_2)$ and $f(z_3)$ are ShareA, ShareB and ShareC respectively
+- $a_1$ is the coefficient of $z$
+- $f(z_1),f(z_2)$ and $f(z_3)$ are ShareA, ShareB, and ShareC respectively
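As a toy illustration of this 2-of-3 initialization (the field order and share indices below are illustrative assumptions; the production system works over the actual curve order):

```python
import random

q = 2**61 - 1  # prime field order (illustrative; the real curve order differs)
random.seed(3)

sigma = random.randrange(q)        # f(0) = sigma, the private key scalar
a1 = random.randrange(q)           # coefficient of z

def f(z):
    return (a1 * z + sigma) % q

z1, z2, z3 = 1, 2, 3               # illustrative indices for ShareA/B/C
shares = {z1: f(z1), z2: f(z2), z3: f(z3)}

def reconstruct(pts):
    """Lagrange-interpolate f(0) from any two of the three shares."""
    (x1, y1), (x2, y2) = pts
    l1 = x2 * pow(x2 - x1, q - 2, q) % q   # L1(0) = x2 / (x2 - x1)
    l2 = x1 * pow(x1 - x2, q - 2, q) % q   # L2(0) = x1 / (x1 - x2)
    return (y1 * l1 + y2 * l2) % q

# Any two shares recover sigma; a single share reveals nothing about it.
assert reconstruct([(z1, shares[z1]), (z3, shares[z3])]) == sigma
```

Any pair drawn from {ShareA, ShareB, ShareC} works the same way, which is what lets a user recover access after losing any single factor.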
-ShareA is stored on the user’s device, ShareB stored on Web3Auth Infrastructure, and ShareC dependent on user input or handled as a recovery share.
-
-### Key Usage, Access and Signing
+### Key usage, access, and signing
-If a user has logged in previously, he/she access their key by accessing ShareB via a session token handshake and utilzing it with ShareA on the user’s current device using to sign Threshold Signaures.
+For returning users, key access is established by retrieving ShareB via a session token handshake and combining it
+with the locally stored ShareA on the user’s device to produce threshold signatures.
-#### Threshold Signature Scheme (TSS)
+#### Threshold Signature Scheme
-The TSS signing requires information from two sections:
+The [TSS](./glossary.mdx#threshold-signature-schemes) signing requires information from two sections:
-- shared information (eg. public key, share commitments, theeshold, unique identifiers)
-- local information (eg. TSS key share).
+- shared information (such as public key, share commitments, threshold, unique identifiers)
+- local information (such as TSS key share)
-The shared information is stored on metadata and replicated, whereas the local information is kept locally on the user's device. This ensures that metadata for shared operations can be easily replicated and accessed without computationally expensive calls, while for local operations the TSS key shares never leave the local context.
+The shared information is stored as metadata and replicated, whereas the local information is kept on the user's device. This ensures that metadata for shared operations can be easily replicated and accessed without computationally expensive calls, while for local operations the TSS key shares never leave the local context.
-Constructing a threshold signature requires a session token which we can get via the session request. This then allows us to set up a threshold signature session. The threshold signature session consists of an offline signing phase and an online signing phase (GG20, GG19, Doerner19).
+Constructing a threshold signature requires a session token which we can get via the session request. This then allows us to set up a threshold signature session. The threshold signature session consists of an offline signing phase and an
+online signing phase (GG20, GG19, Doerner19).
-The offline signing phase consists of 6 rounds of interaction between the device and nodes and can be pre-computed before the transaction signing request is received. The online signing phase requires the transaction to be present and is non-interactive.
+The offline signing phase consists of 6 rounds of interaction between the device and nodes and can be precomputed before the transaction signing request is received. The online signing phase requires the transaction to be present and is non-interactive.
-This means that although the threshold signature generation takes a substantial amount of time, most of it can be precomputed via a background process, before the user even needs to sign a transaction. When the user decides to sign a message in the online phase, only one round of noninteractive communication is required, which is very fast (\<0.2 seconds).
+This means that although the threshold signature generation takes a substantial amount of time, most of it can be precomputed via a background process, before the user even needs to sign a transaction. When the user decides to sign
+a message in the online phase, only one round of noninteractive communication is required, which is very fast
+(\<0.2 seconds).
-### Expanding the Number of Shares (Adding a Device)
+### Expanding the number of shares (adding a device)
-In the case of a new device the user needs to conduct a Proactive Secret Sharing, a refresh scheme to generate a new factor in a distributed manner. This example goes through the setup on a new user’s device with an existing device in hand. This can also be conducted with a user’s backup factor, ie.e ShareC.
+In the case of adding a new device, the user needs to conduct a [PSS](./glossary.mdx#proactive-secret-sharing) to
+trigger a refresh protocol that derives an additional share in a distributed manner, without changing the
+underlying secret. The following example goes through the setup on a user's device with an existing device in hand.
+This can also be conducted with a user's backup factor, such as ShareC.
## Lifecycle
### Initialization
-When a Web3Auth Network Node is started, it tries to register its connection details on an Ethereum smart contract. Once all nodes have been registered for that epoch, they try to connect with each other to set up the BFT network, and start generating distributed keys. They also listen for incoming information from nodes in the previous epoch.
+When an Embedded Wallets Network node is started, it tries to register its connection details on an Ethereum smart contract. Nodes that successfully register for that epoch try to connect with each other to set up the BFT network, and start generating distributed keys. They also listen for incoming information from nodes in the previous epoch.
### Operation
During operation, a node runs three separate parallel processes:
-1. Mapping user IDs to keys
-2. Generating distributed key shares
+1. Mapping user IDs to keys.
+2. Generating distributed key shares.
3. Allowing users to retrieve their shares.
#### Mapping user IDs to keys
-The mapping process primarily interacts with the BFT layer, which allows nodes to share state on which keys belong to which users. When a new user requests for a key, the node submits a BFT transaction that modifies this state. Existing users who have logged in are compared against this shared state to ensure that they retrieve the correct key share.
+The mapping process primarily interacts with the BFT layer, which allows nodes to share state on which keys belong to which users. When a new user requests a key, the node submits a BFT transaction that modifies this state. Existing users who have logged in are compared against this shared state to ensure that they retrieve the correct key share.
#### Generating distributed key shares
-The distributed key generation process primarily uses libp2p for communication between nodes, and generates a buffer of shared keys, in order to reduce the average response time for key assignments for new users.
+The DKG process primarily uses libp2p for communication between nodes, and generates a buffer of shared keys in order to reduce the average response time for key assignments for new users.
-#### Allowing users to retrieve their shares.
+#### Allowing users to retrieve their shares
-The share retrieval process starts when a user wishes to retrieve their keys. They individually submit their OAuth token via a commit-reveal scheme to the nodes, and once this OAuth token is checked for uniqueness and validity, each node returns the user's \(encrypted\) key share. This does not require communication between the nodes.
+The share retrieval process starts when a user wishes to retrieve their keys. They individually submit their authentication token via a commit-reveal scheme to the nodes, and once this authentication token is checked for uniqueness and validity, each node returns the user's \(encrypted\) key share. This does not require communication between the nodes.
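The commit-reveal step can be sketched as follows. Function names and message formats here are assumptions for illustration; the real protocol broadcasts commitments to all nodes before any reveal.

```python
import hashlib
import secrets

def commit(token: str) -> tuple[str, str]:
    """Client side: publish a hash commitment before revealing the token."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + token).encode()).hexdigest()
    return digest, nonce

def verify_reveal(digest: str, nonce: str, token: str) -> bool:
    """Node side: check the revealed token against the prior commitment."""
    return hashlib.sha256((nonce + token).encode()).hexdigest() == digest

digest, nonce = commit("oauth-id-token")
assert verify_reveal(digest, nonce, "oauth-id-token")
assert not verify_reveal(digest, nonce, "a-different-token")
```

Because each node sees only the commitment before the token is revealed, a node that later observes the token cannot have submitted it earlier on the user's behalf.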
-Assignments of keys to new users only require interaction with the mapping process, assuming that there is at least one unassigned key in the buffer. As such, we are able to assign keys to accounts ahead of time, before that accounts' owner decides to login and reconstruct the key. This forms the basis for our account resolver APIs.
+Assignments of keys to new users only require interaction with the mapping process, assuming that there is at least one unassigned key in the buffer. As a result, keys can be assigned to accounts ahead of time, before the account owner logs in and reconstructs the key. This behavior forms the basis of the account resolution logic used by the SDK during authentication (handled by the internal account resolver APIs).
### Migration
-When an epoch comes to an end, the current node operators agree on the next epoch, and send information about the current mapping state and the existing keys to the next set of nodes in the next epoch. This is done via typical reliable broadcast methods for the mapping, and PSS \(proactive secret sharing\) for the key shares.
+When an epoch comes to an end, the current node operators agree on the next epoch, and send information about the current mapping state and the existing keys to the next set of nodes in the next epoch. This is done via typical reliable broadcast methods for the mapping, and [Proactive Secret Sharing](./glossary.mdx#proactive-secret-sharing) (PSS) for the key shares.
-### Trust Assumptions
+### Trust assumptions
-The Torus Network operates on two main threshold assumptions: a key generation threshold \(>¼\)and a key retrieval threshold \(>½\). Generating keys for new users requires more than ¾ of the nodes to be operating honestly, and reconstructing keys for existing users requires >½ of the nodes to be operating honestly. For more information, refer to the dual-threshold construction in [AVSS](https://eprint.iacr.org/2002/134.pdf).
+The Torus Network operates on two main threshold assumptions: a key generation threshold \(>¼\) and a key retrieval threshold \(>½\). Generating keys for new users requires more than ¾ of the nodes to be operating honestly, and reconstructing keys for existing users requires >½ of the nodes to be operating honestly. For more information, refer to the dual-threshold construction in [Async Verifiable Secret Sharing](https://eprint.iacr.org/2002/134.pdf) (AVSS).
-While most other secret sharing schemes use ⅔ honest majority with a >⅓ reconstruction threshold, our preference for total system failure over key theft favors the former thresholds.
+While most other secret sharing schemes use ⅔ honest majority with a >⅓ reconstruction threshold, our preference for total system failure over key theft favors the former thresholds.
-## Key Assignments
+## Key assignments
-The keys are assigned to a combination of `verifier` \(e.g., Google, Reddit, Discord\) and `verifier_id` \(e.g., email, username\), which is a unique identifier respective to and provided by the `verifier`. This assignment can be triggered by any node and is decided through the nodes consensus layer.
+The keys are assigned to a combination of `verifier` (for example, an authentication provider configuration, such as OAuth-based logins via Google) and `verifier_id` (such as email or username), which is a unique identifier respective to, and provided by, the `verifier`. This assignment can be triggered by any node and is decided through the nodes' consensus layer.
-### Verifiers and Key Retrieval
+### Verifiers and key retrieval
-The fundamental flow for Torus sign-in is as follows:
+The key retrieval flow for an Embedded Wallets sign-in uses the Torus Node Network as described below:

-1. Your application gets the user to sign-in via their preferred method \(OAuth / email password / passwordless / verification code\).
-2. After the user gives consent/verifies his/her email, Torus SDK will receive an ID token and assign a key to the user depending on User Verifier ID from ID Token.
-3. The key is retrieved from the Torus network and exposed to Web3 provider \(DApp\) to complete user sign-in request.
-4. Torus uses this ID Token to check if the user's profile information exists in the DApp.
- 1. If it does, the user will be signed in to the DApp with their preferred login.
- 2. If it doesn't, the user can create a new account on the DApp with their preferred login.
+1. Your application prompts the user to sign in using their preferred authentication method (for example, OAuth).
+
+2. After the user successfully authenticates, the Embedded Wallets SDK (client-side) receives a verifiable authentication token. From this token, the SDK derives the user’s `verifier_id` (or retrieves it from the authentication provider’s user profile) and uses the (`verifier`, `verifier_id`) pair to identify or assign the user’s key.
+
+3. The SDK communicates with the Embedded Wallets Network (Torus Network) to retrieve the user’s corresponding key share and makes it available to the application to complete the sign-in flow, without exposing the full private key.
+
+4. Using the authentication context, the application determines whether the user already has profile data associated with the dapp:
+ 1. If it does, the user is signed in.
+ 2. If it does not, the user can create a new account using the same login method.
+
+To support general verifiers beyond OAuth-based authentication, an external verifier must implement the following two APIs:
-In order to allow for general verifiers to be used instead of only allowing OAuth, we typically need at least two of these APIs to be implemented by the external verifier:
+1. An API that issues unique tokens when a user is logged in.
+2. An API that consumes these tokens and returns the user's information along with the time the token was issued.
-1. an API that issues unique tokens when a user is logged in.
-2. an API that consumes these tokens and returns user information as well as when the token was issued.
+The first API must be securely accessible from the browser (CORS-enabled with restricted headers) to ensure that the Torus servers cannot intercept the user's token (front-running).
-The first API must be accessible from the browser \(e.g. CORS-enabled, restricted headers\), in order to ensure that the Torus servers are not able to intercept the user's token \(front-running\).
+Typically, any entity that fulfills these two APIs and provides signatures on unique ID strings and a timestamp can be a verifier. This is extendable to several authentication schemes, including existing standards like the OAuth token flow and OpenID Connect.
-Typically any entity that fulfills these two APIs and provides signatures on unique ID strings and timestamp can be a verifier. This is extendable to several authentication schemes, including existing authentication standards like OAuth Token flow and OpenID Connect.
+## Front-running protection
-## Front-Running Protection
+To prevent token front-running and user impersonation by a rogue node or the Torus servers, the system uses a token commitment scheme inspired by Bracha’s Reliable Broadcast. This ensures that a token is revealed, and key shares are released, only after a threshold of nodes have acknowledged the token commitment.
-In order to prevent a rogue node, or the Torus servers, from front-running you by taking your token, impersonating your login, and thereby stealing your key, we use a commitment scheme on our token similar to Bracha's Reliable Broadcast, to ensure that all nodes can be sure that a threshold number of other nodes are aware of the commitment, before it is finally revealed.
+The general approach is as follows: the frontend obtains an authentication token, creates a commitment to the token (hashes the token), and generates a temporary public–private key pair. It sends the token commitment and temporary public key to the nodes. If nodes have not seen the token before, they acknowledge the commitment by returning a signature.
-The general approach is as follows: we ensure that the front-end gets access to the token first, creates a commitment to the token and a temporary public-private keypair, and reveals the token only if a threshold number of nodes accept this commitment. The nodes will then use this keypair to encrypt the shares before sending it to the user.
+Once a threshold of acknowledgements/signatures is collected, the frontend reveals the authentication token along with the signatures. After verification, each node encrypts its key share using the temporary public key and returns it to the frontend.
-This is done by generating a temporary public-private keypair in the front-end. The front-end calls the first API and receives an authentication token. This token is hashed, and the front-end sends the token hash and the temporary public key to each node, and each node returns a signature on this message, if this is the first time they have seen this token commitment. A bundle of these signatures is the proof, and submitting the proof together with the plain \(unhashed token\) to each node results in the node responding with a key share that is encrypted with the temporary public key.
+
### Attack 1: Front-runner intercepts the original commitment request and sends a modified public key
diff --git a/embedded-wallets/infrastructure/sss-architecture.mdx b/embedded-wallets/infrastructure/sss-architecture.mdx
index c698073c6ba..a70ebb77f26 100644
--- a/embedded-wallets/infrastructure/sss-architecture.mdx
+++ b/embedded-wallets/infrastructure/sss-architecture.mdx
@@ -2,7 +2,7 @@
title: Web3Auth Shamir Secret Sharing Architecture
sidebar_label: SSS Architecture
-description: 'SSS Architecture - Web3Auth Wallet Infrastructure | Embedded Wallets'
+description: 'Secret Sharing Architecture - Embedded Wallets Infrastructure | Web3Auth Wallet Infrastructure'
---
import ExpandingSharesFlow from '@site/static/img/embedded-wallets/infrastructure/expanding-shares-sss-flow.png'
@@ -10,13 +10,19 @@ import KeyInitialisationFlow from '@site/static/img/embedded-wallets/infrastruct
import KeyReconstructionFlow from '@site/static/img/embedded-wallets/infrastructure/key-reconstruction-sss-flow.png'
import SSSArchitectureFlow from '@site/static/img/embedded-wallets/infrastructure/sss-architecture-flow.png'
-This document provides an in-depth exploration of the technical architecture of the Shamir's Secret Sharing(SSS)-based SDKs, this includes the current Plug and Play & Single Factor Auth Mobile SDKs.
+This document provides an in-depth exploration of the technical architecture of the [Shamir's Secret Sharing](./glossary.mdx#shamir-secret-sharing) (SSS)-based SDKs, including the current Plug and Play and Single Factor Auth Mobile SDKs.
-Shamir's Secret Sharing is a base form of MPC that splits a secret into $n$ shares, of which threshold $t$ are required to reconstruct the secret. You maybe looking for the [MPC Architecture documentation](/embedded-wallets/infrastructure/mpc-architecture/) instead which does not require the key to be reconstructed on usage.
+SSS is a base form of Multi-Party Computation (MPC) that splits a secret into $n$ shares, of which threshold $t$ are required to reconstruct the secret.
+
+:::note
+
+Alternatively, consider the [MPC architecture](/embedded-wallets/infrastructure/mpc-architecture/), which does not require the key to be reconstructed for usage.
+
+:::
## Components
-The accompanying image illustrates the typical flow of wallet management within the SSS Infrastructure.
+The accompanying image illustrates the typical flow of wallet management within the SSS infrastructure.
-### Auth Nodes (enabling social login)
+### Auth nodes (enabling social login)
-Auth Nodes provide a user-specific key based on some form of attestation from the user. This attestation could come in the form of an OAuth login from an existing account, a traditional email account login, or even biometrics. Nodes need not return a private key, but need to fulfill the interface:
+Auth nodes provide a user-specific key based on some form of attestation from the user. This attestation could come in the form of an OAuth login from an existing account, a traditional email account login, or even biometrics. Nodes need not return a private key, but need to fulfill the interface:
- $retrievePubKey()$
- $encrypt(msg, pubKey)$
@@ -35,31 +41,37 @@ Auth Nodes provide a user-specific key based on some form of attestation from th
For ease of illustration the rest of this document assumes that this is implemented with secp256k1 keys and ECIES. The key retrieved from the nodes is referred to as an encryption key or $encKey$.
+
### Storage layer
-The storage layer serves as a persistent data store for storing encrypted metadata. Examples of such systems include IPFS, Arweave, Sia, or even BFT layers. Data availability and cost are the main factors to consider but the choice of storage layer is ultimately up to the implementer.
+The storage layer serves as a persistent data store for storing encrypted metadata. Examples of such systems include IPFS, Arweave, Sia, or even Byzantine Fault Tolerant (BFT) layers. Data availability and cost are the main factors to consider but the choice of storage layer is ultimately up to the implementer.
-Our SSS Infrastructure utilizes $encKey$ from the nodes as an entry point to retrieve the private key. $encKey$ is used to retrieve encrypted user-specific data from the storage layer, referred to as metadata. This data includes a user's threshold, polynomial commitments, associated devices, and so on.
+Our SSS infrastructure utilizes $encKey$ from the nodes as an entry point to retrieve the private key. $encKey$ is used to retrieve encrypted user-specific data from the storage layer, referred to as metadata. This data includes a user's threshold, polynomial commitments, associated devices, and so on.
### User device
-The SSS Infrastructure is dependent on user devices to store shares. The base flow accommodates a single device, but users can use multiple devices to increase the threshold once they have an initial setup. Access to device storage on the user's device is implementation specific. For example, for native applications on mobile, they can make use of the device keychain.
+The SSS infrastructure is dependent on user devices to store shares. The base flow accommodates a single device, but users can use multiple devices to increase the threshold once they have an initial setup. Access to device storage on the user's device is implementation specific. For example, for native applications on mobile, they can make use of the device keychain.
### Recovery share
-This is generally _not_ used during normal operation, and is intended for use in key recovery / share refresh if the user loses his/her device or shares.
+This is generally _not_ used during normal operation, and is intended for use in key recovery / share refresh if the user loses their device or shares.
## Flows
-### Key Initialization
+### Key handling on user login
+
+Key handling begins in response to a user-triggered action, such as logging in. At this stage, the system attempts to retrieve any existing encrypted metadata associated with the user.
+
+If metadata is found, the user is an existing user. The metadata is decrypted using the nodes’ $encKey$, and the stored information is used to validate the user and load the existing secret-sharing parameters. No new key material is generated in this path.
-A key is initialized upon a user-triggered action (eg. login to nodes). We then attempt to retrieve associated metadata for the user. If none exists, the user is a new one and we generate a corresponding SSS 2/3 polynomial with its respective key and shares. If it exists, we decrypt the metadata using the nodes $encKey$ and read the metadata to verify user information and associated secret sharing parameters.
+If no metadata is found, the user is treated as a new user, and a new key is initialized. In this case, a 2-of-3 Shamir’s Secret Sharing (SSS) polynomial is generated, producing a private key and its corresponding shares.
We select a polynomial $f(z)$ over $Z_q$ where: $$f(z) = a_1z + \sigma$$
- $f(0) = \sigma$ denotes the private key scalar to be used by the user
-- $a_1$ is a coefficient to $z$
-- $f(z_1),f(z_2)$ and $f(z_3)$ are ShareA, ShareB and ShareC respectively
+- $a_1$ is the coefficient of $z$
+- $f(z_1),f(z_2)$ and $f(z_3)$ are ShareA, ShareB, and ShareC respectively
-If a user has logged in previously, he/she reconstructs their key by decrypting ShareB (retrieved through the storage layer) and combining it with ShareA on the user's current device using [Lagrange interpolation](https://brilliant.org/wiki/lagrange-interpolation/#:~:text=Lagrange%20Interpolation,proof%20of%20the%20theorem%20below).
+If a user has logged in previously, they reconstruct their key by decrypting ShareB (retrieved through the storage layer) and combining it with ShareA on the user's current device using [Lagrange interpolation](https://brilliant.org/wiki/lagrange-interpolation/#:~:text=Lagrange%20Interpolation,proof%20of%20the%20theorem%20below).
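The 2-of-3 polynomial and Lagrange reconstruction above can be sketched as a toy example (assuming the secp256k1 group order for $Z_q$; this is not the SDK's implementation):

```python
import secrets

# Toy 2-of-3 SSS over Z_q (here q is the secp256k1 group order; any large prime works)
q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

sigma = secrets.randbelow(q)            # private key scalar, f(0) = sigma
a1 = secrets.randbelow(q)               # coefficient of z
f = lambda z: (a1 * z + sigma) % q

shares = {z: f(z) for z in (1, 2, 3)}   # ShareA, ShareB, ShareC at z_1, z_2, z_3

def lagrange_at_zero(points: dict[int, int]) -> int:
    """Interpolate f(0) from any threshold number of (z, f(z)) points."""
    acc = 0
    for zi, yi in points.items():
        num, den = 1, 1
        for zj in points:
            if zj != zi:
                num = num * (-zj) % q           # numerator: product of (0 - z_j)
                den = den * (zi - zj) % q       # denominator: product of (z_i - z_j)
        acc = (acc + yi * num * pow(den, -1, q)) % q
    return acc

# Any two shares recover the key; a single share alone reveals nothing
assert lagrange_at_zero({1: shares[1], 2: shares[2]}) == sigma
```

The same `lagrange_at_zero` step is what a returning user performs with ShareA (device) and ShareB (storage layer).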
-### Key Generation with Deterministic Share
+### Key generation with deterministic share
-Provided with a user input, $input$, we can pre-determine a single share in the generation of the SSS polynomial, where we peg ShareC or $f(z_3)$ to a users input. Let $H$ be a cryptographically secure hash function. $$\text{Given} \, f(z_3)= H(input)\\ \text{Select } \sigma \text{ over } Z_q \text{ and solve } a_1= \frac{f(z_3) - \sigma}{z_3}$$
+Provided with a user input, $input$, we can predetermine a single share in the generation of the SSS polynomial, pegging ShareC, or $f(z_3)$, to the user's input. Let $H$ be a cryptographically secure hash function. Given:
+$$f(z_3) = H(input), \text{ select } \sigma \text{ over } Z_q \text{ and solve } a_1 = \frac{f(z_3) - \sigma}{z_3}$$
In the case of secret resharing, we can also conduct the deterministic generation of the $f(z)$ polynomial with a given $\sigma$ and $input$. We are able to peg $n$ given values to the key or shares as long as $n\le d + 1$ where $d$ is the degree of the SSS polynomial $f(z)$.
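A minimal sketch of pegging ShareC to a hashed user input (SHA-256 stands in for $H$, and the group order is the secp256k1 order; names are illustrative):

```python
import hashlib
import secrets

# Assumed prime modulus: the secp256k1 group order (illustrative choice)
q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def keygen_with_pegged_share(user_input: str, z3: int = 3):
    """Pick sigma at random, then solve a1 so that f(z3) = H(input)."""
    f_z3 = int.from_bytes(hashlib.sha256(user_input.encode()).digest(), "big") % q
    sigma = secrets.randbelow(q)
    a1 = (f_z3 - sigma) * pow(z3, -1, q) % q   # a1 = (f(z3) - sigma) / z3 mod q
    f = lambda z: (a1 * z + sigma) % q
    assert f(z3) == f_z3                        # ShareC is pegged to the input
    return sigma, f

# e.g. input derived from three security-question answers, input = A|B|C
sigma, f = keygen_with_pegged_share("A|B|C")
```

Re-running the hash on the same answers always reproduces ShareC, which is what makes the share recoverable from user input alone.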
-It is important that $input$ has sufficient entropy in its generation. A potential way to guarantee this is via using several security questions (for example three of them) with answers $A,B,C$ to derive $input = A|B|C$. This can be implemented with a repository of questions, of which order and index can be defined in metadata.
+It is important that $input$ has sufficient entropy in its generation. A potential way to guarantee this is to use several security questions (for example, a minimum of three) with answers $A,B,C$ to derive $input = A|B|C$. This can be implemented with a repository of questions, whose order and index can be defined in metadata.
-Candidate qualified questions are suggested to be ones with deterministic answers (i.e. "your parent's birthday" or "your city of birth"), rather than "what's your favorite singer?". To accommodate for cases that users tend to forget their answers. There is a further potential of splitting the answers themselves via SSS such that the user can answer 3/5 questions, giving redundancy.
+Candidate questions should have deterministic answers (such as "your parent's birthday" or "your city of birth") rather than "what's your favorite singer?". To accommodate cases where users forget their answers, the answers themselves can be split via SSS such that the user need only answer 3 of 5 questions, giving redundancy.
-### Expanding the Number of Shares (Adding a Device)
+### Expanding the number of shares (adding a device)
-In the case of a new device the user needs to migrate a share to the device to reconstruct their key. This can be done user input or through a login to a current device.
+In the case of adding a new device, the user needs to migrate a share to the device to reconstruct their key. This can be done via user input or through a login to a current device.
-On reconstruction of $f(z)$ on the new device we set ShareD to $f(z_4)$.
+On reconstruction of $f(z)$ on the new device, we set ShareD to $f(z_4)$.
### Share resharing and revocability
-Utilizing the storage layer, we are able to generate new shares for all devices, regardless if they are online or offline. This allows us to remove devices from the sharing, allow the user to change their security questions and/or adjust their security threshold. The key concept here is utilizing published Share commitments as encryption keys to retrieve shares on the most updated SSS polynomial.
+Utilizing the storage layer, we are able to generate new shares for all devices, regardless if they are online or offline. This allows us to remove devices from the sharing, allow the user to change their security questions and/or adjust their security threshold. The key concept here is utilizing published share commitments as encryption keys to retrieve shares on the most updated SSS polynomial.
This is illustrated from a 2/4 SSS sharing $f(z)$ with shares $s_a, s_b, s_c, s_d$ kept on 3 user devices and the nodes. Let $g$ be a generator of a multiplicative subgroup where the discrete log problem is assumed hard to solve. During initialization of the key we create share commitments $g^{s_a}, g^{s_b}, g^{s_c}, g^{s_d}$ to be stored on the storage layer. These share commitments are analogous to public keys derived from the share scalars.
-Given the user loses device D holding $s_d$, and wants to make that share redundant. He/she first reconstructs their key on device A. We utilize a public key based encryption scheme (eg. ECIES).
+Given the user loses device D holding $s_d$ and wants to make that share redundant, they first reconstruct their key on device A. We utilize a public key-based encryption scheme (for example, ECIES).
The user generates a new 2/3 SSS polynomial $h(z)$ on the same $\sigma$ with shares $s_1, s_2, s_3$ and encrypts the newly generated shares for each respective share commitment, except for the lost $g^{s_d}$ commitment.
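A toy sketch of resharing on the same $\sigma$ with verifiable share commitments. A small Schnorr-style subgroup modulo a safe prime stands in for the production elliptic-curve group, and the 2-of-3 parameters are illustrative:

```python
import secrets

# Toy group: p = 2q + 1 with both prime; the squares mod p form a subgroup of
# prime order q, so g = 4 generates it. Real deployments use an elliptic curve.
q, p, g = 1019, 2039, 4

sigma = secrets.randbelow(q)                 # existing key, unchanged by resharing

# New 2-of-3 polynomial h(z) on the same sigma
b1 = secrets.randbelow(q)
h = lambda z: (b1 * z + sigma) % q
new_shares = {z: h(z) for z in (1, 2, 3)}

# Publish commitments g^{s_i}; share holders use them as encryption keys and
# to verify the shares they later receive
commitments = {z: pow(g, s, p) for z, s in new_shares.items()}

# Any device can check a received share against its published commitment
assert all(pow(g, new_shares[z], p) == commitments[z] for z in new_shares)

# Two of the new shares still reconstruct the same sigma (Lagrange at 0)
z1, z2 = 1, 2
l1 = (-z2) * pow(z1 - z2, -1, q) % q
l2 = (-z1) * pow(z2 - z1, -1, q) % q
assert (new_shares[z1] * l1 + new_shares[z2] * l2) % q == sigma
```

The lost commitment $g^{s_d}$ is simply skipped when encrypting the new shares, which is what revokes device D without requiring it to be online.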
diff --git a/ew-sidebar.js b/ew-sidebar.js
index cd9345a388a..d183085bdbc 100644
--- a/ew-sidebar.js
+++ b/ew-sidebar.js
@@ -209,8 +209,8 @@ const sidebar = {
"infrastructure/glossary",
{
type: "link",
- label: "Compliance, Audits and Trust",
- href: "https://trust.web3auth.io",
+ label: "Compliance, Audits and Trust",
+ href: "https://trust.web3auth.io",
},
],
},
diff --git a/package-lock.json b/package-lock.json
index 6717da050e1..58a36f1d1fd 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -41,6 +41,7 @@
"ethers": "^6.15.0",
"js-cookie": "^3.0.5",
"jsonwebtoken": "^9.0.2",
+ "katex": "^0.16.25",
"launchdarkly-js-client-sdk": "^3.9.0",
"lodash.camelcase": "^4.3.0",
"lodash.debounce": "^4.0.8",
@@ -21153,7 +21154,9 @@
}
},
"node_modules/katex": {
- "version": "0.16.22",
+ "version": "0.16.27",
+ "resolved": "https://registry.npmjs.org/katex/-/katex-0.16.27.tgz",
+ "integrity": "sha512-aeQoDkuRWSqQN6nSvVCEFvfXdqo1OQiCmmW1kc9xSdjutPv7BGO7pqY9sQRJpMOGrEdfDgF2TfRXe5eUAD2Waw==",
"funding": [
"https://opencollective.com/katex",
"https://github.com/sponsors/katex"
diff --git a/package.json b/package.json
index ded734324af..b3eca532c27 100644
--- a/package.json
+++ b/package.json
@@ -59,6 +59,7 @@
"ethers": "^6.15.0",
"js-cookie": "^3.0.5",
"jsonwebtoken": "^9.0.2",
+ "katex": "^0.16.25",
"launchdarkly-js-client-sdk": "^3.9.0",
"lodash.camelcase": "^4.3.0",
"lodash.debounce": "^4.0.8",