Use +nightly for cargo-fuzz install/build/run to avoid stable toolchain -Z sanitizer failures in scheduled fuzz CI.
* fix: update goldilocks fuzz target to use p3-goldilocks

  The goldilocks fuzz target was using p3-miden-goldilocks, which is deprecated and no longer has Deserializable trait support in miden-serde-utils. This change updates the fuzz target to use p3-goldilocks instead, which is the correct crate that implements the Serializable/Deserializable traits in miden-serde-utils.

  Fixes the fuzz CI failures seen in:
  - fuzz miden-serde-utils (goldilocks)

* ci: run fuzz jobs sequentially to avoid resource contention

  Add max-parallel: 1 to both fuzz job matrices to ensure each fuzz target runs with the full available resources. This prevents potential failures due to memory/disk pressure when multiple fuzz builds run concurrently on the same runner. The tradeoff is a longer total runtime, but since these are scheduled daily runs (not PR-blocking), reliability is more important than speed.
Add a workflow that generates Rust documentation on every push to the `next` branch and publishes it to GitHub Pages. Co-authored-by: Bobbin Threadbare <43513081+bobbinth@users.noreply.github.com>
#817)

- Add --no-deps flag to only document workspace crates
- Clean target/doc before generating to remove stale dependency docs
- Use --enable-index-page -Zunstable-options to generate workspace index
Renames the `value` field and related methods in `NodeIndex` to `position` for improved clarity, as the value represents the horizontal position within a tree level rather than an arbitrary value.

Changes:
- NodeIndex::value() -> NodeIndex::position()
- NodeIndex::is_value_odd() -> NodeIndex::is_position_odd()
- LeafIndex::value() -> LeafIndex::position()
- InvalidNodeIndex error field: value -> position

closes #208
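The rename can be illustrated with a minimal sketch. This is not the real `NodeIndex` type, just a simplified stand-in showing the renamed accessors and why `position` is the clearer name (a horizontal offset within a tree level):

```rust
// Hypothetical, simplified stand-in for NodeIndex; the real type lives in the
// merkle module and has more invariants than shown here.
#[derive(Clone, Copy, Debug, PartialEq)]
struct NodeIndex {
    depth: u8,
    position: u64, // formerly named `value`
}

impl NodeIndex {
    fn new(depth: u8, position: u64) -> Self {
        Self { depth, position }
    }

    /// Formerly `value()`: the horizontal offset within level `depth`.
    fn position(&self) -> u64 {
        self.position
    }

    /// Formerly `is_value_odd()`: odd positions are right children.
    fn is_position_odd(&self) -> bool {
        self.position & 1 == 1
    }
}

fn main() {
    let idx = NodeIndex::new(3, 5);
    assert_eq!(idx.position(), 5);
    assert!(idx.is_position_odd());
    println!("depth {} position {}", idx.depth, idx.position());
}
```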
This commit implements the in-memory `Backend` for the new SMT forest and then uses it to implement and test the backend-independent functionality of the forest itself. It also includes a few miscellaneous changes:

- Updates the `History` mechanism to change how it stores leaf deltas so they are more easily used by the forest itself.
- Updates the integration tests between the history and a bare SMT so that they reflect correct usage and therefore actually test the integration.
* fix: tuple min_serialized_size excludes alignment padding

  Tuple Deserializable implementations were using the default min_serialized_size(), which returns size_of::<Self>(). This includes alignment padding in the calculation, causing budget checks to reject valid serialized data.

  For example, (NodeIndex, Word) has:
  - In-memory size: 48 bytes (NodeIndex has 7 bytes of padding)
  - Serialized size: 41 bytes (9 + 32, no padding)

  The fix adds min_serialized_size() implementations to all tuple types that sum the min_serialized_size() of each element.

  Fixes #826

* chore: add CHANGELOG entry for tuple min_serialized_size fix
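The padding discrepancy described above can be demonstrated directly. The mock types below are assumptions standing in for the real `NodeIndex` (a `u8` depth plus a `u64` position, 9 serialized bytes) and `Word` (32 bytes); they only reproduce the size arithmetic:

```rust
use core::mem::size_of;

// Mock stand-ins (names and layouts are assumptions for illustration):
// a NodeIndex-like struct serializes as 1 + 8 = 9 bytes, a Word as 32 bytes.
struct MockNodeIndex {
    depth: u8,     // serialized: 1 byte
    position: u64, // serialized: 8 bytes
}
type MockWord = [u8; 32];

fn main() {
    // In memory, alignment pads the 9 meaningful bytes of MockNodeIndex
    // up to 16, so the tuple occupies 16 + 32 = 48 bytes...
    let in_memory = size_of::<(MockNodeIndex, MockWord)>();
    // ...but the serialized form has no padding: 9 + 32 = 41 bytes.
    let min_serialized = (1 + 8) + 32;
    assert_eq!(in_memory, 48);
    assert_eq!(min_serialized, 41);
    println!("in-memory: {in_memory}, serialized: {min_serialized}");
}
```

A budget check based on `size_of::<Self>()` would demand 48 bytes and reject a perfectly valid 41-byte input, which is exactly the bug the fix removes.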
…821)

* remove nodes from store when reference count reaches zero
* add test
* add changelog

Co-authored-by: François Garillot <4142+huitseeker@users.noreply.github.com>
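The reference-counting behavior can be sketched as follows. This is not the real store API, just a minimal illustration (names are assumptions) of dropping a node once its count reaches zero:

```rust
use std::collections::HashMap;

// Minimal sketch of reference-counted node storage: each node carries a
// count, and the node is removed from the map when the count hits zero.
struct RefCountedStore<K, V> {
    nodes: HashMap<K, (V, usize)>,
}

impl<K: std::hash::Hash + Eq, V> RefCountedStore<K, V> {
    fn new() -> Self {
        Self { nodes: HashMap::new() }
    }

    /// Insert a node, or bump its count if it is already present.
    fn insert(&mut self, key: K, value: V) {
        self.nodes
            .entry(key)
            .and_modify(|e| e.1 += 1)
            .or_insert((value, 1));
    }

    /// Decrement the reference count; remove the node when it reaches zero.
    fn remove(&mut self, key: &K) {
        let mut drop_node = false;
        if let Some(entry) = self.nodes.get_mut(key) {
            entry.1 -= 1;
            drop_node = entry.1 == 0;
        }
        if drop_node {
            self.nodes.remove(key);
        }
    }

    fn contains(&self, key: &K) -> bool {
        self.nodes.contains_key(key)
    }
}

fn main() {
    let mut store = RefCountedStore::new();
    store.insert("root", [0u8; 32]);
    store.insert("root", [0u8; 32]); // second reference to the same node
    store.remove(&"root");
    assert!(store.contains(&"root")); // one reference still outstanding
    store.remove(&"root");
    assert!(!store.contains(&"root")); // freed once the count hit zero
}
```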
* refactor(mmr): make `PartialMmr::open()` return `MmrProof`
* chore: changelog
* chore: addressing comments (several rounds)

Co-authored-by: Bobbin Threadbare <43513081+bobbinth@users.noreply.github.com>
This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information, see the GitHub code scanning documentation.
…#812)

* feat: add validation to PartialMmr deserialization and from_parts

  This commit adds validation to `PartialMmr::from_parts()` and the `Deserializable` implementation to ensure consistency between components:
  - Validates that `track_latest` is only true when the forest has a single leaf tree
  - Validates that all node indices are within forest bounds
  - Adds `from_parts_unchecked()` for performance-critical trusted code paths
  - Updates `Deserializable` to use the validating constructor

  This addresses security concerns when deserializing from untrusted sources. Closes #802

* fix: validate all node indices in PartialMmr::from_parts

  Address review feedback:
  - Reject index 0 as invalid (InOrderIndex starts at 1)
  - Check all indices against forest.rightmost_in_order_index()
  - Handle the empty-forest case explicitly
  - Add tests for index 0, large even indices, and deserialization

* fix: validate separator indices in PartialMmr::from_parts

  - Add Forest::is_valid_in_order_index() to check whether an index points to an actual node (not a separator position between trees)
  - Update from_parts() to reject separator indices
  - Add tests for separator index validation (indices 8 and 12 for a 7-leaf forest)
  - Fix comment: the rightmost in-order index for 7 leaves is 13, not 12
  - Mark PR as [BREAKING] in CHANGELOG

* fix: address review nits in PartialMmr validation
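The index bounds check is the core of the validation. The sketch below is an assumption-laden simplification (the helper name and error type are invented; the real `from_parts()` also checks `track_latest` and separator indices), showing only the "nonzero and within the forest" rule:

```rust
// Simplified sketch of PartialMmr index validation; `validate_indices` and
// `ValidationError` are hypothetical names, not the real API.
#[derive(Debug, PartialEq)]
enum ValidationError {
    IndexOutOfBounds(u64),
}

fn validate_indices(
    indices: &[u64],
    rightmost_in_order_index: u64,
) -> Result<(), ValidationError> {
    for &idx in indices {
        // InOrderIndex starts at 1, so 0 is never valid; anything past the
        // rightmost in-order index points outside the forest.
        if idx == 0 || idx > rightmost_in_order_index {
            return Err(ValidationError::IndexOutOfBounds(idx));
        }
    }
    Ok(())
}

fn main() {
    // Per the commit message, a 7-leaf forest's rightmost in-order index is 13.
    assert_eq!(validate_indices(&[1, 5, 13], 13), Ok(()));
    assert_eq!(
        validate_indices(&[0], 13),
        Err(ValidationError::IndexOutOfBounds(0))
    );
    assert_eq!(
        validate_indices(&[14], 13),
        Err(ValidationError::IndexOutOfBounds(14))
    );
}
```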
* refactor(smt): use bitmask representation for `Subtree` storage

  Replace `Map<u8, InnerNode>` with a compact bitmask + `Vec<Word>`.

* chore: changelog
* chore: validate unused bitmask bits, add tests, fix nits
* chore: add path selection verification to tests
* fix: correct sparse subtree benchmark starting depth
* fix: propagate error instead of panicking on invalid field elements
* fix: correct dense subtree benchmark off-by-one depth

Co-authored-by: Bobbin Threadbare <43513081+bobbinth@users.noreply.github.com>
Co-authored-by: François Garillot <4142+huitseeker@users.noreply.github.com>
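The bitmask + dense-vector idea can be sketched as follows. This is not the real `Subtree` type (the mask width, node type, and method names are assumptions): a set bit marks an occupied slot, and a node's position in the dense vector is the popcount of the mask bits below its slot:

```rust
// Minimal sketch of bitmask-indexed subtree storage, replacing a
// Map<u8, InnerNode> with a bitmask plus a dense Vec.
struct BitmaskSubtree {
    mask: u16,            // bit i set => slot i is occupied
    nodes: Vec<[u8; 32]>, // dense storage, kept in slot order
}

impl BitmaskSubtree {
    fn new() -> Self {
        Self { mask: 0, nodes: Vec::new() }
    }

    /// Rank = number of occupied slots strictly below `slot`.
    fn rank(&self, slot: u8) -> usize {
        let bit = 1u16 << slot;
        (self.mask & (bit - 1)).count_ones() as usize
    }

    fn insert(&mut self, slot: u8, node: [u8; 32]) {
        let bit = 1u16 << slot;
        let rank = self.rank(slot);
        if self.mask & bit == 0 {
            self.mask |= bit;
            self.nodes.insert(rank, node);
        } else {
            self.nodes[rank] = node;
        }
    }

    fn get(&self, slot: u8) -> Option<&[u8; 32]> {
        if self.mask & (1u16 << slot) == 0 {
            return None;
        }
        Some(&self.nodes[self.rank(slot)])
    }
}

fn main() {
    let mut st = BitmaskSubtree::new();
    st.insert(9, [9u8; 32]);
    st.insert(2, [2u8; 32]);
    assert_eq!(st.get(2), Some(&[2u8; 32]));
    assert_eq!(st.get(9), Some(&[9u8; 32]));
    assert_eq!(st.get(3), None);
}
```

Compared with a map, this stores no per-entry keys or hashing overhead, which is what makes the representation compact.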
Updates the Rust toolchain from 1.90 to 1.93, and also updates dependency versions via `cargo update` to ensure compatibility with the newer compiler.
* fuzz: add MMR and crypto type deserialization fuzz targets

  Add fuzz targets for high-severity attack surface:
  - mmr.rs: PartialMmr and Forest deserialization
  - crypto.rs: Falcon PublicKey, SealingKey, SealedMessage deserialization

  Also update keccak to 0.1.6 to fix RUSTSEC-2026-0012.

* ci: add mmr and crypto fuzz targets to CI workflow

  Add the new fuzz targets for MMR structures (PartialMmr, Forest) and cryptographic types (PublicKey, SealingKey, SealedMessage) to the daily CI fuzz job.

* fix: replace unwrap with proper error handling in XChaCha decryption

  The AeadScheme implementation for XChaCha used unwrap() when deserializing EncryptedData from raw bytes, which could panic on malformed attacker-controlled input. Replace with proper error propagation. Also add an AEAD fuzz target to catch similar issues and include it in the CI fuzz job.

* fuzz: add DSA signatures fuzz target

  Add fuzz coverage for all signature deserialization paths:
  - EdDSA (Ed25519) signatures and public keys
  - ECDSA (secp256k1) signatures, public keys, and recovery
  - Falcon512 signatures, public keys, and recovery

  Also exercises verify paths to catch panics on malformed input.

* chore: Changelog
* Use budgeted deserialization for untrusted bytes
* chore: Changelog
* fix: enforce bounded key deserialization
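Budgeted deserialization can be sketched as a reader that charges every read against a caller-supplied byte budget. This is a hedged illustration, not the real miden-serde-utils API (the type and method names are assumptions): the point is that a hostile length prefix is rejected before any allocation happens:

```rust
// Sketch of a budget-carrying reader for untrusted input.
#[derive(Debug, PartialEq)]
enum DeserError {
    BudgetExceeded,
    UnexpectedEnd,
}

struct BudgetedReader<'a> {
    bytes: &'a [u8],
    budget: usize,
}

impl<'a> BudgetedReader<'a> {
    fn new(bytes: &'a [u8], budget: usize) -> Self {
        Self { bytes, budget }
    }

    /// Take `n` bytes, charging them against the budget first, so an
    /// attacker-supplied length can never force an oversized allocation.
    fn take(&mut self, n: usize) -> Result<&'a [u8], DeserError> {
        if n > self.budget {
            return Err(DeserError::BudgetExceeded);
        }
        if n > self.bytes.len() {
            return Err(DeserError::UnexpectedEnd);
        }
        self.budget -= n;
        let (head, tail) = self.bytes.split_at(n);
        self.bytes = tail;
        Ok(head)
    }
}

fn main() {
    let data = [1u8, 2, 3, 4];
    let mut reader = BudgetedReader::new(&data, 3);
    assert_eq!(reader.take(2), Ok(&data[..2]));
    // The remaining budget is 1, so a further 2-byte read is rejected
    // even though the input slice could satisfy it.
    assert_eq!(reader.take(2), Err(DeserError::BudgetExceeded));
}
```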
* test x25519 torsion
* bind ies kdf
* reject x25519 torsion
* chore: Changelog
* test: cover x25519 torsion rejection
* chore: debug-assert x25519 all-zero shared secret
* docs: clarify HKDF info context
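The all-zero shared-secret check referenced above can be sketched in plain Rust. This is an assumption-level illustration (the real code may use a vetted crate for the comparison): an all-zero X25519 output indicates the peer supplied a low-order point, so the result carries no secrecy and must be rejected. The OR-fold touches every byte regardless of content, avoiding an early-exit timing signal:

```rust
// Hypothetical helper, not the real API: branchless all-zero check over a
// 32-byte X25519 shared secret.
fn is_all_zero(shared_secret: &[u8; 32]) -> bool {
    shared_secret.iter().fold(0u8, |acc, &b| acc | b) == 0
}

fn main() {
    // A low-order peer point yields an all-zero secret: reject it.
    assert!(is_all_zero(&[0u8; 32]));

    // Any nonzero byte means the secret is usable.
    let mut ok = [0u8; 32];
    ok[31] = 1;
    assert!(!is_all_zero(&ok));
}
```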
* Zeroize ECDH RNG seeds
* Use constant-time auth tag comparison
* Make AEAD SecretKey equality test-only
* Address some cleanups
* Use constant-time SecretKey equality in tests
* Remove auth tag equality test
* chore: Changelog
* Add Zeroize audit script and CI check
* Fix rustdoc links and add doc CI jobs
* Inline canonicalization for ct_eq
* Replace Zeroize audit script with Rust tool
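The constant-time tag comparison works by XOR-folding every byte so the running time does not depend on where the first mismatch occurs. The sketch below is illustrative only (the actual implementation may rely on a vetted constant-time crate rather than hand-rolled code):

```rust
// Hypothetical constant-time comparison of two 16-byte auth tags. Unlike
// `a == b`, this never short-circuits at the first differing byte.
fn ct_eq(a: &[u8; 16], b: &[u8; 16]) -> bool {
    let mut diff = 0u8;
    for i in 0..16 {
        diff |= a[i] ^ b[i]; // accumulate differences across all bytes
    }
    diff == 0
}

fn main() {
    let tag = [0xABu8; 16];
    assert!(ct_eq(&tag, &tag));

    let mut forged = tag;
    forged[0] ^= 1;
    assert!(!ct_eq(&tag, &forged));
}
```

An early-exit comparison leaks, via timing, how many prefix bytes of a forged tag were correct, which an attacker can exploit to forge tags byte by byte; the fold removes that signal.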
* Replace unsafe NonZero with safe alternative, document merkle unsafe

  - sparse_path.rs: Replace unsafe NonZero::new_unchecked with safe NonZero::new().expect(). Micro-benchmarks show no performance difference between the safe and unsafe versions (both ~175ps).
  - merkle_tree.rs: Add performance documentation explaining why the unsafe code is kept. Benchmarks at 4K+ leaves show a ~2-2.5% improvement (65K leaves: unsafe=67.17ms vs safe=68.27ms).
  - Add a sparse_path.rs benchmark suite for future regression testing.
  - Add an init_vector safe alternative in utils/mod.rs.
  - Extend merkle benchmarks to 65K leaves for scale testing.

* Evaluate remaining unsafe code, add benchmarks and documentation

  - Add a transpose benchmark showing a 31% perf improvement from uninit_vector
  - Expand SAFETY comments in digest.rs:digests_as_bytes explaining repr(transparent) layout guarantees
  - Expand SAFETY comments in empty_roots.rs:empty_hashes documenting bounds correctness and the const fn requirement
  - Add performance documentation to transpose_slice
  - Update unsafe-evaluation.prose to cover all remaining unsafe uses

* Use MaybeUninit vector init helpers
* Bump MSRV to 1.93
* Fix large forest entries regression and add seed
* Fix Merkle tree unsafe usage and document invariants
* Avoid recursion in large forest iterator and fix MerkleTree reads
* Fix empty history iteration ordering
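The NonZero substitution is a one-line change in spirit. A minimal sketch (the wrapping function is hypothetical; only the two `NonZeroU64` constructors are real standard-library API):

```rust
use core::num::NonZeroU64;

// Before (unsafe): skipped the zero check entirely, which is undefined
// behavior if `x` is ever 0:
//     unsafe { NonZeroU64::new_unchecked(x) }
//
// After (safe): validate and panic with a clear message instead. The commit
// above reports no measurable performance difference between the two.
fn to_non_zero(x: u64) -> NonZeroU64 {
    NonZeroU64::new(x).expect("value must be non-zero")
}

fn main() {
    assert_eq!(to_non_zero(7).get(), 7);
}
```

This is the usual trade-off the commit documents: where benchmarks show the safe form costs nothing, prefer it; where unsafe measurably wins (as in merkle_tree.rs), keep it but write down the safety argument.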
* Revert MSRV to 1.90
* Run cargo-msrv on 1.91 in CI
The `History` container used by `LargeSmtForest` now stores, in each delta, all of the information required to revert the current tree to the correct historical state. This trades increased memory usage for much faster history queries and simpler iteration over entries in the forest. The iterator in `LargeSmtForest` has been greatly simplified, as has the iterator for the `InMemoryBackend`.
* Harden MerkleStore deserialization
* Integrate merkle_store fuzz target
* chore: Changelog
* Use direct sha2 updates and clarify sizes
* Fix blake3 hash elements
* Allow padded Digest192 hex and match zeroize paths
* Reject all-zero X25519 shared secret
* Handle zeroize generics and target dir
* Add subtree format header
* Use trailing zero checks in benches
* Fix partial MMR tracking
* Reject unknown subtree versions
* chore: comment adjustment
* Fix zeroize-audit target path
* Clarify Digest192 parsing and add check-features
* Use subtle for zero shared secret check
* addressing review comments

  Remove legacy deserialization paths for PartialMmr and subtree data. Require the current marker or magic header and reject old formats. Add small comments that explain key behavior: empty-input handling in x25519 all-zero checks, mutation dedup order in subtree updates, the marker purpose in PartialMmr, and invalid leaf position handling in untrack(). Fix outdated docs wording in padded_elements_to_bytes and update tests to match the hard-cutoff behavior.
This commit implements a persistent backend for the SMT forest that both allows it to start up rapidly from an on-disk state and allows many portions of the forest to be offloaded from resident memory. It stores the full tree data for each lineage in the forest in a RocksDB instance.

As it stands, the backend has not undergone any particular optimization work, instead relying predominantly on the optimizations to access patterns developed for `LargeSmt`. Comparative performance analysis against `LargeSmt` shows that, in like-for-like scenarios, the forest ranges from 1.5x to 2x slower. This is in line with estimates, as in the worst case the forest has to perform 2x the amount of I/O due to its lack of an in-memory prefix.

The commit also includes basic benchmarks for the large SMT forest. While they do not cover every piece of functionality, they currently cover the following, specifically for the persistent backend:

- `forest.open(...)`: the time it takes to get a single opening from an arbitrary tree in the forest, both for the current tree and from the history.
- `forest.add_lineage(...)`: the time it takes to add a new lineage to the forest.
- `forest.update_tree(...)`: the time it takes to update an existing lineage in the forest.
- `forest.update_forest(...)`: the time it takes to update multiple lineages in the forest in a single batch.

There may be further opportunities for optimization, based on tailoring the database parameters to the forest, but the current performance is well within the expected bounds.
This function previously had the potential to overflow for nodes at depth 64. This commit adds a check for whether an overflow would occur and returns an error in that case.
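The pattern can be sketched with checked arithmetic. This is not the actual function (its name and error type are assumptions); it illustrates why depth 64 is the overflow point: the node count at depth `d` is 2^d, and a 64-bit shift by 64 is out of range, which `checked_shl` reports as `None`:

```rust
// Hypothetical depth-to-width computation with an explicit overflow check.
#[derive(Debug, PartialEq)]
struct DepthTooBig(u8);

fn nodes_at_depth(depth: u8) -> Result<u64, DepthTooBig> {
    // 1 << 64 does not fit in a u64, so checked_shl returns None at depth 64
    // and we surface that as an error instead of wrapping silently.
    1u64.checked_shl(depth as u32).ok_or(DepthTooBig(depth))
}

fn main() {
    assert_eq!(nodes_at_depth(3), Ok(8));
    assert_eq!(nodes_at_depth(63), Ok(1u64 << 63));
    assert_eq!(nodes_at_depth(64), Err(DepthTooBig(64)));
}
```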
This commit introduces `get_leaf_and_subtrees` to the `Storage` API, with a default implementation consisting of sequential calls to `get_leaf` and `get_subtree`. This allows backends (e.g., RocksDB) to override it with more efficient implementations where possible.

While the original idea was to delegate to `get_subtrees` instead, that can be a very heavyweight solution, as it is intended for fetching large numbers of subtrees at once. As `get_leaf_and_subtrees` is currently only used in `open`, the default implementation is tailored for that use case, which involves a small number of subtrees.

There is no measurable performance impact from this change. For posterity, using `get_subtrees` instead yielded slowdowns of over 450%.
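The shape of the API change can be sketched as a trait with a defaulted method. The types and signatures below are assumptions for illustration (the real `Storage` trait is richer); what matters is that the default chains the two existing calls, while a backend like RocksDB could override it to fuse the reads:

```rust
// Placeholder types standing in for the real leaf/subtree/key types.
type Leaf = u64;
type Subtree = u64;
type Key = u64;

trait Storage {
    fn get_leaf(&self, key: Key) -> Option<Leaf>;
    fn get_subtree(&self, key: Key) -> Option<Subtree>;

    /// Default implementation: sequential calls, suitable for the small
    /// number of subtrees needed by `open`. Backends may override this with
    /// a single batched lookup.
    fn get_leaf_and_subtrees(
        &self,
        key: Key,
        subtree_keys: &[Key],
    ) -> (Option<Leaf>, Vec<Option<Subtree>>) {
        let leaf = self.get_leaf(key);
        let subtrees = subtree_keys.iter().map(|&k| self.get_subtree(k)).collect();
        (leaf, subtrees)
    }
}

// Toy backend that relies entirely on the default implementation.
struct InMemory;

impl Storage for InMemory {
    fn get_leaf(&self, key: Key) -> Option<Leaf> {
        Some(key * 10)
    }
    fn get_subtree(&self, key: Key) -> Option<Subtree> {
        Some(key + 100)
    }
}

fn main() {
    let storage = InMemory;
    let (leaf, subtrees) = storage.get_leaf_and_subtrees(4, &[1, 2]);
    assert_eq!(leaf, Some(40));
    assert_eq!(subtrees, vec![Some(101), Some(102)]);
}
```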
This is a tracking PR for the v0.23.0 release.