Conversation
📝 Walkthrough

Adds Sui CI workflow; normalizes Move package manifests; updates Move sources/tests; introduces Sui environment-aware build/publish and a staged, resumable CLI deploy flow with version checks and progress persistence; replaces shell symlink usage with a Node fs-based implementation.

Changes
Sequence Diagram

sequenceDiagram
actor User as User/CLI
participant CLI as CLI (deploy.ts)
participant Helpers as Helpers (helpers.ts)
participant Build as Sui Build
participant TOML as Published.toml Parser
participant SDK as Sui SDK
participant Chain as Sui Chain
User->>CLI: deploySui(buildEnv)
CLI->>Helpers: readSetupProgress(.sui-deploy-progress.<env>.json)
alt fresh deploy
CLI->>Build: buildSuiPackage(...) -e buildEnv
Build-->>CLI: build results
CLI->>Chain: publish packages
Chain-->>CLI: transaction results / package IDs
CLI->>TOML: parsePublishedToml()
else resume deploy
CLI->>TOML: parsePublishedToml() (existing)
end
CLI->>SDK: query DeployerCaps / objects
SDK-->>CLI: on-chain state
CLI->>CLI: Stage 1/2/3 steps (complete burn/lock, transceiver setup, register)
CLI->>Chain: execute setup/register txs
Chain-->>CLI: tx confirmations / object IDs
CLI->>Helpers: saveSetupProgress(.sui-deploy-progress.<env>.json)
CLI-->>User: deployment complete
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Force-pushed: a910948 to b86bdb5, 259cced to 79e1d06, 79e1d06 to 0222e10.
Actionable comments posted: 8
♻️ Duplicate comments (1)
sui/packages/wormhole_transceiver/Move.toml (1)
8-8: Same dependency pinning concern as above (wormhole rev should be immutable).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sui/packages/wormhole_transceiver/Move.toml` at line 8, The dependency entry for wormhole in Move.toml pins rev to the mutable branch name "sui-package-update"; replace that with an immutable identifier (a full commit SHA or a release tag) so the dependency is deterministic. Update the wormhole = { git = "...", subdir = "sui/wormhole", rev = "sui-package-update" } line to use the exact commit hash or tagged release instead of the branch name, and verify builds use that immutable rev.
🧹 Nitpick comments (1)
cli/src/suiDeploy.ts (1)
101-103: Escape env before building the section regex.

The function is exported and constructs regex patterns directly from the env parameter without escaping special regex characters. While current callers pass trusted values ("mainnet" or "testnet"), exported helpers should be defensive against untrusted input.

♻️ Suggested fix
+ const escapedEnv = env.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const section = content.match(
-   new RegExp(`\\[published\\.${env}\\][\\s\\S]*?(?=\\[|$)`)
+   new RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`)
  );

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/suiDeploy.ts` around lines 101 - 103, The RegExp built for 'section' uses the unescaped env value, which can allow special regex characters to change the pattern; add an escape function (e.g., escapeRegExp) and use it on env before constructing the pattern so the RegExp becomes based on the literal env string (replace the RegExp construction that references env with one that references escapedEnv, e.g., escapedEnv = escapeRegExp(env) and then new RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`)); add the escape helper near the top of suiDeploy.ts and reuse it wherever env-derived regexes are built.
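As a concrete sketch of that suggestion (escapeRegExp and publishedSectionRegex are illustrative names, not the actual suiDeploy.ts exports):

```typescript
// Illustrative sketch of the suggested fix; names are hypothetical.
function escapeRegExp(s: string): string {
  // Escape every character that has special meaning in a regular expression.
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Build the [published.<env>] section matcher from a literal env string.
function publishedSectionRegex(env: string): RegExp {
  const escapedEnv = escapeRegExp(env);
  return new RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`);
}

// With escaping, env values containing regex metacharacters stay literal.
const content = '[published.mainnet]\nid = "0x1"\n[published.testnet]\nid = "0x2"\n';
const match = content.match(publishedSectionRegex("mainnet"));
```

The helper can then be reused anywhere env-derived patterns are built, as the prompt suggests.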
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.github/workflows/sui.yml`:
- Around line 27-31: Replace the insecure "curl | sh" pattern by pinning the
installer to a specific commit and verifying its integrity before executing:
fetch the installer via its raw URL that includes the commit hash (instead of
the branch), fetch or store the expected checksum/signature for that commit,
verify the downloaded script (e.g., with sha256sum or gpg) and only then execute
it, then proceed with exporting PATH and running "suiup install -y sui@1.63.2";
update the CI run block so the script is downloaded to a file, verified against
the pinned checksum/signature, and executed conditionally upon successful
verification.
In `@cli/src/index.ts`:
- Around line 4108-4113: The progress file is currently shared across
environments (const progressPath =
`${mainTreePackagesPath}/.sui-deploy-progress.json`), causing cross-network
resume corruption; update the progress path to include the environment
identifier derived from ch.network / buildEnv (e.g., append or interpolate
buildEnv or networkType into the filename) so each environment uses its own file
(refer to progressPath, mainTreePackagesPath, networkType, buildEnv) and ensure
any reads/writes use the new environment-scoped progressPath.
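The env-scoped path described above boils down to a one-line derivation; a minimal sketch (mainTreePackagesPath and buildEnv mirror the names in the review text, the helper itself is hypothetical):

```typescript
// Illustrative: derive a per-environment progress file name so testnet and
// mainnet deploys never resume from each other's saved state.
function progressPathFor(mainTreePackagesPath: string, buildEnv: string): string {
  // e.g. ".sui-deploy-progress.testnet.json" vs ".sui-deploy-progress.mainnet.json"
  return `${mainTreePackagesPath}/.sui-deploy-progress.${buildEnv}.json`;
}

const testnetPath = progressPathFor("/repo/sui/packages", "testnet");
const mainnetPath = progressPathFor("/repo/sui/packages", "mainnet");
```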
- Around line 4470-4537: When transceiverStateId exists but nttAdminCapId is
missing the code silently skips registration and can incorrectly report success;
add an explicit guard before the registration block that checks
(transceiverStateId && !nttAdminCapId && !progress.transceiverRegistered) and,
instead of falling through, log a clear warning/error (via console.warn or
console.error) stating the admin cap is missing and that registration cannot
proceed, persist or update progress if appropriate (use
saveSetupProgress(progressPath, progress)), and avoid marking
progress.transceiverRegistered true; keep the existing registration logic
(register_transceiver call, client.signAndExecuteTransaction, and error
handling) unchanged and placed only inside the branch that requires
nttAdminCapId.
- Around line 4161-4163: Replace the shell execSync symlink call with Node's fs
APIs: use path.resolve on mainPath and wtPath and call fs.symlinkSync(target,
linkPath, ...) (or fs.unlinkSync(linkPath) first if the link already exists)
instead of execSync; update the code around the execSync(`ln -fs
$(pwd)/${mainPath} $(pwd)/${wtPath}`) call to build absolute paths and create
the symlink via fs.symlinkSync (choose appropriate type/junction for Windows),
and preserve the original stdio/behavior by throwing or logging errors from the
symlink operations in the same context where mainPath and wtPath are used.
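The fs-based replacement for `ln -fs` can be sketched like this (forceSymlink is an illustrative name; on Windows, directory links may need the "junction" type, which this sketch does not handle):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Illustrative sketch: replace execSync("ln -fs ...") with Node fs APIs.
function forceSymlink(target: string, linkPath: string): void {
  const absTarget = path.resolve(target);
  const absLink = path.resolve(linkPath);
  // Mirror `ln -fs`: remove any existing link/file first, ignoring "not found".
  fs.rmSync(absLink, { force: true });
  fs.symlinkSync(absTarget, absLink);
}

// Demo in a temp directory:
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "symlink-demo-"));
const target = path.join(dir, "Move.toml");
fs.writeFileSync(target, "[package]\n");
const link = path.join(dir, "Move.link.toml");
forceSymlink(target, link);
forceSymlink(target, link); // idempotent, like `ln -fs`
```

Errors from rmSync/symlinkSync propagate to the caller, preserving the fail-loud behavior of the original execSync call.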
In `@cli/src/suiDeploy.ts`:
- Around line 49-52: Replace shell-interpolated execSync calls in suiDeploy.ts
with execFileSync: stop building a single shell string using packagesPath,
packageName, buildEnv, gasBudget or path variables and instead call
child_process.execFileSync (or spawnSync) providing the command as the
executable ("sui" or other binary), pass each parameter as separate args (e.g.,
["move","build","-e", buildEnv]) and set cwd to `${packagesPath}/${packageName}`
and env to process.env; do the same for the other execSync occurrences (around
the other blocks at 66-68 and 136-138) to avoid shell interpolation and ensure
arguments with spaces or metacharacters are handled safely.
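The execFileSync shape the prompt describes looks roughly like the following. It is demonstrated with the node binary rather than `sui` so the snippet is runnable; the real call would pass ["move", "build", "-e", buildEnv] with cwd set to the package directory in the same way:

```typescript
import { execFileSync } from "node:child_process";

// Illustrative: arguments are passed as an array, so values containing spaces
// or shell metacharacters are never interpreted by a shell.
function runTool(cmd: string, args: string[], cwd?: string): string {
  return execFileSync(cmd, args, { cwd, env: process.env, encoding: "utf8" });
}

// Stand-in for: runTool("sui", ["move", "build", "-e", buildEnv], `${packagesPath}/${packageName}`)
const out = runTool(process.execPath, [
  "-e",
  "console.log('built; $(rm -rf /) stays literal')",
]);
```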
In `@sui/Makefile`:
- Around line 5-9: The test/build loops in the Makefile iterate over packages
but don't fail fast; update the loop body that uses move_toml/dir so the package
command (e.g., (cd "$$dir" && sui move test -e $(ENV))) is followed by an
immediate check that exits non‑zero on failure (for example: run the command and
append "|| exit 1" or wrap in an if and exit on failure), and apply the same
change to the analogous build loop (lines handling sui move build) so any
package failure aborts the whole Make invocation.
In `@sui/packages/ntt_common/sources/messages/ntt_manager_message.move`:
- Line 61: The length check currently uses assert!(vector::length(&payload) <
(((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG) and therefore rejects length 65535
even though the payload is later serialized as u16; change the comparison to
allow the maximum encodable value (e.g., assert!(vector::length(&payload) <=
(((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG)) or better yet compare against
(u16::MAX as u64) to permit length 0..=65535 for vector::length(&payload) while
keeping E_PAYLOAD_TOO_LONG intact.
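The off-by-one is the classic `< max` vs `<= max` boundary. Expressed in TypeScript purely as an illustration (the Move fix itself is quoted above):

```typescript
// A length serialized as u16 can encode values 0..=65535.
const U16_MAX = (1 << 16) - 1; // 65535

// Buggy check (mirrors `< ((1u64<<16)-1)`): rejects a 65535-byte payload.
const buggyAccepts = (len: number) => len < U16_MAX;

// Fixed check (mirrors `<= u16::MAX`): accepts exactly up to 65535.
const fixedAccepts = (len: number) => len <= U16_MAX;
```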
In `@sui/packages/ntt/Move.toml`:
- Around line 7-8: Replace the mutable branch ref rev = "sui-package-update" in
the wormhole dependency entry (the line that starts with wormhole = { git =
"https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev
= ... }) with a specific commit SHA in each Move.toml where it appears (packages
ntt, ntt_common, wormhole_transceiver), then regenerate or update the Move.lock
so the lockfile records the exact commit; ensure the same SHA is used across all
three Move.toml files to keep builds reproducible.
---
Duplicate comments:
In `@sui/packages/wormhole_transceiver/Move.toml`:
- Line 8: The dependency entry for wormhole in Move.toml pins rev to the mutable
branch name "sui-package-update"; replace that with an immutable identifier (a
full commit SHA or a release tag) so the dependency is deterministic. Update the
wormhole = { git = "...", subdir = "sui/wormhole", rev = "sui-package-update" }
line to use the exact commit hash or tagged release instead of the branch name,
and verify builds use that immutable rev.
---
Nitpick comments:
In `@cli/src/suiDeploy.ts`:
- Around line 101-103: The RegExp built for 'section' uses the unescaped env
value, which can allow special regex characters to change the pattern; add an
escape function (e.g., escapeRegExp) and use it on env before constructing the
pattern so the RegExp becomes based on the literal env string (replace the
RegExp construction that references env with one that references escapedEnv,
e.g., escapedEnv = escapeRegExp(env) and then new
RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`)); add the escape
helper near the top of suiDeploy.ts and reuse it wherever env-derived regexes
are built.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- .github/workflows/sui.yml
- cli/src/error.ts
- cli/src/index.ts
- cli/src/suiDeploy.ts
- sui/Makefile
- sui/packages/ntt/Move.toml
- sui/packages/ntt/tests/ntt_scenario.move
- sui/packages/ntt/tests/ntt_tests.move
- sui/packages/ntt_common/Move.toml
- sui/packages/ntt_common/sources/contract_auth.move
- sui/packages/ntt_common/sources/datatypes/bitmap.move
- sui/packages/ntt_common/sources/messages/ntt_manager_message.move
- sui/packages/ntt_common/sources/messages/transceiver_message.move
- sui/packages/wormhole_transceiver/Move.toml
  ): vector<u8> {
      let NttManagerMessage {id, sender, payload} = message;
-     assert!(vector::length(&payload) < (((1<<16)-1) as u64), E_PAYLOAD_TOO_LONG);
+     assert!(vector::length(&payload) < (((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG);
Off-by-one in payload length validation.
Line 61 currently rejects payload length 65535, even though the next line serializes length as u16 (which supports 0..=65535). This unnecessarily blocks the max encodable payload.
Suggested fix
- assert!(vector::length(&payload) < (((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG);
+ assert!(vector::length(&payload) <= ((1u64 << 16) - 1), E_PAYLOAD_TOO_LONG);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- assert!(vector::length(&payload) < (((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG);
+ assert!(vector::length(&payload) <= ((1u64 << 16) - 1), E_PAYLOAD_TOO_LONG);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sui/packages/ntt_common/sources/messages/ntt_manager_message.move` at line
61, The length check currently uses assert!(vector::length(&payload) <
(((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG) and therefore rejects length 65535
even though the payload is later serialized as u16; change the comparison to
allow the maximum encodable value (e.g., assert!(vector::length(&payload) <=
(((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG)) or better yet compare against
(u16::MAX as u64) to permit length 0..=65535 for vector::length(&payload) while
keeping E_PAYLOAD_TOO_LONG intact.
  # After wormhole PR #4639 merges, switch to: rev = "main"
  wormhole = { git = "https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev = "sui-package-update" }
🧩 Analysis chain
🏁 Script executed:
# First, let's find and examine the Move.toml file in question
find . -path "*/sui/packages/ntt/Move.toml" -type f

Repository: wormhole-foundation/native-token-transfers
Length of output: 114
🏁 Script executed:
# Check the current state of the dependency
cat -n "sui/packages/ntt/Move.toml" 2>/dev/null | head -20

Repository: wormhole-foundation/native-token-transfers
Length of output: 451
🏁 Script executed:
# Run the verification script provided in the review comment
#!/bin/bash
set -euo pipefail
echo "== Git dependencies with rev in Move.toml files =="
rg -nP --type toml '^\s*[A-Za-z0-9_]+\s*=\s*\{[^}]*\bgit\s*=\s*"[^"]+"[^}]*\brev\s*=\s*"[^"]+"[^}]*\}' 2>/dev/null || echo "No matches found"
echo
echo "== Potential mutable rev values (non-hex commit SHAs) =="
rg -nP --type toml '^\s*[A-Za-z0-9_]+\s*=\s*\{[^}]*\brev\s*=\s*"(?![0-9a-f]{7,40}"\b)[^"]+"[^}]*\}' 2>/dev/null || echo "No matches found"
echo
echo "== Move lock files present in repo (if any) =="
fd -HI 'Move\.lock|move\.lock' 2>/dev/null || echo "No lock files found"

Repository: wormhole-foundation/native-token-transfers
Length of output: 1444
🏁 Script executed:
# Check if there's a Move.lock file in the ntt package specifically
ls -la "sui/packages/ntt/" 2>/dev/null | grep -i lock || echo "No lock files in ntt directory"

Repository: wormhole-foundation/native-token-transfers
Length of output: 116
🏁 Script executed:
# Let's also check the git history to understand if "sui-package-update" is a branch name
cd sui/packages/ntt && cat Move.toml 2>/dev/null || echo "File not found"

Repository: wormhole-foundation/native-token-transfers
Length of output: 388
Pin wormhole dependency to a commit SHA across all Move.toml files.
rev = "sui-package-update" is a mutable branch/tag reference without a corresponding Move.lock file to provide reproducibility guarantees. This affects three packages:
- sui/packages/ntt/Move.toml (line 8)
- sui/packages/ntt_common/Move.toml (line 8)
- sui/packages/wormhole_transceiver/Move.toml (line 8)
Builds can drift over time as the branch ref may point to different commits. Pin to a commit SHA instead and update intentionally when needed.
🔧 Suggested change
[dependencies]
# After wormhole PR `#4639` merges, switch to: rev = "main"
-wormhole = { git = "https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev = "sui-package-update" }
+wormhole = { git = "https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev = "<commit_sha>" }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sui/packages/ntt/Move.toml` around lines 7 - 8, Replace the mutable branch
ref rev = "sui-package-update" in the wormhole dependency entry (the line that
starts with wormhole = { git =
"https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev
= ... }) with a specific commit SHA in each Move.toml where it appears (packages
ntt, ntt_common, wormhole_transceiver), then regenerate or update the Move.lock
so the lockfile records the exact commit; ensure the same SHA is used across all
three Move.toml files to keep builds reproducible.
Force-pushed: 6918805 to 47f5bfe, 47f5bfe to 463e81e.
the "not found" error was triggering false positives when the object was not found
better handling of Published.toml files and more robust handling of partial deployments
Force-pushed: 463e81e to 73fdc91.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cli/src/sui/helpers.ts (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Fix Prettier formatting.

The pipeline indicates Prettier formatting issues. Run bun run prettier to fix formatting.

#!/bin/bash
# Verify Prettier configuration and check this file
fd -e ts -e js 'prettier' --exec cat {} \; 2>/dev/null | head -20
cat package.json | jq '.scripts.prettier // empty'

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/helpers.ts` at line 1, Prettier is flagging formatting in cli/src/sui/helpers.ts (starting at the import of execFileSync); run the formatter and apply its changes (e.g., run `bun run prettier` or your project's configured prettier command) to fix whitespace/formatting issues, then stage the updated file; ensure no manual style edits remain that conflict with Prettier rules for the import line and surrounding code in helpers.ts.
🧹 Nitpick comments (4)
sui/packages/ntt_common/sources/contract_auth.move (1)
85-92: Add direct tests for auth_as under the new package-resolution path.

Recommend adding one positive and one negative auth_as test to lock in behavior after the defining-ID migration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sui/packages/ntt_common/sources/contract_auth.move` around lines 85 - 92, Add two new Move tests exercising auth_as under the defining-ID/package-resolution change: one positive test that constructs an Auth value whose defining ID matches the State type's defining ID, calls auth_as<Auth, State>(&auth, type_name, &state) and asserts it returns the expected address (using object::id_address or checking equality), and one negative test that uses an Auth whose defining ID does not match State and asserts the call aborts with EInvalidAuthType. Place tests alongside other contract_auth tests and reference the functions auth_as and assert_auth_type (and the type_name::with_defining_ids behavior) so the new tests validate the migrated package-resolution path.

sui/Makefile (1)
1-18: Consider adding .PHONY declarations for test and build targets.

The test and build targets don't produce files with those names, so declaring them as .PHONY prevents potential issues if files named test or build are ever created in this directory.

♻️ Suggested addition

  # Default environment for building/testing (testnet or mainnet)
  ENV ?= testnet
+ .PHONY: test build
+
  test:
  	@set -e; \

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sui/Makefile` around lines 1 - 18, The Makefile's "test" and "build" targets are not declared phony which can cause Make to skip them if files named "test" or "build" appear; update the Makefile to add a .PHONY declaration referencing the targets (e.g., .PHONY: test build) near the top so that the targets "test" and "build" are always executed regardless of files with those names.

cli/src/sui/deploy.ts (2)
199-204: Empty catch blocks silently swallow errors.

The empty catch blocks when deleting files make it unclear whether deletion failed due to a missing file (expected) or another error (unexpected). Consider logging at debug level or checking specifically for ENOENT.

♻️ More explicit error handling

  try {
    fs.unlinkSync(mainPath);
- } catch {}
+ } catch (e: any) {
+   if (e.code !== 'ENOENT') console.warn(`Could not delete ${mainPath}: ${e.message}`);
+ }
  try {
    fs.unlinkSync(wtPath);
- } catch {}
+ } catch (e: any) {
+   if (e.code !== 'ENOENT') console.warn(`Could not delete ${wtPath}: ${e.message}`);
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/deploy.ts` around lines 199 - 204, The empty catch blocks around the fs.unlinkSync calls for mainPath and wtPath swallow failures silently; change them to catch the error (e.g., catch (err)) and explicitly handle ENOENT by ignoring it (file missing is expected) but log any other errors (include the file path and error) at debug/warn level so unexpected failures are visible; alternatively you can check fs.existsSync(mainPath)/wtPath before unlinking, but ensure mainPath and wtPath error cases are logged rather than silently ignored.
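The ENOENT-aware variant suggested here can be factored into a small helper; unlinkIfExists is an illustrative name, not existing code:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Illustrative helper: ignore "file not found", surface everything else.
function unlinkIfExists(p: string): void {
  try {
    fs.unlinkSync(p);
  } catch (e: any) {
    if (e.code !== "ENOENT") {
      console.warn(`Could not delete ${p}: ${e.message}`);
      throw e;
    }
  }
}

// Demo: second deletion of the same path is a silent no-op.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "unlink-demo-"));
const file = path.join(dir, "a.txt");
fs.writeFileSync(file, "x");
unlinkIfExists(file); // deletes the file
unlinkIfExists(file); // missing file: ENOENT is swallowed intentionally
```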
464-465: Replace arbitrary delay with SDK's transaction wait mechanism.

The 1000ms delay is a fragile workaround. Use client.waitForTransaction() instead to properly wait for transaction finality. This method polls the RPC until the transaction is available, adapting to actual network conditions rather than relying on a fixed timeout.

Replace:

  await new Promise((resolve) => setTimeout(resolve, 1000));
  const transceiverResult = await client.signAndExecuteTransaction({...});

With:

  const transceiverResult = await client.signAndExecuteTransaction({...});
  await client.waitForTransaction({ digest: transceiverResult.digest });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/deploy.ts` around lines 464 - 465, Replace the arbitrary 1s sleep used after NTT setup with the SDK's transaction polling: remove the setTimeout Promise and instead call client.waitForTransaction to wait for finality after performing the transaction with client.signAndExecuteTransaction; specifically, after obtaining transceiverResult from client.signAndExecuteTransaction, call client.waitForTransaction with the digest (transceiverResult.digest) so the code waits for the transaction to be available instead of using a fixed timeout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/src/sui/deploy.ts`:
- Around line 577-584: The catch block that logs "Failed to register wormhole
transceiver with NTT manager:" currently swallows the error so deployment
reports success despite registration failure; either rethrow the error to make
deployment a hard failure (after logging) or set a flag (e.g.,
transceiverRegistrationFailed) when catching the error and ensure the final
completion message and exit code reflect a partial failure. Update the catch
around the transceiver registration so it either throws the caught error (throw
error) or sets the flag and append a prominent partial-failure summary to the
final output/exit logic so users aren’t misled.
In `@cli/src/sui/helpers.ts`:
- Around line 340-341: Remove the redundant deletion: delete the call to
fs.unlinkSync(worktreePath) and keep the single fs.rmSync(worktreePath, { force:
true }) call (referencing the existing fs.unlinkSync(worktreePath) and
fs.rmSync(worktreePath, { force: true }) lines) so the file removal is handled
once and safely with force.
---
Outside diff comments:
In `@cli/src/sui/helpers.ts`:
- Line 1: Prettier is flagging formatting in cli/src/sui/helpers.ts (starting at
the import of execFileSync); run the formatter and apply its changes (e.g., run
`bun run prettier` or your project's configured prettier command) to fix
whitespace/formatting issues, then stage the updated file; ensure no manual
style edits remain that conflict with Prettier rules for the import line and
surrounding code in helpers.ts.
---
Nitpick comments:
In `@cli/src/sui/deploy.ts`:
- Around line 199-204: The empty catch blocks around the fs.unlinkSync calls for
mainPath and wtPath swallow failures silently; change them to catch the error
(e.g., catch (err)) and explicitly handle ENOENT by ignoring it (file missing is
expected) but log any other errors (include the file path and error) at
debug/warn level so unexpected failures are visible; alternatively you can check
fs.existsSync(mainPath)/wtPath before unlinking, but ensure mainPath and wtPath
error cases are logged rather than silently ignored.
- Around line 464-465: Replace the arbitrary 1s sleep used after NTT setup with
the SDK's transaction polling: remove the setTimeout Promise and instead call
client.waitForTransaction to wait for finality after performing the transaction
with client.signAndExecuteTransaction; specifically, after obtaining
transceiverResult from client.signAndExecuteTransaction, call
client.waitForTransaction with the digest (transceiverResult.digest) so the code
waits for the transaction to be available instead of using a fixed timeout.
In `@sui/Makefile`:
- Around line 1-18: The Makefile's "test" and "build" targets are not declared
phony which can cause Make to skip them if files named "test" or "build" appear;
update the Makefile to add a .PHONY declaration referencing the targets (e.g.,
.PHONY: test build) near the top so that the targets "test" and "build" are
always executed regardless of files with those names.
In `@sui/packages/ntt_common/sources/contract_auth.move`:
- Around line 85-92: Add two new Move tests exercising auth_as under the
defining-ID/package-resolution change: one positive test that constructs an Auth
value whose defining ID matches the State type's defining ID, calls
auth_as<Auth, State>(&auth, type_name, &state) and asserts it returns the
expected address (using object::id_address or checking equality), and one
negative test that uses an Auth whose defining ID does not match State and
asserts the call aborts with EInvalidAuthType. Place tests alongside other
contract_auth tests and reference the functions auth_as and assert_auth_type
(and the type_name::with_defining_ids behavior) so the new tests validate the
migrated package-resolution path.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .github/workflows/sui.yml
- cli/src/error.ts
- cli/src/sui/deploy.ts
- cli/src/sui/helpers.ts
- cli/src/tag.ts
- sui/Makefile
- sui/packages/ntt/Move.toml
- sui/packages/ntt/tests/ntt_scenario.move
- sui/packages/ntt/tests/ntt_tests.move
- sui/packages/ntt_common/Move.toml
- sui/packages/ntt_common/sources/contract_auth.move
- sui/packages/ntt_common/sources/datatypes/bitmap.move
- sui/packages/ntt_common/sources/messages/ntt_manager_message.move
- sui/packages/ntt_common/sources/messages/transceiver_message.move
- sui/packages/wormhole_transceiver/Move.toml
🚧 Files skipped from review as they are similar to previous changes (4)
- sui/packages/ntt/tests/ntt_tests.move
- sui/packages/ntt_common/sources/messages/ntt_manager_message.move
- sui/packages/ntt_common/Move.toml
- .github/workflows/sui.yml
  } catch (error) {
-   console.error("Wormhole Transceiver setup failed:", error);
-   console.error("Error details:", JSON.stringify(error, null, 2));
-   throw error;
+   console.error(
+     "Failed to register wormhole transceiver with NTT manager:",
+     error
+   );
+   console.warn(
+     "Deployment completed but transceiver registration failed. You may need to register it manually."
+   );
Transceiver registration failure continues deployment.
The catch block logs a warning but doesn't propagate the error, meaning deployment will report success even though transceiver registration failed. This could leave the system in a partially configured state where the user believes deployment succeeded.
Consider whether this should be a hard failure or if the current behavior (warn and continue) is intentional. If intentional, the final success message should indicate the partial failure more prominently.
🔧 Option A: Make registration failure a hard error
} catch (error) {
- console.error(
- "Failed to register wormhole transceiver with NTT manager:",
- error
- );
- console.warn(
- "Deployment completed but transceiver registration failed. You may need to register it manually."
- );
+ console.error("Failed to register wormhole transceiver:", error);
+ throw new Error(
+ "Deployment failed: transceiver registration unsuccessful. " +
+ "Progress has been saved - retry deployment to continue."
+ );
  }

🔧 Option B: Make partial success more visible in output
// ── Done ──
- console.log(colors.green("Sui NTT deployment completed successfully!"));
+ const allStepsComplete = progress.transceiverRegistered !== false;
+ if (allStepsComplete) {
+ console.log(colors.green("Sui NTT deployment completed successfully!"));
+ } else {
+ console.log(colors.yellow("Sui NTT deployment completed with warnings."));
+ console.log(colors.yellow("Transceiver registration failed - manual registration required."));
+ }📝 Committable suggestion
- } catch (error) {
-   console.error("Wormhole Transceiver setup failed:", error);
-   console.error("Error details:", JSON.stringify(error, null, 2));
-   throw error;
-   console.error(
-     "Failed to register wormhole transceiver with NTT manager:",
-     error
-   );
-   console.warn(
-     "Deployment completed but transceiver registration failed. You may need to register it manually."
-   );
+ } catch (error) {
+   console.error("Failed to register wormhole transceiver:", error);
+   throw new Error(
+     "Deployment failed: transceiver registration unsuccessful. " +
+     "Progress has been saved - retry deployment to continue."
+   );
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/src/sui/deploy.ts` around lines 577 - 584, The catch block that logs
"Failed to register wormhole transceiver with NTT manager:" currently swallows
the error so deployment reports success despite registration failure; either
rethrow the error to make deployment a hard failure (after logging) or set a
flag (e.g., transceiverRegistrationFailed) when catching the error and ensure
the final completion message and exit code reflect a partial failure. Update the
catch around the transceiver registration so it either throws the caught error
(throw error) or sets the flag and append a prominent partial-failure summary to
the final output/exit logic so users aren’t misled.
  fs.unlinkSync(worktreePath);
  fs.rmSync(worktreePath, { force: true });
Redundant file deletion.
fs.unlinkSync(worktreePath) on line 340 already removes the file, making the subsequent fs.rmSync(worktreePath, { force: true }) on line 341 unnecessary.
🧹 Suggested fix
fs.copyFileSync(worktreePath, mainTreePath);
fs.unlinkSync(worktreePath);
- fs.rmSync(worktreePath, { force: true });
  fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));

📝 Committable suggestion
  fs.unlinkSync(worktreePath);
- fs.rmSync(worktreePath, { force: true });
  fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/src/sui/helpers.ts` around lines 340 - 341, Remove the redundant
deletion: delete the call to fs.unlinkSync(worktreePath) and keep the single
fs.rmSync(worktreePath, { force: true }) call (referencing the existing
fs.unlinkSync(worktreePath) and fs.rmSync(worktreePath, { force: true }) lines)
so the file removal is handled once and safely with force.
Devin also highlights this in another way, might be worth making a small change: https://app.devin.ai/review/wormhole-foundation/native-token-transfers/pull/814
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Fetch all PR reviews and review comments to find Devin's review
gh pr view 814 --repo wormhole-foundation/native-token-transfers --json reviews,comments | jq '.reviews[] | {author: .author.login, body: .body, state: .state}'
echo "---COMMENTS---"
gh pr view 814 --repo wormhole-foundation/native-token-transfers --json comments | jq '.comments[] | {author: .author.login, body: .body}'
```
Length of output: 55815
@djb15 — Both approaches fix the same redundancy. The difference is which call to keep:
- My original suggestion: keep `fs.unlinkSync`, remove `fs.rmSync`.
- Devin's framing (which also appears in a later review round): keep `fs.rmSync(worktreePath, { force: true })`, remove `fs.unlinkSync`.
Devin's version is arguably slightly better, because fs.rmSync with { force: true } is more robust — it silently succeeds even if the file is already gone (e.g., due to a race or prior error), whereas fs.unlinkSync would throw ENOENT in that case. Here's the recommended fix:
```diff
  fs.copyFileSync(worktreePath, mainTreePath);
- fs.unlinkSync(worktreePath);
  fs.rmSync(worktreePath, { force: true });
  fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));
```
🧠 Learnings used
Learnt from: dvgui
Repo: wormhole-foundation/native-token-transfers PR: 820
File: cli/src/solana/deploy.ts:394-435
Timestamp: 2026-02-25T02:00:18.579Z
Learning: When reviewing refactor PRs that move code without functional changes, do not flag pre-existing issues in the moved code unless explicitly requested. Treat such changes as structural; reserve flags for changes that affect behavior, correctness, performance, or security. This guideline applies to TypeScript files across the repository (adjustments to file movement, formatting, or renaming without logic changes should not trigger bug-related feedback unless requested).
♻️ Duplicate comments (1)
cli/src/sui/deploy.ts (1)
577-585: ⚠️ Potential issue | 🟠 Major: Transceiver registration failure continues deployment.
The catch block logs a warning but doesn't propagate the error, meaning deployment will report success even though transceiver registration failed. This could leave the system in a partially configured state where the user believes deployment succeeded.
Consider whether this should be a hard failure or if the current behavior (warn and continue) is intentional. If intentional, the final success message should indicate the partial failure more prominently.
🔧 Option A: Make registration failure a hard error
```diff
  } catch (error) {
-   console.error(
-     "Failed to register wormhole transceiver with NTT manager:",
-     error
-   );
-   console.warn(
-     "Deployment completed but transceiver registration failed. You may need to register it manually."
-   );
+   console.error("Failed to register wormhole transceiver:", error);
+   throw new Error(
+     "Deployment failed: transceiver registration unsuccessful. " +
+       "Progress has been saved - retry deployment to continue."
+   );
  }
```
🔧 Option B: Make partial success more visible in output
```diff
  // ── Done ──
- console.log(colors.green("Sui NTT deployment completed successfully!"));
+ const allStepsComplete = progress.transceiverRegistered !== false;
+ if (allStepsComplete) {
+   console.log(colors.green("Sui NTT deployment completed successfully!"));
+ } else {
+   console.log(colors.yellow("Sui NTT deployment completed with warnings."));
+   console.log(colors.yellow("Transceiver registration failed - manual registration required."));
+ }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/deploy.ts` around lines 577 - 585, The catch in the transceiver registration block (the try/catch that logs "Failed to register wormhole transceiver with NTT manager:") swallows errors and lets deployment report success; either rethrow the error to make registration a hard failure or mark a partial-failure flag and update the final deployment summary to clearly indicate partial success. Locate the transceiver registration try/catch inside the deploy function in deploy.ts and implement one of two fixes: (A) rethrow the caught error (or call process.exit(1)) so registration failure fails the whole deployment, or (B) set a boolean like transceiverRegistrationFailed = true inside the catch and then update the final success message logic (the final summary/console output in the same deploy flow) to show a prominent partial-failure warning when that flag is true. Ensure you adjust tests/logging accordingly.
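For reference, the warn-and-flag approach (Option B) can be sketched as a standalone unit. `registerTransceiver` and the progress shape below are hypothetical stand-ins for the CLI's actual deploy internals, not its real API:

```typescript
// Hypothetical sketch: registration failure is recorded in a flag rather
// than thrown, and the final summary is derived from what actually
// succeeded instead of being assumed.
interface DeployProgress {
  transceiverRegistered: boolean;
}

async function finishDeploy(
  registerTransceiver: () => Promise<void>
): Promise<DeployProgress> {
  const progress: DeployProgress = { transceiverRegistered: false };
  try {
    await registerTransceiver();
    progress.transceiverRegistered = true;
  } catch (error) {
    console.error("Failed to register wormhole transceiver:", error);
  }
  if (progress.transceiverRegistered) {
    console.log("Sui NTT deployment completed successfully!");
  } else {
    // Surface the partial failure prominently; a real CLI would also set
    // a non-zero exit code so CI notices.
    console.log("Sui NTT deployment completed with warnings.");
    console.log("Transceiver registration failed - manual registration required.");
  }
  return progress;
}
```

Either way, the key point is that the final status must reflect what actually succeeded.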
🧹 Nitpick comments (2)
cli/src/sui/helpers.ts (1)
309-317: Consider sanitizing the `env` parameter for defense-in-depth. While `env` is currently controlled (only "mainnet" or "testnet"), adding explicit validation would protect against future misuse and address the static analysis warning about regex from variable input.
🛡️ Optional: Add allowlist validation
```diff
  export function parsePublishedToml(
    filePath: string,
    env: string
  ): { packageId: string; upgradeCap: string } {
+   const validEnvs = ["mainnet", "testnet"];
+   if (!validEnvs.includes(env)) {
+     throw new Error(`Invalid env "${env}". Expected one of: ${validEnvs.join(", ")}`);
+   }
    const content = fs.readFileSync(filePath, "utf8");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/helpers.ts` around lines 309 - 317, The parsePublishedToml function builds a RegExp from the env parameter; validate env first (allowlist: "mainnet" and "testnet") and throw a clear error for invalid values, and/or escape env before inserting into the RegExp (use a regex-escape helper) to prevent injection; update parsePublishedToml to perform this check/escape before calling new RegExp and keep existing behavior of throwing when the section is missing.
cli/src/sui/deploy.ts (1)
464-465: Consider making the network settle delay configurable or using proper confirmation. The hardcoded 1-second delay may be insufficient under high network load or excessive in normal conditions. Consider using transaction confirmation mechanisms or making this configurable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/deploy.ts` around lines 464 - 465, Replace the hardcoded one-second sleep after NTT setup (the await new Promise((resolve) => setTimeout(resolve, 1000)) call) with a configurable or deterministic wait: either hook into the provider/SDK transaction confirmation API to wait for the NTT setup transaction to be finalized, or read a configurable timeout value (e.g., networkSettleDelayMs) passed into the deploy flow and use that instead; update the deploy/NTT setup call site to accept the config variable and remove the magic 1000ms literal so callers can tune behavior or rely on proper confirmation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@cli/src/sui/deploy.ts`:
- Around line 577-585: The catch in the transceiver registration block (the
try/catch that logs "Failed to register wormhole transceiver with NTT manager:")
swallows errors and lets deployment report success; either rethrow the error to
make registration a hard failure or mark a partial-failure flag and update the
final deployment summary to clearly indicate partial success. Locate the
transceiver registration try/catch inside the deploy function in deploy.ts and
implement one of two fixes: (A) rethrow the caught error (or call
process.exit(1)) so registration failure fails the whole deployment, or (B) set
a boolean like transceiverRegistrationFailed = true inside the catch and then
update the final success message logic (the final summary/console output in the
same deploy flow) to show a prominent partial-failure warning when that flag is
true. Ensure you adjust tests/logging accordingly.
---
Nitpick comments:
In `@cli/src/sui/deploy.ts`:
- Around line 464-465: Replace the hardcoded one-second sleep after NTT setup
(the await new Promise((resolve) => setTimeout(resolve, 1000)) call) with a
configurable or deterministic wait: either hook into the provider/SDK
transaction confirmation API to wait for the NTT setup transaction to be
finalized, or read a configurable timeout value (e.g., networkSettleDelayMs)
passed into the deploy flow and use that instead; update the deploy/NTT setup
call site to accept the config variable and remove the magic 1000ms literal so
callers can tune behavior or rely on proper confirmation.
In `@cli/src/sui/helpers.ts`:
- Around line 309-317: The parsePublishedToml function builds a RegExp from the
env parameter; validate env first (allowlist: "mainnet" and "testnet") and throw
a clear error for invalid values, and/or escape env before inserting into the
RegExp (use a regex-escape helper) to prevent injection; update
parsePublishedToml to perform this check/escape before calling new RegExp and
keep existing behavior of throwing when the section is missing.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cli/src/error.ts
- cli/src/sui/deploy.ts
- cli/src/sui/helpers.ts
- cli/src/tag.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- cli/src/tag.ts
* sui: implement governance contract. Implement a governance contract that receives VAAs and executes admin operations on NTT. One governance package per NTT deployment (no dynamic dispatch on Sui).
* sui/sdk: support object owner of ntt
* cli: sui governance support
Actionable comments posted: 2
♻️ Duplicate comments (1)
cli/src/sui/helpers.ts (1)
370-373: ⚠️ Potential issue | 🟡 Minor: Redundant file deletion remains.
`fs.unlinkSync(worktreePath)` on line 371 already removes the file. The subsequent `fs.rmSync(worktreePath, { force: true })` on line 372 is redundant. Keep only `fs.rmSync` with `{ force: true }`, since it handles the case where the file might already be gone.
Suggested fix
```diff
  fs.copyFileSync(worktreePath, mainTreePath);
- fs.unlinkSync(worktreePath);
  fs.rmSync(worktreePath, { force: true });
  fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/helpers.ts` around lines 370 - 373, The duplicate deletion is happening in the block using fs.copyFileSync, fs.unlinkSync, fs.rmSync, and fs.symlinkSync for worktreePath/mainTreePath; remove the redundant fs.unlinkSync(worktreePath) and keep only fs.rmSync(worktreePath, { force: true }) (after fs.copyFileSync) so the removal is idempotent and then create the symlink with fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath)).
🧹 Nitpick comments (6)
sui/ts/src/ntt.ts (1)
264-268: Add test coverage for the `ObjectOwner` branch in the `getOwner()` method.
The implementation correctly handles `ObjectOwner` in addition to `AddressOwner`. However, the test suite in `sui/ts/__tests__/ntt.test.ts` only covers the `AddressOwner` case. The `mockAdminCap` helper in `sui/ts/__tests__/mocks.ts` generates only `AddressOwner` variants. Add a test case that mocks an `ObjectOwner` variant to ensure this code path behaves correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sui/ts/src/ntt.ts` around lines 264 - 268, The test suite lacks coverage for the ObjectOwner branch in getOwner(); add a unit test in sui/ts/__tests__/ntt.test.ts that uses a modified mockAdminCap (from sui/ts/__tests__/mocks.ts) to return adminCap.data.owner as { ObjectOwner: "<some-object-id>" } and assert getOwner() (or the method under test in ntt.ts) returns that ObjectOwner value; update or extend mockAdminCap to support an ObjectOwner variant for this test so the code path handling "ObjectOwner in adminCap.data.owner" is exercised.
cli/src/sui/helpers.ts (1)
332-355: Low ReDoS risk from regex construction.
The static analysis flagged the regex constructed from the `env` variable. However, `env` is derived internally from the network type ("mainnet" or "testnet"), not from external user input. The risk is minimal, but if you want to be defensive, you could sanitize or use a literal pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/sui/helpers.ts` around lines 332 - 355, The regex in parsePublishedToml builds a RegExp from the env variable which could cause a ReDoS/regex-injection risk; sanitize or escape env before using it in new RegExp: escape any regex metacharacters in env (e.g., via env.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')) into a safeEnv and then use that safeEnv in the RegExp construction (keep the rest of the pattern intact), referencing parsePublishedToml, env, and the new RegExp(...) call to locate where to change.
cli/src/suiGovernance.ts (1)
349-350: Hardcoded 2000ms delay for transaction finalization. The fixed delay assumes finalization completes within 2 seconds, which may not hold under network congestion. Consider polling for object availability or documenting this as a known limitation.
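As a rough illustration of the polling alternative, a generic bounded wait with capped backoff might look like the sketch below. `fetchObject` is a placeholder for whatever SDK lookup confirms availability; the exact Sui client call and result shape are assumptions, not the project's real API.

```typescript
// Pure helper: exponential backoff, capped so the poll interval never
// grows unboundedly.
function nextDelay(ms: number, capMs = 4_000): number {
  return Math.min(ms * 2, capMs);
}

// Poll until the lookup returns a non-null value or the deadline passes.
async function waitForObject<T>(
  fetchObject: () => Promise<T | null>,
  timeoutMs = 30_000,
  initialDelayMs = 250
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let delay = initialDelayMs;
  while (Date.now() < deadline) {
    const obj = await fetchObject();
    if (obj !== null) return obj;
    await new Promise((r) => setTimeout(r, delay));
    delay = nextDelay(delay);
  }
  throw new Error(`object not available after ${timeoutMs} ms`);
}

// Usage with a fake lookup that succeeds on the third attempt:
let attempts = 0;
waitForObject(async () => (++attempts >= 3 ? { id: "0xabc" } : null), 5_000, 1)
  .then((o) => console.log(o.id));
```

Compared with a fixed sleep, this returns as soon as the object is visible and fails loudly instead of silently proceeding when it is not.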
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/suiGovernance.ts` around lines 349 - 350, Replace the hardcoded await new Promise((resolve) => setTimeout(resolve, 2000)) in cli/src/suiGovernance.ts with a robust wait: poll the Sui SDK for either the published objects or the transaction status (e.g., use client.getObject/getObjectRef or client.waitForTransactionEffect/transactionStatus) in a loop with a short backoff and a timeout, returning once the expected object(s) or final status are observed; alternatively, if you cannot implement polling now, add a clear comment near that delay stating it is a temporary known limitation and include a TODO to replace it with polling using getObject/waitForTransaction.
cli/src/commands/sui.ts (1)
134-136: Casting to `any` to access platform-specific signer properties is a type-system bypass that's fragile.
The `Signer<N, C>` interface doesn't expose platform-specific properties like `client` or `_signer`, forcing the code to cast `signer.signer` to `any` to access them. While `client` is a public readonly property on `SuiNativeSigner`, the underscore prefix on `_signer` indicates it's a protected/private property inherited from `PlatformNativeSigner` and may change in SDK updates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/src/commands/sui.ts` around lines 134 - 136, The code is bypassing TypeScript safety by casting signer.signer to any to access platform-specific fields; instead add a proper type-narrowing check and avoid accessing private underscored properties. Create a type-guard like isSuiNativeSigner(x): x is SuiNativeSigner that checks for the public readonly client property, then in the caller (where getSigner(...) returns Signer) use that guard to narrow signer.signer to SuiNativeSigner and read the public client property; do not reference PlatformNativeSigner._signer or cast to any, and consider adding a small helper function (e.g., getSuiClientFromSigner) to centralize this logic.
sui/packages/ntt_governance/tests/governance_tests.move (1)
138-157: Assert the updated limit values, not just non-abort. These two tests currently only prove the governance calls return without aborting. A no-op or wrong-field write would still pass CI because neither the outbound limit nor the peer's inbound limit is read back afterward. If the package does not already expose getters, a small test-only accessor would make these paths observable.
Also applies to: 161-191
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sui/packages/ntt_governance/tests/governance_tests.move` around lines 138 - 157, The test test_set_outbound_limit_via_governance only verifies the call doesn't abort; add assertions to verify the state actually changed by reading the updated outbound limit from the ntt_state and the corresponding peer inbound limit (or create small test-only accessors if getters don't exist). Specifically, after governance::execute_set_outbound_limit (and before gs::return_ntt_state), call the package's getter or a test-only accessor (e.g., get_outbound_limit_on_ntt_state / get_peer_inbound_limit) to assert the new_limit equals the stored outbound limit and the peer's inbound limit was updated accordingly; if such accessors are missing, add minimal read-only functions in the module under test to expose these fields for testing and use them in test_set_outbound_limit_via_governance (and similarly in the other test referenced at lines 161-191).
sui/packages/ntt_governance/tests/governance_scenario.move (1)
107-138: Avoid type-only shared-object retrieval after `setup_second_ntt`.
`setup_second_ntt` leaves a second shared `State<gs::GOVERNANCE_SCENARIO>` and `CoinMetadata<gs::GOVERNANCE_SCENARIO>` in the scenario, but `take_ntt_state`/`take_coin_metadata` still call `ts::take_shared` by type only. Any later test that combines these helpers can grab the wrong object nondeterministically. Returning the created IDs, or adding `take_*_by_id` helpers, would keep this fixture stable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sui/packages/ntt_governance/tests/governance_scenario.move` around lines 107 - 138, The test fixture setup_second_ntt creates a second shared State<gs::GOVERNANCE_SCENARIO> and CoinMetadata<gs::GOVERNANCE_SCENARIO> but downstream helpers take_ntt_state and take_coin_metadata still call ts::take_shared by type only, causing nondeterministic grabs; fix by changing setup_second_ntt to return the created shared object IDs (the State id and CoinMetadata id) from ntt::setup::complete_burning (or by extracting their object IDs after creation) and update callers to use new ID-aware helpers (e.g., take_ntt_state_by_id(id) and take_coin_metadata_by_id(id)) that call ts::take_shared with the specific object id instead of type-only ts::take_shared, or alternatively add take_*_by_id helpers and update tests to call them with the returned IDs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@sui/packages/ntt_governance/README.md`:
- Around line 26-34: The fenced code blocks in README.md (e.g., the block
showing "Envelope (handled by governance_message): ..." and the other blocks at
the noted ranges) are missing language tags causing markdownlint MD040; update
each triple-backtick fence to include an explicit language tag such as text (or
bash where appropriate) — e.g., change ``` to ```text (or ```bash) for the
blocks containing the envelope/payload examples so the linter stops flagging
them; ensure all four listed fences (around the Envelope/Payload examples) are
updated consistently.
---
Duplicate comments:
In `@cli/src/sui/helpers.ts`:
- Around line 370-373: The duplicate deletion is happening in the block using
fs.copyFileSync, fs.unlinkSync, fs.rmSync, and fs.symlinkSync for
worktreePath/mainTreePath; remove the redundant fs.unlinkSync(worktreePath) and
keep only fs.rmSync(worktreePath, { force: true }) (after fs.copyFileSync) so
the removal is idempotent and then create the symlink with
fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath)).
---
Nitpick comments:
In `@cli/src/commands/sui.ts`:
- Around line 134-136: The code is bypassing TypeScript safety by casting
signer.signer to any to access platform-specific fields; instead add a proper
type-narrowing check and avoid accessing private underscored properties. Create
a type-guard like isSuiNativeSigner(x): x is SuiNativeSigner that checks for the
public readonly client property, then in the caller (where getSigner(...)
returns Signer) use that guard to narrow signer.signer to SuiNativeSigner and
read the public client property; do not reference PlatformNativeSigner._signer
or cast to any, and consider adding a small helper function (e.g.,
getSuiClientFromSigner) to centralize this logic.
In `@cli/src/sui/helpers.ts`:
- Around line 332-355: The regex in parsePublishedToml builds a RegExp from the
env variable which could cause a ReDoS/regex-injection risk; sanitize or escape
env before using it in new RegExp: escape any regex metacharacters in env (e.g.,
via env.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')) into a safeEnv and then use that
safeEnv in the RegExp construction (keep the rest of the pattern intact),
referencing parsePublishedToml, env, and the new RegExp(...) call to locate
where to change.
In `@cli/src/suiGovernance.ts`:
- Around line 349-350: Replace the hardcoded await new Promise((resolve) =>
setTimeout(resolve, 2000)) in cli/src/suiGovernance.ts with a robust wait: poll
the Sui SDK for either the published objects or the transaction status (e.g.,
use client.getObject/getObjectRef or
client.waitForTransactionEffect/transactionStatus) in a loop with a short
backoff and a timeout, returning once the expected object(s) or final status are
observed; alternatively, if you cannot implement polling now, add a clear
comment near that delay stating it is a temporary known limitation and include a
TODO to replace it with polling using getObject/waitForTransaction.
In `@sui/packages/ntt_governance/tests/governance_scenario.move`:
- Around line 107-138: The test fixture setup_second_ntt creates a second shared
State<gs::GOVERNANCE_SCENARIO> and CoinMetadata<gs::GOVERNANCE_SCENARIO> but
downstream helpers take_ntt_state and take_coin_metadata still call
ts::take_shared by type only, causing nondeterministic grabs; fix by changing
setup_second_ntt to return the created shared object IDs (the State id and
CoinMetadata id) from ntt::setup::complete_burning (or by extracting their
object IDs after creation) and update callers to use new ID-aware helpers (e.g.,
take_ntt_state_by_id(id) and take_coin_metadata_by_id(id)) that call
ts::take_shared with the specific object id instead of type-only
ts::take_shared, or alternatively add take_*_by_id helpers and update tests to
call them with the returned IDs.
In `@sui/packages/ntt_governance/tests/governance_tests.move`:
- Around line 138-157: The test test_set_outbound_limit_via_governance only
verifies the call doesn't abort; add assertions to verify the state actually
changed by reading the updated outbound limit from the ntt_state and the
corresponding peer inbound limit (or create small test-only accessors if getters
don't exist). Specifically, after governance::execute_set_outbound_limit (and
before gs::return_ntt_state), call the package's getter or a test-only accessor
(e.g., get_outbound_limit_on_ntt_state / get_peer_inbound_limit) to assert the
new_limit equals the stored outbound limit and the peer’s inbound limit was
updated accordingly; if such accessors are missing, add minimal read-only
functions in the module under test to expose these fields for testing and use
them in test_set_outbound_limit_via_governance (and similarly in the other test
referenced at lines 161-191).
In `@sui/ts/src/ntt.ts`:
- Around line 264-268: The test suite lacks coverage for the ObjectOwner branch
in getOwner(); add a unit test in sui/ts/__tests__/ntt.test.ts that uses a
modified mockAdminCap (from sui/ts/__tests__/mocks.ts) to return
adminCap.data.owner as { ObjectOwner: "<some-object-id>" } and assert getOwner()
(or the method under test in ntt.ts) returns that ObjectOwner value; update or
extend mockAdminCap to support an ObjectOwner variant for this test so the code
path handling "ObjectOwner in adminCap.data.owner" is exercised.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b39ca29c-cae7-464f-b627-abe44fff17b0
⛔ Files ignored due to path filters (1)
sui/packages/ntt_governance/Move.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
- cli/src/commands/index.ts
- cli/src/commands/sui.ts
- cli/src/index.ts
- cli/src/sui/helpers.ts
- cli/src/suiGovernance.ts
- sui/packages/ntt/sources/upgrades.move
- sui/packages/ntt_governance/Move.toml
- sui/packages/ntt_governance/README.md
- sui/packages/ntt_governance/sources/governance.move
- sui/packages/ntt_governance/tests/governance_scenario.move
- sui/packages/ntt_governance/tests/governance_tests.move
- sui/ts/src/ntt.ts
```
Envelope (handled by governance_message):
MODULE (32 bytes, "GeneralPurposeGovernance" left-padded)
ACTION (1 byte, SUI_CALL = 3)
CHAIN (2 bytes, Sui chain ID)

Payload (parsed by this module):
GOVERNANCE_ID (32 bytes) | NTT_ACTION (1 byte) | ACTION_DATA (variable)
```
Add language tags to these fenced code blocks.
markdownlint is already flagging these four fences with MD040. Label them as text (or bash where appropriate) so the README stays lint-clean.
Also applies to: 111-117, 139-145, 168-175
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 26-26: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sui/packages/ntt_governance/README.md` around lines 26 - 34, The fenced code
blocks in README.md (e.g., the block showing "Envelope (handled by
governance_message): ..." and the other blocks at the noted ranges) are missing
language tags causing markdownlint MD040; update each triple-backtick fence to
include an explicit language tag such as text (or bash where appropriate) —
e.g., change ``` to ```text (or ```bash) for the blocks containing the
envelope/payload examples so the linter stops flagging them; ensure all four
listed fences (around the Envelope/Payload examples) are updated consistently.
```move
#[test, expected_failure]
fun test_authorize_upgrade_truncated_digest() {
    let mut scenario = test_scenario::begin(gs::admin());
    gs::setup(&mut scenario);

    let mut gov = gs::take_gov(&scenario);

    // Truncated digest — only 16 bytes instead of 32
    let short_digest = x"deadbeefdeadbeefdeadbeefdeadbeef";
    let decree = governance::new_decree(10, short_digest);
    let _ticket = governance::execute_authorize_upgrade(&mut gov, decree);

    abort 0 // unreachable
}
```
This expected-failure test can pass on a false positive.
Because the annotation does not pin an abort code (e.g., `#[expected_failure(abort_code = ...)]`), the `abort 0` on Line 736 still satisfies `expected_failure` even if `execute_authorize_upgrade` ever accepts the 16-byte digest. Please assert the real parse abort code here instead of relying on the unreachable abort.
```toml
[package]
name = "WormholeTransceiver"
edition = "2024.beta" # edition = "legacy" to use legacy (pre-2024) Move
license = "Apache 2.0"
```
Deliberately removed the license in all of the Move.toml files?
```toml
edition = "2024.beta"

[dependencies]
# After wormhole PR #4639 merges, switch to: rev = "main"
```
Suggest we wait to merge this one until after the monorepo PR is merged so we can make the change here
depends on wormhole-foundation/wormhole#4639
Summary by CodeRabbit