
sui: use new package management system#814

Open
kcsongor wants to merge 8 commits into main from cs/sui-new-package-system

Conversation

@kcsongor
Contributor

@kcsongor kcsongor commented Feb 5, 2026

depends on wormhole-foundation/wormhole#4639

Summary by CodeRabbit

  • New Features
    • Adds Sui Move CI workflow, environment-aware deploy CLI with resumable multi-stage progress, and new on-chain governance package and CLI commands for governance management.
  • Bug Fixes
    • Improved RPC connection error detection and assorted test updates.
  • Chores
    • Consolidated package manifests, added package versions, and updated build/test targets with environment defaults.
  • Refactor
    • Replaced shell symlink handling with a Node-based implementation.
  • Documentation
    • Added comprehensive governance README.

@coderabbitai

coderabbitai bot commented Feb 5, 2026

📝 Walkthrough

Adds Sui CI workflow; normalizes Move package manifests; updates Move sources/tests; introduces Sui environment-aware build/publish and a staged, resumable CLI deploy flow with version checks and progress persistence; replaces shell symlink usage with a Node fs-based implementation.

Changes

Cohort / File(s) Summary
CI/CD Workflow
.github/workflows/sui.yml
New GitHub Actions workflow to install Sui CLI via suiup, verify version, and run make test on push, PR, and dispatch.
CLI — Sui deploy & helpers
cli/src/sui/deploy.ts, cli/src/sui/helpers.ts, cli/src/suiGovernance.ts, cli/src/commands/sui.ts, cli/src/commands/index.ts, cli/src/index.ts
Adds environment-aware build/publish, staged/resumable deploy with progress file, Published.toml handling, Sui CLI version enforcement, build/publish helpers, governance discovery/transfer utilities, and new createSuiCommand CLI entry.
CLI — misc
cli/src/tag.ts, cli/src/error.ts
Replaces shell symlink creation with Node fs.symlinkSync; adjusts RPC error keyword list (add econnrefused, enotfound, etimedout, remove generic tokens).
Makefile / automation
sui/Makefile
Adds ENV default testnet, updates test to run sui move test -e $(ENV) using subshells, and adds build target invoking sui move build -e $(ENV).
Move package manifests
sui/packages/ntt/Move.toml, sui/packages/ntt_common/Move.toml, sui/packages/wormhole_transceiver/Move.toml, sui/packages/ntt_governance/Move.toml
Normalize package names to snake_case, add version = "1.0.0", consolidate dependencies into a single [dependencies] with wormhole as a git dependency and local paths for local packages.
New Move governance package & tests
sui/packages/ntt_governance/Move.toml, sui/packages/ntt_governance/sources/governance.move, sui/packages/ntt_governance/README.md, sui/packages/ntt_governance/tests/*
Adds new ntt_governance package: governance module, extensive tests and scenario helpers, README, and test suite covering governance decrees, cap transfers, and edge cases.
Move source updates
sui/packages/ntt_common/sources/...
.../contract_auth.move, .../datatypes/bitmap.move, .../messages/ntt_manager_message.move, .../messages/transceiver_message.move
API accessor updates (get<T>() → with_defining_ids<T>()), string-based address/module accessors, numeric literal type fixes (use u64), and one test expectation change (2 → 17).
Move tests tweaks
sui/packages/ntt/tests/ntt_scenario.move, sui/packages/ntt/tests/ntt_tests.move
Added #[allow(deprecated_usage)] to scenario module; replaced sui::test_utils::destroy(...) with std::unit_test::destroy(...).
Documentation / comments
sui/packages/ntt/sources/upgrades.move
Adds doc comment about VERSION upgrade semantics (no behavioral change).
TypeScript runtime
sui/ts/src/ntt.ts
Support for ObjectOwner inside AdminCap owner structure when deriving owner address.

Sequence Diagram

sequenceDiagram
    actor User as User/CLI
    participant CLI as CLI (deploy.ts)
    participant Helpers as Helpers (helpers.ts)
    participant Build as Sui Build
    participant TOML as Published.toml Parser
    participant SDK as Sui SDK
    participant Chain as Sui Chain

    User->>CLI: deploySui(buildEnv)
    CLI->>Helpers: readSetupProgress(.sui-deploy-progress.<env>.json)
    alt fresh deploy
        CLI->>Build: buildSuiPackage(...) -e buildEnv
        Build-->>CLI: build results
        CLI->>Chain: publish packages
        Chain-->>CLI: transaction results / package IDs
        CLI->>TOML: parsePublishedToml()
    else resume deploy
        CLI->>TOML: parsePublishedToml() (existing)
    end
    CLI->>SDK: query DeployerCaps / objects
    SDK-->>CLI: on-chain state
    CLI->>CLI: Stage 1/2/3 steps (complete burn/lock, transceiver setup, register)
    CLI->>Chain: execute setup/register txs
    Chain-->>CLI: tx confirmations / object IDs
    CLI->>Helpers: saveSetupProgress(.sui-deploy-progress.<env>.json)
    CLI-->>User: deployment complete

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • dvgui
  • nvsriram
  • connorstake

Poem

🐰 Hopping through Move.toml and code, I cheer,
Builds that resume, progress saves each year.
CLI sings version checks, packages align,
Symlinks fixed in Node, tests run fine.
Rabbit winks: deploys ready — let’s ship, dear!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage: ⚠️ Warning. Docstring coverage is 38.10%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The title directly references the main change: transitioning to a new package management system for Sui packages. The file summaries confirm widespread package manifest refactoring (Move.toml updates across ntt, ntt_common, wormhole_transceiver), new environment-based build/publish flows, and consolidated dependency structures.




kcsongor force-pushed the cs/sui-new-package-system branch 4 times, most recently from a910948 to b86bdb5, on February 5, 2026 01:22
kcsongor force-pushed the cs/sui-new-package-system branch 2 times, most recently from 259cced to 79e1d06, on February 25, 2026 18:33
kcsongor marked this pull request as ready for review on February 25, 2026 18:33
kcsongor force-pushed the cs/sui-new-package-system branch from 79e1d06 to 0222e10 on February 25, 2026 18:35

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

♻️ Duplicate comments (1)
sui/packages/wormhole_transceiver/Move.toml (1)

8-8: Same dependency pinning concern as above (wormhole rev should be immutable).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/packages/wormhole_transceiver/Move.toml` at line 8, The dependency entry
for wormhole in Move.toml pins rev to the mutable branch name
"sui-package-update"; replace that with an immutable identifier (a full commit
SHA or a release tag) so the dependency is deterministic. Update the wormhole =
{ git = "...", subdir = "sui/wormhole", rev = "sui-package-update" } line to use
the exact commit hash or tagged release instead of the branch name, and verify
builds use that immutable rev.
🧹 Nitpick comments (1)
cli/src/suiDeploy.ts (1)

101-103: Escape env before building the section regex.

The function is exported and constructs regex patterns directly from the env parameter without escaping special regex characters. While current callers pass trusted values ("mainnet" or "testnet"), exported helpers should be defensive against untrusted input.

♻️ Suggested fix
+  const escapedEnv = env.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
   const section = content.match(
-    new RegExp(`\\[published\\.${env}\\][\\s\\S]*?(?=\\[|$)`)
+    new RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`)
   );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/suiDeploy.ts` around lines 101 - 103, The RegExp built for 'section'
uses the unescaped env value, which can allow special regex characters to change
the pattern; add an escape function (e.g., escapeRegExp) and use it on env
before constructing the pattern so the RegExp becomes based on the literal env
string (replace the RegExp construction that references env with one that
references escapedEnv, e.g., escapedEnv = escapeRegExp(env) and then new
RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`)); add the escape
helper near the top of suiDeploy.ts and reuse it wherever env-derived regexes
are built.
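As a sketch of the suggested defensive escaping (the `escapeRegExp` helper is the reviewer's suggestion, not existing code, and `sectionPattern` is an illustrative stand-in for the surrounding parser):

```typescript
// Escape regex metacharacters so an env string is always matched literally.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Build the Published.toml section pattern from a possibly untrusted env value.
function sectionPattern(env: string): RegExp {
  const escapedEnv = escapeRegExp(env);
  return new RegExp(`\\[published\\.${escapedEnv}\\][\\s\\S]*?(?=\\[|$)`);
}
```

With this in place, an env like `a.b` can no longer turn the `.` into a wildcard inside the section regex.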
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/sui.yml:
- Around line 27-31: Replace the insecure "curl | sh" pattern by pinning the
installer to a specific commit and verifying its integrity before executing:
fetch the installer via its raw URL that includes the commit hash (instead of
the branch), fetch or store the expected checksum/signature for that commit,
verify the downloaded script (e.g., with sha256sum or gpg) and only then execute
it, then proceed with exporting PATH and running "suiup install -y sui@1.63.2";
update the CI run block so the script is downloaded to a file, verified against
the pinned checksum/signature, and executed conditionally upon successful
verification.
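A minimal Node sketch of the verify-before-execute idea (the expected digest in real CI would be the pinned checksum for the installer at a specific commit; nothing here is suiup's actual digest):

```typescript
import { createHash } from "crypto";

// Compute the sha256 hex digest of downloaded installer bytes.
function sha256Hex(data: string | Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

// Only execute the installer if it matches the pinned digest.
function verifyChecksum(script: string | Buffer, expectedSha256: string): boolean {
  return sha256Hex(script) === expectedSha256;
}
```

In the workflow, the script would be downloaded to a file, run through this check against the committed digest, and executed only on a match.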

In `@cli/src/index.ts`:
- Around line 4108-4113: The progress file is currently shared across
environments (const progressPath =
`${mainTreePackagesPath}/.sui-deploy-progress.json`), causing cross-network
resume corruption; update the progress path to include the environment
identifier derived from ch.network / buildEnv (e.g., append or interpolate
buildEnv or networkType into the filename) so each environment uses its own file
(refer to progressPath, mainTreePackagesPath, networkType, buildEnv) and ensure
any reads/writes use the new environment-scoped progressPath.
- Around line 4470-4537: When transceiverStateId exists but nttAdminCapId is
missing the code silently skips registration and can incorrectly report success;
add an explicit guard before the registration block that checks
(transceiverStateId && !nttAdminCapId && !progress.transceiverRegistered) and,
instead of falling through, log a clear warning/error (via console.warn or
console.error) stating the admin cap is missing and that registration cannot
proceed, persist or update progress if appropriate (use
saveSetupProgress(progressPath, progress)), and avoid marking
progress.transceiverRegistered true; keep the existing registration logic
(register_transceiver call, client.signAndExecuteTransaction, and error
handling) unchanged and placed only inside the branch that requires
nttAdminCapId.
- Around line 4161-4163: Replace the shell execSync symlink call with Node's fs
APIs: use path.resolve on mainPath and wtPath and call fs.symlinkSync(target,
linkPath, ...) (or fs.unlinkSync(linkPath) first if the link already exists)
instead of execSync; update the code around the execSync(`ln -fs
$(pwd)/${mainPath} $(pwd)/${wtPath}`) call to build absolute paths and create
the symlink via fs.symlinkSync (choose appropriate type/junction for Windows),
and preserve the original stdio/behavior by throwing or logging errors from the
symlink operations in the same context where mainPath and wtPath are used.
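A sketch of the `ln -fs` replacement using Node's fs APIs (the function name and error handling are illustrative; only the resolve-then-symlink shape comes from the review):

```typescript
import * as fs from "fs";
import * as path from "path";

// Equivalent of `ln -fs target link`: remove any existing link, then create
// the symlink from absolute paths, without spawning a shell.
function forceSymlink(target: string, linkPath: string): void {
  const absTarget = path.resolve(target);
  const absLink = path.resolve(linkPath);
  try {
    fs.unlinkSync(absLink); // -f behavior: replace an existing link
  } catch (e: any) {
    if (e.code !== "ENOENT") throw e; // a missing link is expected; rethrow the rest
  }
  // On Windows, passing "junction" as the type is the portable choice for
  // directory links.
  fs.symlinkSync(absTarget, absLink);
}
```

Unlike the shell version, failures surface as thrown errors in the same context where the paths are built.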

In `@cli/src/suiDeploy.ts`:
- Around line 49-52: Replace shell-interpolated execSync calls in suiDeploy.ts
with execFileSync: stop building a single shell string using packagesPath,
packageName, buildEnv, gasBudget or path variables and instead call
child_process.execFileSync (or spawnSync) providing the command as the
executable ("sui" or other binary), pass each parameter as separate args (e.g.,
["move","build","-e", buildEnv]) and set cwd to `${packagesPath}/${packageName}`
and env to process.env; do the same for the other execSync occurrences (around
the other blocks at 66-68 and 136-138) to avoid shell interpolation and ensure
arguments with spaces or metacharacters are handled safely.
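A sketch of the execFileSync form (the `run` helper and the commented paths are illustrative; only the `sui move build -e <env>` argv comes from the review):

```typescript
import { execFileSync } from "child_process";

// argv is passed as an array, so no shell ever interprets the arguments:
// spaces, $(...), quotes, etc. reach the program literally.
function run(cmd: string, args: string[], cwd?: string): string {
  return execFileSync(cmd, args, { cwd, env: process.env, encoding: "utf8" });
}

// The build call in the deploy helper would then look roughly like:
// run("sui", ["move", "build", "-e", buildEnv], `${packagesPath}/${packageName}`);
```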

In `@sui/Makefile`:
- Around line 5-9: The test/build loops in the Makefile iterate over packages
but don't fail fast; update the loop body that uses move_toml/dir so the package
command (e.g., (cd "$$dir" && sui move test -e $(ENV))) is followed by an
immediate check that exits non‑zero on failure (for example: run the command and
append "|| exit 1" or wrap in an if and exit on failure), and apply the same
change to the analogous build loop (lines handling sui move build) so any
package failure aborts the whole Make invocation.

In `@sui/packages/ntt_common/sources/messages/ntt_manager_message.move`:
- Line 61: The length check currently uses assert!(vector::length(&payload) <
(((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG) and therefore rejects length 65535
even though the payload is later serialized as u16; change the comparison to
allow the maximum encodable value (e.g., assert!(vector::length(&payload) <=
(((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG)) or better yet compare against
(u16::MAX as u64) to permit length 0..=65535 for vector::length(&payload) while
keeping E_PAYLOAD_TOO_LONG intact.

In `@sui/packages/ntt/Move.toml`:
- Around line 7-8: Replace the mutable branch ref rev = "sui-package-update" in
the wormhole dependency entry (the line that starts with wormhole = { git =
"https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev
= ... }) with a specific commit SHA in each Move.toml where it appears (packages
ntt, ntt_common, wormhole_transceiver), then regenerate or update the Move.lock
so the lockfile records the exact commit; ensure the same SHA is used across all
three Move.toml files to keep builds reproducible.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 75f77f4 and 0222e10.

📒 Files selected for processing (14)
  • .github/workflows/sui.yml
  • cli/src/error.ts
  • cli/src/index.ts
  • cli/src/suiDeploy.ts
  • sui/Makefile
  • sui/packages/ntt/Move.toml
  • sui/packages/ntt/tests/ntt_scenario.move
  • sui/packages/ntt/tests/ntt_tests.move
  • sui/packages/ntt_common/Move.toml
  • sui/packages/ntt_common/sources/contract_auth.move
  • sui/packages/ntt_common/sources/datatypes/bitmap.move
  • sui/packages/ntt_common/sources/messages/ntt_manager_message.move
  • sui/packages/ntt_common/sources/messages/transceiver_message.move
  • sui/packages/wormhole_transceiver/Move.toml

Comment on the serialization length check in ntt_manager_message.move (before/after):

    ): vector<u8> {
        let NttManagerMessage {id, sender, payload} = message;
-       assert!(vector::length(&payload) < (((1<<16)-1) as u64), E_PAYLOAD_TOO_LONG);
+       assert!(vector::length(&payload) < (((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG);

⚠️ Potential issue | 🟡 Minor

Off-by-one in payload length validation.

Line 61 currently rejects payload length 65535, even though the next line serializes length as u16 (which supports 0..=65535). This unnecessarily blocks the max encodable payload.

Suggested fix
-        assert!(vector::length(&payload) < (((1u64<<16)-1) as u64), E_PAYLOAD_TOO_LONG);
+        assert!(vector::length(&payload) <= ((1u64 << 16) - 1), E_PAYLOAD_TOO_LONG);
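The corrected boundary is easy to sanity-check outside Move; a throwaway TypeScript version of the `<=` comparison (illustrative only):

```typescript
// A u16 length field encodes 0..=65535, so the maximum value must be
// accepted and 65536 rejected.
const MAX_U16 = (1 << 16) - 1; // 65535
const fitsInU16 = (len: number): boolean => len <= MAX_U16;
```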

Comment on lines +7 to +8
# After wormhole PR #4639 merges, switch to: rev = "main"
wormhole = { git = "https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev = "sui-package-update" }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the Move.toml file in question
find . -path "*/sui/packages/ntt/Move.toml" -type f

Repository: wormhole-foundation/native-token-transfers

Length of output: 114


🏁 Script executed:

# Check the current state of the dependency
cat -n "sui/packages/ntt/Move.toml" 2>/dev/null | head -20

Repository: wormhole-foundation/native-token-transfers

Length of output: 451


🏁 Script executed:

# Run the verification script provided in the review comment
#!/bin/bash
set -euo pipefail

echo "== Git dependencies with rev in Move.toml files =="
rg -nP --type toml '^\s*[A-Za-z0-9_]+\s*=\s*\{[^}]*\bgit\s*=\s*"[^"]+"[^}]*\brev\s*=\s*"[^"]+"[^}]*\}' 2>/dev/null || echo "No matches found"

echo
echo "== Potential mutable rev values (non-hex commit SHAs) =="
rg -nP --type toml '^\s*[A-Za-z0-9_]+\s*=\s*\{[^}]*\brev\s*=\s*"(?![0-9a-f]{7,40}"\b)[^"]+"[^}]*\}' 2>/dev/null || echo "No matches found"

echo
echo "== Move lock files present in repo (if any) =="
fd -HI 'Move\.lock|move\.lock' 2>/dev/null || echo "No lock files found"

Repository: wormhole-foundation/native-token-transfers

Length of output: 1444


🏁 Script executed:

# Check if there's a Move.lock file in the ntt package specifically
ls -la "sui/packages/ntt/" 2>/dev/null | grep -i lock || echo "No lock files in ntt directory"

Repository: wormhole-foundation/native-token-transfers

Length of output: 116


🏁 Script executed:

# Let's also check the git history to understand if "sui-package-update" is a branch name
cd sui/packages/ntt && cat Move.toml 2>/dev/null || echo "File not found"

Repository: wormhole-foundation/native-token-transfers

Length of output: 388


Pin wormhole dependency to a commit SHA across all Move.toml files.

rev = "sui-package-update" is a mutable branch/tag reference without a corresponding Move.lock file to provide reproducibility guarantees. This affects three packages:

  • sui/packages/ntt/Move.toml (line 8)
  • sui/packages/ntt_common/Move.toml (line 8)
  • sui/packages/wormhole_transceiver/Move.toml (line 8)

Builds can drift over time as the branch ref may point to different commits. Pin to a commit SHA instead and update intentionally when needed.

🔧 Suggested change
 [dependencies]
 # After wormhole PR #4639 merges, switch to: rev = "main"
-wormhole = { git = "https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev = "sui-package-update" }
+wormhole = { git = "https://github.com/wormholelabs-xyz/wormhole.git", subdir = "sui/wormhole", rev = "<commit_sha>" }

kcsongor force-pushed the cs/sui-new-package-system branch 2 times, most recently from 6918805 to 47f5bfe, on February 25, 2026 19:01
kcsongor force-pushed the cs/sui-new-package-system branch from 47f5bfe to 463e81e on February 26, 2026 17:17

Commit messages:
  • the "not found" error was triggering false positives when the object was not found
  • better handling of Published.toml files and more robust handling of partial deployments

kcsongor force-pushed the cs/sui-new-package-system branch from 463e81e to 73fdc91 on February 26, 2026 17:19

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cli/src/sui/helpers.ts (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Fix Prettier formatting.

The pipeline indicates Prettier formatting issues. Run bun run prettier to fix formatting.

#!/bin/bash
# Verify Prettier configuration and check this file
fd -e ts -e js 'prettier' --exec cat {} \; 2>/dev/null | head -20
cat package.json | jq '.scripts.prettier // empty'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/helpers.ts` at line 1, Prettier is flagging formatting in
cli/src/sui/helpers.ts (starting at the import of execFileSync); run the
formatter and apply its changes (e.g., run `bun run prettier` or your project's
configured prettier command) to fix whitespace/formatting issues, then stage the
updated file; ensure no manual style edits remain that conflict with Prettier
rules for the import line and surrounding code in helpers.ts.
🧹 Nitpick comments (4)
sui/packages/ntt_common/sources/contract_auth.move (1)

85-92: Add direct tests for auth_as under the new package-resolution path.

Recommend adding one positive and one negative auth_as test to lock in behavior after the defining-ID migration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/packages/ntt_common/sources/contract_auth.move` around lines 85 - 92, Add
two new Move tests exercising auth_as under the defining-ID/package-resolution
change: one positive test that constructs an Auth value whose defining ID
matches the State type's defining ID, calls auth_as<Auth, State>(&auth,
type_name, &state) and asserts it returns the expected address (using
object::id_address or checking equality), and one negative test that uses an
Auth whose defining ID does not match State and asserts the call aborts with
EInvalidAuthType. Place tests alongside other contract_auth tests and reference
the functions auth_as and assert_auth_type (and the type_name::with_defining_ids
behavior) so the new tests validate the migrated package-resolution path.
sui/Makefile (1)

1-18: Consider adding .PHONY declarations for test and build targets.

The test and build targets don't produce files with those names, so declaring them as .PHONY prevents potential issues if files named test or build are ever created in this directory.

♻️ Suggested addition
 # Default environment for building/testing (testnet or mainnet)
 ENV ?= testnet

+.PHONY: test build
+
 test:
 	@set -e; \
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/Makefile` around lines 1 - 18, The Makefile's "test" and "build" targets
are not declared phony which can cause Make to skip them if files named "test"
or "build" appear; update the Makefile to add a .PHONY declaration referencing
the targets (e.g., .PHONY: test build) near the top so that the targets "test"
and "build" are always executed regardless of files with those names.
cli/src/sui/deploy.ts (2)

199-204: Empty catch blocks silently swallow errors.

The empty catch blocks when deleting files make it unclear whether deletion failed due to a missing file (expected) or another error (unexpected). Consider logging at debug level or checking specifically for ENOENT.

♻️ More explicit error handling
           try {
             fs.unlinkSync(mainPath);
-          } catch {}
+          } catch (e: any) {
+            if (e.code !== 'ENOENT') console.warn(`Could not delete ${mainPath}: ${e.message}`);
+          }
           try {
             fs.unlinkSync(wtPath);
-          } catch {}
+          } catch (e: any) {
+            if (e.code !== 'ENOENT') console.warn(`Could not delete ${wtPath}: ${e.message}`);
+          }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/deploy.ts` around lines 199 - 204, The empty catch blocks around
the fs.unlinkSync calls for mainPath and wtPath swallow failures silently;
change them to catch the error (e.g., catch (err)) and explicitly handle ENOENT
by ignoring it (file missing is expected) but log any other errors (include the
file path and error) at debug/warn level so unexpected failures are visible;
alternatively you can check fs.existsSync(mainPath)/wtPath before unlinking, but
ensure mainPath and wtPath error cases are logged rather than silently ignored.

464-465: Replace arbitrary delay with SDK's transaction wait mechanism.

The 1000ms delay is a fragile workaround. Use client.waitForTransaction() instead to properly wait for transaction finality. This method polls the RPC until the transaction is available, adapting to actual network conditions rather than relying on a fixed timeout.

Replace:

await new Promise((resolve) => setTimeout(resolve, 1000));
const transceiverResult = await client.signAndExecuteTransaction({...});

With:

const transceiverResult = await client.signAndExecuteTransaction({...});
await client.waitForTransaction({ digest: transceiverResult.digest });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/deploy.ts` around lines 464 - 465, Replace the arbitrary 1s sleep
used after NTT setup with the SDK's transaction polling: remove the setTimeout
Promise and instead call client.waitForTransaction to wait for finality after
performing the transaction with client.signAndExecuteTransaction; specifically,
after obtaining transceiverResult from client.signAndExecuteTransaction, call
client.waitForTransaction with the digest (transceiverResult.digest) so the code
waits for the transaction to be available instead of using a fixed timeout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cli/src/sui/deploy.ts`:
- Around line 577-584: The catch block that logs "Failed to register wormhole
transceiver with NTT manager:" currently swallows the error so deployment
reports success despite registration failure; either rethrow the error to make
deployment a hard failure (after logging) or set a flag (e.g.,
transceiverRegistrationFailed) when catching the error and ensure the final
completion message and exit code reflect a partial failure. Update the catch
around the transceiver registration so it either throws the caught error (throw
error) or sets the flag and append a prominent partial-failure summary to the
final output/exit logic so users aren’t misled.

In `@cli/src/sui/helpers.ts`:
- Around line 340-341: Remove the redundant deletion: delete the call to
fs.unlinkSync(worktreePath) and keep the single fs.rmSync(worktreePath, { force:
true }) call (referencing the existing fs.unlinkSync(worktreePath) and
fs.rmSync(worktreePath, { force: true }) lines) so the file removal is handled
once and safely with force.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0222e10 and 463e81e.

📒 Files selected for processing (15)
  • .github/workflows/sui.yml
  • cli/src/error.ts
  • cli/src/sui/deploy.ts
  • cli/src/sui/helpers.ts
  • cli/src/tag.ts
  • sui/Makefile
  • sui/packages/ntt/Move.toml
  • sui/packages/ntt/tests/ntt_scenario.move
  • sui/packages/ntt/tests/ntt_tests.move
  • sui/packages/ntt_common/Move.toml
  • sui/packages/ntt_common/sources/contract_auth.move
  • sui/packages/ntt_common/sources/datatypes/bitmap.move
  • sui/packages/ntt_common/sources/messages/ntt_manager_message.move
  • sui/packages/ntt_common/sources/messages/transceiver_message.move
  • sui/packages/wormhole_transceiver/Move.toml
🚧 Files skipped from review as they are similar to previous changes (4)
  • sui/packages/ntt/tests/ntt_tests.move
  • sui/packages/ntt_common/sources/messages/ntt_manager_message.move
  • sui/packages/ntt_common/Move.toml
  • .github/workflows/sui.yml

Comment on lines 577 to +584
} catch (error) {
console.error("Wormhole Transceiver setup failed:", error);
console.error("Error details:", JSON.stringify(error, null, 2));
throw error;
console.error(
"Failed to register wormhole transceiver with NTT manager:",
error
);
console.warn(
"Deployment completed but transceiver registration failed. You may need to register it manually."
);

⚠️ Potential issue | 🟠 Major

Transceiver registration failure continues deployment.

The catch block logs a warning but doesn't propagate the error, meaning deployment will report success even though transceiver registration failed. This could leave the system in a partially configured state where the user believes deployment succeeded.

Consider whether this should be a hard failure or if the current behavior (warn and continue) is intentional. If intentional, the final success message should indicate the partial failure more prominently.

🔧 Option A: Make registration failure a hard error
         } catch (error) {
-          console.error(
-            "Failed to register wormhole transceiver with NTT manager:",
-            error
-          );
-          console.warn(
-            "Deployment completed but transceiver registration failed. You may need to register it manually."
-          );
+          console.error("Failed to register wormhole transceiver:", error);
+          throw new Error(
+            "Deployment failed: transceiver registration unsuccessful. " +
+              "Progress has been saved - retry deployment to continue."
+          );
         }
🔧 Option B: Make partial success more visible in output
       // ── Done ──
-      console.log(colors.green("Sui NTT deployment completed successfully!"));
+      const allStepsComplete = progress.transceiverRegistered !== false;
+      if (allStepsComplete) {
+        console.log(colors.green("Sui NTT deployment completed successfully!"));
+      } else {
+        console.log(colors.yellow("Sui NTT deployment completed with warnings."));
+        console.log(colors.yellow("Transceiver registration failed - manual registration required."));
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/deploy.ts` around lines 577 - 584, The catch block that logs
"Failed to register wormhole transceiver with NTT manager:" currently swallows
the error so deployment reports success despite registration failure; either
rethrow the error to make deployment a hard failure (after logging) or set a
flag (e.g., transceiverRegistrationFailed) when catching the error and ensure
the final completion message and exit code reflect a partial failure. Update the
catch around the transceiver registration so it either throws the caught error
(throw error) or sets the flag and append a prominent partial-failure summary to
the final output/exit logic so users aren’t misled.

Comment on lines +340 to +341
fs.unlinkSync(worktreePath);
fs.rmSync(worktreePath, { force: true });
@coderabbitai coderabbitai bot Feb 26, 2026

⚠️ Potential issue | 🟡 Minor

Redundant file deletion.

fs.unlinkSync(worktreePath) on line 340 already removes the file, making the subsequent fs.rmSync(worktreePath, { force: true }) on line 341 unnecessary.

🧹 Suggested fix
       fs.copyFileSync(worktreePath, mainTreePath);
       fs.unlinkSync(worktreePath);
-      fs.rmSync(worktreePath, { force: true });
       fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/helpers.ts` around lines 340 - 341, Remove the redundant
deletion: delete the call to fs.unlinkSync(worktreePath) and keep the single
fs.rmSync(worktreePath, { force: true }) call (referencing the existing
fs.unlinkSync(worktreePath) and fs.rmSync(worktreePath, { force: true }) lines)
so the file removal is handled once and safely with force.

Collaborator

Devin also highlights this in another way, might be worth making a small change: https://app.devin.ai/review/wormhole-foundation/native-token-transfers/pull/814


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Fetch all PR reviews and review comments to find Devin's review
gh pr view 814 --repo wormhole-foundation/native-token-transfers --json reviews,comments | jq '.reviews[] | {author: .author.login, body: .body, state: .state}'
echo "---COMMENTS---"
gh pr view 814 --repo wormhole-foundation/native-token-transfers --json comments | jq '.comments[] | {author: .author.login, body: .body}'

Length of output: 55815


@djb15 — Both approaches fix the same redundancy. The difference is which call to keep:

  • My original suggestion: keep fs.unlinkSync, remove fs.rmSync.
  • Devin's framing (which also appears in a later review round): keep fs.rmSync(worktreePath, { force: true }), remove fs.unlinkSync.

Devin's version is arguably slightly better, because fs.rmSync with { force: true } is more robust — it silently succeeds even if the file is already gone (e.g., due to a race or prior error), whereas fs.unlinkSync would throw ENOENT in that case. Here's the recommended fix:

       fs.copyFileSync(worktreePath, mainTreePath);
-      fs.unlinkSync(worktreePath);
       fs.rmSync(worktreePath, { force: true });
       fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));

🧠 Learnings used
Learnt from: dvgui
Repo: wormhole-foundation/native-token-transfers PR: 820
File: cli/src/solana/deploy.ts:394-435
Timestamp: 2026-02-25T02:00:18.579Z
Learning: When reviewing refactor PRs that move code without functional changes, do not flag pre-existing issues in the moved code unless explicitly requested. Treat such changes as structural; reserve flags for changes that affect behavior, correctness, performance, or security. This guideline applies to TypeScript files across the repository (adjustments to file movement, formatting, or renaming without logic changes should not trigger bug-related feedback unless requested).

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
cli/src/sui/deploy.ts (1)

577-585: ⚠️ Potential issue | 🟠 Major

Transceiver registration failure continues deployment.

The catch block logs a warning but doesn't propagate the error, meaning deployment will report success even though transceiver registration failed. This could leave the system in a partially configured state where the user believes deployment succeeded.

Consider whether this should be a hard failure or if the current behavior (warn and continue) is intentional. If intentional, the final success message should indicate the partial failure more prominently.

🔧 Option A: Make registration failure a hard error
         } catch (error) {
-          console.error(
-            "Failed to register wormhole transceiver with NTT manager:",
-            error
-          );
-          console.warn(
-            "Deployment completed but transceiver registration failed. You may need to register it manually."
-          );
+          console.error("Failed to register wormhole transceiver:", error);
+          throw new Error(
+            "Deployment failed: transceiver registration unsuccessful. " +
+              "Progress has been saved - retry deployment to continue."
+          );
         }
🔧 Option B: Make partial success more visible in output
       // ── Done ──
-      console.log(colors.green("Sui NTT deployment completed successfully!"));
+      const allStepsComplete = progress.transceiverRegistered !== false;
+      if (allStepsComplete) {
+        console.log(colors.green("Sui NTT deployment completed successfully!"));
+      } else {
+        console.log(colors.yellow("Sui NTT deployment completed with warnings."));
+        console.log(colors.yellow("Transceiver registration failed - manual registration required."));
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/deploy.ts` around lines 577 - 585, The catch in the transceiver
registration block (the try/catch that logs "Failed to register wormhole
transceiver with NTT manager:") swallows errors and lets deployment report
success; either rethrow the error to make registration a hard failure or mark a
partial-failure flag and update the final deployment summary to clearly indicate
partial success. Locate the transceiver registration try/catch inside the deploy
function in deploy.ts and implement one of two fixes: (A) rethrow the caught
error (or call process.exit(1)) so registration failure fails the whole
deployment, or (B) set a boolean like transceiverRegistrationFailed = true
inside the catch and then update the final success message logic (the final
summary/console output in the same deploy flow) to show a prominent
partial-failure warning when that flag is true. Ensure you adjust tests/logging
accordingly.
🧹 Nitpick comments (2)
cli/src/sui/helpers.ts (1)

309-317: Consider sanitizing env parameter for defense-in-depth.

While env is currently controlled (only "mainnet" or "testnet"), adding explicit validation would protect against future misuse and address the static analysis warning about regex from variable input.

🛡️ Optional: Add allowlist validation
 export function parsePublishedToml(
   filePath: string,
   env: string
 ): { packageId: string; upgradeCap: string } {
+  const validEnvs = ["mainnet", "testnet"];
+  if (!validEnvs.includes(env)) {
+    throw new Error(`Invalid env "${env}". Expected one of: ${validEnvs.join(", ")}`);
+  }
   const content = fs.readFileSync(filePath, "utf8");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/helpers.ts` around lines 309 - 317, The parsePublishedToml
function builds a RegExp from the env parameter; validate env first (allowlist:
"mainnet" and "testnet") and throw a clear error for invalid values, and/or
escape env before inserting into the RegExp (use a regex-escape helper) to
prevent injection; update parsePublishedToml to perform this check/escape before
calling new RegExp and keep existing behavior of throwing when the section is
missing.
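A minimal sketch of the allowlist-plus-escape guard described above. The `[published.<env>]` section name is an assumption about the Published.toml layout, made for illustration only:

```typescript
// Allowlist of accepted environments (assumed from the CLI's usage).
const VALID_ENVS = ["mainnet", "testnet"] as const;

// Escape regex metacharacters so arbitrary input cannot alter the pattern.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Build the section-header pattern only after validating the env.
function publishedSectionPattern(env: string): RegExp {
  if (!(VALID_ENVS as readonly string[]).includes(env)) {
    throw new Error(
      `Invalid env "${env}". Expected one of: ${VALID_ENVS.join(", ")}`
    );
  }
  // e.g. matches a hypothetical `[published.mainnet]` TOML section header
  return new RegExp(`\\[published\\.${escapeRegExp(env)}\\]`);
}
```

With both the allowlist and the escape in place, the static-analysis warning about regex built from variable input is addressed twice over.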
cli/src/sui/deploy.ts (1)

464-465: Consider making the network settle delay configurable or using proper confirmation.

The hardcoded 1-second delay may be insufficient under high network load or excessive in normal conditions. Consider using transaction confirmation mechanisms or making this configurable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/deploy.ts` around lines 464 - 465, Replace the hardcoded
one-second sleep after NTT setup (the await new Promise((resolve) =>
setTimeout(resolve, 1000)) call) with a configurable or deterministic wait:
either hook into the provider/SDK transaction confirmation API to wait for the
NTT setup transaction to be finalized, or read a configurable timeout value
(e.g., networkSettleDelayMs) passed into the deploy flow and use that instead;
update the deploy/NTT setup call site to accept the config variable and remove
the magic 1000ms literal so callers can tune behavior or rely on proper
confirmation.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 463e81e and 73fdc91.

📒 Files selected for processing (4)
  • cli/src/error.ts
  • cli/src/sui/deploy.ts
  • cli/src/sui/helpers.ts
  • cli/src/tag.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • cli/src/tag.ts

* sui: implement governance contract

Implement a governance contract that receives VAAs and executes admin
operations on NTT. One governance package per NTT deployment (no dynamic
dispatch on Sui).

* sui/sdk: support object owner of ntt

* cli: sui governance support
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
cli/src/sui/helpers.ts (1)

370-373: ⚠️ Potential issue | 🟡 Minor

Redundant file deletion remains.

fs.unlinkSync(worktreePath) on line 371 already removes the file. The subsequent fs.rmSync(worktreePath, { force: true }) on line 372 is redundant. Keep only fs.rmSync with { force: true } since it handles the case where the file might already be gone.

Suggested fix
       fs.copyFileSync(worktreePath, mainTreePath);
-      fs.unlinkSync(worktreePath);
       fs.rmSync(worktreePath, { force: true });
       fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/helpers.ts` around lines 370 - 373, The duplicate deletion is
happening in the block using fs.copyFileSync, fs.unlinkSync, fs.rmSync, and
fs.symlinkSync for worktreePath/mainTreePath; remove the redundant
fs.unlinkSync(worktreePath) and keep only fs.rmSync(worktreePath, { force: true
}) (after fs.copyFileSync) so the removal is idempotent and then create the
symlink with fs.symlinkSync(path.resolve(mainTreePath),
path.resolve(worktreePath)).
🧹 Nitpick comments (6)
sui/ts/src/ntt.ts (1)

264-268: Add test coverage for the ObjectOwner branch in the getOwner() method.

The implementation correctly handles ObjectOwner in addition to AddressOwner. However, the test suite in sui/ts/__tests__/ntt.test.ts only covers the AddressOwner case. The mockAdminCap helper in sui/ts/__tests__/mocks.ts generates only AddressOwner variants. Add a test case that mocks an ObjectOwner variant to ensure this code path behaves correctly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/ts/src/ntt.ts` around lines 264 - 268, The test suite lacks coverage for
the ObjectOwner branch in getOwner(); add a unit test in
sui/ts/__tests__/ntt.test.ts that uses a modified mockAdminCap (from
sui/ts/__tests__/mocks.ts) to return adminCap.data.owner as { ObjectOwner:
"<some-object-id>" } and assert getOwner() (or the method under test in ntt.ts)
returns that ObjectOwner value; update or extend mockAdminCap to support an
ObjectOwner variant for this test so the code path handling "ObjectOwner in
adminCap.data.owner" is exercised.
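A self-contained sketch of the owner-variant handling such a test would exercise. `ownerAddress` here is a hypothetical stand-in for the logic inside `getOwner()`, using the `AddressOwner`/`ObjectOwner` shapes from Sui's owner schema:

```typescript
// Simplified model of Sui's object-owner field (illustrative, not the SDK type).
type SuiOwner =
  | { AddressOwner: string }
  | { ObjectOwner: string }
  | "Immutable";

// Stand-in for the branch logic under test in getOwner().
function ownerAddress(owner: SuiOwner): string {
  if (typeof owner === "object" && "AddressOwner" in owner) {
    return owner.AddressOwner;
  }
  if (typeof owner === "object" && "ObjectOwner" in owner) {
    return owner.ObjectOwner;
  }
  throw new Error("unsupported owner variant");
}
```

A test for the missing branch would mock an `ObjectOwner` value and assert the returned ID, mirroring the existing `AddressOwner` case.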
cli/src/sui/helpers.ts (1)

332-355: Low ReDoS risk from regex construction.

The static analysis flagged the regex constructed from env variable. However, env is derived internally from the network type ("mainnet" or "testnet"), not from external user input. The risk is minimal, but if you want to be defensive, you could sanitize or use a literal pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/sui/helpers.ts` around lines 332 - 355, The regex in
parsePublishedToml builds a RegExp from the env variable which could cause a
ReDoS/regex-injection risk; sanitize or escape env before using it in new
RegExp: escape any regex metacharacters in env (e.g., via
env.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')) into a safeEnv and then use that
safeEnv in the RegExp construction (keep the rest of the pattern intact),
referencing parsePublishedToml, env, and the new RegExp(...) call to locate
where to change.
cli/src/suiGovernance.ts (1)

349-350: Hardcoded 2000ms delay for transaction finalization.

The fixed delay assumes finalization completes within 2 seconds, which may not hold under network congestion. Consider polling for object availability or documenting this as a known limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/suiGovernance.ts` around lines 349 - 350, Replace the hardcoded await
new Promise((resolve) => setTimeout(resolve, 2000)) in cli/src/suiGovernance.ts
with a robust wait: poll the Sui SDK for either the published objects or the
transaction status (e.g., use client.getObject/getObjectRef or
client.waitForTransactionEffect/transactionStatus) in a loop with a short
backoff and a timeout, returning once the expected object(s) or final status are
observed; alternatively, if you cannot implement polling now, add a clear
comment near that delay stating it is a temporary known limitation and include a
TODO to replace it with polling using getObject/waitForTransaction.
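One way to sketch the suggested poll-with-backoff replacement. `fetchObject` is a hypothetical stand-in for an SDK call such as `client.getObject`; the backoff and timeout values are illustrative defaults:

```typescript
// Poll until the fetcher yields a value, backing off exponentially,
// instead of sleeping a fixed 2000 ms and hoping finalization is done.
async function pollUntil<T>(
  fetchObject: () => Promise<T | undefined>,
  { initialMs = 200, maxMs = 2000, timeoutMs = 15_000 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let delay = initialMs;
  while (Date.now() < deadline) {
    const result = await fetchObject();
    if (result !== undefined) return result; // object observed: done
    await new Promise((r) => setTimeout(r, delay));
    delay = Math.min(delay * 2, maxMs); // exponential backoff, capped
  }
  throw new Error("object not available before timeout");
}
```

Under light load this returns after the first successful fetch; under congestion it keeps retrying up to the timeout rather than failing on a fixed delay.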
cli/src/commands/sui.ts (1)

134-136: Casting to any to access platform-specific signer properties is a type-system bypass that's fragile.

The Signer<N, C> interface doesn't expose platform-specific properties like client or _signer, forcing the code to cast signer.signer to any to access them. While client is a public readonly property on SuiNativeSigner, the underscore prefix on _signer indicates it's a protected/private property inherited from PlatformNativeSigner and may change in SDK updates.

This pattern is repeated throughout the codebase. Consider whether the SDK should expose a public method to access the raw keypair and client, or document this as an expected coupling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/commands/sui.ts` around lines 134 - 136, The code is bypassing
TypeScript safety by casting signer.signer to any to access platform-specific
fields; instead add a proper type-narrowing check and avoid accessing private
underscored properties. Create a type-guard like isSuiNativeSigner(x): x is
SuiNativeSigner that checks for the public readonly client property, then in the
caller (where getSigner(...) returns Signer) use that guard to narrow
signer.signer to SuiNativeSigner and read the public client property; do not
reference PlatformNativeSigner._signer or cast to any, and consider adding a
small helper function (e.g., getSuiClientFromSigner) to centralize this logic.
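A sketch of the suggested type-guard approach. `SuiNativeSignerLike` is an illustrative shape carrying only the public `client` field the comment mentions, not the SDK's actual class:

```typescript
// Illustrative stand-ins for the SDK types (assumed shapes, not real classes).
interface SuiClientLike {
  getObject(args: { id: string }): Promise<unknown>;
}
interface SuiNativeSignerLike {
  client: SuiClientLike;
}

// Type guard: narrows via the public `client` property instead of `as any`.
function isSuiNativeSigner(x: unknown): x is SuiNativeSignerLike {
  return (
    typeof x === "object" &&
    x !== null &&
    "client" in x &&
    typeof (x as { client: unknown }).client === "object" &&
    (x as { client: unknown }).client !== null
  );
}

// Centralized helper, as the comment suggests (hypothetical name).
function getSuiClientFromSigner(signer: unknown): SuiClientLike {
  if (!isSuiNativeSigner(signer)) {
    throw new Error("signer is not a Sui native signer");
  }
  return signer.client; // narrowed by the guard: no `any` cast needed
}
```

The guard keeps the coupling in one place, so an SDK change to the signer shape breaks a single function instead of every call site.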
sui/packages/ntt_governance/tests/governance_tests.move (1)

138-157: Assert the updated limit values, not just non-abort.

These two tests currently only prove the governance calls return without aborting. A no-op or wrong-field write would still pass CI because neither the outbound limit nor the peer's inbound limit is read back afterward. If the package does not already expose getters, a small test-only accessor would make these paths observable.

Also applies to: 161-191

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/packages/ntt_governance/tests/governance_tests.move` around lines 138 -
157, The test test_set_outbound_limit_via_governance only verifies the call
doesn't abort; add assertions to verify the state actually changed by reading
the updated outbound limit from the ntt_state and the corresponding peer inbound
limit (or create small test-only accessors if getters don't exist).
Specifically, after governance::execute_set_outbound_limit (and before
gs::return_ntt_state), call the package's getter or a test-only accessor (e.g.,
get_outbound_limit_on_ntt_state / get_peer_inbound_limit) to assert the
new_limit equals the stored outbound limit and the peer’s inbound limit was
updated accordingly; if such accessors are missing, add minimal read-only
functions in the module under test to expose these fields for testing and use
them in test_set_outbound_limit_via_governance (and similarly in the other test
referenced at lines 161-191).
sui/packages/ntt_governance/tests/governance_scenario.move (1)

107-138: Avoid type-only shared-object retrieval after setup_second_ntt.

setup_second_ntt leaves a second shared State<gs::GOVERNANCE_SCENARIO> and CoinMetadata<gs::GOVERNANCE_SCENARIO> in the scenario, but take_ntt_state / take_coin_metadata still call ts::take_shared by type only. Any later test that combines these helpers can grab the wrong object nondeterministically. Returning the created IDs, or adding take_*_by_id helpers, would keep this fixture stable.

Also applies to: 150-163

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/packages/ntt_governance/tests/governance_scenario.move` around lines 107
- 138, The test fixture setup_second_ntt creates a second shared
State<gs::GOVERNANCE_SCENARIO> and CoinMetadata<gs::GOVERNANCE_SCENARIO> but
downstream helpers take_ntt_state and take_coin_metadata still call
ts::take_shared by type only, causing nondeterministic grabs; fix by changing
setup_second_ntt to return the created shared object IDs (the State id and
CoinMetadata id) from ntt::setup::complete_burning (or by extracting their
object IDs after creation) and update callers to use new ID-aware helpers (e.g.,
take_ntt_state_by_id(id) and take_coin_metadata_by_id(id)) that call
ts::take_shared with the specific object id instead of type-only
ts::take_shared, or alternatively add take_*_by_id helpers and update tests to
call them with the returned IDs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@sui/packages/ntt_governance/README.md`:
- Around line 26-34: The fenced code blocks in README.md (e.g., the block
showing "Envelope (handled by governance_message): ..." and the other blocks at
the noted ranges) are missing language tags causing markdownlint MD040; update
each triple-backtick fence to include an explicit language tag such as text (or
bash where appropriate) — e.g., change ``` to ```text (or ```bash) for the
blocks containing the envelope/payload examples so the linter stops flagging
them; ensure all four listed fences (around the Envelope/Payload examples) are
updated consistently.

---

Duplicate comments:
In `@cli/src/sui/helpers.ts`:
- Around line 370-373: The duplicate deletion is happening in the block using
fs.copyFileSync, fs.unlinkSync, fs.rmSync, and fs.symlinkSync for
worktreePath/mainTreePath; remove the redundant fs.unlinkSync(worktreePath) and
keep only fs.rmSync(worktreePath, { force: true }) (after fs.copyFileSync) so
the removal is idempotent and then create the symlink with
fs.symlinkSync(path.resolve(mainTreePath), path.resolve(worktreePath)).

---

Nitpick comments:
In `@cli/src/commands/sui.ts`:
- Around line 134-136: The code is bypassing TypeScript safety by casting
signer.signer to any to access platform-specific fields; instead add a proper
type-narrowing check and avoid accessing private underscored properties. Create
a type-guard like isSuiNativeSigner(x): x is SuiNativeSigner that checks for the
public readonly client property, then in the caller (where getSigner(...)
returns Signer) use that guard to narrow signer.signer to SuiNativeSigner and
read the public client property; do not reference PlatformNativeSigner._signer
or cast to any, and consider adding a small helper function (e.g.,
getSuiClientFromSigner) to centralize this logic.

In `@cli/src/sui/helpers.ts`:
- Around line 332-355: The regex in parsePublishedToml builds a RegExp from the
env variable which could cause a ReDoS/regex-injection risk; sanitize or escape
env before using it in new RegExp: escape any regex metacharacters in env (e.g.,
via env.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')) into a safeEnv and then use that
safeEnv in the RegExp construction (keep the rest of the pattern intact),
referencing parsePublishedToml, env, and the new RegExp(...) call to locate
where to change.

In `@cli/src/suiGovernance.ts`:
- Around line 349-350: Replace the hardcoded await new Promise((resolve) =>
setTimeout(resolve, 2000)) in cli/src/suiGovernance.ts with a robust wait: poll
the Sui SDK for either the published objects or the transaction status (e.g.,
use client.getObject/getObjectRef or
client.waitForTransactionEffect/transactionStatus) in a loop with a short
backoff and a timeout, returning once the expected object(s) or final status are
observed; alternatively, if you cannot implement polling now, add a clear
comment near that delay stating it is a temporary known limitation and include a
TODO to replace it with polling using getObject/waitForTransaction.

In `@sui/packages/ntt_governance/tests/governance_scenario.move`:
- Around line 107-138: The test fixture setup_second_ntt creates a second shared
State<gs::GOVERNANCE_SCENARIO> and CoinMetadata<gs::GOVERNANCE_SCENARIO> but
downstream helpers take_ntt_state and take_coin_metadata still call
ts::take_shared by type only, causing nondeterministic grabs; fix by changing
setup_second_ntt to return the created shared object IDs (the State id and
CoinMetadata id) from ntt::setup::complete_burning (or by extracting their
object IDs after creation) and update callers to use new ID-aware helpers (e.g.,
take_ntt_state_by_id(id) and take_coin_metadata_by_id(id)) that call
ts::take_shared with the specific object id instead of type-only
ts::take_shared, or alternatively add take_*_by_id helpers and update tests to
call them with the returned IDs.

In `@sui/packages/ntt_governance/tests/governance_tests.move`:
- Around line 138-157: The test test_set_outbound_limit_via_governance only
verifies the call doesn't abort; add assertions to verify the state actually
changed by reading the updated outbound limit from the ntt_state and the
corresponding peer inbound limit (or create small test-only accessors if getters
don't exist). Specifically, after governance::execute_set_outbound_limit (and
before gs::return_ntt_state), call the package's getter or a test-only accessor
(e.g., get_outbound_limit_on_ntt_state / get_peer_inbound_limit) to assert the
new_limit equals the stored outbound limit and the peer’s inbound limit was
updated accordingly; if such accessors are missing, add minimal read-only
functions in the module under test to expose these fields for testing and use
them in test_set_outbound_limit_via_governance (and similarly in the other test
referenced at lines 161-191).

In `@sui/ts/src/ntt.ts`:
- Around lines 264-268: The test suite lacks coverage for the ObjectOwner branch
in getOwner(); add a unit test in sui/ts/__tests__/ntt.test.ts that uses a
modified mockAdminCap (from sui/ts/__tests__/mocks.ts) returning
adminCap.data.owner as { ObjectOwner: "<some-object-id>" }, and assert that
getOwner() (or the method under test in ntt.ts) returns that ObjectOwner value.
Update or extend mockAdminCap to support an ObjectOwner variant so the code path
handling "ObjectOwner" in adminCap.data.owner is exercised.
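As a standalone illustration of the branch in question, the owner-extraction logic can be re-created in isolation. This is a sketch, not the actual ntt.ts implementation; the type shape mirrors the owner variants in Sui's object JSON.

```typescript
// Standalone sketch of the owner-extraction branches (not the real getOwner()).
type SuiObjectOwner =
  | { AddressOwner: string }
  | { ObjectOwner: string }
  | { Shared: { initial_shared_version: string } };

function extractOwner(owner: SuiObjectOwner): string {
  if ("AddressOwner" in owner) return owner.AddressOwner;
  // The branch the suite currently never exercises:
  if ("ObjectOwner" in owner) return owner.ObjectOwner;
  throw new Error("unexpected owner variant");
}

// A mockAdminCap-style fixture exercising the ObjectOwner path:
const objectOwnedCap = {
  data: { owner: { ObjectOwner: "0x5eaf00d" } as SuiObjectOwner },
};

console.log(extractOwner(objectOwnedCap.data.owner)); // "0x5eaf00d"
```

In the real test, the same assertion would run against getOwner() with the extended mockAdminCap rather than this standalone function.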

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b39ca29c-cae7-464f-b627-abe44fff17b0

📥 Commits

Reviewing files that changed from the base of the PR and between 73fdc91 and 04dba0c.

⛔ Files ignored due to path filters (1)
  • sui/packages/ntt_governance/Move.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
  • cli/src/commands/index.ts
  • cli/src/commands/sui.ts
  • cli/src/index.ts
  • cli/src/sui/helpers.ts
  • cli/src/suiGovernance.ts
  • sui/packages/ntt/sources/upgrades.move
  • sui/packages/ntt_governance/Move.toml
  • sui/packages/ntt_governance/README.md
  • sui/packages/ntt_governance/sources/governance.move
  • sui/packages/ntt_governance/tests/governance_scenario.move
  • sui/packages/ntt_governance/tests/governance_tests.move
  • sui/ts/src/ntt.ts

Comment on lines +26 to +34
```
Envelope (handled by governance_message):
MODULE (32 bytes, "GeneralPurposeGovernance" left-padded)
ACTION (1 byte, SUI_CALL = 3)
CHAIN (2 bytes, Sui chain ID)

Payload (parsed by this module):
GOVERNANCE_ID (32 bytes) | NTT_ACTION (1 byte) | ACTION_DATA (variable)
```

⚠️ Potential issue | 🟡 Minor

Add language tags to these fenced code blocks.

markdownlint is already flagging these four fences with MD040. Label them as `text` (or `bash` where appropriate) so the README stays lint-clean.

Also applies to: 111-117, 139-145, 168-175

🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 26-26: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sui/packages/ntt_governance/README.md` around lines 26-34: The fenced code
blocks in README.md (e.g., the block showing "Envelope (handled by
governance_message): ..." and the other blocks at the noted ranges) are missing
language tags, triggering markdownlint MD040. Update each triple-backtick fence
to include an explicit language tag such as text (or bash where appropriate),
e.g., change ``` to ```text (or ```bash) for the blocks containing the
envelope/payload examples, so the linter stops flagging them; ensure all four
listed fences (around the Envelope/Payload examples) are updated consistently.

Comment on lines +724 to +737
#[test, expected_failure]
fun test_authorize_upgrade_truncated_digest() {
let mut scenario = test_scenario::begin(gs::admin());
gs::setup(&mut scenario);

let mut gov = gs::take_gov(&scenario);

// Truncated digest — only 16 bytes instead of 32
let short_digest = x"deadbeefdeadbeefdeadbeefdeadbeef";
let decree = governance::new_decree(10, short_digest);
let _ticket = governance::execute_authorize_upgrade(&mut gov, decree);

abort 0 // unreachable
}

⚠️ Potential issue | 🟠 Major

This expected-failure test can pass on a false positive.

Because the annotation does not pin an abort code, the `abort 0` on line 736 still satisfies `expected_failure` even if `execute_authorize_upgrade` ever accepts the 16-byte digest. Please assert the real parse abort here instead of relying on the unreachable `abort 0`.
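A pinned variant could look like this; the abort-code constant is a placeholder for whichever code the digest parser actually raises on a non-32-byte input:

```move
// E_INVALID_DIGEST_LENGTH is hypothetical: substitute the real abort code
// raised when parsing a digest that is not exactly 32 bytes.
#[test, expected_failure(abort_code = governance::E_INVALID_DIGEST_LENGTH)]
fun test_authorize_upgrade_truncated_digest() {
    // ...same setup as above...
    let short_digest = x"deadbeefdeadbeefdeadbeefdeadbeef"; // 16 bytes
    let decree = governance::new_decree(10, short_digest);
    let _ticket = governance::execute_authorize_upgrade(&mut gov, decree);
    // no trailing `abort 0` needed once the expected abort code is pinned
}
```

With the code pinned, the test can no longer pass via an unrelated abort.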

[package]
name = "WormholeTransceiver"
edition = "2024.beta" # edition = "legacy" to use legacy (pre-2024) Move
license = "Apache 2.0"

Deliberately removed the license in all of the Move.toml files?

edition = "2024.beta"

[dependencies]
# After wormhole PR #4639 merges, switch to: rev = "main"

Suggest we wait to merge this one until after the monorepo PR is merged so we can make the change here
