
release: v3.2.0 — NFT gating hardening, Unicode validation, CI fixes #28

Merged
Oba-One merged 19 commits into dev from release/3.2.0 on Feb 12, 2026

Conversation

Oba-One (Member) commented Feb 11, 2026

Summary

Production-hardening release for Cookie Jar v3.2.0, focused on NFT gating audit fixes, cross-stack validation alignment, CI pipeline reliability, and accessibility improvements.

NFT Gating Audit Fixes

  • ERC-721/ERC-1155 balance proof validation with race condition prevention
  • safeBigInt() helper prevents BigInt() throws on invalid manual token ID input
  • Real-time ownership verification with as const tuple types for wagmi
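The safeBigInt() pattern described above can be sketched as follows; this is an illustrative reconstruction, not the PR's exact implementation:

```typescript
// Hypothetical sketch of a safeBigInt() helper: returns undefined instead of
// throwing when a user-typed token ID is not a valid integer string.
function safeBigInt(value: string): bigint | undefined {
  const trimmed = value.trim();
  if (trimmed === "") return undefined; // BigInt("") silently yields 0n, treat as invalid
  try {
    return BigInt(trimmed);
  } catch {
    return undefined; // e.g. "abc" or "1.5" make BigInt() throw a SyntaxError
  }
}
```

A wagmi useReadContract call can then pass `args` only when the parsed ID is defined, so invalid manual input disables the contract read instead of crashing the render.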

Unicode-Aware Purpose Validation

  • Solidity: countUnicodeCodePoints() counts UTF-8 leading bytes instead of raw bytes().length
  • Client: unicodeCodePointLength() via [...str].length (string iterator handles surrogate pairs)
  • CookieJar._validateWithdrawalConstraints() delegates to CookieJarValidation.validatePurpose()
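Both sides of the stack now count code points the same way; a minimal TypeScript sketch of the two counting strategies (names match the helpers listed above, bodies are illustrative):

```typescript
// Client-side: the string iterator yields whole code points, so a surrogate
// pair such as "🍪" (U+1F36A) counts as 1 rather than 2 UTF-16 units.
function unicodeCodePointLength(str: string): number {
  return [...str].length;
}

// Same idea as the Solidity helper, ported to TS for illustration: count UTF-8
// leading bytes. Continuation bytes match 0b10xxxxxx; every other byte starts
// a new code point.
function countUnicodeCodePoints(str: string): number {
  let count = 0;
  for (const byte of new TextEncoder().encode(str)) {
    if ((byte & 0xc0) !== 0x80) count++;
  }
  return count;
}
```

Both return 1 for "🍪" even though `"🍪".length` is 2, which is why the old byte-based and UTF-16-based checks disagreed across the stack.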

CI & Build Pipeline

  • OZ v5 ERC777 compatibility shim (scripts/oz-compat.sh) for Superfluid imports
  • All 4 GitHub Actions workflows passing (contracts, integration, unit, lint)
  • Vercel deployment fixed with vercel.json for bun builds
  • FOUNDRY_PROFILE=dev consistency across CI

Config & Script Fixes

  • bun test → bun run test in 3 package.json scripts + all README/docs references
  • Removed bunfig.toml vitest preload (broke bun's native runner) and hardcoded NODE_ENV=production
  • Standardized Foundry remappings with trailing slashes

Accessibility & UX

  • Select component: --primary-foreground token for WCAG 4.5:1 contrast
  • NFT form: htmlFor/id linking for 3 label-input pairs
  • Deposit placeholder: Math.max(0, tokenDecimals) prevents RangeError
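The placeholder fix in the last bullet amounts to clamping before String.repeat(); a hedged sketch (the function name and exact placeholder format are assumptions):

```typescript
// "0".repeat(n) throws a RangeError for negative n, which can happen when
// tokenDecimals arrives as a negative sentinel for "unknown decimals".
function depositPlaceholder(tokenDecimals: number, tokenSymbol?: string): string {
  const safeDecimals = Math.max(0, tokenDecimals); // never negative
  return `1.${"0".repeat(safeDecimals)} ${tokenSymbol ?? "Tokens"}`;
}
```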

Test Plan

  • 150 Foundry contract tests passing (including purpose validation)
  • Client TypeScript type checking clean
  • CI: all 4 workflows green
  • Vercel preview deployment successful
  • Local forge build + forge test -vv verified

🤖 Generated with Claude Code

Oba-One and others added 4 commits November 15, 2025 16:44
… scripts

This commit transitions the project from using pnpm to bun for dependency management and script execution. Key changes include updating installation commands, modifying script definitions in package.json, and adjusting documentation to reflect the new setup. The migration aims to streamline the development process and improve performance. Additionally, the removal of pnpm-related files enhances project clarity and maintainability.
… process

This commit refines the `package.json` scripts for better organization and clarity, specifically adjusting the client build and type generation commands to ensure they operate within the correct directory context. Additionally, it updates the deployment configuration in `deployments.auto.ts` to reflect the latest factory addresses and timestamps, enhancing the accuracy of deployment tracking. These changes aim to streamline the development workflow and improve maintainability.
- Replace generic NotAuthorized with distinct errors in _checkAccess():
  InsufficientNFTBalance, NFTNotOwned, NFTValidationFailed
- Use type(uint256).max sentinel in factory to allow explicit 0% fees
- Replace require string with custom error TransferFailed in withdrawals
- Guard FeeCollected event emission on fee > 0 (reduces noise for 0% jars)
- DRY period reset logic via _isNewPeriod/_effectiveWithdrawnInPeriod helpers
- Fix emergency withdraw to update pendingTokenBalances accounting
- Increase strict purpose minimum from 10 to 27 characters
- Add ERC-1155 balanceOf to MockHats for Hats Protocol compatibility
- Add proper balanceOf tracking to MockPOAP for ERC-721 validation
- Split test access config helpers per protocol (ERC721/ERC1155/Hats)
- Fix jarHats to use mockHats with correct testHatId
- Fix cooldown test with proper vm.expectRevert assertions
- Add hat revocation, wrong hat ID, and multi-user cooldown tests
- Add inline remappings and skip rules to foundry.toml

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Update NFT protocol config components (Hats, Hypercert, POAP, Unlock)
- Polish NFTGatedWithdrawalSection with improved UX
- Update deployment config and supported networks
- Fix jar creation hook and token utils
- Add doc-links and blockchain constants
- Update tests for currency labels, token info, and jar transactions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

vercel bot commented Feb 11, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

  • cookie-jar: Ready (Preview, Comment), updated Feb 12, 2026 6:09am (UTC)



coderabbitai bot commented Feb 11, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Replaces pnpm with Bun across CI, scripts, docs, and manifests; centralizes ETH_ADDRESS; raises strict-purpose validation from 10 → 27 Unicode code points across contracts, UI, and tests; refactors jar creation to react-hook-form/Zod; adds mocks/harnesses and Foundry remappings for contract tests.

Changes

  • Package manager & CI migration
    Files: package.json, bunfig.toml, pnpm-workspace.yaml, .github/actions/.../action.yml, .github/workflows/..., scripts/*, playwright.config.ts, VERCEL_BUILD_FIX.md, vercel.json, AGENTS.md, README.md, e2e/README.md, .cursor/rules/*
    Summary: Replaced pnpm references/commands with bun equivalents across scripts, CI workflows, actions, docs, and manifests; added bunfig.toml; adjusted package.json engines/workspaces and Vercel build settings.
  • Centralized ETH constant
    Files: client/lib/blockchain/constants.ts, client/lib/blockchain/token-utils.ts, client/config/supported-networks.ts, client/app/profile/page.tsx, client/__tests__/*, various components/hooks
    Summary: Introduced/exported ETH_ADDRESS constant and updated imports/usages; replaced literal native-address checks with case-insensitive, null-safe comparisons.
  • Strict-purpose validation (10 → 27)
    Files: contracts/src/libraries/CookieJarValidation.sol, contracts/src/CookieJar.sol, client/components/jar/AllowlistWithdrawalSection.tsx, many tests (contracts/test/*, client/__tests__/*)
    Summary: Increased strict-purpose minimum to 27 Unicode code points; added Unicode code-point counting helpers in contracts and UI; updated validation logic, messages, and tests to match.
  • Major form refactor (react-hook-form + Zod)
    Files: client/hooks/jar/useJarCreation.ts, client/hooks/jar/schemas/jarCreationSchema.ts, client/hooks/jar/createV2CreateArgs.ts, client/app/create/page.tsx, client/components/create/*
    Summary: Rewrote jar creation to use react-hook-form + Zod schema; consolidated form state, validators, and builder for V2 args; updated consuming components to use FormProvider and new prop signatures.
  • Contract logic, guards & validations
    Files: contracts/src/CookieJar.sol, contracts/src/CookieJarFactory.sol, contracts/src/libraries/Streaming.sol
    Summary: Refactored withdrawal period tracking helpers, standardized NFT/POAP validation helpers, added zero-address guard in Streaming.createStream, changed fee sentinel handling for factory default fees, and tightened ERC1155/ERC721 error semantics.
  • Tests: mocks, harnesses & protocol updates
    Files: contracts/test/SuperfluidIntegration.t.sol, contracts/test/UniswapV4Integration.t.sol, contracts/test/CookieJarProtocols.t.sol, contracts/test/CookieJar.t.sol, many client tests
    Summary: Added MockCFA/MockSuperfluidHost, ReentrancyAttacker, SwapAdapterHarness; expanded protocol test helpers and per-owner POAP/ERC1155 behaviors; updated expected errors and new Unicode purpose tests.
  • NFT gating & selector improvements
    Files: client/components/jar/NFTGatedWithdrawalSection.tsx, client/components/nft/NFTSelector.tsx, client/components/nft/NFTGateInput.tsx
    Summary: Added manual NFT entry mode, token-standard selector, activeNFT abstraction and safeBigInt parsing; introduced reducer-driven state and rate-limit UI in NFT input; keyboard/accessibility improvements in selector.
  • Protocol docs centralization & accessibility
    Files: client/components/nft/doc-links.ts, client/components/nft/ProtocolSelector.tsx, client/components/nft/protocols/*.tsx, client/components/nft/ProtocolConfigBase.tsx
    Summary: Added ACCESS_CONTROL_DOC_LINKS and wired learn-more URLs to it; added ARIA attributes and errorId prop support on protocol config base; standardized configuration panels.
  • UI component API & styling changes
    Files: client/components/create/StepContent.tsx, client/components/jars/JarControls.tsx, client/components/jar/JarDepositSection.tsx, client/components/jars/ChainDisplay.tsx, client/components/ui/select.tsx
    Summary: Migrated many components to use form context (useFormContext), updated props/interfaces (e.g., JarControls, CreateJarForm), adjusted layout/styling, and improved placeholders/icons; deposit placeholders now respect native currency.
  • Client build & Vercel adjustments
    Files: client/package.json, VERCEL_BUILD_FIX.md, vercel.json, scripts/*
    Summary: Added build scripts and dashboard-driven Vercel config for Bun; removed vercel.json in favor of dashboard flow changes and added client script adjustments.
  • Foundry remappings & deployments
    Files: foundry.toml, contracts/remappings.txt, client/config/deployments.auto.ts
    Summary: Added remappings for local libs, updated remapping formatting, and updated 31337 factory address/timestamp in auto deployments.
  • New/changed exports and types
    Files: client/lib/blockchain/cookie-jar-v2-abi.ts, client/lib/blockchain/constants.ts, client/hooks/jar/*, client/hooks/jar/schemas/*, client/hooks/jar/createV2CreateArgs.ts
    Summary: Moved/canonicalized ABI export, introduced typed blockchain constants, added Zod schema types (JarCreationFormData, ProtocolConfig), exported builder helpers and new enums/types for V2 arg building.
  • Misc docs, tests, and small fixes
    Files: AGENTS.md, docs/RELEASES.md, e2e/*, playwright.config.ts, .github/workflows/*, assorted tests
    Summary: Numerous README and test comments updated from pnpm→bun; added RELEASES.md; small Playwright and workflow tweaks (fetch-depth, comments, audit messaging).

Sequence Diagram(s)

sequenceDiagram
  participant UI as Client UI
  participant Hook as useJarCreation (Form)
  participant Factory as CookieJarFactory (contract)
  participant Jar as CookieJar (contract)
  participant StreamLib as Streaming lib
  participant SuperHost as Superfluid Host (mock)

  UI->>Hook: submit form (build args)
  Hook->>Hook: buildV2CreateCookieJarArgs (Zod + helper)
  Hook->>Factory: createCookieJar(args) (tx)
  Factory-->>Hook: tx receipt (event with jar address)
  Hook->>UI: newJarPreview / confirmation
  UI->>StreamLib: createStream(superToken, rate)
  StreamLib->>SuperHost: callAgreement(createFlow)
  SuperHost-->>StreamLib: mocked agreement result
  StreamLib->>Jar: record stream metadata / emit events
  Jar-->>UI: event/confirmation

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 Bun beneath my paws I run,

Constants shared and tests made fun.
Purposes lengthened, gates made sure,
Mocks and harnesses to help ensure.
Hoppity hops — build, test, and done! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 31.25%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title clearly summarizes the main release focus (v3.2.0) and highlights the three primary categories of changes (NFT gating hardening, Unicode validation, and CI fixes), which align with the changeset's major themes.
  • Description check (✅ Passed): The description provides a comprehensive overview of the changes across multiple domains (NFT gating, validation, CI/build, accessibility) with specific technical details, test plans, and verification steps that directly relate to the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



coderabbitai bot left a comment


Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (10)
contracts/test/SuperfluidIntegration.t.sol (1)

263-277: ⚠️ Potential issue | 🟡 Minor

Test is a no-op — no withdrawal is executed and assertions are commented out.

testEmergencyWithdrawSuperToken transfers tokens to the jar but never calls emergencyWithdraw. The two assertions on lines 275-276 are commented out, withdrawAmount and initialOwnerBalance are unused, and vm.prank(_owner) on line 272 has no subsequent call. This test always passes while testing nothing.

Given the PR objective explicitly fixes "emergencyWithdraw accounting for pendingTokenBalances", this seems like the right place to validate that fix.

client/config/supported-networks.ts (1)

326-334: ⚠️ Potential issue | 🟠 Major

Pre-existing bug: Alchemy API key not interpolated for Optimism Sepolia.

Line 329 uses regular double quotes instead of backticks, so ${alchemyId} is sent as a literal string rather than being interpolated. Compare with other Alchemy URLs (e.g., Line 267) that correctly use template literals.

Proposed fix
 [optimismSepolia.id]: createFallbackTransport(
   [
     "https://op-sepolia-pokt.nodies.app",
-    "https://opt-sepolia.g.alchemy.com/v2/${alchemyId}",
+    `https://opt-sepolia.g.alchemy.com/v2/${alchemyId}`,
   ],
client/hooks/jar/useJarCreation.ts (1)

595-629: ⚠️ Potential issue | 🔴 Critical

Pipeline failure: "CUSTOM" is not assignable to `0x${string}`.

TypeScript infers the value field as Address from the first element (ETH_ADDRESS). The "CUSTOM" sentinel on Line 623 doesn't match. Explicitly type the array to allow both.

Proposed fix
 const getCurrencyOptions = () => {
-  const options = [
+  const options: { value: string; label: string; description: string }[] = [
     {
       value: ETH_ADDRESS,
       label: "ETH (Native)",
       description: "Use native Ethereum",
     },
   ];
client/lib/blockchain/token-utils.ts (1)

17-17: 🛠️ Refactor suggestion | 🟠 Major

Inconsistent comparison: strict equality here vs case-insensitive in other files.

useTokenInfo uses strict ===/!== against ETH_ADDRESS (Lines 17, 38), while JarDepositSection.tsx and profile/page.tsx use .toLowerCase() comparisons. If tokenAddress arrives with different casing (e.g., all-lowercase from a contract event), the strict check here would fail to recognize it as native ETH, causing spurious ERC-20 contract reads.

Consider normalizing to lowercase here as well, or using viem's isAddressEqual for consistency:

Proposed fix
+import { erc20Abi, formatUnits, isAddress, isAddressEqual, parseUnits } from "viem";
 ...
-const isERC20 = isAddress(tokenAddress) && tokenAddress !== ETH_ADDRESS;
+const isERC20 = isAddress(tokenAddress) && !isAddressEqual(tokenAddress, ETH_ADDRESS);
 ...
-if (tokenAddress === ETH_ADDRESS) {
+if (isAddressEqual(tokenAddress, ETH_ADDRESS)) {

Also applies to: 38-38

client/__tests__/hooks/useNftValidation.test.ts (1)

187-196: ⚠️ Potential issue | 🟡 Minor

Pre-existing bug: test assertion doesn't match mock return value.

The mock on line 49 returns "Failed to validate contract", but the assertion on line 194 expects "Failed to validate contract. Not a valid NFT contract or network error.". This test will fail when RUN_BLOCKCHAIN_TESTS=true.

🐛 Proposed fix

Either update the mock (line 49) to match:

-			error: "Failed to validate contract",
+			error: "Failed to validate contract. Not a valid NFT contract or network error.",

Or update the assertion (line 194) to match the mock:

-			expect(result.error).toBe(
-				"Failed to validate contract. Not a valid NFT contract or network error.",
-			);
+			expect(result.error).toBe("Failed to validate contract");
.github/workflows/integration-tests.yml (1)

143-156: ⚠️ Potential issue | 🟡 Minor

$? check on Line 151 is always 0 due to || echo fallback.

After bun run test || echo "⚠️ ..." on Line 147, $? will always be 0 because echo succeeds, so the failure exit on Line 155 is unreachable from this code path. (The if on Line 143 is not affected, since it tests the command's exit status directly.)

Proposed fix
           # Run vitest with proper syntax for integration tests
-          if bun run test -- --run --testTimeout 60000 2>/dev/null; then
+          if bun run test -- --run --testTimeout 60000 2>/dev/null; then
             echo "✅ Integration tests completed with vitest"
           else
             echo "⚠️  Fallback: Running basic unit tests..."
-            bun run test || echo "⚠️  Some tests may have failed due to missing blockchain context"
+            if ! bun run test; then
+              echo "⚠️  Some tests may have failed due to missing blockchain context"
+              exit 1
+            fi
           fi
-        
-        if [ $? -eq 0 ]; then
-          echo "✅ Integration tests passed!"
-        else
-          echo "❌ Some integration tests failed"
-          exit 1
-        fi
+        
+        echo "✅ Integration tests passed!"
.github/actions/setup-node-pnpm/action.yml (1)

34-42: ⚠️ Potential issue | 🟠 Major

CI/local lockfile logic is inverted — CI should use --frozen-lockfile.

The comment on Line 35 says "Try frozen lockfile first, fallback to regular install in CI", but the CI branch (Line 38) runs bun install --no-frozen-lockfile first, which is the opposite. Best practice is for CI to enforce a frozen lockfile for reproducible builds, while local dev should be more flexible.

Proposed fix
       # Try frozen lockfile first, fallback to regular install in CI
       if [ "$CI" = "true" ] || [ "$GITHUB_ACTIONS" = "true" ]; then
         echo "🤖 CI environment detected"
-        bun install --no-frozen-lockfile || bun install
+        bun install --frozen-lockfile || bun install
       else
         echo "🔒 Using frozen lockfile for local consistency"
-        bun install --frozen-lockfile
+        bun install
       fi
.github/workflows/unit-tests.yml (1)

46-60: ⚠️ Potential issue | 🟡 Minor

Test failures are silently swallowed — CI will always pass.

The if/else construct logs test failures but never exits with a non-zero code, so the workflow always succeeds regardless of test results. While the comments indicate this is intentional during an "environment transition," it effectively disables CI test gating.

Consider adding a TODO or tracking issue so this gets reverted once the transition is complete, or at minimum exit with a warning code.

Would you like me to open a tracking issue to re-enable strict test failure handling once the environment transition is complete?

scripts/dev-start.sh (1)

124-141: ⚠️ Potential issue | 🔴 Critical

Working directory bug: cd client on Line 126 leaves cwd in client/, causing Line 140's cd client to resolve to client/client/.

After Line 126 (cd client && bun generate), the script's working directory is $PROJECT_ROOT/client. The unchanged Line 140 then does cd client again, which would try to enter $PROJECT_ROOT/client/client/ — a directory that doesn't exist. This will cause the Next.js dev server launch to fail.

The previous code likely used a workspace-filtered command (e.g., pnpm --filter client generate) that didn't change directories. The fix is to return to $PROJECT_ROOT after the generate step.

Proposed fix
 # Generate client types (from project root, cd into client directory)
 echo "⚙️  Generating client types..."
-cd client && bun generate
+cd "$PROJECT_ROOT/client" && bun generate

 if [ $? -ne 0 ]; then
     echo "❌ Client type generation failed"
     kill $ANVIL_PID $WATCHER_PID 2>/dev/null
     exit 1
 fi

+cd "$PROJECT_ROOT"
+
 # Start client dev server with proper logging
.github/workflows/security-scan.yml (1)

78-93: ⚠️ Potential issue | 🟠 Major

npm audit JSON format mismatch — .advisories is npm v6; Node 18+ ships npm v7+ which uses .vulnerabilities.

Line 85 checks jq -e '.advisories | length > 0', but Node 18 ships with npm 9+ which uses a different JSON structure (.vulnerabilities key instead of .advisories). This means the vulnerability check will likely always report "No vulnerabilities found" regardless of actual findings.

Proposed fix
        if [ -s audit-results.json ]; then
          # Check if we have any vulnerabilities
-         if jq -e '.advisories | length > 0' audit-results.json > /dev/null 2>&1; then
+         if jq -e '.vulnerabilities | length > 0' audit-results.json > /dev/null 2>&1; then
            echo "⚠️  Vulnerabilities found!"
            npm audit --audit-level=moderate
🤖 Fix all issues with AI agents
In @.cursor/rules/root-standards.mdc:
- Around line 25-34: The docs list bun scripts that are missing from
package.json; either add the three scripts "dev:ethereum", "dev:base", and
"test:coverage" to package.json or remove them from the documentation. If
adding, implement "dev:ethereum" and "dev:base" as variant wrappers of the
existing "dev" script that pass the appropriate network/fork flags used by your
dev server (so they call the same underlying start logic as "dev" but with the
Ethereum or Base fork configuration) and implement "test:coverage" to run the
same test runner with coverage enabled (mirroring how your CI produces coverage,
calling your test command with the coverage flags). Ensure the script names
exactly match dev:ethereum, dev:base, and test:coverage so the docs and
package.json stay in sync.

In @.cursor/rules/testing-patterns.mdc:
- Around line 217-224: Update the documentation to reflect the monorepo scripts:
explicitly state that bun test:watch and bun test:coverage are only defined in
the client workspace (require cd client) or add root-level aliases in
package.json (similar to existing test:client and test:client:coverage) so
commands like bun test:watch and bun test:coverage work from the repo root; for
consistency and to avoid ambiguity with Bun's built-in test runner, recommend
and document using bun run <script> for all npm-style scripts (e.g., bun run
test:client, bun run test:e2e) and clarify that the bun test:e2e* commands are
defined at the root.

In `@bunfig.toml`:
- Around line 27-30: The bunfig.toml [test] preload entry currently references
./client/__tests__/vitest.setup.ts which imports Vitest and breaks Bun's native
test runner; remove that preload or replace it with a Bun-compatible setup file
(or point it to a new client/__tests__/bun.setup.ts) so Bun won't attempt to
preload Vitest code when running test:client, and ensure any test
runner-specific configuration is kept in Vitest config instead of bunfig.toml.
- Around line 42-45: The bunfig currently hardcodes "process.env.NODE_ENV" =
"production" in the [define] section which statically replaces the identifier at
bundle/runtime and prevents runtime dev envs from working; remove that define
(or move it out of the global [define] block into build-only configuration) and
instead apply the production define via the build command (e.g., use bun build
--define "process.env.NODE_ENV=production") so that development runs
(dev-start.sh) can rely on runtime NODE_ENV values; locate the
"process.env.NODE_ENV" entry in the [define] block and either delete it or
relocate it to build-specific flags.

In `@client/components/jar/AllowlistWithdrawalSection.tsx`:
- Around line 74-78: The client-side length check uses withdrawPurpose.length
(UTF-16 code units) which can differ from the contract's bytes(purpose).length
(UTF-8); update the disabled checks (e.g., the JSX control using
withdrawPurpose, PURPOSE_MIN_LENGTH and config.isWithdrawPending) and the other
occurrences around lines 214-218 to compute UTF-8 byte length via new
TextEncoder().encode(withdrawPurpose).length, and compare that value against
PURPOSE_MIN_LENGTH so the client validation matches the on‑chain byte-length
check.
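For reference, a one-line helper shows why UTF-16 .length and UTF-8 byte length diverge (illustrative sketch, not code from the PR):

```typescript
// bytes(purpose).length in Solidity measures UTF-8 bytes, while str.length in
// JS measures UTF-16 code units; multi-byte characters make the counts differ.
const utf8ByteLength = (s: string): number => new TextEncoder().encode(s).length;

// "café".length === 4 but utf8ByteLength("café") === 5 (é encodes as 2 bytes)
// "🍪".length === 2 but utf8ByteLength("🍪") === 4
```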

In `@client/components/jar/JarDepositSection.tsx`:
- Around line 68-71: The ERC-20 placeholder in JarDepositSection.tsx has an
inverted ternary and can pass a negative value to "0".repeat(), causing a
RangeError; update the placeholder construction used when isNativeCurrency is
false (the `1.${"0".repeat(...)} ${tokenSymbol || "Tokens"}` expression) to
compute a non-negative count of decimals (e.g., let clampedDecimals =
Math.max(0, tokenDecimals) and use clampedDecimals instead of the current
ternary) so that positive tokenDecimals produce that many zeros and negatives
become 0; keep the isNativeCurrency branch unchanged.

In `@client/components/jar/NFTGatedWithdrawalSection.tsx`:
- Around line 150-153: The ownerOf useReadContract invocation can throw because
BigInt(activeNFT.tokenId) is evaluated even for invalid manual input; add a
safeBigInt helper (e.g., safeBigInt(value: string): bigint | undefined that
try/catches BigInt and returns undefined on failure) and replace direct
BigInt(...) uses in the args for useReadContract (and any other place converting
activeNFT.tokenId) with safeBigInt(activeNFT.tokenId) so args become undefined
for invalid IDs; alternatively, validate/normalize tokenId when building
manualNFT/activeNFT so invalid token IDs are set to undefined/null before
reaching useReadContract.
- Around line 350-395: The three label elements in NFTGatedWithdrawalSection are
not associated with their inputs; add unique id attributes (e.g.,
gateAddressInput, tokenIdInput, tokenTypeSelect) to the corresponding Input
components (the ones bound to gateAddress and tokenId) and to the SelectTrigger
used for manualTokenType, then set each label's htmlFor to the matching id so
screen readers can associate the labels with the form controls (update the Input
for gateAddress and tokenId and the Select/SelectTrigger for manualTokenType
accordingly).

In `@client/components/ui/select.tsx`:
- Around line 121-128: Checked items use a semi-transparent brand background
with a creme background token for text which may fail WCAG contrast and misuses
a background semantic token as foreground; update the Select item checked styles
(around SelectPrimitive.ItemIndicator / Check and the item className usage) to
use a dedicated foreground token (e.g., replace text-[hsl(var(--cj-creme-bg))]
with text-[hsl(var(--cj-creme-fg))] or text-[hsl(var(--cj-on-brand))]) and/or
increase the background opacity or adjust colors to meet 4.5:1 contrast, then
verify the effective contrast in both light and dark themes and rename or add
the token in the design system if needed so text tokens are not derived from
background tokens.

In `@contracts/src/libraries/CookieJarValidation.sol`:
- Around line 52-56: The current validatePurpose uses bytes(purpose).length
which measures UTF-8 byte length and can differ from client-side JS
purpose.length (UTF-16 code units); implement a proper Unicode-aware length
check and use it in validatePurpose to align validation semantics. Add a helper
(e.g., utf8StringLength or countUnicodeCodePoints) that iterates the bytes of
the string and counts UTF-8 code points by treating bytes that are not
continuation bytes (i.e., leading bytes) as a new character, then replace
bytes(purpose).length with that helper in validatePurpose and keep the same
threshold and revert with CookieJarLib.InvalidPurpose when the counted length is
below 27. Ensure the helper is internal/pure and referenced from validatePurpose
so behavior matches client-side expectations for multi-byte characters.

In `@foundry.toml`:
- Around line 18-27: Update the remappings array so all paths point to real
directories and use the standardized "prefix/=path/" form with trailing slashes;
run git submodule update --init --recursive to populate missing libs or correct
entries in the remappings list (e.g., change
"@openzeppelin/contracts=lib/openzeppelin-contracts/contracts" to
"@openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/" and verify/fix
the paths for "erc4626-tests", "forge-std", "halmos-cheatcodes", and
"protocol-monorepo/packages/ethereum-contracts" to the actual locations or
remove unused entries) so import resolution for the remappings variable
succeeds.

In `@package.json`:
- Line 21: The "test:client" npm script currently uses "cd client && bun test"
which invokes Bun's built-in test runner and bypasses the client's configured
"test" script (Vitest); update the "test:client" script to run the client's
package script instead (use "cd client && bun run test") so the client's "test"
script (the "test" field in the client's package.json) is executed via Bun.
- Around line 40-44: The scripts "test:all" and "test:ci" wrongly invoke Bun's
native runner via "bun test" instead of running the package script; update those
script commands to use "bun run test" (and similarly use "bun run test:e2e" if
you want explicit invocation) so that the root "test" script (and "test:e2e")
defined in package.json are executed; edit the "test:all" and "test:ci" entries
to replace "bun test" with "bun run test" (and optionally change "bun test:e2e"
to "bun run test:e2e") to fix the behavior.

In `@README.md`:
- Line 168: The README line for the command "bun deploy:base-sepolia" has an
incorrect description saying "Deploy to mainnet"; update the description to
accurately reflect the target network (e.g., "Deploy to Base Sepolia testnet" or
similar) so the command string bun deploy:base-sepolia and its explanatory text
are consistent and clear.
🧹 Nitpick comments (17)
contracts/src/CookieJarFactory.sol (1)

157-162: Sentinel pattern looks correct and well-documented.

The use of type(uint256).max as a sentinel to distinguish "use factory default" from an explicit 0 is a clean, idiomatic Solidity pattern. The inline comments clearly communicate the intent.

One minor observation on the pre-existing clamping at Line 162: any fee value between PERCENTAGE_BASE + 1 and type(uint256).max - 1 is silently capped rather than reverted. This could mask a caller bug (e.g., passing a raw percentage like 5 instead of 500 for 5%). Consider whether a revert for out-of-range values (excluding the sentinel) would be more appropriate:

Suggested stricter validation
-        if (feePerc > CookieJarLib.PERCENTAGE_BASE) feePerc = CookieJarLib.PERCENTAGE_BASE;
+        if (feePerc > CookieJarLib.PERCENTAGE_BASE) revert CookieJarLib.InvalidFeePercentage();

This is optional since the clamping behavior predates this PR and callers may rely on it.

contracts/test/UniswapV4Integration.t.sol (1)

235-266: Consider removing this no-op test with commented-out code.

testSwapFunctionSignatures only asserts assertTrue(true) and contains ~20 lines of commented-out code. It provides no value — compile-time verification of function signatures already happens implicitly through the other tests that call the library. Removing it reduces noise and avoids giving a false sense of coverage.

client/components/jars/JarControls.tsx (2)

59-62: Network name is displayed twice — in ChainDisplay (top row) and in the stats bar (bottom row).

The ChainDisplay component in the top row already renders the network name via getNetworkName(chainId), and line 96 repeats it in the bottom stats row. If the duplication is intentional (e.g., the stats row is visible when ChainDisplay scrolls off-screen), this is fine — but if not, consider removing one to reduce visual noise.

Also applies to: 91-98


71-73: The filter dropdown only contains a single "All Jars" option.

If additional filter options (e.g., by access type) are planned, this is fine as a placeholder. Otherwise, rendering a Select with one option adds UI complexity without functionality — a static label would suffice.

.cursor/rules/root-standards.mdc (1)

21-35: Consider adding migration/setup instructions.

Since this is a standards document that developers reference, consider adding a brief section on:

  • Installing bun (curl -fsSL https://bun.sh/install | bash)
  • Migrating from pnpm (remove pnpm-lock.yaml, run bun install)
  • Basic bun workspace commands

This would improve the onboarding experience for new developers or those migrating existing clones.

📝 Suggested addition

Add a "Setup & Installation" section before "Development Commands":

+## Setup & Installation
+
+```bash
+# Install bun (macOS/Linux)
+curl -fsSL https://bun.sh/install | bash
+
+# Install dependencies
+bun install
+
+# For existing pnpm projects: remove old artifacts first
+rm -rf node_modules pnpm-lock.yaml
+bun install
+```
+
 ## Development Commands
contracts/test/SuperfluidIntegration.t.sol (1)

283-290: Several tests have no assertions — they only verify calls don't revert.

testGetSuperStream, testSuperfluidWithTraditionalStreaming, testSuperfluidRealTimeBalanceWithMultipleStreams, testSuperfluidDistributionPoolIntegration, and testSuperfluidMultipleTokenSupport contain no assert* or expectEmit checks. They only prove the calls succeed, not that the resulting state is correct.

Consider adding at least a basic post-condition assertion (e.g., verifying stream existence or recorded flow rate) to give these tests diagnostic value when contract logic changes.

Also applies to: 296-303, 381-393, 419-444

client/__tests__/hooks/jar/useJarTransactions.test.tsx (1)

56-62: Consider importing PURPOSE_MIN_LENGTH instead of hardcoding 27.

The constant PURPOSE_MIN_LENGTH = 27 is defined in client/components/jar/AllowlistWithdrawalSection.tsx. Using the shared constant here would keep the test in sync if the threshold changes again, rather than requiring manual updates across multiple files.

client/components/nft/protocols/UnlockConfig.tsx (1)

42-74: Pre-existing: onConfigChange missing from useCallback dependency array.

validateLock calls onConfigChange (line 62) but the dependency array on line 74 is empty. This means the callback captures a stale reference if the parent re-renders with a new onConfigChange. Not introduced by this PR, but worth noting for a follow-up.

client/components/nft/protocols/HypercertConfig.tsx (1)

41-73: Stale closure: onConfigChange is not in the dependency array.

validateHypercert captures onConfigChange (line 58) but the useCallback dependency array on line 73 is empty. If the parent re-renders with a new onConfigChange reference, the callback will invoke the stale one. This is a pre-existing issue, but worth fixing while you're in this file.

Suggested fix
-	}, []);
+	}, [onConfigChange]);
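The failure mode is not React-specific; it can be demonstrated in plain TypeScript with hypothetical names standing in for the component's validateHypercert/onConfigChange pair:

```typescript
// A callback created once (like useCallback with [] deps) keeps invoking the
// handler it captured at creation time, even after the "parent" swaps it out.
type Handler = (ok: boolean) => void;

function makeValidate(onConfigChange: Handler): () => void {
  // onConfigChange is frozen here, analogous to an empty dependency array
  return () => onConfigChange(true);
}

const calls: string[] = [];
const firstHandler: Handler = (ok) => calls.push(`first:${ok}`);
const secondHandler: Handler = (ok) => calls.push(`second:${ok}`);

const validate = makeValidate(firstHandler);
// The parent "re-renders" with secondHandler, but validate was never recreated:
void secondHandler;
validate(); // still invokes firstHandler — the stale reference

console.log(calls); // → [ 'first:true' ]
```

Adding onConfigChange to the dependency array forces the callback to be recreated whenever the handler reference changes, which is exactly what the suggested fix does.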
contracts/src/CookieJar.sol (2)

418-426: tokenId == 0 as sentinel for "any token" prevents gating on actual token ID 0.

Using nftRequirement.tokenId == 0 to mean "user owns any token" means a jar can never require specifically token ID 0. If that's an accepted constraint, document it explicitly in the NatSpec or NftRequirement struct definition so future maintainers don't trip over it.


283-288: Emergency withdrawal: pending-balance accounting looks correct but has an edge case worth noting.

When amount exceeds pendingTokenBalances[pendingKey], the pending balance is zeroed out, which is the right behavior (prevents stale swapPendingTokens() calls). The actual ETH/ERC20 transfer on lines 290-295 will revert the entire tx if the contract doesn't hold enough, so accounting can't desync.

One subtlety: for the jar's own currency (token == CURRENCY), currencyHeldByJar is decremented on line 281 and pending balance may also be cleared. These are separate accounting buckets, so this is fine — but a brief inline comment clarifying that currencyHeldByJar and pendingTokenBalances never overlap for the same token would help future readers.

client/__tests__/CurrencyLabel.test.tsx (1)

54-63: Test name is misleading after refactor.

The test is named "handles different ETH address formats" but now only passes the exact ETH_ADDRESS constant. It no longer validates case-insensitive matching (e.g., all-lowercase 0xeeee...). Consider either renaming it or adding a case-variant assertion to actually test format handling.

client/hooks/jar/useJarCreation.ts (1)

643-653: validateERC20Address is effectively a no-op for non-ETH addresses.

The try block just returns isAddress(address), which was already checked on Line 644. The catch can never fire since isAddress doesn't throw. If the intent was to validate the contract on-chain (e.g., check for symbol()/decimals()), this needs actual contract calls. As-is, any valid address passes validation.
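The dead catch is easy to see with a simplified stand-in for isAddress (a regex sketch for illustration only, not viem's actual implementation):

```typescript
// Simplified stand-in for viem's isAddress: a pure predicate that never throws.
const isAddressLike = (a: string): boolean => /^0x[0-9a-fA-F]{40}$/.test(a);

function validateERC20Address(address: string): boolean {
  if (!isAddressLike(address)) return false; // mirrors the early check on Line 644
  try {
    return isAddressLike(address); // repeats the same pure check — cannot throw
  } catch {
    return false; // unreachable: nothing above can throw
  }
}

console.log(validateERC20Address("0x" + "ab".repeat(20))); // → true
console.log(validateERC20Address("not-an-address"));       // → false
```

To make the function meaningful for ERC-20s, the try block would need an actual on-chain read (e.g., calling symbol() or decimals() via a public client) whose failure the catch could then intercept.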

AGENTS.md (1)

49-51: Redundant npm install alongside bun install is confusing.

The setup instructions show both npm install (for Foundry auto-install) and bun install (for dependencies). Bun executes postinstall hooks as well, so bun install alone should handle both. If there's a specific reason npm install is still needed (e.g., a lifecycle script incompatibility), a comment should clarify that; otherwise, consolidate to a single bun install.

Proposed fix
 # Zero-configuration start
 git clone https://github.com/greenpill-dev-guild/cookie-jar.git
 cd cookie-jar
-npm install  # Auto-installs Foundry
-bun install  # Install dependencies
+bun install  # Install dependencies & auto-installs Foundry
 bun dev      # Starts everything
.github/workflows/accessibility.yml (1)

77-78: Inconsistent use of bun run generate followed by npm run dev.

Line 77 uses bun run generate but Line 78 uses npm run dev. If the intent is to migrate to Bun, consider using bun run dev consistently — or add a comment explaining why npm is intentionally retained for the dev server.

.github/actions/setup-node-pnpm/action.yml (1)

9-12: Bun version 1.0 is outdated — consider pinning to 1.2 or latest.

The default is 1.0, but Bun 1.2 has been out for a while with significant improvements (native database clients, better Node compatibility, performance gains). Other files in this PR reference Bun 1.0 as well (BUN_VERSION: '1.0' in workflows). Consider bumping to align with the version actually used in development.

VERCEL_BUILD_FIX.md (1)

108-114: Nit: inconsistent use of bun vs bun run across examples.

The document alternates between bun run build:client (e.g., Lines 63, 78, and 126) and bare bun build:client (Line 109), bun check (Line 108), etc. While Bun treats them equivalently, picking one style consistently improves clarity for readers unfamiliar with Bun's shorthand.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 22d15b0 and a55a220.

⛔ Files ignored due to path filters (3)
  • bun.lock is excluded by !**/*.lock
  • foundry.lock is excluded by !**/*.lock
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (68)
  • .cursor/rules/deployment.mdc
  • .cursor/rules/root-standards.mdc
  • .cursor/rules/testing-patterns.mdc
  • .github/actions/setup-node-pnpm/action.yml
  • .github/workflows/accessibility.yml
  • .github/workflows/code-quality.yml
  • .github/workflows/contract-tests.yml
  • .github/workflows/e2e-tests.yml
  • .github/workflows/integration-tests.yml
  • .github/workflows/security-scan.yml
  • .github/workflows/unit-tests.yml
  • AGENTS.md
  • README.md
  • VERCEL_BUILD_FIX.md
  • bunfig.toml
  • client/.cursor/rules/frontend-standards.mdc
  • client/__tests__/CurrencyLabel.test.tsx
  • client/__tests__/TokenInfoHook.test.ts
  • client/__tests__/hooks/blockchain/useSuperfluidSubgraph.test.tsx
  • client/__tests__/hooks/jar/useJarTransactions.test.tsx
  • client/__tests__/hooks/jar/useStreamingActions.test.tsx
  • client/__tests__/hooks/useJarCreation.test.ts
  • client/__tests__/hooks/useNftValidation.test.ts
  • client/__tests__/hooks/useUnlockLocks.test.ts
  • client/__tests__/integration/unlock-sdk-integration.test.ts
  • client/__tests__/utils/ProtocolValidation.test.ts
  • client/app/profile/page.tsx
  • client/components/create/StepContent.tsx
  • client/components/jar/AllowlistWithdrawalSection.tsx
  • client/components/jar/JarDepositSection.tsx
  • client/components/jar/NFTGatedWithdrawalSection.tsx
  • client/components/jars/ChainDisplay.tsx
  • client/components/jars/JarControls.tsx
  • client/components/nft/ProtocolSelector.tsx
  • client/components/nft/doc-links.ts
  • client/components/nft/protocols/HatsConfig.tsx
  • client/components/nft/protocols/HypercertConfig.tsx
  • client/components/nft/protocols/POAPConfig.tsx
  • client/components/nft/protocols/UnlockConfig.tsx
  • client/components/ui/select.tsx
  • client/config/deployments.auto.ts
  • client/config/supported-networks.ts
  • client/generated.ts
  • client/hooks/jar/useJarCreation.ts
  • client/lib/blockchain/constants.ts
  • client/lib/blockchain/token-utils.ts
  • client/package.json
  • contracts/README.md
  • contracts/script/DeployLocal.s.sol
  • contracts/src/CookieJar.sol
  • contracts/src/CookieJarFactory.sol
  • contracts/src/libraries/CookieJarValidation.sol
  • contracts/src/libraries/Streaming.sol
  • contracts/test/CookieJar.t.sol
  • contracts/test/CookieJarProtocols.t.sol
  • contracts/test/SuperfluidIntegration.t.sol
  • contracts/test/UniswapV4Integration.t.sol
  • e2e/README.md
  • e2e/global-setup.ts
  • e2e/global-teardown.ts
  • foundry.toml
  • package.json
  • playwright.config.ts
  • pnpm-workspace.yaml
  • scripts/dev-start.sh
  • scripts/install-deps.sh
  • scripts/test-e2e.sh
  • scripts/watch-deploy.sh
💤 Files with no reviewable changes (1)
  • pnpm-workspace.yaml
🧰 Additional context used
🧬 Code graph analysis (13)
client/components/nft/protocols/POAPConfig.tsx (1)
client/components/nft/doc-links.ts (1)
  • ACCESS_CONTROL_DOC_LINKS (3-10)
client/app/profile/page.tsx (1)
client/lib/blockchain/constants.ts (1)
  • ETH_ADDRESS (7-7)
client/components/nft/protocols/HatsConfig.tsx (1)
client/components/nft/doc-links.ts (1)
  • ACCESS_CONTROL_DOC_LINKS (3-10)
client/components/nft/protocols/UnlockConfig.tsx (1)
client/components/nft/doc-links.ts (1)
  • ACCESS_CONTROL_DOC_LINKS (3-10)
client/components/nft/ProtocolSelector.tsx (1)
client/components/nft/doc-links.ts (1)
  • ACCESS_CONTROL_DOC_LINKS (3-10)
client/__tests__/CurrencyLabel.test.tsx (2)
client/lib/blockchain/constants.ts (1)
  • ETH_ADDRESS (7-7)
client/lib/blockchain/token-utils.ts (1)
  • ETH_ADDRESS (7-7)
client/config/supported-networks.ts (3)
client/lib/blockchain/constants.ts (1)
  • ETH_ADDRESS (7-7)
client/lib/blockchain/token-utils.ts (1)
  • ETH_ADDRESS (7-7)
client/config/local.ts (1)
  • anvilLocal (4-21)
client/components/jar/NFTGatedWithdrawalSection.tsx (1)
client/components/nft/NFTSelector.tsx (1)
  • SelectedNFT (25-35)
client/lib/blockchain/constants.ts (1)
client/lib/blockchain/token-utils.ts (1)
  • ETH_ADDRESS (7-7)
client/__tests__/TokenInfoHook.test.ts (2)
client/lib/blockchain/constants.ts (1)
  • ETH_ADDRESS (7-7)
client/lib/blockchain/token-utils.ts (1)
  • ETH_ADDRESS (7-7)
client/__tests__/hooks/useJarCreation.test.ts (2)
client/lib/blockchain/constants.ts (1)
  • ETH_ADDRESS (7-7)
client/lib/blockchain/token-utils.ts (1)
  • ETH_ADDRESS (7-7)
client/components/jars/JarControls.tsx (2)
client/components/ui/input.tsx (1)
  • Input (31-31)
client/components/ui/select.tsx (1)
  • SelectTrigger (153-153)
client/config/deployments.auto.ts (1)
client/config/supported-networks.ts (2)
  • V2_CHAINS (24-24)
  • isV2Chain (27-29)
🪛 Biome (2.3.14)
client/components/jar/NFTGatedWithdrawalSection.tsx

[error] 352-354: A form label must be associated with an input.

Consider adding a for or htmlFor attribute to the label element or moving the input element to inside the label element.

(lint/a11y/noLabelWithoutControl)


[error] 364-366: A form label must be associated with an input.

Consider adding a for or htmlFor attribute to the label element or moving the input element to inside the label element.

(lint/a11y/noLabelWithoutControl)


[error] 376-378: A form label must be associated with an input.

Consider adding a for or htmlFor attribute to the label element or moving the input element to inside the label element.

(lint/a11y/noLabelWithoutControl)

🪛 GitHub Actions: Code Quality & Linting
client/hooks/jar/useJarCreation.ts

[error] 623-623: tsc --noEmit failed: Type '"CUSTOM"' is not assignable to type '0x${string}'.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Client Integration Tests
🔇 Additional comments (74)
.github/workflows/contract-tests.yml (1)

33-33: Comment update looks good.
No functional changes; the wording now matches the bun migration context.

client/__tests__/hooks/jar/useStreamingActions.test.tsx (1)

29-32: LGTM — updated test command is consistent with bun migration.

client/__tests__/integration/unlock-sdk-integration.test.ts (1)

3-3: Comment update looks good.

Clear and consistent with the Bun migration context.

contracts/script/DeployLocal.s.sol (1)

80-82: LGTM — consistent with the pnpm → bun migration.

contracts/src/libraries/Streaming.sol (1)

43-43: LGTM — sensible pre-validation guard.

Early revert on address(0) prevents a confusing downstream failure inside the Superfluid host call. Clear and specific error name.

client/__tests__/hooks/useUnlockLocks.test.ts (1)

52-52: LGTM! Comment update aligns with the bun migration.

The updated comment correctly documents how to run SDK integration tests using the new package manager.

contracts/test/UniswapV4Integration.t.sol (3)

9-22: Well-implemented harness pattern for testing internal library reverts.

The SwapAdapterHarness correctly wraps the internal library call to create the external call boundary required by vm.expectRevert. The NatSpec comment clearly explains the rationale.


77-79: LGTM — harness deployment is straightforward.


184-185: Correct migration of revert tests to use the harness.

All three error-condition tests (testUnsupportedChain, testZeroAmount, testZeroAddress) now properly route through _harness so vm.expectRevert can intercept the reverts. The parameters and expected selectors remain consistent with the previous direct calls.

Also applies to: 191-197, 203-209

client/.cursor/rules/frontend-standards.mdc (2)

294-295: LGTM! Correct migration to bun commands.

The pre-commit checklist has been properly updated to use bun type-check and bun lint instead of the pnpm equivalents. These are standard bun commands that align with the project's migration to bun as the package manager.


411-416: LGTM! Development commands correctly updated.

All development commands have been properly migrated from pnpm to bun. The listed commands (dev, build, type-check, lint, test, test:watch) are all valid bun commands and align with the project-wide migration described in the PR objectives.

client/components/ui/select.tsx (2)

22-22: Styling updates look good.

The addition of explicit text/placeholder color tokens and the [data-placeholder] selector ensures consistent theming for both native placeholder and Radix's data-attribute-based placeholder state.


78-78: Content panel styling is consistent with the updated design tokens.

client/components/jars/ChainDisplay.tsx (1)

10-26: LGTM!

Clean presentational component with consistent theming via CSS custom properties. The two-line layout (label + network name) with the icon is well-structured.

client/components/jars/JarControls.tsx (2)

16-17: Good extraction of shared styling into a constant.

Centralizing the common input field classes reduces duplication across the search, filter, and other controls.


91-91: These custom utility classes are properly defined in client/app/globals.css and are used consistently across multiple components. No action needed.

Likely an incorrect or invalid review comment.

.cursor/rules/root-standards.mdc (1)

16-16: LGTM! Consistent package manager migration.

The migration from pnpm to bun is implemented consistently across workspace configuration and all development commands. According to Bun 1.2 documentation, all referenced commands (dev, test, build, workspaces) are fully supported.

Also applies to: 25-34

client/components/jar/NFTGatedWithdrawalSection.tsx (2)

102-118: Clean unification of manual and selector flows via activeNFT.

The manualNFT memo and activeNFT derivation cleanly consolidate both input paths so downstream hooks and handlers operate identically regardless of mode. This is a nice structural improvement.


274-306: Withdrawal handlers correctly branch on token type and proof availability.

The ERC-1155 path forwards the balance proof for race-condition protection, while ERC-721 and no-proof scenarios fall back to the standard handlers. Logic is sound.

contracts/test/SuperfluidIntegration.t.sol (9)

10-40: LGTM — MockCFA stubs are clean and well-documented.

The NatSpec explains the address param trick, and the return shapes match the CFA v1 interface.


42-60: LGTM — MockSuperfluidHost is minimal and fit for purpose.


94-97: LGTM — Mock deployment and wiring is correct.


244-257: LGTM — Properly mints to _user before transfer.

The comment accurately explains why minting is needed and the assertion is correct.


305-347: LGTM — Factory integration test correctly uses the type(uint256).max sentinel.

The sentinel value for feePercentageOnDeposit aligns with the PR objective, and the post-creation role verification is sound.


353-379: LGTM — Reentrancy test is well-designed.

The circular-dependency between attacker and jar is neatly solved with the two-phase setTarget approach, and the expected revert path (fee transfer → receive() → re-entrant depositReentrancyGuard revert → FeeTransferFailed) is correctly validated.


395-417: LGTM — Pause/unpause test correctly validates the enforcement.

The EnforcedPause() custom error matches OpenZeppelin v5's Pausable, and the comment on line 397 correctly documents that createSuperStream isn't gated by whenNotPaused.


458-483: LGTM — ReentrancyAttacker changes are sound.

The setTarget setter resolves the circular deploy dependency cleanly, and the attacking flag in receive() correctly prevents unbounded recursion.


388-390: Add a comment confirming that createSuperStream intentionally has no access control.

createSuperStream contains no access control modifiers or checks—any address can call it. The test implicitly confirms that streaming is open to all. If this is intentional, add a brief comment to the test explaining the design choice; if access control should be enforced, implement allowlist checks in createSuperStream.

client/__tests__/utils/ProtocolValidation.test.ts (1)

254-286: LGTM — threshold updated consistently.

The test logic and assertions correctly reflect the new 27-character minimum for purpose validation.

client/components/create/StepContent.tsx (1)

130-136: LGTM — UI text and styling updates are consistent.

The purpose minimum label now correctly says "27 characters", matching the contract and test changes. The text-muted-foreground class aligns with shadcn/ui conventions, and showViewToggle={false} is a reasonable configuration choice.

Also applies to: 160-162, 334-336, 384-385

client/components/nft/doc-links.ts (1)

1-10: Good centralization of doc links.

Clean module with as const for immutability. This eliminates scattered hardcoded URLs across protocol configs.

client/components/nft/protocols/UnlockConfig.tsx (1)

9-9: LGTM — doc link centralized.

Also applies to: 131-131

client/components/nft/protocols/POAPConfig.tsx (1)

10-10: LGTM — doc link centralized.

Also applies to: 144-144

client/components/nft/protocols/HatsConfig.tsx (1)

9-9: LGTM — doc link centralized.

Also applies to: 124-124

client/components/nft/protocols/HypercertConfig.tsx (1)

9-9: LGTM — centralizing doc links is a clean improvement.

Switching from a hardcoded URL to ACCESS_CONTROL_DOC_LINKS.hypercerts aligns with the other protocol configs and makes link maintenance easier.

Also applies to: 103-103

client/components/nft/ProtocolSelector.tsx (1)

47-96: LGTM — consistent centralization of doc links across all access methods.

All six access method entries now reference ACCESS_CONTROL_DOC_LINKS, keeping URLs in sync with other protocol config components.

contracts/src/CookieJar.sol (4)

411-449: NFT-specific error taxonomy looks correct and well-structured.

The three distinct error paths (InsufficientNFTBalance, NFTNotOwned, NFTValidationFailed) give callers clear failure reasons depending on whether it's a balance check, ownership mismatch, or contract-level call failure. The try/catch pattern correctly handles non-compliant contracts.


525-533: DRY period-reset helpers are a clean improvement.

_isNewPeriod and _effectiveWithdrawnInPeriod eliminate duplicated logic between _validateWithdrawalConstraints and _executeWithdrawal. The first-withdrawal edge case (where lastWithdrawalTime[user] == 0) works correctly because the condition evaluates to block.timestamp >= WITHDRAWAL_PERIOD, which is always true for any realistic block timestamp.
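The first-withdrawal reasoning can be checked with a small sketch (the period value is an assumption for illustration; the contract's actual WITHDRAWAL_PERIOD may differ):

```typescript
// _isNewPeriod sketch: with lastWithdrawalTime = 0, the check degenerates to
// now >= WITHDRAWAL_PERIOD, which holds for any realistic block timestamp.
const WITHDRAWAL_PERIOD = 86_400; // assumed one-day period, in seconds

const isNewPeriod = (lastWithdrawalTime: number, now: number): boolean =>
  now - lastWithdrawalTime >= WITHDRAWAL_PERIOD;

const now = 1_700_000_000; // a 2023-era Unix timestamp
console.log(isNewPeriod(0, now));                       // → true (first withdrawal)
console.log(isNewPeriod(now - 100, now));               // → false (same period)
console.log(isNewPeriod(now - WITHDRAWAL_PERIOD, now)); // → true (period elapsed)
```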


505-519: Withdrawal ETH transfer uses revert TransferFailed() — consistent with the rest of the contract.

The switch from require(success, ...) to the custom error is consistent with the pattern on line 292 (emergencyWithdraw) and line 168 (_handleEthReceive). Good alignment.


344-344: Fee event guard prevents misleading zero-fee events.

Emitting FeeCollected only when fee > 0 is correct and aligned with the same guard in _handleEthReceive (line 166) and _swapToJarToken (lines 635, 668).

contracts/test/CookieJar.t.sol (2)

1488-1492: Test expectations correctly updated for new NFT-specific errors.

The updated selectors across all NFT access tests (InsufficientNFTBalance for zero-balance cases, NFTNotOwned for wrong-owner cases) accurately mirror the new _checkAccess logic in CookieJar.sol.


1583-1587: Correct: specific token ownership failure yields NFTNotOwned.

User owns token 5 but the jar requires token 3 → ownerOf(3) returns attacker ≠ user → NFTNotOwned. Matches the _checkAccess ERC721 specific-tokenId branch.

client/components/jar/AllowlistWithdrawalSection.tsx (1)

16-16: Good: PURPOSE_MIN_LENGTH aligns with the contract's 27-byte threshold.

This constant mirrors the Solidity bytes(purpose).length < 27 check in CookieJar._validateWithdrawalConstraints, keeping client and contract validation in sync.

contracts/test/CookieJarProtocols.t.sol (5)

10-42: MockPOAP balance tracking is correctly implemented.

The ownerTokenCount mapping and balanceOf function properly simulate ERC-721 behavior, including handling transfers (decrementing previous owner, incrementing new owner). The address(0) guards prevent spurious increments/decrements on mint and burn paths.


78-84: MockHats balanceOf(address, uint256) correctly simulates ERC-1155 for Hats Protocol.

Returning 1/0 based on wearerStatus matches how Hats Protocol implements the ERC-1155 interface, and aligns with the _checkAccess ERC1155 branch that checks bal < minBal (where minBal defaults to 1).


175-212: Per-protocol access config helpers clean up the test setup nicely.

Splitting createERC721AccessConfig, createHypercertAccessConfig, and createHatsAccessConfig replaces a single generic helper with purpose-specific ones, making each jar's configuration explicit and readable.


507-552: Excellent new test coverage for hat revocation and multi-user cooldown independence.

test_RevertWhen_HatsRevokedBetweenWithdrawals validates that revoking a hat prevents future withdrawals even after cooldown — a critical real-time access control property. test_HatsMultipleUsersIndependentCooldowns confirms per-user withdrawal tracking doesn't leak between users.


564-572: Test test_RevertWhen_ERC721WithdrawShortPurpose now properly grants access before testing purpose validation.

Previously this test may have been reverting for wrong reasons (no NFT ownership). By calling mockPoap.setOwner(88888, user) and warping past the interval, the test now isolates the purpose validation path specifically.

client/__tests__/hooks/useJarCreation.test.ts (1)

22-22: LGTM — tests now use the shared ETH_ADDRESS constant.

Using the same constant from @/lib/blockchain/constants as the production code ensures tests stay in sync if the value changes. Clean improvement.

Also applies to: 102-102, 108-108

client/__tests__/TokenInfoHook.test.ts (1)

2-2: LGTM — consistent use of the shared ETH_ADDRESS constant.

Eliminates the previously hardcoded local value, matching the pattern in useJarCreation.test.ts and the production code.

Also applies to: 7-9, 34-34

client/lib/blockchain/constants.ts (1)

1-7: LGTM!

Clean, well-documented centralization of the ETH sentinel address. The 0xEeeee…EEeE address is the widely adopted convention for representing native ETH in smart contracts.

client/components/jar/JarDepositSection.tsx (1)

31-32: LGTM!

Clean replacement of hardcoded address comparison with the centralized ETH_ADDRESS constant using case-insensitive comparison.

client/app/profile/page.tsx (1)

257-260: LGTM!

Null-safe, case-insensitive comparison against the centralized ETH_ADDRESS constant. Clean improvement over the previous hardcoded address.

client/config/supported-networks.ts (1)

87-153: LGTM!

All native currency entries consistently use the centralized ETH_ADDRESS constant. Clean DRY improvement.

client/lib/blockchain/token-utils.ts (1)

5-7: LGTM!

Clean import and re-export preserving the existing public API surface for downstream consumers.

client/__tests__/hooks/blockchain/useSuperfluidSubgraph.test.tsx (1)

42-42: LGTM! Comment correctly updated to reference bun test.

client/__tests__/hooks/useNftValidation.test.ts (1)

10-10: LGTM! Comment correctly updated to reference bun test.

scripts/watch-deploy.sh (2)

42-42: LGTM! Correctly updated to bun generate.


69-69: LGTM! Fallback message correctly updated to bun deploy:local.

e2e/global-teardown.ts (1)

6-6: LGTM! Comment correctly updated to reference bun dev.

playwright.config.ts (1)

47-47: LGTM! Web server command correctly updated to bun dev.

e2e/README.md (1)

9-14: LGTM! Quick start commands consistently updated to bun.

e2e/global-setup.ts (1)

62-62: LGTM! Error messages consistently updated to reference bun dev.

contracts/README.md (1)

84-98: LGTM! Clear distinction between bun (repo root) and npm (contracts/ directory) commands is well documented and consistent.

client/config/deployments.auto.ts (1)

22-43: LGTM — deployment data and formatting update.

Auto-generated file with updated factory address and timestamp for local development. Logic is unchanged.

scripts/test-e2e.sh (1)

100-105: LGTM — consistent pnpm → bun migration in E2E script.

All command references consistently updated to bun. Control flow and exit semantics remain unchanged.

Also applies to: 187-188, 211-215, 245-246, 285-286, 290-292

client/package.json (1)

11-13: LGTM — script runner swap from pnpm to bun.

The cd .. && bun <script> pattern is consistent with the monorepo root delegation approach.

.cursor/rules/deployment.mdc (1)

110-123: LGTM — deployment docs updated to bun commands.

Also applies to: 203-203

scripts/install-deps.sh (1)

53-65: LGTM!

The bun installation block correctly checks for existence, installs via npm, and provides helpful error messages with alternative installation methods.

.github/workflows/code-quality.yml (1)

23-27: LGTM!

The bun migration for the code quality workflow is clean and correctly updates all command invocations while preserving the error-handling logic.

VERCEL_BUILD_FIX.md (1)

1-163: Documentation looks good and accurately reflects the pnpm → bun migration.

The instructions, build flow, dashboard config, local testing steps, and rollback plan are all consistent with the package.json changes. No functional issues.

package.json (3)

79-88: Engines, overrides, and workspaces look correct for Bun.

Bun supports npm-style overrides and workspaces fields. The engines.bun >= 1.0.0 minimum is reasonable. The axios override to >=1.12.0 properly enforces a minimum version across the dependency tree.


3-3: Version still at 3.1.0 — should this be bumped to 3.2.0?

The PR branch is release/3.2.0, but the package version hasn't been updated. If the version bump is handled elsewhere (e.g., a release script or separate PR), ignore this. Otherwise, update it to match the release.


8-65: Script migration from pnpm to bun is clean overall.

The pnpm --filter patterns are correctly replaced with cd <dir> && bun <script>, and pnpm run calls are replaced with bun run. Non-reserved script names used as bare bun <name> shorthands (e.g., bun lint, bun type-check, bun generate) all resolve correctly.


Comment on lines +25 to +34
bun dev # Local with fresh Anvil chain
bun dev:ethereum # Fork Ethereum mainnet
bun dev:base # Fork Base network

# Testing & quality
pnpm test # All tests across packages
pnpm test:coverage # Coverage report
pnpm lint # Lint all packages
pnpm type-check # Fast TypeScript validation (preferred)
pnpm build # Build all packages
bun test # All tests across packages
bun test:coverage # Coverage report
bun lint # Lint all packages
bun type-check # Fast TypeScript validation (preferred)
bun build # Build all packages

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify bun commands exist in package.json and check for remaining pnpm references

echo "=== Checking package.json scripts ==="
# Verify documented commands exist
for cmd in dev dev:ethereum dev:base test test:coverage lint type-check build; do
  if jq -e ".scripts.\"$cmd\"" package.json >/dev/null 2>&1; then
    echo "$cmd script found"
  else
    echo "$cmd script missing"
  fi
done

echo -e "\n=== Searching for remaining pnpm references ==="
# Find any remaining pnpm references in config/docs
rg -i 'pnpm' \
  --type=json \
  --type=yaml \
  --type=md \
  -g '!**/node_modules/**' \
  -g '!**/.next/**' \
  -g '!**/dist/**' \
  -C 2

echo -e "\n=== Checking for pnpm lock files ==="
fd -H 'pnpm-lock.yaml'

echo -e "\n=== Checking for bun lock files ==="
fd -H 'bun.lockb'

Repository: greenpill-dev-guild/cookie-jar

Length of output: 428


🏁 Script executed:

# Check the content of the file to see lines 77-79
cat -n .cursor/rules/root-standards.mdc | sed -n '20,80p'

Repository: greenpill-dev-guild/cookie-jar

Length of output: 2888


Add missing bun scripts to package.json: dev:ethereum, dev:base, and test:coverage are documented but missing from package.json.

The documentation claims developers can run bun dev:ethereum, bun dev:base, and bun test:coverage, but these scripts don't exist in package.json. Define these scripts in package.json or remove them from the documentation. The commands dev, test, lint, type-check, and build are correctly configured.

🤖 Prompt for AI Agents
In @.cursor/rules/root-standards.mdc around lines 25 - 34, The docs list bun
scripts that are missing from package.json; either add the three scripts
"dev:ethereum", "dev:base", and "test:coverage" to package.json or remove them
from the documentation. If adding, implement "dev:ethereum" and "dev:base" as
variant wrappers of the existing "dev" script that pass the appropriate
network/fork flags used by your dev server (so they call the same underlying
start logic as "dev" but with the Ethereum or Base fork configuration) and
implement "test:coverage" to run the same test runner with coverage enabled
(mirroring how your CI produces coverage, calling your test command with the
coverage flags). Ensure the script names exactly match dev:ethereum, dev:base,
and test:coverage so the docs and package.json stay in sync.

Comment on lines 74 to 78
disabled={
!withdrawPurpose ||
withdrawPurpose.length < 10 ||
withdrawPurpose.length < PURPOSE_MIN_LENGTH ||
config.isWithdrawPending
}
⚠️ Potential issue | 🟡 Minor

Minor: JS string.length counts UTF-16 code units, not bytes.

The contract checks bytes(purpose).length < 27 (UTF-8 byte length), but withdrawPurpose.length in JS counts UTF-16 code units. For multi-byte characters (emojis, CJK, etc.), a string that passes the client-side check could still fail on-chain. For typical English text this is a non-issue, but if you want parity you could use new TextEncoder().encode(withdrawPurpose).length.

Also applies to: 214-218

🤖 Prompt for AI Agents
In `@client/components/jar/AllowlistWithdrawalSection.tsx` around lines 74 - 78,
The client-side length check uses withdrawPurpose.length (UTF-16 code units)
which can differ from the contract's bytes(purpose).length (UTF-8); update the
disabled checks (e.g., the JSX control using withdrawPurpose, PURPOSE_MIN_LENGTH
and config.isWithdrawPending) and the other occurrences around lines 214-218 to
compute UTF-8 byte length via new TextEncoder().encode(withdrawPurpose).length,
and compare that value against PURPOSE_MIN_LENGTH so the client validation
matches the on‑chain byte-length check.

Comment on lines +18 to +27
remappings = [
"@openzeppelin/contracts=lib/openzeppelin-contracts/contracts",
"erc4626-tests/=lib/openzeppelin-contracts/lib/erc4626-tests/",
"forge-std/=lib/forge-std/src/",
"halmos-cheatcodes/=lib/openzeppelin-contracts/lib/halmos-cheatcodes/src/",
"openzeppelin-contracts/=lib/openzeppelin-contracts/",
"@superfluid-finance/ethereum-contracts/=lib/protocol-monorepo/packages/ethereum-contracts/contracts/",
"universal-router/=lib/universal-router/",
"permit2/=lib/permit2/",
]
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if the referenced library directories exist
echo "=== Checking library paths ==="
for dir in \
  "lib/openzeppelin-contracts/contracts" \
  "lib/openzeppelin-contracts/lib/erc4626-tests" \
  "lib/forge-std/src" \
  "lib/openzeppelin-contracts/lib/halmos-cheatcodes/src" \
  "lib/protocol-monorepo/packages/ethereum-contracts/contracts" \
  "lib/universal-router" \
  "lib/permit2"; do
  if [ -d "$dir" ]; then
    echo "✓ EXISTS: $dir"
  else
    echo "✗ MISSING: $dir"
  fi
done

echo ""
echo "=== Checking foundry.toml structure ==="
cat -n foundry.toml | head -100

Repository: greenpill-dev-guild/cookie-jar

Length of output: 4863


Critical: Most remapped library paths do not exist and will prevent compilation.

Lines 19-27 reference 7 library remappings, but 5 of them point to non-existent directories:

  • lib/openzeppelin-contracts/contracts (missing)
  • lib/openzeppelin-contracts/lib/erc4626-tests (missing)
  • lib/forge-std/src (missing)
  • lib/openzeppelin-contracts/lib/halmos-cheatcodes/src (missing)
  • lib/protocol-monorepo/packages/ethereum-contracts/contracts (missing)
  • lib/universal-router (exists)
  • lib/permit2 (exists)

This will cause import resolution failures. Ensure all git submodules are initialized (git submodule update --init --recursive), or correct the remapping paths if they are wrong.

Additionally, line 19 uses @openzeppelin/contracts=lib/openzeppelin-contracts/contracts (no trailing slashes), while all other entries follow the prefix/=path/ convention. Standardize to @openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/.

🤖 Prompt for AI Agents
In `@foundry.toml` around lines 18 - 27, Update the remappings array so all paths
point to real directories and use the standardized "prefix/=path/" form with
trailing slashes; run git submodule update --init --recursive to populate
missing libs or correct entries in the remappings list (e.g., change
"@openzeppelin/contracts=lib/openzeppelin-contracts/contracts" to
"@openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/" and verify/fix
the paths for "erc4626-tests", "forge-std", "halmos-cheatcodes", and
"protocol-monorepo/packages/ethereum-contracts" to the actual locations or
remove unused entries) so import resolution for the remappings variable
succeeds.

@Oba-One Oba-One changed the base branch from dev to main February 11, 2026 04:56
@Oba-One Oba-One changed the base branch from main to dev February 11, 2026 04:56
…lures

- Use FOUNDRY_PROFILE=dev in CI workflows to match local build scripts
- Add fetch-depth: 0 for proper submodule resolution
- Fix TypeScript error in useJarCreation.ts (widen options array type)
- Add vercel.json to specify bun as install command

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
client/hooks/jar/useJarCreation.ts (1)

447-465: ⚠️ Potential issue | 🔴 Critical

Change feeBps to use sentinel value type(uint256).max when custom fee is not enabled.

The factory contract interprets 0 as an explicit 0% fee, not as "use the factory default." When enableCustomFee is false, the client currently sends customFeePercentage: BigInt(0), which unintentionally overrides the protocol's default fee for every jar created without a custom fee. The contract expects type(uint256).max as a sentinel to signal "use factory default" (see CookieJarFactory.sol:157-161). Change line 450 to send 2n ** 256n - 1n (i.e. type(uint256).max) instead of BigInt(0) when the user has not opted into a custom fee.

🧹 Nitpick comments (1)
.github/workflows/contract-tests.yml (1)

40-43: Consider setting FOUNDRY_PROFILE once at the job level instead of repeating it in every step.

The same export FOUNDRY_PROFILE=dev appears in four separate steps. A job-level env block eliminates the duplication and makes it trivial to change the profile later.

♻️ Proposed refactor

Add an env block at the job level:

   contract-tests:
     name: Foundry Tests & Analysis
     runs-on: ubuntu-latest
     timeout-minutes: 20
+    env:
+      FOUNDRY_PROFILE: dev
     
     steps:

Then remove all four export FOUNDRY_PROFILE=dev lines from the individual run blocks (lines 42, 63, 84, 119).

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a55a220 and d750241.

📒 Files selected for processing (4)
  • .github/workflows/contract-tests.yml
  • .github/workflows/integration-tests.yml
  • client/hooks/jar/useJarCreation.ts
  • vercel.json
✅ Files skipped from review due to trivial changes (1)
  • vercel.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/integration-tests.yml
🔇 Additional comments (6)
.github/workflows/contract-tests.yml (4)

23-27: LGTM on the checkout changes.

fetch-depth: 0 ensures all submodule references resolve correctly with submodules: recursive.


57-76: LGTM on the test step changes.

The dev profile export aligns CI behavior with local package.json scripts as intended.


78-101: LGTM on the coverage step changes.


114-139: LGTM on the security analysis step changes.

client/hooks/jar/useJarCreation.ts (2)

18-18: Good centralization of ETH_ADDRESS.

Moving to a shared module eliminates duplication across the codebase.


596-602: LGTM — explicit type annotation for currency options.

The added description field in the type ensures all option objects are consistently shaped.


OZ v5 moved ERC777 interfaces from token/ERC777/ to interfaces/.
Superfluid protocol-monorepo still imports from the old v4 path.
Add oz-compat.sh script that copies the interface files to the
expected location, run in CI workflows and postinstall hook.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- docs: remove phantom dev:ethereum/dev:base scripts from root-standards,
  clarify bun run test vs bun test in testing-patterns
- fix(bunfig): remove vitest preload that breaks bun's native test runner,
  remove hardcoded NODE_ENV=production that prevents dev mode
- fix(client): use Unicode code point length for purpose validation to match
  Solidity, fix negative repeat in deposit placeholder, add safeBigInt for
  manual NFT token ID input, add htmlFor/id for label accessibility,
  fix select WCAG contrast using primary-foreground token
- fix(contracts): add countUnicodeCodePoints helper for UTF-8 aware
  purpose validation matching client-side semantics
- fix(config): standardize remapping trailing slashes in foundry.toml
  and contracts/remappings.txt
- fix(scripts): use bun run test instead of bun test in package.json
  for test:client, test:all, and test:ci
- docs: fix deploy:base-sepolia description in README

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
coderabbitai[bot]
coderabbitai bot previously requested changes Feb 11, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
foundry.toml (1)

78-80: ⚠️ Potential issue | 🟠 Major

openzeppelin-contracts/ remapping target differs between default and size profiles.

Line 23 maps openzeppelin-contracts/ → lib/openzeppelin-contracts/ (repo root), while line 79 maps it → ../lib/openzeppelin-contracts/contracts/ (the contracts/ subfolder). If any source file imports via the openzeppelin-contracts/contracts/… path, the size profile will resolve to a doubled contracts/contracts/… and fail to compile.

Align the two so the same import paths work under both profiles.

Suggested fix
 remappings = [
-    "openzeppelin-contracts/=../lib/openzeppelin-contracts/contracts/"
+    "openzeppelin-contracts/=../lib/openzeppelin-contracts/"
 ]
🤖 Fix all issues with AI agents
In `@client/components/jar/AllowlistWithdrawalSection.tsx`:
- Around line 16-22: The CookieJar contract's _validateWithdrawalConstraints
currently compares bytes(purpose).length against PURPOSE_MIN_LENGTH which counts
bytes instead of Unicode code points; update _validateWithdrawalConstraints in
CookieJar to call countUnicodeCodePoints(purpose) from CookieJarValidation (the
same validation helper used in CookieJarValidation.sol) and compare that result
to PURPOSE_MIN_LENGTH so the Solidity check matches the TypeScript
unicodeCodePointLength behavior and avoids misvalidating multi-byte characters.

In `@client/components/jar/NFTGatedWithdrawalSection.tsx`:
- Around line 182-185: The args construction returns a mutable (string|bigint)[]
rather than the expected readonly tuple for ERC1155 balanceOf; update the
returned tuple in the args expression (the branch that returns [userAddress,
id]) to be a readonly tuple by appending as const so TypeScript infers readonly
[account: address, id: bigint]; this change should be applied where
activeNFT.tokenId is converted via safeBigInt and userAddress is used in the
args expression.
- Line 162: The args value passed to useReadContract for ownerOf is returning a
bigint[] from the IIFE (via activeNFT?.tokenId and safeBigInt) which TypeScript
won't narrow to the expected readonly tuple type readonly [tokenId: bigint];
change the IIFE to return a readonly tuple by using "as const" when returning
the single-element tuple (and keep the existing undefined branch), so args
becomes either readonly [bigint] or undefined; locate the args property in
NFTGatedWithdrawalSection.tsx around the activeNFT?.tokenId usage and update the
returned array to a readonly tuple to satisfy the useReadContract ownerOf
signature.

In `@README.md`:
- Line 119: Replace the user-facing invocations of Bun's test runner in the
README — specifically the occurrences that currently say "bun test" (notably the
instances at the lines referenced) — with "bun run test" so the package.json
"test" script is executed (this ensures the project's composite test script that
runs contracts + client via Vitest is used). Update all occurrences (the ones
called out and any other similar mentions) to "bun run test" to match the
testing-patterns guidance.
🧹 Nitpick comments (9)
.github/workflows/contract-tests.yml (2)

39-58: Dead $? check and duplicate forge build --sizes.

GitHub Actions uses set -eo pipefail by default, so if forge build --sizes on line 48 fails, the shell exits immediately — the if [ $? -ne 0 ] block on lines 50-53 is unreachable dead code. Additionally, forge build --sizes runs twice (lines 48 and 58), doubling build time for this step.

Proposed fix
     - name: Build contracts
       run: |
         echo "🔧 Building contracts..."
         cd contracts

         # Use dev profile (matches local package.json scripts)
         export FOUNDRY_PROFILE=dev

         # Build contracts with size reporting
-        forge build --sizes
-
-        if [ $? -ne 0 ]; then
-          echo "❌ Contract compilation failed"
-          exit 1
-        fi
-
-        echo "✅ Contracts compiled successfully"
-        echo ""
-        echo "📊 Contract sizes:"
-        forge build --sizes | tail -20  # Show size information
+        if ! forge build --sizes; then
+          echo "❌ Contract compilation failed"
+          exit 1
+        fi
+
+        echo "✅ Contracts compiled successfully"

27-27: fetch-depth: 0 fetches full git history — is this intentional?

This significantly increases checkout time. For submodule support, fetch-depth: 1 (the default) is sufficient. Full history is typically only needed for git log/git describe-based versioning. If that's the intent, a brief comment would help.

.github/workflows/integration-tests.yml (1)

77-121: Dead $? checks — same pattern as contract-tests.yml.

Lines 96, 109, and 116 check $? after commands, but GitHub Actions' default set -eo pipefail means the script already exits on failure, making these checks unreachable. Use if ! cmd; then ... instead for clarity and correctness.

Example fix for the deploy step (lines 86–101)
         # Build contracts first
         forge build
         
         # Deploy using same script as dev-start.sh
-        forge script script/DeployLocal.s.sol:DeployLocalScript \
+        if ! forge script script/DeployLocal.s.sol:DeployLocalScript \
           --rpc-url http://127.0.0.1:8545 \
           --broadcast \
           --force \
           --sender 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
-          --private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
-        
-        if [ $? -ne 0 ]; then
+          --private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80; then
             echo "❌ Contract deployment failed"
             exit 1
         fi
package.json (2)

10-10: Complex postinstall one-liner is fragile and hard to maintain.

This single line chains 5 operations with && and a broad || echo fallback that swallows all failures (including git submodule update or oz-compat.sh errors — not just a missing forge). Consider extracting this into a dedicated script (e.g., scripts/postinstall.sh) for readability and more granular error handling.

Proposed refactor
-"postinstall": "rm -f package-lock.json && (command -v forge > /dev/null && git submodule update --init --recursive && ./scripts/oz-compat.sh && cd contracts && forge install || echo 'Forge not found, skipping forge operations')",
+"postinstall": "./scripts/postinstall.sh",

Then in scripts/postinstall.sh:

#!/usr/bin/env bash
set -euo pipefail

rm -f package-lock.json

if ! command -v forge > /dev/null; then
  echo "Forge not found, skipping forge operations"
  exit 0
fi

git submodule update --init --recursive
./scripts/oz-compat.sh
cd contracts && forge install

22-22: bun test:coverage in test:client:coverage — verify this resolves to the package script, not Bun's test runner.

While bun test:coverage is not a reserved Bun subcommand (only bare bun test is), the test: prefix could cause confusion given the established pattern of using bun run <script>. For consistency with the fix applied to test:client on line 21, consider using bun run test:coverage.

Similarly applies to lines 23-26 (bun test:version-detection, bun test:lazy-loading, bun test:hooks, bun test:components).

client/components/jar/NFTGatedWithdrawalSection.tsx (3)

162-170: Consider extracting the IIFE arg-building into a useMemo for readability.

The inline IIFEs in args make these blocks harder to read and review. A small useMemo for each set of args would be clearer and avoid re-evaluation on every render.

♻️ Example refactor
const erc721Args = useMemo(() => {
  if (!activeNFT?.tokenId) return undefined;
  const id = safeBigInt(activeNFT.tokenId);
  return id !== undefined ? [id] as const : undefined;
}, [activeNFT?.tokenId]);

const erc1155Args = useMemo(() => {
  if (!userAddress || !activeNFT?.tokenId) return undefined;
  const id = safeBigInt(activeNFT.tokenId);
  return id !== undefined ? [userAddress, id] as const : undefined;
}, [userAddress, activeNFT?.tokenId]);

Then use args: erc721Args and args: erc1155Args in the respective useReadContract calls.

Also applies to: 182-195


216-222: Wagmi error details are discarded in the displayed message.

error (Line 216) can be an Error object from erc721Error or erc1155Error, but Line 219 only uses proofError (a string). When proofError is null and a wagmi error is present, the user sees the generic "Failed to verify NFT ownership" message, losing the specific failure reason.

Consider including the wagmi error message:

♻️ Proposed improvement
 		if (error) {
-			setBalanceError(proofError || "Failed to verify NFT ownership");
+			const message =
+				proofError ||
+				(erc721Error instanceof Error ? erc721Error.message : null) ||
+				(erc1155Error instanceof Error ? erc1155Error.message : null) ||
+				"Failed to verify NFT ownership";
+			setBalanceError(message);
 			setOwnershipVerified(false);
 			return;
 		}

490-501: "Withdrawal Amount" label (Line 492) lacks htmlFor/id association.

This label is not linked to its <Input> via htmlFor/id, same pattern that was fixed for the manual entry fields. Since this line wasn't changed in this PR it's not blocking, but worth a follow-up for consistency.

♻️ Proposed fix
-				<label className="text-sm font-medium text-[#3c2a14]">
+				<label htmlFor="nft-withdraw-amount" className="text-sm font-medium text-[#3c2a14]">
 					Withdrawal Amount
 				</label>
 				<Input
+					id="nft-withdraw-amount"
 					type="number"
client/components/jar/AllowlistWithdrawalSection.tsx (1)

54-109: Four near-identical branches could be consolidated, but not in scope for this PR.

The four early-return branches share most of their markup and differ only in the amount input presence, purpose textarea, and disabled conditions. This is existing code structure and not introduced by this PR, so no action needed now — just noting for future refactoring consideration.

Also applies to: 112-153, 156-241, 244-312

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d750241 and e85470f.

📒 Files selected for processing (15)
  • .cursor/rules/root-standards.mdc
  • .cursor/rules/testing-patterns.mdc
  • .github/workflows/contract-tests.yml
  • .github/workflows/integration-tests.yml
  • README.md
  • bunfig.toml
  • client/components/jar/AllowlistWithdrawalSection.tsx
  • client/components/jar/JarDepositSection.tsx
  • client/components/jar/NFTGatedWithdrawalSection.tsx
  • client/components/ui/select.tsx
  • contracts/remappings.txt
  • contracts/src/libraries/CookieJarValidation.sol
  • foundry.toml
  • package.json
  • scripts/oz-compat.sh
🚧 Files skipped from review as they are similar to previous changes (4)
  • client/components/jar/JarDepositSection.tsx
  • .cursor/rules/root-standards.mdc
  • bunfig.toml
  • client/components/ui/select.tsx
🧰 Additional context used
🪛 actionlint (1.7.10)
.github/workflows/integration-tests.yml

[error] 78-78: shellcheck reported issue in this script: SC2181:style:18:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?

(shellcheck)


[error] 78-78: shellcheck reported issue in this script: SC2181:style:31:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?

(shellcheck)


[error] 78-78: shellcheck reported issue in this script: SC2181:style:38:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?

(shellcheck)

.github/workflows/contract-tests.yml

[error] 40-40: shellcheck reported issue in this script: SC2181:style:10:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?

(shellcheck)

🪛 GitHub Actions: Code Quality & Linting
client/components/jar/NFTGatedWithdrawalSection.tsx

[error] 162-162: TS2322: Type 'bigint[] | undefined' is not assignable to type 'readonly [tokenId: bigint] | undefined'.

🪛 GitHub Actions: Smart Contract Tests
.cursor/rules/testing-patterns.mdc

[error] 1-1: Process completed with exit code 1.

foundry.toml

[error] 1-1: Process completed with exit code 1.

client/components/jar/AllowlistWithdrawalSection.tsx

[error] 1-1: Process completed with exit code 1.

.github/workflows/integration-tests.yml

[error] 1-1: Process completed with exit code 1.

.github/workflows/contract-tests.yml

[error] 1-1: Process completed with exit code 1.

client/components/jar/NFTGatedWithdrawalSection.tsx

[error] 1-1: Process completed with exit code 1.

scripts/oz-compat.sh

[error] 1-1: Process completed with exit code 1.

contracts/src/libraries/CookieJarValidation.sol

[error] 1-1: Process completed with exit code 1.

contracts/remappings.txt

[error] 1-1: Process completed with exit code 1.

README.md

[error] 1-1: Process completed with exit code 1.

package.json

[error] 1-1: Process completed with exit code 1.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Client Integration Tests
🔇 Additional comments (11)
contracts/remappings.txt (1)

1-1: Trailing-slash fix looks correct.

The =/= and trailing / on the target path now match the standard Foundry remapping convention used by the other entries.

contracts/src/libraries/CookieJarValidation.sol (2)

51-61: Well-implemented UTF-8 code point counter.

The bitmask logic (uint8(b[i]) & 0xC0) != 0x80 correctly identifies UTF-8 leading bytes vs. continuation bytes. This is the standard approach and handles all valid multi-byte sequences (1–4 bytes per code point).

Minor consideration: for very long strings, a word-at-a-time (32-byte chunks) approach could reduce gas, but for typical purpose descriptions this byte-level iteration is perfectly fine.


65-65: Review comment is incorrect—tests using short purpose strings intentionally verify the new threshold.

The three tests (test_RevertWhen_WithdrawAllowlistModeShortPurpose, test_RevertWhen_WithdrawNFTModeShortPurpose, and test_RevertWhen_ERC721WithdrawShortPurpose) explicitly use shortPurpose ("Too short", 9 code points) and expect the InvalidPurpose() revert via vm.expectRevert(). These tests are not failing due to the threshold change—they intentionally verify that the new 27-code-point minimum is enforced correctly.

The pipeline failure may exist, but it is not caused by this validation logic or tests breaking due to the threshold. Investigate the actual CI failure logs for the root cause.

Likely an incorrect or invalid review comment.

scripts/oz-compat.sh (1)

1-19: LGTM — script is well-structured.

One minor note: the success message on line 18 prints even if none of the inner cp commands executed (i.e., none of the source files existed in interfaces/). Consider moving the echo inside a success check or making it conditional, but this is unlikely to cause real issues.

.cursor/rules/testing-patterns.mdc (1)

215-237: Good update — addresses prior feedback on bun run vs bun test and workspace scoping.

The explicit note about bun run <script> vs bun test and the workspace-scoped watch mode section look correct and address the previously raised concern.

.github/workflows/integration-tests.yml (1)

26-30: Action already accepts and uses bun-version correctly.

The .github/actions/setup-node-pnpm/action.yml already declares bun-version as an accepted input and properly forwards it to the oven-sh/setup-bun@v2 action. The action name and description have been updated to reflect the bun migration. The step will not silently ignore the bun-version input.

Likely an incorrect or invalid review comment.

client/components/jar/NFTGatedWithdrawalSection.tsx (3)

32-39: safeBigInt helper looks good — addresses the previous review feedback.

Cleanly prevents SyntaxError from invalid manual input reaching BigInt().


111-127: manualNFT / activeNFT / isManualMode — clean unification of both flows.

The useMemo for manualNFT has appropriate dependencies and the activeNFT derivation is straightforward.

One minor observation: isManualMode (Line 127) is a simple alias for showManualInput — it adds readability but is redundant. Not blocking.


197-205: Sync effect correctly skips updates in manual mode.

The early return when showManualInput is true prevents overwriting manual values — good fix.

client/components/jar/AllowlistWithdrawalSection.tsx (2)

80-84: Updated purpose validation looks correct.

The unicodeCodePointLength check replaces the previous withdrawPurpose.length < 10 with the new threshold and proper code-point counting. Same pattern correctly applied at Line 222 for the variable+purpose path.


45-45: Stray empty line after destructured props.

Trivial formatting nit — the blank line between the closing brace of the destructuring and the hook call is harmless but inconsistent with the rest of the file's style.


The safeBigInt IIFE returned bigint[] which isn't assignable to wagmi's
expected readonly tuple type. Adding as const narrows the array literal
to the correct tuple shape.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…brary

CookieJar._validateWithdrawalConstraints had an inline bytes(purpose).length
check that duplicated and diverged from CookieJarValidation.validatePurpose.
Now delegates to the library which uses countUnicodeCodePoints, matching
client-side unicodeCodePointLength semantics for multi-byte characters.

Also replace remaining bun test references in README with bun run test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Oba-One Oba-One changed the title from "fix: NFT gating audit fixes and production polish" to "release: v3.2.0 — NFT gating hardening, Unicode validation, CI fixes" Feb 11, 2026
Oba-One and others added 3 commits February 11, 2026 12:25
…ok stability

Phase 1: Migrate useJarCreation from manual useState to React Hook Form + Zod
- Extract jarCreationSchema.ts with METHOD_TO_ACCESS_TYPE mapping
- Replace ~15 useState calls with useForm + zodResolver
- Simplify StepContent bridge with lookup instead of 6-case switch

Phase 2: Redesign ProtocolSelector UX and consolidate NFTGateInput state
- Merge duplicate ConfigurationPanels into one using real protocol components
- Add visibleMethods, recommendedMethod props
- Replace 11 useState calls in NFTGateInput with typed useReducer
- Enhanced useRateLimit with countdown timer and amber UI banner

Phase 3: Fix useHats race condition and parallelize ERC detection
- Wrap async functions in useCallback with AbortSignal to prevent infinite
  re-render loops caused by unstable function references in useEffect deps
- Add AbortController cleanup to all effects
- Remove duplicate cache check in useEnhancedNFTValidation

Phase 4: Add ARIA accessibility across all protocol config inputs
- ProtocolSelector: role="button", tabIndex, aria-pressed, keyboard handlers
- POAPConfig: Convert div onClick to semantic button for search results
- ProtocolConfigBase: Add errorId prop + role="alert" on error alerts
- All protocol configs: aria-invalid + aria-describedby on inputs
- NFTGateInput: aria-label on sliders, remove redundant aria-disabled

Phase 5: NFTSelector accessibility and Coming Soon features
- NFTCard: role="button", tabIndex, aria-pressed, keyboard handlers
- View toggle: aria-label, aria-pressed, role="group"
- Skip @tanstack/react-virtual (pagination already bounds DOM nodes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…lection-ux

refactor(client): overhaul gating selection UX, accessibility, and hook stability
coderabbitai[bot]
coderabbitai bot previously requested changes Feb 12, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (14)
client/__tests__/MetadataParser.test.tsx (2)

36-42: ⚠️ Potential issue | 🟡 Minor

Empty string src passed to next/image will error at runtime.

When parsed.image is "" (the undefined/empty/legacy test cases), Next.js <Image> requires a valid src prop and will throw. The tests for those cases (lines 51–65, 124–144) assert src="", but this would fail at render in a real Next.js environment. The tests likely only pass because the test environment stubs next/image. This is fine for unit testing, but worth noting that the component as written would break if used with real Next.js rendering.


5-27: ⚠️ Potential issue | 🔴 Critical

Import and test the actual parseJarMetadata utility instead of using an inline mock.

The test uses an inlined parseMetadata function that differs significantly from the real parseJarMetadata in client/lib/jar/metadata-utils.ts:

  • Real implementation returns 5 fields (isV2Format included); inline mock returns 4
  • Real implementation has length-based legacy string logic (≤50 chars → name; >50 chars → description); inline mock always sets name with empty description
  • Real implementation falls back to an empty description on JSON parse failure; inline mock falls back to the raw string
  • Real implementation validates JSON is an object; inline mock doesn't
  • Real implementation handles empty/trimmed strings; inline mock doesn't

This means the tests exercise completely different logic than production code and will not catch regressions in the actual parseJarMetadata function. Import parseJarMetadata from @/lib/jar/metadata-utils and test the real implementation directly.

Additionally, fix inconsistent indentation in the assertion blocks (lines 77–85, 101–107, 157–165 have extra tab indentation compared to surrounding code).
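To make the divergence concrete, here is a minimal sketch of the behavior the real `parseJarMetadata` is described as having — the field names, the 50-character threshold, and the fallback rules are taken from the summary above, not from the actual source of `client/lib/jar/metadata-utils.ts`, so treat this as illustrative only:

```typescript
// Illustrative sketch of the described parseJarMetadata behavior.
interface ParsedJarMetadata {
	name: string;
	description: string;
	image: string;
	externalLink: string;
	isV2Format: boolean;
}

function parseJarMetadataSketch(raw: string): ParsedJarMetadata {
	const empty: ParsedJarMetadata = {
		name: "",
		description: "",
		image: "",
		externalLink: "",
		isV2Format: false,
	};
	const trimmed = raw.trim();
	if (!trimmed) return empty; // empty/whitespace-only input

	try {
		const parsed: unknown = JSON.parse(trimmed);
		// Validate that the JSON is actually an object, not e.g. a bare number.
		if (typeof parsed === "object" && parsed !== null && !Array.isArray(parsed)) {
			const obj = parsed as Record<string, unknown>;
			return {
				name: typeof obj.name === "string" ? obj.name : "",
				description: typeof obj.description === "string" ? obj.description : "",
				image: typeof obj.image === "string" ? obj.image : "",
				externalLink: typeof obj.external_link === "string" ? obj.external_link : "",
				isV2Format: true,
			};
		}
	} catch {
		// Not JSON: fall through to legacy-string handling below.
	}

	// Legacy plain-string metadata: short strings read as a name,
	// long strings as a description (description stays empty otherwise).
	return trimmed.length <= 50
		? { ...empty, name: trimmed }
		: { ...empty, description: trimmed };
}
```

A test importing the real `parseJarMetadata` would exercise each of these branches instead of re-implementing them inline.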

client/hooks/nft/useHats.ts (1)

164-167: ⚠️ Potential issue | 🔴 Critical

Replace direct BigInt() calls with validated conversion to prevent runtime crashes on invalid hatId.

BigInt(options.hatId) throws a SyntaxError if hatId is empty, non-numeric, or malformed. This crashes the component during render. Validate the input before conversion.

Proposed approach
- ? [userAddress, BigInt(options.hatId)]
+ ? [userAddress, safeBigInt(options.hatId, 0n)]
  : undefined,

+ function safeBigInt(value: string | undefined, fallback: bigint): bigint {
+   if (!value) return fallback;
+   try {
+     return BigInt(value);
+   } catch {
+     return fallback;
+   }
+ }

Or validate before the hook call in the parent component.

Affects lines 166, 186, 208, 228.

Also: Replace console.error calls at lines 265, 303, 330 with the logger — use log.error() from @/lib/app/logger per coding guidelines.

client/config/supported-networks.ts (1)

333-336: ⚠️ Potential issue | 🔴 Critical

Bug: Regular string instead of template literal — Alchemy key not interpolated.

Line 335 uses double quotes instead of backticks, so ${alchemyId} is sent as the literal string "${alchemyId}" rather than the actual API key value. This breaks the Optimism Sepolia Alchemy RPC endpoint.

🐛 Fix: use template literal
 [optimismSepolia.id]: createFallbackTransport(
 	[
 		"https://op-sepolia-pokt.nodies.app",
-		"https://opt-sepolia.g.alchemy.com/v2/${alchemyId}",
+		`https://opt-sepolia.g.alchemy.com/v2/${alchemyId}`,
 	],
client/hooks/nft/useEnhancedNFTValidation.ts (1)

317-375: ⚠️ Potential issue | 🟡 Minor

useBatchNFTValidation is a non-functional stub — addresses without cache hits are never validated.

Non-cached addresses are marked isLoading: true (line 352) but no actual validation is ever performed. Additionally, contractAddresses (an array) in the useCallback dependency (line 362) will cause the callback to recreate on every render if the caller passes a new array reference, potentially triggering infinite effect re-runs via the useEffect on line 364.

If this hook is not yet ready, consider removing it or marking it explicitly as unimplemented to prevent consumers from relying on it.

client/components/nft/NFTSelector.tsx (1)

162-175: ⚠️ Potential issue | 🟠 Major

Image with fill requires position: relative on its parent container.

The parent <div> at line 162 (className="w-16 h-16 rounded-lg overflow-hidden bg-gray-100 flex-shrink-0") doesn't include relative positioning. When using fill, Next.js Image renders with position: absolute, so it needs a positioned ancestor to size correctly. The same issue exists in the grid view at line 239.

Proposed fix
-<div className="w-16 h-16 rounded-lg overflow-hidden bg-gray-100 flex-shrink-0">
+<div className="relative w-16 h-16 rounded-lg overflow-hidden bg-gray-100 flex-shrink-0">
client/components/nft/protocols/HatsConfig.tsx (1)

87-87: ⚠️ Potential issue | 🟡 Minor

Use the project logger instead of console.error.

As per coding guidelines: "Use log from @/lib/app/logger for logging, never use console.log, console.error, or console.warn".

Proposed fix
+import { log } from "@/lib/app/logger";
 ...
-console.error("Error validating Hat:", err);
+log.error("Error validating Hat:", err);
client/components/nft/protocols/POAPConfig.tsx (1)

83-83: ⚠️ Potential issue | 🟡 Minor

Use the project logger instead of console.error.

As per coding guidelines: "Use log from @/lib/app/logger for logging, never use console.log, console.error, or console.warn".

client/hooks/jar/useJarCreation.ts (4)

339-339: ⚠️ Potential issue | 🟡 Minor

Multiple console.error/console.log calls violate the logging guideline.

Lines 339, 370, 446, and 470 use console.error or console.log. Per coding guidelines: "Use log from @/lib/app/logger for logging, never use console.log, console.error, or console.warn".

Proposed fix (example for line 339)
+import { log } from "@/lib/app/logger";
 ...
-console.error("Error creating jar:", error);
+log.error("Error creating jar:", error);

Apply the same pattern to lines 370, 446, and 470.

Also applies to: 370-370, 446-446, 470-470


190-219: 🛠️ Refactor suggestion | 🟠 Major

Duplicate NFT address validation may produce redundant error messages.

validateStep3 validates NFT addresses inline (lines 198-211) and then also calls getAccessConfigValidationError(values) (line 213), which re-validates NFT addresses (see createV2CreateArgs.ts lines 180-188). For AccessType.NFTGated, users may see both "At least one NFT address is required..." and the same error from getAccessConfigValidationError.

Consider removing the inline NFT address checks and relying solely on getAccessConfigValidationError, or skipping getAccessConfigValidationError for NFTGated since the inline check is more specific.

Proposed fix — remove inline duplication
 	const validateStep3 = useCallback((): {
 		isValid: boolean;
 		errors: string[];
 	} => {
 		const values = form.getValues();
-		const { accessType, nftAddresses } = values;
 		const errors: string[] = [];
 
-		if (accessType === AccessType.NFTGated) {
-			if (nftAddresses.length === 0) {
-				errors.push(
-					"At least one NFT address is required for NFT-gated access",
-				);
-			}
-
-			for (const addr of nftAddresses) {
-				if (!isAddress(addr)) {
-					errors.push(`NFT address is not a valid Ethereum address`);
-					break;
-				}
-			}
-		}
-
 		const accessValidationError = getAccessConfigValidationError(values);
 		if (accessValidationError) {
 			errors.push(accessValidationError);
 		}
 
 		return { isValid: errors.length === 0, errors };
 	}, [form]);

296-335: ⚠️ Potential issue | 🟡 Minor

Inconsistent indentation in V2/V1 branches of confirmSubmit.

The V2 branch (lines 296-312) is indented one extra level relative to the V1 branch (lines 313-335). This same pattern appears in the transaction-success effect at lines 397-423.


377-465: ⚠️ Potential issue | 🟡 Minor

triggerConfetti is missing from the effect dependency array.

The static analysis tool correctly flags that triggerConfetti is used at line 391 but not listed in the dependency array at line 455. Since triggerConfetti is defined in the component scope (not useCallback-wrapped), it gets a new reference each render. Wrap it in useCallback or move it inside the effect.

Proposed fix — move into the effect or wrap in useCallback
+	const triggerConfetti = useCallback(async () => {
+		try {
+			const confettiModule = await import("canvas-confetti");
+			const confetti = confettiModule.default;
+			confetti({
+				particleCount: 100,
+				spread: 70,
+				origin: { y: 0.6 },
+			});
+		} catch (error) {
+			log.debug("Confetti animation failed:", error);
+		}
+	}, []);
client/__tests__/hooks/useJarCreation.test.ts (1)

174-229: ⚠️ Potential issue | 🟠 Major

Missing Step 3 (access control) validation test.

Validation tests cover Step 1 (jar name), Step 2 (withdrawal amounts), and Step 4 (custom fee), but Step 3 (access control) is not tested. Access control is a critical path per project guidelines.

Would you like me to generate Step 3 validation tests covering NFTGated, POAP, Unlock, Hypercert, and Hats access types?

client/components/create/StepContent.tsx (1)

270-279: ⚠️ Potential issue | 🟡 Minor

Use the project logger instead of console.error.

Line 277 uses console.error directly, which violates the project's coding guidelines. Replace it with the logger from @/lib/app/logger.

Proposed fix
+import { log } from "@/lib/app/logger";
 // ... in the clipboard handler:
-								} catch (err) {
-									console.error("Failed to read clipboard:", err);
+								} catch (err) {
+									log.error("Failed to read clipboard:", err);
 								}

As per coding guidelines: "Use `log` from `@/lib/app/logger` for logging, never use `console.log`, `console.error`, or `console.warn`".

🤖 Fix all issues with AI agents
In `@client/__tests__/hooks/jar/createV2CreateArgs.test.ts`:
- Around line 1-212: Add unit tests for the exported
getAccessConfigValidationError function to achieve full coverage: create tests
that call getAccessConfigValidationError for each AccessType (NFTGated, POAP,
Unlock, Hypercert, Hats) covering both success and all failure/revert
conditions—e.g., NFTGated: missing address, invalid address format, multiple
addresses mismatch with nftTypes; POAP: missing eventId, non-numeric eventId,
invalid contract fallback; Unlock: missing or invalid unlockAddress; Hypercert:
missing hypercertAddress, missing/non-numeric hypercertTokenId, invalid
hypercertMinBalance; Hats: missing/non-numeric hatsId; use the same test harness
utilities as other tests (makeValues, AccessType/NFTType constants) and assert
returned error strings/undefined as appropriate from
getAccessConfigValidationError so each branch is exercised.

In `@client/__tests__/MetadataParser.test.tsx`:
- Around line 77-85: The test has inconsistent extra indentation for several
assertion blocks in client/__tests__/MetadataParser.test.tsx; fix the formatting
so the expect(...) assertions use the same indentation level as surrounding
asserts—specifically adjust the blocks using screen.getByTestId("description") /
screen.getByTestId("image") and the two other blocks (lines containing
expect(...).toHaveTextContent and expect(...).toHaveAttribute for the image and
other asserts) to remove the extra one-level indent so all expect and
screen.getByTestId calls align with adjacent assertions.

In `@client/components/create/StepContent.tsx`:
- Around line 607-615: handleAddNFT currently appends address to the
nftAddresses array without checking for duplicates; update handleAddNFT to first
normalize the incoming address (trim and lower/upper case consistently), read
currentAddresses via getValues("nftAddresses"), check if the normalized address
already exists (using the same normalization) and if so either return early or
update the corresponding entry in nftTypes, otherwise append the new address and
type using setValue("nftAddresses", ...) and setValue("nftTypes", ...);
reference the handleAddNFT function and the getValues/setValue usage as the
locations to implement this guard and address normalization.
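  The guard described above could look like the following sketch — `addNft`, `normalizeAddress`, and the `NftGateState` shape are illustrative names, not the component's real handlers; the component would read current values via `getValues("nftAddresses")` and write back via `setValue`:

```typescript
// Illustrative duplicate guard for NFT gate addresses.
type NftType = number;

interface NftGateState {
	nftAddresses: string[];
	nftTypes: NftType[];
}

// Compare addresses case-insensitively so checksummed and lowercase
// forms of the same address count as duplicates.
function normalizeAddress(address: string): string {
	return address.trim().toLowerCase();
}

// Returns the next state; on a duplicate, updates the existing entry's
// type instead of appending a second copy of the address. Stored
// addresses keep their original (trimmed) casing.
function addNft(state: NftGateState, address: string, type: NftType): NftGateState {
	const normalized = normalizeAddress(address);
	const existingIndex = state.nftAddresses.findIndex(
		(a) => normalizeAddress(a) === normalized,
	);
	if (existingIndex !== -1) {
		const nftTypes = [...state.nftTypes];
		nftTypes[existingIndex] = type;
		return { ...state, nftTypes };
	}
	return {
		nftAddresses: [...state.nftAddresses, address.trim()],
		nftTypes: [...state.nftTypes, type],
	};
}
```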

In `@client/components/nft/NFTGateInput.tsx`:
- Line 84: Replace all direct console calls in NFTGateInput (e.g., the
console.warn at the sanitization check and the console.* occurrences around
lines handling input validation and NFT gating logic) with the centralized
logger: import and use the named export log from '@/lib/app/logger' and call the
appropriate log methods (e.g., log.warn/log.error/log.info) matching the
original console usage; update every occurrence referenced in the review (the
console.warn in the sanitizing branch and the other console.* usages in the
input handling/validation functions) to maintain the same message text and
context but routed through log.
- Around line 475-478: The render is calling canMakeApiCall() (used when
building the useEnhancedNFTValidation argument and collection preview queries)
which mutates callsRef and thus records a timestamp on every render; change the
rate-limiter API so render-time checks are read-only and mutation only happens
when an actual network request is made: add a read-only predicate (e.g.,
canCall() or canMakeApiCallReadOnly) and a separate recorder (e.g.,
recordCall()) on the rate limiter, replace render-time calls in NFTGateInput
(places invoking canMakeApiCall() to decide query inputs) to use the read-only
predicate, and move the timestamp recording into the actual queryFn(s) used by
useEnhancedNFTValidation and the collection preview so recordCall() is invoked
only when performing the real API request.
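  The read-only/recording split described above can be sketched as follows — the `CallRateLimiter` class and its method names are illustrative, not the component's actual rate-limiter API:

```typescript
// Illustrative rate limiter with separate check and record operations.
class CallRateLimiter {
	private timestamps: number[] = [];

	constructor(
		private readonly maxCalls: number,
		private readonly windowMs: number,
	) {}

	// Read-only predicate: safe to call during render, mutates no state
	// beyond pruning expired timestamps.
	canCall(now: number = Date.now()): boolean {
		return this.recentCalls(now) < this.maxCalls;
	}

	// Recorder: invoked only inside the real queryFn, immediately before
	// the network request is made.
	recordCall(now: number = Date.now()): void {
		this.timestamps.push(now);
	}

	private recentCalls(now: number): number {
		// Drop timestamps that fell out of the window before counting.
		this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
		return this.timestamps.length;
	}
}
```

  With this shape, render-time code decides query inputs via `canCall()` without consuming budget, and `recordCall()` runs only when a request actually fires.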

In `@client/components/nft/protocols/HatsConfig.tsx`:
- Around line 74-85: The code inside the try block has inconsistent indentation
around the if (hatDetails) branch; normalize the indentation so the if
(hatDetails) block lines up with surrounding statements (e.g.,
setIsValidating(true) and setValidationError). Locate the block that uses
hatDetails, setSelectedHat, onConfigChange (with hatId: hatIdToValidate,
hatsContract/hatsAddress: contractAddress, hatsId: hatDetails.id) and the else
branch calling setValidationError and setSelectedHat(null), and re-indent those
lines to the same level as the surrounding try scope for consistent readability.

In `@client/components/nft/protocols/POAPConfig.tsx`:
- Around line 135-147: The try block in handleValidateEventId has inconsistent
indentation for the if (event) branch making it deeper than surrounding code;
fix by aligning the if (event) { ... } and its corresponding else { ... } with
the surrounding try block scope so setSelectedEvent, onConfigChange and
setValidationError lines have the same indentation level as the surrounding
statements in handleValidateEventId, leaving the internal block contents
indented one level under the if/else; check the block around
POAPProvider.getEventById(eventId), setSelectedEvent, onConfigChange,
setValidationError and POAP_TOKEN_ADDRESS to ensure consistent indentation
matching the HatsConfig pattern.

In `@client/components/nft/ProtocolSelector.tsx`:
- Around line 233-240: The effect in ProtocolSelector can cause a render loop
when onConfigChange or initialConfig are unstable; change the effect to use refs
for onConfigChange and initialConfig (e.g., onConfigChangeRef and
initialConfigRef updated in separate useEffect hooks) and add a fallbackRef that
stores the last-applied fallback method id so you only call setSelectedMethod
and onConfigChange when filteredMethods produces a new fallback different from
fallbackRef.current; keep the effect dependent only on filteredMethods and
selectedMethod and reference the refs inside it to avoid repeatedly invoking
onConfigChange with unstable references (update fallbackRef.current after
calling onConfigChange).

In `@client/hooks/jar/schemas/jarCreationSchema.ts`:
- Around line 31-50: protocolConfigSchema currently defines hatsId as z.number()
and omits hypercertTokenId, causing validation failures when strings from
HatsConfig.tsx (hatDetails.id) and HypercertConfig.tsx are passed; update
protocolConfigSchema to declare hatsId: z.string().optional() and add
hypercertTokenId: z.string().optional(), then keep the existing downstream
coercion in createV2CreateArgs.ts (where strings are converted to bigint) so
validation succeeds early while preserving the bigint conversion later.

In `@client/hooks/nft/useHats.ts`:
- Around line 254-260: The mapping in fetchUserHatsImpl currently hardcodes
UserHat flags (isWearing, isEligible, isInGoodStanding) to true for every hat
returned by getUserHats; change this so the mapped UserHat either leaves those
fields undefined/omitted or sets them based on the actual on-chain checks (calls
that use isWearerOfHat, isEligible, isInGoodStanding) instead of defaulting to
true—update the mapped creation in fetchUserHatsImpl (the mapped: UserHat[]
assignment) to populate those flags from the on-chain results or omit them until
verification completes so consumers don’t get false positives.
- Around line 135-142: The mountedRef declared in useHats (mountedRef and the
useEffect that toggles it) is dead code because no code, including the refetch
function, reads mountedRef.current; fix by either removing mountedRef and its
useEffect entirely, or (preferred) remove mountedRef and implement an
AbortController-based guard inside refetch: create an AbortController in refetch
or the surrounding effect, pass controller.signal to any async helpers called by
refetch, have those helpers respect signal.aborted (or throw on abort) and check
signal.aborted before calling setState (or catch the abort and skip state
updates), and ensure the controller is aborted in the cleanup so in-flight work
is cancelled; update any functions that currently call setState without an
AbortSignal to accept and use the signal.
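  A minimal sketch of the AbortController guard described above — `refetchWithAbort`, `load`, and `applyState` are stand-in names for the refetch function, its async helpers, and the state setters they call:

```typescript
// Illustrative abort guard: skip state updates once the signal is aborted.
async function refetchWithAbort<T>(
	load: (signal: AbortSignal) => Promise<T>,
	applyState: (value: T) => void,
	signal: AbortSignal,
): Promise<void> {
	const result = await load(signal);
	// If the effect cleaned up (and aborted) while the request was in
	// flight, drop the result instead of setting state on a stale render.
	if (!signal.aborted) {
		applyState(result);
	}
}
```

  The surrounding effect would create the `AbortController`, pass `controller.signal` in, and call `controller.abort()` in its cleanup.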
- Line 265: In useHats.ts inside the useHats hook, replace the three direct
console.error calls with the project logger: add an import for log from
"@/lib/app/logger" and change each console.error(...) to log.error(...)
(preserving the original message and error object) so all error logging uses the
centralized logger; ensure the import is added at the top of the file and remove
the console.* usages (references: the useHats hook function and its error
handling blocks where console.error is currently used).
- Around line 282-284: The current validation uses Number(hatId) which loses
precision for large uint256 values; update the validation around the hatId check
in useHats.ts to reject null/empty inputs and validate via BigInt: replace the
Number.isNaN(Number(hatId)) logic with a try/catch that attempts BigInt(hatId)
(and also ensure it’s non-negative if required), and throw the same "Hat ID must
be a valid number" Error on failure so large integer hat IDs are handled
correctly.
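  The BigInt-based validation described above could be sketched like this — `parseHatId` is an illustrative helper name, and the error message mirrors the one quoted in the review:

```typescript
// Illustrative hat ID validation: BigInt handles the full uint256 range,
// unlike Number, which silently loses precision above 2^53 - 1.
function parseHatId(hatId: string | null | undefined): bigint {
	const trimmed = hatId?.trim();
	if (!trimmed) {
		throw new Error("Hat ID must be a valid number");
	}
	let value: bigint;
	try {
		value = BigInt(trimmed); // throws SyntaxError on non-numeric input
	} catch {
		throw new Error("Hat ID must be a valid number");
	}
	if (value < 0n) {
		throw new Error("Hat ID must be a valid number");
	}
	return value;
}
```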

In `@contracts/src/CookieJar.sol`:
- Around line 424-438: The ERC-721 branch currently treats
nftRequirement.minBalance>0 as an implicit POAP discriminator which causes
misrouting when integrators set minBalance for ERC-721; update the
NftRequirement struct to add an explicit boolean (e.g., isPoapEventGate) or enum
to mark POAP/event gating, then change the branching in the ERC-721 handling:
keep tokenId==0 -> _requireAnyErc721Balance(...), if isPoapEventGate == true ->
call _requirePoapEventOwnership(nftContract, msg.sender, tokenId), otherwise
call _requireErc721TokenOwnership(nftContract, msg.sender, tokenId); also ensure
minBalance remains used only for ERC-1155 logic and add a validation/error for
invalid combinations if needed.

In `@docs/RELEASES.md`:
- Line 150: The RELEASES.md reference to MIGRATIONS.md is broken; either add a
new MIGRATIONS.md containing clear upgrade/migration steps between major
versions (including compatibility notes, breaking changes, required
config/schema changes and example commands) and leave the link in
docs/RELEASES.md, or remove/replace the link in docs/RELEASES.md to an existing
document; locate the reference via the string "MIGRATIONS.md" in
docs/RELEASES.md and ensure any new MIGRATIONS.md is committed alongside the
docs change and referenced with the same filename.
🧹 Nitpick comments (24)
docs/RELEASES.md (2)

23-23: Clarify the bunfig.toml change description.

The phrase "Removed Vitest preload that broke Bun's native test runner" is ambiguous—it could be read as the removal causing the breakage, when the intended meaning is that the preload itself was incompatible and removing it fixed the issue.

📝 Suggested clarification
-- **bunfig.toml**: Removed Vitest preload that broke Bun's native test runner; removed hardcoded `NODE_ENV=production` that prevented dev mode
+- **bunfig.toml**: Removed Vitest preload that was incompatible with Bun's native test runner; removed hardcoded `NODE_ENV=production` that prevented dev mode

94-94: Consider more concise wording.

The phrase "not compatible with" can be simplified to "incompatible with" for brevity.

✍️ Suggested improvement
-- New contract interfaces (not compatible with 2.x)
+- New contract interfaces (incompatible with 2.x)
client/hooks/nft/useHats.ts (2)

198-232: Two useReadContract calls (isInGoodStanding, hatSupply) have their results discarded.

Neither the returned data nor error from these calls is captured or used. This wastes RPC calls and is confusing to readers. Either capture and expose the results (e.g., feed them into UserHat or HatInfo) or remove these calls.


56-63: All chains map to the same Hats contract address — consider a single constant.

Every entry in HATS_CONTRACTS maps to "0x3bc1A0Ad72417f2d411118085256fC53CBdDd137". A single const HATS_CONTRACT_ADDRESS = "0x3bc1..." with a Set of supported chain IDs would be clearer and easier to maintain.

contracts/src/CookieJar.sol (1)

18-21: IPOAP interface defined inline — consider moving to a separate file.

The interface is small, but for maintainability and reuse (e.g., in tests or other contracts), extracting it into its own file (e.g., interfaces/IPOAP.sol) would be cleaner. Not blocking.

contracts/test/CookieJarFactory.t.sol (1)

98-109: testCreateCookieJar and testCreateCookieJarBasic are identical tests.

Both create the same jar with the same parameters and assert the same conditions. Consider removing the duplicate to keep the test suite clean.

Also applies to: 130-141

contracts/test/CookieJarProtocols.t.sol (1)

227-238: createUnlockAccessConfig uses mockPoap instead of mockUnlock — intentional workaround.

MockPublicLock lacks balanceOf(address) and ownerOf(uint256), so it would fail the constructor's _validateNftContract ERC721 interface check. Using mockPoap (which does implement these) is a valid test workaround.

However, this means the jarUnlock tests don't actually exercise the Unlock Protocol's getHasValidKey interface at all — they test generic ERC721-balance-based access on a POAP mock. If Unlock Protocol support is a real feature, consider either:

  1. Adding ERC721 stubs to MockPublicLock, or
  2. Documenting that Unlock gating requires an ERC721-compatible lock contract.
client/config/supported-networks.ts (1)

228-232: any[] type violates the strict TypeScript / no-any coding guideline.

The connectors array is typed as any[]. Consider using the appropriate wagmi connector type or at least a narrower type.

As per coding guidelines: "Enable TypeScript strict mode with no any types and exhaustive enums."

client/hooks/nft/useEnhancedNFTValidation.ts (1)

7-11: Cache uses any type for data, violating the strict TypeScript guideline.

The validation cache stores data: any — consider typing it with NFTValidationResult to maintain type safety throughout the caching layer.

♻️ Proposed fix
 const validationCache = new Map<
 	string,
-	{ data: any; timestamp: number; blockNumber: number }
+	{ data: NFTValidationResult; timestamp: number; blockNumber: number }
 >();

As per coding guidelines: "Use explicit types instead of any type in TypeScript."

client/components/nft/NFTGateInput.tsx (1)

803-824: Static analysis: role="status" div should use a semantic <output> element.

Biome flags that the element with role="status" (line 806) can be replaced with a semantic <output> element, which is the native HTML equivalent and improves accessibility without needing ARIA.

♻️ Replace div with output element
-				<div
+				<output
 					id="nft-address-errors"
-					role="status"
 					className={`text-xs px-3 py-2 rounded ${
 						...
 					}`}
 				>
 					...
-				</div>
+				</output>
client/components/nft/ProtocolSelector.tsx (1)

149-204: ConfigurationPanel uses any for config and onConfigChange props.

Both config?: any and onConfigChange: (config: any) => void bypass type safety. Consider using a discriminated union or the existing ProtocolConfig type to preserve type checking through the configuration panel.

As per coding guidelines: "Use explicit types instead of any type in TypeScript."

client/lib/blockchain/constants.ts (1)

9-11: Use viem's built-in zeroAddress instead of re-declaring.

viem exports zeroAddress (typed as Address), which would avoid maintaining a duplicate definition and aligns with the Web3 interaction patterns.

♻️ Proposed change
-import type { Address } from "viem";
+import { zeroAddress, type Address } from "viem";

-/** Canonical zero address. */
-export const ZERO_ADDRESS =
-	"0x0000000000000000000000000000000000000000" as Address;
+/** Canonical zero address. */
+export const ZERO_ADDRESS: Address = zeroAddress;
client/components/nft/NFTSelector.tsx (2)

57-81: Pervasive any types in helper functions violate TypeScript strict mode guidelines.

All six helper functions (getContractAddress, getNFTName, getNFTImage, getCollectionName, getTokenType, getVerifiedStatus) accept nft: any, and the NFTCard memo (line 99) also types its nft prop as any. This cascades any throughout the component.

Consider defining a union type or a minimal common interface that covers the NFT data shapes you support, and use that instead.

As per coding guidelines: "Enable TypeScript strict mode with no any types and exhaustive enums".


393-393: Use <fieldset> instead of <div role="group"> for semantic correctness.

Biome flags this: elements with role="group" should use the native <fieldset> element instead. You can visually hide the <legend> or style the fieldset to remove default borders.

Proposed fix
-<div className="flex gap-2" role="group" aria-label="View mode">
+<fieldset className="flex gap-2 border-0 p-0 m-0" aria-label="View mode">
   ...
-</div>
+</fieldset>
client/hooks/jar/createV2CreateArgs.ts (2)

255-260: Hardcoded multiTokenConfig defaults lack documentation.

maxSlippagePercent: 500n (5%), minSwapAmount: 0n, and defaultFee: 3000 (0.3% Uniswap fee tier) are magic numbers. A brief comment explaining these defaults and their units would improve maintainability.

Proposed fix
 	const multiTokenConfig: MultiTokenConfigArg = {
 		enabled: values.autoSwapEnabled,
-		maxSlippagePercent: 500n,
-		minSwapAmount: 0n,
-		defaultFee: 3000,
+		maxSlippagePercent: 500n,   // 5% max slippage (basis points)
+		minSwapAmount: 0n,          // No minimum swap threshold
+		defaultFee: 3000,           // 0.3% Uniswap fee tier (uint24)
 	};

162-173: Default fallback to Hats config may mask unrecognized access types.

The final return at line 163 serves as the default case for any AccessType not explicitly handled (including AccessType.Allowlist, which returns earlier). However, if new access types are added to the enum, they'll silently fall through to Hats configuration. Consider adding an explicit check or a never exhaustiveness guard.

Proposed fix
+	if (values.accessType === AccessType.Hats) {
+		return {
+			accessType: ContractAccessType.ERC1155,
+			nftRequirement: {
+				nftContract: toAddressOrZero(
+					protocolValue(protocolConfig, ["hatsAddress", "hatsContract"]),
+				),
+				tokenId: toBigIntOr(0n, protocolValue(protocolConfig, ["hatsId", "hatId"])),
+				minBalance: 1n,
+			},
+		};
+	}
+
 	// Hats maps to ERC-1155: tokenId == hatId, minBalance == 1.
-	return {
-		accessType: ContractAccessType.ERC1155,
-		nftRequirement: {
-			nftContract: toAddressOrZero(
-				protocolValue(protocolConfig, ["hatsAddress", "hatsContract"]),
-			),
-			tokenId: toBigIntOr(0n, protocolValue(protocolConfig, ["hatsId", "hatId"])),
-			minBalance: 1n,
-		},
-	};
+	// Exhaustiveness guard — should never reach here
+	const _exhaustive: never = values.accessType;
+	throw new Error(`Unhandled access type: ${_exhaustive}`);
client/__tests__/hooks/jar/createV2CreateArgs.test.ts (1)

52-53: Consider centralizing test addresses in a constants file.

Wallet and contract addresses (e.g., "0x1234567890123456789012345678901234567890", "0x1111111111111111111111111111111111111111") are scattered as inline strings. As per coding guidelines: "Test constants (wallet addresses, contract addresses) must be centralized in a constants.ts file".

Also applies to: 135-136, 151-152

client/hooks/jar/useJarCreation.ts (1)

411-418: (decodedLog.args as any) bypasses type safety.

At lines 412 and 417, decodedLog.args is cast to any to access .jarAddress and .cookieJarAddress. Since you already know the event name, you can narrow the type by checking decodedLog.eventName (which you do) and then using a type assertion to the specific event args type, or use a type guard.

As per coding guidelines: "Enable TypeScript strict mode with no any types".

client/components/create/StepContent.tsx (5)

134-148: Unnecessary async on handleCustomCurrencySubmit.

This callback doesn't contain any await expressions, so the async keyword is superfluous and causes it to return a Promise<void> instead of void.

Proposed fix
-	const handleCustomCurrencySubmit = useCallback(async () => {
+	const handleCustomCurrencySubmit = useCallback(() => {

563-574: as any casts bypass TypeScript strict mode.

Lines 572 and 582 cast protocolConfig objects to any, violating the strict typing guideline. This hides potential type mismatches between SelectorProtocolConfig and the form's protocolConfig field.

Consider defining a proper union type or extending the form schema's protocolConfig type to accommodate the method field, so both assignments can be type-safe without as any.

As per coding guidelines: Enable TypeScript strict mode with no any types and exhaustive enums


693-736: List key includes array index — items are not uniquely identified.

The key ${address}-${index} still relies on the array index, which the static analysis tool flagged. Since NFT addresses should be unique per the gate list (especially if the duplicate guard suggested above is added), using just the address as the key would be stable across additions/removals.

Proposed fix (assuming duplicates are prevented)
-										key={`${address}-${index}`}
+										key={address}

85-119: Hardcoded token addresses per chain — consider extracting to a config.

Lines 98–110 hardcode token addresses for chainId === 31337 and chainId === 84532. These would be more maintainable in a shared config/constants file (e.g., alongside supported-networks or token-utils), especially as more chains/tokens are added.


767-1057: Excessive watch() calls — consider a single watch() or useWatch.

FinalSettingsStep calls watch() individually 15+ times (lines 773–792). Each watch("field") subscribes independently. Using a single watch() (no args, returns entire form) or useWatch for a subset would reduce subscription overhead and improve readability.

Example consolidation
-	const enableCustomFee = watch("enableCustomFee");
-	const streamingEnabled = watch("streamingEnabled");
-	const requireStreamApproval = watch("requireStreamApproval");
-	// ... 12 more watch calls
+	const values = watch(); // subscribe to entire form
+	const {
+		enableCustomFee,
+		streamingEnabled,
+		requireStreamApproval,
+		autoSwapEnabled,
+		jarName,
+		jarOwnerAddress,
+		supportedCurrency,
+		accessType,
+		withdrawalOption,
+		fixedAmount,
+		maxWithdrawal,
+		withdrawalInterval,
+		strictPurpose,
+		emergencyWithdrawalEnabled,
+		oneTimeWithdrawal,
+		customFee,
+		maxStreamRate,
+		minStreamDuration,
+	} = values;
client/app/create/page.tsx (1)

51-64: isCurrentStepValid is recreated every render — consider useCallback.

This function is passed as a prop to CreateJarForm, so a new reference is created on every render of the page, causing CreateJarForm to re-render even if nothing changed. Wrapping it in useCallback with [currentStep, isV2Contract, validateStep1, validateStep2, validateStep3, validateStep4] as dependencies would stabilize the reference.

Proposed fix
-	const isCurrentStepValid = () => {
+	const isCurrentStepValid = useCallback(() => {
 		switch (currentStep) {
 			case 1:
 				return validateStep1().isValid;
 			case 2:
 				return validateStep2().isValid;
 			case 3:
 				return isV2Contract ? validateStep3().isValid : true;
 			case 4:
 				return validateStep4().isValid;
 			default:
 				return false;
 		}
-	};
+	}, [currentStep, isV2Contract, validateStep1, validateStep2, validateStep3, validateStep4]);

Based on learnings: "Use React hooks (useState, useMemo, useCallback) with proper dependency arrays and include all dependencies in exhaustive-deps"

Comment on lines +1 to +212
import { decodeFunctionData, encodeFunctionData } from "viem";
import { describe, expect, it, vi } from "vitest";
import { cookieJarFactoryAbi } from "@/generated";
import {
	FACTORY_DEFAULT_FEE_SENTINEL,
	buildV2CreateCookieJarArgs,
	getFeePercentageOnDeposit,
} from "@/hooks/jar/createV2CreateArgs";
import { ETH_ADDRESS, HATS_PROTOCOL_ADDRESS, POAP_TOKEN_ADDRESS } from "@/lib/blockchain/constants";

vi.mock("@/hooks/jar/schemas/jarCreationSchema", () => ({
	AccessType: {
		Allowlist: 0,
		NFTGated: 1,
		POAP: 2,
		Unlock: 3,
		Hypercert: 4,
		Hats: 5,
	},
	NFTType: {
		None: 0,
		ERC721: 1,
		ERC1155: 2,
	},
}));

const AccessType = {
	Allowlist: 0,
	NFTGated: 1,
	POAP: 2,
	Unlock: 3,
	Hypercert: 4,
	Hats: 5,
} as const;

const NFTType = {
	None: 0,
	ERC721: 1,
	ERC1155: 2,
} as const;

type JarCreationFormData = Parameters<typeof buildV2CreateCookieJarArgs>[0]["values"];
type ProtocolConfig = JarCreationFormData["protocolConfig"];

type MakeValuesOverrides = Partial<Omit<JarCreationFormData, "protocolConfig">> & {
	protocolConfig?: Partial<ProtocolConfig>;
};

function makeValues(overrides: MakeValuesOverrides = {}): JarCreationFormData {
	const baseValues: JarCreationFormData = {
		jarName: "Test Jar",
		jarOwnerAddress: "0x1234567890123456789012345678901234567890",
		supportedCurrency: ETH_ADDRESS,
		metadata: "Test metadata",
		imageUrl: "",
		externalLink: "",
		showCustomCurrency: false,
		customCurrencyAddress: "",
		withdrawalOption: 0,
		fixedAmount: "1",
		maxWithdrawal: "2",
		withdrawalInterval: "7",
		strictPurpose: true,
		emergencyWithdrawalEnabled: true,
		oneTimeWithdrawal: false,
		accessType: AccessType.Allowlist,
		nftAddresses: [] as string[],
		nftTypes: [] as number[],
		protocolConfig: { accessType: "Allowlist" },
		enableCustomFee: false,
		customFee: "",
		streamingEnabled: false,
		requireStreamApproval: true,
		maxStreamRate: "1.0",
		minStreamDuration: "1",
		autoSwapEnabled: false,
	};

	return {
		...baseValues,
		...overrides,
		protocolConfig: {
			...baseValues.protocolConfig,
			...overrides.protocolConfig,
		},
	};
}

describe("buildV2CreateCookieJarArgs", () => {
	it("encodes createCookieJar args compatible with current factory ABI", () => {
		const args = buildV2CreateCookieJarArgs({
			values: makeValues(),
			metadata: "metadata",
			parseAmount: (amount) => BigInt(Math.floor(Number.parseFloat(amount) * 1e18)),
		});

		const data = encodeFunctionData({
			abi: cookieJarFactoryAbi,
			functionName: "createCookieJar",
			args,
		});
		expect(data.startsWith("0x")).toBe(true);

		const decoded = decodeFunctionData({
			abi: cookieJarFactoryAbi,
			data,
		});
		expect(decoded.functionName).toBe("createCookieJar");
		expect(decoded.args?.length).toBe(3);
});

it("uses default fee sentinel when custom fee is disabled", () => {
const fee = getFeePercentageOnDeposit(makeValues({ enableCustomFee: false }));
expect(fee).toBe(FACTORY_DEFAULT_FEE_SENTINEL);
});

it("uses explicit custom fee when provided", () => {
const fee = getFeePercentageOnDeposit(
makeValues({ enableCustomFee: true, customFee: "2.5" }),
);
expect(fee).toBe(250n);
});

it("supports explicit zero-percent fee", () => {
const fee = getFeePercentageOnDeposit(
makeValues({ enableCustomFee: true, customFee: "0" }),
);
expect(fee).toBe(0n);
});

it("maps NFT ERC721 access to contract enum ERC721", () => {
const [jarConfig, accessConfig] = buildV2CreateCookieJarArgs({
values: makeValues({
accessType: AccessType.NFTGated,
nftAddresses: ["0x1111111111111111111111111111111111111111"],
nftTypes: [NFTType.ERC721],
}),
metadata: "metadata",
parseAmount: () => 1n,
});
expect(jarConfig.accessType).toBe(1);
expect(accessConfig.nftRequirement.nftContract).toBe(
"0x1111111111111111111111111111111111111111",
);
});

it("maps NFT ERC1155 access to contract enum ERC1155", () => {
const [jarConfig] = buildV2CreateCookieJarArgs({
values: makeValues({
accessType: AccessType.NFTGated,
nftAddresses: ["0x1111111111111111111111111111111111111111"],
nftTypes: [NFTType.ERC1155],
}),
metadata: "metadata",
parseAmount: () => 1n,
});
expect(jarConfig.accessType).toBe(2);
});

it("maps POAP, Unlock, Hypercert, and Hats to supported contract enum domain", () => {
const [poapJar, poapAccess] = buildV2CreateCookieJarArgs({
values: makeValues({
accessType: AccessType.POAP,
protocolConfig: { accessType: "POAP", eventId: "1234" },
}),
metadata: "metadata",
parseAmount: () => 1n,
});
expect(poapJar.accessType).toBe(1);
expect(poapAccess.nftRequirement.nftContract).toBe(POAP_TOKEN_ADDRESS);
expect(poapAccess.nftRequirement.tokenId).toBe(1234n);
expect(poapAccess.nftRequirement.minBalance).toBe(1n);

const [unlockJar] = buildV2CreateCookieJarArgs({
values: makeValues({
accessType: AccessType.Unlock,
protocolConfig: {
accessType: "Unlock",
unlockAddress: "0x2222222222222222222222222222222222222222",
},
}),
metadata: "metadata",
parseAmount: () => 1n,
});
expect(unlockJar.accessType).toBe(1);

const [hypercertJar] = buildV2CreateCookieJarArgs({
values: makeValues({
accessType: AccessType.Hypercert,
protocolConfig: {
accessType: "Hypercert",
hypercertAddress: "0x3333333333333333333333333333333333333333",
hypercertTokenId: "42",
hypercertMinBalance: 5,
},
}),
metadata: "metadata",
parseAmount: () => 1n,
});
expect(hypercertJar.accessType).toBe(2);

const [hatsJar, hatsAccess] = buildV2CreateCookieJarArgs({
values: makeValues({
accessType: AccessType.Hats,
protocolConfig: { accessType: "Hats", hatsId: 99 },
}),
metadata: "metadata",
parseAmount: () => 1n,
});
expect(hatsJar.accessType).toBe(2);
expect(hatsAccess.nftRequirement.nftContract).toBe(HATS_PROTOCOL_ADDRESS);
});

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if getAccessConfigValidationError is tested elsewhere
rg -n 'getAccessConfigValidationError' --type=ts client/__tests__/

Repository: greenpill-dev-guild/cookie-jar

Length of output: 56


🏁 Script executed:

#!/bin/bash
# Check the source module to see what's exported
rg -n 'getAccessConfigValidationError' --type=ts client/src/

Repository: greenpill-dev-guild/cookie-jar

Length of output: 125


🏁 Script executed:

#!/bin/bash
# Look at the actual exports from createV2CreateArgs module
head -50 client/src/hooks/jar/createV2CreateArgs.ts

Repository: greenpill-dev-guild/cookie-jar

Length of output: 175


🏁 Script executed:

#!/bin/bash
# Discover repository structure
git ls-files | head -30

Repository: greenpill-dev-guild/cookie-jar

Length of output: 856


🏁 Script executed:

#!/bin/bash
# Find the actual createV2CreateArgs module
find . -name "createV2CreateArgs*" -type f

Repository: greenpill-dev-guild/cookie-jar

Length of output: 170


🏁 Script executed:

#!/bin/bash
# Search for getAccessConfigValidationError anywhere in the repo
rg -n 'getAccessConfigValidationError'

Repository: greenpill-dev-guild/cookie-jar

Length of output: 248


🏁 Script executed:

#!/bin/bash
# Read the source module to see what functions are exported
cat client/hooks/jar/createV2CreateArgs.ts

Repository: greenpill-dev-guild/cookie-jar

Length of output: 8877


🏁 Script executed:

#!/bin/bash
# Check the test file to see what's being tested
cat client/__tests__/hooks/jar/createV2CreateArgs.test.ts

Repository: greenpill-dev-guild/cookie-jar

Length of output: 5961


Add comprehensive tests for getAccessConfigValidationError, a critical access-control validation function.

The exported getAccessConfigValidationError function validates access requirements for each protocol (NFTGated, POAP, Unlock, Hypercert, Hats) but has no test coverage. This function handles critical access control validation and must have 100% coverage per project standards, including:

  • NFTGated: address presence and validity validation
  • POAP: event ID requirement and contract address validation
  • Unlock: contract address requirement and validity
  • Hypercert: contract address and token ID requirement validation
  • Hats: hat ID requirement validation

Add tests covering success cases and all revert conditions for each access type.

🤖 Prompt for AI Agents
In `@client/__tests__/hooks/jar/createV2CreateArgs.test.ts` around lines 1 - 212,
add unit tests for the exported getAccessConfigValidationError function to
achieve full coverage: create tests that call getAccessConfigValidationError for
each AccessType (NFTGated, POAP, Unlock, Hypercert, Hats) covering both success
and all failure/revert conditions—e.g., NFTGated: missing address, invalid
address format, multiple addresses mismatch with nftTypes; POAP: missing
eventId, non-numeric eventId, invalid contract fallback; Unlock: missing or
invalid unlockAddress; Hypercert: missing hypercertAddress, missing/non-numeric
hypercertTokenId, invalid hypercertMinBalance; Hats: missing/non-numeric hatsId;
use the same test harness utilities as other tests (makeValues,
AccessType/NFTType constants) and assert returned error strings/undefined as
appropriate from getAccessConfigValidationError so each branch is exercised.
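
As a starting point, the missing coverage could follow the sketch below. The validator here is a hypothetical stand-in with invented error strings; the real `getAccessConfigValidationError` exported from `client/hooks/jar/createV2CreateArgs.ts` should be imported instead, and its actual signature and messages checked before porting these cases.

```typescript
// Hypothetical stand-in mirroring the branches the review asks to cover.
// Names and messages are illustrative — swap in the real export from
// client/hooks/jar/createV2CreateArgs.ts and its actual error strings.
type ProtocolConfig = {
	accessType: string;
	eventId?: string;
	unlockAddress?: string;
	hypercertAddress?: string;
	hypercertTokenId?: string;
	hatsId?: number;
};

const isHexAddress = (s?: string): boolean =>
	typeof s === "string" && /^0x[0-9a-fA-F]{40}$/.test(s);

function getAccessConfigValidationError(config: ProtocolConfig): string | undefined {
	switch (config.accessType) {
		case "POAP":
			return config.eventId && /^\d+$/.test(config.eventId)
				? undefined
				: "POAP gate requires a numeric event ID";
		case "Unlock":
			return isHexAddress(config.unlockAddress)
				? undefined
				: "Unlock gate requires a valid lock address";
		case "Hypercert":
			if (!isHexAddress(config.hypercertAddress)) return "Hypercert gate requires a valid contract address";
			return config.hypercertTokenId ? undefined : "Hypercert gate requires a token ID";
		case "Hats":
			return config.hatsId !== undefined ? undefined : "Hats gate requires a hat ID";
		default:
			return undefined;
	}
}

// One success and one failure case per access type — extend per branch.
console.log(getAccessConfigValidationError({ accessType: "POAP", eventId: "1234" })); // undefined
console.log(getAccessConfigValidationError({ accessType: "POAP", eventId: "abc" })); // error string
console.log(getAccessConfigValidationError({ accessType: "Hats" })); // error string
```

Each branch maps to one success assertion (`undefined`) and one failure assertion (a string), matching the review's request to exercise every revert condition.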

Comment on lines +607 to +615
const handleAddNFT = useCallback(
(address: string, type: number) => {
const currentAddresses = getValues("nftAddresses");
const currentTypes = getValues("nftTypes");
setValue("nftAddresses", [...currentAddresses, address]);
setValue("nftTypes", [...currentTypes, type as NFTType]);
},
[getValues, setValue],
);

⚠️ Potential issue | 🟡 Minor

No duplicate-address guard in handleAddNFT.

handleAddNFT blindly appends to nftAddresses without checking if the same contract address already exists. This could lead to duplicate entries in the NFT gate list, which would be confusing for users and could cause unexpected behavior at the contract level.

Proposed fix
 	const handleAddNFT = useCallback(
 		(address: string, type: number) => {
 			const currentAddresses = getValues("nftAddresses");
+			if (currentAddresses.includes(address)) {
+				return; // already added
+			}
 			const currentTypes = getValues("nftTypes");
 			setValue("nftAddresses", [...currentAddresses, address]);
 			setValue("nftTypes", [...currentTypes, type as NFTType]);
 		},
 		[getValues, setValue],
 	);
🤖 Prompt for AI Agents
In `@client/components/create/StepContent.tsx` around lines 607 - 615,
handleAddNFT currently appends address to the nftAddresses array without
checking for duplicates; update handleAddNFT to first normalize the incoming
address (trim and lower/upper case consistently), read currentAddresses via
getValues("nftAddresses"), check if the normalized address already exists (using
the same normalization) and if so either return early or update the
corresponding entry in nftTypes, otherwise append the new address and type using
setValue("nftAddresses", ...) and setValue("nftTypes", ...); reference the
handleAddNFT function and the getValues/setValue usage as the locations to
implement this guard and address normalization.
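
For reference, the case-insensitive comparison the prompt describes can be sketched as a pure helper (hypothetical shape — the real handler mutates form state via `setValue`, and EIP-55 checksummed addresses differ only in letter case, so lowercasing before comparison is the right equality):

```typescript
// Sketch: reject duplicate NFT gate addresses case-insensitively.
// Pure-function shape for clarity; the real handler writes via setValue().
const normalize = (addr: string) => addr.trim().toLowerCase();

function addNft(addresses: string[], types: number[], address: string, type: number) {
	const key = normalize(address);
	// EIP-55 checksums vary casing of the same address — compare normalized.
	if (addresses.some((a) => normalize(a) === key)) {
		return { addresses, types }; // duplicate: leave state unchanged
	}
	return { addresses: [...addresses, address], types: [...types, type] };
}

const once = addNft([], [], "0xAAaa000000000000000000000000000000000001", 1);
const twice = addNft(once.addresses, once.types, "0xaaAA000000000000000000000000000000000001", 2);
console.log(twice.addresses.length); // 1 — same address, different casing, rejected
```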

Comment on lines +475 to 478
// Contract validation
const { isValid, detectedType, isLoading, error } = useEnhancedNFTValidation(
canMakeApiCall() ? debouncedAddress : "",
);

⚠️ Potential issue | 🟠 Major

Bug: canMakeApiCall() called during render has a write side-effect, exhausting the rate limit on re-renders.

canMakeApiCall() pushes a timestamp into callsRef every time it returns true. It's called during render (line 477) to compute the useEnhancedNFTValidation argument, and again at lines 491 and 557 for the collection preview query. Since React re-renders on every state change, the rate limiter counts renders — not actual API calls — and will prematurely block requests well before the intended 30/minute threshold.

Separate the "check" from the "record" — e.g., add a read-only canCall() predicate and only record a call inside the actual queryFn where an API request is made.

🐛 Sketch of a fix for useRateLimit
 const useRateLimit = (maxCalls: number, windowMs: number) => {
 	const callsRef = useRef<number[]>([]);
 	const [resetIn, setResetIn] = useState(0);
 	const timerRef = useRef<ReturnType<typeof setInterval> | null>(null);

-	const isAllowed = useCallback(() => {
+	// Read-only check — safe to call during render
+	const canCall = useCallback(() => {
+		const now = Date.now();
+		const cutoff = now - windowMs;
+		const activeCalls = callsRef.current.filter((time) => time > cutoff);
+		return activeCalls.length < maxCalls;
+	}, [maxCalls, windowMs]);
+
+	// Records a call — only invoke when actually making an API request
+	const recordCall = useCallback(() => {
 		const now = Date.now();
 		const cutoff = now - windowMs;
-
 		callsRef.current = callsRef.current.filter((time) => time > cutoff);
-
-		if (callsRef.current.length < maxCalls) {
-			callsRef.current.push(now);
-			return true;
-		}
-
-		return false;
-	}, [maxCalls, windowMs]);
+		callsRef.current.push(now);
+	}, [windowMs]);

 	// ...countdown effect unchanged...

-	return { isAllowed, resetIn };
+	return { canCall, recordCall, resetIn };
 };

Then use canCall() for the render-time guard and recordCall() inside queryFn.
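
The invariant is easier to see outside React: checking must not mutate. A framework-free sketch of the same split (an injectable clock stands in for `Date.now` so the window can be simulated):

```typescript
// canCall() is a pure read; recordCall() is the only mutation and should be
// invoked exactly once per real API request — never during render.
function createRateLimiter(maxCalls: number, windowMs: number, now: () => number = Date.now) {
	let calls: number[] = [];
	const prune = () => { calls = calls.filter((t) => t > now() - windowMs); };
	return {
		canCall(): boolean { prune(); return calls.length < maxCalls; }, // read-only
		recordCall(): void { prune(); calls.push(now()); },              // mutation
	};
}

let t = 0;
const limiter = createRateLimiter(2, 1000, () => t);
for (let i = 0; i < 100; i++) limiter.canCall(); // 100 "renders" leave the budget untouched
console.log(limiter.canCall()); // true
limiter.recordCall();
limiter.recordCall(); // two real API calls consume the budget
console.log(limiter.canCall()); // false
t = 1001; // window elapses
console.log(limiter.canCall()); // true
```

With the buggy coupled version, the 100-iteration loop alone would exhaust the budget before any request was made.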

🤖 Prompt for AI Agents
In `@client/components/nft/NFTGateInput.tsx` around lines 475 - 478, the render is
calling canMakeApiCall() (used when building the useEnhancedNFTValidation
argument and collection preview queries) which mutates callsRef and thus records
a timestamp on every render; change the rate-limiter API so render-time checks
are read-only and mutation only happens when an actual network request is made:
add a read-only predicate (e.g., canCall() or canMakeApiCallReadOnly) and a
separate recorder (e.g., recordCall()) on the rate limiter, replace render-time
calls in NFTGateInput (places invoking canMakeApiCall() to decide query inputs)
to use the read-only predicate, and move the timestamp recording into the actual
queryFn(s) used by useEnhancedNFTValidation and the collection preview so
recordCall() is invoked only when performing the real API request.

Comment on lines +254 to +260
const mapped: UserHat[] = result.hats.map((hat) => ({
hatId: hat.id,
hat: toHatInfo(hat),
isWearing: true,
isEligible: true,
isInGoodStanding: true,
}));

⚠️ Potential issue | 🟠 Major

Hardcoded isWearing: true, isEligible: true, isInGoodStanding: true is misleading.

fetchUserHatsImpl maps every hat returned by getUserHats with all status flags set to true, regardless of actual on-chain state. Since the hook already performs on-chain isWearerOfHat / isEligible / isInGoodStanding reads, these fields should either reflect those reads or be left undefined/omitted until verified. Consumers relying on UserHat.isEligible will get a false positive.

🤖 Prompt for AI Agents
In `@client/hooks/nft/useHats.ts` around lines 254 - 260, the mapping in
fetchUserHatsImpl currently hardcodes UserHat flags (isWearing, isEligible,
isInGoodStanding) to true for every hat returned by getUserHats; change this so
the mapped UserHat either leaves those fields undefined/omitted or sets them
based on the actual on-chain checks (calls that use isWearerOfHat, isEligible,
isInGoodStanding) instead of defaulting to true—update the mapped creation in
fetchUserHatsImpl (the mapped: UserHat[] assignment) to populate those flags
from the on-chain results or omit them until verification completes so consumers
don’t get false positives.
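
One shape for the fix, sketched with a hypothetical `checkHatStatus` standing in for the hook's existing `isWearerOfHat` / `isEligible` / `isInGoodStanding` reads (names here are illustrative, not the real Hats SDK API):

```typescript
// Sketch: populate status flags from verification results instead of
// hardcoding true. checkHatStatus is a placeholder for the hook's
// existing on-chain reads.
type HatStatus = { isWearing: boolean; isEligible: boolean; isInGoodStanding: boolean };
type UserHat = { hatId: string } & HatStatus;

async function mapUserHats(
	hatIds: string[],
	checkHatStatus: (hatId: string) => Promise<HatStatus>,
): Promise<UserHat[]> {
	return Promise.all(
		hatIds.map(async (hatId) => {
			const status = await checkHatStatus(hatId); // real on-chain reads go here
			return { hatId, ...status };
		}),
	);
}

// Usage with a stubbed verifier: flags now reflect the check, not a default.
const stub = async (hatId: string): Promise<HatStatus> => ({
	isWearing: hatId === "0x1",
	isEligible: true,
	isInGoodStanding: hatId === "0x1",
});
mapUserHats(["0x1", "0x2"], stub).then((hats) => console.log(hats[1].isWearing)); // false
```

Alternatively, if the on-chain reads resolve later in the hook's lifecycle, the flags can be typed optional and left `undefined` until verification completes.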

Comment on lines +424 to +438
// tokenId == 0 means "any token from contract" (generic ERC-721 gate)
if (nftRequirement.tokenId == 0) {
// Check if user owns at least one token from the contract
try IERC721(nftRequirement.nftContract).balanceOf(msg.sender) returns (uint256 balance) {
if (balance == 0) revert CookieJarLib.NotAuthorized();
} catch {
revert CookieJarLib.NotAuthorized();
}
} else {
// Check ownership of specific token ID
try IERC721(nftRequirement.nftContract).ownerOf(nftRequirement.tokenId) returns (address owner) {
if (owner != msg.sender) revert CookieJarLib.NotAuthorized();
} catch {
revert CookieJarLib.NotAuthorized();
}
_requireAnyErc721Balance(nftRequirement.nftContract, msg.sender);
return;
}

// POAP event gate marker: tokenId=eventId and minBalance>0.
// This preserves ABI shape while enabling strict event-level enforcement.
if (nftRequirement.minBalance > 0) {
_requirePoapEventOwnership(nftRequirement.nftContract, msg.sender, nftRequirement.tokenId);
return;
}

// Otherwise require ownership of one specific ERC-721 token ID.
_requireErc721TokenOwnership(nftRequirement.nftContract, msg.sender, nftRequirement.tokenId);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find NftRequirement struct definition
ast-grep --pattern 'struct NftRequirement {
  $$$
}'

Repository: greenpill-dev-guild/cookie-jar

Length of output: 56


🏁 Script executed:

# Find all minBalance usages in Solidity files
rg -n 'minBalance' -t sol -C 3

Repository: greenpill-dev-guild/cookie-jar

Length of output: 101


🏁 Script executed:

# First, find all .sol files
fd -e sol

echo "---"

# Search for NftRequirement struct definition
rg -n 'struct NftRequirement' -A 10

Repository: greenpill-dev-guild/cookie-jar

Length of output: 1929


🏁 Script executed:

# Search for all minBalance usages more broadly
rg -n 'minBalance' -A 2 -B 2

Repository: greenpill-dev-guild/cookie-jar

Length of output: 14184


🏁 Script executed:

# Find the implementations of the validation functions
rg -n '_requirePoapEventOwnership|_requireAnyErc721Balance|_requireErc721TokenOwnership' -A 5

Repository: greenpill-dev-guild/cookie-jar

Length of output: 3288


🏁 Script executed:

# Check what happens with ERC1155 - look at the full validation logic around line 440-450
sed -n '420,455p' contracts/src/CookieJar.sol

Repository: greenpill-dev-guild/cookie-jar

Length of output: 1883


MinBalance field is overloaded and causes silent misconfiguration for standard ERC721 gates with specific token requirements.

The three-way branching logic for ERC721 validation relies on minBalance > 0 as an implicit discriminator for POAP event-level gating. This prevents expressing "require ownership of specific tokenId X with minimum balance check" for standard ERC721—if an integrator sets tokenId=X and minBalance=1 to enforce a minimum balance requirement, the contract silently enters the POAP code path and calls tokenDetailsOfOwnerByEvent on a non-POAP contract, causing NFTValidationFailed with no clear indication of misconfiguration.

The struct comment states minBalance is "for ERC1155," yet the code uses it as a POAP discriminator in the ERC721 path, violating the explicit-over-implicit principle. Introduce an explicit field (e.g., bool isPoapEventGate or an enum) in NftRequirement to cleanly separate POAP-specific logic from balance requirements.

🤖 Prompt for AI Agents
In `@contracts/src/CookieJar.sol` around lines 424 - 438, the ERC-721 branch
currently treats nftRequirement.minBalance>0 as an implicit POAP discriminator
which causes misrouting when integrators set minBalance for ERC-721; update the
NftRequirement struct to add an explicit boolean (e.g., isPoapEventGate) or enum
to mark POAP/event gating, then change the branching in the ERC-721 handling:
keep tokenId==0 -> _requireAnyErc721Balance(...), if isPoapEventGate == true ->
call _requirePoapEventOwnership(nftContract, msg.sender, tokenId), otherwise
call _requireErc721TokenOwnership(nftContract, msg.sender, tokenId); also ensure
minBalance remains used only for ERC-1155 logic and add a validation/error for
invalid combinations if needed.

docs/RELEASES.md Outdated

## Upgrade Guides

See [MIGRATIONS.md](MIGRATIONS.md) for upgrade instructions between major versions.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f "MIGRATIONS\.md"

Repository: greenpill-dev-guild/cookie-jar

Length of output: 56


Create MIGRATIONS.md or update the documentation reference.

Line 150 references MIGRATIONS.md for upgrade instructions, but this file does not exist in the repository. Either create the file with migration guidance between major versions or remove the broken link from the documentation.

🤖 Prompt for AI Agents
In `@docs/RELEASES.md` at line 150, the RELEASES.md reference to MIGRATIONS.md is
broken; either add a new MIGRATIONS.md containing clear upgrade/migration steps
between major versions (including compatibility notes, breaking changes,
required config/schema changes and example commands) and leave the link in
docs/RELEASES.md, or remove/replace the link in docs/RELEASES.md to an existing
document; locate the reference via the string "MIGRATIONS.md" in
docs/RELEASES.md and ensure any new MIGRATIONS.md is committed alongside the
docs change and referenced with the same filename.

@Oba-One Oba-One merged commit 8803392 into dev Feb 12, 2026
10 checks passed