release: v3.2.0 — NFT gating hardening, Unicode validation, CI fixes#28
… scripts

This commit transitions the project from using pnpm to bun for dependency management and script execution. Key changes include updating installation commands, modifying script definitions in package.json, and adjusting documentation to reflect the new setup. The migration aims to streamline the development process and improve performance. Additionally, the removal of pnpm-related files enhances project clarity and maintainability.
… process

This commit refines the `package.json` scripts for better organization and clarity, specifically adjusting the client build and type generation commands to ensure they operate within the correct directory context. Additionally, it updates the deployment configuration in `deployments.auto.ts` to reflect the latest factory addresses and timestamps, enhancing the accuracy of deployment tracking. These changes aim to streamline the development workflow and improve maintainability.
- Replace generic `NotAuthorized` with distinct errors in `_checkAccess()`: `InsufficientNFTBalance`, `NFTNotOwned`, `NFTValidationFailed`
- Use `type(uint256).max` sentinel in factory to allow explicit 0% fees
- Replace require string with custom error `TransferFailed` in withdrawals
- Guard `FeeCollected` event emission on fee > 0 (reduces noise for 0% jars)
- DRY period reset logic via `_isNewPeriod`/`_effectiveWithdrawnInPeriod` helpers
- Fix emergency withdraw to update `pendingTokenBalances` accounting
- Increase strict purpose minimum from 10 to 27 characters
- Add ERC-1155 `balanceOf` to MockHats for Hats Protocol compatibility
- Add proper `balanceOf` tracking to MockPOAP for ERC-721 validation
- Split test access config helpers per protocol (ERC721/ERC1155/Hats)
- Fix jarHats to use mockHats with correct testHatId
- Fix cooldown test with proper `vm.expectRevert` assertions
- Add hat revocation, wrong hat ID, and multi-user cooldown tests
- Add inline remappings and skip rules to foundry.toml

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Update NFT protocol config components (Hats, Hypercert, POAP, Unlock)
- Polish NFTGatedWithdrawalSection with improved UX
- Update deployment config and supported networks
- Fix jar creation hook and token utils
- Add doc-links and blockchain constants
- Update tests for currency labels, token info, and jar transactions

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Replaces pnpm with Bun across CI, scripts, docs, and manifests; centralizes …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Client UI
    participant Hook as useJarCreation (Form)
    participant Factory as CookieJarFactory (contract)
    participant Jar as CookieJar (contract)
    participant StreamLib as Streaming lib
    participant SuperHost as Superfluid Host (mock)
    UI->>Hook: submit form (build args)
    Hook->>Hook: buildV2CreateCookieJarArgs (Zod + helper)
    Hook->>Factory: createCookieJar(args) (tx)
    Factory-->>Hook: tx receipt (event with jar address)
    Hook->>UI: newJarPreview / confirmation
    UI->>StreamLib: createStream(superToken, rate)
    StreamLib->>SuperHost: callAgreement(createFlow)
    SuperHost-->>StreamLib: mocked agreement result
    StreamLib->>Jar: record stream metadata / emit events
    Jar-->>UI: event/confirmation
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
contracts/test/SuperfluidIntegration.t.sol (1)

263-277: ⚠️ Potential issue | 🟡 Minor

Test is a no-op — no withdrawal is executed and assertions are commented out.

`testEmergencyWithdrawSuperToken` transfers tokens to the jar but never calls `emergencyWithdraw`. The two assertions on lines 275-276 are commented out, `withdrawAmount` and `initialOwnerBalance` are unused, and `vm.prank(_owner)` on line 272 has no subsequent call. This test always passes while testing nothing.

Given the PR objective explicitly fixes "emergencyWithdraw accounting for `pendingTokenBalances`", this seems like the right place to validate that fix.

client/config/supported-networks.ts (1)
326-334: ⚠️ Potential issue | 🟠 Major

Pre-existing bug: Alchemy API key not interpolated for Optimism Sepolia.

Line 329 uses regular double quotes instead of backticks, so `${alchemyId}` is sent as a literal string rather than being interpolated. Compare with other Alchemy URLs (e.g., Line 267) that correctly use template literals.

Proposed fix:

```diff
   [optimismSepolia.id]: createFallbackTransport(
     [
       "https://op-sepolia-pokt.nodies.app",
-      "https://opt-sepolia.g.alchemy.com/v2/${alchemyId}",
+      `https://opt-sepolia.g.alchemy.com/v2/${alchemyId}`,
     ],
```

client/hooks/jar/useJarCreation.ts (1)
595-629: ⚠️ Potential issue | 🔴 Critical

Pipeline failure: `"CUSTOM"` is not assignable to `` `0x${string}` ``.

TypeScript infers the `value` field as `Address` from the first element (`ETH_ADDRESS`). The `"CUSTOM"` sentinel on Line 623 doesn't match. Explicitly type the array to allow both.

Proposed fix:

```diff
 const getCurrencyOptions = () => {
-  const options = [
+  const options: { value: string; label: string; description: string }[] = [
     {
       value: ETH_ADDRESS,
       label: "ETH (Native)",
       description: "Use native Ethereum",
     },
   ];
```

client/lib/blockchain/token-utils.ts (1)
17-17: 🛠️ Refactor suggestion | 🟠 Major

Inconsistent comparison: strict equality here vs case-insensitive in other files.

`useTokenInfo` uses strict `===`/`!==` against `ETH_ADDRESS` (Lines 17, 38), while `JarDepositSection.tsx` and `profile/page.tsx` use `.toLowerCase()` comparisons. If `tokenAddress` arrives with different casing (e.g., all-lowercase from a contract event), the strict check here would fail to recognize it as native ETH, causing spurious ERC-20 contract reads.

Consider normalizing to lowercase here as well, or using viem's `isAddressEqual` for consistency:

Proposed fix:

```diff
+import { erc20Abi, formatUnits, isAddress, isAddressEqual, parseUnits } from "viem";
 ...
-const isERC20 = isAddress(tokenAddress) && tokenAddress !== ETH_ADDRESS;
+const isERC20 = isAddress(tokenAddress) && !isAddressEqual(tokenAddress, ETH_ADDRESS);
 ...
-if (tokenAddress === ETH_ADDRESS) {
+if (isAddressEqual(tokenAddress, ETH_ADDRESS)) {
```

Also applies to: 38-38
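A minimal sketch of the case-insensitivity problem, with an illustrative `sameAddress` helper standing in for viem's `isAddressEqual` (which additionally validates that both inputs are well-formed addresses):

```typescript
// Illustrative helper (not the project's code): normalize casing before
// comparing, so checksummed and lowercased forms of the same address match.
const ETH_ADDRESS = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE";

function sameAddress(a: string, b: string): boolean {
  return a.toLowerCase() === b.toLowerCase();
}

// An address read from a contract event often arrives all-lowercase:
const fromEvent = ETH_ADDRESS.toLowerCase();

console.log(fromEvent === ETH_ADDRESS);           // false -- strict equality misses it
console.log(sameAddress(fromEvent, ETH_ADDRESS)); // true
```

Note that viem's `isAddressEqual` throws on malformed input, which is why the diff pairs it with an `isAddress` guard.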
client/__tests__/hooks/useNftValidation.test.ts (1)
187-196: ⚠️ Potential issue | 🟡 Minor

Pre-existing bug: test assertion doesn't match mock return value.

The mock on line 49 returns `"Failed to validate contract"`, but the assertion on line 194 expects `"Failed to validate contract. Not a valid NFT contract or network error."`. This test will fail when `RUN_BLOCKCHAIN_TESTS=true`.

🐛 Proposed fix

Either update the mock (line 49) to match:

```diff
-  error: "Failed to validate contract",
+  error: "Failed to validate contract. Not a valid NFT contract or network error.",
```

Or update the assertion (line 194) to match the mock:

```diff
-  expect(result.error).toBe(
-    "Failed to validate contract. Not a valid NFT contract or network error.",
-  );
+  expect(result.error).toBe("Failed to validate contract");
```

.github/workflows/integration-tests.yml (1)
143-156: ⚠️ Potential issue | 🟡 Minor

The `$?` check on Line 151 is always 0 due to the `|| echo` fallback.

After `bun run test || echo "⚠️ ..."` on Line 147, `$?` will always be `0` because `echo` succeeds. This means the failure exit on Line 155 is unreachable from this code path. The same issue applies to Line 143's `if` — though that one is structured correctly as a conditional.

Proposed fix:

```diff
 # Run vitest with proper syntax for integration tests
 if bun run test -- --run --testTimeout 60000 2>/dev/null; then
   echo "✅ Integration tests completed with vitest"
 else
   echo "⚠️ Fallback: Running basic unit tests..."
-  bun run test || echo "⚠️ Some tests may have failed due to missing blockchain context"
+  if ! bun run test; then
+    echo "⚠️ Some tests may have failed due to missing blockchain context"
+    exit 1
+  fi
 fi
-
-if [ $? -eq 0 ]; then
-  echo "✅ Integration tests passed!"
-else
-  echo "❌ Some integration tests failed"
-  exit 1
-fi
+
+echo "✅ Integration tests passed!"
```

.github/actions/setup-node-pnpm/action.yml (1)
34-42: ⚠️ Potential issue | 🟠 Major

CI/local lockfile logic is inverted — CI should use `--frozen-lockfile`.

The comment on Line 35 says "Try frozen lockfile first, fallback to regular install in CI", but the CI branch (Line 38) runs `bun install --no-frozen-lockfile` first, which is the opposite. Best practice is for CI to enforce a frozen lockfile for reproducible builds, while local dev should be more flexible.

Proposed fix:

```diff
 # Try frozen lockfile first, fallback to regular install in CI
 if [ "$CI" = "true" ] || [ "$GITHUB_ACTIONS" = "true" ]; then
   echo "🤖 CI environment detected"
-  bun install --no-frozen-lockfile || bun install
+  bun install --frozen-lockfile || bun install
 else
   echo "🔒 Using frozen lockfile for local consistency"
-  bun install --frozen-lockfile
+  bun install
 fi
```

.github/workflows/unit-tests.yml (1)
46-60: ⚠️ Potential issue | 🟡 Minor

Test failures are silently swallowed — CI will always pass.

The `if`/`else` construct logs test failures but never exits with a non-zero code, so the workflow always succeeds regardless of test results. While the comments indicate this is intentional during an "environment transition," it effectively disables CI test gating.

Consider adding a TODO or tracking issue so this gets reverted once the transition is complete, or at minimum exit with a warning code.

Would you like me to open a tracking issue to re-enable strict test failure handling once the environment transition is complete?

scripts/dev-start.sh (1)
scripts/dev-start.sh (1)
124-141:⚠️ Potential issue | 🔴 CriticalWorking directory bug:
cd clienton Line 126 leaves cwd inclient/, causing Line 140'scd clientto resolve toclient/client/.After Line 126 (
cd client && bun generate), the script's working directory is$PROJECT_ROOT/client. The unchanged Line 140 then doescd clientagain, which would try to enter$PROJECT_ROOT/client/client/— a directory that doesn't exist. This will cause the Next.js dev server launch to fail.The previous code likely used a workspace-filtered command (e.g.,
pnpm --filter client generate) that didn't change directories. The fix is to return to$PROJECT_ROOTafter the generate step.Proposed fix
# Generate client types (from project root, cd into client directory) echo "⚙️ Generating client types..." -cd client && bun generate +cd "$PROJECT_ROOT/client" && bun generate if [ $? -ne 0 ]; then echo "❌ Client type generation failed" kill $ANVIL_PID $WATCHER_PID 2>/dev/null exit 1 fi +cd "$PROJECT_ROOT" + # Start client dev server with proper logging.github/workflows/security-scan.yml (1)
78-93: ⚠️ Potential issue | 🟠 Major

`npm audit` JSON format mismatch — `.advisories` is npm v6; Node 18+ ships npm v7+ which uses `.vulnerabilities`.

Line 85 checks `jq -e '.advisories | length > 0'`, but Node 18 ships with npm 9+ which uses a different JSON structure (`.vulnerabilities` key instead of `.advisories`). This means the vulnerability check will likely always report "No vulnerabilities found" regardless of actual findings.

Proposed fix:

```diff
 if [ -s audit-results.json ]; then
   # Check if we have any vulnerabilities
-  if jq -e '.advisories | length > 0' audit-results.json > /dev/null 2>&1; then
+  if jq -e '.vulnerabilities | length > 0' audit-results.json > /dev/null 2>&1; then
     echo "⚠️ Vulnerabilities found!"
     npm audit --audit-level=moderate
```
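The key difference can be checked locally; the sample JSON below is an illustrative stand-in for real `npm audit --json` output, not captured data:

```shell
# npm v7+ keys findings under ".vulnerabilities" (an object keyed by
# package name); the npm v6 ".advisories" array no longer exists.
cat > audit-results.json <<'EOF'
{ "vulnerabilities": { "lodash": { "severity": "high" } } }
EOF

if jq -e '.advisories | length > 0' audit-results.json > /dev/null 2>&1; then
  echo "v6 check: found"
else
  echo "v6 check: nothing found (false negative on npm v7+ output)"
fi

if jq -e '.vulnerabilities | length > 0' audit-results.json > /dev/null 2>&1; then
  echo "v7+ check: found"
fi
```

In jq, `null | length` is `0`, so the stale `.advisories` query exits non-zero on modern output and the old branch silently reports a clean audit.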
🤖 Fix all issues with AI agents
In @.cursor/rules/root-standards.mdc:
- Around line 25-34: The docs list bun scripts that are missing from
package.json; either add the three scripts "dev:ethereum", "dev:base", and
"test:coverage" to package.json or remove them from the documentation. If
adding, implement "dev:ethereum" and "dev:base" as variant wrappers of the
existing "dev" script that pass the appropriate network/fork flags used by your
dev server (so they call the same underlying start logic as "dev" but with the
Ethereum or Base fork configuration) and implement "test:coverage" to run the
same test runner with coverage enabled (mirroring how your CI produces coverage,
calling your test command with the coverage flags). Ensure the script names
exactly match dev:ethereum, dev:base, and test:coverage so the docs and
package.json stay in sync.
In @.cursor/rules/testing-patterns.mdc:
- Around line 217-224: Update the documentation to reflect the monorepo scripts:
explicitly state that bun test:watch and bun test:coverage are only defined in
the client workspace (require cd client) or add root-level aliases in
package.json (similar to existing test:client and test:client:coverage) so
commands like bun test:watch and bun test:coverage work from the repo root; for
consistency and to avoid ambiguity with Bun's built-in test runner, recommend
and document using bun run <script> for all npm-style scripts (e.g., bun run
test:client, bun run test:e2e) and clarify that the bun test:e2e* commands are
defined at the root.
In `@bunfig.toml`:
- Around line 27-30: The bunfig.toml [test] preload entry currently references
./client/__tests__/vitest.setup.ts which imports Vitest and breaks Bun's native
test runner; remove that preload or replace it with a Bun-compatible setup file
(or point it to a new client/__tests__/bun.setup.ts) so Bun won't attempt to
preload Vitest code when running test:client, and ensure any test
runner-specific configuration is kept in Vitest config instead of bunfig.toml.
- Around line 42-45: The bunfig currently hardcodes "process.env.NODE_ENV" =
"production" in the [define] section which statically replaces the identifier at
bundle/runtime and prevents runtime dev envs from working; remove that define
(or move it out of the global [define] block into build-only configuration) and
instead apply the production define via the build command (e.g., use bun build
--define "process.env.NODE_ENV=production") so that development runs
(dev-start.sh) can rely on runtime NODE_ENV values; locate the
"process.env.NODE_ENV" entry in the [define] block and either delete it or
relocate it to build-specific flags.
In `@client/components/jar/AllowlistWithdrawalSection.tsx`:
- Around line 74-78: The client-side length check uses withdrawPurpose.length
(UTF-16 code units) which can differ from the contract's bytes(purpose).length
(UTF-8); update the disabled checks (e.g., the JSX control using
withdrawPurpose, PURPOSE_MIN_LENGTH and config.isWithdrawPending) and the other
occurrences around lines 214-218 to compute UTF-8 byte length via new
TextEncoder().encode(withdrawPurpose).length, and compare that value against
PURPOSE_MIN_LENGTH so the client validation matches the on‑chain byte-length
check.
In `@client/components/jar/JarDepositSection.tsx`:
- Around line 68-71: The ERC-20 placeholder in JarDepositSection.tsx has an
inverted ternary and can pass a negative value to "0".repeat(), causing a
RangeError; update the placeholder construction used when isNativeCurrency is
false (the `1.${"0".repeat(...)} ${tokenSymbol || "Tokens"}` expression) to
compute a non-negative count of decimals (e.g., let clampedDecimals =
Math.max(0, tokenDecimals) and use clampedDecimals instead of the current
ternary) so that positive tokenDecimals produce that many zeros and negatives
become 0; keep the isNativeCurrency branch unchanged.
In `@client/components/jar/NFTGatedWithdrawalSection.tsx`:
- Around line 150-153: The ownerOf useReadContract invocation can throw because
BigInt(activeNFT.tokenId) is evaluated even for invalid manual input; add a
safeBigInt helper (e.g., safeBigInt(value: string): bigint | undefined that
try/catches BigInt and returns undefined on failure) and replace direct
BigInt(...) uses in the args for useReadContract (and any other place converting
activeNFT.tokenId) with safeBigInt(activeNFT.tokenId) so args become undefined
for invalid IDs; alternatively, validate/normalize tokenId when building
manualNFT/activeNFT so invalid token IDs are set to undefined/null before
reaching useReadContract.
- Around line 350-395: The three label elements in NFTGatedWithdrawalSection are
not associated with their inputs; add unique id attributes (e.g.,
gateAddressInput, tokenIdInput, tokenTypeSelect) to the corresponding Input
components (the ones bound to gateAddress and tokenId) and to the SelectTrigger
used for manualTokenType, then set each label's htmlFor to the matching id so
screen readers can associate the labels with the form controls (update the Input
for gateAddress and tokenId and the Select/SelectTrigger for manualTokenType
accordingly).
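The `safeBigInt` helper suggested for NFTGatedWithdrawalSection can be sketched as follows (a minimal version; the real component would feed its result into the `useReadContract` args):

```typescript
// BigInt("abc") throws a SyntaxError, so wrap the conversion and return
// undefined for unparseable manual token-ID input.
function safeBigInt(value: string): bigint | undefined {
  try {
    return BigInt(value);
  } catch {
    return undefined;
  }
}

console.log(safeBigInt("42"));             // 42n
console.log(safeBigInt("0x1f"));           // 31n -- hex strings are accepted
console.log(safeBigInt(""));               // 0n -- note: empty string parses as zero
console.log(safeBigInt("not-a-token-id")); // undefined
```

One caveat worth handling upstream: `BigInt("")` is `0n`, so an empty field should probably be screened out before conversion rather than treated as token ID 0.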
In `@client/components/ui/select.tsx`:
- Around line 121-128: Checked items use a semi-transparent brand background
with a creme background token for text which may fail WCAG contrast and misuses
a background semantic token as foreground; update the Select item checked styles
(around SelectPrimitive.ItemIndicator / Check and the item className usage) to
use a dedicated foreground token (e.g., replace text-[hsl(var(--cj-creme-bg))]
with text-[hsl(var(--cj-creme-fg))] or text-[hsl(var(--cj-on-brand))]) and/or
increase the background opacity or adjust colors to meet 4.5:1 contrast, then
verify the effective contrast in both light and dark themes and rename or add
the token in the design system if needed so text tokens are not derived from
background tokens.
In `@contracts/src/libraries/CookieJarValidation.sol`:
- Around line 52-56: The current validatePurpose uses bytes(purpose).length
which measures UTF-8 byte length and can differ from client-side JS
purpose.length (UTF-16 code units); implement a proper Unicode-aware length
check and use it in validatePurpose to align validation semantics. Add a helper
(e.g., utf8StringLength or countUnicodeCodePoints) that iterates the bytes of
the string and counts UTF-8 code points by treating bytes that are not
continuation bytes (i.e., leading bytes) as a new character, then replace
bytes(purpose).length with that helper in validatePurpose and keep the same
threshold and revert with CookieJarLib.InvalidPurpose when the counted length is
below 27. Ensure the helper is internal/pure and referenced from validatePurpose
so behavior matches client-side expectations for multi-byte characters.
In `@foundry.toml`:
- Around line 18-27: Update the remappings array so all paths point to real
directories and use the standardized "prefix/=path/" form with trailing slashes;
run git submodule update --init --recursive to populate missing libs or correct
entries in the remappings list (e.g., change
"@openzeppelin/contracts=lib/openzeppelin-contracts/contracts" to
"@openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/" and verify/fix
the paths for "erc4626-tests", "forge-std", "halmos-cheatcodes", and
"protocol-monorepo/packages/ethereum-contracts" to the actual locations or
remove unused entries) so import resolution for the remappings variable
succeeds.
In `@package.json`:
- Line 21: The "test:client" npm script currently uses "cd client && bun test"
which invokes Bun's built-in test runner and bypasses the client's configured
"test" script (Vitest); update the "test:client" script to run the client's
package script instead (use "cd client && bun run test") so the client's "test"
script (the "test" field in the client's package.json) is executed via Bun.
- Around line 40-44: The scripts "test:all" and "test:ci" wrongly invoke Bun's
native runner via "bun test" instead of running the package script; update those
script commands to use "bun run test" (and similarly use "bun run test:e2e" if
you want explicit invocation) so that the root "test" script (and "test:e2e")
defined in package.json are executed; edit the "test:all" and "test:ci" entries
to replace "bun test" with "bun run test" (and optionally change "bun test:e2e"
to "bun run test:e2e") to fix the behavior.
In `@README.md`:
- Line 168: The README line for the command "bun deploy:base-sepolia" has an
incorrect description saying "Deploy to mainnet"; update the description to
accurately reflect the target network (e.g., "Deploy to Base Sepolia testnet" or
similar) so the command string bun deploy:base-sepolia and its explanatory text
are consistent and clear.
🧹 Nitpick comments (17)
contracts/src/CookieJarFactory.sol (1)
157-162: Sentinel pattern looks correct and well-documented.

The use of `type(uint256).max` as a sentinel to distinguish "use factory default" from an explicit `0` is a clean, idiomatic Solidity pattern. The inline comments clearly communicate the intent.

One minor observation on the pre-existing clamping at Line 162: any fee value between `PERCENTAGE_BASE + 1` and `type(uint256).max - 1` is silently capped rather than reverted. This could mask a caller bug (e.g., passing a raw percentage like `5` instead of `500` for 5%). Consider whether a `revert` for out-of-range values (excluding the sentinel) would be more appropriate:

Suggested stricter validation:

```diff
-  if (feePerc > CookieJarLib.PERCENTAGE_BASE) feePerc = CookieJarLib.PERCENTAGE_BASE;
+  if (feePerc > CookieJarLib.PERCENTAGE_BASE) revert CookieJarLib.InvalidFeePercentage();
```

This is optional since the clamping behavior predates this PR and callers may rely on it.
contracts/test/UniswapV4Integration.t.sol (1)
235-266: Consider removing this no-op test with commented-out code.

`testSwapFunctionSignatures` only asserts `assertTrue(true)` and contains ~20 lines of commented-out code. It provides no value — compile-time verification of function signatures already happens implicitly through the other tests that call the library. Removing it reduces noise and avoids giving a false sense of coverage.

client/components/jars/JarControls.tsx (2)
59-62: Network name is displayed twice — in `ChainDisplay` (top row) and in the stats bar (bottom row).

The `ChainDisplay` component in the top row already renders the network name via `getNetworkName(chainId)`, and line 96 repeats it in the bottom stats row. If the duplication is intentional (e.g., the stats row is visible when `ChainDisplay` scrolls off-screen), this is fine — but if not, consider removing one to reduce visual noise.

Also applies to: 91-98

71-73: The filter dropdown only contains a single "All Jars" option.

If additional filter options (e.g., by access type) are planned, this is fine as a placeholder. Otherwise, rendering a `Select` with one option adds UI complexity without functionality — a static label would suffice.

.cursor/rules/root-standards.mdc (1)
21-35: Consider adding migration/setup instructions.

Since this is a standards document that developers reference, consider adding a brief section on:

- Installing bun (`curl -fsSL https://bun.sh/install | bash`)
- Migrating from pnpm (remove `pnpm-lock.yaml`, run `bun install`)
- Basic bun workspace commands

This would improve the onboarding experience for new developers or those migrating existing clones.

📝 Suggested addition

Add a "Setup & Installation" section before "Development Commands":

````diff
+## Setup & Installation
+
+```bash
+# Install bun (macOS/Linux)
+curl -fsSL https://bun.sh/install | bash
+
+# Install dependencies
+bun install
+
+# For existing pnpm projects: remove old artifacts first
+rm -rf node_modules pnpm-lock.yaml
+bun install
+```
+
 ## Development Commands
````

contracts/test/SuperfluidIntegration.t.sol (1)
283-290: Several tests have no assertions — they only verify calls don't revert.

`testGetSuperStream`, `testSuperfluidWithTraditionalStreaming`, `testSuperfluidRealTimeBalanceWithMultipleStreams`, `testSuperfluidDistributionPoolIntegration`, and `testSuperfluidMultipleTokenSupport` contain no `assert*` or `expectEmit` checks. They only prove the calls succeed, not that the resulting state is correct.

Consider adding at least a basic post-condition assertion (e.g., verifying stream existence or recorded flow rate) to give these tests diagnostic value when contract logic changes.

Also applies to: 296-303, 381-393, 419-444
client/__tests__/hooks/jar/useJarTransactions.test.tsx (1)
56-62: Consider importing `PURPOSE_MIN_LENGTH` instead of hardcoding `27`.

The constant `PURPOSE_MIN_LENGTH = 27` is defined in `client/components/jar/AllowlistWithdrawalSection.tsx`. Using the shared constant here would keep the test in sync if the threshold changes again, rather than requiring manual updates across multiple files.

client/components/nft/protocols/UnlockConfig.tsx (1)
42-74: Pre-existing: `onConfigChange` missing from `useCallback` dependency array.

`validateLock` calls `onConfigChange` (line 62) but the dependency array on line 74 is empty. This means the callback captures a stale reference if the parent re-renders with a new `onConfigChange`. Not introduced by this PR, but worth noting for a follow-up.

client/components/nft/protocols/HypercertConfig.tsx (1)
41-73: Stale closure: `onConfigChange` is not in the dependency array.

`validateHypercert` captures `onConfigChange` (line 58) but the `useCallback` dependency array on line 73 is empty. If the parent re-renders with a new `onConfigChange` reference, the callback will invoke the stale one. This is a pre-existing issue, but worth fixing while you're in this file.

Suggested fix:

```diff
-  }, []);
+  }, [onConfigChange]);
```

contracts/src/CookieJar.sol (2)
418-426: `tokenId == 0` as sentinel for "any token" prevents gating on actual token ID 0.

Using `nftRequirement.tokenId == 0` to mean "user owns any token" means a jar can never require specifically token ID 0. If that's an accepted constraint, document it explicitly in the NatSpec or `NftRequirement` struct definition so future maintainers don't trip over it.

283-288: Emergency withdrawal: pending-balance accounting looks correct but has an edge case worth noting.

When `amount` exceeds `pendingTokenBalances[pendingKey]`, the pending balance is zeroed out, which is the right behavior (prevents stale `swapPendingTokens()` calls). The actual ETH/ERC20 transfer on lines 290-295 will revert the entire tx if the contract doesn't hold enough, so accounting can't desync.

One subtlety: for the jar's own currency (`token == CURRENCY`), `currencyHeldByJar` is decremented on line 281 and pending balance may also be cleared. These are separate accounting buckets, so this is fine — but a brief inline comment clarifying that `currencyHeldByJar` and `pendingTokenBalances` never overlap for the same token would help future readers.

client/__tests__/CurrencyLabel.test.tsx (1)
54-63: Test name is misleading after refactor.

The test is named "handles different ETH address formats" but now only passes the exact `ETH_ADDRESS` constant. It no longer validates case-insensitive matching (e.g., all-lowercase `0xeeee...`). Consider either renaming it or adding a case-variant assertion to actually test format handling.

client/hooks/jar/useJarCreation.ts (1)
643-653: `validateERC20Address` is effectively a no-op for non-ETH addresses.

The `try` block just returns `isAddress(address)`, which was already checked on Line 644. The `catch` can never fire since `isAddress` doesn't throw. If the intent was to validate the contract on-chain (e.g., check for `symbol()`/`decimals()`), this needs actual contract calls. As-is, any valid address passes validation.

AGENTS.md (1)
49-51: Redundant `npm install` alongside `bun install` is confusing.

The setup instructions show both `npm install` (for Foundry auto-install) and `bun install` (for dependencies). Bun executes `postinstall` hooks as well, so `bun install` alone should handle both. If there's a specific reason `npm install` is still needed (e.g., a lifecycle script incompatibility), a comment should clarify that; otherwise, consolidate to a single `bun install`.

Proposed fix:

```diff
 # Zero-configuration start
 git clone https://github.com/greenpill-dev-guild/cookie-jar.git
 cd cookie-jar
-npm install  # Auto-installs Foundry
-bun install  # Install dependencies
+bun install  # Install dependencies & auto-installs Foundry
 bun dev  # Starts everything
```

.github/workflows/accessibility.yml (1)
77-78: Inconsistent use of `bun run generate` followed by `npm run dev`.

Line 77 uses `bun run generate` but Line 78 uses `npm run dev`. If the intent is to migrate to Bun, consider using `bun run dev` consistently — or add a comment explaining why `npm` is intentionally retained for the dev server.

.github/actions/setup-node-pnpm/action.yml (1)
9-12: Bun version `1.0` is outdated — consider pinning to `1.2` or `latest`.

The default is `1.0`, but Bun 1.2 has been out for a while with significant improvements (native database clients, better Node compatibility, performance gains). Other files in this PR reference Bun 1.0 as well (`BUN_VERSION: '1.0'` in workflows). Consider bumping to align with the version actually used in development.

VERCEL_BUILD_FIX.md (1)
108-114: Nit: inconsistent use of `bun` vs `bun run` across examples.

The document alternates between `bun run build:client` (e.g., Line 63, 78, 126) and bare `bun build:client` (Line 109), `bun check` (Line 108), etc. While Bun treats them equivalently, picking one style consistently improves clarity for readers unfamiliar with Bun's shorthand.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- `bun.lock` is excluded by `!**/*.lock`
- `foundry.lock` is excluded by `!**/*.lock`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (68)
- .cursor/rules/deployment.mdc
- .cursor/rules/root-standards.mdc
- .cursor/rules/testing-patterns.mdc
- .github/actions/setup-node-pnpm/action.yml
- .github/workflows/accessibility.yml
- .github/workflows/code-quality.yml
- .github/workflows/contract-tests.yml
- .github/workflows/e2e-tests.yml
- .github/workflows/integration-tests.yml
- .github/workflows/security-scan.yml
- .github/workflows/unit-tests.yml
- AGENTS.md
- README.md
- VERCEL_BUILD_FIX.md
- bunfig.toml
- client/.cursor/rules/frontend-standards.mdc
- client/__tests__/CurrencyLabel.test.tsx
- client/__tests__/TokenInfoHook.test.ts
- client/__tests__/hooks/blockchain/useSuperfluidSubgraph.test.tsx
- client/__tests__/hooks/jar/useJarTransactions.test.tsx
- client/__tests__/hooks/jar/useStreamingActions.test.tsx
- client/__tests__/hooks/useJarCreation.test.ts
- client/__tests__/hooks/useNftValidation.test.ts
- client/__tests__/hooks/useUnlockLocks.test.ts
- client/__tests__/integration/unlock-sdk-integration.test.ts
- client/__tests__/utils/ProtocolValidation.test.ts
- client/app/profile/page.tsx
- client/components/create/StepContent.tsx
- client/components/jar/AllowlistWithdrawalSection.tsx
- client/components/jar/JarDepositSection.tsx
- client/components/jar/NFTGatedWithdrawalSection.tsx
- client/components/jars/ChainDisplay.tsx
- client/components/jars/JarControls.tsx
- client/components/nft/ProtocolSelector.tsx
- client/components/nft/doc-links.ts
- client/components/nft/protocols/HatsConfig.tsx
- client/components/nft/protocols/HypercertConfig.tsx
- client/components/nft/protocols/POAPConfig.tsx
- client/components/nft/protocols/UnlockConfig.tsx
- client/components/ui/select.tsx
- client/config/deployments.auto.ts
- client/config/supported-networks.ts
- client/generated.ts
- client/hooks/jar/useJarCreation.ts
- client/lib/blockchain/constants.ts
- client/lib/blockchain/token-utils.ts
- client/package.json
- contracts/README.md
- contracts/script/DeployLocal.s.sol
- contracts/src/CookieJar.sol
- contracts/src/CookieJarFactory.sol
- contracts/src/libraries/CookieJarValidation.sol
- contracts/src/libraries/Streaming.sol
- contracts/test/CookieJar.t.sol
- contracts/test/CookieJarProtocols.t.sol
- contracts/test/SuperfluidIntegration.t.sol
- contracts/test/UniswapV4Integration.t.sol
- e2e/README.md
- e2e/global-setup.ts
- e2e/global-teardown.ts
- foundry.toml
- package.json
- playwright.config.ts
- pnpm-workspace.yaml
- scripts/dev-start.sh
- scripts/install-deps.sh
- scripts/test-e2e.sh
- scripts/watch-deploy.sh
💤 Files with no reviewable changes (1)
- pnpm-workspace.yaml
🧰 Additional context used
🧬 Code graph analysis (13)
client/components/nft/protocols/POAPConfig.tsx (1)
client/components/nft/doc-links.ts (1)
ACCESS_CONTROL_DOC_LINKS(3-10)
client/app/profile/page.tsx (1)
client/lib/blockchain/constants.ts (1)
ETH_ADDRESS(7-7)
client/components/nft/protocols/HatsConfig.tsx (1)
client/components/nft/doc-links.ts (1)
ACCESS_CONTROL_DOC_LINKS(3-10)
client/components/nft/protocols/UnlockConfig.tsx (1)
client/components/nft/doc-links.ts (1)
ACCESS_CONTROL_DOC_LINKS(3-10)
client/components/nft/ProtocolSelector.tsx (1)
client/components/nft/doc-links.ts (1)
ACCESS_CONTROL_DOC_LINKS(3-10)
client/__tests__/CurrencyLabel.test.tsx (2)
client/lib/blockchain/constants.ts (1)
ETH_ADDRESS(7-7)
client/lib/blockchain/token-utils.ts (1)
ETH_ADDRESS(7-7)
client/config/supported-networks.ts (3)
client/lib/blockchain/constants.ts (1)
ETH_ADDRESS(7-7)
client/lib/blockchain/token-utils.ts (1)
ETH_ADDRESS(7-7)
client/config/local.ts (1)
anvilLocal(4-21)
client/components/jar/NFTGatedWithdrawalSection.tsx (1)
client/components/nft/NFTSelector.tsx (1)
SelectedNFT(25-35)
client/lib/blockchain/constants.ts (1)
client/lib/blockchain/token-utils.ts (1)
ETH_ADDRESS(7-7)
client/__tests__/TokenInfoHook.test.ts (2)
client/lib/blockchain/constants.ts (1)
ETH_ADDRESS(7-7)
client/lib/blockchain/token-utils.ts (1)
ETH_ADDRESS(7-7)
client/__tests__/hooks/useJarCreation.test.ts (2)
client/lib/blockchain/constants.ts (1)
ETH_ADDRESS(7-7)
client/lib/blockchain/token-utils.ts (1)
ETH_ADDRESS(7-7)
client/components/jars/JarControls.tsx (2)
client/components/ui/input.tsx (1)
Input(31-31)
client/components/ui/select.tsx (1)
SelectTrigger(153-153)
client/config/deployments.auto.ts (1)
client/config/supported-networks.ts (2)
V2_CHAINS(24-24), isV2Chain(27-29)
🪛 Biome (2.3.14)
client/components/jar/NFTGatedWithdrawalSection.tsx
[error] 352-354: A form label must be associated with an input.
Consider adding a for or htmlFor attribute to the label element or moving the input element to inside the label element.
(lint/a11y/noLabelWithoutControl)
[error] 364-366: A form label must be associated with an input.
Consider adding a for or htmlFor attribute to the label element or moving the input element to inside the label element.
(lint/a11y/noLabelWithoutControl)
[error] 376-378: A form label must be associated with an input.
Consider adding a for or htmlFor attribute to the label element or moving the input element to inside the label element.
(lint/a11y/noLabelWithoutControl)
🪛 GitHub Actions: Code Quality & Linting
client/hooks/jar/useJarCreation.ts
[error] 623-623: tsc --noEmit failed: Type '"CUSTOM"' is not assignable to type '0x${string}'.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Client Integration Tests
🔇 Additional comments (74)
.github/workflows/contract-tests.yml (1)
33-33: Comment update looks good.
No functional changes; the wording now matches the bun migration context.
client/__tests__/hooks/jar/useStreamingActions.test.tsx (1)
29-32: LGTM — updated test command is consistent with the bun migration.
client/__tests__/integration/unlock-sdk-integration.test.ts (1)
3-3: Comment update looks good. Clear and consistent with the Bun migration context.
contracts/script/DeployLocal.s.sol (1)
80-82: LGTM — consistent with the pnpm → bun migration.
contracts/src/libraries/Streaming.sol (1)
43-43: LGTM — sensible pre-validation guard. Early revert on address(0) prevents a confusing downstream failure inside the Superfluid host call. Clear and specific error name.
client/__tests__/hooks/useUnlockLocks.test.ts (1)
52-52: LGTM! Comment update aligns with the bun migration. The updated comment correctly documents how to run SDK integration tests using the new package manager.
contracts/test/UniswapV4Integration.t.sol (3)
9-22: Well-implemented harness pattern for testing internal library reverts. The SwapAdapterHarness correctly wraps the internal library call to create the external call boundary required by vm.expectRevert. The NatSpec comment clearly explains the rationale.
77-79: LGTM — harness deployment is straightforward.
184-185: Correct migration of revert tests to use the harness. All three error-condition tests (testUnsupportedChain, testZeroAmount, testZeroAddress) now properly route through _harness so vm.expectRevert can intercept the reverts. The parameters and expected selectors remain consistent with the previous direct calls.
Also applies to: 191-197, 203-209
client/.cursor/rules/frontend-standards.mdc (2)
294-295: LGTM! Correct migration to bun commands. The pre-commit checklist has been properly updated to use bun type-check and bun lint instead of the pnpm equivalents. These are standard bun commands that align with the project's migration to bun as the package manager.
411-416: LGTM! Development commands correctly updated. All development commands have been properly migrated from pnpm to bun. The listed commands (dev, build, type-check, lint, test, test:watch) are all valid bun commands and align with the project-wide migration described in the PR objectives.
client/components/ui/select.tsx (2)
22-22: Styling updates look good. The addition of explicit text/placeholder color tokens and the [data-placeholder] selector ensures consistent theming for both the native placeholder and Radix's data-attribute-based placeholder state.
78-78: Content panel styling is consistent with the updated design tokens.
client/components/jars/ChainDisplay.tsx (1)
10-26: LGTM! Clean presentational component with consistent theming via CSS custom properties. The two-line layout (label + network name) with the icon is well-structured.
client/components/jars/JarControls.tsx (2)
16-17: Good extraction of shared styling into a constant. Centralizing the common input field classes reduces duplication across the search, filter, and other controls.
91-91: These custom utility classes are properly defined in client/app/globals.css and are used consistently across multiple components. No action needed. Likely an incorrect or invalid review comment.
.cursor/rules/root-standards.mdc (1)
16-16: LGTM! Consistent package manager migration. The migration from pnpm to bun is implemented consistently across workspace configuration and all development commands. According to Bun 1.2 documentation, all referenced commands (dev, test, build, workspaces) are fully supported.
Also applies to: 25-34
client/components/jar/NFTGatedWithdrawalSection.tsx (2)
102-118: Clean unification of manual and selector flows via activeNFT. The manualNFT memo and activeNFT derivation cleanly consolidate both input paths so downstream hooks and handlers operate identically regardless of mode. This is a nice structural improvement.
274-306: Withdrawal handlers correctly branch on token type and proof availability. The ERC-1155 path forwards the balance proof for race-condition protection, while ERC-721 and no-proof scenarios fall back to the standard handlers. Logic is sound.
contracts/test/SuperfluidIntegration.t.sol (9)
10-40: LGTM — MockCFA stubs are clean and well-documented. The natspec explains the address param trick, and the return shapes match the CFA v1 interface.
42-60: LGTM — MockSuperfluidHost is minimal and fit for purpose.
94-97: LGTM — Mock deployment and wiring is correct.
244-257: LGTM — Properly mints to _user before transfer. The comment accurately explains why minting is needed and the assertion is correct.
305-347: LGTM — Factory integration test correctly uses the type(uint256).max sentinel. The sentinel value for feePercentageOnDeposit aligns with the PR objective, and the post-creation role verification is sound.
353-379: LGTM — Reentrancy test is well-designed. The circular dependency between attacker and jar is neatly solved with the two-phase setTarget approach, and the expected revert path (fee transfer → receive() → re-entrant deposit → ReentrancyGuard revert → FeeTransferFailed) is correctly validated.
395-417: LGTM — Pause/unpause test correctly validates the enforcement. The EnforcedPause() custom error matches OpenZeppelin v5's Pausable, and the comment on line 397 correctly documents that createSuperStream isn't gated by whenNotPaused.
458-483: LGTM — ReentrancyAttacker changes are sound. The setTarget setter resolves the circular deploy dependency cleanly, and the attacking flag in receive() correctly prevents unbounded recursion.
388-390: Add a comment confirming that createSuperStream intentionally has no access control. createSuperStream contains no access-control modifiers or checks, so any address can call it. The test implicitly confirms that streaming is open to all. If this is intentional, add a brief comment to the test explaining the design choice; if access control should be enforced, implement allowlist checks in createSuperStream.
client/__tests__/utils/ProtocolValidation.test.ts (1)
254-286: LGTM — threshold updated consistently. The test logic and assertions correctly reflect the new 27-character minimum for purpose validation.
client/components/create/StepContent.tsx (1)
130-136: LGTM — UI text and styling updates are consistent. The purpose minimum label now correctly says "27 characters", matching the contract and test changes. The text-muted-foreground class aligns with shadcn/ui conventions, and showViewToggle={false} is a reasonable configuration choice.
Also applies to: 160-162, 334-336, 384-385
client/components/nft/doc-links.ts (1)
1-10: Good centralization of doc links. Clean module with as const for immutability. This eliminates scattered hardcoded URLs across protocol configs.
client/components/nft/protocols/UnlockConfig.tsx (1)
9-9: LGTM — doc link centralized.
Also applies to: 131-131
client/components/nft/protocols/POAPConfig.tsx (1)
10-10: LGTM — doc link centralized.
Also applies to: 144-144
client/components/nft/protocols/HatsConfig.tsx (1)
9-9: LGTM — doc link centralized.
Also applies to: 124-124
client/components/nft/protocols/HypercertConfig.tsx (1)
9-9: LGTM — centralizing doc links is a clean improvement. Switching from a hardcoded URL to ACCESS_CONTROL_DOC_LINKS.hypercerts aligns with the other protocol configs and makes link maintenance easier.
Also applies to: 103-103
client/components/nft/ProtocolSelector.tsx (1)
47-96: LGTM — consistent centralization of doc links across all access methods. All six access method entries now reference ACCESS_CONTROL_DOC_LINKS, keeping URLs in sync with other protocol config components.
contracts/src/CookieJar.sol (4)
411-449: NFT-specific error taxonomy looks correct and well-structured. The three distinct error paths (InsufficientNFTBalance, NFTNotOwned, NFTValidationFailed) give callers clear failure reasons depending on whether it's a balance check, ownership mismatch, or contract-level call failure. The try/catch pattern correctly handles non-compliant contracts.
525-533: DRY period-reset helpers are a clean improvement. _isNewPeriod and _effectiveWithdrawnInPeriod eliminate duplicated logic between _validateWithdrawalConstraints and _executeWithdrawal. The first-withdrawal edge case (where lastWithdrawalTime[user] == 0) works correctly because the condition evaluates to block.timestamp >= WITHDRAWAL_PERIOD, which is always true for any realistic block timestamp.
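The helper semantics described in this comment can be sketched outside Solidity. The following TypeScript translation is illustrative only; the one-day WITHDRAWAL_PERIOD value and the flat function arguments are assumptions, not the contract's actual storage layout.

```typescript
// Illustrative translation of the period-reset helpers; WITHDRAWAL_PERIOD
// and the flat arguments are assumptions, not the contract's real layout.
const WITHDRAWAL_PERIOD = 86_400; // one day, in seconds (assumed)

function isNewPeriod(lastWithdrawalTime: number, now: number): boolean {
  // A lastWithdrawalTime of 0 (first withdrawal) is always a new period,
  // since now - 0 >= WITHDRAWAL_PERIOD for any realistic timestamp.
  return now - lastWithdrawalTime >= WITHDRAWAL_PERIOD;
}

function effectiveWithdrawnInPeriod(
  withdrawnInPeriod: number,
  lastWithdrawalTime: number,
  now: number,
): number {
  // In a new period the running total is treated as zero.
  return isNewPeriod(lastWithdrawalTime, now) ? 0 : withdrawnInPeriod;
}

const now = 1_700_000_000;
console.log(isNewPeriod(0, now));                                 // true
console.log(effectiveWithdrawnInPeriod(500, now - 10, now));      // 500
console.log(effectiveWithdrawnInPeriod(500, now - 100_000, now)); // 0
```

Both call sites can then share one definition of "which period are we in", which is exactly the duplication the Solidity refactor removes.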
505-519: Withdrawal ETH transfer uses revert TransferFailed() — consistent with the rest of the contract. The switch from require(success, ...) to the custom error is consistent with the pattern on line 292 (emergencyWithdraw) and line 168 (_handleEthReceive). Good alignment.
344-344: Fee event guard prevents misleading zero-fee events. Emitting FeeCollected only when fee > 0 is correct and aligned with the same guard in _handleEthReceive (line 166) and _swapToJarToken (lines 635, 668).
contracts/test/CookieJar.t.sol (2)
1488-1492: Test expectations correctly updated for new NFT-specific errors. The updated selectors across all NFT access tests (InsufficientNFTBalance for zero-balance cases, NFTNotOwned for wrong-owner cases) accurately mirror the new _checkAccess logic in CookieJar.sol.
1583-1587: Correct: specific token ownership failure yields NFTNotOwned. User owns token 5 but jar requires token 3, so ownerOf(3) returns attacker ≠ user → NFTNotOwned. Matches the _checkAccess ERC721 specific-tokenId branch.
client/components/jar/AllowlistWithdrawalSection.tsx (1)
16-16: Good: PURPOSE_MIN_LENGTH aligns with the contract's 27-byte threshold. This constant mirrors the Solidity bytes(purpose).length < 27 check in CookieJar._validateWithdrawalConstraints, keeping client and contract validation in sync.
contracts/test/CookieJarProtocols.t.sol (5)
10-42: MockPOAP balance tracking is correctly implemented. The ownerTokenCount mapping and balanceOf function properly simulate ERC-721 behavior, including handling transfers (decrementing the previous owner, incrementing the new owner). The address(0) guards prevent spurious increments/decrements on mint and burn paths.
78-84: MockHats balanceOf(address, uint256) correctly simulates ERC-1155 for Hats Protocol. Returning 1/0 based on wearerStatus matches how Hats Protocol implements the ERC-1155 interface, and aligns with the _checkAccess ERC1155 branch that checks bal < minBal (where minBal defaults to 1).
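The gating rule this mock exercises (access passes when balanceOf(user, id) >= minBal, with minBal defaulting to 1) can be sketched as follows. Every name in this snippet is illustrative, not the contract's or Hats Protocol's actual API:

```typescript
// Illustrative ERC-1155-style gate; all names here are assumptions.
type BalanceOf = (account: string, id: bigint) => bigint;

function hasErc1155Access(
  balanceOf: BalanceOf,
  account: string,
  id: bigint,
  minBal: bigint = 1n,
): boolean {
  return balanceOf(account, id) >= minBal;
}

// A MockHats-style stub: current hat wearers report balance 1, others 0.
const wearers = new Set(["0xwearer"]);
const mockHatsBalanceOf: BalanceOf = (account, _id) =>
  wearers.has(account) ? 1n : 0n;

console.log(hasErc1155Access(mockHatsBalanceOf, "0xwearer", 7n)); // true
console.log(hasErc1155Access(mockHatsBalanceOf, "0xother", 7n));  // false
```

Because the balance is queried at call time, revoking a hat (removing the wearer) immediately flips the result, which is the real-time property the revocation test below verifies.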
175-212: Per-protocol access config helpers clean up the test setup nicely. Splitting createERC721AccessConfig, createHypercertAccessConfig, and createHatsAccessConfig replaces a single generic helper with purpose-specific ones, making each jar's configuration explicit and readable.
507-552: Excellent new test coverage for hat revocation and multi-user cooldown independence.
test_RevertWhen_HatsRevokedBetweenWithdrawals validates that revoking a hat prevents future withdrawals even after cooldown — a critical real-time access control property. test_HatsMultipleUsersIndependentCooldowns confirms per-user withdrawal tracking doesn't leak between users.
564-572: Test test_RevertWhen_ERC721WithdrawShortPurpose now properly grants access before testing purpose validation. Previously this test may have been reverting for the wrong reasons (no NFT ownership). By calling mockPoap.setOwner(88888, user) and warping past the interval, the test now isolates the purpose validation path specifically.
client/__tests__/hooks/useJarCreation.test.ts (1)
22-22: LGTM — tests now use the shared ETH_ADDRESS constant. Using the same constant from @/lib/blockchain/constants as the production code ensures tests stay in sync if the value changes. Clean improvement.
Also applies to: 102-102, 108-108
client/__tests__/TokenInfoHook.test.ts (2)
client/__tests__/TokenInfoHook.test.ts (1)
2-2: LGTM — consistent use of the sharedETH_ADDRESSconstant.Eliminates the previously hardcoded local value, matching the pattern in
useJarCreation.test.tsand the production code.Also applies to: 7-9, 34-34
client/lib/blockchain/constants.ts (1)
1-7: LGTM!Clean, well-documented centralization of the ETH sentinel address. The
0xEeeee…EEeEaddress is the widely adopted convention for representing native ETH in smart contracts.client/components/jar/JarDepositSection.tsx (1)
31-32: LGTM! Clean replacement of the hardcoded address comparison with the centralized ETH_ADDRESS constant using case-insensitive comparison.
ETH_ADDRESSconstant using case-insensitive comparison.client/app/profile/page.tsx (1)
257-260: LGTM! Null-safe, case-insensitive comparison against the centralized ETH_ADDRESS constant. Clean improvement over the previous hardcoded address.
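The pattern these comments praise can be condensed into a small sketch. The helper name isNativeEth is hypothetical; the sentinel address itself is the widely used convention the review refers to:

```typescript
// The conventional native-ETH sentinel address; the helper name below is
// hypothetical, not necessarily the client's actual export.
const ETH_ADDRESS = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE";

function isNativeEth(token?: string | null): boolean {
  // Null-safe and case-insensitive, mirroring the comparison described above.
  return token?.toLowerCase() === ETH_ADDRESS.toLowerCase();
}

console.log(isNativeEth(ETH_ADDRESS.toUpperCase())); // true
console.log(isNativeEth(null));                      // false
```

Centralizing the constant means a casing or checksum change in one module propagates everywhere, which is why the review treats the lowercase comparison plus shared import as the complete fix.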
ETH_ADDRESSconstant. Clean improvement over the previous hardcoded address.client/config/supported-networks.ts (1)
87-153: LGTM! All native currency entries consistently use the centralized ETH_ADDRESS constant. Clean DRY improvement.
ETH_ADDRESSconstant. Clean DRY improvement.client/lib/blockchain/token-utils.ts (1)
5-7: LGTM! Clean import and re-export preserving the existing public API surface for downstream consumers.
client/__tests__/hooks/blockchain/useSuperfluidSubgraph.test.tsx (1)
42-42: LGTM! Comment correctly updated to reference bun test.
client/__tests__/hooks/useNftValidation.test.ts (1)
10-10: LGTM! Comment correctly updated to reference bun test.
scripts/watch-deploy.sh (2)
42-42: LGTM! Correctly updated to bun generate.
69-69: LGTM! Fallback message correctly updated to bun deploy:local.
e2e/global-teardown.ts (1)
6-6: LGTM! Comment correctly updated to reference bun dev.
playwright.config.ts (1)
47-47: LGTM! Web server command correctly updated to bun dev.
e2e/README.md (1)
9-14: LGTM! Quick start commands consistently updated to bun.
e2e/global-setup.ts (1)
62-62: LGTM! Error messages consistently updated to reference bun dev.
contracts/README.md (1)
84-98: LGTM! Clear distinction between bun (repo root) and npm (contracts/ directory) commands is well documented and consistent.
client/config/deployments.auto.ts (1)
22-43: LGTM — deployment data and formatting update. Auto-generated file with updated factory address and timestamp for local development. Logic is unchanged.
scripts/test-e2e.sh (1)
100-105: LGTM — consistent pnpm → bun migration in E2E script. All command references consistently updated to bun. Control flow and exit semantics remain unchanged.
Also applies to: 187-188, 211-215, 245-246, 285-286, 290-292
client/package.json (1)
11-13: LGTM — script runner swap from pnpm to bun. The cd .. && bun <script> pattern is consistent with the monorepo root delegation approach.
.cursor/rules/deployment.mdc (1)
110-123: LGTM — deployment docs updated to bun commands.
Also applies to: 203-203
scripts/install-deps.sh (1)
53-65: LGTM! The bun installation block correctly checks for existence, installs via npm, and provides helpful error messages with alternative installation methods.
.github/workflows/code-quality.yml (1)
23-27: LGTM! The bun migration for the code quality workflow is clean and correctly updates all command invocations while preserving the error-handling logic.
VERCEL_BUILD_FIX.md (1)
1-163: Documentation looks good and accurately reflects the pnpm → bun migration. The instructions, build flow, dashboard config, local testing steps, and rollback plan are all consistent with the package.json changes. No functional issues.
package.json (3)
79-88: Engines, overrides, and workspaces look correct for Bun. Bun supports npm-style overrides and workspaces fields. The engines.bun >= 1.0.0 minimum is reasonable. The axios override to >=1.12.0 properly enforces a minimum version across the dependency tree.
3-3: Version still at 3.1.0 — should this be bumped to 3.2.0? The PR branch is release/3.2.0, but the package version hasn't been updated. If the version bump is handled elsewhere (e.g., a release script or separate PR), ignore this. Otherwise, update it to match the release.
8-65: Script migration from pnpm to bun is clean overall. The pnpm --filter patterns are correctly replaced with cd <dir> && bun <script>, and pnpm run calls are replaced with bun run. Non-reserved script names used as bare bun <name> shorthands (e.g., bun lint, bun type-check, bun generate) all resolve correctly.
bun dev           # Local with fresh Anvil chain
bun dev:ethereum  # Fork Ethereum mainnet
bun dev:base      # Fork Base network

# Testing & quality
- pnpm test          # All tests across packages
- pnpm test:coverage # Coverage report
- pnpm lint          # Lint all packages
- pnpm type-check    # Fast TypeScript validation (preferred)
- pnpm build         # Build all packages
+ bun test           # All tests across packages
+ bun test:coverage  # Coverage report
+ bun lint           # Lint all packages
+ bun type-check     # Fast TypeScript validation (preferred)
+ bun build          # Build all packages
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify bun commands exist in package.json and check for remaining pnpm references
echo "=== Checking package.json scripts ==="
# Verify documented commands exist
for cmd in dev dev:ethereum dev:base test test:coverage lint type-check build; do
if jq -e ".scripts.\"$cmd\"" package.json >/dev/null 2>&1; then
echo "✓ $cmd script found"
else
echo "✗ $cmd script missing"
fi
done
echo -e "\n=== Searching for remaining pnpm references ==="
# Find any remaining pnpm references in config/docs
rg -i 'pnpm' \
--type=json \
--type=yaml \
--type=md \
-g '!**/node_modules/**' \
-g '!**/.next/**' \
-g '!**/dist/**' \
-C 2
echo -e "\n=== Checking for pnpm lock files ==="
fd -H 'pnpm-lock.yaml'
echo -e "\n=== Checking for bun lock files ==="
fd -H 'bun.lockb'
Repository: greenpill-dev-guild/cookie-jar
Length of output: 428
🏁 Script executed:
# Check the content of the file to see lines 77-79
cat -n .cursor/rules/root-standards.mdc | sed -n '20,80p'
Repository: greenpill-dev-guild/cookie-jar
Length of output: 2888
Add missing bun scripts to package.json: dev:ethereum, dev:base, and test:coverage are documented but missing from package.json.
The documentation claims developers can run bun dev:ethereum, bun dev:base, and bun test:coverage, but these scripts don't exist in package.json. Define these scripts in package.json or remove them from the documentation. The commands dev, test, lint, type-check, and build are correctly configured.
🤖 Prompt for AI Agents
In @.cursor/rules/root-standards.mdc around lines 25 - 34, The docs list bun
scripts that are missing from package.json; either add the three scripts
"dev:ethereum", "dev:base", and "test:coverage" to package.json or remove them
from the documentation. If adding, implement "dev:ethereum" and "dev:base" as
variant wrappers of the existing "dev" script that pass the appropriate
network/fork flags used by your dev server (so they call the same underlying
start logic as "dev" but with the Ethereum or Base fork configuration) and
implement "test:coverage" to run the same test runner with coverage enabled
(mirroring how your CI produces coverage, calling your test command with the
coverage flags). Ensure the script names exactly match dev:ethereum, dev:base,
and test:coverage so the docs and package.json stay in sync.
disabled={
  !withdrawPurpose ||
- withdrawPurpose.length < 10 ||
+ withdrawPurpose.length < PURPOSE_MIN_LENGTH ||
  config.isWithdrawPending
}
Minor: JS string.length counts UTF-16 code units, not bytes.
The contract checks bytes(purpose).length < 27 (UTF-8 byte length), but withdrawPurpose.length in JS counts UTF-16 code units. For multi-byte characters (emojis, CJK, etc.), a string that passes the client-side check could still fail on-chain. For typical English text this is a non-issue, but if you want parity you could use new TextEncoder().encode(withdrawPurpose).length.
Also applies to: 214-218
🤖 Prompt for AI Agents
In `@client/components/jar/AllowlistWithdrawalSection.tsx` around lines 74 - 78,
The client-side length check uses withdrawPurpose.length (UTF-16 code units)
which can differ from the contract's bytes(purpose).length (UTF-8); update the
disabled checks (e.g., the JSX control using withdrawPurpose, PURPOSE_MIN_LENGTH
and config.isWithdrawPending) and the other occurrences around lines 214-218 to
compute UTF-8 byte length via new TextEncoder().encode(withdrawPurpose).length,
and compare that value against PURPOSE_MIN_LENGTH so the client validation
matches the on‑chain byte-length check.
remappings = [
  "@openzeppelin/contracts=lib/openzeppelin-contracts/contracts",
  "erc4626-tests/=lib/openzeppelin-contracts/lib/erc4626-tests/",
  "forge-std/=lib/forge-std/src/",
  "halmos-cheatcodes/=lib/openzeppelin-contracts/lib/halmos-cheatcodes/src/",
  "openzeppelin-contracts/=lib/openzeppelin-contracts/",
  "@superfluid-finance/ethereum-contracts/=lib/protocol-monorepo/packages/ethereum-contracts/contracts/",
  "universal-router/=lib/universal-router/",
  "permit2/=lib/permit2/",
]
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the referenced library directories exist
echo "=== Checking library paths ==="
for dir in \
"lib/openzeppelin-contracts/contracts" \
"lib/openzeppelin-contracts/lib/erc4626-tests" \
"lib/forge-std/src" \
"lib/openzeppelin-contracts/lib/halmos-cheatcodes/src" \
"lib/protocol-monorepo/packages/ethereum-contracts/contracts" \
"lib/universal-router" \
"lib/permit2"; do
if [ -d "$dir" ]; then
echo "✓ EXISTS: $dir"
else
echo "✗ MISSING: $dir"
fi
done
echo ""
echo "=== Checking foundry.toml structure ==="
cat -n foundry.toml | head -100
Repository: greenpill-dev-guild/cookie-jar
Length of output: 4863
Critical: Most remapped library paths do not exist and will prevent compilation.
Lines 19-27 reference 7 library remappings, but 5 of them point to non-existent directories:
- ✗ lib/openzeppelin-contracts/contracts
- ✗ lib/openzeppelin-contracts/lib/erc4626-tests
- ✗ lib/forge-std/src
- ✗ lib/openzeppelin-contracts/lib/halmos-cheatcodes/src
- ✗ lib/protocol-monorepo/packages/ethereum-contracts/contracts
- ✓ lib/universal-router (exists)
- ✓ lib/permit2 (exists)
This will cause import resolution failures. Ensure all git submodules are initialized (git submodule update --init --recursive), or correct the remapping paths if they are wrong.
Additionally, line 19 uses @openzeppelin/contracts=lib/openzeppelin-contracts/contracts (no trailing slashes), while all other entries follow the prefix/=path/ convention. Standardize to @openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/.
🤖 Prompt for AI Agents
In `@foundry.toml` around lines 18 - 27, Update the remappings array so all paths
point to real directories and use the standardized "prefix/=path/" form with
trailing slashes; run git submodule update --init --recursive to populate
missing libs or correct entries in the remappings list (e.g., change
"@openzeppelin/contracts=lib/openzeppelin-contracts/contracts" to
"@openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/" and verify/fix
the paths for "erc4626-tests", "forge-std", "halmos-cheatcodes", and
"protocol-monorepo/packages/ethereum-contracts" to the actual locations or
remove unused entries) so import resolution for the remappings variable
succeeds.
…lures
- Use FOUNDRY_PROFILE=dev in CI workflows to match local build scripts
- Add fetch-depth: 0 for proper submodule resolution
- Fix TypeScript error in useJarCreation.ts (widen options array type)
- Add vercel.json to specify bun as install command
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
client/hooks/jar/useJarCreation.ts (1)
447-465: ⚠️ Potential issue | 🔴 Critical
Change feeBps to use the sentinel value type(uint256).max when a custom fee is not enabled. The factory contract interprets 0 as an explicit 0% fee, not as "use the factory default." When enableCustomFee is false, the client currently sends customFeePercentage: BigInt(0), which unintentionally overrides the protocol's default fee for every jar created without a custom fee. The contract expects type(uint256).max as a sentinel to signal "use factory default" (see CookieJarFactory.sol:157-161). Change line 450 to send 2n ** 256n - 1n instead of 0 when the user has not opted into a custom fee.
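A sketch of the suggested client-side fix; the helper name and shape are illustrative, not the actual code in useJarCreation.ts:

```typescript
// type(uint256).max, the factory's "use the default fee" sentinel.
// (1n << 256n) - 1n is equivalent to 2n ** 256n - 1n.
const UINT256_MAX = (1n << 256n) - 1n;

// Illustrative helper: 0n would mean an explicit 0% fee, so only send the
// user's value when a custom fee was actually enabled.
function feeArg(enableCustomFee: boolean, customFeePercentage: bigint): bigint {
  return enableCustomFee ? customFeePercentage : UINT256_MAX;
}

console.log(feeArg(false, 0n) === UINT256_MAX); // true: default requested
console.log(feeArg(true, 150n));                // 150n: explicit custom fee
```

Using a value outside the valid fee range as a sentinel lets the contract distinguish "no opinion" from "explicitly zero", which is exactly the ambiguity the review flags.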
🧹 Nitpick comments (1)
.github/workflows/contract-tests.yml (1)
40-43: Consider setting FOUNDRY_PROFILE once at the job level instead of repeating it in every step. The same export FOUNDRY_PROFILE=dev appears in four separate steps. A job-level env block eliminates the duplication and makes it trivial to change the profile later.
♻️ Proposed refactor
Add an env block at the job level:

contract-tests:
  name: Foundry Tests & Analysis
  runs-on: ubuntu-latest
  timeout-minutes: 20
+ env:
+   FOUNDRY_PROFILE: dev
  steps:

Then remove all four export FOUNDRY_PROFILE=dev lines from the individual run blocks (lines 42, 63, 84, 119).
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
.github/workflows/contract-tests.yml, .github/workflows/integration-tests.yml, client/hooks/jar/useJarCreation.ts, vercel.json
✅ Files skipped from review due to trivial changes (1)
- vercel.json
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/integration-tests.yml
🔇 Additional comments (6)
.github/workflows/contract-tests.yml (4)
23-27: LGTM on the checkout changes.
fetch-depth: 0 ensures all submodule references resolve correctly with submodules: recursive.
57-76: LGTM on the test step changes. The dev profile export aligns CI behavior with local package.json scripts as intended.
78-101: LGTM on the coverage step changes.
114-139: LGTM on the security analysis step changes.
client/hooks/jar/useJarCreation.ts (2)
18-18: Good centralization of ETH_ADDRESS. Moving to a shared module eliminates duplication across the codebase.
596-602: LGTM — explicit type annotation for currency options. The added description field in the type ensures all option objects are consistently shaped.
OZ v5 moved ERC777 interfaces from token/ERC777/ to interfaces/. Superfluid protocol-monorepo still imports from the old v4 path. Add oz-compat.sh script that copies the interface files to the expected location, run in CI workflows and postinstall hook. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- docs: remove phantom dev:ethereum/dev:base scripts from root-standards, clarify bun run test vs bun test in testing-patterns
- fix(bunfig): remove vitest preload that breaks bun's native test runner, remove hardcoded NODE_ENV=production that prevents dev mode
- fix(client): use Unicode code point length for purpose validation to match Solidity, fix negative repeat in deposit placeholder, add safeBigInt for manual NFT token ID input, add htmlFor/id for label accessibility, fix select WCAG contrast using primary-foreground token
- fix(contracts): add countUnicodeCodePoints helper for UTF-8 aware purpose validation matching client-side semantics
- fix(config): standardize remapping trailing slashes in foundry.toml and contracts/remappings.txt
- fix(scripts): use bun run test instead of bun test in package.json for test:client, test:all, and test:ci
- docs: fix deploy:base-sepolia description in README
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
foundry.toml (1)
78-80: ⚠️ Potential issue | 🟠 Major
The openzeppelin-contracts/ remapping target differs between the default and size profiles. Line 23 maps openzeppelin-contracts/ → lib/openzeppelin-contracts/ (repo root), while line 79 maps it → ../lib/openzeppelin-contracts/contracts/ (the contracts/ subfolder). If any source file imports via the openzeppelin-contracts/contracts/… path, the size profile will resolve to a doubled contracts/contracts/… and fail to compile. Align the two so the same import paths work under both profiles.
Suggested fix
remappings = [
-   "openzeppelin-contracts/=../lib/openzeppelin-contracts/contracts/"
+   "openzeppelin-contracts/=../lib/openzeppelin-contracts/"
]
🤖 Fix all issues with AI agents
In `@client/components/jar/AllowlistWithdrawalSection.tsx`:
- Around line 16-22: The CookieJar contract's _validateWithdrawalConstraints
currently compares bytes(purpose).length against PURPOSE_MIN_LENGTH which counts
bytes instead of Unicode code points; update _validateWithdrawalConstraints in
CookieJar to call countUnicodeCodePoints(purpose) from CookieJarValidation (the
same validation helper used in CookieJarValidation.sol) and compare that result
to PURPOSE_MIN_LENGTH so the Solidity check matches the TypeScript
unicodeCodePointLength behavior and avoids misvalidating multi-byte characters.
In `@client/components/jar/NFTGatedWithdrawalSection.tsx`:
- Around line 182-185: The args construction returns a mutable (string|bigint)[]
rather than the expected readonly tuple for ERC1155 balanceOf; update the
returned tuple in the args expression (the branch that returns [userAddress,
id]) to be a readonly tuple by appending as const so TypeScript infers readonly
[account: address, id: bigint]; this change should be applied where
activeNFT.tokenId is converted via safeBigInt and userAddress is used in the
args expression.
- Line 162: The args value passed to useReadContract for ownerOf is returning a
bigint[] from the IIFE (via activeNFT?.tokenId and safeBigInt) which TypeScript
won't narrow to the expected readonly tuple type readonly [tokenId: bigint];
change the IIFE to return a readonly tuple by using "as const" when returning
the single-element tuple (and keep the existing undefined branch), so args
becomes either readonly [bigint] or undefined; locate the args property in
NFTGatedWithdrawalSection.tsx around the activeNFT?.tokenId usage and update the
returned array to a readonly tuple to satisfy the useReadContract ownerOf
signature.
In `@README.md`:
- Line 119: Replace the user-facing invocations of Bun's test runner in the
README — specifically the occurrences that currently say "bun test" (notably the
instances at the lines referenced) — with "bun run test" so the package.json
"test" script is executed (this ensures the project's composite test script that
runs contracts + client via Vitest is used). Update all occurrences (the ones
called out and any other similar mentions) to "bun run test" to match the
testing-patterns guidance.
🧹 Nitpick comments (9)
.github/workflows/contract-tests.yml (2)
39-58: Dead `$?` check and duplicate `forge build --sizes`.

GitHub Actions uses `set -eo pipefail` by default, so if `forge build --sizes` on line 48 fails, the shell exits immediately — the `if [ $? -ne 0 ]` block on lines 50-53 is unreachable dead code. Additionally, `forge build --sizes` runs twice (lines 48 and 58), doubling build time for this step.

Proposed fix

```diff
       - name: Build contracts
         run: |
           echo "🔧 Building contracts..."
           cd contracts

           # Use dev profile (matches local package.json scripts)
           export FOUNDRY_PROFILE=dev

           # Build contracts with size reporting
-          forge build --sizes
-
-          if [ $? -ne 0 ]; then
-            echo "❌ Contract compilation failed"
-            exit 1
-          fi
-
-          echo "✅ Contracts compiled successfully"
-          echo ""
-          echo "📊 Contract sizes:"
-          forge build --sizes | tail -20 # Show size information
+          if ! forge build --sizes; then
+            echo "❌ Contract compilation failed"
+            exit 1
+          fi
+
+          echo "✅ Contracts compiled successfully"
```
27-27: `fetch-depth: 0` fetches full git history — is this intentional?

This significantly increases checkout time. For submodule support, `fetch-depth: 1` (the default) is sufficient. Full history is typically only needed for `git log`/`git describe`-based versioning. If that's the intent, a brief comment would help.

.github/workflows/integration-tests.yml (1)
77-121: Dead `$?` checks — same pattern as `contract-tests.yml`.

Lines 96, 109, and 116 check `$?` after commands, but GitHub Actions' default `set -eo pipefail` means the script already exits on failure, making these checks unreachable. Use `if ! cmd; then ...` instead for clarity and correctness.

Example fix for the deploy step (lines 86–101)

```diff
           # Build contracts first
           forge build

           # Deploy using same script as dev-start.sh
-          forge script script/DeployLocal.s.sol:DeployLocalScript \
+          if ! forge script script/DeployLocal.s.sol:DeployLocalScript \
             --rpc-url http://127.0.0.1:8545 \
             --broadcast \
             --force \
             --sender 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266 \
-            --private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
-
-          if [ $? -ne 0 ]; then
+            --private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80; then
             echo "❌ Contract deployment failed"
             exit 1
           fi
```

package.json (2)
10-10: Complex `postinstall` one-liner is fragile and hard to maintain.

This single line chains 5 operations with `&&` and a broad `|| echo` fallback that swallows all failures (including `git submodule update` or `oz-compat.sh` errors — not just a missing `forge`). Consider extracting this into a dedicated script (e.g., `scripts/postinstall.sh`) for readability and more granular error handling.

Proposed refactor

```diff
-"postinstall": "rm -f package-lock.json && (command -v forge > /dev/null && git submodule update --init --recursive && ./scripts/oz-compat.sh && cd contracts && forge install || echo 'Forge not found, skipping forge operations')",
+"postinstall": "./scripts/postinstall.sh",
```

Then in `scripts/postinstall.sh`:

```bash
#!/usr/bin/env bash
set -euo pipefail

rm -f package-lock.json

if ! command -v forge > /dev/null; then
  echo "Forge not found, skipping forge operations"
  exit 0
fi

git submodule update --init --recursive
./scripts/oz-compat.sh
cd contracts && forge install
```
22-22: `bun test:coverage` in `test:client:coverage` — verify this resolves to the package script, not Bun's test runner.

While `bun test:coverage` is not a reserved Bun subcommand (only bare `bun test` is), the `test:` prefix could cause confusion given the established pattern of using `bun run <script>`. For consistency with the fix applied to `test:client` on line 21, consider using `bun run test:coverage`.

Similarly applies to lines 23-26 (`bun test:version-detection`, `bun test:lazy-loading`, `bun test:hooks`, `bun test:components`).

client/components/jar/NFTGatedWithdrawalSection.tsx (3)
162-170: Consider extracting the IIFE arg-building into a `useMemo` for readability.

The inline IIFEs in `args` make these blocks harder to read and review. A small `useMemo` for each set of args would be clearer and avoid re-evaluation on every render.

♻️ Example refactor

```tsx
const erc721Args = useMemo(() => {
  if (!activeNFT?.tokenId) return undefined;
  const id = safeBigInt(activeNFT.tokenId);
  return id !== undefined ? ([id] as const) : undefined;
}, [activeNFT?.tokenId]);

const erc1155Args = useMemo(() => {
  if (!userAddress || !activeNFT?.tokenId) return undefined;
  const id = safeBigInt(activeNFT.tokenId);
  return id !== undefined ? ([userAddress, id] as const) : undefined;
}, [userAddress, activeNFT?.tokenId]);
```

Then use `args: erc721Args` and `args: erc1155Args` in the respective `useReadContract` calls.

Also applies to: 182-195
216-222: Wagmi error details are discarded in the displayed message.

`error` (Line 216) can be an `Error` object from `erc721Error` or `erc1155Error`, but Line 219 only uses `proofError` (a string). When `proofError` is `null` and a wagmi error is present, the user sees the generic "Failed to verify NFT ownership" message, losing the specific failure reason.

Consider including the wagmi error message:

♻️ Proposed improvement

```diff
 if (error) {
-  setBalanceError(proofError || "Failed to verify NFT ownership");
+  const message =
+    proofError ||
+    (erc721Error instanceof Error ? erc721Error.message : null) ||
+    (erc1155Error instanceof Error ? erc1155Error.message : null) ||
+    "Failed to verify NFT ownership";
+  setBalanceError(message);
   setOwnershipVerified(false);
   return;
 }
```
490-501: "Withdrawal Amount" label (Line 492) lacks `htmlFor`/`id` association.

This label is not linked to its `<Input>` via `htmlFor`/`id`, same pattern that was fixed for the manual entry fields. Since this line wasn't changed in this PR it's not blocking, but worth a follow-up for consistency.

♻️ Proposed fix

```diff
-<label className="text-sm font-medium text-[#3c2a14]">
+<label htmlFor="nft-withdraw-amount" className="text-sm font-medium text-[#3c2a14]">
   Withdrawal Amount
 </label>
 <Input
+  id="nft-withdraw-amount"
   type="number"
```
54-109: Four near-identical branches could be consolidated, but not in scope for this PR.

The four early-return branches share most of their markup and differ only in the amount input presence, purpose textarea, and disabled conditions. This is existing code structure and not introduced by this PR, so no action needed now — just noting for future refactoring consideration.
Also applies to: 112-153, 156-241, 244-312
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .cursor/rules/root-standards.mdc
- .cursor/rules/testing-patterns.mdc
- .github/workflows/contract-tests.yml
- .github/workflows/integration-tests.yml
- README.md
- bunfig.toml
- client/components/jar/AllowlistWithdrawalSection.tsx
- client/components/jar/JarDepositSection.tsx
- client/components/jar/NFTGatedWithdrawalSection.tsx
- client/components/ui/select.tsx
- contracts/remappings.txt
- contracts/src/libraries/CookieJarValidation.sol
- foundry.toml
- package.json
- scripts/oz-compat.sh
🚧 Files skipped from review as they are similar to previous changes (4)
- client/components/jar/JarDepositSection.tsx
- .cursor/rules/root-standards.mdc
- bunfig.toml
- client/components/ui/select.tsx
🧰 Additional context used
🪛 actionlint (1.7.10)
.github/workflows/integration-tests.yml
[error] 78-78: shellcheck reported issue in this script: SC2181:style:18:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?
(shellcheck)
[error] 78-78: shellcheck reported issue in this script: SC2181:style:31:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?
(shellcheck)
[error] 78-78: shellcheck reported issue in this script: SC2181:style:38:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?
(shellcheck)
.github/workflows/contract-tests.yml
[error] 40-40: shellcheck reported issue in this script: SC2181:style:10:6: Check exit code directly with e.g. 'if ! mycmd;', not indirectly with $?
(shellcheck)
🪛 GitHub Actions: Code Quality & Linting
client/components/jar/NFTGatedWithdrawalSection.tsx
[error] 162-162: TS2322: Type 'bigint[] | undefined' is not assignable to type 'readonly [tokenId: bigint] | undefined'.
🪛 GitHub Actions: Smart Contract Tests
.cursor/rules/testing-patterns.mdc
[error] 1-1: Process completed with exit code 1.
foundry.toml
[error] 1-1: Process completed with exit code 1.
client/components/jar/AllowlistWithdrawalSection.tsx
[error] 1-1: Process completed with exit code 1.
.github/workflows/integration-tests.yml
[error] 1-1: Process completed with exit code 1.
.github/workflows/contract-tests.yml
[error] 1-1: Process completed with exit code 1.
client/components/jar/NFTGatedWithdrawalSection.tsx
[error] 1-1: Process completed with exit code 1.
scripts/oz-compat.sh
[error] 1-1: Process completed with exit code 1.
contracts/src/libraries/CookieJarValidation.sol
[error] 1-1: Process completed with exit code 1.
contracts/remappings.txt
[error] 1-1: Process completed with exit code 1.
README.md
[error] 1-1: Process completed with exit code 1.
package.json
[error] 1-1: Process completed with exit code 1.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Client Integration Tests
🔇 Additional comments (11)
contracts/remappings.txt (1)
1-1: Trailing-slash fix looks correct.

The `=` → `/=` and trailing `/` on the target path now match the standard Foundry remapping convention used by the other entries.

contracts/src/libraries/CookieJarValidation.sol (2)
51-61: Well-implemented UTF-8 code point counter.

The bitmask logic `(uint8(b[i]) & 0xC0) != 0x80` correctly identifies UTF-8 leading bytes vs. continuation bytes. This is the standard approach and handles all valid multi-byte sequences (1–4 bytes per code point).

Minor consideration: for very long strings, a word-at-a-time (32-byte chunks) approach could reduce gas, but for typical purpose descriptions this byte-level iteration is perfectly fine.
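For readers checking contract/client parity, the same continuation-byte rule can be sketched in TypeScript (the helper name here is illustrative; the client actually uses its own `unicodeCodePointLength`):

```typescript
// Count UTF-8 code points by counting non-continuation bytes,
// mirroring the Solidity check (uint8(b[i]) & 0xC0) != 0x80.
function countUtf8CodePoints(s: string): number {
  const bytes = new TextEncoder().encode(s); // UTF-8 encoding of the string
  let count = 0;
  for (const b of bytes) {
    // Continuation bytes look like 0b10xxxxxx; everything else starts a code point.
    if ((b & 0xc0) !== 0x80) count++;
  }
  return count;
}

// Spreading a string iterates code points, so the two counts must agree:
console.log(countUtf8CodePoints("cookie")); // 6
console.log(countUtf8CodePoints("🍪🍪")); // 2 (each emoji is 4 UTF-8 bytes, 1 code point)
console.log([..."🍪🍪"].length); // 2
```

This is why a byte-length check (`bytes(purpose).length` in Solidity, or `"🍪🍪".length === 4` UTF-16 units in JS) diverges for multi-byte characters while code-point counting stays consistent on both sides.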
65-65: Review comment is incorrect — tests using short purpose strings intentionally verify the new threshold.

The three tests (`test_RevertWhen_WithdrawAllowlistModeShortPurpose`, `test_RevertWhen_WithdrawNFTModeShortPurpose`, and `test_RevertWhen_ERC721WithdrawShortPurpose`) explicitly use `shortPurpose` ("Too short", 9 code points) and expect the `InvalidPurpose()` revert via `vm.expectRevert()`. These tests are not failing due to the threshold change — they intentionally verify that the new 27-code-point minimum is enforced correctly.

The pipeline failure may exist, but it is not caused by this validation logic or tests breaking due to the threshold. Investigate the actual CI failure logs for the root cause.
Likely an incorrect or invalid review comment.
scripts/oz-compat.sh (1)
1-19: LGTM — script is well-structured.

One minor note: the success message on line 18 prints even if none of the inner `cp` commands executed (i.e., none of the source files existed in `interfaces/`). Consider moving the echo inside a success check or making it conditional, but this is unlikely to cause real issues.

.cursor/rules/testing-patterns.mdc (1)
215-237: Good update — addresses prior feedback on `bun run` vs `bun test` and workspace scoping.

The explicit note about `bun run <script>` vs `bun test` and the workspace-scoped watch mode section look correct and address the previously raised concern.

.github/workflows/integration-tests.yml (1)
26-30: Action already accepts and uses `bun-version` correctly.

The `.github/actions/setup-node-pnpm/action.yml` already declares `bun-version` as an accepted input and properly forwards it to the `oven-sh/setup-bun@v2` action. The action name and description have been updated to reflect the bun migration. The step will not silently ignore the `bun-version` input.

Likely an incorrect or invalid review comment.
client/components/jar/NFTGatedWithdrawalSection.tsx (3)
32-39: `safeBigInt` helper looks good — addresses the previous review feedback.

Cleanly prevents `SyntaxError` from invalid manual input reaching `BigInt()`.
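A minimal stand-alone sketch of that guard pattern (illustrative, not necessarily the component's exact code; note that `BigInt("")` silently yields `0n` rather than throwing, so empty input is rejected explicitly instead of relying on the catch):

```typescript
// Hypothetical stand-alone version of the safeBigInt guard described above.
function safeBigInt(value: string): bigint | undefined {
  const trimmed = value.trim();
  if (trimmed === "") return undefined; // BigInt("") would return 0n, not throw
  try {
    return BigInt(trimmed); // accepts decimal ("42") and 0x/0o/0b-prefixed strings
  } catch {
    return undefined; // e.g. BigInt("abc") or BigInt("1.5") throws a SyntaxError
  }
}

console.log(safeBigInt("42")); // 42n
console.log(safeBigInt("0x1f")); // 31n
console.log(safeBigInt("not-a-number")); // undefined
console.log(safeBigInt("")); // undefined
```

Returning `undefined` (instead of a `0n` fallback) lets callers distinguish "no valid token ID" from a legitimate token ID of zero, which matters for `ownerOf`/`balanceOf` args.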
111-127: `manualNFT` / `activeNFT` / `isManualMode` — clean unification of both flows.

The `useMemo` for `manualNFT` has appropriate dependencies and the `activeNFT` derivation is straightforward.

One minor observation: `isManualMode` (Line 127) is a simple alias for `showManualInput` — it adds readability but is redundant. Not blocking.
197-205: Sync effect correctly skips updates in manual mode.

The early return when `showManualInput` is true prevents overwriting manual values — good fix.

client/components/jar/AllowlistWithdrawalSection.tsx (2)
80-84: Updated purpose validation looks correct.

The `unicodeCodePointLength` check replaces the previous `withdrawPurpose.length < 10` with the new threshold and proper code-point counting. Same pattern correctly applied at Line 222 for the variable+purpose path.
45-45: Stray empty line after destructured props.

Trivial formatting nit — the blank line between the closing brace of the destructuring and the hook call is harmless but inconsistent with the rest of the file's style.
The safeBigInt IIFE returned bigint[] which isn't assignable to wagmi's expected readonly tuple type. Adding as const narrows the array literal to the correct tuple shape. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…brary CookieJar._validateWithdrawalConstraints had an inline bytes(purpose).length check that duplicated and diverged from CookieJarValidation.validatePurpose. Now delegates to the library which uses countUnicodeCodePoints, matching client-side unicodeCodePointLength semantics for multi-byte characters. Also replace remaining bun test references in README with bun run test. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All quiet on the western front
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ok stability Phase 1: Migrate useJarCreation from manual useState to React Hook Form + Zod - Extract jarCreationSchema.ts with METHOD_TO_ACCESS_TYPE mapping - Replace ~15 useState calls with useForm + zodResolver - Simplify StepContent bridge with lookup instead of 6-case switch Phase 2: Redesign ProtocolSelector UX and consolidate NFTGateInput state - Merge duplicate ConfigurationPanels into one using real protocol components - Add visibleMethods, recommendedMethod props - Replace 11 useState calls in NFTGateInput with typed useReducer - Enhanced useRateLimit with countdown timer and amber UI banner Phase 3: Fix useHats race condition and parallelize ERC detection - Wrap async functions in useCallback with AbortSignal to prevent infinite re-render loops caused by unstable function references in useEffect deps - Add AbortController cleanup to all effects - Remove duplicate cache check in useEnhancedNFTValidation Phase 4: Add ARIA accessibility across all protocol config inputs - ProtocolSelector: role="button", tabIndex, aria-pressed, keyboard handlers - POAPConfig: Convert div onClick to semantic button for search results - ProtocolConfigBase: Add errorId prop + role="alert" on error alerts - All protocol configs: aria-invalid + aria-describedby on inputs - NFTGateInput: aria-label on sliders, remove redundant aria-disabled Phase 5: NFTSelector accessibility and Coming Soon features - NFTCard: role="button", tabIndex, aria-pressed, keyboard handlers - View toggle: aria-label, aria-pressed, role="group" - Skip @tanstack/react-virtual (pagination already bounds DOM nodes) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…lection-ux refactor(client): overhaul gating selection UX, accessibility, and hook stability
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (14)
client/__tests__/MetadataParser.test.tsx (2)
36-42: ⚠️ Potential issue | 🟡 Minor

Empty string `src` passed to `next/image` will error at runtime. When `parsed.image` is `""` (the undefined/empty/legacy test cases), Next.js `<Image>` requires a valid `src` prop and will throw. The tests for those cases (lines 51–65, 124–144) assert `src=""`, but this would fail at render in a real Next.js environment. The tests likely only pass because the test environment stubs `next/image`. This is fine for unit testing, but worth noting that the component as written would break if used with real Next.js rendering.
5-27: ⚠️ Potential issue | 🔴 Critical

Import and test the actual `parseJarMetadata` utility instead of using an inline mock. The test uses an inlined `parseMetadata` function that differs significantly from the real `parseJarMetadata` in `client/lib/jar/metadata-utils.ts`:

- Real implementation returns 5 fields (`isV2Format` included); inline mock returns 4
- Real implementation has length-based legacy string logic (≤50 chars → name; >50 chars → description); inline mock always sets name with empty description
- Real implementation falls back to empty description on JSON parse; inline mock falls back to raw string
- Real implementation validates JSON is an object; inline mock doesn't
- Real implementation handles empty/trimmed strings; inline mock doesn't

This means the tests exercise completely different logic than production code and will not catch regressions in the actual `parseJarMetadata` function. Import `parseJarMetadata` from `@/lib/jar/metadata-utils` and test the real implementation directly.

Additionally, fix inconsistent indentation in the assertion blocks (lines 77–85, 101–107, 157–165 have extra tab indentation compared to surrounding code).
client/hooks/nft/useHats.ts (1)
164-167: ⚠️ Potential issue | 🔴 Critical

Replace direct `BigInt()` calls with validated conversion to prevent runtime crashes on invalid `hatId`. `BigInt(options.hatId)` throws a `SyntaxError` if `hatId` is non-numeric or malformed. This crashes the component during render. Validate the input before conversion.

Proposed approach

```diff
-  ? [userAddress, BigInt(options.hatId)]
+  ? [userAddress, safeBigInt(options.hatId, 0n)]
+  : undefined,

+function safeBigInt(value: string | undefined, fallback: bigint): bigint {
+  if (!value) return fallback;
+  try {
+    return BigInt(value);
+  } catch {
+    return fallback;
+  }
+}
```

Or validate before the hook call in the parent component.

Affects lines 166, 186, 208, 228.

Also: Replace `console.error` calls at lines 265, 303, 330 with the logger — use `log.error()` from `@/lib/app/logger` per coding guidelines.

client/config/supported-networks.ts (1)
333-336: ⚠️ Potential issue | 🔴 Critical

Bug: Regular string instead of template literal — Alchemy key not interpolated. Line 335 uses double quotes instead of backticks, so `${alchemyId}` is sent as the literal string `"${alchemyId}"` rather than the actual API key value. This breaks the Optimism Sepolia Alchemy RPC endpoint.

🐛 Fix: use template literal

```diff
 [optimismSepolia.id]: createFallbackTransport(
   [
     "https://op-sepolia-pokt.nodies.app",
-    "https://opt-sepolia.g.alchemy.com/v2/${alchemyId}",
+    `https://opt-sepolia.g.alchemy.com/v2/${alchemyId}`,
   ],
```
317-375: ⚠️ Potential issue | 🟡 Minor

`useBatchNFTValidation` is a non-functional stub — addresses without cache hits are never validated. Non-cached addresses are marked `isLoading: true` (line 352) but no actual validation is ever performed. Additionally, `contractAddresses` (an array) in the `useCallback` dependency (line 362) will cause the callback to recreate on every render if the caller passes a new array reference, potentially triggering infinite effect re-runs via the `useEffect` on line 364.

If this hook is not yet ready, consider removing it or marking it explicitly as unimplemented to prevent consumers from relying on it.
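The re-render hazard described above stems from reference equality: React compares hook dependencies with `Object.is`, so an array built fresh on each render never equals the previous one, even with identical contents. A minimal illustration:

```typescript
// Two structurally identical arrays are still distinct references.
const prevDeps = ["0xabc", "0xdef"];
const nextDeps = ["0xabc", "0xdef"]; // e.g. rebuilt on the next render

console.log(Object.is(prevDeps, nextDeps)); // false → dependency "changed", effect re-runs
console.log(Object.is(prevDeps, prevDeps)); // true  → stable reference, no re-run

// Memoizing the array in the caller, or depending on a derived stable key,
// restores value-based comparison:
const stableKey = nextDeps.join(",");
console.log(stableKey === prevDeps.join(",")); // true — same value, same key
```

This is why either the caller must `useMemo` the `contractAddresses` array or the hook should depend on a serialized key rather than the array reference.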
client/components/nft/NFTSelector.tsx (1)
162-175: ⚠️ Potential issue | 🟠 Major

`Image` with `fill` requires `position: relative` on its parent container. The parent `<div>` at line 162 (`className="w-16 h-16 rounded-lg overflow-hidden bg-gray-100 flex-shrink-0"`) doesn't include `relative` positioning. When using `fill`, Next.js Image renders with `position: absolute`, so it needs a positioned ancestor to size correctly. The same issue exists in the grid view at line 239.

Proposed fix

```diff
-<div className="w-16 h-16 rounded-lg overflow-hidden bg-gray-100 flex-shrink-0">
+<div className="relative w-16 h-16 rounded-lg overflow-hidden bg-gray-100 flex-shrink-0">
```
87-87: ⚠️ Potential issue | 🟡 Minor

Use the project logger instead of `console.error`. As per coding guidelines: "Use `log` from `@/lib/app/logger` for logging, never use `console.log`, `console.error`, or `console.warn`".

Proposed fix

```diff
+import { log } from "@/lib/app/logger";
 ...
-console.error("Error validating Hat:", err);
+log.error("Error validating Hat:", err);
```

client/components/nft/protocols/POAPConfig.tsx (1)
83-83: ⚠️ Potential issue | 🟡 Minor

Use the project logger instead of `console.error`. As per coding guidelines: "Use `log` from `@/lib/app/logger` for logging, never use `console.log`, `console.error`, or `console.warn`".

client/hooks/jar/useJarCreation.ts (4)
339-339: ⚠️ Potential issue | 🟡 Minor

Multiple `console.error`/`console.log` calls violate the logging guideline. Lines 339, 370, 446, and 470 use `console.error` or `console.log`. Per coding guidelines: "Use `log` from `@/lib/app/logger` for logging, never use `console.log`, `console.error`, or `console.warn`".

Proposed fix (example for line 339)

```diff
+import { log } from "@/lib/app/logger";
 ...
-console.error("Error creating jar:", error);
+log.error("Error creating jar:", error);
```

Apply the same pattern to lines 370, 446, and 470.
Also applies to: 370-370, 446-446, 470-470
190-219: 🛠️ Refactor suggestion | 🟠 Major

Duplicate NFT address validation may produce redundant error messages. `validateStep3` validates NFT addresses inline (lines 198-211) and then also calls `getAccessConfigValidationError(values)` (line 213), which re-validates NFT addresses (see `createV2CreateArgs.ts` lines 180-188). For `AccessType.NFTGated`, users may see both "At least one NFT address is required..." and the same error from `getAccessConfigValidationError`.

Consider removing the inline NFT address checks and relying solely on `getAccessConfigValidationError`, or skipping `getAccessConfigValidationError` for NFTGated since the inline check is more specific.

Proposed fix — remove inline duplication

```diff
 const validateStep3 = useCallback((): {
   isValid: boolean;
   errors: string[];
 } => {
   const values = form.getValues();
-  const { accessType, nftAddresses } = values;
   const errors: string[] = [];

-  if (accessType === AccessType.NFTGated) {
-    if (nftAddresses.length === 0) {
-      errors.push(
-        "At least one NFT address is required for NFT-gated access",
-      );
-    }
-
-    for (const addr of nftAddresses) {
-      if (!isAddress(addr)) {
-        errors.push(`NFT address is not a valid Ethereum address`);
-        break;
-      }
-    }
-  }
-
   const accessValidationError = getAccessConfigValidationError(values);
   if (accessValidationError) {
     errors.push(accessValidationError);
   }

   return { isValid: errors.length === 0, errors };
 }, [form]);
```
296-335: ⚠️ Potential issue | 🟡 Minor

Inconsistent indentation in V2/V1 branches of `confirmSubmit`. The V2 branch (lines 296-312) is indented one extra level relative to the V1 branch (lines 313-335). This same pattern appears in the transaction-success effect at lines 397-423.
377-465: ⚠️ Potential issue | 🟡 Minor

`triggerConfetti` is missing from the effect dependency array. The static analysis tool correctly flags that `triggerConfetti` is used at line 391 but not listed in the dependency array at line 455. Since `triggerConfetti` is defined in the component scope (not `useCallback`-wrapped), it gets a new reference each render. Wrap it in `useCallback` or move it inside the effect.

Proposed fix — move into the effect or wrap in useCallback

```diff
+const triggerConfetti = useCallback(async () => {
+  try {
+    const confettiModule = await import("canvas-confetti");
+    const confetti = confettiModule.default;
+    confetti({
+      particleCount: 100,
+      spread: 70,
+      origin: { y: 0.6 },
+    });
+  } catch (error) {
+    log.debug("Confetti animation failed:", error);
+  }
+}, []);
```
174-229: ⚠️ Potential issue | 🟠 Major

Missing Step 3 (access control) validation test.
Validation tests cover Step 1 (jar name), Step 2 (withdrawal amounts), and Step 4 (custom fee), but Step 3 (access control) is not tested. Access control is a critical path per project guidelines.
Would you like me to generate Step 3 validation tests covering NFTGated, POAP, Unlock, Hypercert, and Hats access types?
client/components/create/StepContent.tsx (1)
270-279: ⚠️ Potential issue | 🟡 Minor

Use the project logger instead of `console.error`. Line 277 uses `console.error` directly, which violates the project's coding guidelines. Replace it with the logger from `@/lib/app/logger`.

Proposed fix

```diff
+import { log } from "@/lib/app/logger";

 // ... in the clipboard handler:
-} catch (err) {
-  console.error("Failed to read clipboard:", err);
+} catch (err) {
+  log.error("Failed to read clipboard:", err);
 }
```

As per coding guidelines: "Use `log` from `@/lib/app/logger` for logging, never use `console.log`, `console.error`, or `console.warn`".
🤖 Fix all issues with AI agents
In `@client/__tests__/hooks/jar/createV2CreateArgs.test.ts`:
- Around line 1-212: Add unit tests for the exported
getAccessConfigValidationError function to achieve full coverage: create tests
that call getAccessConfigValidationError for each AccessType (NFTGated, POAP,
Unlock, Hypercert, Hats) covering both success and all failure/revert
conditions—e.g., NFTGated: missing address, invalid address format, multiple
addresses mismatch with nftTypes; POAP: missing eventId, non-numeric eventId,
invalid contract fallback; Unlock: missing or invalid unlockAddress; Hypercert:
missing hypercertAddress, missing/non-numeric hypercertTokenId, invalid
hypercertMinBalance; Hats: missing/non-numeric hatsId; use the same test harness
utilities as other tests (makeValues, AccessType/NFTType constants) and assert
returned error strings/undefined as appropriate from
getAccessConfigValidationError so each branch is exercised.
In `@client/__tests__/MetadataParser.test.tsx`:
- Around line 77-85: The test has inconsistent extra indentation for several
assertion blocks in client/__tests__/MetadataParser.test.tsx; fix the formatting
so the expect(...) assertions use the same indentation level as surrounding
asserts—specifically adjust the blocks using screen.getByTestId("description") /
screen.getByTestId("image") and the two other blocks (lines containing
expect(...).toHaveTextContent and expect(...).toHaveAttribute for the image and
other asserts) to remove the extra one-level indent so all expect and
screen.getByTestId calls align with adjacent assertions.
In `@client/components/create/StepContent.tsx`:
- Around line 607-615: handleAddNFT currently appends address to the
nftAddresses array without checking for duplicates; update handleAddNFT to first
normalize the incoming address (trim and lower/upper case consistently), read
currentAddresses via getValues("nftAddresses"), check if the normalized address
already exists (using the same normalization) and if so either return early or
update the corresponding entry in nftTypes, otherwise append the new address and
type using setValue("nftAddresses", ...) and setValue("nftTypes", ...);
reference the handleAddNFT function and the getValues/setValue usage as the
locations to implement this guard and address normalization.
In `@client/components/nft/NFTGateInput.tsx`:
- Line 84: Replace all direct console calls in NFTGateInput (e.g., the
console.warn at the sanitization check and the console.* occurrences around
lines handling input validation and NFT gating logic) with the centralized
logger: import and use the named export log from '@/lib/app/logger' and call the
appropriate log methods (e.g., log.warn/log.error/log.info) matching the
original console usage; update every occurrence referenced in the review (the
console.warn in the sanitizing branch and the other console.* usages in the
input handling/validation functions) to maintain the same message text and
context but routed through log.
- Around line 475-478: The render is calling canMakeApiCall() (used when
building the useEnhancedNFTValidation argument and collection preview queries)
which mutates callsRef and thus records a timestamp on every render; change the
rate-limiter API so render-time checks are read-only and mutation only happens
when an actual network request is made: add a read-only predicate (e.g.,
canCall() or canMakeApiCallReadOnly) and a separate recorder (e.g.,
recordCall()) on the rate limiter, replace render-time calls in NFTGateInput
(places invoking canMakeApiCall() to decide query inputs) to use the read-only
predicate, and move the timestamp recording into the actual queryFn(s) used by
useEnhancedNFTValidation and the collection preview so recordCall() is invoked
only when performing the real API request.
In `@client/components/nft/protocols/HatsConfig.tsx`:
- Around line 74-85: The code inside the try block has inconsistent indentation
around the if (hatDetails) branch; normalize the indentation so the if
(hatDetails) block lines up with surrounding statements (e.g.,
setIsValidating(true) and setValidationError). Locate the block that uses
hatDetails, setSelectedHat, onConfigChange (with hatId: hatIdToValidate,
hatsContract/hatsAddress: contractAddress, hatsId: hatDetails.id) and the else
branch calling setValidationError and setSelectedHat(null), and re-indent those
lines to the same level as the surrounding try scope for consistent readability.
In `@client/components/nft/protocols/POAPConfig.tsx`:
- Around line 135-147: The try block in handleValidateEventId has inconsistent
indentation for the if (event) branch making it deeper than surrounding code;
fix by aligning the if (event) { ... } and its corresponding else { ... } with
the surrounding try block scope so setSelectedEvent, onConfigChange and
setValidationError lines have the same indentation level as the surrounding
statements in handleValidateEventId, leaving the internal block contents
indented one level under the if/else; check the block around
POAPProvider.getEventById(eventId), setSelectedEvent, onConfigChange,
setValidationError and POAP_TOKEN_ADDRESS to ensure consistent indentation
matching the HatsConfig pattern.
In `@client/components/nft/ProtocolSelector.tsx`:
- Around line 233-240: The effect in ProtocolSelector can cause a render loop
when onConfigChange or initialConfig are unstable; change the effect to use refs
for onConfigChange and initialConfig (e.g., onConfigChangeRef and
initialConfigRef updated in separate useEffect hooks) and add a fallbackRef that
stores the last-applied fallback method id so you only call setSelectedMethod
and onConfigChange when filteredMethods produces a new fallback different from
fallbackRef.current; keep the effect dependent only on filteredMethods and
selectedMethod and reference the refs inside it to avoid repeatedly invoking
onConfigChange with unstable references (update fallbackRef.current after
calling onConfigChange).
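A framework-free sketch of the guard this prompt describes. The names `pickFallback`, `createRef`, and the `Method` shape are illustrative stand-ins, not the component's real API; inside the component the same logic would live in the `useEffect`, with `fallbackRef` being a real React ref:

```typescript
type Method = { id: string };

// Mimics React's ref cell: a mutable box that survives re-renders.
function createRef<T>(initial: T): { current: T } {
  return { current: initial };
}

// Returns the fallback method id to apply, or null when nothing should
// change. Comparing against fallbackRef.current is the guard that stops
// repeated setSelectedMethod/onConfigChange invocations for the same fallback.
function pickFallback(
  filteredMethods: Method[],
  selectedMethod: string | null,
  fallbackRef: { current: string | null },
): string | null {
  const stillValid = filteredMethods.some((m) => m.id === selectedMethod);
  if (stillValid) return null; // current selection is fine, do nothing
  const fallback = filteredMethods[0]?.id ?? null;
  if (fallback === null || fallback === fallbackRef.current) return null;
  fallbackRef.current = fallback; // record so the next run is a no-op
  return fallback;
}

const fallbackRef = createRef<string | null>(null);
const methods = [{ id: "erc721" }, { id: "hats" }];

const first = pickFallback(methods, "poap", fallbackRef);
const second = pickFallback(methods, "poap", fallbackRef);
```

The first call returns `"erc721"` and records it; the second call sees the same fallback in the ref and returns `null`, which is what breaks the render loop even when `onConfigChange` or `initialConfig` are unstable references.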
In `@client/hooks/jar/schemas/jarCreationSchema.ts`:
- Around line 31-50: protocolConfigSchema currently defines hatsId as z.number()
and omits hypercertTokenId, causing validation failures when strings from
HatsConfig.tsx (hatDetails.id) and HypercertConfig.tsx are passed; update
protocolConfigSchema to declare hatsId: z.string().optional() and add
hypercertTokenId: z.string().optional(), then keep the existing downstream
coercion in createV2CreateArgs.ts (where strings are converted to bigint) so
validation succeeds early while preserving the bigint conversion later.
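For illustration, a zod-free sketch of the intended shape change. The real fix edits `protocolConfigSchema` with `z.string().optional()`; the `isValidProtocolConfig` guard below is a hypothetical stand-in that mirrors those semantics (string `hatsId`, new optional string `hypercertTokenId`):

```typescript
interface ProtocolConfig {
  accessType: string;
  hatsId?: string; // was a number in the schema; HatsConfig emits strings
  hypercertTokenId?: string; // newly declared optional string field
}

// Accepts the string IDs that HatsConfig/HypercertConfig emit; BigInt
// coercion still happens downstream in createV2CreateArgs.ts.
function isValidProtocolConfig(value: unknown): value is ProtocolConfig {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.accessType !== "string") return false;
  const optionalString = (x: unknown) =>
    x === undefined || typeof x === "string";
  return optionalString(v.hatsId) && optionalString(v.hypercertTokenId);
}
```

With this shape, `{ accessType: "Hats", hatsId: "99" }` validates early, while the later bigint conversion is unchanged.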
In `@client/hooks/nft/useHats.ts`:
- Around line 254-260: The mapping in fetchUserHatsImpl currently hardcodes
UserHat flags (isWearing, isEligible, isInGoodStanding) to true for every hat
returned by getUserHats; change this so the mapped UserHat either leaves those
fields undefined/omitted or sets them based on the actual on-chain checks (calls
that use isWearerOfHat, isEligible, isInGoodStanding) instead of defaulting to
true—update the mapped creation in fetchUserHatsImpl (the mapped: UserHat[]
assignment) to populate those flags from the on-chain results or omit them until
verification completes so consumers don’t get false positives.
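A sketch of the safer mapping. The `toUserHat` helper and `OnChainHatStatus` shape are illustrative; in `fetchUserHatsImpl` the status would come from the hook's existing `isWearerOfHat` / `isEligible` / `isInGoodStanding` reads rather than being hardcoded:

```typescript
interface OnChainHatStatus {
  isWearing: boolean;
  isEligible: boolean;
  isInGoodStanding: boolean;
}

interface UserHat {
  hatId: string;
  isWearing?: boolean;
  isEligible?: boolean;
  isInGoodStanding?: boolean;
}

// Populate the flags only when an on-chain check result is available;
// otherwise omit them so consumers cannot misread an unverified "true".
function toUserHat(hatId: string, status?: OnChainHatStatus): UserHat {
  return status ? { hatId, ...status } : { hatId };
}
```

Consumers can then treat `undefined` as "not yet verified" instead of receiving a false positive.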
- Around line 135-142: The mountedRef declared in useHats (mountedRef and the
useEffect that toggles it) is dead code because no code, including the refetch
function, reads mountedRef.current; fix by either removing mountedRef and its
useEffect entirely, or (preferred) remove mountedRef and implement an
AbortController-based guard inside refetch: create an AbortController in refetch
or the surrounding effect, pass controller.signal to any async helpers called by
refetch, have those helpers respect signal.aborted (or throw on abort) and check
signal.aborted before calling setState (or catch the abort and skip state
updates), and ensure the controller is aborted in the cleanup so in-flight work
is cancelled; update any functions that currently call setState without an
AbortSignal to accept and use the signal.
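A minimal sketch of the AbortController pattern this prompt describes. The names `refetchWithAbort`, `fetchHats`, and `setState` are hypothetical stand-ins for the hook's refetch, its async helper, and its state setter:

```typescript
// Skip state updates once the signal is aborted (component unmounted or
// a newer refetch superseded this one).
function shouldApply(signal: AbortSignal): boolean {
  return !signal.aborted;
}

async function refetchWithAbort(
  fetchHats: (signal: AbortSignal) => Promise<string[]>,
  setState: (hats: string[]) => void,
  controller: AbortController,
): Promise<void> {
  try {
    const hats = await fetchHats(controller.signal);
    if (!shouldApply(controller.signal)) return; // aborted mid-flight
    setState(hats);
  } catch (err) {
    if (!shouldApply(controller.signal)) return; // swallow aborts only
    throw err; // real errors still propagate
  }
}
```

In the hook, the effect's cleanup would call `controller.abort()` so in-flight work is cancelled and no `setState` fires after unmount.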
- Line 265: In useHats.ts inside the useHats hook, replace the three direct
console.error calls with the project logger: add an import for log from
"@/lib/app/logger" and change each console.error(...) to log.error(...)
(preserving the original message and error object) so all error logging uses the
centralized logger; ensure the import is added at the top of the file and remove
the console.* usages (references: the useHats hook function and its error
handling blocks where console.error is currently used).
- Around line 282-284: The current validation uses Number(hatId) which loses
precision for large uint256 values; update the validation around the hatId check
in useHats.ts to reject null/empty inputs and validate via BigInt: replace the
Number.isNaN(Number(hatId)) logic with a try/catch that attempts BigInt(hatId)
(and also ensure it’s non-negative if required), and throw the same "Hat ID must
be a valid number" Error on failure so large integer hat IDs are handled
correctly.
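A sketch of the BigInt-based check. The `parseHatId` helper name is illustrative; the real change inlines this validation in `useHats.ts`:

```typescript
// Validates a hat ID without Number()'s precision loss on large uint256 values.
function parseHatId(hatId: string | null | undefined): bigint {
  // Reject null/empty first: BigInt("") and BigInt("  ") both yield 0n silently.
  if (hatId === null || hatId === undefined || hatId.trim() === "") {
    throw new Error("Hat ID must be a valid number");
  }
  let parsed: bigint;
  try {
    parsed = BigInt(hatId); // throws SyntaxError on non-numeric input
  } catch {
    throw new Error("Hat ID must be a valid number");
  }
  if (parsed < 0n) {
    throw new Error("Hat ID must be a valid number");
  }
  return parsed;
}
```

Unlike `Number.isNaN(Number(hatId))`, this accepts full-width uint256 hat IDs (Hats Protocol IDs occupy the upper bits of a uint256) without rounding.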
In `@contracts/src/CookieJar.sol`:
- Around line 424-438: The ERC-721 branch currently treats
nftRequirement.minBalance>0 as an implicit POAP discriminator which causes
misrouting when integrators set minBalance for ERC-721; update the
NftRequirement struct to add an explicit boolean (e.g., isPoapEventGate) or enum
to mark POAP/event gating, then change the branching in the ERC-721 handling:
keep tokenId==0 -> _requireAnyErc721Balance(...), if isPoapEventGate == true ->
call _requirePoapEventOwnership(nftContract, msg.sender, tokenId), otherwise
call _requireErc721TokenOwnership(nftContract, msg.sender, tokenId); also ensure
minBalance remains used only for ERC-1155 logic and add a validation/error for
invalid combinations if needed.
In `@docs/RELEASES.md`:
- Line 150: The RELEASES.md reference to MIGRATIONS.md is broken; either add a
new MIGRATIONS.md containing clear upgrade/migration steps between major
versions (including compatibility notes, breaking changes, required
config/schema changes and example commands) and leave the link in
docs/RELEASES.md, or remove/replace the link in docs/RELEASES.md to an existing
document; locate the reference via the string "MIGRATIONS.md" in
docs/RELEASES.md and ensure any new MIGRATIONS.md is committed alongside the
docs change and referenced with the same filename.
🧹 Nitpick comments (24)
docs/RELEASES.md (2)
23-23: Clarify the bunfig.toml change description.
The phrase "Removed Vitest preload that broke Bun's native test runner" is ambiguous—it could be read as the removal causing the breakage, when the intended meaning is that the preload itself was incompatible and removing it fixed the issue.
📝 Suggested clarification

```diff
-- **bunfig.toml**: Removed Vitest preload that broke Bun's native test runner; removed hardcoded `NODE_ENV=production` that prevented dev mode
+- **bunfig.toml**: Removed Vitest preload that was incompatible with Bun's native test runner; removed hardcoded `NODE_ENV=production` that prevented dev mode
```
94-94: Consider more concise wording.
The phrase "not compatible with" can be simplified to "incompatible with" for brevity.
✍️ Suggested improvement

```diff
-- New contract interfaces (not compatible with 2.x)
+- New contract interfaces (incompatible with 2.x)
```
client/hooks/nft/useHats.ts (2)
198-232: Two `useReadContract` calls (`isInGoodStanding`, `hatSupply`) have their results discarded.
Neither the returned `data` nor `error` from these calls is captured or used. This wastes RPC calls and is confusing to readers. Either capture and expose the results (e.g., feed them into `UserHat` or `HatInfo`) or remove these calls.
56-63: All chains map to the same Hats contract address — consider a single constant.
Every entry in `HATS_CONTRACTS` maps to `"0x3bc1A0Ad72417f2d411118085256fC53CBdDd137"`. A single `const HATS_CONTRACT_ADDRESS = "0x3bc1..."` with a `Set` of supported chain IDs would be clearer and easier to maintain.
contracts/src/CookieJar.sol (1)
18-21: `IPOAP` interface defined inline — consider moving to a separate file.
The interface is small, but for maintainability and reuse (e.g., in tests or other contracts), extracting it into its own file (e.g., `interfaces/IPOAP.sol`) would be cleaner. Not blocking.
contracts/test/CookieJarFactory.t.sol (1)
98-109: `testCreateCookieJar` and `testCreateCookieJarBasic` are identical tests.
Both create the same jar with the same parameters and assert the same conditions. Consider removing the duplicate to keep the test suite clean.
Also applies to: 130-141
contracts/test/CookieJarProtocols.t.sol (1)
227-238: `createUnlockAccessConfig` uses `mockPoap` instead of `mockUnlock` — intentional workaround.
`MockPublicLock` lacks `balanceOf(address)` and `ownerOf(uint256)`, so it would fail the constructor's `_validateNftContract` ERC721 interface check. Using `mockPoap` (which does implement these) is a valid test workaround. However, this means the `jarUnlock` tests don't actually exercise the Unlock Protocol's `getHasValidKey` interface at all — they test generic ERC721-balance-based access on a POAP mock. If Unlock Protocol support is a real feature, consider either:
- Adding ERC721 stubs to `MockPublicLock`, or
- Documenting that Unlock gating requires an ERC721-compatible lock contract.
client/config/supported-networks.ts (1)
228-232: `any[]` type violates the strict TypeScript / no-`any` coding guideline.
The `connectors` array is typed as `any[]`. Consider using the appropriate wagmi connector type or at least a narrower type.
As per coding guidelines: "Enable TypeScript strict mode with no `any` types and exhaustive enums."
client/hooks/nft/useEnhancedNFTValidation.ts (1)
7-11: Cache uses `any` type for `data`, violating the strict TypeScript guideline.
The validation cache stores `data: any` — consider typing it with `NFTValidationResult` to maintain type safety throughout the caching layer.
♻️ Proposed fix

```diff
 const validationCache = new Map<
   string,
-  { data: any; timestamp: number; blockNumber: number }
+  { data: NFTValidationResult; timestamp: number; blockNumber: number }
 >();
```

As per coding guidelines: "Use explicit types instead of `any` type in TypeScript."
client/components/nft/NFTGateInput.tsx (1)
803-824: Static analysis: `role="status"` div should use a semantic `<output>` element.
Biome flags that the element with `role="status"` (line 806) can be replaced with a semantic `<output>` element, which is the native HTML equivalent and improves accessibility without needing ARIA.
♻️ Replace div with output element

```diff
-  <div
+  <output
     id="nft-address-errors"
-    role="status"
     className={`text-xs px-3 py-2 rounded ${ ... }`}
   >
     ...
-  </div>
+  </output>
```
client/components/nft/ProtocolSelector.tsx (1)
149-204: `ConfigurationPanel` uses `any` for `config` and `onConfigChange` props.
Both `config?: any` and `onConfigChange: (config: any) => void` bypass type safety. Consider using a discriminated union or the existing `ProtocolConfig` type to preserve type checking through the configuration panel.
As per coding guidelines: "Use explicit types instead of `any` type in TypeScript."
client/lib/blockchain/constants.ts (1)
9-11: Use viem's built-in `zeroAddress` instead of re-declaring.
viem exports `zeroAddress` (typed as `Address`), which would avoid maintaining a duplicate definition and aligns with the Web3 interaction patterns.
♻️ Proposed change

```diff
-import type { Address } from "viem";
+import { zeroAddress, type Address } from "viem";

-/** Canonical zero address. */
-export const ZERO_ADDRESS =
-  "0x0000000000000000000000000000000000000000" as Address;
+/** Canonical zero address. */
+export const ZERO_ADDRESS: Address = zeroAddress;
```
client/components/nft/NFTSelector.tsx (2)
57-81: Pervasive `any` types in helper functions violate TypeScript strict mode guidelines.
All six helper functions (`getContractAddress`, `getNFTName`, `getNFTImage`, `getCollectionName`, `getTokenType`, `getVerifiedStatus`) accept `nft: any`, and the `NFTCard` memo (line 99) also types its `nft` prop as `any`. This cascades `any` throughout the component.
Consider defining a union type or a minimal common interface that covers the NFT data shapes you support, and use that instead.
As per coding guidelines: "Enable TypeScript strict mode with no `any` types and exhaustive enums".
393-393: Use `<fieldset>` instead of `<div role="group">` for semantic correctness.
Biome flags this: elements with `role="group"` should use the native `<fieldset>` element instead. You can visually hide the `<legend>` or style the fieldset to remove default borders.
Proposed fix

```diff
-<div className="flex gap-2" role="group" aria-label="View mode">
+<fieldset className="flex gap-2 border-0 p-0 m-0" aria-label="View mode">
   ...
-</div>
+</fieldset>
```
client/hooks/jar/createV2CreateArgs.ts (2)
255-260: Hardcoded multiTokenConfig defaults lack documentation.
`maxSlippagePercent: 500n` (5%), `minSwapAmount: 0n`, and `defaultFee: 3000` (0.3% Uniswap fee tier) are magic numbers. A brief comment explaining these defaults and their units would improve maintainability.
Proposed fix

```diff
 const multiTokenConfig: MultiTokenConfigArg = {
   enabled: values.autoSwapEnabled,
-  maxSlippagePercent: 500n,
-  minSwapAmount: 0n,
-  defaultFee: 3000,
+  maxSlippagePercent: 500n, // 5% max slippage (basis points)
+  minSwapAmount: 0n, // No minimum swap threshold
+  defaultFee: 3000, // 0.3% Uniswap fee tier (uint24)
 };
```
162-173: Default fallback to Hats config may mask unrecognized access types.
The final `return` at line 163 serves as the default case for any `AccessType` not explicitly handled (including `AccessType.Allowlist`, which returns earlier). However, if new access types are added to the enum, they'll silently fall through to Hats configuration. Consider adding an explicit check or a `never` exhaustiveness guard.
Proposed fix

```diff
+  if (values.accessType === AccessType.Hats) {
+    return {
+      accessType: ContractAccessType.ERC1155,
+      nftRequirement: {
+        nftContract: toAddressOrZero(
+          protocolValue(protocolConfig, ["hatsAddress", "hatsContract"]),
+        ),
+        tokenId: toBigIntOr(0n, protocolValue(protocolConfig, ["hatsId", "hatId"])),
+        minBalance: 1n,
+      },
+    };
+  }
+
   // Hats maps to ERC-1155: tokenId == hatId, minBalance == 1.
-  return {
-    accessType: ContractAccessType.ERC1155,
-    nftRequirement: {
-      nftContract: toAddressOrZero(
-        protocolValue(protocolConfig, ["hatsAddress", "hatsContract"]),
-      ),
-      tokenId: toBigIntOr(0n, protocolValue(protocolConfig, ["hatsId", "hatId"])),
-      minBalance: 1n,
-    },
-  };
+  // Exhaustiveness guard — should never reach here
+  const _exhaustive: never = values.accessType;
+  throw new Error(`Unhandled access type: ${_exhaustive}`);
```
client/__tests__/hooks/jar/createV2CreateArgs.test.ts (1)
52-53: Consider centralizing test addresses in a constants file.
Wallet and contract addresses (e.g., `"0x1234567890123456789012345678901234567890"`, `"0x1111111111111111111111111111111111111111"`) are scattered as inline strings. As per coding guidelines: "Test constants (wallet addresses, contract addresses) must be centralized in a constants.ts file".
Also applies to: 135-136, 151-152
client/hooks/jar/useJarCreation.ts (1)
411-418: `(decodedLog.args as any)` bypasses type safety.
At lines 412 and 417, `decodedLog.args` is cast to `any` to access `.jarAddress` and `.cookieJarAddress`. Since you already know the event name, you can narrow the type by checking `decodedLog.eventName` (which you do) and then using a type assertion to the specific event args type, or use a type guard.
As per coding guidelines: "Enable TypeScript strict mode with no `any` types".
client/components/create/StepContent.tsx (5)
134-148: Unnecessary `async` on `handleCustomCurrencySubmit`.
This callback doesn't contain any `await` expressions, so the `async` keyword is superfluous and causes it to return a `Promise<void>` instead of `void`.
Proposed fix

```diff
-  const handleCustomCurrencySubmit = useCallback(async () => {
+  const handleCustomCurrencySubmit = useCallback(() => {
```
563-574: `as any` casts bypass TypeScript strict mode.
Lines 572 and 582 cast `protocolConfig` objects to `any`, violating the strict typing guideline. This hides potential type mismatches between `SelectorProtocolConfig` and the form's `protocolConfig` field.
Consider defining a proper union type or extending the form schema's `protocolConfig` type to accommodate the `method` field, so both assignments can be type-safe without `as any`.
As per coding guidelines: "Enable TypeScript strict mode with no `any` types and exhaustive enums"
693-736: List key includes array index — items are not uniquely identified.
The key `${address}-${index}` still relies on the array index, which the static analysis tool flagged. Since NFT addresses should be unique per the gate list (especially if the duplicate guard suggested above is added), using just the `address` as the key would be stable across additions/removals.
Proposed fix (assuming duplicates are prevented)

```diff
-  key={`${address}-${index}`}
+  key={address}
```
85-119: Hardcoded token addresses per chain — consider extracting to a config.
Lines 98–110 hardcode token addresses for `chainId === 31337` and `chainId === 84532`. These would be more maintainable in a shared config/constants file (e.g., alongside `supported-networks` or `token-utils`), especially as more chains/tokens are added.
767-1057: Excessive `watch()` calls — consider a single `watch()` or `useWatch`.
`FinalSettingsStep` calls `watch()` individually 15+ times (lines 773–792). Each `watch("field")` subscribes independently. Using a single `watch()` (no args, returns entire form) or `useWatch` for a subset would reduce subscription overhead and improve readability.
Example consolidation

```diff
-  const enableCustomFee = watch("enableCustomFee");
-  const streamingEnabled = watch("streamingEnabled");
-  const requireStreamApproval = watch("requireStreamApproval");
-  // ... 12 more watch calls
+  const values = watch(); // subscribe to entire form
+  const {
+    enableCustomFee,
+    streamingEnabled,
+    requireStreamApproval,
+    autoSwapEnabled,
+    jarName,
+    jarOwnerAddress,
+    supportedCurrency,
+    accessType,
+    withdrawalOption,
+    fixedAmount,
+    maxWithdrawal,
+    withdrawalInterval,
+    strictPurpose,
+    emergencyWithdrawalEnabled,
+    oneTimeWithdrawal,
+    customFee,
+    maxStreamRate,
+    minStreamDuration,
+  } = values;
```
client/app/create/page.tsx (1)
51-64: `isCurrentStepValid` is recreated every render — consider `useCallback`.
This function is passed as a prop to `CreateJarForm`, so a new reference is created on every render of the page, causing `CreateJarForm` to re-render even if nothing changed. Wrapping it in `useCallback` with `[currentStep, isV2Contract, validateStep1, validateStep2, validateStep3, validateStep4]` as dependencies would stabilize the reference.
Proposed fix

```diff
-  const isCurrentStepValid = () => {
+  const isCurrentStepValid = useCallback(() => {
     switch (currentStep) {
       case 1:
         return validateStep1().isValid;
       case 2:
         return validateStep2().isValid;
       case 3:
         return isV2Contract ? validateStep3().isValid : true;
       case 4:
         return validateStep4().isValid;
       default:
         return false;
     }
-  };
+  }, [currentStep, isV2Contract, validateStep1, validateStep2, validateStep3, validateStep4]);
```

Based on learnings: "Use React hooks (useState, useMemo, useCallback) with proper dependency arrays and include all dependencies in exhaustive-deps"
```ts
import { decodeFunctionData, encodeFunctionData } from "viem";
import { describe, expect, it, vi } from "vitest";
import { cookieJarFactoryAbi } from "@/generated";
import {
  FACTORY_DEFAULT_FEE_SENTINEL,
  buildV2CreateCookieJarArgs,
  getFeePercentageOnDeposit,
} from "@/hooks/jar/createV2CreateArgs";
import { ETH_ADDRESS, HATS_PROTOCOL_ADDRESS, POAP_TOKEN_ADDRESS } from "@/lib/blockchain/constants";

vi.mock("@/hooks/jar/schemas/jarCreationSchema", () => ({
  AccessType: {
    Allowlist: 0,
    NFTGated: 1,
    POAP: 2,
    Unlock: 3,
    Hypercert: 4,
    Hats: 5,
  },
  NFTType: {
    None: 0,
    ERC721: 1,
    ERC1155: 2,
  },
}));

const AccessType = {
  Allowlist: 0,
  NFTGated: 1,
  POAP: 2,
  Unlock: 3,
  Hypercert: 4,
  Hats: 5,
} as const;

const NFTType = {
  None: 0,
  ERC721: 1,
  ERC1155: 2,
} as const;

type JarCreationFormData = Parameters<typeof buildV2CreateCookieJarArgs>[0]["values"];
type ProtocolConfig = JarCreationFormData["protocolConfig"];

type MakeValuesOverrides = Partial<Omit<JarCreationFormData, "protocolConfig">> & {
  protocolConfig?: Partial<ProtocolConfig>;
};

function makeValues(overrides: MakeValuesOverrides = {}): JarCreationFormData {
  const baseValues: JarCreationFormData = {
    jarName: "Test Jar",
    jarOwnerAddress: "0x1234567890123456789012345678901234567890",
    supportedCurrency: ETH_ADDRESS,
    metadata: "Test metadata",
    imageUrl: "",
    externalLink: "",
    showCustomCurrency: false,
    customCurrencyAddress: "",
    withdrawalOption: 0,
    fixedAmount: "1",
    maxWithdrawal: "2",
    withdrawalInterval: "7",
    strictPurpose: true,
    emergencyWithdrawalEnabled: true,
    oneTimeWithdrawal: false,
    accessType: AccessType.Allowlist,
    nftAddresses: [] as string[],
    nftTypes: [] as number[],
    protocolConfig: { accessType: "Allowlist" },
    enableCustomFee: false,
    customFee: "",
    streamingEnabled: false,
    requireStreamApproval: true,
    maxStreamRate: "1.0",
    minStreamDuration: "1",
    autoSwapEnabled: false,
  };

  return {
    ...baseValues,
    ...overrides,
    protocolConfig: {
      ...baseValues.protocolConfig,
      ...overrides.protocolConfig,
    },
  };
}

describe("buildV2CreateCookieJarArgs", () => {
  it("encodes createCookieJar args compatible with current factory ABI", () => {
    const args = buildV2CreateCookieJarArgs({
      values: makeValues(),
      metadata: "metadata",
      parseAmount: (amount) => BigInt(Math.floor(Number.parseFloat(amount) * 1e18)),
    });

    const data = encodeFunctionData({
      abi: cookieJarFactoryAbi,
      functionName: "createCookieJar",
      args,
    });
    expect(data.startsWith("0x")).toBe(true);

    const decoded = decodeFunctionData({
      abi: cookieJarFactoryAbi,
      data,
    });
    expect(decoded.functionName).toBe("createCookieJar");
    expect(decoded.args?.length).toBe(3);
  });

  it("uses default fee sentinel when custom fee is disabled", () => {
    const fee = getFeePercentageOnDeposit(makeValues({ enableCustomFee: false }));
    expect(fee).toBe(FACTORY_DEFAULT_FEE_SENTINEL);
  });

  it("uses explicit custom fee when provided", () => {
    const fee = getFeePercentageOnDeposit(
      makeValues({ enableCustomFee: true, customFee: "2.5" }),
    );
    expect(fee).toBe(250n);
  });

  it("supports explicit zero-percent fee", () => {
    const fee = getFeePercentageOnDeposit(
      makeValues({ enableCustomFee: true, customFee: "0" }),
    );
    expect(fee).toBe(0n);
  });

  it("maps NFT ERC721 access to contract enum ERC721", () => {
    const [jarConfig, accessConfig] = buildV2CreateCookieJarArgs({
      values: makeValues({
        accessType: AccessType.NFTGated,
        nftAddresses: ["0x1111111111111111111111111111111111111111"],
        nftTypes: [NFTType.ERC721],
      }),
      metadata: "metadata",
      parseAmount: () => 1n,
    });
    expect(jarConfig.accessType).toBe(1);
    expect(accessConfig.nftRequirement.nftContract).toBe(
      "0x1111111111111111111111111111111111111111",
    );
  });

  it("maps NFT ERC1155 access to contract enum ERC1155", () => {
    const [jarConfig] = buildV2CreateCookieJarArgs({
      values: makeValues({
        accessType: AccessType.NFTGated,
        nftAddresses: ["0x1111111111111111111111111111111111111111"],
        nftTypes: [NFTType.ERC1155],
      }),
      metadata: "metadata",
      parseAmount: () => 1n,
    });
    expect(jarConfig.accessType).toBe(2);
  });

  it("maps POAP, Unlock, Hypercert, and Hats to supported contract enum domain", () => {
    const [poapJar, poapAccess] = buildV2CreateCookieJarArgs({
      values: makeValues({
        accessType: AccessType.POAP,
        protocolConfig: { accessType: "POAP", eventId: "1234" },
      }),
      metadata: "metadata",
      parseAmount: () => 1n,
    });
    expect(poapJar.accessType).toBe(1);
    expect(poapAccess.nftRequirement.nftContract).toBe(POAP_TOKEN_ADDRESS);
    expect(poapAccess.nftRequirement.tokenId).toBe(1234n);
    expect(poapAccess.nftRequirement.minBalance).toBe(1n);

    const [unlockJar] = buildV2CreateCookieJarArgs({
      values: makeValues({
        accessType: AccessType.Unlock,
        protocolConfig: {
          accessType: "Unlock",
          unlockAddress: "0x2222222222222222222222222222222222222222",
        },
      }),
      metadata: "metadata",
      parseAmount: () => 1n,
    });
    expect(unlockJar.accessType).toBe(1);

    const [hypercertJar] = buildV2CreateCookieJarArgs({
      values: makeValues({
        accessType: AccessType.Hypercert,
        protocolConfig: {
          accessType: "Hypercert",
          hypercertAddress: "0x3333333333333333333333333333333333333333",
          hypercertTokenId: "42",
          hypercertMinBalance: 5,
        },
      }),
      metadata: "metadata",
      parseAmount: () => 1n,
    });
    expect(hypercertJar.accessType).toBe(2);

    const [hatsJar, hatsAccess] = buildV2CreateCookieJarArgs({
      values: makeValues({
        accessType: AccessType.Hats,
        protocolConfig: { accessType: "Hats", hatsId: 99 },
      }),
      metadata: "metadata",
      parseAmount: () => 1n,
    });
    expect(hatsJar.accessType).toBe(2);
    expect(hatsAccess.nftRequirement.nftContract).toBe(HATS_PROTOCOL_ADDRESS);
  });
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if getAccessConfigValidationError is tested elsewhere
rg -rn 'getAccessConfigValidationError' --type=ts client/__tests__/
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 56

🏁 Script executed:

```shell
#!/bin/bash
# Check the source module to see what's exported
rg -n 'getAccessConfigValidationError' --type=ts client/src/ --type=ts
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 125

🏁 Script executed:

```shell
#!/bin/bash
# Look at the actual exports from createV2CreateArgs module
head -50 client/src/hooks/jar/createV2CreateArgs.ts
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 175

🏁 Script executed:

```shell
#!/bin/bash
# Discover repository structure
git ls-files | head -30
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 856

🏁 Script executed:

```shell
#!/bin/bash
# Find the actual createV2CreateArgs module
find . -name "createV2CreateArgs*" -type f
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 170

🏁 Script executed:

```shell
#!/bin/bash
# Search for getAccessConfigValidationError anywhere in the repo
rg -rn 'getAccessConfigValidationError'
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 248

🏁 Script executed:

```shell
#!/bin/bash
# Read the source module to see what functions are exported
cat client/hooks/jar/createV2CreateArgs.ts
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 8877

🏁 Script executed:

```shell
#!/bin/bash
# Check the test file to see what's being tested
cat client/__tests__/hooks/jar/createV2CreateArgs.test.ts
```

Repository: greenpill-dev-guild/cookie-jar
Length of output: 5961
Add comprehensive tests for getAccessConfigValidationError—critical access control validation function.
The exported getAccessConfigValidationError function validates access requirements for each protocol (NFTGated, POAP, Unlock, Hypercert, Hats) but has no test coverage. This function handles critical access control validation and must have 100% coverage per project standards, including:
- NFTGated: address presence and validity validation
- POAP: event ID requirement and contract address validation
- Unlock: contract address requirement and validity
- Hypercert: contract address and token ID requirement validation
- Hats: hat ID requirement validation
Add tests covering success cases and all revert conditions for each access type.
🤖 Prompt for AI Agents
In `@client/__tests__/hooks/jar/createV2CreateArgs.test.ts` around lines 1 - 212,
Add unit tests for the exported getAccessConfigValidationError function to
achieve full coverage: create tests that call getAccessConfigValidationError for
each AccessType (NFTGated, POAP, Unlock, Hypercert, Hats) covering both success
and all failure/revert conditions—e.g., NFTGated: missing address, invalid
address format, multiple addresses mismatch with nftTypes; POAP: missing
eventId, non-numeric eventId, invalid contract fallback; Unlock: missing or
invalid unlockAddress; Hypercert: missing hypercertAddress, missing/non-numeric
hypercertTokenId, invalid hypercertMinBalance; Hats: missing/non-numeric hatsId;
use the same test harness utilities as other tests (makeValues,
AccessType/NFTType constants) and assert returned error strings/undefined as
appropriate from getAccessConfigValidationError so each branch is exercised.
```ts
const handleAddNFT = useCallback(
  (address: string, type: number) => {
    const currentAddresses = getValues("nftAddresses");
    const currentTypes = getValues("nftTypes");
    setValue("nftAddresses", [...currentAddresses, address]);
    setValue("nftTypes", [...currentTypes, type as NFTType]);
  },
  [getValues, setValue],
);
```
No duplicate-address guard in handleAddNFT.
handleAddNFT blindly appends to nftAddresses without checking if the same contract address already exists. This could lead to duplicate entries in the NFT gate list, which would be confusing for users and could cause unexpected behavior at the contract level.
Proposed fix

```diff
 const handleAddNFT = useCallback(
   (address: string, type: number) => {
     const currentAddresses = getValues("nftAddresses");
+    if (currentAddresses.includes(address)) {
+      return; // already added
+    }
     const currentTypes = getValues("nftTypes");
     setValue("nftAddresses", [...currentAddresses, address]);
     setValue("nftTypes", [...currentTypes, type as NFTType]);
   },
   [getValues, setValue],
 );
```
🤖 Prompt for AI Agents
In `@client/components/create/StepContent.tsx` around lines 607 - 615,
handleAddNFT currently appends address to the nftAddresses array without
checking for duplicates; update handleAddNFT to first normalize the incoming
address (trim and lower/upper case consistently), read currentAddresses via
getValues("nftAddresses"), check if the normalized address already exists (using
the same normalization) and if so either return early or update the
corresponding entry in nftTypes, otherwise append the new address and type using
setValue("nftAddresses", ...) and setValue("nftTypes", ...); reference the
handleAddNFT function and the getValues/setValue usage as the locations to
implement this guard and address normalization.
```ts
// Contract validation
const { isValid, detectedType, isLoading, error } = useEnhancedNFTValidation(
  canMakeApiCall() ? debouncedAddress : "",
);
```
Bug: canMakeApiCall() called during render has a write side-effect, exhausting the rate limit on re-renders.
canMakeApiCall() pushes a timestamp into callsRef every time it returns true. It's called during render (line 477) to compute the useEnhancedNFTValidation argument, and again at lines 491 and 557 for the collection preview query. Since React re-renders on every state change, the rate limiter counts renders — not actual API calls — and will prematurely block requests well before the intended 30/minute threshold.
Separate the "check" from the "record" — e.g., add a read-only canCall() predicate and only record a call inside the actual queryFn where an API request is made.
🐛 Sketch of a fix for useRateLimit

```diff
 const useRateLimit = (maxCalls: number, windowMs: number) => {
   const callsRef = useRef<number[]>([]);
   const [resetIn, setResetIn] = useState(0);
   const timerRef = useRef<ReturnType<typeof setInterval> | null>(null);

-  const isAllowed = useCallback(() => {
+  // Read-only check — safe to call during render
+  const canCall = useCallback(() => {
+    const now = Date.now();
+    const cutoff = now - windowMs;
+    const activeCalls = callsRef.current.filter((time) => time > cutoff);
+    return activeCalls.length < maxCalls;
+  }, [maxCalls, windowMs]);
+
+  // Records a call — only invoke when actually making an API request
+  const recordCall = useCallback(() => {
     const now = Date.now();
     const cutoff = now - windowMs;
-
     callsRef.current = callsRef.current.filter((time) => time > cutoff);
-
-    if (callsRef.current.length < maxCalls) {
-      callsRef.current.push(now);
-      return true;
-    }
-
-    return false;
-  }, [maxCalls, windowMs]);
+    callsRef.current.push(now);
+  }, [windowMs]);

   // ...countdown effect unchanged...

-  return { isAllowed, resetIn };
+  return { canCall, recordCall, resetIn };
 };
```

Then use `canCall()` for the render-time guard and `recordCall()` inside `queryFn`.
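The check/record split also holds up outside React. A minimal framework-free sketch (a hypothetical `makeRateLimiter` helper, not the actual hook) shows why repeated render-time checks no longer consume quota:

```typescript
// Hypothetical framework-free analogue of the check/record split.
// Render-time code calls canCall(); only the real request path calls recordCall().
function makeRateLimiter(maxCalls: number, windowMs: number, now = Date.now) {
  let calls: number[] = [];
  const prune = () => {
    const cutoff = now() - windowMs;
    calls = calls.filter((t) => t > cutoff);
  };
  return {
    // Read-only: safe to invoke on every render.
    canCall(): boolean {
      prune();
      return calls.length < maxCalls;
    },
    // Mutating: invoke exactly once per actual API request.
    recordCall(): void {
      prune();
      calls.push(now());
    },
  };
}

// Repeated checks do not consume quota; only recorded calls do.
const demoLimiter = makeRateLimiter(2, 60_000);
for (let i = 0; i < 100; i++) demoLimiter.canCall(); // 100 "renders", quota untouched
demoLimiter.recordCall();
demoLimiter.recordCall();
// demoLimiter.canCall() is now false until the window slides.
```

The same two-method shape maps directly onto the hook: the predicate feeds the query input, the recorder lives inside `queryFn`.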
🤖 Prompt for AI Agents
In `@client/components/nft/NFTGateInput.tsx` around lines 475 - 478, The render is
calling canMakeApiCall() (used when building the useEnhancedNFTValidation
argument and collection preview queries) which mutates callsRef and thus records
a timestamp on every render; change the rate-limiter API so render-time checks
are read-only and mutation only happens when an actual network request is made:
add a read-only predicate (e.g., canCall() or canMakeApiCallReadOnly) and a
separate recorder (e.g., recordCall()) on the rate limiter, replace render-time
calls in NFTGateInput (places invoking canMakeApiCall() to decide query inputs)
to use the read-only predicate, and move the timestamp recording into the actual
queryFn(s) used by useEnhancedNFTValidation and the collection preview so
recordCall() is invoked only when performing the real API request.
```ts
const mapped: UserHat[] = result.hats.map((hat) => ({
  hatId: hat.id,
  hat: toHatInfo(hat),
  isWearing: true,
  isEligible: true,
  isInGoodStanding: true,
}));
```
Hardcoded isWearing: true, isEligible: true, isInGoodStanding: true is misleading.
fetchUserHatsImpl maps every hat returned by getUserHats with all status flags set to true, regardless of actual on-chain state. Since the hook already performs on-chain isWearerOfHat / isEligible / isInGoodStanding reads, these fields should either reflect those reads or be left undefined/omitted until verified. Consumers relying on UserHat.isEligible will get a false positive.
🤖 Prompt for AI Agents
In `@client/hooks/nft/useHats.ts` around lines 254 - 260, The mapping in
fetchUserHatsImpl currently hardcodes UserHat flags (isWearing, isEligible,
isInGoodStanding) to true for every hat returned by getUserHats; change this so
the mapped UserHat either leaves those fields undefined/omitted or sets them
based on the actual on-chain checks (calls that use isWearerOfHat, isEligible,
isInGoodStanding) instead of defaulting to true—update the mapped creation in
fetchUserHatsImpl (the mapped: UserHat[] assignment) to populate those flags
from the on-chain results or omit them until verification completes so consumers
don’t get false positives.
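A sketch of that shape, with simplified hypothetical types standing in for the real `UserHat` definitions in the client codebase:

```typescript
// Simplified, hypothetical types (the real UserHat shape lives in
// client/hooks/nft/useHats.ts).
interface HatStatus {
  isWearing: boolean;
  isEligible: boolean;
  isInGoodStanding: boolean;
}

interface UserHat {
  hatId: string;
  isWearing?: boolean;
  isEligible?: boolean;
  isInGoodStanding?: boolean;
}

// Populate status flags only from verified on-chain reads
// (isWearerOfHat / isEligible / isInGoodStanding); omit them when no
// verification result exists instead of defaulting to true.
function mapUserHats(hatIds: string[], verified: Map<string, HatStatus>): UserHat[] {
  return hatIds.map((hatId) => {
    const status = verified.get(hatId);
    return status ? { hatId, ...status } : { hatId };
  });
}

const verifiedStatuses = new Map<string, HatStatus>([
  ["0x01", { isWearing: true, isEligible: false, isInGoodStanding: true }],
]);
const mappedHats = mapUserHats(["0x01", "0x02"], verifiedStatuses);
// mappedHats[0].isEligible === false  (from the on-chain read)
// mappedHats[1].isEligible === undefined  (unverified, not a false positive)
```

Consumers can then treat `undefined` as "not yet verified" rather than trusting a hardcoded `true`.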
```diff
 // tokenId == 0 means "any token from contract" (generic ERC-721 gate)
 if (nftRequirement.tokenId == 0) {
-    // Check if user owns at least one token from the contract
-    try IERC721(nftRequirement.nftContract).balanceOf(msg.sender) returns (uint256 balance) {
-        if (balance == 0) revert CookieJarLib.NotAuthorized();
-    } catch {
-        revert CookieJarLib.NotAuthorized();
-    }
-} else {
-    // Check ownership of specific token ID
-    try IERC721(nftRequirement.nftContract).ownerOf(nftRequirement.tokenId) returns (address owner) {
-        if (owner != msg.sender) revert CookieJarLib.NotAuthorized();
-    } catch {
-        revert CookieJarLib.NotAuthorized();
-    }
+    _requireAnyErc721Balance(nftRequirement.nftContract, msg.sender);
+    return;
 }

+// POAP event gate marker: tokenId=eventId and minBalance>0.
+// This preserves ABI shape while enabling strict event-level enforcement.
+if (nftRequirement.minBalance > 0) {
+    _requirePoapEventOwnership(nftRequirement.nftContract, msg.sender, nftRequirement.tokenId);
+    return;
+}
+
+// Otherwise require ownership of one specific ERC-721 token ID.
+_requireErc721TokenOwnership(nftRequirement.nftContract, msg.sender, nftRequirement.tokenId);
```
🧩 Analysis chain

🏁 Scripts executed against greenpill-dev-guild/cookie-jar:

```shell
# Find NftRequirement struct definition
ast-grep --pattern 'struct NftRequirement {
  $$$
}'

# Find all minBalance usages in Solidity files
rg -n 'minBalance' -t sol -C 3

# First, find all .sol files
fd -e sol
echo "---"
# Search for NftRequirement struct definition
rg -n 'struct NftRequirement' -A 10

# Search for all minBalance usages more broadly
rg -n 'minBalance' -A 2 -B 2

# Find the implementations of the validation functions
rg -n '_requirePoapEventOwnership|_requireAnyErc721Balance|_requireErc721TokenOwnership' -A 5

# Check what happens with ERC1155 - look at the full validation logic around line 440-450
sed -n '420,455p' contracts/src/CookieJar.sol
```
The `minBalance` field is overloaded, causing silent misconfiguration for standard ERC-721 gates with specific-token requirements.

The three-way branching for ERC-721 validation relies on `minBalance > 0` as an implicit discriminator for POAP event-level gating. This makes it impossible to express "require ownership of specific tokenId X with a minimum balance check" for standard ERC-721: if an integrator sets `tokenId=X` and `minBalance=1` intending a minimum-balance requirement, the contract silently enters the POAP code path and calls `tokenDetailsOfOwnerByEvent` on a non-POAP contract, producing `NFTValidationFailed` with no indication of the misconfiguration.

The struct comment states `minBalance` is "for ERC1155," yet the code uses it as a POAP discriminator in the ERC-721 path, violating explicit-over-implicit principles. Introduce an explicit field (e.g., `bool isPoapEventGate` or an enum) in `NftRequirement` to cleanly separate POAP-specific logic from balance requirements.
🤖 Prompt for AI Agents
In `@contracts/src/CookieJar.sol` around lines 424 - 438, The ERC-721 branch
currently treats nftRequirement.minBalance>0 as an implicit POAP discriminator
which causes misrouting when integrators set minBalance for ERC-721; update the
NftRequirement struct to add an explicit boolean (e.g., isPoapEventGate) or enum
to mark POAP/event gating, then change the branching in the ERC-721 handling:
keep tokenId==0 -> _requireAnyErc721Balance(...), if isPoapEventGate == true ->
call _requirePoapEventOwnership(nftContract, msg.sender, tokenId), otherwise
call _requireErc721TokenOwnership(nftContract, msg.sender, tokenId); also ensure
minBalance remains used only for ERC-1155 logic and add a validation/error for
invalid combinations if needed.
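The misrouting is easiest to see with the branch order modeled in isolation. A hypothetical TypeScript model of the routing (the real logic is Solidity in `CookieJar.sol`; names here are illustrative):

```typescript
// Model of the ERC-721 branch order described above (hypothetical,
// mirroring the contract's three-way dispatch, not the contract itself).
type Erc721Route = "anyTokenBalance" | "poapEventOwnership" | "specificTokenOwnership";

function routeErc721Gate(tokenId: bigint, minBalance: bigint): Erc721Route {
  if (tokenId === 0n) return "anyTokenBalance"; // generic "any token" gate
  if (minBalance > 0n) return "poapEventOwnership"; // minBalance doubles as POAP marker
  return "specificTokenOwnership";
}

// An integrator who sets tokenId=42 and minBalance=1 intending a plain
// ERC-721 ownership-with-balance check is silently routed to the POAP path:
routeErc721Gate(42n, 1n); // "poapEventOwnership", not "specificTokenOwnership"
```

An explicit discriminator field would remove the second branch's dependence on `minBalance` entirely.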
docs/RELEASES.md (outdated)

> ## Upgrade Guides
>
> See [MIGRATIONS.md](MIGRATIONS.md) for upgrade instructions between major versions.
🧩 Analysis chain

🏁 Script executed:

```shell
fd -t f "MIGRATIONS\.md"
```

(no matching file found in the repository)
Create MIGRATIONS.md or update the documentation reference.
Line 150 references MIGRATIONS.md for upgrade instructions, but this file does not exist in the repository. Either create the file with migration guidance between major versions or remove the broken link from the documentation.
🤖 Prompt for AI Agents
In `@docs/RELEASES.md` at line 150, The RELEASES.md reference to MIGRATIONS.md is
broken; either add a new MIGRATIONS.md containing clear upgrade/migration steps
between major versions (including compatibility notes, breaking changes,
required config/schema changes and example commands) and leave the link in
docs/RELEASES.md, or remove/replace the link in docs/RELEASES.md to an existing
document; locate the reference via the string "MIGRATIONS.md" in
docs/RELEASES.md and ensure any new MIGRATIONS.md is committed alongside the
docs change and referenced with the same filename.
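A guard against this class of broken doc links could be automated. A hedged TypeScript sketch (a hypothetical `brokenMdLinks` checker, not part of the repository), which flags relative `.md` targets that do not exist next to the referencing file:

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical checker: scan a Markdown file for [text](TARGET.md) links
// and report relative targets that do not exist on disk.
function brokenMdLinks(file: string): string[] {
  const dir = path.dirname(file);
  const text = fs.readFileSync(file, "utf8");
  const broken: string[] = [];
  for (const m of text.matchAll(/\]\(([^)\s]+\.md)\)/g)) {
    const target = m[1];
    // Skip absolute URLs; only check relative filesystem targets.
    if (!/^https?:/.test(target) && !fs.existsSync(path.join(dir, target))) {
      broken.push(target);
    }
  }
  return broken;
}
```

Run over `docs/`, this would have caught the dangling `MIGRATIONS.md` reference before release.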
```
# Conflicts:
#   client/__tests__/hooks/jar/createV2CreateArgs.test.ts
#   contracts/src/CookieJar.sol
```
…-fixes fix: address review feedback across client and contracts
Summary
Production-hardening release for Cookie Jar v3.2.0, focused on NFT gating audit fixes, cross-stack validation alignment, CI pipeline reliability, and accessibility improvements.
NFT Gating Audit Fixes
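One item below, the `safeBigInt()` helper, can be sketched from its description (a hypothetical implementation; the shipped signature may differ):

```typescript
// Sketch of a safeBigInt()-style helper: BigInt("1.5") or BigInt("12x")
// throws a SyntaxError, so wrap the conversion and return undefined for
// invalid manual token ID input. (Hypothetical, based on the release note.)
function safeBigInt(input: string): bigint | undefined {
  const trimmed = input.trim();
  if (trimmed === "") return undefined; // BigInt("") would silently yield 0n
  try {
    return BigInt(trimmed);
  } catch {
    return undefined; // invalid token ID text
  }
}

safeBigInt("42");   // 42n
safeBigInt("1.5");  // undefined — BigInt() would throw
safeBigInt("0x1f"); // 31n — hex strings are valid BigInt input
```

Returning `undefined` lets the form layer show a validation message instead of crashing on keystroke.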
- `safeBigInt()` helper prevents `BigInt()` throws on invalid manual token ID input
- `as const` tuple types for wagmi

Unicode-Aware Purpose Validation
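The fixes in this section hinge on counting code points rather than UTF-16 units; a minimal sketch consistent with the `[...str].length` approach named below:

```typescript
// The string iterator yields one element per Unicode code point, so
// surrogate pairs count once instead of twice (unlike .length).
function unicodeCodePointLength(str: string): number {
  return [...str].length;
}

const purpose = "fund 🍪 jar"; // the emoji is one code point but two UTF-16 units
purpose.length;                  // 11 (UTF-16 code units)
unicodeCodePointLength(purpose); // 10 (code points)
```

A 27-character strict minimum therefore measures what users see typed, not how JavaScript stores it.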
- `countUnicodeCodePoints()` counts UTF-8 leading bytes instead of raw `bytes().length`
- `unicodeCodePointLength()` via `[...str].length` (the string iterator handles surrogate pairs)
- `CookieJar._validateWithdrawalConstraints()` delegates to `CookieJarValidation.validatePurpose()`

CI & Build Pipeline
- `scripts/oz-compat.sh` for Superfluid imports
- `vercel.json` for bun builds
- `FOUNDRY_PROFILE=dev` consistency across CI

Config & Script Fixes
- `bun test` → `bun run test` in 3 package.json scripts + all README/docs references
- `bunfig.toml` vitest preload (broke bun's native runner) and hardcoded `NODE_ENV=production`

Accessibility & UX
- `--primary-foreground` token for WCAG 4.5:1 contrast
- `htmlFor`/`id` linking for 3 label-input pairs
- `Math.max(0, tokenDecimals)` prevents `RangeError`

Test Plan
- `forge build` + `forge test -vv` verified

🤖 Generated with Claude Code