
feat(e2e): add limit order cleanup scripts and workflows #4274

Open
jasbanza wants to merge 5 commits into stage from
jason/e2e-account-limitorder-cleanup

Conversation

@jasbanza
Member

@jasbanza jasbanza commented Feb 18, 2026

What is the purpose of the change:

E2E monitoring tests create limit orders that are not always cancelled, accumulating open orders on test accounts over time. This PR adds standalone scripts and GitHub Actions workflows to cancel all open limit orders for every e2e test account directly on-chain via cosmjs — without needing Playwright or browser automation.

Linear Task

MER-82: E2E Tests - Cleanup accounts > close limit orders

Brief Changelog

  • Add utils/order-utils.ts — reusable module for wallet address derivation from private key, SQS active order fetching, signing client creation, and cancel message building
  • Add scripts/cancel-all-orders.ts — CLI to batch-cancel all open limit orders with up to 3 retry rounds and DRY_RUN support
  • Add scripts/get-active-orders.ts — read-only CLI to list all active orders for an account
  • Add cancel-open-orders.yml — manually-triggered workflow that cancels orders for all four test accounts in parallel
  • Add cancel-open-orders-dry-run.yml — safe dry-run variant (no transactions sent) for verifying the scripts on CI
  • Update packages/e2e/package.json with cosmjs and tsx dependencies (pinned to match monorepo versions)
  • Update packages/e2e/README.md — correct CI secrets account mapping (add missing TEST_PRIVATE_KEY_3), document new scripts, workflows, and local setup

Testing and Verifying

  • Scripts type-check cleanly via tsc --noEmit
  • Dry run tested locally — address derivation confirmed correct (osmo1dkmsds5j6q9l9lv4dkhas68767tlqfx8ls5j0c matches TEST_PRIVATE_KEY_1), SQS query returns successfully
  • Trigger the Cancel Open Limit Orders — Dry Run workflow from GitHub Actions to verify on CI before running the real cancel workflow

Note

Medium Risk
Introduces automation that can broadcast real on-chain cancel transactions from CI secrets, so misconfiguration or script bugs could cancel unintended orders or fail tests. Changes are scoped to E2E tooling/workflows and include a dry-run mode and non-blocking cleanup steps.

Overview
Adds standalone E2E utility scripts (get-active-orders.ts and cancel-all-orders.ts) that derive an address from PRIVATE_KEY, query open/partially-filled limit orders via SQS, and (optionally) broadcast batched cancel transactions via cosmjs with DRY_RUN support and retry rounds.

Introduces two manually-triggered GitHub Actions workflows (cancel-open-orders.yml and a dry-run variant) that run the cancellation script in parallel across the four E2E/monitoring accounts, and adds pre-test “cancel leftover open orders” cleanup steps (non-blocking via continue-on-error) to existing frontend-e2e-tests.yml and monitoring-limit-geo-e2e-tests.yml jobs.

Updates packages/e2e dependencies to include cosmjs + tsx, and expands packages/e2e/README.md with account/secret mapping and usage docs for the new scripts and workflows.
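The multi-round cancel flow described above can be sketched roughly as follows. This is an illustrative stand-in, not the PR's actual code: cancelAllOrders, fetchOrders, and cancelBatch are hypothetical names, with the real SQS query and cosmjs broadcast injected as callbacks; the 3-round and batch-of-20 defaults mirror the PR's description but are assumptions here.

```typescript
// Hypothetical sketch of the retry-round batch-cancel loop.
type Order = { orderId: number };

export async function cancelAllOrders(
  fetchOrders: () => Promise<Order[]>, // stands in for the SQS active-orders query
  cancelBatch: (batch: Order[]) => Promise<void>, // stands in for the cosmjs broadcast
  opts = { maxRounds: 3, batchSize: 20, dryRun: false }
): Promise<number> {
  let cancelled = 0;
  for (let round = 1; round <= opts.maxRounds; round++) {
    const orders = await fetchOrders();
    if (orders.length === 0) break; // nothing left to cancel
    if (opts.dryRun) {
      // Dry run only previews the first round; no transactions are sent.
      console.log(`DRY_RUN: would cancel ${orders.length} order(s)`);
      return 0;
    }
    // Chunk into batches and broadcast each batch.
    for (let i = 0; i < orders.length; i += opts.batchSize) {
      const batch = orders.slice(i, i + opts.batchSize);
      await cancelBatch(batch);
      cancelled += batch.length;
    }
    // Loop continues: re-fetch to confirm no orders remain, retrying up to maxRounds.
  }
  return cancelled;
}
```

Re-fetching at the top of each round is what makes the retries idempotent: a partially failed batch simply leaves orders behind for the next round.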

Written by Cursor Bugbot for commit 3d1f35b. This will update automatically on new commits.

@vercel

vercel bot commented Feb 18, 2026

The latest updates on your projects.

Project | Deployment | Actions | Updated (UTC)
osmosis-frontend | Ready | Preview, Comment | Feb 20, 2026 8:43pm

4 Skipped Deployments

Project | Deployment | Updated (UTC)
osmosis-frontend-datadog | Ignored | Feb 20, 2026 8:43pm
osmosis-frontend-dev | Ignored | Feb 20, 2026 8:43pm
osmosis-frontend-edgenet | Ignored | Feb 20, 2026 8:43pm
osmosis-testnet | Ignored | Feb 20, 2026 8:43pm


@jasbanza jasbanza force-pushed the jason/e2e-account-limitorder-cleanup branch from a5e4618 to 97736bc on February 18, 2026 16:12
@coderabbitai
Contributor

coderabbitai bot commented Feb 18, 2026

Walkthrough

Adds extensive e2e documentation and three new tools: a read-only active-order lister, a multi-round batch cancel CLI with DRY_RUN, and an order-utils module for wallet derivation, SQS order queries, cancel-message construction, and batching. (No exported public APIs changed.)

Changes

Cohort / File(s) Summary
Documentation
packages/e2e/README.md
Expanded README: environment and .env guidance, CI secrets/account mapping, frontend E2E wallet and token requirements, synthetic geo monitoring notes, and a new "Order Cleanup Scripts" section detailing usage, local/CI workflows, dry-run-first recommendations, and parallel job model.
Order Management CLIs
packages/e2e/scripts/get-active-orders.ts, packages/e2e/scripts/cancel-all-orders.ts
Added CLIs: get-active-orders.ts (read-only; derives address from PRIVATE_KEY and lists active SQS orders) and cancel-all-orders.ts (multi-round batch cancellation with DRY_RUN, batching, re-fetch rounds, exits non-zero on persistent/fatal errors).
Order Utilities Module
packages/e2e/utils/order-utils.ts
New utilities and types: deriveAddress, fetchActiveOrders, createSigningClient, buildCancelMessages, chunk, constants (OSMOSIS_RPC, SQS_BASE_URL, CANCEL_BATCH_SIZE), and SQSActiveOrder interface supporting the CLIs.

Sequence Diagram

sequenceDiagram
    actor CLI as CLI
    participant Wallet as deriveAddress
    participant SQS as SQS_API
    participant RPC as SigningClient
    participant Chain as Blockchain

    CLI->>Wallet: provide PRIVATE_KEY
    Wallet-->>CLI: return signing wallet + address

    loop up to MAX_ROUNDS
        CLI->>SQS: fetch active orders for address
        SQS-->>CLI: return orders

        alt orders exist
            CLI->>CLI: chunk orders into batches
            loop per batch
                CLI->>RPC: build & sign cancel messages
                RPC-->>CLI: signed tx(s)
                alt DRY_RUN
                    CLI-->>CLI: log intended cancellations
                else
                    CLI->>Chain: broadcast tx(s)
                    Chain-->>CLI: tx result
                end
            end
            CLI->>SQS: re-fetch remaining orders
            SQS-->>CLI: updated orders
        else no orders
            CLI-->>CLI: exit loop
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title 'feat(e2e): add limit order cleanup scripts and workflows' clearly and specifically describes the main change—adding new scripts and workflows for cleaning up limit orders in e2e tests.
Description check ✅ Passed The description comprehensively addresses all template sections: purpose explains the problem and solution, Linear Task link is provided, Brief Changelog details all files added/modified, and Testing section includes type-check, local dry-run validation, and CI verification guidance.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
packages/e2e/utils/order-utils.ts (2)

118-127: No validation on hex key length before passing to wallet.

Buffer.from(normalized, "hex") silently ignores trailing non-hex characters and produces a shorter-than-expected buffer for odd-length strings. While DirectSecp256k1Wallet.fromKey will reject invalid key lengths, an explicit check (e.g., normalized.length === 64 and /^[0-9a-fA-F]+$/.test(normalized)) would produce a much clearer error message for misconfigured environments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/e2e/utils/order-utils.ts` around lines 118 - 127, The deriveAddress
function accepts privateKeyHex and currently passes a normalized hex string
straight to Buffer.from which can silently produce wrong-length buffers; before
creating keyBytes and calling DirectSecp256k1Wallet.fromKey, validate that the
normalized string matches /^[0-9a-fA-F]+$/ and has length 64 (32 bytes) and
throw a clear, descriptive error if not (include the received value or its
length), so callers get an explicit message instead of a cryptic wallet error.

143-155: No timeout on the fetch call — CI jobs can hang indefinitely.

fetchActiveOrders uses fetch without a timeout or AbortSignal. If the SQS endpoint is slow or unresponsive, the workflow job will hang until the GitHub Actions job-level timeout kills it. Adding an AbortSignal.timeout() would make failures surface promptly.

Proposed fix
 export async function fetchActiveOrders(
   address: string
 ): Promise<SQSActiveOrder[]> {
   const url = `${SQS_BASE_URL}/passthrough/active-orders?userOsmoAddress=${address}`;
-  const response = await fetch(url);
+  const response = await fetch(url, {
+    signal: AbortSignal.timeout(30_000), // 30s timeout
+  });
   if (!response.ok) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/e2e/utils/order-utils.ts` around lines 143 - 155, fetchActiveOrders
currently calls fetch(url) without any timeout, which can cause CI to hang;
modify fetchActiveOrders to use an AbortSignal with a timeout (e.g.,
AbortSignal.timeout(ms) or an AbortController with setTimeout) when calling
fetch(url, { signal }) so the request is aborted after a reasonable duration;
ensure you clear any timers if using AbortController and propagate or throw a
clear error when the fetch is aborted; update references to SQS_BASE_URL and the
fetch call in fetchActiveOrders so the function returns or throws promptly on
timeout rather than waiting indefinitely.
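A self-contained version of the recommended pattern might look like this. AbortSignal.timeout is a real Web/Node API (Node 17.3+); the wrapper name fetchWithTimeout and the 30-second default are assumptions for illustration.

```typescript
// Hypothetical wrapper: fetch with a hard timeout so CI jobs fail fast
// instead of hanging on an unresponsive endpoint.
export async function fetchWithTimeout(
  url: string,
  timeoutMs = 30_000
): Promise<Response> {
  const response = await fetch(url, {
    // Rejects with a TimeoutError once timeoutMs elapses.
    signal: AbortSignal.timeout(timeoutMs),
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${response.statusText}`);
  }
  return response;
}
```

Unlike a hand-rolled AbortController plus setTimeout, AbortSignal.timeout needs no timer cleanup.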
packages/e2e/scripts/cancel-all-orders.ts (1)

98-112: Dry-run exits after the first round — consider noting this in the log.

In DRY_RUN mode, the script returns after listing orders from the first fetch (line 111). If orders were partially cancelled in a prior real run, the dry-run output reflects current state, which is correct. The behavior is fine, just worth confirming this is intentional — a user might expect all 3 rounds to be simulated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/e2e/scripts/cancel-all-orders.ts` around lines 98 - 112, The dry-run
branch currently returns after listing the first fetched batch (see isDryRun,
orders, batches in cancel-all-orders.ts), which can mislead users who expect all
rounds to be simulated; either (A) remove the early return and iterate through
all batches/rounds to print every order the real run would cancel, or (B) keep
the early return but add a clear log line stating "DRY RUN only displays the
first round of fetched orders — set DRY_RUN=false to perform all rounds" (and
mention DRY_RUN env var) so users know this is intentional; implement one of
these two fixes in the isDryRun block.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/e2e/README.md`:
- Around line 141-144: Update the fenced code block that currently shows the
.env example (containing PRIVATE_KEY and ACCOUNT_LABEL) to include a language
specifier such as "env" (or "ini"/"text") after the opening ``` so markdownlint
MD040 is satisfied; locate the block with the lines
"PRIVATE_KEY=<your-hex-private-key>" and "ACCOUNT_LABEL=Local Test" and change
the opening fence from ``` to ```env (or equivalent).
- Around line 94-104: Update the fenced code block that begins with "=== Active
Orders: Monitoring EU ===" to include a language specifier (use "text" or
"console") so markdownlint stops flagging it; specifically modify the opening
fence from ``` to ```text (or ```console) immediately before the "=== Active
Orders: Monitoring EU ===" sample output and keep the closing ``` unchanged.

In `@packages/e2e/scripts/get-active-orders.ts`:
- Around line 22-26: dotenv.config is being called after importing SQS_BASE_URL
from order-utils, so SQS_BASE_URL is evaluated before .env is loaded; move
dotenv initialization before importing order-utils by either calling
dotenv.config({ path: path.resolve(__dirname, "../.env") }) at the very top of
this file or performing a dynamic import of "../utils/order-utils" (await
import(...)) after dotenv.config; ensure you reference the SQS_BASE_URL constant
and any other module-level values in order-utils only after dotenv.config has
run so .env values like SQS_URL and NEXT_PUBLIC_SIDECAR_BASE_URL are picked up.

In `@packages/e2e/utils/order-utils.ts`:
- Around line 37-54: OSMOSIS_RPC and SQS_BASE_URL are captured at module import
time and won't reflect env vars loaded later; change them to lazy accessors
(e.g., export functions getOsmosisRpc() and getSqsBaseUrl()) that compute the
same fallback logic (preserving the fallback order and the .replace(/\/+$/, "")
on the final URL) so callers get current values at call time, then update usages
in fetchActiveOrders, createSigningClient and the scripts get-active-orders.ts
and cancel-all-orders.ts to call the new getters after dotenv.config() (or
ensure envs are set) rather than relying on the module-level constants.

---

Duplicate comments:
In `@packages/e2e/scripts/cancel-all-orders.ts`:
- Around line 27-40: The file calls dotenv.config() after importing constants
from order-utils (SQS_BASE_URL, OSMOSIS_RPC, etc.), causing those values to be
read before env vars are loaded; move dotenv.config({ path:
path.resolve(__dirname, "../.env") }) to run before the import that references
order-utils so environment variables are populated prior to evaluating
SQS_BASE_URL/OSMOSIS_RPC and other exports (i.e., call dotenv.config() at the
top of the module before importing buildCancelMessages, createSigningClient,
deriveAddress, fetchActiveOrders, chunk, CANCEL_BATCH_SIZE).


@jasbanza
Member Author

jasbanza commented Feb 18, 2026

For reviewers: I've addressed the CodeRabbit and Cursor Bugbot concerns:

TL;DR:
None of these are real concerns for the intended use case. The dotenv one is technically correct but has zero practical impact given how the scripts are actually run.

The main finding is about import order vs dotenv: SQS_BASE_URL and OSMOSIS_RPC are evaluated when the module is imported, but dotenv.config() runs after the import. So if you put SQS_URL=... in .env, it wouldn't be picked up.

In practice, this doesn't matter:

  • In CI: env vars are set by the GitHub Actions env: block before the process starts, so they're available at import time. No issue.
  • Locally: the defaults (sqs.osmosis.zone / rpc.osmosis.zone) are already correct. You'd only hit this if you put a custom SQS_URL or OSMOSIS_RPC override in .env, which is an edge case nobody is likely to do.
  • Shell overrides work fine: $env:SQS_URL="..." is set before the process runs, so it's picked up.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/e2e/README.md`:
- Around line 22-24: The README footnote is misleading: update the note that
currently says `TEST_WALLET_ID_*` "mirror the addresses above" to explicitly
state that `TEST_PRIVATE_KEY_3`'s address is derived at runtime and therefore
there is no static `TEST_WALLET_ID_3` to mirror; clarify that `TEST_WALLET_ID_1`
and `TEST_WALLET_ID_2` (if present) may mirror their respective keys but
`TEST_WALLET_ID_3` is not required/does not apply for the US region job because
the address is computed at runtime from `TEST_PRIVATE_KEY_3`. Ensure the text
references `TEST_PRIVATE_KEY_3` and `TEST_WALLET_ID_*` so readers know which
variables are affected.
- Around line 168-174: The workflow is missing a fe-limit-sg job which causes SG
region open limit orders to not be cleaned up; add a new job named fe-limit-sg
to monitoring-limit-geo-e2e-tests.yml mirroring fe-limit-eu/fe-limit-us, set
needs: fe-trade-sg, include the same "Cancel leftover open orders" step and use
the secret TEST_PRIVATE_KEY_1 for authentication so the cleanup step runs (with
continue-on-error: true as in the other limit jobs).

Contributor

Copilot AI left a comment


Pull request overview

This PR adds comprehensive tooling to clean up accumulated limit orders from E2E test accounts on Osmosis mainnet. The implementation introduces standalone CosmJS-based scripts that can cancel orders directly on-chain without requiring Playwright or browser automation, along with GitHub Actions workflows for automated and manual cleanup.

Changes:

  • New utility module (order-utils.ts) providing reusable functions for address derivation, SQS order fetching, CosmJS client creation, and cancel message building
  • Two CLI scripts: get-active-orders.ts (read-only order listing) and cancel-all-orders.ts (batch cancellation with retry logic and dry-run support)
  • Two new GitHub Actions workflows: a production cancel workflow and a safe dry-run variant for verification, both running all four test accounts in parallel
  • Integration of pre-test cleanup steps into existing E2E workflows (frontend-e2e-tests.yml and monitoring-limit-geo-e2e-tests.yml) using continue-on-error: true to prevent blocking on transient failures
  • Updated dependencies in packages/e2e/package.json: CosmJS packages (0.32.3) and tsx (^4.6.2), consistent with monorepo versions
  • Comprehensive README documentation covering account/secret mapping, script usage, workflow operation, and local setup instructions

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
packages/e2e/utils/order-utils.ts Core utilities: address derivation, SQS API integration, signing client factory, cancel message builders, and batching helper
packages/e2e/scripts/get-active-orders.ts Read-only CLI to list active orders with human-readable output
packages/e2e/scripts/cancel-all-orders.ts Batch cancellation CLI with 3-round retry mechanism and dry-run mode
packages/e2e/package.json Added CosmJS dependencies (0.32.3) and tsx (^4.6.2) for script execution
packages/e2e/README.md Documented CI secret mapping, added script usage guide, workflow instructions, and local setup
.github/workflows/cancel-open-orders.yml Manual workflow to cancel orders for all four test accounts in parallel
.github/workflows/cancel-open-orders-dry-run.yml Safe dry-run variant for testing the cancel logic without sending transactions
.github/workflows/monitoring-limit-geo-e2e-tests.yml Added pre-test cleanup for EU and US limit order tests
.github/workflows/frontend-e2e-tests.yml Added pre-test cleanup for preview trade tests


Comment on lines +120 to +125
for (const [batchIndex, batch] of batches.entries()) {
  const batchStart = batchIndex * CANCEL_BATCH_SIZE + 1;
  const batchEnd = batchStart + batch.length - 1;

  try {
    const result = await client!.executeMultiple(address, batch, "auto");

Copilot AI Feb 19, 2026


Using the non-null assertion operator (client!) here assumes that client is never null at this point. While this is currently true because the dry-run path returns early at line 111, this creates a subtle coupling between the dry-run logic and the batch processing logic that could be fragile if the code is refactored. Consider restructuring to avoid the non-null assertion, for example by checking if (!client) throw new Error('Signing client not initialized') before the batch loop, or by extracting the cancel logic into a separate function that takes a non-null client parameter.

Suggested change
-for (const [batchIndex, batch] of batches.entries()) {
-  const batchStart = batchIndex * CANCEL_BATCH_SIZE + 1;
-  const batchEnd = batchStart + batch.length - 1;
-  try {
-    const result = await client!.executeMultiple(address, batch, "auto");
+if (!client) {
+  throw new Error("Signing client not initialized");
+}
+for (const [batchIndex, batch] of batches.entries()) {
+  const batchStart = batchIndex * CANCEL_BATCH_SIZE + 1;
+  const batchEnd = batchStart + batch.length - 1;
+  try {
+    const result = await client.executeMultiple(address, batch, "auto");

Comment on lines +220 to +226
export function chunk<T>(arr: T[], size: number): T[][] {
const result: T[][] = [];
for (let i = 0; i < arr.length; i += size) {
result.push(arr.slice(i, i + size));
}
return result;
}

Copilot AI Feb 19, 2026


The chunk function does not validate that the size parameter is positive. If size is 0 or negative, the function would infinite loop. While CANCEL_BATCH_SIZE is currently a const set to 20, if this utility function is intended to be reusable, consider adding a guard: if (size <= 0) throw new Error('Chunk size must be positive'). Alternatively, add a JSDoc comment noting this precondition.
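For reference, here is the chunk helper with the suggested guard added; apart from the guard, it is identical to the version quoted above.

```typescript
// chunk with the positive-size precondition made explicit, per the
// review suggestion. Behavior for valid sizes is unchanged.
export function chunk<T>(arr: T[], size: number): T[][] {
  if (size <= 0) {
    throw new Error("Chunk size must be positive");
  }
  const result: T[][] = [];
  for (let i = 0; i < arr.length; i += size) {
    result.push(arr.slice(i, i + size));
  }
  return result;
}
```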

wallet: DirectSecp256k1Wallet;
address: string;
}> {
const normalized = privateKeyHex.replace(/^0x/, "");

Copilot AI Feb 19, 2026


The deriveAddress function does not validate that the private key string is valid hex or that it is the correct length (64 hex characters for a 32-byte secp256k1 private key). Invalid input could result in cryptic errors from Buffer.from or DirectSecp256k1Wallet.fromKey. Consider adding validation to provide clearer error messages, for example checking that normalized.length is 64 and that it contains only valid hex characters before attempting to create the Uint8Array.

Suggested change
-const normalized = privateKeyHex.replace(/^0x/, "");
+if (!privateKeyHex || !privateKeyHex.trim()) {
+  throw new Error("Private key is required");
+}
+const normalized = privateKeyHex.replace(/^0x/i, "");
+if (normalized.length !== 64) {
+  throw new Error(
+    `Invalid private key length: expected 64 hex characters (32 bytes), got ${normalized.length}`
+  );
+}
+if (!/^[0-9a-fA-F]+$/.test(normalized)) {
+  throw new Error(
+    "Invalid private key format: expected only hexadecimal characters 0-9, a-f"
+  );
+}

Comment on lines +3 to +12
permissions:
  contents: read

# Dry run version of cancel-open-orders.yml.
# No transactions are sent — this only logs what orders exist and what would be cancelled.
# Safe to run at any time. Use this to verify the scripts work correctly on GitHub Actions
# before triggering the real cancel workflow.

on:
  workflow_dispatch:

Copilot AI Feb 19, 2026


The permissions block should be placed after the on trigger, not before it. In GitHub Actions workflow syntax, the correct order is: name, on, permissions, then jobs. This workflow has permissions declared before on, which deviates from the standard structure and could cause parsing issues in some contexts. Move the permissions block to after the on block for consistency with GitHub Actions best practices and the structure used in cancel-open-orders.yml.

@jasbanza jasbanza requested review from JohnnyWyles and JoseRFelix and removed request for JohnnyWyles and JoseRFelix February 19, 2026 19:57
@jasbanza jasbanza marked this pull request as draft February 19, 2026 20:00
@jasbanza
Member Author

R4R @JohnnyWyles @JoseRFelix: I've checked all the latest bot comments with the latest Opus, and they are not legitimate concerns. This is basically just enhanced e2e test tooling as part of an overall optimization effort.

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 9 out of 9 changed files in this pull request and generated 1 comment.



@JoseRFelix
Collaborator

@jasbanza Thanks for the PR! Quick heads-up, the tests are currently failing. Could you take a look?

@@ -0,0 +1,134 @@
name: Cancel Open Limit Orders (E2E Test Accounts)
Collaborator


To improve readability and maintainability in this file, I suggest adding a matrix strategy.

name: Cancel Open Limit Orders (E2E Test Accounts)

# Manually triggered workflow to cancel all open limit orders for every e2e test account.
#
# Use this when monitoring tests have accumulated uncancelled open orders.
# Each matrix entry runs in parallel, one per test account.
#
# To run a dry run (no transactions sent), set DRY_RUN to "true" in the job env below.
#
# For future reuse: this job can be added as a `needs:` precursor before
# limit order test jobs in monitoring-e2e-tests.yml or monitoring-limit-geo-e2e-tests.yml.

on:
  workflow_dispatch:

permissions:
  contents: read

jobs:
  cancel-limit-orders:
    name: Cancel orders — ${{ matrix.account_label }}
    timeout-minutes: 10
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        include:
          - account_label: "E2E Test Account"
            private_key_secret: "TEST_PRIVATE_KEY"
          - account_label: "Monitoring SG"
            private_key_secret: "TEST_PRIVATE_KEY_1"
          - account_label: "Monitoring EU"
            private_key_secret: "TEST_PRIVATE_KEY_2"
          - account_label: "Monitoring US"
            private_key_secret: "TEST_PRIVATE_KEY_3"

    steps:
      - name: Check out repository
        uses: actions/checkout@v4
        with:
          sparse-checkout: |
            packages/e2e

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 22.x

      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: "**/node_modules"
          key: ${{ runner.OS }}-22.x-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.OS }}-22.x-

      - name: Install dependencies
        run: yarn --cwd packages/e2e install --frozen-lockfile

      - name: Cancel open limit orders
        env:
          # Optional: DRY_RUN: "true"
          PRIVATE_KEY: ${{ secrets[matrix.private_key_secret] }}
          ACCOUNT_LABEL: ${{ matrix.account_label }}
        run: npx tsx packages/e2e/scripts/cancel-all-orders.ts

Member Author


Cool, didn't know about that 🙏 makes sense, I'll change it


Collaborator


Thanks! I noticed these changes haven’t been pushed yet. Is there a reason for that?

        run: yarn --cwd packages/e2e install --frozen-lockfile

      - name: Dry run — list and preview cancel
        env:
          PRIVATE_KEY: ${{ secrets.TEST_PRIVATE_KEY_3 }}
Collaborator


Could you clarify why we need three private keys? If it’s because we have one per region, the key names should reflect that like TEST_PRIVATE_KEY_US.

Member Author


@JoseRFelix to answer the above:

Separate wallets per region prevent concurrent jobs from interfering with each other. The geo monitoring workflow runs SG, EU, and US swap jobs in parallel every hour. If they shared a single wallet:

  1. Nonce/sequence conflicts — On-chain transactions require sequential account nonces. Three parallel jobs broadcasting from the same account would race on the nonce, causing "account sequence mismatch" failures.
  2. Balance interference — One job buys OSMO with USDC while another does the same, so the second finds less USDC than expected.
  3. Limit order contamination — One region's leftover orders would leak into another region's test assertions.

In short, separate wallets = no cross-contamination when jobs run concurrently. It's the simplest way to avoid flaky tests caused by shared mutable on-chain state.

The numbered keys map 1:1 to regions:

  • TEST_PRIVATE_KEY — Frontend E2E tests (no geo region)
  • TEST_PRIVATE_KEY_1 — SG (swap, trade, claim)
  • TEST_PRIVATE_KEY_2 — EU (swap, trade, limit, claim)
  • TEST_PRIVATE_KEY_3 — US (swap, trade, limit)

Renaming to TEST_PRIVATE_KEY_SG / _EU / _US would be more self-documenting, but it touches the GitHub repo secrets plus ~15 references across monitoring-limit-geo-e2e-tests.yml, monitoring-claim-orders.yml, and the cancel workflows.

If the rename is worth doing at this point, we should open a separate Linear issue to rename the secret vars and document the naming convention. What do you think? 🤔

Collaborator

Ah, I see this is a legacy debt. Thank you for the context. Yes, let's create a Linear ticket for this.

Member Author

@jasbanza Thanks for the PR! Quick heads-up, the tests are currently failing. Could you take a look?

@JoseRFelix I'll look into the e2e failures today. As for the Jest failures, I've addressed those in another PR.

Member Author
@jasbanza Thanks for the PR! Quick heads-up, the tests are currently failing. Could you take a look?

@JoseRFelix I found a possible cause; I just need to test it locally: #4278

jasbanza and others added 5 commits February 20, 2026 22:36
Add reusable utilities and CLI scripts to cancel open limit orders for
e2e test accounts directly on-chain via cosmjs, replacing the need for
slow Playwright UI automation.

New files:
- utils/order-utils.ts: reusable module for address derivation, order
  fetching (SQS API), signing client, and cancel message building
- scripts/cancel-all-orders.ts: CLI to batch-cancel all open orders
  with retry logic (3 rounds) and DRY_RUN support
- scripts/get-active-orders.ts: read-only CLI to list active orders
- cancel-open-orders.yml: manual workflow to cancel orders for all
  four test accounts in parallel
- cancel-open-orders-dry-run.yml: safe dry-run variant for testing

Updated:
- package.json: add cosmjs and tsx dependencies
- README.md: correct account mapping (add TEST_PRIVATE_KEY_3), document
  new scripts, workflows, and local setup instructions

Co-authored-by: Cursor <cursoragent@cursor.com>
…ain permissions

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
…ain permissions

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
- Remove artifact upload steps from cancel-open-orders.yml that
  referenced nonexistent playwright-report directory
- Add missing language specifiers to README fenced code blocks
- Add pre-test order cleanup steps to monitoring and frontend
  e2e workflows

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
@jasbanza jasbanza force-pushed the jason/e2e-account-limitorder-cleanup branch from b47c6a8 to 3d1f35b on February 20, 2026 20:36
@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 2 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

return;
}

totalFoundAcrossRounds += orders.length;

Retry counter double-counts orders across rounds

Low Severity

totalFoundAcrossRounds accumulates orders.length from every retry round, but orders that failed cancellation in one round reappear in the next. This causes double-counting, making the summary message (e.g., "5 of 7 successfully cancelled") misleading — all unique orders may have been cancelled, yet the denominator is inflated. The counter needs to track unique orders only, for example by recording the count from round 1 alone.


          PRIVATE_KEY: ${{ secrets.TEST_PRIVATE_KEY_3 }}
          ACCOUNT_LABEL: "Monitoring US"
          DRY_RUN: "true"
        run: npx tsx packages/e2e/scripts/cancel-all-orders.ts

Near-identical workflow files create maintenance burden

Low Severity

cancel-open-orders-dry-run.yml and cancel-open-orders.yml are ~130-line near-duplicates differing only in the workflow name and the presence of DRY_RUN: "true". A single workflow with a workflow_dispatch input (or a matrix strategy as the reviewer suggested) would eliminate the duplication and the risk of the two files drifting out of sync.
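A merged workflow could look roughly like the following sketch, assuming a `dry_run` boolean input and a matrix over the account secrets (job names and labels here are illustrative, taken from the thread above, not from the actual files):

```yaml
name: Cancel open orders
on:
  workflow_dispatch:
    inputs:
      dry_run:
        description: "Preview cancellations without broadcasting transactions"
        type: boolean
        default: true
jobs:
  cancel:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - { key: TEST_PRIVATE_KEY,   label: "Frontend E2E" }
          - { key: TEST_PRIVATE_KEY_1, label: "Monitoring SG" }
          - { key: TEST_PRIVATE_KEY_2, label: "Monitoring EU" }
          - { key: TEST_PRIVATE_KEY_3, label: "Monitoring US" }
    steps:
      - uses: actions/checkout@v4
      - run: yarn --cwd packages/e2e install --frozen-lockfile
      - env:
          PRIVATE_KEY: ${{ secrets[matrix.key] }}
          ACCOUNT_LABEL: ${{ matrix.label }}
          DRY_RUN: ${{ inputs.dry_run && 'true' || 'false' }}
        run: npx tsx packages/e2e/scripts/cancel-all-orders.ts
```

Dynamic secret lookup via `secrets[matrix.key]` keeps the matrix to one job definition, so the live and dry-run behaviors cannot drift apart.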


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
packages/e2e/scripts/cancel-all-orders.ts (1)

71-71: Optional: eliminate the non-null assertion on client.

client! is safe here (the isDryRun early-return at line 114 guarantees non-null by line 128), but the assertion is easy to break if the flow is ever refactored. Narrowing the type explicitly is more resilient.

♻️ Proposed refactor
-  const client = isDryRun ? null : await createSigningClient(wallet);
+  const client = isDryRun ? null : await createSigningClient(wallet);   // keep as-is
   ...
-        const result = await client!.executeMultiple(address, batch, "auto");
+        if (!client) throw new Error("Signing client unexpectedly null in non-dry-run mode");
+        const result = await client.executeMultiple(address, batch, "auto");

Also applies to: 128-128

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/e2e/scripts/cancel-all-orders.ts` at line 71, The code uses a
non-null assertion on client (client!) which is fragile; replace the assertion
by narrowing the type after the early-return: when isDryRun is false explicitly
assign client as the awaited createSigningClient(wallet) with a non-null type or
add a guarded branch that returns early if isDryRun, then use client as a
definite non-null variable; update occurrences around the createSigningClient
call and later usage (the client variable in cancel-all-orders script) so you no
longer rely on client! but instead ensure client is typed/checked as non-null
via control flow or an explicit typed assignment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/e2e/scripts/cancel-all-orders.ts`:
- Around line 73-74: totalFoundAcrossRounds is being incremented every retry
round causing duplicate counting of the same orders; instead capture the initial
unique total once and use that as the denominator. Update the cancel-all-orders
logic to record a single initialTotalFound (e.g., when you first fetch orders in
the routine that uses totalFoundAcrossRounds) and stop adding subsequent round
fetch counts to it, while continuing to accumulate totalCancelledAcrossRounds
across rounds; reference the variables
totalFoundAcrossRounds/totalCancelledAcrossRounds and the function/block that
loops over retry rounds to locate where to set initialTotalFound and replace
subsequent increments of totalFoundAcrossRounds with no-op or dedup logic.
- Around line 27-40: dotenv.config() is called after importing module-scope
constants from order-utils, so environment overrides in .env for OSMOSIS_RPC and
SQS_BASE_URL are ignored; fix by moving dotenv.config() to run before those
imports — either replace the static import with a dynamic import() of
"../utils/order-utils" after calling dotenv.config(), or, if you prefer the
lightweight mitigation, add a clear inline comment above the order-utils import
referencing OSMOSIS_RPC and SQS_BASE_URL to state that they are evaluated at
module load and .env values will not override them (mention dotenv.config()
timing) so future maintainers know the limitation.

---

Duplicate comments:
In `@packages/e2e/README.md`:
- Around line 168-174: The README table and the
monitoring-limit-geo-e2e-tests.yml workflow are missing the cleanup job for the
Singapore test account: add a row to the table for
`monitoring-limit-geo-e2e-tests.yml` with job `fe-limit-sg` and account
`TEST_PRIVATE_KEY_1` (SG region), and update
`monitoring-limit-geo-e2e-tests.yml` to include a `fe-limit-sg` job that runs
the same pre-test cleanup steps (with continue-on-error: true) used by
`fe-limit-eu`/`fe-limit-us` so SG's open orders are cleared before tests run.
Ensure job name `fe-limit-sg` and secret `TEST_PRIVATE_KEY_1` match the existing
pattern.
- Around line 22-24: The PR contains a duplicate reviewer note about the
`TEST_WALLET_ID_*` footnote; remove the redundant comment so the README only
contains the single intended footnote stating that `TEST_WALLET_ID_*` are not
required (addresses are derived from the private key), or consolidate both notes
into one clear sentence. Locate the README entry referencing `TEST_WALLET_ID_*`
and delete the duplicate reviewer text (or merge it) so the footnote appears
exactly once and the wording remains as the author intended.

---

Nitpick comments:
In `@packages/e2e/scripts/cancel-all-orders.ts`:
- Line 71: The code uses a non-null assertion on client (client!) which is
fragile; replace the assertion by narrowing the type after the early-return:
when isDryRun is false explicitly assign client as the awaited
createSigningClient(wallet) with a non-null type or add a guarded branch that
returns early if isDryRun, then use client as a definite non-null variable;
update occurrences around the createSigningClient call and later usage (the
client variable in cancel-all-orders script) so you no longer rely on client!
but instead ensure client is typed/checked as non-null via control flow or an
explicit typed assignment.

Comment on lines +27 to +40
import * as dotenv from "dotenv";
import * as path from "path";
import {
CANCEL_BATCH_SIZE,
OSMOSIS_RPC,
SQS_BASE_URL,
buildCancelMessages,
chunk,
createSigningClient,
deriveAddress,
fetchActiveOrders,
} from "../utils/order-utils";

dotenv.config({ path: path.resolve(__dirname, "../.env") });
Contributor

⚠️ Potential issue | 🟡 Minor

.env cannot override OSMOSIS_RPC / SQS_BASE_URL: dotenv.config() fires too late.

OSMOSIS_RPC and SQS_BASE_URL are module-scope constants in order-utils.ts. Node evaluates them when the import block (lines 29-38) is resolved, which is before dotenv.config() on line 40 runs. Any values set for those variables in .env are silently ignored.

PRIVATE_KEY, DRY_RUN, and ACCOUNT_LABEL are read inside main() and are unaffected — they work correctly from .env.

The fix is to call dotenv.config() before the order-utils import (requires converting the import to a dynamic import()) or to document the limitation with an inline comment so future maintainers don't waste time adding endpoint overrides to .env.
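The import-timing pitfall can be demonstrated in one self-contained file (the variable name DEMO_SQS_BASE_URL is made up for the demo; it just stands in for a module-scope constant like those in order-utils.ts):

```typescript
// A module-scope constant captures process.env at evaluation time, exactly as
// OSMOSIS_RPC / SQS_BASE_URL do when order-utils.ts is imported.
const SQS_BASE_URL = process.env.DEMO_SQS_BASE_URL ?? "https://sqs.osmosis.zone";

// ...later, the equivalent of dotenv.config() populating process.env:
process.env.DEMO_SQS_BASE_URL = "https://override.example.com";

// The constant was already evaluated, so the later assignment is ignored.
console.log(SQS_BASE_URL); // "https://sqs.osmosis.zone"
```

Only values present in the process environment before the module is evaluated (e.g. shell exports) take effect, which is why a dynamic import() after dotenv.config() is the reliable fix.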

💡 Lightweight mitigation — add a clarifying comment
+// NOTE: dotenv.config() is intentionally called before reading PRIVATE_KEY/DRY_RUN
+// inside main(). However, OSMOSIS_RPC and SQS_BASE_URL are module-scope constants
+// in order-utils.ts and are evaluated at import time — they cannot be overridden
+// via .env. Use shell exports (e.g. OSMOSIS_RPC=... npx tsx ...) to override them.
 import * as dotenv from "dotenv";
 import * as path from "path";
 import {
   ...
 } from "../utils/order-utils";

 dotenv.config({ path: path.resolve(__dirname, "../.env") });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import * as dotenv from "dotenv";
import * as path from "path";
import {
CANCEL_BATCH_SIZE,
OSMOSIS_RPC,
SQS_BASE_URL,
buildCancelMessages,
chunk,
createSigningClient,
deriveAddress,
fetchActiveOrders,
} from "../utils/order-utils";
dotenv.config({ path: path.resolve(__dirname, "../.env") });
// NOTE: dotenv.config() is intentionally called before reading PRIVATE_KEY/DRY_RUN
// inside main(). However, OSMOSIS_RPC and SQS_BASE_URL are module-scope constants
// in order-utils.ts and are evaluated at import time — they cannot be overridden
// via .env. Use shell exports (e.g. OSMOSIS_RPC=... npx tsx ...) to override them.
import * as dotenv from "dotenv";
import * as path from "path";
import {
CANCEL_BATCH_SIZE,
OSMOSIS_RPC,
SQS_BASE_URL,
buildCancelMessages,
chunk,
createSigningClient,
deriveAddress,
fetchActiveOrders,
} from "../utils/order-utils";
dotenv.config({ path: path.resolve(__dirname, "../.env") });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/e2e/scripts/cancel-all-orders.ts` around lines 27 - 40,
dotenv.config() is called after importing module-scope constants from
order-utils, so environment overrides in .env for OSMOSIS_RPC and SQS_BASE_URL
are ignored; fix by moving dotenv.config() to run before those imports — either
replace the static import with a dynamic import() of "../utils/order-utils"
after calling dotenv.config(), or, if you prefer the lightweight mitigation, add
a clear inline comment above the order-utils import referencing OSMOSIS_RPC and
SQS_BASE_URL to state that they are evaluated at module load and .env values
will not override them (mention dotenv.config() timing) so future maintainers
know the limitation.

Comment on lines +73 to +74
let totalCancelledAcrossRounds = 0;
let totalFoundAcrossRounds = 0;
Contributor

⚠️ Potential issue | 🟡 Minor

totalFoundAcrossRounds double-counts orders that persist across retry rounds.

Orders that survive round N are re-fetched and counted again in round N+1, inflating the denominator. Example: round 1 finds 10 orders, 8 succeed, round 2 finds the same 2 remaining → totalFoundAcrossRounds becomes 12, producing the misleading message "10 of 12 successfully cancelled" even though only 10 unique orders ever existed.
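That inflation can be reproduced with a standalone simulation of the retry loop (hypothetical numbers matching the example above; this mirrors, but is not, the actual script):

```typescript
// Simulate: 10 open orders, 8 cancel in round 1, the 2 survivors are
// re-fetched — and re-counted — in round 2.
let remaining = 10;
let totalCancelledAcrossRounds = 0;
let totalFoundAcrossRounds = 0;

for (let round = 1; round <= 3 && remaining > 0; round++) {
  const found = remaining;                    // fetchActiveOrders() result
  totalFoundAcrossRounds += found;            // BUG: survivors counted twice
  const cancelled = round === 1 ? 8 : found;  // round 1 partially fails
  totalCancelledAcrossRounds += cancelled;
  remaining -= cancelled;
}

const summary = `${totalCancelledAcrossRounds} of ${totalFoundAcrossRounds} successfully cancelled`;
// "10 of 12 successfully cancelled", though only 10 unique orders ever existed
```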

🐛 Proposed fix — capture the initial count once
-  let totalCancelledAcrossRounds = 0;
-  let totalFoundAcrossRounds = 0;
+  let totalCancelledAcrossRounds = 0;
+  let totalFoundOnFirstRound = 0;

   for (let round = 1; round <= MAX_ROUNDS; round++) {
     ...
     const orders = await fetchActiveOrders(address);
     ...
-    totalFoundAcrossRounds += orders.length;
+    if (round === 1) totalFoundOnFirstRound = orders.length;
     ...
     // replace all references to totalFoundAcrossRounds with totalFoundOnFirstRound

Also applies to: 93-93

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/e2e/scripts/cancel-all-orders.ts` around lines 73 - 74,
totalFoundAcrossRounds is being incremented every retry round causing duplicate
counting of the same orders; instead capture the initial unique total once and
use that as the denominator. Update the cancel-all-orders logic to record a
single initialTotalFound (e.g., when you first fetch orders in the routine that
uses totalFoundAcrossRounds) and stop adding subsequent round fetch counts to
it, while continuing to accumulate totalCancelledAcrossRounds across rounds;
reference the variables totalFoundAcrossRounds/totalCancelledAcrossRounds and
the function/block that loops over retry rounds to locate where to set
initialTotalFound and replace subsequent increments of totalFoundAcrossRounds
with no-op or dedup logic.
