
perf(postgres:subaccounts): add partial indices and optimize query fo…#3318

Open
UnbornAztecKing wants to merge 15 commits into main from sql-transfers-ids-performance

Conversation

@UnbornAztecKing
Contributor

@UnbornAztecKing UnbornAztecKing commented Jan 16, 2026

Summary

  • Replace full-table indexes on transfers with partial indexes that exclude NULL subaccount IDs to reduce index size and improve write performance.
  • Rewrite getSubaccountsWithTransfers query from IN (UNION) pattern to EXISTS pattern for better query planning and performance.
  • Add migration to drop old indexes and create new partial indexes on sender/recipient subaccount ID + height columns.
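As a sketch, the index changes correspond to DDL along these lines (index and column names are taken from the schema notes below; the exact statements, including CONCURRENTLY, live in the migration file):

```sql
-- Drop the old full-table composite indexes (idempotent for safe retries)
DROP INDEX IF EXISTS "transfers_sendersubaccountid_createdatheight_index";
DROP INDEX IF EXISTS "transfers_recipientsubaccountid_createdatheight_index";

-- Recreate them as partial indexes that skip rows with NULL subaccount IDs
CREATE INDEX IF NOT EXISTS transfers_sender_id_height_nn
  ON transfers ("senderSubaccountId", "createdAtHeight")
  WHERE "senderSubaccountId" IS NOT NULL;

CREATE INDEX IF NOT EXISTS transfers_recipient_id_height_nn
  ON transfers ("recipientSubaccountId", "createdAtHeight")
  WHERE "recipientSubaccountId" IS NOT NULL;
```

Because NULL subaccount IDs never match the optimized queries, excluding them shrinks the indexes and avoids index maintenance on rows that could never be returned.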

Details

Database schema

  • New partial indexes: transfers_sender_id_height_nn and transfers_recipient_id_height_nn on (senderSubaccountId, createdAtHeight) and (recipientSubaccountId, createdAtHeight) with WHERE ... IS NOT NULL predicates.
  • Dropped indexes: transfers_sendersubaccountid_createdatheight_index and transfers_recipientsubaccountid_createdatheight_index (full-table versions).
  • Migration runs with transaction: false to avoid long-held locks during index rebuild.

Query optimization

  • Old pattern: WHERE id IN (SELECT senderSubaccountId ... UNION SELECT recipientSubaccountId ...)
  • New pattern: WHERE EXISTS (SELECT 1 ... senderSubaccountId = s.id) OR EXISTS (SELECT 1 ... recipientSubaccountId = s.id)
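Spelled out as full SQL, the rewrite looks roughly like this (the `createdAtHeight` filter from the real query is omitted for brevity; `s` aliases `subaccounts`):

```sql
-- Old: materialize every transfer subaccount ID, then probe the IN list
SELECT s.* FROM subaccounts s
WHERE s.id IN (
  SELECT "senderSubaccountId" FROM transfers
  UNION
  SELECT "recipientSubaccountId" FROM transfers
);

-- New: semi-joins that can stop at the first matching transfer per subaccount
SELECT s.* FROM subaccounts s
WHERE EXISTS (
  SELECT 1 FROM transfers t WHERE t."senderSubaccountId" = s.id
)
OR EXISTS (
  SELECT 1 FROM transfers t WHERE t."recipientSubaccountId" = s.id
);
```

The EXISTS form lets the planner probe the new partial indexes directly and short-circuit per subaccount, instead of building and deduplicating a full UNION result first.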

Risk & Impact

  • 90% latency reduction observed in testing.
  • No breaking changes. Schema change is backward-compatible for queries. Down migration restores original indexes.
  • Index creation may briefly increase database load; non-transactional migration minimizes lock duration but requires DROP IF EXISTS / CREATE IF NOT EXISTS idempotency for safe retries.

Testing

  • No new tests added; this is a database optimization.
  • Existing integration tests for getSubaccountsWithTransfers cover functional correctness.

Summary by CodeRabbit

  • Bug Fixes

    • Improved null handling in transfer lookups for more accurate results.
  • Refactor

    • Replaced two full indices with partial indices to reduce unnecessary indexing and improve performance.
    • Queries converted to parameterized bindings and many now run inside explicit transactions for safer DB operations.
    • Broadened query binding types to accept additional formats.
    • Transaction memory setting adjusted and some reads switched to primary for consistency.
  • Tests

    • Stabilized tests by enforcing deterministic insertion order and clarifying expectations.
  • Style

    • Minor SQL and typing cleanups for clarity and consistency.


@UnbornAztecKing UnbornAztecKing requested a review from a team as a code owner January 16, 2026 17:36
@coderabbitai
Contributor

coderabbitai bot commented Jan 16, 2026

📝 Walkthrough

Walkthrough

Adds a migration creating partial NOT NULL indices on transfers; converts multiple SQL queries to use parameterized bindings and EXISTS/UNION ALL patterns; wraps certain DB queries in transactions with SET LOCAL work_mem; widens binding types; serializes test inserts; and changes one helper to read from primary DB.

Changes

• Database migration — indexer/packages/postgres/src/db/migrations/migration_files/20260116114440_create_indices_transfers_not_null.ts
  New Knex migration: replaces non-partial composite indices on (senderSubaccountId, createdAtHeight) and (recipientSubaccountId, createdAtHeight) with partial indices using IS NOT NULL; includes a down migration; config = { transaction: false }.
• Subaccount queries — indexer/packages/postgres/src/stores/subaccount-table.ts
  Replaced IN/UNION with two EXISTS clauses OR'd together, switched the inline height to the named binding :createdBeforeOrAtHeight::bigint, introduced a transaction scope with SET LOCAL work_mem, added tx-scoped options and bindings, and adjusted the parent-subaccount filtering logic.
• Transfer queries — indexer/packages/postgres/src/stores/transfer-table.ts
  Replaced interpolated SQL with parameterized rawQuery bindings across multiple functions (the getNetTransfers* variants), and switched some UNION patterns to UNION ALL with minor formatting and alias tweaks.
• Type widening — indexer/packages/postgres/src/types/utility-types.ts
  Imported ValueDict and widened Options.bindings to `readonly RawBinding[] | ValueDict`.
• Transaction helpers — indexer/services/ender/src/lib/on-message.ts
  Adds a rawQuery call to run SET LOCAL work_mem = '128MB' inside started transactions.
• Service helper change — indexer/services/roundtable/src/helpers/pnl-ticks-helper.ts
  Switched the BlockTable.getLatest call to readReplica: false (use the primary DB instead of a read replica).
• Tests — indexer/packages/postgres/__tests__/stores/funding-payments-table.test.ts
  Replaced concurrent Promise.all insertions with sequential await insertions in two tests and added comments about insertion order.
• Minor formatting / SQL fixes — indexer/packages/postgres/src/stores/candle-table.ts, indexer/packages/postgres/src/stores/fill-table.ts
  Small whitespace and SQL syntax fixes (CASE END alias, trailing spaces removed); no functional API changes.
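The work_mem adjustment mentioned above follows the standard Postgres pattern; a minimal sketch of how SET LOCAL behaves (the setting reverts automatically when the transaction ends):

```sql
BEGIN;
SET LOCAL work_mem = '128MB';  -- applies only inside this transaction
-- ... run the memory-hungry sort/hash query here ...
COMMIT;                        -- work_mem reverts to its previous value
```

Scoping the increase to a single transaction avoids raising memory pressure for every connection the way a global work_mem change would.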

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

indexer-postgres-improvement

Suggested reviewers

  • shrenujb
  • tqin7

Poem

🐰 I hopped through rows and nudged an index bright,
Bound heights, set work_mem, and fixed a CASE just right.
Tests now wait their turn, queries bind and hum,
A tidy rabbit patch — the database goes thrum.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 42.86%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Title check ✅ Passed — The PR title clearly identifies the main optimization: adding partial indices to the transfers table and optimizing related queries for subaccounts performance.
  • Description check ✅ Passed — The PR description comprehensively covers the changelist with database schema changes and query optimizations, the test plan (existing tests), and a risk/impact assessment. All critical information is present.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@indexer/packages/postgres/src/stores/subaccount-table.ts`:
- Around line 110-124: The SQL currently interpolates createdBeforeOrAtHeight
directly into queryString; change the query to use parameter placeholders (e.g.,
replace '${createdBeforeOrAtHeight}' with ? in both EXISTS clauses) and pass
createdBeforeOrAtHeight via the rawQuery bindings (ensure options.bindings is an
array and include the value twice to match the two placeholders). Update the
code that calls rawQuery so it supplies options.bindings =
[createdBeforeOrAtHeight, createdBeforeOrAtHeight] (or append if bindings exist)
so knex.raw() receives the parameterized values.
🧹 Nitpick comments (1)
indexer/packages/postgres/src/db/migrations/migration_files/20260116114440_create_indices_transfers_not_null.ts (1)

4-16: Consider CONCURRENTLY to avoid blocking writes on large tables.

Given transaction: false, this migration is eligible for DROP INDEX CONCURRENTLY / CREATE INDEX CONCURRENTLY to reduce lock impact. Note: CONCURRENTLY requires each statement to be executed separately (not in a single multi-statement knex.raw).
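With transaction: false, each statement can be issued on its own, which is exactly what CONCURRENTLY requires; a sketch of one sender-side pair under that constraint:

```sql
-- Each statement must run separately, outside any transaction block
DROP INDEX CONCURRENTLY IF EXISTS "transfers_sendersubaccountid_createdatheight_index";

CREATE INDEX CONCURRENTLY IF NOT EXISTS transfers_sender_id_height_nn
  ON transfers ("senderSubaccountId", "createdAtHeight")
  WHERE "senderSubaccountId" IS NOT NULL;
```

CONCURRENTLY builds the index without taking a write-blocking lock, at the cost of a slower build and the need for IF EXISTS / IF NOT EXISTS idempotency if a build fails partway and must be retried.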

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 19d46af and 56b63ab.

📒 Files selected for processing (2)
  • indexer/packages/postgres/src/db/migrations/migration_files/20260116114440_create_indices_transfers_not_null.ts
  • indexer/packages/postgres/src/stores/subaccount-table.ts
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: anmolagrawal345
Repo: dydxprotocol/v4-chain PR: 2815
File: indexer/packages/postgres/src/db/migrations/migration_files/20250423144330_add_twap_fields_to_orders_and_fills_table.ts:40-71
Timestamp: 2025-08-13T14:27:43.370Z
Learning: The `formatAlterTableEnumSql` helper function in the dydxprotocol/v4-chain indexer uses PostgreSQL's `NOT VALID` clause when creating CHECK constraints, which means the constraints only apply to new data and do not validate existing rows. This allows enum-like constraint updates to proceed safely without failing on existing data that might not conform to the new constraint values.
Learnt from: anmolagrawal345
Repo: dydxprotocol/v4-chain PR: 2815
File: indexer/packages/postgres/src/db/migrations/migration_files/20250428122430_add_twap_types_to_order_table.ts:0-0
Timestamp: 2025-08-11T15:03:01.902Z
Learning: The dydxprotocol/v4-chain indexer uses CHECK constraints (not PostgreSQL native enum types) for enum-like columns in the orders table. The `formatAlterTableEnumSql` helper function in the indexer/packages/postgres migrations drops and recreates CHECK constraints with the pattern `${tableName}_${columnName}_check` to enforce allowed values on text columns.
📚 Learning: 2024-11-15T16:00:11.304Z
Learnt from: hwray
Repo: dydxprotocol/v4-chain PR: 2551
File: protocol/x/subaccounts/keeper/subaccount.go:852-865
Timestamp: 2024-11-15T16:00:11.304Z
Learning: The function `GetCrossInsuranceFundBalance` in `protocol/x/subaccounts/keeper/subaccount.go` already existed and was just moved in this PR; changes to its error handling may be out of scope.

Applied to files:

  • indexer/packages/postgres/src/stores/subaccount-table.ts
📚 Learning: 2024-11-15T15:59:28.095Z
Learnt from: hwray
Repo: dydxprotocol/v4-chain PR: 2551
File: protocol/x/subaccounts/keeper/subaccount.go:833-850
Timestamp: 2024-11-15T15:59:28.095Z
Learning: The function `GetInsuranceFundBalance` in `protocol/x/subaccounts/keeper/subaccount.go` already existed and was just moved in this PR; changes to its error handling may be out of scope.

Applied to files:

  • indexer/packages/postgres/src/stores/subaccount-table.ts
📚 Learning: 2025-08-11T15:03:01.902Z
Learnt from: anmolagrawal345
Repo: dydxprotocol/v4-chain PR: 2815
File: indexer/packages/postgres/src/db/migrations/migration_files/20250428122430_add_twap_types_to_order_table.ts:0-0
Timestamp: 2025-08-11T15:03:01.902Z
Learning: The dydxprotocol/v4-chain indexer uses CHECK constraints (not PostgreSQL native enum types) for enum-like columns in the orders table. The `formatAlterTableEnumSql` helper function in the indexer/packages/postgres migrations drops and recreates CHECK constraints with the pattern `${tableName}_${columnName}_check` to enforce allowed values on text columns.

Applied to files:

  • indexer/packages/postgres/src/db/migrations/migration_files/20260116114440_create_indices_transfers_not_null.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-vulcan / (vulcan) Build and Push
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-ecs-service-ender / (ender) Build and Push
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-auxo-lambda / (auxo) Build and Push Lambda
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-ecs-service-comlink / (comlink) Build and Push
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-ecs-service-roundtable / (roundtable) Build and Push
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-bazooka-lambda / (bazooka) Build and Push Lambda
  • GitHub Check: (Public Testnet) Build and Push ECS Services / call-build-and-push-ecs-service-socks / (socks) Build and Push
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-ecs-service-socks / (socks) Build and Push
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-vulcan / (vulcan) Build and Push
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-ecs-service-ender / (ender) Build and Push
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-ecs-service-comlink / (comlink) Build and Push
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-ecs-service-roundtable / (roundtable) Build and Push
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-bazooka-lambda / (bazooka) Build and Push Lambda
  • GitHub Check: (Mainnet) Build and Push ECS Services / call-build-and-push-auxo-lambda / (auxo) Build and Push Lambda
  • GitHub Check: test / run_command
  • GitHub Check: call-build-ecs-service-socks / (socks) Check docker image build
  • GitHub Check: check-build-auxo
  • GitHub Check: call-build-ecs-service-roundtable / (roundtable) Check docker image build
  • GitHub Check: call-build-ecs-service-vulcan / (vulcan) Check docker image build
  • GitHub Check: check-build-bazooka
  • GitHub Check: call-build-ecs-service-ender / (ender) Check docker image build
  • GitHub Check: call-build-ecs-service-comlink / (comlink) Check docker image build
  • GitHub Check: build-and-push-mainnet
  • GitHub Check: run_command
  • GitHub Check: lint
  • GitHub Check: build-and-push-testnet
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
  • GitHub Check: Summary



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@indexer/packages/postgres/src/stores/subaccount-table.ts`:
- Line 15: The import list includes an unused symbol IsolationLevel in
subaccount-table.ts; remove IsolationLevel from the import statement so it is no
longer imported (update the import that currently lists IsolationLevel), save
and run the linter/TS compile to ensure no other references remain.
- Around line 112-149: The function currently always calls Transaction.start()
and commits/rolls back that txId which breaks callers that pass an active
transaction; change it to reuse an existing transaction when options.txId is
provided: set const externalTxId = options?.txId; const startedHere =
!externalTxId; const txId = externalTxId ?? await Transaction.start(); build
txOptions = { ...options, txId } and use that for rawQuery, and only call
Transaction.commit(txId) or Transaction.rollback(txId) when startedHere is true
(if reusing options.txId, do not commit/rollback). Keep the existing use of
rawQuery and bindings unchanged.
🧹 Nitpick comments (3)
indexer/packages/postgres/src/stores/transfer-table.ts (2)

478-498: Inconsistent parameterization: getNetTransfersBetweenBlockHeightsForSubaccount still uses string interpolation.

This function still interpolates subaccountId, createdAfterHeight, and createdBeforeOrAtHeight directly into the SQL string (lines 483, 491-495), while getNetTransfersPerSubaccount was just refactored to use safe bindings. Consider applying the same parameterization pattern here for consistency and to prevent SQL injection.

♻️ Suggested parameterization
 export async function getNetTransfersBetweenBlockHeightsForSubaccount(
   subaccountId: string,
   createdAfterHeight: string,
   createdBeforeOrAtHeight: string,
   options: Options = DEFAULT_POSTGRES_OPTIONS,
 ): Promise<AssetTransferMap> {
   const queryString: string = `
     SELECT
       "assetId",
       SUM(
         CASE
-          WHEN "senderSubaccountId" = '${subaccountId}' THEN -"size"
+          WHEN "senderSubaccountId" = :subaccountId THEN -"size"
           ELSE "size"
         END
       ) AS "totalSize"
     FROM
       "transfers"
     WHERE
       (
-        "senderSubaccountId" = '${subaccountId}'
-        OR "recipientSubaccountId" = '${subaccountId}'
+        "senderSubaccountId" = :subaccountId
+        OR "recipientSubaccountId" = :subaccountId
       )
-      AND "createdAtHeight" > ${createdAfterHeight}
-      AND "createdAtHeight" <= ${createdBeforeOrAtHeight}
+      AND "createdAtHeight" > :createdAfterHeight::bigint
+      AND "createdAtHeight" <= :createdBeforeOrAtHeight::bigint
     GROUP BY
       "assetId";
   `;

-  const result: {
-    rows: {
-      assetId: string,
-      totalSize: string,
-    }[],
-  } = await rawQuery(queryString, options);
+  const result: {
+    rows: {
+      assetId: string,
+      totalSize: string,
+    }[],
+  } = await rawQuery(queryString, {
+    ...options,
+    bindings: { subaccountId, createdAfterHeight, createdBeforeOrAtHeight },
+  });

558-584: Inconsistent parameterization: getNetTransfersBetweenSubaccountIds also uses string interpolation.

This function interpolates assetId, sourceSubaccountId, and recipientSubaccountId directly into SQL. For consistency with the improved getNetTransfersPerSubaccount and to mitigate SQL injection risk, consider using bindings here as well.

♻️ Suggested parameterization
 export async function getNetTransfersBetweenSubaccountIds(
   sourceSubaccountId: string,
   recipientSubaccountId: string,
   assetId: string,
   options: Options = DEFAULT_POSTGRES_OPTIONS,
 ): Promise<string> {
   const queryString: string = `
   SELECT
     COALESCE(SUM(s."size"), '0') AS "totalSize"
   FROM (
     SELECT DISTINCT
       "size" AS "size",
       "id"
     FROM
       "transfers"
-    WHERE "transfers"."assetId" = '${assetId}'
-    AND "transfers"."senderSubaccountId" = '${sourceSubaccountId}'
-    AND "transfers"."recipientSubaccountId" = '${recipientSubaccountId}'
+    WHERE "transfers"."assetId" = :assetId
+    AND "transfers"."senderSubaccountId" = :sourceSubaccountId
+    AND "transfers"."recipientSubaccountId" = :recipientSubaccountId
     UNION
     SELECT DISTINCT
       -"size" AS "size",
       "id"
     FROM
       "transfers"
-    WHERE "transfers"."assetId" = '${assetId}'
-    AND "transfers"."senderSubaccountId" = '${recipientSubaccountId}'
-    AND "transfers"."recipientSubaccountId" = '${sourceSubaccountId}'
+    WHERE "transfers"."assetId" = :assetId
+    AND "transfers"."senderSubaccountId" = :recipientSubaccountId
+    AND "transfers"."recipientSubaccountId" = :sourceSubaccountId
   ) AS s
   `;

-  const result: {
-    rows: { totalSize: string }[],
-  } = await rawQuery(queryString, options);
+  const result: {
+    rows: { totalSize: string }[],
+  } = await rawQuery(queryString, {
+    ...options,
+    bindings: { assetId, sourceSubaccountId, recipientSubaccountId },
+  });
indexer/packages/postgres/src/db/migrations/migration_files/20260116114440_create_indices_transfers_not_null.ts (1)

3-22: Consider using DROP INDEX CONCURRENTLY in up() for consistency.

The up migration uses non-concurrent DROP INDEX (lines 5, 15) while down uses DROP INDEX CONCURRENTLY (lines 27, 36). While the synchronous drop in up may be intentional (old indexes might drop quickly), using CONCURRENTLY uniformly would minimize lock contention during migration, especially on a large transfers table.

Note: There's an inherent risk window between DROP and CREATE where queries may run slower. The idempotent IF EXISTS/IF NOT EXISTS clauses provide good retry safety if the migration needs to be rerun.

♻️ Optional: Use concurrent DROP in up()
 export async function up(knex: Knex): Promise<void> {
   await knex.raw(
-    'DROP INDEX IF EXISTS "transfers_sendersubaccountid_createdatheight_index";',
+    'DROP INDEX CONCURRENTLY IF EXISTS "transfers_sendersubaccountid_createdatheight_index";',
   );

   await knex.raw(
     `CREATE INDEX CONCURRENTLY IF NOT EXISTS transfers_sender_id_height_nn
     ON transfers("senderSubaccountId", "createdAtHeight")
     WHERE "senderSubaccountId" IS NOT NULL;`,
   );

   await knex.raw(
-    'DROP INDEX IF EXISTS "transfers_recipientsubaccountid_createdatheight_index";',
+    'DROP INDEX CONCURRENTLY IF EXISTS "transfers_recipientsubaccountid_createdatheight_index";',
   );

   await knex.raw(
     `CREATE INDEX CONCURRENTLY IF NOT EXISTS transfers_recipient_id_height_nn
     ON transfers("recipientSubaccountId", "createdAtHeight")
     WHERE "recipientSubaccountId" IS NOT NULL;`,
   );
 }

SubaccountQueryConfig,
SubaccountColumns,
SubaccountCreateObject,
IsolationLevel,


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "IsolationLevel" indexer/packages/postgres/src/stores/subaccount-table.ts

Repository: dydxprotocol/v4-chain

Length of output: 85


Remove unused import IsolationLevel.

IsolationLevel is imported but never referenced in this file.

🤖 Prompt for AI Agents
In `@indexer/packages/postgres/src/stores/subaccount-table.ts` at line 15, The
import list includes an unused symbol IsolationLevel in subaccount-table.ts;
remove IsolationLevel from the import statement so it is no longer imported
(update the import that currently lists IsolationLevel), save and run the
linter/TS compile to ensure no other references remain.

Comment on lines +112 to 149
const txId: number = await Transaction.start();
const txOptions: Options = {
...options,
txId,
};

try {
await rawQuery('SET LOCAL work_mem = \'64MB\';', txOptions);

const result: { rows: SubaccountFromDatabase[] } = await rawQuery(`
SELECT s.*
FROM subaccounts s
WHERE EXISTS (
SELECT 1
FROM transfers t
WHERE t."senderSubaccountId" = s.id
AND t."createdAtHeight" <= :createdBeforeOrAtHeight::bigint
)
OR EXISTS (
SELECT 1
FROM transfers t
WHERE t."recipientSubaccountId" = s.id
AND t."createdAtHeight" <= :createdBeforeOrAtHeight::bigint
)`,
{
...txOptions,
bindings: { createdBeforeOrAtHeight },
});

await Transaction.commit(txId);

return result.rows;

} catch (error) {
await Transaction.rollback(txId);
throw error;
}
}


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n -B 2 -A 5 "getSubaccountsWithTransfers" --type ts

Repository: dydxprotocol/v4-chain

Length of output: 7682


🏁 Script executed:

rg -n -B 10 -A 10 'getSubaccountsWithTransfers.*txId' indexer/services/roundtable/src/helpers/pnl-ticks-helper.ts

Repository: dydxprotocol/v4-chain

Length of output: 843


🏁 Script executed:

rg -n -B 30 'getMostRecentPnlTicksForEachAccount\(\)' indexer/services/roundtable/src/helpers/pnl-ticks-helper.ts | head -60

Repository: dydxprotocol/v4-chain

Length of output: 2107


🏁 Script executed:

rg -n -B 5 -A 10 'function getPnlTicksCreateObjects' indexer/services/roundtable/src/helpers/pnl-ticks-helper.ts | head -40

Repository: dydxprotocol/v4-chain

Length of output: 700


🏁 Script executed:

# Check how BlockTable.getLatest handles txId
rg -n -A 15 'export async function getLatest' indexer/packages/postgres/src/stores/block-table.ts

Repository: dydxprotocol/v4-chain

Length of output: 609


🏁 Script executed:

# Check the Transaction class to understand txId behavior
rg -n -A 20 'class Transaction|export.*function start' indexer/packages/postgres/src/utils/transaction.ts | head -50

Repository: dydxprotocol/v4-chain

Length of output: 155


🏁 Script executed:

# Check how setupBaseQuery uses txId to verify if it's reused or created
rg -n -A 15 'setupBaseQuery' indexer/packages/postgres/src/utils/helpers.ts | head -40

Repository: dydxprotocol/v4-chain

Length of output: 151


🏁 Script executed:

# Find Transaction-related files
fd -t f 'transaction' indexer/packages/postgres/src/

Repository: dydxprotocol/v4-chain

Length of output: 393


🏁 Script executed:

# Find helper files
fd -t f 'helper' indexer/packages/postgres/src/ | head -20

Repository: dydxprotocol/v4-chain

Length of output: 484


🏁 Script executed:

# Check the Transaction class implementation
rg -n -A 20 'export.*class Transaction|export.*function start' indexer/packages/postgres/src/helpers/transaction.ts | head -80

Repository: dydxprotocol/v4-chain

Length of output: 913


🏁 Script executed:

# Check how other similar functions in subaccount-table.ts handle the txId parameter
rg -n -B 3 -A 10 'export async function' indexer/packages/postgres/src/stores/subaccount-table.ts | head -100

Repository: dydxprotocol/v4-chain

Length of output: 2900


🏁 Script executed:

# Check how getPnlTicksCreateObjects is called
rg -n -B 5 -A 5 'getPnlTicksCreateObjects' indexer/services/roundtable/src/tasks/pnl-instrumentation.ts

Repository: dydxprotocol/v4-chain

Length of output: 47


🏁 Script executed:

# Search for calls to getPnlTicksCreateObjects across the codebase
rg -n 'getPnlTicksCreateObjects' indexer/

Repository: dydxprotocol/v4-chain

Length of output: 1556


🏁 Script executed:

# Check the create-pnl-ticks.ts to understand the transaction context
rg -n -B 10 -A 10 'getPnlTicksCreateObjects\(txId\)' indexer/services/roundtable/src/tasks/create-pnl-ticks.ts

Repository: dydxprotocol/v4-chain

Length of output: 830


Fix transaction handling: reuse existing txId instead of creating a new transaction.

The function creates a new transaction and ignores any txId provided in options. When called from getPnlTicksCreateObjects(txId) (which is initiated with REPEATABLE_READ isolation), this breaks transaction consistency by starting a separate transaction instead of reusing the caller's context. Other functions in the same file (create, update, findById) correctly use Transaction.get(options.txId) to reuse the transaction.

Update the function to check if options.txId is provided and reuse it; only start a new transaction if none is provided.

🤖 Prompt for AI Agents
In `@indexer/packages/postgres/src/stores/subaccount-table.ts` around lines 112 -
149, The function currently always calls Transaction.start() and commits/rolls
back that txId which breaks callers that pass an active transaction; change it
to reuse an existing transaction when options.txId is provided: set const
externalTxId = options?.txId; const startedHere = !externalTxId; const txId =
externalTxId ?? await Transaction.start(); build txOptions = { ...options, txId
} and use that for rawQuery, and only call Transaction.commit(txId) or
Transaction.rollback(txId) when startedHere is true (if reusing options.txId, do
not commit/rollback). Keep the existing use of rawQuery and bindings unchanged.
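The fix described above can be sketched in TypeScript. The Transaction object here is a minimal stand-in for the real helper in indexer/packages/postgres (its shape is an assumption for illustration); the point is only the reuse-or-start-then-conditionally-commit pattern:

```typescript
// Stand-in Options/Transaction shapes; the real ones live in the postgres package.
type Options = { txId?: number };

const Transaction = {
  nextId: 1,
  committed: [] as number[],
  rolledBack: [] as number[],
  async start(): Promise<number> { return this.nextId++; },
  async commit(txId: number): Promise<void> { this.committed.push(txId); },
  async rollback(txId: number): Promise<void> { this.rolledBack.push(txId); },
};

async function getSubaccountsWithTransfers(options: Options = {}): Promise<number> {
  // Reuse the caller's transaction when one is provided; otherwise start our own.
  const externalTxId = options.txId;
  const startedHere = externalTxId === undefined;
  const txId = externalTxId ?? await Transaction.start();
  try {
    // ... rawQuery('SET LOCAL work_mem = ...', { ...options, txId }) and the
    // EXISTS query would run here, all scoped to txId ...
    if (startedHere) { await Transaction.commit(txId); }  // never commit a caller's tx
    return txId;
  } catch (error) {
    if (startedHere) { await Transaction.rollback(txId); }  // likewise for rollback
    throw error;
  }
}
```

Called without options it starts and commits its own transaction; called with { txId } it runs inside the caller's transaction and leaves commit/rollback to the caller, preserving the REPEATABLE_READ context set up by getPnlTicksCreateObjects.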


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@indexer/services/ender/src/lib/on-message.ts`:
- Line 89: The SET LOCAL work_mem query is executed before the transaction try
block, so if it throws the transaction will leak; move the rawQuery('SET LOCAL
work_mem=\'128MB\';', { txId }) call (and the isolation level setting call that
configures the transaction) into the existing try block inside the function
handling the transaction in on-message.ts so that any failure triggers the
existing catch/finally rollback logic (locate the rawQuery invocation and the
surrounding try/catch that runs rollback/finally and relocate the SET LOCAL and
isolation-level rawQuery calls to the top of that try block).
🧹 Nitpick comments (1)
indexer/services/ender/src/lib/on-message.ts (1)

14-14: Prefer public postgres exports over deep build-path imports (Line 14).
Importing from build/src/... couples this service to internal package layout; consider re-exporting rawQuery from @dydxprotocol-indexer/postgres (or a stable public module) and importing from there.

let success: boolean = false;
const txId: number = await Transaction.start();
await Transaction.setIsolationLevel(txId, IsolationLevel.READ_UNCOMMITTED);
await rawQuery('SET LOCAL work_mem=\'128MB\';', { txId });


⚠️ Potential issue | 🟠 Major

Ensure SET LOCAL failure triggers rollback (Line 89).
rawQuery runs before the try, so an error here skips rollback/finally and can leak the transaction. Move it (and isolation level) into the try block.

✅ Suggested fix
   let success: boolean = false;
   const txId: number = await Transaction.start();
-  await Transaction.setIsolationLevel(txId, IsolationLevel.READ_UNCOMMITTED);
-  await rawQuery('SET LOCAL work_mem=\'128MB\';', { txId });
   try {
+    await Transaction.setIsolationLevel(txId, IsolationLevel.READ_UNCOMMITTED);
+    await rawQuery('SET LOCAL work_mem=\'128MB\';', { txId });
     validateIndexerTendermintBlock(indexerTendermintBlock);
🤖 Prompt for AI Agents
In `@indexer/services/ender/src/lib/on-message.ts` at line 89, The SET LOCAL
work_mem query is executed before the transaction try block, so if it throws the
transaction will leak; move the rawQuery('SET LOCAL work_mem=\'128MB\';', { txId
}) call (and the isolation level setting call that configures the transaction)
into the existing try block inside the function handling the transaction in
on-message.ts so that any failure triggers the existing catch/finally rollback
logic (locate the rawQuery invocation and the surrounding try/catch that runs
rollback/finally and relocate the SET LOCAL and isolation-level rawQuery calls
to the top of that try block).
