
Commit 22be3b8

Yeraze and claude authored
refactor: replace raw SQL duplication with col() helper for cross-DB quoting (#2372)
* docs: add Drizzle JOIN refactor Phase 1 design spec

* refactor: replace raw SQL duplication with col() helper for cross-DB quoting

  Adds a BaseRepository.col(name) helper that returns correctly quoted column
  names (PostgreSQL: "camelCase"; SQLite/MySQL: bare identifiers) and replaces
  4 isPostgres() branched queries in misc.ts with single unified queries:

  - getPacketLogs: packet_log LEFT JOIN nodes (x2)
  - getPacketLogById: same double JOIN
  - getPacketCountsByNode: packet_log LEFT JOIN nodes with GROUP BY
  - getDistinctRelayNodes: SELECT from nodes with bitwise WHERE

  Removes ~40 lines of duplicated SQL. Adds 9 new tests verifying that the
  JOIN queries return correct node names, handle unknown nodes, respect
  pagination, and maintain sort order.

* feat: add backend soak test for all three database backends

  Launches SQLite, PostgreSQL, and MySQL in sequence and monitors logs for
  errors during a configurable soak period (default 5 minutes each). Logs are
  saved to tests/soak-logs/ for post-test review.

  Usage: tests/backend-soak-test.sh [duration_seconds]

* fix: add missing decrypted_by column to messages table for PG/MySQL

  The PostgreSQL and MySQL baseline migrations omitted the decrypted_by column
  from the messages CREATE TABLE, causing a DrizzleQueryError on any SELECT
  from messages. The SQLite baseline had it correctly. Found by the new
  backend soak test.

* fix: add missing channel_database columns for PG/MySQL baselines

  Migration 014 now also adds enforceNameValidation and sortOrder to
  channel_database for PostgreSQL and MySQL — both were omitted from the v3.7
  baseline. Also improved soak test exclude patterns for expected ECONNREFUSED
  errors in the dev environment.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 46e1ef6 commit 22be3b8

File tree

10 files changed: +660 −72 lines changed


.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -1,6 +1,7 @@
 # Logs
 logs
 *.log
+tests/soak-logs/
 npm-debug.log*
 yarn-debug.log*
 yarn-error.log*
```

CLAUDE.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -108,7 +108,7 @@ Migrations use a centralized registry in `src/db/migrations.ts`. Each migration
 4. Make migrations **idempotent** — use try/catch for SQLite (`duplicate column`), `IF NOT EXISTS` for PostgreSQL, `information_schema` checks for MySQL
 5. **Column naming**: SQLite uses `snake_case`, PostgreSQL/MySQL use `camelCase` (quoted in PG raw SQL)

-**Current migration count:** 13 (latest: `013_add_audit_log_missing_columns`)
+**Current migration count:** 14 (latest: `014_add_messages_decrypted_by`)

 ## Testing
```
Lines changed: 82 additions & 0 deletions
# Drizzle JOIN Refactor (Phase 1) — Design Spec

**Date:** 2026-03-21
**Status:** Approved

## Overview

Replace 4 raw SQL queries in `src/db/repositories/misc.ts` that branch on database type (PostgreSQL vs SQLite/MySQL) for column quoting with unified Drizzle ORM query builder calls. This eliminates ~80 lines of duplicated SQL and 4 `isPostgres()` conditional blocks.

## Scope

Only queries where the branching is purely about column name quoting. Excludes queries with genuine SQL syntax differences (DISTINCT ON, DELETE RETURNING, etc.) — those are future phases.

## Queries to Refactor

### 1. `getPacketLogs` (lines ~965-984)

- `packet_log` LEFT JOIN `nodes` twice (as from_nodes, to_nodes)
- Selects `longName` from each joined node
- Has WHERE clause, ORDER BY timestamp DESC + created_at DESC, LIMIT/OFFSET

### 2. `getPacketLogById` (lines ~999-1015)

- Same double LEFT JOIN as above
- Filtered by `pl.id = ${id}`

### 3. `getPacketCountsByNode` (lines ~1175-1195)

- `packet_log` LEFT JOIN `nodes` once
- GROUP BY + COUNT(*) + ORDER BY count DESC
- Postgres uses `COUNT(*)::int` cast (unnecessary with Drizzle)

### 4. `getDistinctRelayNodes` (lines ~1100-1104)

- Simple SELECT from nodes with bitwise WHERE
- Not a JOIN, just a quoting difference on column names

## Design

### Approach

Use Drizzle's `alias()` function for double-joining the nodes table, and the standard `.select().from().leftJoin()` builder for all queries. Drizzle handles column quoting per backend automatically.

### Key Drizzle patterns needed

**Table alias for double-join:**

```typescript
import { alias } from 'drizzle-orm/sqlite-core'; // or pg-core, mysql-core

// Use the active schema's nodes table
const fromNodes = alias(this.tables.nodes, 'from_nodes');
const toNodes = alias(this.tables.nodes, 'to_nodes');
```

Note: `alias()` is backend-specific in Drizzle. Since we have three backends, we need to use the correct `alias` import or find a backend-agnostic approach. The `sql` tagged template with `this.tables.nodes` column references may be simpler — Drizzle's `sql` helper auto-quotes column references from table schemas.

**Unified column references:**

```typescript
const { packetLog, nodes } = this.tables;
// Drizzle auto-quotes: nodes.longName → "longName" (PG) or longName (SQLite)
```

**COUNT without cast:**

```typescript
sql<number>`COUNT(*)` // Works across all backends
```

### What changes

- Remove 4 `if (this.isPostgres()) { ... } else { ... }` blocks
- Replace with a single Drizzle query builder call per method
- No changes to method signatures or return types
- `normalizePacketLogRow()` continues to handle result normalization

### Files Modified

| File | Change |
|------|--------|
| `src/db/repositories/misc.ts` | Replace 4 branched queries with Drizzle query builder |

## Testing

- Existing tests cover these methods — they must continue to pass
- Run full test suite (3052+ tests)
- Build verification (TypeScript clean)

## Risk

Low — method signatures and return types don't change. The Drizzle query builder generates the same SQL that's currently hardcoded, just with correct quoting per backend. If any query doesn't translate cleanly to the Drizzle builder, we keep the raw SQL but use Drizzle column references for auto-quoting instead of full builder conversion.
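The fallback named in the Risk section (keep raw SQL, fix only the quoting) can be sketched standalone. Everything below is illustrative; the `packetLogQuery` function and its inlined `col` rule are assumptions that mirror the spec, not the repository's code:

```typescript
// Illustrative only: render the unified double-JOIN query from the spec for
// each backend, showing that the sole difference is identifier quoting.
function packetLogQuery(isPostgres: boolean): string {
  // Same rule the col() helper applies: quote camelCase identifiers on PostgreSQL
  const col = (name: string) => (isPostgres ? `"${name}"` : name);
  return [
    `SELECT pl.*,`,
    `       fn.${col('longName')} AS from_node_longName,`,
    `       tn.${col('longName')} AS to_node_longName`,
    `FROM packet_log pl`,
    `LEFT JOIN nodes fn ON fn.${col('nodeNum')} = pl.from_node`,
    `LEFT JOIN nodes tn ON tn.${col('nodeNum')} = pl.to_node`,
    `ORDER BY pl.timestamp DESC, pl.created_at DESC`,
  ].join('\n');
}

console.log(packetLogQuery(true));
```

Stripping the quotes from the PostgreSQL rendering yields the SQLite/MySQL rendering exactly, which is why a single template plus a quoting helper can replace both branches.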

src/db/migrations.test.ts

Lines changed: 5 additions & 5 deletions
```diff
@@ -2,8 +2,8 @@ import { describe, it, expect } from 'vitest';
 import { registry } from './migrations.js';

 describe('migrations registry', () => {
-  it('has all 13 migrations registered', () => {
-    expect(registry.count()).toBe(13);
+  it('has all 14 migrations registered', () => {
+    expect(registry.count()).toBe(14);
   });

   it('first migration is v37 baseline', () => {
@@ -12,11 +12,11 @@ describe('migrations registry', () => {
     expect(all[0].name).toContain('v37_baseline');
   });

-  it('last migration is the audit_log missing columns fix', () => {
+  it('last migration is the messages decrypted_by fix', () => {
     const all = registry.getAll();
     const last = all[all.length - 1];
-    expect(last.number).toBe(13);
-    expect(last.name).toContain('audit_log');
+    expect(last.number).toBe(14);
+    expect(last.name).toContain('messages_decrypted_by');
   });

   it('migrations are sequentially numbered from 1 to 12', () => {
```

src/db/migrations.ts

Lines changed: 16 additions & 1 deletion
```diff
@@ -1,7 +1,7 @@
 /**
  * Migration Registry Barrel File
  *
- * Registers all 13 migrations in sequential order for use by the migration runner.
+ * Registers all 14 migrations in sequential order for use by the migration runner.
  * Migration 001 is the v3.7 baseline (selfIdempotent — handles its own detection).
  * Migrations 002-011 were originally 078-087 and retain their original settingsKeys
  * for upgrade compatibility.
@@ -25,6 +25,7 @@ import { runMigration086Sqlite, runMigration086Postgres, runMigration086Mysql }
 import { migration as fixMessageNodeNumBigintMigration, runMigration087Postgres, runMigration087Mysql } from '../server/migrations/011_fix_message_nodenum_bigint.js';
 import { migration as authAlignMigration, runMigration012Postgres, runMigration012Mysql } from '../server/migrations/012_align_sqlite_auth_schema.js';
 import { migration as auditLogColumnsMigration, runMigration013Postgres, runMigration013Mysql } from '../server/migrations/013_add_audit_log_missing_columns.js';
+import { migration as messagesDecryptedByMigration, runMigration014Postgres, runMigration014Mysql } from '../server/migrations/014_add_messages_decrypted_by.js';

 // ============================================================================
 // Registry
@@ -169,3 +170,17 @@ registry.register({
   postgres: (client) => runMigration013Postgres(client),
   mysql: (pool) => runMigration013Mysql(pool),
 });
+
+// ---------------------------------------------------------------------------
+// Migration 014: Add missing decrypted_by column to messages table
+// PG/MySQL baselines omitted this column that the Drizzle schema expects.
+// ---------------------------------------------------------------------------
+
+registry.register({
+  number: 14,
+  name: 'add_messages_decrypted_by',
+  settingsKey: 'migration_014_add_messages_decrypted_by',
+  sqlite: (db) => messagesDecryptedByMigration.up(db),
+  postgres: (client) => runMigration014Postgres(client),
+  mysql: (pool) => runMigration014Mysql(pool),
+});
```

src/db/repositories/base.ts

Lines changed: 10 additions & 0 deletions
```diff
@@ -4,6 +4,7 @@
  * Provides common functionality for all repository implementations.
  * Supports SQLite, PostgreSQL, and MySQL through Drizzle ORM.
  */
+import { sql } from 'drizzle-orm';
 import { BetterSQLite3Database } from 'drizzle-orm/better-sqlite3';
 import { NodePgDatabase } from 'drizzle-orm/node-postgres';
 import { MySql2Database } from 'drizzle-orm/mysql2';
@@ -117,6 +118,15 @@ export abstract class BaseRepository {
     return this.mysqlDb;
   }

+  /**
+   * Quote a column name for use in raw SQL.
+   * PostgreSQL requires double-quoted "camelCase" identifiers; SQLite/MySQL do not.
+   * Returns a raw SQL fragment that can be interpolated into sql`` templates.
+   */
+  protected col(name: string) {
+    return this.isPostgres() ? sql.raw(`"${name}"`) : sql.raw(name);
+  }
+
   /**
    * Execute a raw SQL query that returns rows (SELECT) across all dialects.
    * SQLite's Drizzle driver doesn't have .execute() — uses .all() instead.
```
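The quoting rule `col()` applies can be shown with a plain-string sketch (assumed equivalent behavior; the real helper returns a Drizzle `sql.raw` fragment rather than a string):

```typescript
// String-returning stand-in for BaseRepository.col(); the real helper wraps
// the result in Drizzle's sql.raw() so it can be spliced into sql`` templates.
function col(name: string, isPostgres: boolean): string {
  return isPostgres ? `"${name}"` : name;
}

console.log(col('longName', true));  // "longName"
console.log(col('longName', false)); // longName
```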
Lines changed: 173 additions & 0 deletions
```typescript
/**
 * Misc Repository - Packet Log Query Tests
 *
 * Tests the refactored Drizzle JOIN queries for packet log methods.
 * Verifies that column references are correctly quoted across database backends.
 */
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import Database from 'better-sqlite3';
import { drizzle, BetterSQLite3Database } from 'drizzle-orm/better-sqlite3';
import { MiscRepository } from './misc.js';
import * as schema from '../schema/index.js';

describe('MiscRepository - Packet Log Queries', () => {
  let db: Database.Database;
  let drizzleDb: BetterSQLite3Database<typeof schema>;
  let repo: MiscRepository;

  beforeEach(() => {
    db = new Database(':memory:');

    // Create tables needed for JOIN queries
    db.exec(`
      CREATE TABLE IF NOT EXISTS nodes (
        nodeNum INTEGER PRIMARY KEY,
        nodeId TEXT,
        longName TEXT,
        shortName TEXT,
        lastHeard INTEGER
      )
    `);

    db.exec(`
      CREATE TABLE IF NOT EXISTS packet_log (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        packet_id INTEGER,
        timestamp INTEGER NOT NULL,
        from_node INTEGER NOT NULL,
        from_node_id TEXT,
        to_node INTEGER,
        to_node_id TEXT,
        channel INTEGER,
        portnum INTEGER NOT NULL,
        portnum_name TEXT,
        encrypted INTEGER DEFAULT 0,
        snr REAL,
        rssi INTEGER,
        hop_limit INTEGER,
        hop_start INTEGER,
        relay_node INTEGER,
        payload_size INTEGER,
        want_ack INTEGER DEFAULT 0,
        priority INTEGER,
        payload_preview TEXT,
        metadata TEXT,
        direction TEXT DEFAULT 'rx',
        created_at INTEGER,
        transport_mechanism TEXT,
        decrypted_by TEXT,
        decrypted_channel_id INTEGER
      )
    `);

    // Create settings table (needed by MiscRepository)
    db.exec(`
      CREATE TABLE IF NOT EXISTS settings (
        key TEXT PRIMARY KEY,
        value TEXT
      )
    `);

    drizzleDb = drizzle(db, { schema });
    repo = new MiscRepository(drizzleDb as any, 'sqlite');

    // Insert test nodes
    db.exec(`INSERT INTO nodes (nodeNum, nodeId, longName, shortName) VALUES (100, '!00000064', 'Node Alpha', 'ALPH')`);
    db.exec(`INSERT INTO nodes (nodeNum, nodeId, longName, shortName) VALUES (200, '!000000c8', 'Node Beta', 'BETA')`);
    db.exec(`INSERT INTO nodes (nodeNum, nodeId, longName, shortName) VALUES (300, '!0000012c', 'Node Gamma', 'GAMM')`);

    // Insert test packets
    const now = Math.floor(Date.now() / 1000);
    const nowMs = Date.now();
    db.exec(`INSERT INTO packet_log (packet_id, timestamp, from_node, from_node_id, to_node, to_node_id, portnum, portnum_name, direction, created_at, relay_node) VALUES (1, ${now}, 100, '!00000064', 200, '!000000c8', 1, 'TEXT_MESSAGE_APP', 'rx', ${nowMs}, 100)`);
    db.exec(`INSERT INTO packet_log (packet_id, timestamp, from_node, from_node_id, to_node, to_node_id, portnum, portnum_name, direction, created_at, relay_node) VALUES (2, ${now}, 200, '!000000c8', 100, '!00000064', 1, 'TEXT_MESSAGE_APP', 'rx', ${nowMs + 1}, 200)`);
    db.exec(`INSERT INTO packet_log (packet_id, timestamp, from_node, from_node_id, to_node, to_node_id, portnum, portnum_name, direction, created_at) VALUES (3, ${now - 60}, 100, '!00000064', 4294967295, '!ffffffff', 3, 'POSITION_APP', 'rx', ${nowMs - 60000})`);
  });

  afterEach(() => {
    db.close();
  });

  describe('getPacketLogs', () => {
    it('returns packets with joined node names', async () => {
      const packets = await repo.getPacketLogs({});
      expect(packets.length).toBe(3);

      // Check that longName was joined from nodes table
      const pkt1 = packets.find(p => p.packet_id === 1);
      expect(pkt1).toBeDefined();
      expect(pkt1!.from_node_longName).toBe('Node Alpha');
      expect(pkt1!.to_node_longName).toBe('Node Beta');
    });

    it('returns null longName for unknown nodes', async () => {
      // Insert packet from unknown node
      const now = Math.floor(Date.now() / 1000);
      db.exec(`INSERT INTO packet_log (packet_id, timestamp, from_node, from_node_id, to_node, portnum, direction, created_at) VALUES (99, ${now}, 999, '!000003e7', NULL, 1, 'rx', ${Date.now()})`);

      const packets = await repo.getPacketLogs({});
      const unknownPkt = packets.find(p => p.packet_id === 99);
      expect(unknownPkt).toBeDefined();
      expect(unknownPkt!.from_node_longName).toBeNull();
    });

    it('respects limit and offset', async () => {
      const packets = await repo.getPacketLogs({ limit: 2, offset: 0 });
      expect(packets.length).toBe(2);
    });

    it('orders by timestamp DESC then created_at DESC', async () => {
      const packets = await repo.getPacketLogs({});
      // First two packets have same timestamp, ordered by created_at DESC
      expect(packets[0].packet_id).toBe(2); // higher created_at
      expect(packets[1].packet_id).toBe(1);
      expect(packets[2].packet_id).toBe(3); // older timestamp
    });
  });

  describe('getPacketLogById', () => {
    it('returns a single packet with joined node names', async () => {
      const packets = await repo.getPacketLogs({});
      const firstId = packets[0].id;

      const pkt = await repo.getPacketLogById(firstId!);
      expect(pkt).not.toBeNull();
      expect(pkt!.from_node_longName).toBeDefined();
    });

    it('returns null for non-existent id', async () => {
      const pkt = await repo.getPacketLogById(99999);
      expect(pkt).toBeNull();
    });
  });

  describe('getPacketCountsByNode', () => {
    it('returns counts with joined node names', async () => {
      const counts = await repo.getPacketCountsByNode({});
      expect(counts.length).toBeGreaterThan(0);

      const alpha = counts.find(c => c.from_node === 100);
      expect(alpha).toBeDefined();
      expect(alpha!.from_node_longName).toBe('Node Alpha');
      expect(alpha!.count).toBe(2); // packets 1 and 3
    });

    it('respects limit', async () => {
      const counts = await repo.getPacketCountsByNode({ limit: 1 });
      expect(counts.length).toBe(1);
    });
  });

  describe('getDistinctRelayNodes', () => {
    it('returns relay nodes with matched node names', async () => {
      const relays = await repo.getDistinctRelayNodes();
      expect(relays.length).toBeGreaterThan(0);

      // relay_node 100 & 0xFF = 100, matches node 100 (Node Alpha)
      const relay100 = relays.find(r => r.relay_node === 100);
      expect(relay100).toBeDefined();
      expect(relay100!.matching_nodes.length).toBeGreaterThan(0);
      expect(relay100!.matching_nodes[0].longName).toBe('Node Alpha');
    });
  });
});
```
