
Commit cef5421

spark transactions cache (#31)

* spark transactions cache
* code rabbit fix
* bump version

1 parent 62fd2f1

File tree

23 files changed: +1297 −217 lines changed

bindings/lni_nodejs/src/nwc.rs

Lines changed: 2 additions & 0 deletions

```diff
@@ -116,6 +116,8 @@ impl NwcNode {
             limit: params.limit,
             payment_hash: params.payment_hash,
             search: params.search,
+            created_after: params.created_after,
+            created_before: params.created_before,
         };
         let txns = lni::nwc::api::list_transactions(self.inner.clone(), nwc_params)
             .await
```

bindings/lni_nodejs/src/spark.rs

Lines changed: 2 additions & 0 deletions

```diff
@@ -215,6 +215,8 @@ impl SparkNode {
             limit: params.limit,
             search: params.search,
             payment_hash: params.payment_hash,
+            created_after: params.created_after,
+            created_before: params.created_before,
         };

         node.list_transactions(lni_params)
```
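Both bindings now forward `created_after`/`created_before` into `list_transactions`. A client-side sketch of the intended filter semantics, assuming unix-second timestamps and strict (exclusive) bounds — the actual boundary behavior is defined by the lni core, so treat this purely as an illustration:

```typescript
// Illustrative equivalent of the new created_after / created_before
// parameters: keep only transactions whose createdAt falls inside the
// requested window. Timestamps are unix seconds (an assumption here),
// and both bounds are optional, matching the optional params above.
interface Txn {
  paymentHash: string;
  createdAt: number;
}

function filterByWindow(
  txns: Txn[],
  createdAfter?: number,
  createdBefore?: number,
): Txn[] {
  return txns.filter(
    (t) =>
      (createdAfter === undefined || t.createdAt > createdAfter) &&
      (createdBefore === undefined || t.createdAt < createdBefore),
  );
}
```

With both bounds omitted the filter is a no-op, so existing callers of `list_transactions` are unaffected.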

bindings/typescript/README.md

Lines changed: 86 additions & 0 deletions

```diff
@@ -112,6 +112,7 @@ const sparkNode = createNode({
     // defaultMaxFeeSats: 20,
     // sparkOptions: { ...sdk options... },
     // sdkEntry: 'auto' | 'bare' | 'native' | 'default'
+    // storage: myStorageProvider, // persistent cache (see below)
   },
 });
```
@@ -153,6 +154,91 @@ Three cryptographic bugs were discovered and fixed during implementation:

Reference implementation for Spark's FROST behavior: [buildonspark/spark](https://github.com/buildonspark/spark) (uses Rust WASM, not `@frosts`).

### StorageProvider (optional persistent cache)

Spark maintains an internal `paymentHash → transferId` cache that accelerates `lookupInvoice`, `onInvoiceEvents`, and `listTransactions`. By default the cache lives in memory for the lifetime of the `SparkNode`. Pass a `StorageProvider` to persist it across app restarts.

```ts
import { type StorageProvider } from '@sunnyln/lni';
```

The interface is a simple async key-value store:

```ts
interface StorageProvider {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
  remove(key: string): Promise<void>;
}
```
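For unit tests or short-lived scripts, any object with these three methods will do. A minimal sketch backed by a plain `Map` (the `createMemoryStorage` name is illustrative, not part of the library):

```typescript
// Minimal in-memory provider: the same three-method shape as
// StorageProvider, backed by a Map. All entries are lost when the
// process exits, so this is only suitable for tests and experiments.
function createMemoryStorage() {
  const store = new Map<string, string>();
  return {
    get: async (key: string) => store.get(key) ?? null,
    set: async (key: string, value: string) => {
      store.set(key, value);
    },
    remove: async (key: string) => {
      store.delete(key);
    },
  };
}
```

The persistent providers below follow the same pattern, swapping the `Map` for a real backing store.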
**Browser — localStorage**

```ts
const localStorageProvider: StorageProvider = {
  get: async (key) => localStorage.getItem(key),
  set: async (key, value) => localStorage.setItem(key, value),
  remove: async (key) => localStorage.removeItem(key),
};

const node = createNode({
  kind: 'spark',
  config: { mnemonic: '...', storage: localStorageProvider },
});
```
**Expo / React Native — Drizzle ORM + expo-sqlite**

```ts
import { eq } from 'drizzle-orm';
import { drizzle } from 'drizzle-orm/expo-sqlite';
import { integer, sqliteTable, text } from 'drizzle-orm/sqlite-core';
import { openDatabaseSync } from 'expo-sqlite';

export const sparkTransactionsCache = sqliteTable('spark_transactions_cache', {
  paymentHash: text('payment_hash').primaryKey(),
  transferId: text('transfer_id').notNull(),
  createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
  updatedAt: integer('updated_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()),
});
export type SparkTransactionCache = typeof sparkTransactionsCache.$inferSelect;
export type NewSparkTransactionCache = typeof sparkTransactionsCache.$inferInsert;

const db = drizzle(openDatabaseSync('lni-cache.db'));

// Cache keys may arrive as `lni:txcache:<paymentHash>`; strip the
// prefix so the bare payment hash is used as the primary key.
const PREFIX = 'lni:txcache:';
const drizzleStorageProvider: StorageProvider = {
  get: async (key) => {
    const hash = key.startsWith(PREFIX) ? key.slice(PREFIX.length) : key;
    const row = db.select().from(sparkTransactionsCache)
      .where(eq(sparkTransactionsCache.paymentHash, hash)).get();
    return row?.transferId ?? null;
  },
  set: async (key, value) => {
    const hash = key.startsWith(PREFIX) ? key.slice(PREFIX.length) : key;
    db.insert(sparkTransactionsCache)
      .values({ paymentHash: hash, transferId: value })
      .onConflictDoUpdate({
        target: sparkTransactionsCache.paymentHash,
        set: { transferId: value, updatedAt: new Date() },
      }).run();
  },
  remove: async (key) => {
    const hash = key.startsWith(PREFIX) ? key.slice(PREFIX.length) : key;
    db.delete(sparkTransactionsCache)
      .where(eq(sparkTransactionsCache.paymentHash, hash)).run();
  },
};
```
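For plain Node.js, a provider can be backed by a single JSON file. A minimal sketch (the `createFileStorage` helper is hypothetical, fine for small caches but not safe for concurrent writers):

```typescript
// Node: file-backed provider. Reads the whole JSON file on each call
// and rewrites it on set/remove, which is acceptable for a small
// paymentHash → transferId cache.
import { promises as fs } from 'node:fs';

function createFileStorage(path: string) {
  const load = async (): Promise<Record<string, string>> => {
    try {
      return JSON.parse(await fs.readFile(path, 'utf8'));
    } catch {
      return {}; // missing or unreadable file starts an empty cache
    }
  };
  const save = (data: Record<string, string>) =>
    fs.writeFile(path, JSON.stringify(data));
  return {
    get: async (key: string) => (await load())[key] ?? null,
    set: async (key: string, value: string) => {
      const data = await load();
      data[key] = value;
      await save(data);
    },
    remove: async (key: string) => {
      const data = await load();
      delete data[key];
      await save(data);
    },
  };
}
```

A production setup would more likely use SQLite or Redis behind the same three methods.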
When a `StorageProvider` is configured, `lookupInvoice` uses a tiered lookup strategy:

1. **Cache hit** — O(1) lookup by cached transfer ID
2. **1-hour scan** — `getTransfers` with a 1-hour lookback window
3. **24-hour scan** — `getTransfers` with a 24-hour lookback window
4. **Full scan** — pages through all transfers (last resort)

`onInvoiceEvents` also uses SDK event listeners (`transfer:claimed`) when available, falling back to polling when not.
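The tiered strategy above can be sketched generically: try each source in ascending cost order and stop at the first hit. The names here are illustrative; the real implementation lives inside the Spark binding:

```typescript
// Generic tiered lookup: each tier is an async source that either
// resolves the value or returns null. Cheaper tiers (the cache, the
// 1-hour scan) run first; expensive tiers (the full scan) only run
// when everything before them missed.
type Tier<T> = () => Promise<T | null>;

async function tieredLookup<T>(tiers: Tier<T>[]): Promise<T | null> {
  for (const tier of tiers) {
    const hit = await tier();
    if (hit !== null) return hit; // stop at the cheapest tier that succeeds
  }
  return null; // every tier missed
}
```

For `lookupInvoice`, the tiers would be the cache `get`, the 1-hour `getTransfers` window, the 24-hour window, and the full pagination, in that order.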
Spark entrypoint behavior:
- `sdkEntry: 'auto'` (default) uses a browser-safe bundled Spark bare runtime in browser/Expo and falls back to the default SDK entry in Node.
- `sdkEntry: 'bare'` forces the browser-safe bundled no-WASM/no-native path.
