Feat/tutorials #44
@@ -0,0 +1,9 @@
---
title: "Overview"
description: "A collection of tutorials exploring approaches, optimizations, and strategies across various use cases."
---

<CardGroup>
  <Card title="Use AWS S3 for attachment storage" icon="server" href="/tutorials/code-changes/aws-s3-storage-adapter" horizontal />
  <Card title="Improve Supabase Connector Performance" icon="server" href="/tutorials/code-changes/supabase-connector-performance" horizontal />
</CardGroup>
@@ -0,0 +1,271 @@
---
title: "Improve Supabase Connector Performance"
description: "In this tutorial we will show you how to improve the performance of the Supabase Connector for the [React Native To-Do List example app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)."
---

> **Review comment:** This needs an intro to set the stage. All I see is various "strategy" headings, but I don't quickly get an idea of what the problem is with the existing implementation, how we're going to solve it, and how all the headings fit in. Relatedly, it's not clear whether I should open any of those accordions, and after opening one I just see a big blob of code, with no explanation/intro/guidance.
>
> **Reply:** Agreed, context is definitely a good thing!

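The stock connector in the example app uploads every row in a CRUD transaction as its own network request, so a transaction with hundreds of queued operations performs hundreds of sequential round-trips to Supabase. The two strategies below reduce that overhead by batching compatible operations into bulk calls. For orientation, here is a simplified sketch of the unbatched baseline `uploadData` (an approximation for context only, written as a standalone function taking the Supabase client as a parameter; see the example app for the authoritative implementation):

```typescript
import { AbstractPowerSyncDatabase, UpdateType } from '@powersync/react-native';
import { SupabaseClient } from '@supabase/supabase-js';

// Simplified baseline: one network request per CRUD row.
async function uploadDataUnbatched(client: SupabaseClient, database: AbstractPowerSyncDatabase): Promise<void> {
  const transaction = await database.getNextCrudTransaction();
  if (!transaction) {
    return;
  }

  for (const op of transaction.crud) {
    const table = client.from(op.table);
    let result: any;
    switch (op.op) {
      case UpdateType.PUT:
        // One upsert per row, even when many PUTs target the same table
        result = await table.upsert({ ...op.opData, id: op.id });
        break;
      case UpdateType.PATCH:
        result = await table.update(op.opData).eq('id', op.id);
        break;
      case UpdateType.DELETE:
        result = await table.delete().eq('id', op.id);
        break;
    }
    if (result?.error) {
      throw result.error; // PowerSync retries the upload after a delay
    }
  }
  await transaction.complete();
}
```

Both optimized implementations below keep the same transaction and error-handling contract; the highlighted lines in each snippet roughly indicate what changes relative to this baseline.
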
<AccordionGroup>
<Accordion title="Sequential Merge Strategy">
<Note>
Shoutout to Christoffer Årstrand for the original implementation of this optimization.
</Note>
```typescript {7-8, 11, 13-15, 17, 19-20, 24-36, 39, 43-56, 63-64, 79}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
  const transaction = await database.getNextCrudTransaction();
  if (!transaction) {
    return;
  }

  const MERGE_BATCH_LIMIT = 100;
  let batchedOps: CrudEntry[] = [];

  try {
    console.log(`Processing transaction with ${transaction.crud.length} operations`);

    for (let i = 0; i < transaction.crud.length; i++) {
      const cruds = transaction.crud;
      const op = cruds[i];
      const table = this.client.from(op.table);
      batchedOps.push(op);

      let result: any;
      let batched = 1;

      switch (op.op) {
        case UpdateType.PUT:
          const records = [{ ...cruds[i].opData, id: cruds[i].id }];
          while (
            i + 1 < cruds.length &&
            cruds[i + 1].op === op.op &&
            cruds[i + 1].table === op.table &&
            batched < MERGE_BATCH_LIMIT
          ) {
            i++;
            records.push({ ...cruds[i].opData, id: cruds[i].id });
            batchedOps.push(cruds[i]);
            batched++;
          }
          result = await table.upsert(records);
          break;
        case UpdateType.PATCH:
          batchedOps = [op];
          result = await table.update(op.opData).eq('id', op.id);
          break;
        case UpdateType.DELETE:
          batchedOps = [op];
          const ids = [op.id];
          while (
            i + 1 < cruds.length &&
            cruds[i + 1].op === op.op &&
            cruds[i + 1].table === op.table &&
            batched < MERGE_BATCH_LIMIT
          ) {
            i++;
            ids.push(cruds[i].id);
            batchedOps.push(cruds[i]);
            batched++;
          }
          result = await table.delete().in('id', ids);
          break;
      }
      if (result?.error) {
        console.error(result.error);
        throw result.error; // surface the Postgres error code to the fatal-code check below
      }
      if (batched > 1) {
        console.log(`Merged ${batched} ${op.op} operations for table ${op.table}`);
      }
    }
    await transaction.complete();
  } catch (ex: any) {
    console.debug(ex);
    if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
      /**
       * Instead of blocking the queue with these errors,
       * discard the (rest of the) transaction.
       *
       * Note that these errors typically indicate a bug in the application.
       * If protecting against data loss is important, save the failing records
       * elsewhere instead of discarding, and/or notify the user.
       */
      console.error('Data upload error - discarding:', ex);
      await transaction.complete();
    } else {
      // Error may be retryable - e.g. network error or temporary server error.
      // Throwing an error here causes this call to be retried after a delay.
      throw ex;
    }
  }
}
```
</Accordion>
<Accordion title="Pre-sorted Batch Strategy">
```typescript {8-11, 17-20, 23, 26-29, 32-55, 59, 75}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
  const transaction = await database.getNextCrudTransaction();
  if (!transaction) {
    return;
  }

  try {
    // Group operations by type and table
    const putOps: { [table: string]: any[] } = {};
    const deleteOps: { [table: string]: string[] } = {};
    let patchOps: CrudEntry[] = [];

    // Organize operations
    for (const op of transaction.crud) {
      switch (op.op) {
        case UpdateType.PUT:
          if (!putOps[op.table]) {
            putOps[op.table] = [];
          }
          putOps[op.table].push({ ...op.opData, id: op.id });
          break;
        case UpdateType.PATCH:
          patchOps.push(op);
          break;
        case UpdateType.DELETE:
          if (!deleteOps[op.table]) {
            deleteOps[op.table] = [];
          }
          deleteOps[op.table].push(op.id);
          break;
      }
    }

    // Execute bulk operations
    for (const table of Object.keys(putOps)) {
      const result = await this.client.from(table).upsert(putOps[table]);
      if (result.error) {
        console.error(result.error);
        result.error.message = `Could not bulk PUT data to Supabase table ${table}: ${result.error.message}`;
        throw result.error;
      }
    }

    for (const table of Object.keys(deleteOps)) {
      const result = await this.client.from(table).delete().in('id', deleteOps[table]);
      if (result.error) {
        console.error(result.error);
        result.error.message = `Could not bulk DELETE data from Supabase table ${table}: ${result.error.message}`;
        throw result.error;
      }
    }

    // Execute PATCH operations individually since they can't be easily batched
    for (const op of patchOps) {
      const result = await this.client.from(op.table).update(op.opData).eq('id', op.id);
      if (result.error) {
        console.error(result.error);
        result.error.message = `Could not PATCH data in Supabase: ${result.error.message}`;
        throw result.error;
      }
    }

    await transaction.complete();
  } catch (ex: any) {
    console.debug(ex);
    if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
      /**
       * Instead of blocking the queue with these errors,
       * discard the (rest of the) transaction.
       *
       * Note that these errors typically indicate a bug in the application.
       * If protecting against data loss is important, save the failing records
       * elsewhere instead of discarding, and/or notify the user.
       */
      console.error('Data upload error - discarding transaction:', ex);
      await transaction.complete();
    } else {
      // Error may be retryable - e.g. network error or temporary server error.
      // Throwing an error here causes this call to be retried after a delay.
      throw ex;
    }
  }
}
```
</Accordion>
</AccordionGroup>

## Differences

<AccordionGroup>
<Accordion title="Operation grouping strategy">
### Sequential merge strategy
```typescript
const MERGE_BATCH_LIMIT = 100;
let batchedOps: CrudEntry[] = [];
```
- Processes operations sequentially
- Merges consecutive operations of the same type up to a batch limit
- More dynamic/streaming approach

### Pre-sorted batch strategy
```typescript
const putOps: { [table: string]: any[] } = {};
const deleteOps: { [table: string]: string[] } = {};
let patchOps: CrudEntry[] = [];
```
- Pre-sorts all operations by type and table
- Processes each type in bulk after grouping
</Accordion>
<Accordion title="Batching methodology">
### Sequential merge strategy
- Uses a sliding-window approach bounded by `MERGE_BATCH_LIMIT`
- Merges consecutive operations up to the limit
- More granular control over batch sizes
- Better for mixed operation types

### Pre-sorted batch strategy
- Groups ALL operations of the same type together
- Executes one bulk operation per type per table
- Better for large numbers of similar operations
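
To make the difference concrete, here is a small self-contained sketch that counts how many network requests each strategy would issue for the same mixed transaction. This is illustration only: the `Op` shape below is a hypothetical stand-in for `CrudEntry`, and the `MERGE_BATCH_LIMIT` cap is omitted for brevity.

```typescript
// Hypothetical, simplified op shape for illustration (not the real CrudEntry).
type Op = { op: 'PUT' | 'PATCH' | 'DELETE'; table: string; id: string };

const ops: Op[] = [
  { op: 'PUT', table: 'todos', id: '1' },
  { op: 'PUT', table: 'todos', id: '2' },
  { op: 'PATCH', table: 'lists', id: '3' },
  { op: 'PUT', table: 'todos', id: '4' },
  { op: 'PUT', table: 'todos', id: '5' }
];

// Sequential merge: a new request starts whenever the type or table changes
// (PATCHes are always sent individually), mirroring the consecutive-merge loop.
function sequentialRequestCount(crud: Op[]): number {
  let requests = 0;
  for (let i = 0; i < crud.length; i++) {
    requests++;
    while (
      crud[i].op !== 'PATCH' &&
      i + 1 < crud.length &&
      crud[i + 1].op === crud[i].op &&
      crud[i + 1].table === crud[i].table
    ) {
      i++; // merged into the current batch
    }
  }
  return requests;
}

// Pre-sorted batch: one bulk request per (type, table) group, plus one per PATCH.
function presortedRequestCount(crud: Op[]): number {
  const groups = new Set<string>();
  let patches = 0;
  for (const op of crud) {
    if (op.op === 'PATCH') {
      patches++;
    } else {
      groups.add(`${op.op}:${op.table}`);
    }
  }
  return groups.size + patches;
}

console.log(sequentialRequestCount(ops)); // 3 -> [PUT 1,2] [PATCH 3] [PUT 4,5]
console.log(presortedRequestCount(ops)); // 2 -> [PUT 1,2,4,5] [PATCH 3]
```

The pre-sorted strategy wins on request count here, but note that it also reorders operations across types (all PUTs run before DELETEs and PATCHes), which matters if a transaction contains dependent operations on the same rows.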
</Accordion>
</AccordionGroup>

## Key similarities and differences
<CardGroup cols={2}>
<Card title="Key Similarities">
Handle CRUD operations (PUT, PATCH, DELETE) to sync local changes to Supabase
<br />
Manage transactions with `getNextCrudTransaction()`
<br />
Implement the same error handling for fatal and retryable errors
<br />
Complete the transaction after successful processing
</Card>
<Card title="Key Differences">
Operation grouping strategy
<br />
Batching methodology
</Card>
</CardGroup>
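
Both implementations rely on the `FATAL_RESPONSE_CODES` constant in their `catch` blocks to decide which Postgres errors should never be retried. For reference, the demo connector defines it along these lines (reproduced here for convenience; treat the example app as the source of truth):

```typescript
/**
 * Postgres response codes that we cannot recover from by retrying.
 */
const FATAL_RESPONSE_CODES = [
  // Class 22 - Data Exception (e.g. data type mismatch)
  new RegExp('^22...$'),
  // Class 23 - Integrity Constraint Violation (e.g. NOT NULL, FOREIGN KEY or UNIQUE violations)
  new RegExp('^23...$'),
  // INSUFFICIENT PRIVILEGE - typically a row-level security violation
  new RegExp('^42501$')
];
```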

## Use cases

<CardGroup cols={2}>
<Card title="Sequential Merge Strategy">
You need more granular control over batch sizes

Memory might be constrained

You want more detailed operation logging

You need to handle mixed operation types more efficiently
<br />
<br />
**Best for**: Mixed operation types
<br />
**Optimizes for**: Memory efficiency
<br />
**Trade-off**: Potentially more network requests
</Card>
<Card title="Pre-sorted Batch Strategy">
You have a large number of similar operations

Memory isn't a constraint

You want to minimize the number of network requests
<br />
<br />
**Best for**: Large volumes of similar operations
<br />
**Optimizes for**: Minimal network requests
<br />
**Trade-off**: Higher memory usage
</Card>
</CardGroup>
@@ -0,0 +1,8 @@
---
title: "Overview"
description: "A collection of tutorials showcasing various use cases and strategies."
---

<CardGroup>
  <Card title="Code example tutorials" icon="server" href="/tutorials/code-changes/code-changes-overview" horizontal />
</CardGroup>
> **Review comment:** I guess it would have been cool if we chose one of these two implementations as the default, but maybe it's fine to have a baseline Supabase connector example and give users the option to improve it with these two snippets.
>
> **Reply:** We could, yes. My thinking behind it was that the examples provide an easy way for users to get started and see example implementations - which are inherently basic and not performance-optimized. Should they require better performance or some other improvement, they can look at the tutorials to see how that could be done. It also gives users an easy and concise way to see what needs to change should they want to make that specific improvement.