Merged
Changes from 3 commits
Binary file added images/coolify/Add_resource.mp4
Binary file added images/coolify/Update_config.mp4
Binary file added images/coolify/expand_content.png
Binary file added images/coolify/powersync_config.png
Binary file added images/coolify/powersync_deploy.png
Binary file added images/coolify/powersync_env.png
Binary file added images/coolify/powersync_resource.png
Binary file added images/coolify/powersync_storage.png
Binary file added images/coolify/powersync_sync_rules.png
463 changes: 463 additions & 0 deletions integration-guides/coolify.mdx

Large diffs are not rendered by default.

21 changes: 20 additions & 1 deletion mint.json
@@ -54,6 +54,10 @@
"name": "Self Hosting",
"url": "self-hosting"
},
{
"name": "Tutorials",
"url": "tutorials"
},
{
"name": "Resources",
"url": "resources"
@@ -249,7 +253,8 @@
"integration-guides/flutterflow-+-powersync/github-workflow"
]
},
"integration-guides/railway-+-powersync"
"integration-guides/railway-+-powersync",
"integration-guides/coolify"
]
},
{
@@ -367,6 +372,20 @@
}
]
},
{
"group": "Tutorials",
"pages": [
"tutorials/tutorial-overview",
{
"group": "Code Examples",
"pages": [
"tutorials/code-changes/code-changes-overview",
"tutorials/code-changes/aws-s3-storage-adapter",
"tutorials/code-changes/supabase-connector-performance"
]
}
]
},
{
"group": "Resources",
"pages": [
698 changes: 698 additions & 0 deletions tutorials/code-changes/aws-s3-storage-adapter.mdx

Large diffs are not rendered by default.

9 changes: 9 additions & 0 deletions tutorials/code-changes/code-changes-overview.mdx
@@ -0,0 +1,9 @@
---
title: "Overview"
description: "A collection of tutorials exploring approaches, optimizations, and strategies across various use cases."
---

<CardGroup>
<Card title="Use AWS S3 for attachment storage" icon="server" href="/tutorials/code-changes/aws-s3-storage-adapter" horizontal/>
<Card title="Improve Supabase Connector Performance" icon="server" href="/tutorials/code-changes/supabase-connector-performance" horizontal/>
</CardGroup>
271 changes: 271 additions & 0 deletions tutorials/code-changes/supabase-connector-performance.mdx
@@ -0,0 +1,271 @@
---
[Contributor comment]

I guess it would have been cool if we chose one of these two implementations as the default, but maybe it's fine to have a baseline Supabase connector example and give the option to improve it with these two snippets.

[Author reply]

We could, yes. My thinking was that the examples provide an easy way for users to get started and see example implementations, which are inherently basic and not performance-optimized. Should they require better performance or some other improvement, they can look at the tutorials to see how that could be done.

It also provides an easy and concise way for users to see what needs to change should they want to make that specific improvement.

title: "Improve Supabase Connector Performance"
description: "In this tutorial we will show you how to improve the performance of the Supabase Connector for the [React Native To-Do List example app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)."
---

[Collaborator comment]

This needs an intro to set the stage. All I see is various "strategy" headings, but I don't quickly get an idea of what the problem is with the existing implementation, how we're going to solve it, and how all the headings fit in.

Relatedly, it's not clear whether I should open any of those accordions, and after opening one I just see a big blob of code with no explanation, intro, or guidance.

[Author reply]

Agreed, context is definitely a good thing! I will add both context and more guidance on the code.
<AccordionGroup>
<Accordion title="Sequential Merge Strategy">
<Note>
Shoutout to Christoffer Årstrand for the original implementation of this optimization.
[Collaborator comment]

It would be safer to use people's usernames/handles, since some people might not be comfortable having their name out there.

[Author reply]

Good catch, I didn't think about that. I will update it to use the Discord handle.
</Note>
```typescript {7-8, 11, 13-15, 17, 19-20, 24-36, 39, 43-56, 59-60, 75}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
  const transaction = await database.getNextCrudTransaction();
  if (!transaction) {
    return;
  }

  const MERGE_BATCH_LIMIT = 100;
  let batchedOps: CrudEntry[] = [];

  try {
    console.log(`Processing transaction with ${transaction.crud.length} operations`);

    for (let i = 0; i < transaction.crud.length; i++) {
      const cruds = transaction.crud;
      const op = cruds[i];
      const table = this.client.from(op.table);
      batchedOps.push(op);

      let result: any;
      let batched = 1;

      switch (op.op) {
        case UpdateType.PUT:
          const records = [{ ...cruds[i].opData, id: cruds[i].id }];
          while (
            i + 1 < cruds.length &&
            cruds[i + 1].op === op.op &&
            cruds[i + 1].table === op.table &&
            batched < MERGE_BATCH_LIMIT
          ) {
            i++;
            records.push({ ...cruds[i].opData, id: cruds[i].id });
            batchedOps.push(cruds[i]);
            batched++;
          }
          result = await table.upsert(records);
          break;
        case UpdateType.PATCH:
          batchedOps = [op];
          result = await table.update(op.opData).eq('id', op.id);
          break;
        case UpdateType.DELETE:
          batchedOps = [op];
          const ids = [op.id];
          while (
            i + 1 < cruds.length &&
            cruds[i + 1].op === op.op &&
            cruds[i + 1].table === op.table &&
            batched < MERGE_BATCH_LIMIT
          ) {
            i++;
            ids.push(cruds[i].id);
            batchedOps.push(cruds[i]);
            batched++;
          }
          result = await table.delete().in('id', ids);
          break;
      }
      if (batched > 1) {
        console.log(`Merged ${batched} ${op.op} operations for table ${op.table}`);
      }
    }
    await transaction.complete();
  } catch (ex: any) {
    console.debug(ex);
    if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
      /**
       * Instead of blocking the queue with these errors,
       * discard the (rest of the) transaction.
       *
       * Note that these errors typically indicate a bug in the application.
       * If protecting against data loss is important, save the failing records
       * elsewhere instead of discarding, and/or notify the user.
       */
      console.error('Data upload error - discarding:', ex);
      await transaction.complete();
    } else {
      // Error may be retryable - e.g. network error or temporary server error.
      // Throwing an error here causes this call to be retried after a delay.
      throw ex;
    }
  }
}
```
</Accordion>
<Accordion title="Pre-sorted Batch Strategy">
```typescript {8-11, 17-20, 23, 26-29, 32-53, 56, 72}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
  const transaction = await database.getNextCrudTransaction();
  if (!transaction) {
    return;
  }

  try {
    // Group operations by type and table
    const putOps: { [table: string]: any[] } = {};
    const deleteOps: { [table: string]: string[] } = {};
    let patchOps: CrudEntry[] = [];

    // Organize operations
    for (const op of transaction.crud) {
      switch (op.op) {
        case UpdateType.PUT:
          if (!putOps[op.table]) {
            putOps[op.table] = [];
          }
          putOps[op.table].push({ ...op.opData, id: op.id });
          break;
        case UpdateType.PATCH:
          patchOps.push(op);
          break;
        case UpdateType.DELETE:
          if (!deleteOps[op.table]) {
            deleteOps[op.table] = [];
          }
          deleteOps[op.table].push(op.id);
          break;
      }
    }

    // Execute bulk operations
    for (const table of Object.keys(putOps)) {
      const result = await this.client.from(table).upsert(putOps[table]);
      if (result.error) {
        console.error(result.error);
        throw new Error(`Could not bulk PUT data to Supabase table ${table}: ${JSON.stringify(result)}`);
      }
    }

    for (const table of Object.keys(deleteOps)) {
      const result = await this.client.from(table).delete().in('id', deleteOps[table]);
      if (result.error) {
        console.error(result.error);
        throw new Error(`Could not bulk DELETE data from Supabase table ${table}: ${JSON.stringify(result)}`);
      }
    }

    // Execute PATCH operations individually since they can't be easily batched
    for (const op of patchOps) {
      const result = await this.client.from(op.table).update(op.opData).eq('id', op.id);
      if (result.error) {
        console.error(result.error);
        throw new Error(`Could not PATCH data in Supabase: ${JSON.stringify(result)}`);
      }
    }

    await transaction.complete();
  } catch (ex: any) {
    console.debug(ex);
    if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
      /**
       * Instead of blocking the queue with these errors,
       * discard the (rest of the) transaction.
       *
       * Note that these errors typically indicate a bug in the application.
       * If protecting against data loss is important, save the failing records
       * elsewhere instead of discarding, and/or notify the user.
       */
      console.error('Data upload error - discarding transaction:', ex);
      await transaction.complete();
    } else {
      // Error may be retryable - e.g. network error or temporary server error.
      // Throwing an error here causes this call to be retried after a delay.
      throw ex;
    }
  }
}
```
</Accordion>
</AccordionGroup>
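
Both snippets reference a `FATAL_RESPONSE_CODES` constant that is defined elsewhere in the connector. As a sketch only (the exact list in the demo app may differ), it is typically an array of regular expressions matching Postgres error codes that should never be retried, so the connector can tell "discard" errors apart from retryable ones:

```typescript
// Hypothetical sketch of the FATAL_RESPONSE_CODES constant referenced above;
// the actual list in the demo connector may differ.
const FATAL_RESPONSE_CODES: RegExp[] = [
  // Class 22 - Data Exception (e.g. invalid input syntax)
  new RegExp('^22...$'),
  // Class 23 - Integrity Constraint Violation (e.g. NOT NULL, foreign key)
  new RegExp('^23...$'),
  // INSUFFICIENT PRIVILEGE - often a row-level security policy violation
  new RegExp('^42501$')
];

// Mirrors the check performed in both catch blocks above.
function isFatal(code: string): boolean {
  return FATAL_RESPONSE_CODES.some((regex) => regex.test(code));
}

console.log(isFatal('23502')); // NOT NULL violation: fatal, transaction is discarded
console.log(isFatal('08006')); // connection failure: retryable, error is rethrown
```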

# Differences

<AccordionGroup>
<Accordion title="Operation grouping strategy">
### Sequential merge strategy
```typescript
const MERGE_BATCH_LIMIT = 100;
let batchedOps: CrudEntry[] = [];
```
- Processes operations sequentially
- Merges consecutive operations of the same type and table, up to a batch limit
- More dynamic, streaming-style approach

### Pre-sorted batch strategy
```typescript
const putOps: { [table: string]: any[] } = {};
const deleteOps: { [table: string]: string[] } = {};
let patchOps: CrudEntry[] = [];
```
- Pre-sorts all operations by type and table
- Processes each type in bulk after grouping
</Accordion>
<Accordion title="Batching methodology">
### Sequential merge strategy
- Uses a sliding window approach with `MERGE_BATCH_LIMIT`
- Merges consecutive operations up to the limit
- More granular control over batch sizes
- Better for mixed operation types

### Pre-sorted batch strategy
- Groups ALL operations of the same type together
- Executes one bulk operation per type per table
- Better for large numbers of similar operations
</Accordion>
</AccordionGroup>
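
To make the contrast concrete, here is a minimal, self-contained sketch of the two batching approaches. The `Op` type and both function names are simplified stand-ins (not the actual `CrudEntry` type or connector code) so the grouping logic can run in isolation:

```typescript
// Simplified stand-in for CrudEntry, for illustration only.
type Op = { op: 'PUT' | 'PATCH' | 'DELETE'; table: string; id: string };

// Sequential merge: batch consecutive runs of the same op/table, up to a limit.
function sequentialBatches(ops: Op[], limit = 100): Op[][] {
  const batches: Op[][] = [];
  for (let i = 0; i < ops.length; i++) {
    const batch = [ops[i]];
    while (
      i + 1 < ops.length &&
      ops[i + 1].op === ops[i].op &&
      ops[i + 1].table === ops[i].table &&
      batch.length < limit
    ) {
      batch.push(ops[++i]);
    }
    batches.push(batch);
  }
  return batches;
}

// Pre-sorted: group all ops by type and table, regardless of order.
function presortedBatches(ops: Op[]): Map<string, Op[]> {
  const groups = new Map<string, Op[]>();
  for (const op of ops) {
    const key = `${op.op}:${op.table}`;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key)!.push(op);
  }
  return groups;
}

const ops: Op[] = [
  { op: 'PUT', table: 'todos', id: '1' },
  { op: 'PUT', table: 'todos', id: '2' },
  { op: 'DELETE', table: 'todos', id: '3' },
  { op: 'PUT', table: 'todos', id: '4' }
];

// The interleaved DELETE splits the PUT run for the sequential strategy,
// while pre-sorting collapses all PUTs into a single group.
console.log(sequentialBatches(ops).length); // 3 batches
console.log(presortedBatches(ops).size); // 2 groups
```

This is why the pre-sorted strategy issues fewer network requests on interleaved workloads, at the cost of holding every grouped record in memory at once.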


## Key similarities and differences
<CardGroup cols={2}>
<Card title="Key Similarities">
Handle CRUD operations (PUT, PATCH, DELETE) to sync local changes to Supabase
<br />
Manage the transaction with `getNextCrudTransaction()`
<br />
Implement similar error handling for fatal and retryable errors
<br />
Complete the transaction after successful processing
</Card>
<Card title="Key Differences">
Operation grouping strategy
<br />
Batching methodology
</Card>
</CardGroup>

# Use cases

<CardGroup cols={2}>
<Card title="Sequential Merge Strategy">
You need more granular control over batch sizes.

Memory might be constrained.

You want more detailed operation logging.

You need to handle mixed operation types more efficiently.
<br />
<br />
**Best for**: Mixed operation types
<br />
**Optimizes for**: Memory efficiency
<br />
**Trade-off**: Potentially more network requests
</Card>
<Card title="Pre-sorted Batch Strategy">
You have a large number of similar operations.

Memory isn't a constraint.

You want to minimize the number of network requests.
<br />
<br />
**Best for**: Large volumes of similar operations
<br />
**Optimizes for**: Minimal network requests
<br />
**Trade-off**: Higher memory usage
</Card>
</CardGroup>
8 changes: 8 additions & 0 deletions tutorials/tutorial-overview.mdx
@@ -0,0 +1,8 @@
---
title: "Overview"
description: "A collection of tutorials showcasing various use cases and strategies."
---

<CardGroup>
<Card title="Code example tutorials" icon="server" href="/tutorials/code-changes/code-changes-overview" horizontal/>
</CardGroup>