diff --git a/images/coolify/Add_resource.mp4 b/images/coolify/Add_resource.mp4
new file mode 100644
index 00000000..83820c83
Binary files /dev/null and b/images/coolify/Add_resource.mp4 differ
diff --git a/images/coolify/Update_config.mp4 b/images/coolify/Update_config.mp4
new file mode 100644
index 00000000..1ceb72bc
Binary files /dev/null and b/images/coolify/Update_config.mp4 differ
diff --git a/images/coolify/expand_content.png b/images/coolify/expand_content.png
new file mode 100644
index 00000000..4e9dc775
Binary files /dev/null and b/images/coolify/expand_content.png differ
diff --git a/images/coolify/powersync_config.png b/images/coolify/powersync_config.png
new file mode 100644
index 00000000..e0eb5f50
Binary files /dev/null and b/images/coolify/powersync_config.png differ
diff --git a/images/coolify/powersync_deploy.png b/images/coolify/powersync_deploy.png
new file mode 100644
index 00000000..8e2ac8d4
Binary files /dev/null and b/images/coolify/powersync_deploy.png differ
diff --git a/images/coolify/powersync_env.png b/images/coolify/powersync_env.png
new file mode 100644
index 00000000..54d34174
Binary files /dev/null and b/images/coolify/powersync_env.png differ
diff --git a/images/coolify/powersync_resource.png b/images/coolify/powersync_resource.png
new file mode 100644
index 00000000..6ecd097b
Binary files /dev/null and b/images/coolify/powersync_resource.png differ
diff --git a/images/coolify/powersync_storage.png b/images/coolify/powersync_storage.png
new file mode 100644
index 00000000..64333ef7
Binary files /dev/null and b/images/coolify/powersync_storage.png differ
diff --git a/images/coolify/powersync_sync_rules.png b/images/coolify/powersync_sync_rules.png
new file mode 100644
index 00000000..b505dae6
Binary files /dev/null and b/images/coolify/powersync_sync_rules.png differ
diff --git a/integration-guides/coolify.mdx b/integration-guides/coolify.mdx
new file mode 100644
index 00000000..4eac5374
--- /dev/null
+++ b/integration-guides/coolify.mdx
@@ -0,0 +1,463 @@
+---
+title: "Deploy PowerSync Service on Coolify"
+sidebarTitle: "Coolify + PowerSync"
+description: "Integration guide for deploying the [PowerSync Service](http://localhost:3333/architecture/powersync-service) on Coolify"
+---
+
+[Coolify](https://coolify.io/) is an open-source, self-hosted platform that simplifies the deployment and management of applications, databases, and services on your own infrastructure.
+Think of it as a self-hosted alternative to platforms like Heroku or Netlify.
+
+
+ Before following this guide, you should:
+ - Read through the [PowerSync Service Setup](/self-hosting/installation/powersync-service-setup)
+ guide to understand the requirements and configuration options. This guide assumes you have already done so and only covers the Coolify-specific setup.
+ - Have Coolify installed and running.
+
+
+# Background
+
+For the PowerSync Service to function correctly, you will need:
+* A database,
+* An authentication service, and
+* A data upload service.
+
+The easiest way to get started is to use **Supabase** as it provides all three. However, you can also use a different database, and custom authentication and data upload services.
+
+# Steps
+
+
+ Add the [`Compose` file](/integration-guides/coolify#base-compose-file) as a Docker Compose Empty resource to your project.
+
+
+
+ Update the environment variables and config files.
+
+ Instructions for each can be found in the [Configuration options](#configuration-options) section.
+
+
+ Click on the `Deploy` button to deploy the PowerSync Service.
+
+
+
+ The PowerSync Service will now be available at
+ * `http://localhost:8080` if the default configuration was used, or
+ * `http://{your_coolify_domain}:{PS_PORT}` if a custom domain or port was specified.
+
+ To check the health of the PowerSync Service, see [Healthchecks](/self-hosting/lifecycle-maintenance/healthchecks).
+
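+ As a quick check from your own machine, you can call the service's health probe directly. The snippet below is a minimal sketch that assumes the default HTTP probe endpoints described on the Healthchecks page and the default port `8080`:
+
+ ```typescript
+ // Minimal liveness check for a self-hosted PowerSync Service.
+ // Assumes the default probe endpoints documented under Healthchecks
+ // and that the service is reachable at http://localhost:8080.
+ const baseUrl = 'http://localhost:8080';
+
+ const response = await fetch(`${baseUrl}/probes/liveness`);
+ console.log(response.ok ? 'PowerSync Service is healthy' : `Probe failed with status ${response.status}`);
+ ```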
+
+
+# Configuration options
+
+The following configuration options should be updated:
+- Environment variables
+- `sync_rules.yaml` file (according to your data requirements)
+- `powersync.yaml` file
+
+
+
+
+
+
+
+
+  | Environment Variable | Value |
+  | --- | --- |
+  | `PS_DATABASE_TYPE` | `postgresql` |
+  | `PS_DATABASE_URI` | **Connection string obtained from Supabase.** See step 5 in [Connect PowerSync to Your Supabase](/integration-guides/supabase-+-powersync#connect-powersync-to-your-supabase) |
+  | `PS_PORT` | **Keep the default value (8080)** |
+  | `PS_MONGO_URI` | `mongodb://mongo:27017` |
+  | `PS_JWKS_URL` | **Keep the default value** |
+
+
+
+
+
+ ```yaml {5}
+ ...
+ # Client (application end user) authentication settings
+ client_auth:
+ # Enable this if using Supabase Auth
+ supabase: true
+ ...
+ ```
+
+
+
+
+
+
+
+
+
+  | Environment Variable | Value |
+  | --- | --- |
+  | `PS_DATABASE_TYPE` | `postgresql`, `mongodb`, or `mysql` |
+  | `PS_DATABASE_URI` | The connection URI (matching your database type) of the database where your data is stored |
+  | `PS_PORT` | **Default value (8080).** You can change this if you want the PowerSync Service to be available on a different port |
+  | `PS_MONGO_URI` | `mongodb://mongo:27017` |
+  | `PS_JWKS_URL` | The URL of the JWKS endpoint of your authentication service |
+
+
+
+
+
+ ```yaml {5, 11-15,18, 23}
+ ...
+ # Client (application end user) authentication settings
+ client_auth:
+ # Enable this if using Supabase Auth
+ supabase: false
+
+ # JWKS URIs can be specified here
+ jwks_uri: !env PS_JWKS_URL
+
+ # Optional static collection of public keys for JWT verification
+ jwks:
+ keys:
+ - kty: 'oct'
+ k: 'use_a_better_token_in_production'
+ alg: 'HS256'
+
+ # JWKS audience
+ audience: ["powersync-dev", "powersync", "http://localhost:8080"]
+
+ api:
+ tokens:
+ # These tokens are used for local admin API route authentication
+ - use_a_better_token_in_production
+ ```
+
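+ To verify this configuration before wiring up a real authentication service, you can mint a short-lived development token against the static HS256 key shown above. This is only a sketch: it assumes the [`jose`](https://github.com/panva/jose) library and that the `k` value is treated as standard base64url-encoded JWK key material — replace it with your real auth flow in production.
+
+ ```typescript
+ import { SignJWT, base64url } from 'jose';
+
+ // Assumes the static HS256 JWK above; the `k` value is interpreted as base64url-encoded key material.
+ const secret = base64url.decode('use_a_better_token_in_production');
+
+ const token = await new SignJWT({})
+   .setProtectedHeader({ alg: 'HS256' })
+   .setSubject('test-user-id') // becomes the PowerSync user ID
+   .setAudience(['powersync-dev', 'powersync'])
+   .setIssuedAt()
+   .setExpirationTime('1h')
+   .sign(secret);
+
+ console.log(token); // Use this as the client's auth token when testing against the service
+ ```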
+
+
+
+
+# Base `Compose` file
+
+The following Compose file serves as a universal starting point for deploying the PowerSync Service on Coolify.
+
+ ```yaml
+ services:
+ mongo:
+ image: mongo:7.0
+ command: --replSet rs0 --bind_ip_all --quiet
+ restart: unless-stopped
+ ports:
+ - 27017:27017
+ volumes:
+ - mongo_storage:/data/db
+
+ # Initializes the MongoDB replica set. This service will not usually be actively running
+ mongo-rs-init:
+ image: mongo:7.0
+ depends_on:
+ - mongo
+ restart: on-failure
+ entrypoint:
+ - bash
+ - -c
+ - 'mongosh --host mongo:27017 --eval ''try{rs.status().ok && quit(0)} catch {} rs.initiate({_id: "rs0", version: 1, members: [{ _id: 0, host : "mongo:27017" }]})'''
+
+ # PowerSync service
+ powersync:
+ image: journeyapps/powersync-service:latest
+ container_name: powersync
+ depends_on:
+ - mongo-rs-init
+ command: [ "start", "-r", "unified"]
+ restart: unless-stopped
+ environment:
+ - NODE_OPTIONS="--max-old-space-size=1000"
+ - POWERSYNC_CONFIG_PATH=/home/config/powersync.yaml
+ - PS_DATABASE_TYPE=${PS_DATABASE_TYPE:-postgresql}
+ - PS_DATABASE_URI=${PS_DATABASE_URI:-postgresql://postgres:postgres@localhost:5432/postgres}
+ - PS_PORT=${PS_PORT:-8080}
+ - PS_MONGO_URI=${PS_MONGO_URI:-mongodb://mongo:27017}
+ - PS_SUPABASE_AUTH=${USE_SUPABASE_AUTH:-false}
+ - PS_JWKS_URL=${PS_JWKS_URL:-http://localhost:6060/api/auth/keys}
+ ports:
+ - ${PS_PORT:-8080}:${PS_PORT:-8080}
+ volumes:
+ - ./volumes/config:/home/config
+ - type: bind
+ source: ./volumes/config/sync_rules.yaml
+ target: /home/config/sync_rules.yaml
+ content: |
+ bucket_definitions:
+ user_lists:
+ # Separate bucket per todo list
+ parameters: select id as list_id from lists where owner_id = request.user_id()
+ data:
+ - select * from lists where id = bucket.list_id
+ - select * from todos where list_id = bucket.list_id
+ - type: bind
+ source: ./volumes/config/powersync.yaml
+ target: /home/config/powersync.yaml
+ content: |
+ # yaml-language-server: $schema=../schema/schema.json
+ # Note that this example uses YAML custom tags for environment variable substitution.
+ # Using `!env [variable name]` will substitute the value of the environment variable named
+ # [variable name].
+
+ # migrations:
+ # # Migrations run automatically by default.
+ # # Setting this to true will skip automatic migrations.
+ # # Migrations can be triggered externally by altering the container `command`.
+ # disable_auto_migration: true
+
+ # Settings for telemetry reporting
+ # See https://docs.powersync.com/self-hosting/telemetry
+ telemetry:
+ # Opt out of reporting anonymized usage metrics to PowerSync telemetry service
+ disable_telemetry_sharing: false
+
+ # Settings for source database replication
+ replication:
+ # Specify database connection details
+ # Note only 1 connection is currently supported
+ # Multiple connection support is on the roadmap
+ connections:
+ - type: !env PS_DATABASE_TYPE
+ # If your source database runs as another service in this Compose file,
+ # the PowerSync container can reach it using that service's name as the hostname.
+
+ # The connection URI or individual parameters can be specified.
+ # Individual params take precedence over URI params
+ uri: !env PS_DATABASE_URI
+
+ # Or use individual params
+
+ # hostname: pg-db # From the Docker Compose service name
+ # port: 5432
+ # database: postgres
+ # username: postgres
+ # password: mypassword
+
+ # SSL settings
+ sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
+ # 'disable' is OK for local/private networks, not for public networks
+
+ # Required for verify-ca, optional for verify-full
+ # This should be the certificate(s) content in PEM format
+ # cacert: !env PS_PG_CA_CERT
+
+ # Include a certificate here for HTTPs
+ # This should be the certificate content in PEM format
+ # client_certificate: !env PS_PG_CLIENT_CERT
+ # This should be the key content in PEM format
+ # client_private_key: !env PS_PG_CLIENT_PRIVATE_KEY
+
+ # This is valid when using the `mongo` service defined in this Compose file
+
+ # Connection settings for sync bucket storage
+ storage:
+ type: mongodb
+ uri: !env PS_MONGO_URI
+ # Use these if authentication is required. The user should have `readWrite` and `dbAdmin` roles
+ # username: my-mongo-user
+ # password: my-password
+
+ # The port which the PowerSync API server will listen on
+ port: !env PS_PORT
+
+ # Specify sync rules
+ sync_rules:
+ path: /home/config/sync_rules.yaml
+
+ # Client (application end user) authentication settings
+ client_auth:
+ # Enable this if using Supabase Auth
+ supabase: true
+
+ # JWKS URIs can be specified here
+ jwks_uri: !env PS_JWKS_URL
+
+ # Optional static collection of public keys for JWT verification
+ # jwks:
+ # keys:
+ # - kty: 'RSA'
+ # n: !env PS_JWK_N
+ # e: !env PS_JWK_E
+ # alg: 'RS256'
+ # kid: !env PS_JWK_KID
+
+ # JWKS audience
+ audience: ["powersync-dev", "powersync"]
+
+ api:
+ tokens:
+ # These tokens are used for local admin API route authentication
+ - use_a_better_token_in_production
+ ```
+
+
+{/* # Steps
+
+
+
+
+ Add the PowerSync Service resource to your project by either scrolling through the `Services` section or by searching for `powersync` in the search bar.
+
+
+
+
+ The default one-click deployable PowerSync Service uses
+ * MongoDB for internal storage,
+ * PostgreSQL for replication, and
+ * [Sync Rules](/usage/sync-rules) as defined for the To-Do List demo application found in [Demo Apps / Example Projects](/resources/demo-apps-example-projects).
+
+ If you are running the demo To-Do List application, you can jump to Step 4 and simply deploy the PowerSync Service.
+
+
+
+
+ Navigate to the `Environment Variables` tab and update the environment variables as per your requirements. For more information on what environment variables are available, see
+ [Environment Variables](/tutorials/self-host/coolify#environment-variables).
+
+
+
+
+
+
+ Navigate to the `Storages` tab and update the `sync_rules.yaml` and `powersync.yaml` files as needed.
+ For more information see [Sync Rules](/usage/sync-rules) and
+ the skeleton config file in [PowerSync Service Setup](/self-hosting/installation/powersync-service-setup).
+
+
+
+
+ You can expand the content by dragging the bottom right corner of the editor.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ There are two parameters whose values should be changed manually if necessary.
+ - `disable_telemetry_sharing` in telemetry, and
+ - `supabase` in client_auth
+
+
+
+
+
+
+
+ Click on the `Deploy` button to deploy the PowerSync Service.
+
+
+
+ The PowerSync Service will now be available at
+ * `http://localhost:8080` if default config was used, or
+ * `http://{your_coolify_domain}:{PS_PORT}` if a custom domain or port was specified.
+
+
+ */}
+
+{/* ## What to do next */}
+{/*
+
+ Update your backend/client `.env` file with the PowerSync URL from [Step 4](#step-4-deploy-the-powersync-service) above.
+ For this example we assume we have an environment variable named `POWERSYNC_URL`.
+ ```bash
+ POWERSYNC_URL=http://localhost:8080
+ ```
+
+ */}
+
+{/* ## Environment Variables
+
+
+
+
+ Environment Variable |
+ Description |
+ Example |
+
+
+
+
+ POWERSYNC_CONFIG_PATH |
+ This is the path (inside the container) to the YAML config file |
+ /home/config/powersync.yaml |
+
+
+ PS_DATABASE_TYPE |
+ Database replication type |
+ postgresql |
+
+
+ PS_BACKEND_DATABASE_URI |
+ Database connection URI |
+ postgresql://postgres:postgres@localhost:5432/postgres |
+
+
+ PS_PORT |
+ The port the PowerSync API is accessible on |
+ 8080 |
+
+
+ PS_MONGO_URI |
+ The MongoDB URI used internally by the PowerSync Service |
+ mongodb://mongo:27017 |
+
+
+ PS_JWKS_URL |
+ Auth URL |
+ http://localhost:6060/api/auth/keys |
+
+
+
*/}
\ No newline at end of file
diff --git a/mint.json b/mint.json
index 440b6600..1b8ccc1b 100644
--- a/mint.json
+++ b/mint.json
@@ -54,6 +54,10 @@
"name": "Self Hosting",
"url": "self-hosting"
},
+ {
+ "name": "Tutorials",
+ "url": "tutorials"
+ },
{
"name": "Resources",
"url": "resources"
@@ -249,7 +253,8 @@
"integration-guides/flutterflow-+-powersync/github-workflow"
]
},
- "integration-guides/railway-+-powersync"
+ "integration-guides/railway-+-powersync",
+ "integration-guides/coolify"
]
},
{
@@ -367,6 +372,34 @@
}
]
},
+ {
+ "group": "Client",
+ "pages": [
+ "tutorials/overview",
+ {
+ "group": "Attachment Storage",
+ "pages": [
+ "tutorials/client/attachment-storage/overview",
+ "tutorials/client/attachment-storage/aws-s3-storage-adapter"
+ ]
+ },
+ {
+ "group": "Performance",
+ "pages": [
+ "tutorials/client/performance/overview",
+ "tutorials/client/performance/supabase-connector-performance"
+ ]
+ }
+ ]
+ },
+ {
+ "group": "Backend",
+ "pages": ["tutorials/backend/overview"]
+ },
+ {
+ "group": "Self Host",
+ "pages": ["tutorials/self-host/overview"]
+ },
{
"group": "Resources",
"pages": [
diff --git a/tutorials/backend/overview.mdx b/tutorials/backend/overview.mdx
new file mode 100644
index 00000000..6d6aaaa5
--- /dev/null
+++ b/tutorials/backend/overview.mdx
@@ -0,0 +1,3 @@
+---
+title: "Coming Soon..."
+---
\ No newline at end of file
diff --git a/tutorials/client/attachment-storage/aws-s3-storage-adapter.mdx b/tutorials/client/attachment-storage/aws-s3-storage-adapter.mdx
new file mode 100644
index 00000000..5531186e
--- /dev/null
+++ b/tutorials/client/attachment-storage/aws-s3-storage-adapter.mdx
@@ -0,0 +1,694 @@
+---
+title: "Use AWS S3 for attachment storage"
+description: "In this tutorial, we will show you how to replace Supabase Storage with AWS S3 for handling attachments in the [React Native To-Do List example app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)."
+sidebarTitle: "AWS S3"
+---
+
+
+ The following prerequisites are required to complete this tutorial:
+ - Clone the [To-Do List example app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist) repo
+ - Follow the instructions in the [README](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/README.md) and ensure that the app runs locally
+ - A running PowerSync Service (can be self-hosted)
+
+
+# Steps
+
+
+
+
+
+ This tutorial assumes that you have an AWS account. If you do not have an AWS account, you can create one [here](https://aws.amazon.com/).
+
+ To enable attachment storage using AWS S3, set up an S3 bucket by following these steps:
+
+
+ 1. Go to the [S3 Console](https://s3.console.aws.amazon.com/s3) and click `Create bucket`.
+ 2. Enter a unique bucket name and select your preferred region.
+ 3. Under `Object Ownership`, set ACLs disabled and ensure the bucket is private.
+ 4. Enable Bucket Versioning if you need to track changes to files (optional).
+
+
+ Go to the `Permissions` tab and set up the following:
+ 1. A bucket policy for access control
+ - Click `Bucket policy` and enter a policy that allows the necessary actions (e.g. `s3:PutObject`, `s3:GetObject`, `s3:DeleteObject`) for the specific users or roles. A scripted sketch is shown after these steps.
+ 2. **(Optional)** Configure CORS (Cross-Origin Resource Sharing) if your app requires it.
+
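+ If you prefer to script the bucket policy, the sketch below applies a minimal policy with the `aws-sdk` v2 client that this tutorial already uses. The bucket name, region, and IAM user ARN are placeholders — substitute your own values:
+
+ ```typescript
+ import S3 from 'aws-sdk/clients/s3';
+
+ // Placeholder values — replace with your bucket, region, and the ARN of the IAM user created in the next step.
+ const bucketName = 'your-attachment-bucket';
+ const iamUserArn = 'arn:aws:iam::123456789012:user/your-app-user';
+
+ const policy = {
+   Version: '2012-10-17',
+   Statement: [
+     {
+       Effect: 'Allow',
+       Principal: { AWS: iamUserArn },
+       Action: ['s3:PutObject', 's3:GetObject', 's3:DeleteObject'],
+       Resource: `arn:aws:s3:::${bucketName}/*`
+     }
+   ]
+ };
+
+ const s3 = new S3({ region: 'your-region' });
+ await s3.putBucketPolicy({ Bucket: bucketName, Policy: JSON.stringify(policy) }).promise();
+ ```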
+
+ 1. Go to the [IAM Console](https://console.aws.amazon.com/iam) and create a new user with programmatic access.
+ 2. Attach an AmazonS3FullAccess policy to this user, or create a custom policy with specific permissions for the bucket.
+ 3. Save the Access Key ID and Secret Access Key.
+
+
+
+
+
+
+
+ Add the following dependencies to the `package.json` file in the `demos/react-native-supabase-todolist` directory:
+ ```json
+ "react-navigation-stack": "^2.10.4",
+ "react-native-crypto": "^2.2.0",
+ "react-native-randombytes": "^3.6.1",
+ "aws-sdk": "^2.1352.0"
+ ```
+
+
+ Run `pnpm install` to install the new dependencies.
+
+
+
+
+
+
+ Add the following environment variables to the `.env` file and update the values with your AWS S3 configuration created in [Step 1](#step-1-aws-s3-setup):
+ ```bash .env
+ ...
+ EXPO_PUBLIC_AWS_S3_REGION=region
+ EXPO_PUBLIC_AWS_S3_BUCKET_NAME=bucket_name
+ EXPO_PUBLIC_AWS_S3_ACCESS_KEY_ID=***
+ EXPO_PUBLIC_AWS_S3_ACCESS_SECRET_ACCESS_KEY=***
+ ```
+
+
+
+
+
+ Update `process-env.d.ts` in the `demos/react-native-supabase-todolist` directory and add the following highlighted lines:
+ ```typescript process-env.d.ts {12-15}
+ export {};
+
+ declare global {
+ namespace NodeJS {
+ interface ProcessEnv {
+ [key: string]: string | undefined;
+ EXPO_PUBLIC_SUPABASE_URL: string;
+ EXPO_PUBLIC_SUPABASE_ANON_KEY: string;
+ EXPO_PUBLIC_SUPABASE_BUCKET: string;
+ EXPO_PUBLIC_POWERSYNC_URL: string;
+ EXPO_PUBLIC_EAS_PROJECT_ID: string;
+ EXPO_PUBLIC_AWS_S3_REGION: string;
+ EXPO_PUBLIC_AWS_S3_BUCKET_NAME: string;
+ EXPO_PUBLIC_AWS_S3_ACCESS_KEY_ID: string;
+ EXPO_PUBLIC_AWS_S3_ACCESS_SECRET_ACCESS_KEY: string;
+ }
+ }
+ }
+ ```
+
+
+ Update `AppConfig.ts` in the `demos/react-native-supabase-todolist/library/supabase` directory and add the following highlighted lines:
+ ```typescript AppConfig.ts {6-9}
+ export const AppConfig = {
+ supabaseUrl: process.env.EXPO_PUBLIC_SUPABASE_URL,
+ supabaseAnonKey: process.env.EXPO_PUBLIC_SUPABASE_ANON_KEY,
+ supabaseBucket: process.env.EXPO_PUBLIC_SUPABASE_BUCKET || '',
+ powersyncUrl: process.env.EXPO_PUBLIC_POWERSYNC_URL,
+ region: process.env.EXPO_PUBLIC_AWS_S3_REGION,
+ accessKeyId: process.env.EXPO_PUBLIC_AWS_S3_ACCESS_KEY_ID || '',
+ secretAccessKey: process.env.EXPO_PUBLIC_AWS_S3_ACCESS_SECRET_ACCESS_KEY || '',
+ s3bucketName: process.env.EXPO_PUBLIC_AWS_S3_BUCKET_NAME || ''
+ };
+ ```
+
+
+
+
+
+ Create an `AWSStorageAdapter.ts` file in the `demos/react-native-supabase-todolist/library/storage` directory and add the following contents:
+
+ ```typescript AWSStorageAdapter.ts
+ import * as FileSystem from 'expo-file-system';
+ import S3 from 'aws-sdk/clients/s3';
+ import { decode as decodeBase64 } from 'base64-arraybuffer';
+ import { StorageAdapter } from '@powersync/attachments';
+ import { AppConfig } from '../supabase/AppConfig';
+
+ export interface S3StorageAdapterOptions {
+ client: S3;
+ }
+
+ export class AWSStorageAdapter implements StorageAdapter {
+ constructor(private options: S3StorageAdapterOptions) {}
+
+ async uploadFile(
+ filename: string,
+ data: ArrayBuffer,
+ options?: {
+ mediaType?: string;
+ }
+ ): Promise<void> {
+ if (!AppConfig.s3bucketName) {
+ throw new Error('AWS S3 bucket not configured in AppConfig.ts');
+ }
+
+ try {
+ const body = Uint8Array.from(new Uint8Array(data));
+ const params = {
+ Bucket: AppConfig.s3bucketName,
+ Key: filename,
+ Body: body,
+ ContentType: options?.mediaType
+ };
+
+ await this.options.client.upload(params).promise();
+ console.log(`File uploaded successfully to ${AppConfig.s3bucketName}/${filename}`);
+ } catch (error) {
+ console.error('Error uploading file:', error);
+ throw error;
+ }
+ }
+
+ async downloadFile(filePath: string): Promise<Blob> {
+ const s3 = new S3({
+ region: AppConfig.region,
+ accessKeyId: AppConfig.accessKeyId,
+ secretAccessKey: AppConfig.secretAccessKey
+ });
+
+ const params = {
+ Bucket: AppConfig.s3bucketName,
+ Key: filePath
+ };
+
+ try {
+ const obj = await s3.getObject(params).promise();
+ if (obj.Body) {
+ const data = await new Response(obj.Body as ReadableStream).arrayBuffer();
+ return new Blob([data]);
+ } else {
+ throw new Error('Object body is undefined. Could not download file.');
+ }
+ } catch (error) {
+ console.error('Error downloading file:', error);
+ throw error;
+ }
+ }
+
+ async deleteFile(uri: string, options?: { filename?: string }): Promise<void> {
+ if (await this.fileExists(uri)) {
+ await FileSystem.deleteAsync(uri);
+ }
+
+ const { filename } = options ?? {};
+ if (!filename) {
+ return;
+ }
+
+ if (!AppConfig.s3bucketName) {
+ throw new Error('AWS S3 bucket not configured in AppConfig.ts');
+ }
+
+ try {
+ const params = {
+ Bucket: AppConfig.s3bucketName,
+ Key: filename
+ };
+ await this.options.client.deleteObject(params).promise();
+ console.log(`${filename} deleted successfully from ${AppConfig.s3bucketName}.`);
+ } catch (error) {
+ console.error(`Error deleting ${filename} from ${AppConfig.s3bucketName}:`, error);
+ }
+ }
+
+ async readFile(
+ fileURI: string,
+ options?: { encoding?: FileSystem.EncodingType; mediaType?: string }
+ ): Promise<ArrayBuffer> {
+ const { encoding = FileSystem.EncodingType.UTF8 } = options ?? {};
+ const { exists } = await FileSystem.getInfoAsync(fileURI);
+ if (!exists) {
+ throw new Error(`File does not exist: ${fileURI}`);
+ }
+ const fileContent = await FileSystem.readAsStringAsync(fileURI, options);
+ if (encoding === FileSystem.EncodingType.Base64) {
+ return this.base64ToArrayBuffer(fileContent);
+ }
+ return this.stringToArrayBuffer(fileContent);
+ }
+
+ async writeFile(
+ fileURI: string,
+ base64Data: string,
+ options?: {
+ encoding?: FileSystem.EncodingType;
+ }
+ ): Promise<void> {
+ const { encoding = FileSystem.EncodingType.UTF8 } = options ?? {};
+ await FileSystem.writeAsStringAsync(fileURI, base64Data, { encoding });
+ }
+
+ async fileExists(fileURI: string): Promise<boolean> {
+ const { exists } = await FileSystem.getInfoAsync(fileURI);
+ return exists;
+ }
+
+ async makeDir(uri: string): Promise<void> {
+ const { exists } = await FileSystem.getInfoAsync(uri);
+ if (!exists) {
+ await FileSystem.makeDirectoryAsync(uri, { intermediates: true });
+ }
+ }
+
+ async copyFile(sourceUri: string, targetUri: string): Promise<void> {
+ await FileSystem.copyAsync({ from: sourceUri, to: targetUri });
+ }
+
+ getUserStorageDirectory(): string {
+ return FileSystem.documentDirectory!;
+ }
+
+ async stringToArrayBuffer(str: string): Promise<ArrayBuffer> {
+ const encoder = new TextEncoder();
+ return encoder.encode(str).buffer;
+ }
+
+ /**
+ * Converts a base64 string to an ArrayBuffer
+ */
+ async base64ToArrayBuffer(base64: string): Promise<ArrayBuffer> {
+ return decodeBase64(base64);
+ }
+ }
+ ```
+
+
+
+ The `AWSStorageAdapter` class implements a storage adapter for AWS S3, allowing file operations (upload, download, delete) with an S3 bucket.
+
+
+ ```typescript
+ async uploadFile(filename: string, data: ArrayBuffer, options?: { mediaType?: string; }): Promise<void>
+ ```
+ - Converts the input ArrayBuffer to a Uint8Array for S3 compatibility
+ - Validates the bucket configuration
+ - Uploads the file with metadata (content type)
+ - Includes error handling and logging
+
+
+
+ ```typescript
+ async downloadFile(filePath: string): Promise<Blob>
+ ```
+ - Creates a new S3 client instance with configured credentials
+ - Retrieves object from S3
+ - Converts the response to a Blob for client-side usage
+ - Includes error handling for missing files/data
+
+
+
+ ```typescript
+ async deleteFile(uri: string, options?: { filename?: string }): Promise<void>
+ ```
+ Two-step deletion process:
+ 1. Deletes local file if it exists (using Expo's FileSystem)
+ 2. Deletes remote file from S3 if filename is provided
+
+ Includes validation and error handling
+
+
+
+
+
+
+ Update the `system.ts` file in the `demos/react-native-supabase-todolist/library/config` directory to use the new `AWSStorageAdapter` class (the highlighted lines are the only changes needed):
+ ```typescript system.ts {5-6, 13, 19, 27-34, 54}
+ import '@azure/core-asynciterator-polyfill';
+
+ import { PowerSyncDatabase } from '@powersync/react-native';
+ import React from 'react';
+ import S3 from 'aws-sdk/clients/s3';
+ import { type AttachmentRecord } from '@powersync/attachments';
+ import Logger from 'js-logger';
+ import { KVStorage } from '../storage/KVStorage';
+ import { AppConfig } from '../supabase/AppConfig';
+ import { SupabaseConnector } from '../supabase/SupabaseConnector';
+ import { AppSchema } from './AppSchema';
+ import { PhotoAttachmentQueue } from './PhotoAttachmentQueue';
+ import { AWSStorageAdapter } from '../storage/AWSStorageAdapter';
+
+ Logger.useDefaults();
+
+ export class System {
+ kvStorage: KVStorage;
+ storage: AWSStorageAdapter;
+ supabaseConnector: SupabaseConnector;
+ powersync: PowerSyncDatabase;
+ attachmentQueue: PhotoAttachmentQueue | undefined = undefined;
+
+ constructor() {
+ this.kvStorage = new KVStorage();
+ this.supabaseConnector = new SupabaseConnector(this);
+ const s3Client = new S3({
+ region: AppConfig.region,
+ credentials: {
+ accessKeyId: AppConfig.accessKeyId,
+ secretAccessKey: AppConfig.secretAccessKey
+ }
+ });
+ this.storage = new AWSStorageAdapter({ client: s3Client });
+ this.powersync = new PowerSyncDatabase({
+ schema: AppSchema,
+ database: {
+ dbFilename: 'sqlite.db'
+ }
+ });
+ /**
+ * The snippet below uses OP-SQLite as the default database adapter.
+ * You will have to uninstall `@journeyapps/react-native-quick-sqlite` and
+ * install both `@powersync/op-sqlite` and `@op-engineering/op-sqlite` to use this.
+ *
+ * import { OPSqliteOpenFactory } from '@powersync/op-sqlite'; // Add this import
+ *
+ * const factory = new OPSqliteOpenFactory({
+ * dbFilename: 'sqlite.db'
+ * });
+ * this.powersync = new PowerSyncDatabase({ database: factory, schema: AppSchema });
+ */
+
+ if (AppConfig.s3bucketName) {
+ this.attachmentQueue = new PhotoAttachmentQueue({
+ powersync: this.powersync,
+ storage: this.storage,
+ // Use this to handle download errors where you can use the attachment
+ // and/or the exception to decide if you want to retry the download
+ onDownloadError: async (attachment: AttachmentRecord, exception: any) => {
+ if (exception.toString() === 'StorageApiError: Object not found') {
+ return { retry: false };
+ }
+
+ return { retry: true };
+ }
+ });
+ }
+ }
+
+ async init() {
+ await this.powersync.init();
+ await this.powersync.connect(this.supabaseConnector);
+
+ if (this.attachmentQueue) {
+ await this.attachmentQueue.init();
+ }
+ }
+ }
+
+ export const system = new System();
+
+ export const SystemContext = React.createContext(system);
+ export const useSystem = () => React.useContext(SystemContext);
+ ```
+
+
+
+ You can now run the app and test the attachment upload and download functionality.
+
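+ As a quick sanity check, you can also exercise the adapter directly before relying on the attachment queue. This is a sketch — the import path is illustrative and the file name is arbitrary:
+
+ ```typescript
+ import { system } from '../library/config/system';
+
+ // Round-trip a small text file through the S3-backed storage adapter.
+ async function smokeTestStorageAdapter() {
+   const payload = new TextEncoder().encode('hello from PowerSync').buffer as ArrayBuffer;
+   await system.storage.uploadFile('smoke-test.txt', payload, { mediaType: 'text/plain' });
+
+   const blob = await system.storage.downloadFile('smoke-test.txt');
+   console.log(`Downloaded ${blob.size} bytes from S3`);
+ }
+ ```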
+
+
+## Complete files used in this tutorial
+
+ ```bash .env
+ # Replace the credentials below with your Supabase, PowerSync and Expo project details.
+ EXPO_PUBLIC_SUPABASE_URL=https://foo.supabase.co
+ EXPO_PUBLIC_SUPABASE_ANON_KEY=foo
+ EXPO_PUBLIC_ATTACHMENT_STORAGE_OPTION=supabase # Change this to s3 to use AWS S3 storage for attachments
+ EXPO_PUBLIC_SUPABASE_BUCKET= # Optional. Only required when syncing attachments and using Supabase Storage. See packages/powersync-attachments.
+ EXPO_PUBLIC_POWERSYNC_URL=https://foo.powersync.journeyapps.com
+ EXPO_PUBLIC_EAS_PROJECT_ID=foo # Optional. Only required when using EAS.
+ EXPO_PUBLIC_AWS_S3_REGION=region
+ EXPO_PUBLIC_AWS_S3_BUCKET_NAME=bucket_name
+ EXPO_PUBLIC_AWS_S3_ACCESS_KEY_ID=***
+ EXPO_PUBLIC_AWS_S3_ACCESS_SECRET_ACCESS_KEY=***
+ ```
+ ```typescript process-env.d.ts
+ export {};
+
+ declare global {
+ namespace NodeJS {
+ interface ProcessEnv {
+ [key: string]: string | undefined;
+ EXPO_PUBLIC_SUPABASE_URL: string;
+ EXPO_PUBLIC_SUPABASE_ANON_KEY: string;
+ EXPO_PUBLIC_SUPABASE_BUCKET: string;
+ EXPO_PUBLIC_POWERSYNC_URL: string;
+ EXPO_PUBLIC_EAS_PROJECT_ID: string;
+ EXPO_PUBLIC_AWS_S3_REGION: string;
+ EXPO_PUBLIC_AWS_S3_BUCKET_NAME: string;
+ EXPO_PUBLIC_AWS_S3_ACCESS_KEY_ID: string;
+ EXPO_PUBLIC_AWS_S3_ACCESS_SECRET_ACCESS_KEY: string;
+ }
+ }
+ }
+ ```
+ ```typescript AppConfig.ts
+ export const AppConfig = {
+ supabaseUrl: process.env.EXPO_PUBLIC_SUPABASE_URL,
+ supabaseAnonKey: process.env.EXPO_PUBLIC_SUPABASE_ANON_KEY,
+ supabaseBucket: process.env.EXPO_PUBLIC_SUPABASE_BUCKET || '',
+ powersyncUrl: process.env.EXPO_PUBLIC_POWERSYNC_URL,
+ region: process.env.EXPO_PUBLIC_AWS_S3_REGION,
+ accessKeyId: process.env.EXPO_PUBLIC_AWS_S3_ACCESS_KEY_ID || '',
+ secretAccessKey: process.env.EXPO_PUBLIC_AWS_S3_ACCESS_SECRET_ACCESS_KEY || '',
+ s3bucketName: process.env.EXPO_PUBLIC_AWS_S3_BUCKET_NAME || ''
+ };
+ ```
+ ```typescript AWSStorageAdapter.ts
+ import * as FileSystem from 'expo-file-system';
+ import S3 from 'aws-sdk/clients/s3';
+ import { decode as decodeBase64 } from 'base64-arraybuffer';
+ import { StorageAdapter } from '@powersync/attachments';
+ import { AppConfig } from '../supabase/AppConfig';
+
+ export interface S3StorageAdapterOptions {
+ client: S3;
+ }
+
+ export class AWSStorageAdapter implements StorageAdapter {
+ constructor(private options: S3StorageAdapterOptions) {}
+
+ async uploadFile(
+ filename: string,
+ data: ArrayBuffer,
+ options?: {
+ mediaType?: string;
+ }
+ ): Promise<void> {
+ if (!AppConfig.s3bucketName) {
+ throw new Error('AWS S3 bucket not configured in AppConfig.ts');
+ }
+
+ try {
+ const body = Uint8Array.from(new Uint8Array(data));
+ const params = {
+ Bucket: AppConfig.s3bucketName,
+ Key: filename,
+ Body: body,
+ ContentType: options?.mediaType
+ };
+
+ await this.options.client.upload(params).promise();
+ console.log(`File uploaded successfully to ${AppConfig.s3bucketName}/${filename}`);
+ } catch (error) {
+ console.error('Error uploading file:', error);
+ throw error;
+ }
+ }
+
+ async downloadFile(filePath: string): Promise<Blob> {
+ const s3 = new S3({
+ region: AppConfig.region,
+ accessKeyId: AppConfig.accessKeyId,
+ secretAccessKey: AppConfig.secretAccessKey
+ });
+
+ const params = {
+ Bucket: AppConfig.s3bucketName,
+ Key: filePath
+ };
+
+ try {
+ const obj = await s3.getObject(params).promise();
+ if (obj.Body) {
+ const data = await new Response(obj.Body as ReadableStream).arrayBuffer();
+ return new Blob([data]);
+ } else {
+ throw new Error('Object body is undefined. Could not download file.');
+ }
+ } catch (error) {
+ console.error('Error downloading file:', error);
+ throw error;
+ }
+ }
+
+ async deleteFile(uri: string, options?: { filename?: string }): Promise<void> {
+ if (await this.fileExists(uri)) {
+ await FileSystem.deleteAsync(uri);
+ }
+
+ const { filename } = options ?? {};
+ if (!filename) {
+ return;
+ }
+
+ if (!AppConfig.s3bucketName) {
+ throw new Error('AWS S3 bucket not configured in AppConfig.ts');
+ }
+
+ try {
+ const params = {
+ Bucket: AppConfig.s3bucketName,
+ Key: filename
+ };
+ await this.options.client.deleteObject(params).promise();
+ console.log(`${filename} deleted successfully from ${AppConfig.s3bucketName}.`);
+ } catch (error) {
+ console.error(`Error deleting ${filename} from ${AppConfig.s3bucketName}:`, error);
+ }
+ }
+
+ async readFile(
+ fileURI: string,
+ options?: { encoding?: FileSystem.EncodingType; mediaType?: string }
+ ): Promise<ArrayBuffer> {
+ const { encoding = FileSystem.EncodingType.UTF8 } = options ?? {};
+ const { exists } = await FileSystem.getInfoAsync(fileURI);
+ if (!exists) {
+ throw new Error(`File does not exist: ${fileURI}`);
+ }
+ const fileContent = await FileSystem.readAsStringAsync(fileURI, options);
+ if (encoding === FileSystem.EncodingType.Base64) {
+ return this.base64ToArrayBuffer(fileContent);
+ }
+ return this.stringToArrayBuffer(fileContent);
+ }
+
+ async writeFile(
+ fileURI: string,
+ base64Data: string,
+ options?: {
+ encoding?: FileSystem.EncodingType;
+ }
+ ): Promise<void> {
+ const { encoding = FileSystem.EncodingType.UTF8 } = options ?? {};
+ await FileSystem.writeAsStringAsync(fileURI, base64Data, { encoding });
+ }
+
+ async fileExists(fileURI: string): Promise<boolean> {
+ const { exists } = await FileSystem.getInfoAsync(fileURI);
+ return exists;
+ }
+
+ async makeDir(uri: string): Promise<void> {
+ const { exists } = await FileSystem.getInfoAsync(uri);
+ if (!exists) {
+ await FileSystem.makeDirectoryAsync(uri, { intermediates: true });
+ }
+ }
+
+ async copyFile(sourceUri: string, targetUri: string): Promise<void> {
+ await FileSystem.copyAsync({ from: sourceUri, to: targetUri });
+ }
+
+ getUserStorageDirectory(): string {
+ return FileSystem.documentDirectory!;
+ }
+
+ async stringToArrayBuffer(str: string): Promise<ArrayBuffer> {
+ const encoder = new TextEncoder();
+ return encoder.encode(str).buffer;
+ }
+
+ /**
+ * Converts a base64 string to an ArrayBuffer
+ */
+ async base64ToArrayBuffer(base64: string): Promise<ArrayBuffer> {
+ return decodeBase64(base64);
+ }
+ }
+ ```
+ ```typescript system.ts
+ import '@azure/core-asynciterator-polyfill';
+
+ import { PowerSyncDatabase } from '@powersync/react-native';
+ import React from 'react';
+ import S3 from 'aws-sdk/clients/s3';
+ import { type AttachmentRecord } from '@powersync/attachments';
+ import Logger from 'js-logger';
+ import { KVStorage } from '../storage/KVStorage';
+ import { AppConfig } from '../supabase/AppConfig';
+ import { SupabaseConnector } from '../supabase/SupabaseConnector';
+ import { AppSchema } from './AppSchema';
+ import { PhotoAttachmentQueue } from './PhotoAttachmentQueue';
+ import { AWSStorageAdapter } from '../storage/AWSStorageAdapter';
+
+ Logger.useDefaults();
+
+ export class System {
+ kvStorage: KVStorage;
+ storage: AWSStorageAdapter;
+ supabaseConnector: SupabaseConnector;
+ powersync: PowerSyncDatabase;
+ attachmentQueue: PhotoAttachmentQueue | undefined = undefined;
+
+ constructor() {
+ this.kvStorage = new KVStorage();
+ this.supabaseConnector = new SupabaseConnector(this);
+ const s3Client = new S3({
+ region: AppConfig.region,
+ credentials: {
+ accessKeyId: AppConfig.accessKeyId,
+ secretAccessKey: AppConfig.secretAccessKey
+ }
+ });
+ this.storage = new AWSStorageAdapter({ client: s3Client });
+ this.powersync = new PowerSyncDatabase({
+ schema: AppSchema,
+ database: {
+ dbFilename: 'sqlite.db'
+ }
+ });
+ /**
+ * The snippet below uses OP-SQLite as the default database adapter.
+ * You will have to uninstall `@journeyapps/react-native-quick-sqlite` and
+ * install both `@powersync/op-sqlite` and `@op-engineering/op-sqlite` to use this.
+ *
+ * import { OPSqliteOpenFactory } from '@powersync/op-sqlite'; // Add this import
+ *
+ * const factory = new OPSqliteOpenFactory({
+ * dbFilename: 'sqlite.db'
+ * });
+ * this.powersync = new PowerSyncDatabase({ database: factory, schema: AppSchema });
+ */
+
+ if (AppConfig.s3bucketName) {
+ this.attachmentQueue = new PhotoAttachmentQueue({
+ powersync: this.powersync,
+ storage: this.storage,
+ // Use this to handle download errors where you can use the attachment
+ // and/or the exception to decide if you want to retry the download
+ onDownloadError: async (attachment: AttachmentRecord, exception: any) => {
+ if (exception.toString() === 'StorageApiError: Object not found') {
+ return { retry: false };
+ }
+
+ return { retry: true };
+ }
+ });
+ }
+ }
+
+ async init() {
+ await this.powersync.init();
+ await this.powersync.connect(this.supabaseConnector);
+
+ if (this.attachmentQueue) {
+ await this.attachmentQueue.init();
+ }
+ }
+ }
+
+ export const system = new System();
+
+ export const SystemContext = React.createContext(system);
+ export const useSystem = () => React.useContext(SystemContext);
+ ```
+
diff --git a/tutorials/client/attachment-storage/overview.mdx b/tutorials/client/attachment-storage/overview.mdx
new file mode 100644
index 00000000..2503a7cc
--- /dev/null
+++ b/tutorials/client/attachment-storage/overview.mdx
@@ -0,0 +1,8 @@
+---
+title: "Overview"
+description: "A collection of tutorials exploring storage strategies."
+---
+
+
+
+
diff --git a/tutorials/client/performance/overview.mdx b/tutorials/client/performance/overview.mdx
new file mode 100644
index 00000000..1bf2ff94
--- /dev/null
+++ b/tutorials/client/performance/overview.mdx
@@ -0,0 +1,8 @@
+---
+title: "Overview"
+description: "A collection of tutorials exploring performance strategies."
+---
+
+
+
+
diff --git a/tutorials/client/performance/supabase-connector-performance.mdx b/tutorials/client/performance/supabase-connector-performance.mdx
new file mode 100644
index 00000000..41231797
--- /dev/null
+++ b/tutorials/client/performance/supabase-connector-performance.mdx
@@ -0,0 +1,295 @@
+---
+title: "Improve Supabase Connector"
+description: "In this tutorial we will show you how to improve the performance of the Supabase Connector for the [React Native To-Do List example app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)."
+---
+
+# Background
+
+The demos in the [powersync-js](https://github.com/powersync-ja/powersync-js/tree/main/demos) monorepo provide minimal working examples that illustrate how to use PowerSync with different frameworks.
+They are not necessarily optimized for performance, which leaves room for improvement.
+
+This tutorial demonstrates how to improve the Supabase Connector's performance by implementing two batching strategies that reduce the number of database operations.
+
+# Batching Strategies
+
+The two batching strategies that will be implemented are:
+
+1. Sequential Merge Strategy, and
+2. Pre-sorted Batch Strategy
+
+
+
+ Overview:
+ - Merge adjacent `PUT` and `DELETE` operations for the same table
+ - Limit the number of operations that are merged into a single API request to Supabase
+
+ Shoutout to @christoffer_configura for the original implementation of this optimization.
+
+ ```typescript {6-12, 15, 17-19, 21, 23-24, 28-40, 43, 47-60, 63-64, 79}
+ async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
+ const transaction = await database.getNextCrudTransaction();
+ if (!transaction) {
+ return;
+ }
+ /**
+ * Maximum number of PUT or DELETE operations that are merged into a single API request to Supabase.
+ * Larger numbers can speed up the sync process considerably, but watch out for possible payload size limitations.
+ * A value of 1 or below disables merging.
+ */
+ const MERGE_BATCH_LIMIT = 100;
+ let batchedOps: CrudEntry[] = [];
+
+ try {
+ console.log(`Processing transaction with ${transaction.crud.length} operations`);
+
+ for (let i = 0; i < transaction.crud.length; i++) {
+ const cruds = transaction.crud;
+ const op = cruds[i];
+ const table = this.client.from(op.table);
+ batchedOps.push(op);
+
+ let result: any;
+ let batched = 1;
+
+ switch (op.op) {
+ case UpdateType.PUT:
+ const records = [{ ...cruds[i].opData, id: cruds[i].id }];
+ while (
+ i + 1 < cruds.length &&
+ cruds[i + 1].op === op.op &&
+ cruds[i + 1].table === op.table &&
+ batched < MERGE_BATCH_LIMIT
+ ) {
+ i++;
+ records.push({ ...cruds[i].opData, id: cruds[i].id });
+ batchedOps.push(cruds[i]);
+ batched++;
+ }
+ result = await table.upsert(records);
+ break;
+ case UpdateType.PATCH:
+ batchedOps = [op];
+ result = await table.update(op.opData).eq('id', op.id);
+ break;
+ case UpdateType.DELETE:
+ batchedOps = [op];
+ const ids = [op.id];
+ while (
+ i + 1 < cruds.length &&
+ cruds[i + 1].op === op.op &&
+ cruds[i + 1].table === op.table &&
+ batched < MERGE_BATCH_LIMIT
+ ) {
+ i++;
+ ids.push(cruds[i].id);
+ batchedOps.push(cruds[i]);
+ batched++;
+ }
+ result = await table.delete().in('id', ids);
+ break;
+ }
+ if (batched > 1) {
+ console.log(`Merged ${batched} ${op.op} operations for table ${op.table}`);
+ }
+ }
+ await transaction.complete();
+ } catch (ex: any) {
+ console.debug(ex);
+ if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
+ /**
+ * Instead of blocking the queue with these errors,
+ * discard the (rest of the) transaction.
+ *
+ * Note that these errors typically indicate a bug in the application.
+ * If protecting against data loss is important, save the failing records
+ * elsewhere instead of discarding, and/or notify the user.
+ */
+ console.error('Data upload error - discarding:', ex);
+ await transaction.complete();
+ } else {
+ // Error may be retryable - e.g. network error or temporary server error.
+ // Throwing an error here causes this call to be retried after a delay.
+ throw ex;
+ }
+ }
+ }
+ ```
+
+
+ Overview:
+ - Create three collections to group operations by type:
+ - `putOps`: For `PUT` operations, organized by table name
+ - `deleteOps`: For `DELETE` operations, organized by table name
+ - `patchOps`: For `PATCH` operations (partial updates)
+
+ - Loop through all operations, sort them into the three collections, and then process all operations in batches.
+
+ ```typescript {8-11, 17-20, 23, 26-29, 32-53, 56, 72}
+ async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
+ const transaction = await database.getNextCrudTransaction();
+ if (!transaction) {
+ return;
+ }
+
+ try {
+ // Group operations by type and table
+ const putOps: { [table: string]: any[] } = {};
+ const deleteOps: { [table: string]: string[] } = {};
+ let patchOps: CrudEntry[] = [];
+
+ // Organize operations
+ for (const op of transaction.crud) {
+ switch (op.op) {
+ case UpdateType.PUT:
+ if (!putOps[op.table]) {
+ putOps[op.table] = [];
+ }
+ putOps[op.table].push({ ...op.opData, id: op.id });
+ break;
+ case UpdateType.PATCH:
+ patchOps.push(op);
+ break;
+ case UpdateType.DELETE:
+ if (!deleteOps[op.table]) {
+ deleteOps[op.table] = [];
+ }
+ deleteOps[op.table].push(op.id);
+ break;
+ }
+ }
+
+ // Execute bulk operations
+ for (const table of Object.keys(putOps)) {
+ const result = await this.client.from(table).upsert(putOps[table]);
+ if (result.error) {
+ console.error(result.error);
+ throw new Error(`Could not bulk PUT data to Supabase table ${table}: ${JSON.stringify(result)}`);
+ }
+ }
+
+ for (const table of Object.keys(deleteOps)) {
+ const result = await this.client.from(table).delete().in('id', deleteOps[table]);
+ if (result.error) {
+ console.error(result.error);
+ throw new Error(`Could not bulk DELETE data from Supabase table ${table}: ${JSON.stringify(result)}`);
+ }
+ }
+
+ // Execute PATCH operations individually since they can't be easily batched
+ for (const op of patchOps) {
+ const result = await this.client.from(op.table).update(op.opData).eq('id', op.id);
+ if (result.error) {
+ console.error(result.error);
+ throw new Error(`Could not PATCH data in Supabase: ${JSON.stringify(result)}`);
+ }
+ }
+
+ await transaction.complete();
+ } catch (ex: any) {
+ console.debug(ex);
+ if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
+ /**
+ * Instead of blocking the queue with these errors,
+ * discard the (rest of the) transaction.
+ *
+ * Note that these errors typically indicate a bug in the application.
+ * If protecting against data loss is important, save the failing records
+ * elsewhere instead of discarding, and/or notify the user.
+ */
+ console.error('Data upload error - discarding transaction:', ex);
+ await transaction.complete();
+ } else {
+ // Error may be retryable - e.g. network error or temporary server error.
+ // Throwing an error here causes this call to be retried after a delay.
+ throw ex;
+ }
+ }
+ }
+ ```
+
+
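+Both snippets above reference `FATAL_RESPONSE_CODES` when deciding whether to discard or retry a failed transaction. For reference, in the demo's Supabase connector this is a small list of regular expressions matching Postgres error classes that are not worth retrying — roughly along these lines (a sketch, not the verbatim source):
+
+```typescript
+const FATAL_RESPONSE_CODES = [
+  // Class 22 — Data Exception (e.g. data type mismatch)
+  new RegExp('^22...$'),
+  // Class 23 — Integrity Constraint Violation (e.g. NOT NULL, FOREIGN KEY, UNIQUE violations)
+  new RegExp('^23...$'),
+  // INSUFFICIENT PRIVILEGE — typically a row-level security violation
+  new RegExp('^42501$')
+];
+```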
+
+# Differences
+
+
+
+ ### Sequential merge strategy
+ ```typescript
+ const MERGE_BATCH_LIMIT = 100;
+ let batchedOps: CrudEntry[] = [];
+ ```
+ - Processes operations sequentially
+ - Merges consecutive operations of the same type up to a batch limit
+ - More dynamic/streaming approach
+
+ ### Pre-sorted batch strategy
+ ```typescript
+ const putOps: { [table: string]: any[] } = {};
+ const deleteOps: { [table: string]: string[] } = {};
+ let patchOps: CrudEntry[] = [];
+ ```
+ - Pre-sorts all operations by type and table
+ - Processes each type in bulk after grouping
+
+
+ ### Sequential merge strategy
+ - Uses a sliding window approach with `MERGE_BATCH_LIMIT`
+ - Merges consecutive operations up to the limit
+ - More granular control over batch sizes
+ - Better for mixed operation types
+
+ ### Pre-sorted batch strategy
+ - Groups ALL operations of the same type together
+ - Executes one bulk operation per type per table
+ - Better for large numbers of similar operations
+
+
+
+
+## Key similarities and differences
+
+
+ Both implementations:
+ - Handle CRUD operations (PUT, PATCH, DELETE) to sync local changes to Supabase
+ - Manage transactions with `getNextCrudTransaction()`
+ - Implement similar error handling for fatal and retryable errors
+ - Complete the transaction after successful processing
+
+ They differ in:
+ - Operation grouping strategy
+ - Batching methodology
+
+
+
+# Use cases
+
+
+
+ Choose the **sequential merge strategy** when:
+ - You need more granular control over batch sizes
+ - You want more detailed operation logging
+ - You need to handle mixed operation types more efficiently
+
+ **Best for**: Mixed operation types
+
+ **Optimizes for**: Memory efficiency
+
+ **Trade-off**: Potentially more network requests
+
+ Choose the **pre-sorted batch strategy** when:
+ - You have a large number of similar operations
+ - You want to minimize the number of network requests
+
+ **Best for**: Large volumes of similar operations
+
+ **Optimizes for**: Minimal network requests
+
+ **Trade-off**: Higher memory usage
+
+
\ No newline at end of file
diff --git a/tutorials/overview.mdx b/tutorials/overview.mdx
new file mode 100644
index 00000000..c40192a9
--- /dev/null
+++ b/tutorials/overview.mdx
@@ -0,0 +1,9 @@
+---
+title: "Overview"
+description: "A collection of tutorials showcasing various storage attachment and performance strategies."
+---
+
+
+
+
+
diff --git a/tutorials/self-host/overview.mdx b/tutorials/self-host/overview.mdx
new file mode 100644
index 00000000..6d6aaaa5
--- /dev/null
+++ b/tutorials/self-host/overview.mdx
@@ -0,0 +1,3 @@
+---
+title: "Coming Soon..."
+---
\ No newline at end of file