1. Client SDK Diagnostics - Implement a sync diagnostics screen/view in your client application that provides critical sync information.
2. Client logging - Implement logging in your client application to capture sync events and errors.
3. Issue Alerts - Trigger notifications when the PowerSync replicator runs into errors.
4. Database - Ensure your database is ready for production when integrated with PowerSync.

# Client specific
## SDK Diagnostics

<Info>This is just an example of how to implement Sentry logging. The actual implementation is up to you as the developer. You don't have to use Sentry, but we recommend using some sort of log aggregation service in production.</Info>

```typescript App Entry Point
createRoot(document.getElementById("root")!, {
  onUncaughtError: Sentry.reactErrorHandler((error, errorInfo) => {
    console.warn('Uncaught error', error, errorInfo.componentStack);
  }),
  // Callback called when React catches an error in an ErrorBoundary.
  onCaughtError: Sentry.reactErrorHandler(),
});
```

```typescript System.ts
import * as Sentry from '@sentry/react';
import { createBaseLogger, LogLevel } from '@powersync/react-native';

// Create the shared logger; LogLevel can be used to control verbosity
const logger = createBaseLogger();

logger.setHandler((messages, context) => {
if (!context?.level) return;

// Get the main message and combine any additional data
const messageArray = Array.from(messages);
const mainMessage = String(messageArray[0] || 'Empty log message');
const extraData = messageArray.slice(1).reduce((acc, curr) => ({ ...acc, ...curr }), {});

const level = context.level.name.toLowerCase();

// addBreadcrumb: creates a trail of events leading up to errors
Sentry.addBreadcrumb({
message: mainMessage,
data: extraData,
timestamp: Date.now()
});

// Only send warnings and errors to Sentry
if (level === 'warn' || level === 'error') {
console[level](`PowerSync ${level.toUpperCase()}:`, mainMessage, extraData);
// ... e.g. Sentry.captureMessage(mainMessage)
}
});

// Usage with additional context
logger.error('PowerSync sync failed', {
userId: userID,
lastSyncAt: status?.lastSyncedAt,
connected: status?.connected,
sdkVersion: powerSync.sdkVersion || 'unknown',
});
```

### Best Practices

The easiest way to check for replication issues is to poll the Diagnostics endpoint at regular intervals and keep an eye on the error arrays, which are populated as errors arise on the service.
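
As a rough illustration, you could poll the endpoint on a timer and forward any reported errors to your alerting or log aggregation tooling. In the sketch below, the URL, auth header, and response shape are assumptions; substitute the actual Diagnostics endpoint and fields exposed by your PowerSync Service instance.

```typescript Diagnostics polling (example)
// The endpoint URL, auth header, and `errors` field below are assumptions for illustration.
const DIAGNOSTICS_URL = 'https://<your-powersync-instance>/diagnostics'; // hypothetical URL
const POLL_INTERVAL_MS = 60_000;

async function pollDiagnostics() {
  const response = await fetch(DIAGNOSTICS_URL, {
    headers: { Authorization: `Bearer ${process.env.POWERSYNC_API_TOKEN}` } // hypothetical auth
  });
  const diagnostics = await response.json();

  // Keep an eye on the error arrays reported by the service.
  const errors = diagnostics?.errors ?? [];
  if (errors.length > 0) {
    // Forward to your alerting / log aggregation tooling here.
    console.error('PowerSync diagnostics reported errors:', errors);
  }
}

setInterval(pollDiagnostics, POLL_INTERVAL_MS);
```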

# Database Best Practices

## Postgres

### `max_slot_wal_keep_size`

This Postgres [configuration parameter](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SLOT-WAL-KEEP-SIZE) limits the amount of Write-Ahead Log (WAL) data that replication slots are allowed to retain.
Because PowerSync uses logical replication, it's important to choose an appropriate `max_slot_wal_keep_size` and to monitor the lag of the replication slots used by PowerSync in a production environment, ensuring that slot lag does not exceed `max_slot_wal_keep_size`.

The WAL growth rate is expected to increase substantially during the initial replication of large datasets with high update frequency, particularly for tables included in the PowerSync publication.

During normal operation (after Sync Rules are deployed), the WAL growth rate is much lower than during initial replication: the PowerSync Service can replicate roughly 5,000 operations per second, so WAL lag typically stays in the MB range rather than the GB range.

When deciding what to set the `max_slot_wal_keep_size` configuration parameter to, take the following into account:
1. Database size - This impacts the time it takes to complete the initial replication from the source Postgres database.
2. Sync Rules complexity - This also impacts the time it takes to complete the initial replication.
3. Postgres update frequency - The frequency of updates to tables included in the publication you create for PowerSync during initial replication; the WAL growth rate is directly proportional to this.
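
For example (purely illustrative numbers): if initial replication takes around two hours and the tables in the PowerSync publication generate roughly 1 GB of WAL per hour during that window, the replication slot can fall at least ~2 GB behind, so `max_slot_wal_keep_size` should be set comfortably above that.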

To view the replication slots currently being used by PowerSync, run the following query:

```sql
SELECT slot_name,
       plugin,
       slot_type,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag
FROM pg_replication_slots;
```

To view the currently configured value of `max_slot_wal_keep_size`, run the following query:
```sql
SELECT setting AS max_slot_wal_keep_size
FROM pg_settings
WHERE name = 'max_slot_wal_keep_size';
```

It's recommended to check the current replication slot lag and `max_slot_wal_keep_size` when deploying Sync Rules changes to your PowerSync Service instance, especially when you're working with large data volumes.
If the replication lag is greater than the current `max_slot_wal_keep_size`, increase the value of `max_slot_wal_keep_size` on the connected source Postgres database to accommodate the lag and ensure the PowerSync Service can complete initial replication without further delays.
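
To keep an eye on this on an ongoing basis, you can periodically compare slot lag against the configured limit. The sketch below assumes a Node.js environment with the `pg` client library; the 80% threshold, function name, and use of `console.warn` are illustrative, and in practice you would forward the warning to your alerting tooling.

```typescript Replication lag check (example)
import { Client } from 'pg';

// Flags PowerSync replication slots whose lag is approaching max_slot_wal_keep_size.
// The connection string, threshold, and logging are placeholders for your own monitoring setup.
async function checkReplicationSlotLag(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    // Lag per replication slot, in bytes.
    const slots = await client.query(
      `SELECT slot_name,
              pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS lag_bytes
       FROM pg_replication_slots`
    );

    // max_slot_wal_keep_size is reported in MB by pg_settings; -1 means unlimited.
    const limitResult = await client.query(
      `SELECT setting FROM pg_settings WHERE name = 'max_slot_wal_keep_size'`
    );
    const limitBytes = Number(limitResult.rows[0].setting) * 1024 * 1024;

    for (const slot of slots.rows) {
      const lagBytes = Number(slot.lag_bytes);
      if (limitBytes > 0 && lagBytes > limitBytes * 0.8) {
        console.warn(
          `Replication slot ${slot.slot_name} lag (${lagBytes} bytes) is above 80% of max_slot_wal_keep_size`
        );
      }
    }
  } finally {
    await client.end();
  }
}
```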