\* This feature is not available in the Enterprise Premium edition. If you want to use this feature, please [contact us](https://www.scalar-labs.com/contact).
`docs/scalardb-cluster/remote-replication.mdx` (13 additions, 14 deletions)
````diff
@@ -19,7 +19,7 @@ Remote replication provides several key advantages for disaster recovery and bus
 - Guarantees zero data loss (RPO of 0) for all committed transactions.
 - Minimizes performance impact through the combination of synchronous and asynchronous processing.
-- Enables backup site deployment across different regions, availability zones, or data centers.
+- Enables backup site deployment in different regions, availability zones, or data centers from the primary site.
 - Supports replication between different cloud service providers and database types.
 - Provides built-in crash tolerance and automatic recovery mechanisms.
````
````diff
@@ -147,7 +147,7 @@ The current private preview version has the following limitations, but they are
 Remote replication has the following architectural limitations, which are inherently challenging to relax due to the architecture:
 
-- Only read operations with the read-committed isolation level are permitted on backup sites until failover. However, since write operations from the primary site are applied at the record level on backup sites, you may observe inconsistent snapshots across multiple records during replication until all write operations have been fully applied.
+- Only [transactions in read-only mode](../api-guide.mdx#begin-or-start-a-transaction-in-read-only-mode) with the [read-committed isolation level](../consensus-commit.mdx#isolation-levels) are permitted on backup sites until failover.
 - DDL operations are not replicated. Schema changes must be applied manually to both primary and backup sites.
 - You cannot use the two-phase commit interface if this feature is enabled.
 - There may be a slight performance impact on the primary site, depending on replication database specifications and configurations.
````
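The read-committed requirement in the updated bullet maps to a client-side setting. A minimal sketch of the client properties for reading from a backup site; the property names here are assumptions based on the Consensus Commit configuration docs, so verify them against your ScalarDB version:

```properties
# Hypothetical client configuration for read-only access to a backup site.
# Property names assumed from the Consensus Commit docs; verify for your version.
scalar.db.transaction_manager=consensus-commit
scalar.db.consensus_commit.isolation_level=READ_COMMITTED
```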
````diff
@@ -340,7 +340,7 @@ For the backup site, tune thread counts ([`scalar.db.replication.log_applier.tra
 #### High-latency database environments
 
-Multi-region databases may have higher latency. Consider using [group commit configurations](../api-guide.mdx#group-commit-for-the-coordinator-table) to improve throughput on high-latency Coordinator databases. The Coordinator group commit feature batches multiple transactions together, reducing the impact of network latency on overall throughput. For replication database tuning, see the [LogWriter performance tuning](#logapplier-performance-tuning) section above.
+Multi-region databases may have higher latency. Consider using [group commit configurations](../api-guide.mdx#group-commit-for-the-coordinator-table) to improve throughput on high-latency Coordinator databases. The Coordinator group commit feature batches multiple transactions together, reducing the impact of network latency on overall throughput. For replication database tuning, see the [LogWriter performance tuning](#logwriter-performance-tuning) section above.
 
 ## Tutorial
````
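As a concrete illustration of the Coordinator group-commit tuning that the paragraph above points to, here is a hedged sketch of the relevant properties; the names and values are assumptions based on the group-commit guide and should be checked against your ScalarDB version before use:

```properties
# Hypothetical Coordinator group-commit settings for a high-latency database.
# Property names and values are assumptions; verify against the group-commit guide.
scalar.db.consensus_commit.coordinator.group_commit.enabled=true
# Upper bound on transactions batched into one group (larger groups amortize latency).
scalar.db.consensus_commit.coordinator.group_commit.slot_capacity=20
# How long a group waits for more transactions before its size is fixed.
scalar.db.consensus_commit.coordinator.group_commit.group_size_fix_timeout_millis=40
```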
````diff
@@ -457,7 +457,7 @@ Verify the primary site deployment:
 kubectl logs <PRIMARY_POD_NAME> -n <NAMESPACE>
 ```
 
-Replace `<PRIMARY_POD_NAME>` with your actual pod name. Ensure there are no errors.
+Replace `<PRIMARY_POD_NAME>` with your actual Pod name. Ensure there are no errors.
 
 #### 2.3 Create primary site tables
````
````diff
@@ -525,7 +525,7 @@ spec:
 name: schema-config-primary
 ```
 
-Replace `<PRIMARY_CLUSTER_CONTACT_POINTS>` with your primary site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx)) and `<VERSION>` with the ScalarDB Cluster version that you're using.
+Replace `<PRIMARY_CLUSTER_CONTACT_POINTS>` with your primary site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx#client-configurations)) and `<VERSION>` with the ScalarDB Cluster version that you're using.
 
 Apply and run the Schema Loader job:
````
````diff
@@ -665,7 +665,7 @@ spec:
 name: schema-config-backup
 ```
 
-Replace `<BACKUP_CLUSTER_CONTACT_POINTS>` with your backup site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx)) and `<VERSION>` with the ScalarDB Cluster version that you're using.
+Replace `<BACKUP_CLUSTER_CONTACT_POINTS>` with your backup site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx#client-configurations)) and `<VERSION>` with the ScalarDB Cluster version that you're using.
 
 Apply and run the Schema Loader job:
````
````diff
@@ -742,7 +742,7 @@ Verify the backup site deployment with LogApplier:
 kubectl logs <BACKUP_POD_NAME> -n <NAMESPACE>
 ```
 
-Replace `<BACKUP_POD_NAME>` with your actual pod name. Ensure there are no errors. You should see a message indicating that LogApplier is properly initialized:
+Replace `<BACKUP_POD_NAME>` with your actual Pod name. Ensure there are no errors. You should see a message indicating that LogApplier is properly initialized:
````
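To make the "ensure there are no errors" check mechanical, you can filter the Pod logs for error markers. A sketch, assuming `kubectl` is on your PATH; the sample log written below is fabricated purely so the filter can be demonstrated without a live cluster:

```shell
# In a real check you would capture live logs instead of the fabricated sample:
#   kubectl logs <BACKUP_POD_NAME> -n <NAMESPACE> --tail=500 > logapplier.log
cat > logapplier.log <<'EOF'
INFO  LogApplier initialized
INFO  Starting replication record scan
EOF

# Fail loudly if any ERROR/FATAL lines are present; otherwise report success.
if grep -qiE 'error|fatal' logapplier.log; then
  echo "errors found in LogApplier logs"
  exit 1
else
  echo "LogApplier logs look clean"
fi
```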
````diff
@@ -754,7 +754,6 @@ Replace `<BACKUP_POD_NAME>` with your actual pod name. Ensure there are no error
 To test replication between sites, you should use the ScalarDB SQL CLI. Create a Kubernetes Pod to run the SQL CLI for the primary site:
-
 ```yaml
 # sql-cli-primary.yaml
 apiVersion: v1
````
````diff
@@ -791,10 +790,10 @@ spec:
 Replace `<PRIMARY_CLUSTER_CONTACT_POINTS>` with your primary site cluster contact points and `<VERSION>` with the ScalarDB Cluster version that you're using.
 
-Apply and connect to the SQL CLI by running the following commands:
+Create and connect to the SQL CLI Pod by running the following commands:
 SELECT * FROM test_namespace.test_table WHERE id = 1;
 ```
 
-Detach from the session by pressing `Ctrl + P`, then `Ctrl + Q`. Then, create a similar pod for the backup site:
+Detach from the session by pressing `Ctrl + P`, then `Ctrl + Q`. Then, create a similar Pod for the backup site:
 
 ```yaml
 # sql-cli-backup.yaml
````
````diff
@@ -848,10 +847,10 @@ spec:
 Replace `<BACKUP_CLUSTER_CONTACT_POINTS>` with your backup site cluster contact points and `<VERSION>` with the ScalarDB Cluster version that you're using.
 
-Apply and verify replication on the backup site by running the following commands:
+Create and connect to the SQL CLI Pod by running the following commands:
````
````diff
@@ -866,7 +865,7 @@ SELECT * FROM test_namespace.test_table WHERE id = 1;
 You should see the same data on both sites, confirming that replication is working correctly. You can insert additional records in the primary site and verify they appear in the backup site as well. To detach from the session, press `Ctrl + P`, then `Ctrl + Q`.
````