
Commit 66ec13f

AUTO: Sync ScalarDB docs in English to docs site repo

1 parent 5c4cb33

File tree

2 files changed (+14 -14 lines)

docs/features.mdx

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ This document briefly explains which features are available in which editions of
 | [Vector search interface](scalardb-cluster/getting-started-with-vector-search.mdx) ||| ✅ (3.15+) (Private Preview**) ||
 | [Analytical query processing across ScalarDB-managed data sources](scalardb-samples/scalardb-analytics-spark-sample/README.mdx) |||| ✅ (3.14+) |
 | [Analytical query processing across non-ScalarDB-managed data sources](scalardb-samples/scalardb-analytics-spark-sample/README.mdx) |||| ✅ (3.15+) |
+| [Remote replication](scalardb-cluster/remote-replication.mdx) ||| ✅ (3.16+) (Private Preview**) ||

 \* This feature is not available in the Enterprise Premium edition. If you want to use this feature, please [contact us](https://www.scalar-labs.com/contact).

docs/scalardb-cluster/remote-replication.mdx

Lines changed: 13 additions & 14 deletions
@@ -19,7 +19,7 @@ Remote replication provides several key advantages for disaster recovery and bus

 - Guarantees zero data loss (RPO of 0) for all committed transactions.
 - Minimizes performance impact through the combination of synchronous and asynchronous processing.
-- Enables backup site deployment across different regions, availability zones, or data centers.
+- Enables backup site deployment in different regions, availability zones, or data centers from the primary site.
 - Supports replication between different cloud service providers and database types.
 - Provides built-in crash tolerance and automatic recovery mechanisms.

@@ -147,7 +147,7 @@ The current private preview version has the following limitations, but they are

 Remote replication has the following architectural limitations, which are inherently challenging to relax due to the architecture:

-- Only read operations with the read-committed isolation level are permitted on backup sites until failover. However, since write operations from the primary site are applied at the record level on backup sites, you may observe inconsistent snapshots across multiple records during replication until all write operations have been fully applied.
+- Only [transactions in read-only mode](../api-guide.mdx#begin-or-start-a-transaction-in-read-only-mode) with the [read-committed isolation level](../consensus-commit.mdx#isolation-levels) are permitted on backup sites until failover.
 - DDL operations are not replicated. Schema changes must be applied manually to both primary and backup sites.
 - You cannot use the two-phase commit interface if this feature is enabled.
 - There may be a slight performance impact on the primary site, depending on replication database specifications and configurations.
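For context on the read-only restriction above, here is a minimal Java sketch of a backup-site read. It assumes the read-only begin method described in the linked API guide section is exposed as `beginReadOnly()` on `DistributedTransactionManager`, that a hypothetical `scalardb-backup.properties` file points the client at the backup-site cluster, and that the isolation level there is set to `READ_COMMITTED`; verify the exact method and property names against the API guide before relying on them.

```java
import com.scalar.db.api.DistributedTransaction;
import com.scalar.db.api.DistributedTransactionManager;
import com.scalar.db.api.Get;
import com.scalar.db.api.Result;
import com.scalar.db.io.Key;
import com.scalar.db.service.TransactionFactory;
import java.util.Optional;

public class BackupSiteRead {
  public static void main(String[] args) throws Exception {
    // Client properties for the backup-site cluster (hypothetical file name);
    // the isolation level is assumed to be READ_COMMITTED in that file.
    TransactionFactory factory = TransactionFactory.create("scalardb-backup.properties");
    DistributedTransactionManager manager = factory.getTransactionManager();

    // Assumed method name for beginning a transaction in read-only mode;
    // see the linked API guide section for the exact API.
    DistributedTransaction tx = manager.beginReadOnly();
    try {
      // Read the record used later in the tutorial (test_namespace.test_table, id = 1).
      Get get =
          Get.newBuilder()
              .namespace("test_namespace")
              .table("test_table")
              .partitionKey(Key.ofInt("id", 1))
              .build();
      Optional<Result> result = tx.get(get);
      result.ifPresent(r -> System.out.println("name: " + r.getText("name")));
      tx.commit();
    } catch (Exception e) {
      tx.abort();
      throw e;
    } finally {
      manager.close();
    }
  }
}
```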
@@ -340,7 +340,7 @@ For the backup site, tune thread counts ([`scalar.db.replication.log_applier.tra

 #### High-latency database environments

-Multi-region databases may have higher latency. Consider using [group commit configurations](../api-guide.mdx#group-commit-for-the-coordinator-table) to improve throughput on high-latency Coordinator databases. The Coordinator group commit feature batches multiple transactions together, reducing the impact of network latency on overall throughput. For replication database tuning, see the [LogWriter performance tuning](#logapplier-performance-tuning) section above.
+Multi-region databases may have higher latency. Consider using [group commit configurations](../api-guide.mdx#group-commit-for-the-coordinator-table) to improve throughput on high-latency Coordinator databases. The Coordinator group commit feature batches multiple transactions together, reducing the impact of network latency on overall throughput. For replication database tuning, see the [LogWriter performance tuning](#logwriter-performance-tuning) section above.

 ## Tutorial

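As a rough illustration of the Coordinator group commit tuning referenced in the hunk above, the sketch below builds a Java `Properties` object that enables group commit for a high-latency Coordinator database. The `scalar.db.consensus_commit.coordinator.group_commit.enabled` key follows the Consensus Commit naming convention; the additional keys and values are illustrative assumptions and should be confirmed against the linked group commit section of the API guide.

```java
import java.util.Properties;

public class GroupCommitConfigExample {
  public static Properties highLatencyCoordinatorProps() {
    Properties props = new Properties();

    // Batch Coordinator writes so multiple transactions share one round trip
    // to a high-latency Coordinator database.
    props.setProperty("scalar.db.consensus_commit.coordinator.group_commit.enabled", "true");

    // Illustrative tuning knobs (names and values are assumptions; confirm them
    // against the group commit section of the API guide): how many transactions
    // a group can hold, and how long to wait before closing a partially filled group.
    props.setProperty("scalar.db.consensus_commit.coordinator.group_commit.slot_capacity", "20");
    props.setProperty(
        "scalar.db.consensus_commit.coordinator.group_commit.group_size_fix_timeout_millis", "40");

    return props;
  }
}
```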
@@ -457,7 +457,7 @@ Verify the primary site deployment:
 kubectl logs <PRIMARY_POD_NAME> -n <NAMESPACE>
 ```

-Replace `<PRIMARY_POD_NAME>` with your actual pod name. Ensure there are no errors.
+Replace `<PRIMARY_POD_NAME>` with your actual Pod name. Ensure there are no errors.

 #### 2.3 Create primary site tables

@@ -525,7 +525,7 @@ spec:
 name: schema-config-primary
 ```

-Replace `<PRIMARY_CLUSTER_CONTACT_POINTS>` with your primary site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx)) and `<VERSION>` with the ScalarDB Cluster version that you're using.
+Replace `<PRIMARY_CLUSTER_CONTACT_POINTS>` with your primary site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx#client-configurations)) and `<VERSION>` with the ScalarDB Cluster version that you're using.

 Apply and run the Schema Loader job:

@@ -665,7 +665,7 @@ spec:
 name: schema-config-backup
 ```

-Replace `<BACKUP_CLUSTER_CONTACT_POINTS>` with your backup site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx)) and `<VERSION>` with the ScalarDB Cluster version that you're using.
+Replace `<BACKUP_CLUSTER_CONTACT_POINTS>` with your backup site cluster contact points (same format as [ScalarDB Cluster client configurations](scalardb-cluster-configurations.mdx#client-configurations)) and `<VERSION>` with the ScalarDB Cluster version that you're using.

 Apply and run the Schema Loader job:

@@ -742,7 +742,7 @@ Verify the backup site deployment with LogApplier:
 kubectl logs <BACKUP_POD_NAME> -n <NAMESPACE>
 ```

-Replace `<BACKUP_POD_NAME>` with your actual pod name. Ensure there are no errors. You should see a message indicating that LogApplier is properly initialized:
+Replace `<BACKUP_POD_NAME>` with your actual Pod name. Ensure there are no errors. You should see a message indicating that LogApplier is properly initialized:

 ```console
 2025-07-03 03:28:27,725 [INFO com.scalar.db.cluster.replication.logapplier.LogApplier] Starting LogApplier processing. Partition range: Range{startInclusive=0, endExclusive=256}
@@ -754,7 +754,6 @@ Replace `<BACKUP_POD_NAME>` with your actual pod name. Ensure there are no error

 To test replication between sites, you should use the ScalarDB SQL CLI. Create a Kubernetes Pod to run the SQL CLI for the primary site:

-
 ```yaml
 # sql-cli-primary.yaml
 apiVersion: v1
@@ -791,10 +790,10 @@ spec:

 Replace `<PRIMARY_CLUSTER_CONTACT_POINTS>` with your primary site cluster contact points and `<VERSION>` with the ScalarDB Cluster version that you're using.

-Apply and connect to the SQL CLI by running the following commands:
+Create and connect to the SQL CLI Pod by running the following commands:

 ```bash
-# Apply the pod
+# Create the SQL CLI Pod
 kubectl apply -f sql-cli-primary.yaml -n <NAMESPACE>

 # Attach to the running SQL CLI
@@ -810,7 +809,7 @@ INSERT INTO test_namespace.test_table (id, name, value) VALUES (1, 'test_record'
 SELECT * FROM test_namespace.test_table WHERE id = 1;
 ```

-Detach from the session by pressing `Ctrl + P`, then `Ctrl + Q`. Then, create a similar pod for the backup site:
+Detach from the session by pressing `Ctrl + P`, then `Ctrl + Q`. Then, create a similar Pod for the backup site:

 ```yaml
 # sql-cli-backup.yaml
@@ -848,10 +847,10 @@ spec:

 Replace `<BACKUP_CLUSTER_CONTACT_POINTS>` with your backup site cluster contact points and `<VERSION>` with the ScalarDB Cluster version that you're using.

-Apply and verify replication on the backup site by running the following commands:
+Create and connect to the SQL CLI Pod by running the following commands:

 ```bash
-# Apply the pod
+# Create the SQL CLI Pod
 kubectl apply -f sql-cli-backup.yaml -n <NAMESPACE>

 # Attach to the running SQL CLI
@@ -866,7 +865,7 @@ SELECT * FROM test_namespace.test_table WHERE id = 1;

 You should see the same data on both sites, confirming that replication is working correctly. You can insert additional records in the primary site and verify they appear in the backup site as well. To detach from the session, press `Ctrl + P`, then `Ctrl + Q`.

-Clean up the SQL CLI pods when done:
+Clean up the SQL CLI Pods when done:

 ```bash
 kubectl delete -f sql-cli-primary.yaml -n <NAMESPACE>
