diff --git a/versioned_docs/version-3.X/_develop-run-analytical-queries-overview.mdx b/versioned_docs/version-3.X/_develop-run-analytical-queries-overview.mdx
new file mode 100644
index 00000000..e2be771f
--- /dev/null
+++ b/versioned_docs/version-3.X/_develop-run-analytical-queries-overview.mdx
@@ -0,0 +1,14 @@
+---
+tags:
+ - Community
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+# Run Analytical Queries Overview
+
+In this sub-category, you can learn how to set up and configure ScalarDB Analytics, an analytics component of ScalarDB. Then, you can run analytical queries over ScalarDB-managed databases, which are updated through ScalarDB transactions, and non-ScalarDB-managed databases.
+
+To learn how to run analytical queries, see the following guides:
+
+- [Run Analytical Queries on Sample Data by Using ScalarDB Analytics with PostgreSQL](scalardb-samples/scalardb-analytics-postgresql-sample/README.mdx)
diff --git a/versioned_docs/version-3.X/add-scalardb-to-your-build.mdx b/versioned_docs/version-3.X/add-scalardb-to-your-build.mdx
new file mode 100644
index 00000000..976e68d8
--- /dev/null
+++ b/versioned_docs/version-3.X/add-scalardb-to-your-build.mdx
@@ -0,0 +1,41 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Add ScalarDB to Your Build
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The ScalarDB library is available on the [Maven Central Repository](https://mvnrepository.com/artifact/com.scalar-labs/scalardb). You can add the library as a build dependency to your application by using Gradle or Maven.
+
+## Configure your application based on your build tool
+
+Select your build tool, and follow the instructions to add the build dependency for ScalarDB to your application.
+
+<Tabs groupId="build_tools" queryString>
+  <TabItem value="Gradle" label="Gradle" default>
+ To add the build dependency for ScalarDB by using Gradle, add the following to `build.gradle` in your application, replacing `<VERSION>` with the version of ScalarDB that you want to use:
+
+ ```gradle
+ dependencies {
+     implementation 'com.scalar-labs:scalardb:<VERSION>'
+ }
+ ```
+  </TabItem>
+  <TabItem value="Maven" label="Maven">
+ To add the build dependency for ScalarDB by using Maven, add the following to `pom.xml` in your application, replacing `<VERSION>` with the version of ScalarDB that you want to use:
+
+ ```xml
+ <dependency>
+   <groupId>com.scalar-labs</groupId>
+   <artifactId>scalardb</artifactId>
+   <version><VERSION></version>
+ </dependency>
+ ```
+  </TabItem>
+</Tabs>
diff --git a/versioned_docs/version-3.X/api-guide.mdx b/versioned_docs/version-3.X/api-guide.mdx
new file mode 100644
index 00000000..7be336f1
--- /dev/null
+++ b/versioned_docs/version-3.X/api-guide.mdx
@@ -0,0 +1,1770 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Java API Guide
+
+import JavadocLink from '/src/theme/JavadocLink.js';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The ScalarDB Java API is mainly composed of the Administrative API and Transactional API. This guide briefly explains what kinds of APIs exist, how to use them, and related topics like how to handle exceptions.
+
+## Administrative API
+
+This section explains how to execute administrative operations programmatically by using the Administrative API in ScalarDB.
+
+:::note
+
+Another method for executing administrative operations is to use [Schema Loader](schema-loader.mdx).
+
+:::
+
+### Get a `DistributedTransactionAdmin` instance
+
+You first need to get a `DistributedTransactionAdmin` instance to execute administrative operations.
+
+To get a `DistributedTransactionAdmin` instance, you can use `TransactionFactory` as follows:
+
+```java
+TransactionFactory transactionFactory = TransactionFactory.create("<CONFIGURATION_FILE_PATH>");
+DistributedTransactionAdmin admin = transactionFactory.getTransactionAdmin();
+```
+
+For details about configurations, see [ScalarDB Configurations](configurations.mdx).
+
+After you have executed all administrative operations, you should close the `DistributedTransactionAdmin` instance as follows:
+
+```java
+admin.close();
+```
+
+### Create a namespace
+
+Before creating tables, namespaces must be created since a table belongs to one namespace.
+
+You can create a namespace as follows:
+
+```java
+// Create the namespace "ns". If the namespace already exists, an exception will be thrown.
+admin.createNamespace("ns");
+
+// Create the namespace only if it does not already exist.
+boolean ifNotExists = true;
+admin.createNamespace("ns", ifNotExists);
+
+// Create the namespace with options.
+Map<String, String> options = ...;
+admin.createNamespace("ns", options);
+```
+
+#### Creation options
+
+In the creation operations, like creating a namespace and creating a table, you can specify options that are maps of option names and values (`Map<String, String>`). By using the options, you can set storage adapter–specific configurations.
+
+Select your database to see the options available:
+
+<Tabs groupId="databases" queryString>
+  <TabItem value="JDBC_databases" label="JDBC databases" default>
+ No options are available for JDBC databases.
+  </TabItem>
+  <TabItem value="DynamoDB" label="DynamoDB">
+ | Name | Description | Default |
+ |------------|-----------------------------------------|---------|
+ | no-scaling | Disable auto-scaling for DynamoDB. | false |
+ | no-backup | Disable continuous backup for DynamoDB. | false |
+ | ru | Base resource unit. | 10 |
+  </TabItem>
+  <TabItem value="Cosmos_DB_for_NoSQL" label="Cosmos DB for NoSQL">
+ | Name | Description | Default |
+ |------------|-----------------------------------------------------|---------|
+ | ru | Base resource unit. | 400 |
+ | no-scaling | Disable auto-scaling for Cosmos DB for NoSQL. | false |
+  </TabItem>
+  <TabItem value="Cassandra" label="Cassandra">
+ | Name | Description | Default |
+ |----------------------|----------------------------------------------------------------------------------------|------------------|
+ | replication-strategy | Cassandra replication strategy. Must be `SimpleStrategy` or `NetworkTopologyStrategy`. | `SimpleStrategy` |
+ | compaction-strategy | Cassandra compaction strategy. Must be `LCS`, `STCS`, or `TWCS`. | `STCS` |
+ | replication-factor | Cassandra replication factor. | 1 |
+  </TabItem>
+</Tabs>
+
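+For example, a minimal sketch that passes DynamoDB-specific options when creating a namespace might look like the following (the option values are illustrative, not recommendations):
+
+```java
+// Pass storage-specific options when creating a namespace.
+Map<String, String> options = new HashMap<>();
+options.put("ru", "400");          // Base resource unit.
+options.put("no-scaling", "true"); // Disable auto-scaling.
+admin.createNamespace("ns", options);
+```
+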
+### Create a table
+
+When creating a table, you should define the table metadata and then create the table.
+
+To define the table metadata, you can use `TableMetadata`. The following shows how to define the columns, partition key, clustering key including clustering orders, and secondary indexes of a table:
+
+```java
+// Define the table metadata.
+TableMetadata tableMetadata =
+ TableMetadata.newBuilder()
+ .addColumn("c1", DataType.INT)
+ .addColumn("c2", DataType.TEXT)
+ .addColumn("c3", DataType.BIGINT)
+ .addColumn("c4", DataType.FLOAT)
+ .addColumn("c5", DataType.DOUBLE)
+ .addPartitionKey("c1")
+ .addClusteringKey("c2", Scan.Ordering.Order.DESC)
+ .addClusteringKey("c3", Scan.Ordering.Order.ASC)
+ .addSecondaryIndex("c4")
+ .build();
+```
+
+For details about the data model of ScalarDB, see [Data Model](design.mdx#data-model).
+
+Then, create a table as follows:
+
+```java
+// Create the table "ns.tbl". If the table already exists, an exception will be thrown.
+admin.createTable("ns", "tbl", tableMetadata);
+
+// Create the table only if it does not already exist.
+boolean ifNotExists = true;
+admin.createTable("ns", "tbl", tableMetadata, ifNotExists);
+
+// Create the table with options.
+Map<String, String> options = ...;
+admin.createTable("ns", "tbl", tableMetadata, options);
+```
+
+### Create a secondary index
+
+You can create a secondary index as follows:
+
+```java
+// Create a secondary index on column "c5" for table "ns.tbl". If a secondary index already exists, an exception will be thrown.
+admin.createIndex("ns", "tbl", "c5");
+
+// Create the secondary index only if it does not already exist.
+boolean ifNotExists = true;
+admin.createIndex("ns", "tbl", "c5", ifNotExists);
+
+// Create the secondary index with options.
+Map<String, String> options = ...;
+admin.createIndex("ns", "tbl", "c5", options);
+```
+
+### Add a new column to a table
+
+You can add a new, non-partition key column to a table as follows:
+
+```java
+// Add a new column "c6" with the INT data type to the table "ns.tbl".
+admin.addNewColumnToTable("ns", "tbl", "c6", DataType.INT);
+```
+
+:::warning
+
+You should carefully consider adding a new column to a table because the execution time may vary greatly depending on the underlying storage. Please plan accordingly and consider the following, especially if the database runs in production:
+
+- **For Cosmos DB for NoSQL and DynamoDB:** Adding a column is almost instantaneous as the table schema is not modified. Only the table metadata stored in a separate table is updated.
+- **For Cassandra:** Adding a column will only update the schema metadata and will not modify the existing schema records. The cluster topology is the main factor for the execution time. Changes to the schema metadata are shared to each cluster node via a gossip protocol. Because of this, the larger the cluster, the longer it will take for all nodes to be updated.
+- **For relational databases (MySQL, Oracle, etc.):** Adding a column shouldn't take a long time to execute.
+
+:::
+
+### Truncate a table
+
+You can truncate a table as follows:
+
+```java
+// Truncate the table "ns.tbl".
+admin.truncateTable("ns", "tbl");
+```
+
+### Drop a secondary index
+
+You can drop a secondary index as follows:
+
+```java
+// Drop the secondary index on column "c5" from table "ns.tbl". If the secondary index does not exist, an exception will be thrown.
+admin.dropIndex("ns", "tbl", "c5");
+
+// Drop the secondary index only if it exists.
+boolean ifExists = true;
+admin.dropIndex("ns", "tbl", "c5", ifExists);
+```
+
+### Drop a table
+
+You can drop a table as follows:
+
+```java
+// Drop the table "ns.tbl". If the table does not exist, an exception will be thrown.
+admin.dropTable("ns", "tbl");
+
+// Drop the table only if it exists.
+boolean ifExists = true;
+admin.dropTable("ns", "tbl", ifExists);
+```
+
+### Drop a namespace
+
+You can drop a namespace as follows:
+
+```java
+// Drop the namespace "ns". If the namespace does not exist, an exception will be thrown.
+admin.dropNamespace("ns");
+
+// Drop the namespace only if it exists.
+boolean ifExists = true;
+admin.dropNamespace("ns", ifExists);
+```
+
+### Get existing namespaces
+
+You can get the existing namespaces as follows:
+
+```java
+Set<String> namespaces = admin.getNamespaceNames();
+```
+
+:::note
+
+This method extracts the namespace names of user tables dynamically. As a result, only namespaces that contain tables are returned. Starting from ScalarDB 4.0, we plan to improve the design to remove this limitation.
+
+:::
+
+### Get the tables of a namespace
+
+You can get the tables of a namespace as follows:
+
+```java
+// Get the tables of the namespace "ns".
+Set<String> tables = admin.getNamespaceTableNames("ns");
+```
+
+### Get table metadata
+
+You can get table metadata as follows:
+
+```java
+// Get the table metadata for "ns.tbl".
+TableMetadata tableMetadata = admin.getTableMetadata("ns", "tbl");
+```
+
+### Repair a table
+
+You can repair the table metadata of an existing table as follows:
+
+```java
+// Repair the table "ns.tbl" with options.
+TableMetadata tableMetadata =
+ TableMetadata.newBuilder()
+ ...
+ .build();
+Map<String, String> options = ...;
+admin.repairTable("ns", "tbl", tableMetadata, options);
+```
+
+### Specify operations for the Coordinator table
+
+The Coordinator table is used by the [Transactional API](#transactional-api) to track the statuses of transactions.
+
+When using a transaction manager, you must create the Coordinator table to execute transactions. In addition to creating the table, you can truncate and drop the Coordinator table.
+
+#### Create the Coordinator table
+
+You can create the Coordinator table as follows:
+
+```java
+// Create the Coordinator table.
+admin.createCoordinatorTables();
+
+// Create the Coordinator table only if one does not already exist.
+boolean ifNotExist = true;
+admin.createCoordinatorTables(ifNotExist);
+
+// Create the Coordinator table with options.
+Map<String, String> options = ...;
+admin.createCoordinatorTables(options);
+```
+
+#### Truncate the Coordinator table
+
+You can truncate the Coordinator table as follows:
+
+```java
+// Truncate the Coordinator table.
+admin.truncateCoordinatorTables();
+```
+
+#### Drop the Coordinator table
+
+You can drop the Coordinator table as follows:
+
+```java
+// Drop the Coordinator table.
+admin.dropCoordinatorTables();
+
+// Drop the Coordinator table only if one exists.
+boolean ifExist = true;
+admin.dropCoordinatorTables(ifExist);
+```
+
+### Import a table
+
+You can import an existing table to ScalarDB as follows:
+
+```java
+// Import the table "ns.tbl". If the table is already managed by ScalarDB, the target table does not
+// exist, or the table does not meet the requirements of the ScalarDB table, an exception will be thrown.
+admin.importTable("ns", "tbl", options, overrideColumnsType);
+```
+
+:::warning
+
+You should carefully plan to import a table to ScalarDB in production because it will add transaction metadata columns to your database tables and the ScalarDB metadata tables. In this case, there would also be several differences between your database and ScalarDB, as well as some limitations. For details, see [Importing Existing Tables to ScalarDB by Using ScalarDB Schema Loader](./schema-loader-import.mdx).
+
+:::
+
+## Transactional API
+
+This section explains how to execute transactional operations by using the Transactional API in ScalarDB.
+
+### Get a `DistributedTransactionManager` instance
+
+You first need to get a `DistributedTransactionManager` instance to execute transactional operations.
+
+To get a `DistributedTransactionManager` instance, you can use `TransactionFactory` as follows:
+
+```java
+TransactionFactory transactionFactory = TransactionFactory.create("<CONFIGURATION_FILE_PATH>");
+DistributedTransactionManager transactionManager = transactionFactory.getTransactionManager();
+```
+
+After you have executed all transactional operations, you should close the `DistributedTransactionManager` instance as follows:
+
+```java
+transactionManager.close();
+```
+
+### Execute transactions
+
+This subsection explains how to execute transactions with multiple CRUD operations.
+
+#### Begin or start a transaction
+
+Before executing transactional CRUD operations, you need to begin or start a transaction.
+
+You can begin a transaction as follows:
+
+```java
+// Begin a transaction.
+DistributedTransaction transaction = transactionManager.begin();
+```
+
+Or, you can start a transaction as follows:
+
+```java
+// Start a transaction.
+DistributedTransaction transaction = transactionManager.start();
+```
+
+Alternatively, you can use the `begin` method for a transaction by specifying a transaction ID as follows:
+
+```java
+// Begin a transaction by specifying a transaction ID.
+DistributedTransaction transaction = transactionManager.begin("<TRANSACTION_ID>");
+```
+
+Or, you can use the `start` method for a transaction by specifying a transaction ID as follows:
+
+```java
+// Start a transaction by specifying a transaction ID.
+DistributedTransaction transaction = transactionManager.start("<TRANSACTION_ID>");
+```
+
+:::note
+
+Specifying a transaction ID is useful when you want to link external systems to ScalarDB. Otherwise, you should use the `begin()` method or the `start()` method.
+
+When you specify a transaction ID, make sure you specify a unique ID (for example, UUID v4) throughout the system since ScalarDB depends on the uniqueness of transaction IDs for correctness.
+
+:::
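+
+For example, a minimal sketch that begins a transaction with an application-generated UUID v4 (using `java.util.UUID`) might look like this:
+
+```java
+// Generate a unique transaction ID (UUID v4) and begin a transaction with it.
+String transactionId = UUID.randomUUID().toString();
+DistributedTransaction transaction = transactionManager.begin(transactionId);
+```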
+
+##### Begin or start a transaction in read-only mode
+
+You can also begin or start a transaction in read-only mode. In this case, the transaction will not allow any write operations, and it will be optimized for read operations.
+
+:::note
+
+Using read-only transactions for read-only operations is strongly recommended to improve performance and reduce resource usage.
+
+:::
+
+You can begin or start a transaction in read-only mode as follows:
+
+```java
+// Begin a transaction in read-only mode.
+DistributedTransaction transaction = transactionManager.beginReadOnly();
+```
+
+```java
+// Start a transaction in read-only mode.
+DistributedTransaction transaction = transactionManager.startReadOnly();
+```
+
+Alternatively, you can use the `beginReadOnly` and `startReadOnly` methods by specifying a transaction ID as follows:
+
+```java
+// Begin a transaction in read-only mode by specifying a transaction ID.
+DistributedTransaction transaction = transactionManager.beginReadOnly("<TRANSACTION_ID>");
+```
+
+```java
+// Start a transaction in read-only mode by specifying a transaction ID.
+DistributedTransaction transaction = transactionManager.startReadOnly("<TRANSACTION_ID>");
+```
+
+:::note
+
+Specifying a transaction ID is useful when you want to link external systems to ScalarDB. Otherwise, you should use the `beginReadOnly()` method or the `startReadOnly()` method.
+
+When you specify a transaction ID, make sure you specify a unique ID (for example, UUID v4) throughout the system since ScalarDB depends on the uniqueness of transaction IDs for correctness.
+
+:::
+
+#### Join a transaction
+
+Joining a transaction is particularly useful in a stateful application where a transaction spans multiple client requests. In such a scenario, the application can start a transaction during the first client request. Then, in subsequent client requests, the application can join the ongoing transaction by using the `join()` method.
+
+You can join an ongoing transaction that has already begun by specifying the transaction ID as follows:
+
+```java
+// Join a transaction.
+DistributedTransaction transaction = transactionManager.join("<TRANSACTION_ID>");
+```
+
+:::note
+
+You can get the transaction ID by calling `getId()` as follows:
+
+```java
+transaction.getId();
+```
+
+:::
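+
+As a rough sketch of the stateful scenario described above (the request-handling flow here is hypothetical), the application could begin a transaction in one client request, hand the transaction ID back to the client, and join the transaction in a later request:
+
+```java
+// First client request: begin a transaction and return its ID to the client.
+DistributedTransaction transaction = transactionManager.begin();
+String transactionId = transaction.getId();
+
+// Subsequent client request: join the ongoing transaction by its ID and continue working in it.
+DistributedTransaction joined = transactionManager.join(transactionId);
+// ... execute CRUD operations ...
+joined.commit();
+```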
+
+#### Resume a transaction
+
+Resuming a transaction is particularly useful in a stateful application where a transaction spans multiple client requests. In such a scenario, the application can start a transaction during the first client request. Then, in subsequent client requests, the application can resume the ongoing transaction by using the `resume()` method.
+
+You can resume an ongoing transaction that you have already begun by specifying a transaction ID as follows:
+
+```java
+// Resume a transaction.
+DistributedTransaction transaction = transactionManager.resume("<TRANSACTION_ID>");
+```
+
+:::note
+
+You can get the transaction ID by calling `getId()` as follows:
+
+```java
+transaction.getId();
+```
+
+:::
+
+#### Implement CRUD operations
+
+The following sections describe key construction and CRUD operations.
+
+:::note
+
+Although all the builders of the CRUD operations can specify consistency by using the `consistency()` methods, those methods are ignored. Instead, the `LINEARIZABLE` consistency level is always used in transactions.
+
+:::
+
+##### Key construction
+
+Most CRUD operations need to specify `Key` objects (partition-key, clustering-key, etc.). So, before moving on to CRUD operations, the following explains how to construct a `Key` object.
+
+For a single column key, you can use `Key.of()` methods to construct the key as follows:
+
+```java
+// For a key that consists of a single column of INT.
+Key key1 = Key.ofInt("col1", 1);
+
+// For a key that consists of a single column of BIGINT.
+Key key2 = Key.ofBigInt("col1", 100L);
+
+// For a key that consists of a single column of DOUBLE.
+Key key3 = Key.ofDouble("col1", 1.3d);
+
+// For a key that consists of a single column of TEXT.
+Key key4 = Key.ofText("col1", "value");
+```
+
+For a key that consists of two to five columns, you can use the `Key.of()` method to construct the key as follows. Similar to `ImmutableMap.of()` in Guava, you need to specify column names and values in turns:
+
+```java
+// For a key that consists of two to five columns.
+Key key1 = Key.of("col1", 1, "col2", 100L);
+Key key2 = Key.of("col1", 1, "col2", 100L, "col3", 1.3d);
+Key key3 = Key.of("col1", 1, "col2", 100L, "col3", 1.3d, "col4", "value");
+Key key4 = Key.of("col1", 1, "col2", 100L, "col3", 1.3d, "col4", "value", "col5", false);
+```
+
+For a key that consists of more than five columns, you can use the builder to construct the key as follows:
+
+```java
+// For a key that consists of more than five columns.
+Key key = Key.newBuilder()
+ .addInt("col1", 1)
+ .addBigInt("col2", 100L)
+ .addDouble("col3", 1.3d)
+ .addText("col4", "value")
+ .addBoolean("col5", false)
+ .addInt("col6", 100)
+ .build();
+```
+
+##### `Get` operation
+
+`Get` is an operation to retrieve a single record specified by a primary key.
+
+You need to create a `Get` object first, and then you can execute the object by using the `transaction.get()` method as follows:
+
+```java
+// Create a `Get` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .projections("c1", "c2", "c3", "c4")
+ .where(ConditionBuilder.column("c1").isNotEqualToInt(10))
+ .build();
+
+// Execute the `Get` operation.
+Optional<Result> result = transaction.get(get);
+```
+
+You can specify projections to choose which columns are returned.
+
+###### Use the `WHERE` clause
+
+You can also specify arbitrary conditions by using the `where()` method. If the retrieved record does not match the conditions specified by the `where()` method, `Optional.empty()` will be returned. As an argument of the `where()` method, you can specify a condition, an AND-wise condition set, or an OR-wise condition set. After calling the `where()` method, you can add more conditions or condition sets by using the `and()` method or `or()` method as follows:
+
+```java
+// Create a `Get` operation with condition sets.
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .where(
+ ConditionSetBuilder.condition(ConditionBuilder.column("c1").isLessThanInt(10))
+ .or(ConditionBuilder.column("c1").isGreaterThanInt(20))
+ .build())
+ .and(
+ ConditionSetBuilder.condition(ConditionBuilder.column("c2").isLikeText("a%"))
+ .or(ConditionBuilder.column("c2").isLikeText("b%"))
+ .build())
+ .build();
+```
+
+:::note
+
+In the `where()` condition method chain, the conditions must be an AND-wise junction of `ConditionalExpression` or `OrConditionSet` (known as conjunctive normal form) like the above example or an OR-wise junction of `ConditionalExpression` or `AndConditionSet` (known as disjunctive normal form).
+
+:::
+
+For more details about available conditions and condition sets, see the `ConditionBuilder` and `ConditionSetBuilder` pages in the Javadoc.
+
+###### Handle `Result` objects
+
+The `Get` operation and `Scan` operation return `Result` objects. The following shows how to handle `Result` objects.
+
+You can get a column value of a result by using `get("<COLUMN_NAME>")` methods as follows:
+
+```java
+// Get the BOOLEAN value of a column.
+boolean booleanValue = result.getBoolean("<COLUMN_NAME>");
+
+// Get the INT value of a column.
+int intValue = result.getInt("<COLUMN_NAME>");
+
+// Get the BIGINT value of a column.
+long bigIntValue = result.getBigInt("<COLUMN_NAME>");
+
+// Get the FLOAT value of a column.
+float floatValue = result.getFloat("<COLUMN_NAME>");
+
+// Get the DOUBLE value of a column.
+double doubleValue = result.getDouble("<COLUMN_NAME>");
+
+// Get the TEXT value of a column.
+String textValue = result.getText("<COLUMN_NAME>");
+
+// Get the BLOB value of a column as a `ByteBuffer`.
+ByteBuffer blobValue = result.getBlob("<COLUMN_NAME>");
+
+// Get the BLOB value of a column as a `byte` array.
+byte[] blobValueAsBytes = result.getBlobAsBytes("<COLUMN_NAME>");
+
+// Get the DATE value of a column as a `LocalDate`.
+LocalDate dateValue = result.getDate("<COLUMN_NAME>");
+
+// Get the TIME value of a column as a `LocalTime`.
+LocalTime timeValue = result.getTime("<COLUMN_NAME>");
+
+// Get the TIMESTAMP value of a column as a `LocalDateTime`.
+LocalDateTime timestampValue = result.getTimestamp("<COLUMN_NAME>");
+
+// Get the TIMESTAMPTZ value of a column as an `Instant`.
+Instant timestampTZValue = result.getTimestampTZ("<COLUMN_NAME>");
+```
+
+And if you need to check whether a value of a column is null, you can use the `isNull("<COLUMN_NAME>")` method.
+
+```java
+// Check if a value of a column is null.
+boolean isNull = result.isNull("<COLUMN_NAME>");
+```
+
+For more details, see the `Result` page in the Javadoc.
+
+###### Execute `Get` by using a secondary index
+
+You can execute a `Get` operation by using a secondary index.
+
+Instead of specifying a partition key, you can specify an index key (indexed column) to use a secondary index as follows:
+
+```java
+// Create a `Get` operation by using a secondary index.
+Key indexKey = Key.ofFloat("c4", 1.23F);
+
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .indexKey(indexKey)
+ .projections("c1", "c2", "c3", "c4")
+ .where(ConditionBuilder.column("c1").isNotEqualToInt(10))
+ .build();
+
+// Execute the `Get` operation.
+Optional<Result> result = transaction.get(get);
+```
+
+You can also specify arbitrary conditions by using the `where()` method. For details, see [Use the `WHERE` clause](#use-the-where-clause).
+
+:::note
+
+If the result has more than one record, `transaction.get()` will throw an exception. If you want to handle multiple results, see [Execute `Scan` by using a secondary index](#execute-scan-by-using-a-secondary-index).
+
+:::
+
+##### `Scan` operation
+
+`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations. To execute a `Scan` operation, you can use the `transaction.scan()` method or the `transaction.getScanner()` method:
+
+- `transaction.scan()`:
+ - This method immediately executes the given `Scan` operation and returns a list of all matching records. It is suitable when the result set is expected to be small enough to fit in memory.
+- `transaction.getScanner()`:
+ - This method returns a `Scanner` object that allows you to iterate over the result set lazily. It is useful when the result set may be large, as it avoids loading all records into memory at once.
+
+You need to create a `Scan` object first, and then you can execute the object by using the `transaction.scan()` method or the `transaction.getScanner()` method as follows:
+
+```java
+// Create a `Scan` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key startClusteringKey = Key.of("c2", "aaa", "c3", 100L);
+Key endClusteringKey = Key.of("c2", "aaa", "c3", 300L);
+
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .start(startClusteringKey, true) // Include startClusteringKey
+ .end(endClusteringKey, false) // Exclude endClusteringKey
+ .projections("c1", "c2", "c3", "c4")
+ .orderings(Scan.Ordering.desc("c2"), Scan.Ordering.asc("c3"))
+ .where(ConditionBuilder.column("c1").isNotEqualToInt(10))
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation by using the `transaction.scan()` method.
+List<Result> results = transaction.scan(scan);
+
+// Or, execute the `Scan` operation by using the `transaction.getScanner()` method.
+try (TransactionCrudOperable.Scanner scanner = transaction.getScanner(scan)) {
+ // Fetch the next result from the scanner
+ Optional<Result> result = scanner.one();
+
+ // Fetch all remaining results from the scanner
+ List<Result> allResults = scanner.all();
+}
+```
+
+You can omit the clustering-key boundaries or specify either a `start` boundary or an `end` boundary. If you don't specify `orderings`, you will get results ordered by the clustering order that you defined when creating the table.
+
+In addition, you can specify `projections` to choose which columns are returned and use `limit` to specify the number of records to return in `Scan` operations.
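+
+For example, a minimal sketch of a `Scan` that specifies only a start boundary and omits `orderings`, so the results follow the clustering order defined when the table was created, might look like this:
+
+```java
+// Scan from the start boundary onward, ordered by the table's clustering order.
+Scan scan =
+    Scan.newBuilder()
+        .namespace("ns")
+        .table("tbl")
+        .partitionKey(Key.ofInt("c1", 10))
+        .start(Key.of("c2", "aaa", "c3", 100L)) // The start boundary is inclusive by default.
+        .build();
+
+// Execute the `Scan` operation.
+List<Result> results = transaction.scan(scan);
+```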
+
+###### Use the `WHERE` clause
+
+You can also specify arbitrary conditions by using the `where()` method to filter scanned records. As an argument of the `where()` method, you can specify a condition, an AND-wise condition set, or an OR-wise condition set. After calling the `where()` method, you can add more conditions or condition sets by using the `and()` method or `or()` method as follows:
+
+```java
+// Create a `Scan` operation with condition sets.
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .all()
+ .where(
+ ConditionSetBuilder.condition(ConditionBuilder.column("c1").isLessThanInt(10))
+ .or(ConditionBuilder.column("c1").isGreaterThanInt(20))
+ .build())
+ .and(
+ ConditionSetBuilder.condition(ConditionBuilder.column("c2").isLikeText("a%"))
+ .or(ConditionBuilder.column("c2").isLikeText("b%"))
+ .build())
+ .limit(10)
+ .build();
+```
+
+:::note
+
+In the `where()` condition method chain, the conditions must be an AND-wise junction of `ConditionalExpression` or `OrConditionSet` (known as conjunctive normal form) like the above example or an OR-wise junction of `ConditionalExpression` or `AndConditionSet` (known as disjunctive normal form).
+
+:::
+
+For more details about available conditions and condition sets, see the `ConditionBuilder` and `ConditionSetBuilder` pages in the Javadoc.
+
+###### Execute `Scan` by using a secondary index
+
+You can execute a `Scan` operation by using a secondary index.
+
+Instead of specifying a partition key, you can specify an index key (indexed column) to use a secondary index as follows:
+
+```java
+// Create a `Scan` operation by using a secondary index.
+Key indexKey = Key.ofFloat("c4", 1.23F);
+
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .indexKey(indexKey)
+ .projections("c1", "c2", "c3", "c4")
+ .where(ConditionBuilder.column("c1").isNotEqualToInt(10))
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation.
+List<Result> results = transaction.scan(scan);
+```
+
+You can also specify arbitrary conditions using the `where()` method. For details, see [Use the `WHERE` clause](#use-the-where-clause-1).
+
+:::note
+
+You can't specify clustering-key boundaries and orderings in `Scan` by using a secondary index.
+
+:::
+
+###### Execute cross-partition `Scan` without specifying a partition key to retrieve all the records of a table
+
+You can execute a `Scan` operation across all partitions, which we call *cross-partition scan*, without specifying a partition key by enabling the following configuration in the ScalarDB properties file.
+
+```properties
+scalar.db.cross_partition_scan.enabled=true
+```
+
+:::warning
+
+For non-JDBC databases, transactions could be executed at read-committed snapshot isolation (`SNAPSHOT`), which is a lower isolation level, even if you enable cross-partition scan with the `SERIALIZABLE` isolation level. When using non-JDBC databases, use cross-partition scan only if consistency does not matter for your transactions.
+
+:::
+
+Instead of calling the `partitionKey()` method in the builder, you can call the `all()` method to scan a table without specifying a partition key as follows:
+
+```java
+// Create a `Scan` operation without specifying a partition key.
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .all()
+ .projections("c1", "c2", "c3", "c4")
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation.
+List<Result> results = transaction.scan(scan);
+```
+
+:::note
+
+You can't specify any orderings in cross-partition `Scan` when using non-JDBC databases. For details on how to use cross-partition `Scan` with filtering or ordering, see [Execute cross-partition `Scan` with filtering and ordering](#execute-cross-partition-scan-with-filtering-and-ordering).
+
+:::
+
+###### Execute cross-partition `Scan` with filtering and ordering
+
+By enabling the cross-partition scan option with filtering and ordering as follows, you can execute a cross-partition `Scan` operation with flexible conditions and orderings:
+
+```properties
+scalar.db.cross_partition_scan.enabled=true
+scalar.db.cross_partition_scan.filtering.enabled=true
+scalar.db.cross_partition_scan.ordering.enabled=true
+```
+
+:::note
+
+You can't enable `scalar.db.cross_partition_scan.ordering` in non-JDBC databases.
+
+:::
+
+You can call the `where()` and `ordering()` methods after calling the `all()` method to specify arbitrary conditions and orderings as follows:
+
+```java
+// Create a `Scan` operation with arbitrary conditions and orderings.
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .all()
+ .where(ConditionBuilder.column("c1").isNotEqualToInt(10))
+ .projections("c1", "c2", "c3", "c4")
+ .orderings(Scan.Ordering.desc("c3"), Scan.Ordering.asc("c4"))
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation.
+List<Result> results = transaction.scan(scan);
+```
+
+For details about the `WHERE` clause, see [Use the `WHERE` clause](#use-the-where-clause-1).
+
+##### `Put` operation
+
+:::note
+
+The `Put` operation is deprecated as of ScalarDB 3.13 and will be removed in a future release. Instead of using the `Put` operation, use the `Insert` operation, the `Upsert` operation, or the `Update` operation.
+
+:::
+
+`Put` is an operation to put a record specified by a primary key. The operation behaves as an upsert operation for a record, in which the operation updates the record if the record exists or inserts the record if the record does not exist.
+
+:::note
+
+When you update an existing record, you need to read the record by using `Get` or `Scan` before using a `Put` operation. Otherwise, the operation will fail due to a conflict. This occurs because of the specification of ScalarDB to manage transactions properly. Instead of reading the record explicitly, you can enable implicit pre-read. For details, see [Enable implicit pre-read for `Put` operations](#enable-implicit-pre-read-for-put-operations).
+
+:::
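+
+For example, a minimal sketch of the read-then-write pattern, reading the record with `Get` before updating it with `Put` in the same transaction (reusing the `ns.tbl` table from this guide), might look like this:
+
+```java
+// Read the record first so that the subsequent `Put` does not fail due to a conflict.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Optional<Result> result =
+    transaction.get(
+        Get.newBuilder()
+            .namespace("ns")
+            .table("tbl")
+            .partitionKey(partitionKey)
+            .clusteringKey(clusteringKey)
+            .build());
+
+// Then update the record in the same transaction.
+transaction.put(
+    Put.newBuilder()
+        .namespace("ns")
+        .table("tbl")
+        .partitionKey(partitionKey)
+        .clusteringKey(clusteringKey)
+        .floatValue("c4", 1.23F)
+        .build());
+```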
+
+You need to create a `Put` object first, and then you can execute the object by using the `transaction.put()` method as follows:
+
+```java
+// Create a `Put` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Put` operation.
+transaction.put(put);
+```
+
+You can also put a record with `null` values as follows:
+
+```java
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", null)
+ .doubleValue("c5", null)
+ .build();
+```
+
+###### Enable implicit pre-read for `Put` operations
+
+In Consensus Commit, an application must read a record before mutating the record with `Put` and `Delete` operations to obtain the latest states of the record if the record exists. Instead of reading the record explicitly, you can enable *implicit pre-read*. By enabling implicit pre-read, if an application does not read the record explicitly in a transaction, ScalarDB will read the record on behalf of the application before committing the transaction.
+
+You can enable implicit pre-read for a `Put` operation by specifying `enableImplicitPreRead()` in the `Put` operation builder as follows:
+
+```java
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .enableImplicitPreRead()
+ .build();
+```
+
+:::note
+
+If you are certain that a record you are trying to mutate does not exist, you should not enable implicit pre-read for the `Put` operation for better performance. For example, if you load initial data, you should not enable implicit pre-read. A `Put` operation without implicit pre-read is faster than a `Put` operation with implicit pre-read because the operation skips an unnecessary read.
+
+:::
+
+##### `Insert` operation
+
+`Insert` is an operation to insert an entry into the underlying storage through a transaction. If the entry already exists, a conflict error will occur.
+
+You need to create an `Insert` object first, and then you can execute the object by using the `transaction.insert()` method as follows:
+
+```java
+// Create an `Insert` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Insert insert =
+ Insert.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Insert` operation.
+transaction.insert(insert);
+```
+
+##### `Upsert` operation
+
+`Upsert` is an operation to insert an entry into or update an entry in the underlying storage through a transaction. If the entry already exists, it will be updated; otherwise, the entry will be inserted.
+
+You need to create an `Upsert` object first, and then you can execute the object by using the `transaction.upsert()` method as follows:
+
+```java
+// Create an `Upsert` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Upsert upsert =
+ Upsert.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Upsert` operation.
+transaction.upsert(upsert);
+```
+
+##### `Update` operation
+
+`Update` is an operation to update an entry in the underlying storage through a transaction. If the entry does not exist, the operation will not make any changes.
+
+You need to create an `Update` object first, and then you can execute the object by using the `transaction.update()` method as follows:
+
+```java
+// Create an `Update` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Update update =
+ Update.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Update` operation.
+transaction.update(update);
+```
+
+##### `Delete` operation
+
+`Delete` is an operation to delete a record specified by a primary key.
+
+:::note
+
+When you delete a record, you don't have to read the record beforehand because implicit pre-read is always enabled for `Delete` operations.
+
+:::
+
+You need to create a `Delete` object first, and then you can execute the object by using the `transaction.delete()` method as follows:
+
+```java
+// Create a `Delete` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .build();
+
+// Execute the `Delete` operation.
+transaction.delete(delete);
+```
+
+##### `Put`, `Delete`, and `Update` with a condition
+
+You can write arbitrary conditions (for example, a bank account balance must be equal to or more than zero) that you require a transaction to meet before being committed by implementing logic that checks the conditions in the transaction. Alternatively, you can write simple conditions in a mutation operation, such as `Put`, `Delete`, and `Update`.
+
+When a `Put`, `Delete`, or `Update` operation includes a condition, the operation is executed only if the specified condition is met. If the condition is not met when the operation is executed, an exception called `UnsatisfiedConditionException` will be thrown.
+
+:::note
+
+When you specify a condition in a `Put` operation, you need to read the record beforehand or enable implicit pre-read.
+
+:::
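+
+As a concrete sketch of the bank-account example above (the `account` table, its columns, and the `currentBalance` variable are hypothetical), an `Update` could require a sufficient balance before applying a withdrawal:
+
+```java
+// Require the balance to be at least the withdrawal amount (100 here) before applying the update.
+MutationCondition sufficientBalance =
+    ConditionBuilder.updateIf(ConditionBuilder.column("balance").isGreaterThanOrEqualToInt(100))
+        .build();
+
+Update withdraw =
+    Update.newBuilder()
+        .namespace("ns")
+        .table("account")
+        .partitionKey(Key.ofInt("account_id", 1))
+        .intValue("balance", currentBalance - 100) // `currentBalance` was read earlier in the transaction.
+        .condition(sufficientBalance)
+        .build();
+
+// If the balance is lower than 100, `UnsatisfiedConditionException` will be thrown.
+transaction.update(withdraw);
+```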
+
+###### Conditions for `Put`
+
+You can specify a condition in a `Put` operation as follows:
+
+```java
+// Build a condition.
+MutationCondition condition =
+ ConditionBuilder.putIf(ConditionBuilder.column("c4").isEqualToFloat(0.0F))
+ .and(ConditionBuilder.column("c5").isEqualToDouble(0.0))
+ .build();
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .condition(condition) // condition
+ .build();
+
+// Execute the `Put` operation.
+transaction.put(put);
+```
+
+In addition to using the `putIf` condition, you can specify the `putIfExists` and `putIfNotExists` conditions as follows:
+
+```java
+// Build a `putIfExists` condition.
+MutationCondition putIfExistsCondition = ConditionBuilder.putIfExists();
+
+// Build a `putIfNotExists` condition.
+MutationCondition putIfNotExistsCondition = ConditionBuilder.putIfNotExists();
+```
+
+###### Conditions for `Delete`
+
+You can specify a condition in a `Delete` operation as follows:
+
+```java
+// Build a condition.
+MutationCondition condition =
+ ConditionBuilder.deleteIf(ConditionBuilder.column("c4").isEqualToFloat(0.0F))
+ .and(ConditionBuilder.column("c5").isEqualToDouble(0.0))
+ .build();
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .condition(condition) // condition
+ .build();
+
+// Execute the `Delete` operation.
+transaction.delete(delete);
+```
+
+In addition to using the `deleteIf` condition, you can specify the `deleteIfExists` condition as follows:
+
+```java
+// Build a `deleteIfExists` condition.
+MutationCondition deleteIfExistsCondition = ConditionBuilder.deleteIfExists();
+```
+
+###### Conditions for `Update`
+
+You can specify a condition in an `Update` operation as follows:
+
+```java
+// Build a condition.
+MutationCondition condition =
+ ConditionBuilder.updateIf(ConditionBuilder.column("c4").isEqualToFloat(0.0F))
+ .and(ConditionBuilder.column("c5").isEqualToDouble(0.0))
+ .build();
+
+Update update =
+ Update.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .condition(condition) // condition
+ .build();
+
+// Execute the `Update` operation.
+transaction.update(update);
+```
+
+In addition to using the `updateIf` condition, you can specify the `updateIfExists` condition as follows:
+
+```java
+// Build an `updateIfExists` condition.
+MutationCondition updateIfExistsCondition = ConditionBuilder.updateIfExists();
+```
+
+##### Mutate operation
+
+Mutate is an operation to execute multiple mutations (`Put`, `Insert`, `Upsert`, `Update`, and `Delete` operations).
+
+You need to create mutation objects first, and then you can execute the objects by using the `transaction.mutate()` method as follows:
+
+```java
+// Create `Put` and `Delete` operations.
+Key partitionKey = Key.ofInt("c1", 10);
+
+Key clusteringKeyForPut = Key.of("c2", "aaa", "c3", 100L);
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKeyForPut)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+Key clusteringKeyForDelete = Key.of("c2", "bbb", "c3", 200L);
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKeyForDelete)
+ .build();
+
+// Execute the operations.
+transaction.mutate(Arrays.asList(put, delete));
+```
+
+##### Default namespace for CRUD operations
+
+A default namespace for all CRUD operations can be set by using a property in the ScalarDB configuration.
+
+```properties
+scalar.db.default_namespace_name=<NAMESPACE_NAME>
+```
+
+Any operation that does not specify a namespace will use the default namespace set in the configuration.
+
+```java
+// This operation will target the default namespace.
+Scan scanUsingDefaultNamespace =
+ Scan.newBuilder()
+ .table("tbl")
+ .all()
+ .build();
+// This operation will target the "ns" namespace.
+Scan scanUsingSpecifiedNamespace =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .all()
+ .build();
+```
+
+##### Operation attributes
+
+An operation attribute is a key-value pair that can be used to store additional information about an operation. You can set operation attributes by using the `attribute()` or `attributes()` method in the operation builder, as shown below:
+
+```java
+// Set operation attributes in the `Get` operation.
+Get get = Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .attribute("attribute1", "value1")
+ .attributes(ImmutableMap.of("attribute2", "value2", "attribute3", "value3"))
+ .build();
+
+// Set operation attributes in the `Scan` operation.
+Scan scan = Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .projections("c1", "c2", "c3", "c4")
+ .attribute("attribute1", "value1")
+ .attributes(ImmutableMap.of("attribute2", "value2", "attribute3", "value3"))
+ .build();
+
+// Set operation attributes in the `Insert` operation.
+Insert insert = Insert.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .attribute("attribute1", "value1")
+ .attributes(ImmutableMap.of("attribute2", "value2", "attribute3", "value3"))
+ .build();
+
+// Set operation attributes in the `Upsert` operation.
+Upsert upsert = Upsert.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .attribute("attribute1", "value1")
+ .attributes(ImmutableMap.of("attribute2", "value2", "attribute3", "value3"))
+ .build();
+
+// Set operation attributes in the `Update` operation.
+Update update = Update.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .attribute("attribute1", "value1")
+ .attributes(ImmutableMap.of("attribute2", "value2", "attribute3", "value3"))
+ .build();
+
+// Set operation attributes in the `Delete` operation.
+Delete delete = Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .attribute("attribute1", "value1")
+ .attributes(ImmutableMap.of("attribute2", "value2", "attribute3", "value3"))
+ .build();
+```
+
+:::note
+
+ScalarDB currently has no available operation attributes.
+
+:::
+
+#### Commit a transaction
+
+After executing CRUD operations, you need to commit a transaction to finish it.
+
+You can commit a transaction as follows:
+
+```java
+// Commit a transaction.
+transaction.commit();
+```
+
+#### Roll back or abort a transaction
+
+If an error occurs when executing a transaction, you can roll back or abort the transaction.
+
+You can roll back a transaction as follows:
+
+```java
+// Roll back a transaction.
+transaction.rollback();
+```
+
+Or, you can abort a transaction as follows:
+
+```java
+// Abort a transaction.
+transaction.abort();
+```
+
+For details about how to handle exceptions in ScalarDB, see [How to handle exceptions](#how-to-handle-exceptions).
+
+### Execute transactions without beginning or starting a transaction
+
+You can execute transactional operations without beginning or starting a transaction. In this case, ScalarDB will automatically begin a transaction before executing the operations and commit the transaction after executing the operations. This section explains how to execute transactions without beginning or starting a transaction.
+
+#### Execute `Get` operation
+
+`Get` is an operation to retrieve a single record specified by a primary key.
+
+You need to create a `Get` object first, and then you can execute the object by using the `transactionManager.get()` method as follows:
+
+```java
+// Create a `Get` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .projections("c1", "c2", "c3", "c4")
+ .build();
+
+// Execute the `Get` operation.
+Optional<Result> result = transactionManager.get(get);
+```
+
+For details about the `Get` operation, see [`Get` operation](#get-operation).
+
+#### Execute `Scan` operation
+
+`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations. To execute a `Scan` operation, you can use the `transactionManager.scan()` method or the `transactionManager.getScanner()` method:
+
+- `transactionManager.scan()`:
+ - This method immediately executes the given `Scan` operation and returns a list of all matching records. It is suitable when the result set is expected to be small enough to fit in memory.
+- `transactionManager.getScanner()`:
+ - This method returns a `Scanner` object that allows you to iterate over the result set lazily. It is useful when the result set may be large, as it avoids loading all records into memory at once.
+
+You need to create a `Scan` object first, and then you can execute the object by using the `transactionManager.scan()` method or the `transactionManager.getScanner()` method as follows:
+
+```java
+// Create a `Scan` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key startClusteringKey = Key.of("c2", "aaa", "c3", 100L);
+Key endClusteringKey = Key.of("c2", "aaa", "c3", 300L);
+
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .start(startClusteringKey, true) // Include startClusteringKey
+ .end(endClusteringKey, false) // Exclude endClusteringKey
+ .projections("c1", "c2", "c3", "c4")
+ .orderings(Scan.Ordering.desc("c2"), Scan.Ordering.asc("c3"))
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation by using the `transactionManager.scan()` method.
+List<Result> results = transactionManager.scan(scan);
+
+// Or, execute the `Scan` operation by using the `transactionManager.getScanner()` method.
+try (TransactionManagerCrudOperable.Scanner scanner = transactionManager.getScanner(scan)) {
+ // Fetch the next result from the scanner
+ Optional<Result> result = scanner.one();
+
+ // Fetch all remaining results from the scanner
+ List<Result> allResults = scanner.all();
+}
+```
+
+For details about the `Scan` operation, see [`Scan` operation](#scan-operation).
+
+#### Execute `Put` operation
+
+:::note
+
+The `Put` operation is deprecated as of ScalarDB 3.13 and will be removed in a future release. Instead of using the `Put` operation, use the `Insert` operation, the `Upsert` operation, or the `Update` operation.
+
+:::
+
+You need to create a `Put` object first, and then you can execute the object by using the `transactionManager.put()` method as follows:
+
+```java
+// Create a `Put` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Put` operation.
+transactionManager.put(put);
+```
+
+For details about the `Put` operation, see [`Put` operation](#put-operation).
+
+#### Execute `Insert` operation
+
+`Insert` is an operation to insert an entry into the underlying storage through a transaction. If the entry already exists, a conflict error will occur.
+
+You need to create an `Insert` object first, and then you can execute the object by using the `transactionManager.insert()` method as follows:
+
+```java
+// Create an `Insert` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Insert insert =
+ Insert.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Insert` operation.
+transactionManager.insert(insert);
+```
+
+For details about the `Insert` operation, see [`Insert` operation](#insert-operation).
+
+#### Execute `Upsert` operation
+
+`Upsert` is an operation to insert an entry into or update an entry in the underlying storage through a transaction. If the entry already exists, it will be updated; otherwise, the entry will be inserted.
+
+You need to create an `Upsert` object first, and then you can execute the object by using the `transactionManager.upsert()` method as follows:
+
+```java
+// Create an `Upsert` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Upsert upsert =
+ Upsert.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Upsert` operation.
+transactionManager.upsert(upsert);
+```
+
+For details about the `Upsert` operation, see [`Upsert` operation](#upsert-operation).
+
+#### Execute `Update` operation
+
+`Update` is an operation to update an entry in the underlying storage through a transaction. If the entry does not exist, the operation will not make any changes.
+
+You need to create an `Update` object first, and then you can execute the object by using the `transactionManager.update()` method as follows:
+
+```java
+// Create an `Update` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Update update =
+ Update.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Update` operation.
+transactionManager.update(update);
+```
+
+For details about the `Update` operation, see [`Update` operation](#update-operation).
+
+#### Execute `Delete` operation
+
+`Delete` is an operation to delete a record specified by a primary key.
+
+You need to create a `Delete` object first, and then you can execute the object by using the `transactionManager.delete()` method as follows:
+
+```java
+// Create a `Delete` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .build();
+
+// Execute the `Delete` operation.
+transactionManager.delete(delete);
+```
+
+For details about the `Delete` operation, see [`Delete` operation](#delete-operation).
+
+#### Execute Mutate operation
+
+Mutate is an operation to execute multiple mutations (`Put`, `Insert`, `Upsert`, `Update`, and `Delete` operations).
+
+You need to create mutation objects first, and then you can execute the objects by using the `transactionManager.mutate()` method as follows:
+
+```java
+// Create `Put` and `Delete` operations.
+Key partitionKey = Key.ofInt("c1", 10);
+
+Key clusteringKeyForPut = Key.of("c2", "aaa", "c3", 100L);
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKeyForPut)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+Key clusteringKeyForDelete = Key.of("c2", "bbb", "c3", 200L);
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKeyForDelete)
+ .build();
+
+// Execute the operations.
+transactionManager.mutate(Arrays.asList(put, delete));
+```
+
+For details about the Mutate operation, see [Mutate operation](#mutate-operation).
+
+In addition, for details about how to handle exceptions in ScalarDB, see [How to handle exceptions](#how-to-handle-exceptions).
+
+## How to handle exceptions
+
+When executing a transaction, you will also need to handle exceptions properly.
+
+:::warning
+
+If you don't handle exceptions properly, you may face anomalies or data inconsistency.
+
+:::
+
+The following sample code shows how to handle exceptions:
+
+```java
+public class Sample {
+ public static void main(String[] args) throws Exception {
+ TransactionFactory factory = TransactionFactory.create("<CONFIGURATION_FILE_PATH>");
+ DistributedTransactionManager transactionManager = factory.getTransactionManager();
+
+ int retryCount = 0;
+ TransactionException lastException = null;
+
+ while (true) {
+ if (retryCount++ > 0) {
+ // Retry the transaction three times maximum.
+ if (retryCount >= 3) {
+ // Throw the last exception if the number of retries exceeds the maximum.
+ throw lastException;
+ }
+
+ // Sleep 100 milliseconds before retrying the transaction.
+ TimeUnit.MILLISECONDS.sleep(100);
+ }
+
+ DistributedTransaction transaction = null;
+ try {
+ // Begin a transaction.
+ transaction = transactionManager.begin();
+
+ // Execute CRUD operations in the transaction.
+ Optional<Result> result = transaction.get(...);
+ List<Result> results = transaction.scan(...);
+ transaction.put(...);
+ transaction.delete(...);
+
+ // Commit the transaction.
+ transaction.commit();
+ } catch (UnsatisfiedConditionException e) {
+ // You need to handle `UnsatisfiedConditionException` only if a mutation operation specifies a condition.
+ // This exception indicates the condition for the mutation operation is not met.
+
+ try {
+ transaction.rollback();
+ } catch (RollbackException ex) {
+ // Rolling back the transaction failed. Since the transaction should eventually recover,
+ // you don't need to do anything further. You can simply log the occurrence here.
+ }
+
+ // You can handle the exception here, according to your application requirements.
+
+ return;
+ } catch (UnknownTransactionStatusException e) {
+ // If you catch `UnknownTransactionStatusException` when committing the transaction,
+ // it indicates that the status of the transaction, whether it was successful or not, is unknown.
+ // In such a case, you need to check if the transaction is committed successfully or not and
+ // retry the transaction if it failed. How to identify a transaction status is delegated to users.
+ return;
+ } catch (TransactionException e) {
+ // For other exceptions, you can try retrying the transaction.
+
+ // For `CrudConflictException`, `CommitConflictException`, and `TransactionNotFoundException`,
+ // you can basically retry the transaction. However, for the other exceptions, the transaction
+ // will still fail if the cause of the exception is non-transient. In such a case, you will
+ // exhaust the number of retries and throw the last exception.
+
+ if (transaction != null) {
+ try {
+ transaction.rollback();
+ } catch (RollbackException ex) {
+ // Rolling back the transaction failed. The transaction should eventually recover,
+ // so you don't need to do anything further. You can simply log the occurrence here.
+ }
+ }
+
+ lastException = e;
+ }
+ }
+ }
+}
+```
+
+### `TransactionException` and `TransactionNotFoundException`
+
+The `begin()` API could throw `TransactionException` or `TransactionNotFoundException`:
+
+- If you catch `TransactionException`, this exception indicates that the transaction has failed to begin due to transient or non-transient faults. You can try retrying the transaction, but you may not be able to begin the transaction due to non-transient faults.
+- If you catch `TransactionNotFoundException`, this exception indicates that the transaction has failed to begin due to transient faults. In this case, you can retry the transaction.
+
+The `join()` API could also throw `TransactionNotFoundException`. You can handle this exception in the same way that you handle the exceptions for the `begin()` API.
+
+### `CrudException` and `CrudConflictException`
+
+The APIs for CRUD operations (`get()`, `scan()`, `put()`, `delete()`, and `mutate()`) could throw `CrudException` or `CrudConflictException`:
+
+- If you catch `CrudException`, this exception indicates that the transaction CRUD operation has failed due to transient or non-transient faults. You can try retrying the transaction from the beginning, but the transaction may still fail if the cause is non-transient.
+- If you catch `CrudConflictException`, this exception indicates that the transaction CRUD operation has failed due to transient faults (for example, a conflict error). In this case, you can retry the transaction from the beginning.
+
+### `UnsatisfiedConditionException`
+
+The APIs for mutation operations (`put()`, `delete()`, and `mutate()`) could also throw `UnsatisfiedConditionException`.
+
+If you catch `UnsatisfiedConditionException`, this exception indicates that the condition for the mutation operation is not met. You can handle this exception according to your application requirements.
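+
+For example, the following is a minimal sketch of a conditional `Delete` that can throw `UnsatisfiedConditionException`, assuming the `ns.tbl` schema and the `transaction` object used in the examples above:
+
+```java
+// Create a `Delete` operation with a condition.
+Delete delete =
+    Delete.newBuilder()
+        .namespace("ns")
+        .table("tbl")
+        .partitionKey(Key.ofInt("c1", 10))
+        .clusteringKey(Key.of("c2", "aaa", "c3", 100L))
+        .condition(ConditionBuilder.deleteIfExists())
+        .build();
+
+try {
+  // Execute the conditional `Delete` operation.
+  transaction.delete(delete);
+} catch (UnsatisfiedConditionException e) {
+  // The record does not exist, so the condition is not met.
+  // Roll back the transaction and handle this case according to your application requirements,
+  // as shown in the sample code above.
+}
+```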
+
+### `CommitException`, `CommitConflictException`, and `UnknownTransactionStatusException`
+
+The `commit()` API could throw `CommitException`, `CommitConflictException`, or `UnknownTransactionStatusException`:
+
+- If you catch `CommitException`, this exception indicates that committing the transaction fails due to transient or non-transient faults. You can try retrying the transaction from the beginning, but the transaction may still fail if the cause is non-transient.
+- If you catch `CommitConflictException`, this exception indicates that committing the transaction has failed due to transient faults (for example, a conflict error). In this case, you can retry the transaction from the beginning.
+- If you catch `UnknownTransactionStatusException`, this exception indicates that the status of the transaction, whether it was successful or not, is unknown. In this case, you need to check if the transaction is committed successfully and retry the transaction if it has failed.
+
+How to identify a transaction status is delegated to users. You may want to create a transaction status table and update it transactionally with other application data so that you can get the status of a transaction from the status table.
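+
+The following is a minimal sketch of this approach. The `tx_status` table and its columns are hypothetical, and the status record is written in the same transaction as the application data:
+
+```java
+// Write a status record for this transaction together with the application data.
+// The `ns.tx_status` table and its columns are hypothetical.
+Put statusPut =
+    Put.newBuilder()
+        .namespace("ns")
+        .table("tx_status")
+        .partitionKey(Key.ofText("tx_id", transaction.getId()))
+        .bigIntValue("committed_at", System.currentTimeMillis())
+        .build();
+transaction.put(statusPut);
+
+// ... other application mutations in the same transaction ...
+
+transaction.commit();
+
+// If `commit()` throws `UnknownTransactionStatusException`, check the status later in a new
+// transaction: if the `tx_status` record for the transaction ID exists, the transaction was
+// committed. Otherwise, retry the transaction from the beginning.
+```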
+
+### Notes about some exceptions
+
+Although not illustrated in the sample code, the `resume()` API could also throw `TransactionNotFoundException`. This exception indicates that the transaction associated with the specified ID was not found and/or the transaction might have expired. In either case, you can retry the transaction from the beginning since the cause of this exception is basically transient.
+
+In the sample code, for `UnknownTransactionStatusException`, the transaction is not retried because the application must check if the transaction was successful to avoid potential duplicate operations. For other exceptions, the transaction is retried because the cause of the exception is transient or non-transient. If the cause of the exception is transient, the transaction may succeed if you retry it. However, if the cause of the exception is non-transient, the transaction will still fail even if you retry it. In such a case, you will exhaust the number of retries.
+
+:::note
+
+In the sample code, the transaction is retried three times maximum and sleeps for 100 milliseconds before it is retried. But you can choose a retry policy, such as exponential backoff, according to your application requirements.
+
+:::
+
+## Group commit for the Coordinator table
+
+The Coordinator table that is used for Consensus Commit transactions is a vital data store, and using robust storage for it is recommended. However, utilizing more robust storage options, such as internally leveraging multi-AZ or multi-region replication, may lead to increased latency when writing records to the storage, resulting in poor throughput performance.
+
+ScalarDB provides a group commit feature for the Coordinator table that groups multiple record writes into a single write operation, improving write throughput. In this case, latency may increase or decrease, depending on the underlying database and the workload.
+
+To enable the group commit feature, add the following configuration:
+
+```properties
+# By default, this configuration is set to `false`.
+scalar.db.consensus_commit.coordinator.group_commit.enabled=true
+
+# These properties are for tuning the performance of the group commit feature.
+# scalar.db.consensus_commit.coordinator.group_commit.group_size_fix_timeout_millis=40
+# scalar.db.consensus_commit.coordinator.group_commit.delayed_slot_move_timeout_millis=800
+# scalar.db.consensus_commit.coordinator.group_commit.old_group_abort_timeout_millis=30000
+# scalar.db.consensus_commit.coordinator.group_commit.timeout_check_interval_millis=10
+# scalar.db.consensus_commit.coordinator.group_commit.metrics_monitor_log_enabled=true
+```
+
+### Limitations
+
+This section describes the limitations of the group commit feature.
+
+#### Custom transaction ID passed by users
+
+The group commit feature implicitly generates an internal value and uses it as a part of the transaction ID. Therefore, a custom transaction ID manually passed by users via `com.scalar.db.transaction.consensuscommit.ConsensusCommitManager.begin(String txId)` or `com.scalar.db.transaction.consensuscommit.TwoPhaseConsensusCommitManager.begin(String txId)` can't be used as is for later API calls. You need to use a transaction ID returned from `com.scalar.db.transaction.consensuscommit.ConsensusCommit.getId()` or `com.scalar.db.transaction.consensuscommit.TwoPhaseConsensusCommit.getId()` instead.
+
+```java
+ // This custom transaction ID needs to be used for ScalarDB transactions.
+ String myTxId = UUID.randomUUID().toString();
+
+ ...
+
+ DistributedTransaction transaction = manager.begin(myTxId);
+
+ ...
+
+ // When the group commit feature is enabled, a custom transaction ID passed by users can't be used as is.
+ // logger.info("The transaction state: {}", manager.getState(myTxId));
+ logger.info("The transaction state: {}", manager.getState(transaction.getId()));
+```
+
+#### Prohibition of use with a two-phase commit interface
+
+The group commit feature manages all ongoing transactions in memory. If this feature is enabled with a two-phase commit interface, the information must be solely maintained by the coordinator service to prevent conflicts caused by participant services' inconsistent writes to the Coordinator table, which may contain different transaction distributions over groups.
+
+This limitation introduces some complexities and inflexibilities related to application development. Therefore, combining the use of the group commit feature with a two-phase commit interface is currently prohibited.
+
+#### Enabling the feature on existing applications is not supported
+
+The group commit feature uses a new column in the Coordinator table. The current [Schema Loader](schema-loader.mdx), as of ScalarDB 3, doesn't support table schema migration for the Coordinator table.
+
+Therefore, enabling the group commit feature on existing applications where any transactions have been executed is not supported. To use this feature, you'll need to start your application in a clean state.
+
+Coordinator table schema migration in [Schema Loader](schema-loader.mdx) is expected to be supported in ScalarDB 4.0.
+
+## Investigating Consensus Commit transaction manager errors
+
+To investigate errors when using the Consensus Commit transaction manager, you can enable a configuration that will return table metadata augmented with transaction metadata columns, which can be helpful when investigating transaction-related issues. This configuration, which should be used only when troubleshooting the Consensus Commit transaction manager, enables you to see transaction metadata column details for a given table by using the `DistributedTransactionAdmin.getTableMetadata()` method.
+
+By adding the following configuration, `Get` and `Scan` operation results will contain [transaction metadata](schema-loader.mdx#internal-metadata-for-consensus-commit):
+
+```properties
+# By default, this configuration is set to `false`.
+scalar.db.consensus_commit.include_metadata.enabled=true
+```
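+
+For example, after enabling this configuration, you can check which transaction metadata columns are added to a table as follows. This is a minimal sketch; the configuration file path, namespace name, and table name are placeholders:
+
+```java
+TransactionFactory factory = TransactionFactory.create("<CONFIGURATION_FILE_PATH>");
+DistributedTransactionAdmin admin = factory.getTransactionAdmin();
+
+// The returned table metadata is augmented with the transaction metadata columns.
+TableMetadata tableMetadata = admin.getTableMetadata("ns", "tbl");
+for (String columnName : tableMetadata.getColumnNames()) {
+  System.out.println(columnName + ": " + tableMetadata.getColumnDataType(columnName));
+}
+
+admin.close();
+```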
diff --git a/versioned_docs/version-3.X/backup-restore.mdx b/versioned_docs/version-3.X/backup-restore.mdx
new file mode 100644
index 00000000..0efff032
--- /dev/null
+++ b/versioned_docs/version-3.X/backup-restore.mdx
@@ -0,0 +1,184 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# How to Back Up and Restore Databases Used Through ScalarDB
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Since ScalarDB provides transaction capabilities on top of non-transactional or transactional databases non-invasively, you need to take special care to back up and restore the databases in a transactionally consistent way.
+
+This guide describes how to back up and restore the databases that ScalarDB supports.
+
+## Create a backup
+
+How you create a backup depends on which database you're using and whether or not you're using multiple databases. The following decision tree shows which approach you should take.
+
+```mermaid
+flowchart TD
+ A[Are you using a single database with ScalarDB?]
+ A -->|Yes| B[Does the database have transaction support?]
+ B -->|Yes| C[Perform backup without explicit pausing]
+ B ---->|No| D[Perform backup with explicit pausing]
+ A ---->|No| D
+```
+
+### Back up without explicit pausing
+
+If you're using ScalarDB with a single database with support for transactions, you can create a backup of the database even while ScalarDB continues to accept transactions.
+
+:::warning
+
+Before creating a backup, you should consider the safest way to create a transactionally consistent backup of your databases and understand any risks that are associated with the backup process.
+
+:::
+
+One requirement for creating a backup in ScalarDB is that backups for all the ScalarDB-managed tables (including the Coordinator table) need to be transactionally consistent or automatically recoverable to a transactionally consistent state. That means that you need to create a consistent backup by dumping all tables in a single transaction.
+
+How you create a transactionally consistent backup depends on the type of database that you're using. Select a database to see how to create a transactionally consistent backup for ScalarDB.
+
+:::note
+
+The backup methods by database listed below are just examples of some of the databases that ScalarDB supports.
+
+:::
+
+
+
+ You can restore to any point within the backup retention period by using the automated backup feature.
+
+
+ Use the `mysqldump` command with the `--single-transaction` option.
+
+
+ Use the `pg_dump` command.
+
+
+ Use the `.backup` command with the `.timeout` command as specified in [Special commands to sqlite3 (dot-commands)](https://www.sqlite.org/cli.html#special_commands_to_sqlite3_dot_commands_).
+
+ For an example, see [BASH: SQLite3 .backup command](https://stackoverflow.com/questions/23164445/bash-sqlite3-backup-command).
+
+
+ Clusters are backed up automatically based on the backup policy, and these backups are retained for a specific duration. You can also perform on-demand backups. For details on performing backups, see [YugabyteDB Managed: Back up and restore clusters](https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-clusters/backup-clusters/).
+
+
+ Use the `backup` command. For details on performing backups, see [Db2: Backup overview](https://www.ibm.com/docs/en/db2/12.1.0?topic=recovery-backup).
+
+
+
+### Back up with explicit pausing
+
+Another way to create a transactionally consistent backup is to create a backup while a cluster of ScalarDB instances does not have any outstanding transactions. Creating the backup depends on the following:
+
+- If the underlying database has a point-in-time snapshot or backup feature, you can create a backup during the period when no outstanding transactions exist.
+- If the underlying database has a point-in-time restore or recovery (PITR) feature, you can set a restore point to a time (preferably the mid-time) in the pause duration period when no outstanding transactions exist.
+
+:::note
+
+When using a PITR feature, you should minimize the clock drifts between clients and servers by using clock synchronization, such as NTP. Otherwise, the time you get as the paused duration might be too different from the time in which the pause was actually conducted, which could restore the backup to a point where ongoing transactions exist.
+
+In addition, you should pause for a sufficient amount of time (for example, five seconds) and use the mid-time of the paused duration as a restore point since clock synchronization cannot perfectly synchronize clocks between nodes.
+
+:::
+
+To make ScalarDB drain outstanding requests and stop accepting new requests so that a pause duration can be created, you should implement the [Scalar Admin](https://github.com/scalar-labs/scalar-admin) interface properly in your application that uses ScalarDB or use [ScalarDB Cluster](scalardb-cluster/index.mdx), which implements the Scalar Admin interface.
+
+By using the [Scalar Admin client tool](https://github.com/scalar-labs/scalar-admin/blob/main/README.md#scalar-admin-client-tool), you can pause nodes, servers, or applications that implement the Scalar Admin interface without losing ongoing transactions.
+
+How you create a transactionally consistent backup depends on the type of database that you're using. Select a database to see how to create a transactionally consistent backup for ScalarDB.
+
+:::note
+
+The backup methods by database listed below are just examples of some of the databases that ScalarDB supports.
+
+:::
+
+
+
+ You must enable the PITR feature for DynamoDB tables. If you're using [ScalarDB Schema Loader](schema-loader.mdx) to create schemas, the tool enables the PITR feature for tables by default.
+
+ To specify a transactionally consistent restore point, pause your application that is using ScalarDB with DynamoDB as described in [Back up with explicit pausing](#back-up-with-explicit-pausing).
+
+
+ You must create a Cosmos DB for NoSQL account with a continuous backup policy that has the PITR feature enabled. After enabling the feature, backups are created continuously.
+
+ To specify a transactionally consistent restore point, pause your application that is using ScalarDB with Cosmos DB for NoSQL as described in [Back up with explicit pausing](#back-up-with-explicit-pausing).
+
+
+ Cassandra has a built-in replication feature, so you do not always have to create a transactionally consistent backup. For example, if the replication factor is set to `3` and only the data of one of the nodes in a Cassandra cluster is lost, you won't need a transactionally consistent backup (snapshot) because the node can be recovered by using a normal, transactionally inconsistent backup (snapshot) and the repair feature.
+
+ However, if the quorum of cluster nodes loses their data, you will need a transactionally consistent backup (snapshot) to restore the cluster to a certain transactionally consistent point.
+
+ To create a transactionally consistent cluster-wide backup (snapshot), either pause the application that is using ScalarDB or [ScalarDB Cluster](scalardb-cluster/index.mdx) and create backups (snapshots) of the nodes as described in [Back up with explicit pausing](#back-up-with-explicit-pausing), or stop the Cassandra cluster, take copies of all the data in the nodes, and then start the cluster.
+
+
+ You can perform on-demand backups or scheduled backups during a paused duration. For details on performing backups, see [YugabyteDB Managed: Back up and restore clusters](https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-clusters/backup-clusters/).
+
+
+
+## Restore a backup
+
+How you restore a transactionally consistent backup depends on the type of database that you're using. Select a database to see how to restore a transactionally consistent backup for ScalarDB.
+
+:::note
+
+The restore methods by database listed below are just examples of some of the databases that ScalarDB supports.
+
+:::
+
+
+
+ You can restore to any point within the backup retention period by using the automated backup feature.
+
+
+ First, stop all the nodes of the Cassandra cluster. Then, clean the `data`, `commitlog`, and `hints` directories, and place the backups (snapshots) in each node.
+
+ After placing the backups (snapshots) in each node, start all the nodes of the Cassandra cluster.
+
+
+ Follow the official Azure documentation for [restoring an account by using the Azure portal](https://docs.microsoft.com/en-us/azure/cosmos-db/restore-account-continuous-backup#restore-account-portal). After restoring a backup, [configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level) of the restored databases to `STRONG`. In addition, you should use the mid-time of the paused duration as the restore point as previously explained.
+
+ ScalarDB implements the Cosmos DB adapter by using its stored procedures, which are installed when creating schemas by using ScalarDB Schema Loader. However, the PITR feature of Cosmos DB doesn't restore stored procedures. Because of this, you need to re-install the required stored procedures for all tables after restoration. You can do this by using ScalarDB Schema Loader with the `--repair-all` option. For details, see [Repair tables](schema-loader.mdx#repair-tables).
+
+
+ Follow the official AWS documentation for [restoring a DynamoDB table to a point in time](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.Tutorial.html), but keep in mind that a table can only be restored with an alias. Because of this, you will need to restore the table with an alias, delete the original table, and rename the alias to the original name to restore the tables with the same name.
+
+ To do this procedure:
+
+ 1. Create a backup.
+ 1. Select the mid-time of the paused duration as the restore point.
+ 2. Restore by using the PITR of table A to table B.
+ 3. Create a backup of the restored table B (assuming that the backup is named backup B).
+ 4. Remove table B.
+ 2. Restore the backup.
+ 1. Remove table A.
+ 2. Create a table named A by using backup B.
+
+:::note
+
+* You must do the steps mentioned above for each table because tables can only be restored one at a time.
+* Configurations such as PITR and auto-scaling policies are reset to the default values for restored tables, so you must manually configure the required settings. For details, see the official AWS documentation for [How to restore DynamoDB tables with DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CreateBackup.html#CreateBackup_HowItWorks-restore).
+
+:::
+
+
+ If you used `mysqldump` to create the backup file, use the `mysql` command to restore the backup as specified in [Reloading SQL-Format Backups](https://dev.mysql.com/doc/mysql-backup-excerpt/8.0/en/reloading-sql-format-dumps.html).
+
+
+ If you used `pg_dump` to create the backup file, use the `psql` command to restore the backup as specified in [Restoring the Dump](https://www.postgresql.org/docs/current/backup-dump.html#BACKUP-DUMP-RESTORE).
+
+
+ Use the `.restore` command as specified in [Special commands to sqlite3 (dot-commands)](https://www.sqlite.org/cli.html#special_commands_to_sqlite3_dot_commands_).
+
+
+ You can restore from the scheduled or on-demand backup within the backup retention period. For details on performing backups, see [YugabyteDB Managed: Back up and restore clusters](https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-clusters/backup-clusters/).
+
+
+ Use the `restore` command. For details on restoring the database, see [Db2: Restore overview](https://www.ibm.com/docs/en/db2/12.1.0?topic=recovery-restore).
+
+
diff --git a/versioned_docs/version-3.X/configurations.mdx b/versioned_docs/version-3.X/configurations.mdx
new file mode 100644
index 00000000..3e36d53f
--- /dev/null
+++ b/versioned_docs/version-3.X/configurations.mdx
@@ -0,0 +1,243 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Core Configurations
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This page describes the available configurations for ScalarDB Core.
+
+:::tip
+
+If you are using ScalarDB Cluster, please refer to [ScalarDB Cluster Configurations](./scalardb-cluster/scalardb-cluster-configurations.mdx) instead.
+
+:::
+
+## General configurations
+
+The following general configurations are available for ScalarDB, including the Consensus Commit transaction manager:
+
+| Name | Description | Default |
+|-------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|
+| `scalar.db.transaction_manager` | Transaction manager of ScalarDB. Specify `consensus-commit` to use [Consensus Commit](./consensus-commit.mdx) or `single-crud-operation` to [run non-transactional storage operations](./run-non-transactional-storage-operations-through-library.mdx). Note that the configurations under the `scalar.db.consensus_commit` prefix are ignored if you use `single-crud-operation`. | `consensus-commit` |
+| `scalar.db.consensus_commit.isolation_level` | Isolation level used for Consensus Commit. Either `SNAPSHOT`, `SERIALIZABLE`, or `READ_COMMITTED` can be specified. | `SNAPSHOT` |
+| `scalar.db.consensus_commit.coordinator.namespace` | Namespace name of Coordinator tables. | `coordinator` |
+| `scalar.db.consensus_commit.include_metadata.enabled` | If set to `true`, `Get` and `Scan` operation results will contain transaction metadata. To see the transaction metadata column details for a given table, you can use the `DistributedTransactionAdmin.getTableMetadata()` method, which will return the table metadata augmented with the transaction metadata columns. Using this configuration can be useful to investigate transaction-related issues. | `false` |
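+
+For reference, the following is a minimal sketch of setting some of these general configurations programmatically by using a `Properties` object instead of a properties file; the property values shown are only examples:
+
+```java
+Properties properties = new Properties();
+
+// Use Consensus Commit with the SERIALIZABLE isolation level.
+properties.setProperty("scalar.db.transaction_manager", "consensus-commit");
+properties.setProperty("scalar.db.consensus_commit.isolation_level", "SERIALIZABLE");
+
+// Storage-related configurations (described below) are also required. For example:
+properties.setProperty("scalar.db.storage", "jdbc");
+properties.setProperty("scalar.db.contact_points", "jdbc:postgresql://localhost:5432/scalardb");
+properties.setProperty("scalar.db.username", "postgres");
+properties.setProperty("scalar.db.password", "postgres");
+
+TransactionFactory factory = TransactionFactory.create(properties);
+DistributedTransactionManager transactionManager = factory.getTransactionManager();
+```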
+
+## Performance-related configurations
+
+The following performance-related configurations are available for the Consensus Commit transaction manager:
+
+| Name | Description | Default |
+|----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------|
+| `scalar.db.consensus_commit.parallel_executor_count` | Number of executors (threads) for parallel execution. This number refers to the total number of threads across transactions in a ScalarDB Cluster node or a ScalarDB process. | `128` |
+| `scalar.db.consensus_commit.parallel_preparation.enabled` | Whether or not the preparation phase is executed in parallel. | `true` |
+| `scalar.db.consensus_commit.parallel_validation.enabled` | Whether or not the validation phase (in `EXTRA_READ`) is executed in parallel. | The value of `scalar.db.consensus_commit.parallel_commit.enabled` |
+| `scalar.db.consensus_commit.parallel_commit.enabled` | Whether or not the commit phase is executed in parallel. | `true` |
+| `scalar.db.consensus_commit.parallel_rollback.enabled` | Whether or not the rollback phase is executed in parallel. | The value of `scalar.db.consensus_commit.parallel_commit.enabled` |
+| `scalar.db.consensus_commit.async_commit.enabled` | Whether or not the commit phase is executed asynchronously. | `false` |
+| `scalar.db.consensus_commit.async_rollback.enabled` | Whether or not the rollback phase is executed asynchronously. | The value of `scalar.db.consensus_commit.async_commit.enabled` |
+| `scalar.db.consensus_commit.parallel_implicit_pre_read.enabled` | Whether or not implicit pre-read is executed in parallel. | `true` |
+| `scalar.db.consensus_commit.one_phase_commit.enabled` | Whether or not the one-phase commit optimization is enabled. | `false` |
+| `scalar.db.consensus_commit.coordinator.write_omission_on_read_only.enabled` | Whether or not the write omission optimization is enabled for read-only transactions. This optimization is useful for read-only transactions that do not modify any data, as it avoids unnecessary writes to the Coordinator tables. | `true` |
+| `scalar.db.consensus_commit.coordinator.group_commit.enabled` | Whether or not committing the transaction state is executed in batch mode. This feature can't be used with a two-phase commit interface. | `false` |
+| `scalar.db.consensus_commit.coordinator.group_commit.slot_capacity` | Maximum number of slots in a group for the group commit feature. A large value improves the efficiency of group commit, but may also increase latency and the likelihood of transaction conflicts.[^1] | `20` |
+| `scalar.db.consensus_commit.coordinator.group_commit.group_size_fix_timeout_millis` | Timeout to fix the size of slots in a group. A large value improves the efficiency of group commit, but may also increase latency and the likelihood of transaction conflicts.[^1] | `40` |
+| `scalar.db.consensus_commit.coordinator.group_commit.delayed_slot_move_timeout_millis` | Timeout to move delayed slots from a group to another isolated group to prevent the original group from being affected by delayed transactions. A large value improves the efficiency of group commit, but may also increase the latency and the likelihood of transaction conflicts.[^1] | `1200` |
+| `scalar.db.consensus_commit.coordinator.group_commit.old_group_abort_timeout_millis` | Timeout to abort an old ongoing group. A small value reduces resource consumption through aggressive aborts, but may also increase the likelihood of unnecessary aborts for long-running transactions. | `60000` |
+| `scalar.db.consensus_commit.coordinator.group_commit.timeout_check_interval_millis` | Interval for checking the group commit–related timeouts. | `20` |
+| `scalar.db.consensus_commit.coordinator.group_commit.metrics_monitor_log_enabled` | Whether or not the metrics of the group commit are logged periodically. | `false` |
+
+## Storage-related configurations
+
+ScalarDB has a storage (database) abstraction layer that supports multiple storage implementations. You can specify the storage implementation by using the `scalar.db.storage` property.
+
+Select a database to see the configurations available for each storage.
+
+
+
+ The following configurations are available for JDBC databases:
+
+ | Name | Description | Default |
+ |------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
+ | `scalar.db.storage` | `jdbc` must be specified. | - |
+ | `scalar.db.contact_points` | JDBC connection URL. | |
+ | `scalar.db.username` | Username to access the database. | |
+ | `scalar.db.password` | Password to access the database. | |
+ | `scalar.db.jdbc.connection_pool.min_idle` | Minimum number of idle connections in the connection pool. | `20` |
+ | `scalar.db.jdbc.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool. | `50` |
+ | `scalar.db.jdbc.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool. Use a negative value for no limit. | `100` |
+ | `scalar.db.jdbc.prepared_statements_pool.enabled` | Setting this property to `true` enables prepared-statement pooling. | `false` |
+ | `scalar.db.jdbc.prepared_statements_pool.max_open` | Maximum number of open statements that can be allocated from the statement pool at the same time. Use a negative value for no limit. | `-1` |
+ | `scalar.db.jdbc.isolation_level` | Isolation level for JDBC. `READ_UNCOMMITTED`, `READ_COMMITTED`, `REPEATABLE_READ`, or `SERIALIZABLE` can be specified. | Underlying-database specific |
+ | `scalar.db.jdbc.table_metadata.schema` | Schema name for the table metadata used for ScalarDB. | `scalardb` |
+ | `scalar.db.jdbc.table_metadata.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for the table metadata. | `5` |
+ | `scalar.db.jdbc.table_metadata.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for the table metadata. | `10` |
+ | `scalar.db.jdbc.table_metadata.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for the table metadata. Use a negative value for no limit. | `25` |
+ | `scalar.db.jdbc.admin.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for admin. | `5` |
+ | `scalar.db.jdbc.admin.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for admin. | `10` |
+ | `scalar.db.jdbc.admin.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for admin. Use a negative value for no limit. | `25` |
+ | `scalar.db.jdbc.mysql.variable_key_column_size` | Column size for TEXT and BLOB columns in MySQL when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
+ | `scalar.db.jdbc.oracle.variable_key_column_size` | Column size for TEXT and BLOB columns in Oracle when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
+ | `scalar.db.jdbc.oracle.time_column.default_date_component` | Value of the date component used for storing `TIME` data in Oracle. Since Oracle has no data type to only store a time without a date component, ScalarDB stores `TIME` data with the same date component value for ease of comparison and sorting. | `1970-01-01` |
+ | `scalar.db.jdbc.db2.variable_key_column_size` | Column size for TEXT and BLOB columns in IBM Db2 when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
+ | `scalar.db.jdbc.db2.time_column.default_date_component` | Value of the date component used for storing `TIME` data in IBM Db2. Since ScalarDB stores `TIME` data by using the IBM Db2 TIMESTAMP type, which provides fractional-second precision, it uses the same date component value for all `TIME` data for ease of comparison and sorting. | `1970-01-01` |
+
+:::note
+
+**SQLite3**
+
+If you're using SQLite3 as a JDBC database, you must set `scalar.db.contact_points` as follows:
+
+```properties
+scalar.db.contact_points=jdbc:sqlite:?busy_timeout=10000
+```
+
+Unlike other JDBC databases, [SQLite3 doesn't fully support concurrent access](https://www.sqlite.org/lang_transaction.html). To avoid frequent errors caused internally by [`SQLITE_BUSY`](https://www.sqlite.org/rescode.html#busy), setting a [`busy_timeout`](https://www.sqlite.org/c3ref/busy_timeout.html) parameter is recommended.
+
+**YugabyteDB**
+
+If you're using YugabyteDB as a JDBC database, you can specify multiple endpoints in `scalar.db.contact_points` as follows:
+
+```properties
+scalar.db.contact_points=jdbc:yugabytedb://127.0.0.1:5433\\,127.0.0.2:5433\\,127.0.0.3:5433/?load-balance=true
+```
+
+Multiple endpoints should be separated by escaped commas.
+
+For information on YugabyteDB's smart driver and load balancing, see [YugabyteDB smart drivers for YSQL](https://docs.yugabyte.com/preview/drivers-orms/smart-drivers/).
+
+:::
+
+
+
+ The following configurations are available for DynamoDB:
+
+ | Name | Description | Default |
+ |---------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|
+ | `scalar.db.storage` | `dynamo` must be specified. | - |
+ | `scalar.db.contact_points` | AWS region with which ScalarDB should communicate (e.g., `us-east-1`). | |
+ | `scalar.db.username` | AWS access key used to identify the user interacting with AWS. | |
+ | `scalar.db.password` | AWS secret access key used to authenticate the user interacting with AWS. | |
+ | `scalar.db.dynamo.endpoint_override` | Amazon DynamoDB endpoint with which ScalarDB should communicate. This is primarily used for testing with a local instance instead of an AWS service. | |
+ | `scalar.db.dynamo.table_metadata.namespace` | Namespace name for the table metadata used for ScalarDB. | `scalardb` |
+ | `scalar.db.dynamo.namespace.prefix` | Prefix for the user namespaces and metadata namespace names. Since AWS requires having unique table names in a single AWS region, this is useful if you want to use multiple ScalarDB environments (development, production, etc.) in a single AWS region. | |
+
+
+ The following configurations are available for Cosmos DB for NoSQL:
+
+ | Name | Description | Default |
+ |--------------------------------------------|----------------------------------------------------------------------------------------------------------|------------|
+ | `scalar.db.storage` | `cosmos` must be specified. | - |
+ | `scalar.db.contact_points` | Azure Cosmos DB for NoSQL endpoint with which ScalarDB should communicate. | |
+ | `scalar.db.password` | Either a master or read-only key used to perform authentication for accessing Azure Cosmos DB for NoSQL. | |
+ | `scalar.db.cosmos.table_metadata.database` | Database name for the table metadata used for ScalarDB. | `scalardb` |
+ | `scalar.db.cosmos.consistency_level` | Consistency level used for Cosmos DB operations. `STRONG` or `BOUNDED_STALENESS` can be specified. | `STRONG` |
+
+
+ The following configurations are available for Cassandra:
+
+ | Name | Description | Default |
+ |-----------------------------------------|-----------------------------------------------------------------------|------------|
+ | `scalar.db.storage` | `cassandra` must be specified. | - |
+ | `scalar.db.contact_points` | Comma-separated contact points. | |
+ | `scalar.db.contact_port` | Port number for all the contact points. | |
+ | `scalar.db.username` | Username to access the database. | |
+ | `scalar.db.password` | Password to access the database. | |
+
+
+
+### Multi-storage support
+
+ScalarDB supports using multiple storage implementations simultaneously. You can use multiple storages by specifying `multi-storage` as the value for the `scalar.db.storage` property.
+
+For details about using multiple storages, see [Multi-Storage Transactions](multi-storage-transactions.mdx).
+
+### Cross-partition scan configurations
+
+By enabling the cross-partition scan option as described below, the `Scan` operation can retrieve all records across partitions. In addition, you can specify arbitrary conditions and orderings in the cross-partition `Scan` operation by enabling `cross_partition_scan.filtering` and `cross_partition_scan.ordering`, respectively. Currently, the cross-partition scan with ordering option is available only for JDBC databases. To enable filtering and ordering, `scalar.db.cross_partition_scan.enabled` must be set to `true`.
+
+For details on how to use cross-partition scan, see [Scan operation](./api-guide.mdx#scan-operation).
+
+:::warning
+
+For non-JDBC databases, transactions could be executed at read-committed snapshot isolation (`SNAPSHOT`), which is a lower isolation level, even if you enable cross-partition scan with the `SERIALIZABLE` isolation level. When using non-JDBC databases, use cross-partition scan only if consistency does not matter for your transactions.
+
+:::
+
+| Name | Description | Default |
+|----------------------------------------------------|-----------------------------------------------|---------|
+| `scalar.db.cross_partition_scan.enabled` | Enable cross-partition scan. | `true` |
+| `scalar.db.cross_partition_scan.filtering.enabled` | Enable filtering in cross-partition scan. | `false` |
+| `scalar.db.cross_partition_scan.ordering.enabled` | Enable ordering in cross-partition scan. | `false` |
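+
+For reference, the following is a minimal sketch of a cross-partition `Scan` operation with filtering enabled; the namespace, table, and column names are only examples:
+
+```java
+// Scan all partitions of the table and filter the results by an arbitrary condition.
+Scan scan =
+    Scan.newBuilder()
+        .namespace("ns")
+        .table("tbl")
+        .all()
+        .where(ConditionBuilder.column("c4").isGreaterThanFloat(1.0F))
+        .limit(10)
+        .build();
+
+List<Result> results = transactionManager.scan(scan);
+```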
+
+### Scan fetch size
+
+You can configure the fetch size for storage scan operations by using the following property:
+
+| Name | Description | Default |
+|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| `scalar.db.scan_fetch_size` | Specifies the number of records to fetch in a single batch during a storage scan operation. A larger value can improve performance for a large result set by reducing round trips to the storage, but it also increases memory usage. A smaller value uses less memory but may increase latency. | `10` |
+
+## Other ScalarDB configurations
+
+The following are additional configurations available for ScalarDB:
+
+| Name | Description | Default |
+|------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
+| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` |
+| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB maintains in-progress transactions, which can be resumed by using a transaction ID. This process expires transactions that have been idle for an extended period to prevent resource leaks. This setting specifies the expiration time of this transaction management feature in milliseconds. | `-1` (no expiration) |
+| `scalar.db.default_namespace_name` | The given namespace name will be used by operations that do not already specify a namespace. | |
+
+## Placeholder usage
+
+You can use placeholders in the values, and they are replaced with environment variables (`${env:<ENVIRONMENT_VARIABLE_NAME>}`) or system properties (`${sys:<SYSTEM_PROPERTY_NAME>}`). You can also specify default values in placeholders like `${sys:<SYSTEM_PROPERTY_NAME>:-<DEFAULT_VALUE>}`.
+
+The following is an example of a configuration that uses placeholders:
+
+```properties
+scalar.db.username=${env:SCALAR_DB_USERNAME:-admin}
+scalar.db.password=${env:SCALAR_DB_PASSWORD}
+```
+
+In this example configuration, ScalarDB reads the username and password from environment variables. If the environment variable `SCALAR_DB_USERNAME` does not exist, ScalarDB uses the default value `admin`.
+
+## Configuration example - App and database
+
+```mermaid
+flowchart LR
+ app["App
(ScalarDB library with
Consensus Commit)"]
+ db[(Underlying storage or database)]
+ app --> db
+```
+
+In this example configuration, the app (ScalarDB library with Consensus Commit) connects to an underlying storage or database (in this case, Cassandra) directly.
+
+:::warning
+
+This configuration exists only for development purposes and isn't suitable for a production environment. This is because the app needs to implement the [Scalar Admin](https://github.com/scalar-labs/scalar-admin) interface to take transactionally consistent backups for ScalarDB, which requires additional configurations.
+
+:::
+
+The following is an example of the configuration for connecting the app to the underlying database through ScalarDB:
+
+```properties
+# Transaction manager implementation.
+scalar.db.transaction_manager=consensus-commit
+
+# Storage implementation.
+scalar.db.storage=cassandra
+
+# Comma-separated contact points.
+scalar.db.contact_points=<CASSANDRA_HOST>
+
+# Credential information to access the database.
+scalar.db.username=<USERNAME>
+scalar.db.password=<PASSWORD>
+```
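+
+For reference, the following is a minimal sketch of bootstrapping ScalarDB with this configuration, assuming that it is saved as `database.properties` (the file name is only an example):
+
+```java
+TransactionFactory factory = TransactionFactory.create("database.properties");
+DistributedTransactionManager transactionManager = factory.getTransactionManager();
+
+DistributedTransaction transaction = transactionManager.begin();
+// ... execute CRUD operations ...
+transaction.commit();
+
+transactionManager.close();
+```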
diff --git a/versioned_docs/version-3.X/consensus-commit.mdx b/versioned_docs/version-3.X/consensus-commit.mdx
new file mode 100644
index 00000000..16df6dc7
--- /dev/null
+++ b/versioned_docs/version-3.X/consensus-commit.mdx
@@ -0,0 +1,246 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Consensus Commit Protocol
+
+import JavadocLink from '/src/theme/JavadocLink.js';
+
+Consensus Commit is the transaction protocol used in ScalarDB and is designed for executing transactions spanning multiple diverse databases. Its uniqueness is that the protocol achieves ACID transactions without relying on the transaction capabilities of the underlying databases, unlike X/Open XA-based solutions. This document explains the details of the protocol, including how it works, the guaranteed isolation levels, the interfaces, the performance optimization that it employs, and its limitations.
+
+## The protocol
+
+This section explains how the Consensus Commit protocol works. The Consensus Commit protocol uses a concurrency control protocol to guarantee isolation and an atomic commitment protocol to guarantee atomicity and durability.
+
+### Concurrency control protocol
+
+The Consensus Commit protocol employs optimistic concurrency control (OCC) as its concurrency control protocol. OCC operates under the assumption that conflicts are rare, allowing transactions to proceed without the need for locks and resolving conflicts only when they actually occur. Therefore, OCC performs great in low-contention environments. It is also particularly beneficial in distributed environments, where managing locks is tricky.
+
+:::note
+
+Pessimistic concurrency control (PCC), on the other hand, assumes conflicts are common and takes locks on resources when they are used to avoid interference. Therefore, PCC performs great in high-contention environments.
+
+:::
+
+The OCC protocol of ScalarDB has three phases, like commonly used OCC protocols, each of which does the following:
+
+* Read phase:
+ * ScalarDB tracks the read and write sets of transactions. ScalarDB copies every record that a transaction accesses from databases to its local workspace and stores its writes in the local workspace.
+* Validation phase:
+ * ScalarDB checks if the committing transaction conflicts with other transactions. ScalarDB uses backward validation; it goes to the write phase only if other transactions have not written what the transaction reads and writes, which are called read validation and write validation, respectively.
+* Write phase:
+ * ScalarDB propagates the changes in the transaction's write set to the database and makes them visible to other transactions.
+
+As described next, ScalarDB provides an isolation mode (isolation level) where it skips the read validation in the validation phase to allow for more performance for some workloads that don't require the read validation for correctness.
+
+:::note
+
+The OCC of ScalarDB without the read validation works similarly to snapshot isolation. However, it works with a single version and causes read-skew anomalies because it does not create global snapshots.
+
+:::
+
+### Atomic commitment protocol
+
+The Consensus Commit protocol employs a variant of the two-phase commit protocol as an atomic commitment protocol (ACP). The ACP of ScalarDB comprises two phases, each of which has two sub-phases, and briefly works as follows:
+
+* Prepare phase (prepare-records phase \+ validate-records phase):
+ * In the prepare-records phase, ScalarDB runs the write validation of the OCC protocol for all the records written by the transaction by updating the statuses of the records to PREPARED and moves on to the next phase if all the records are successfully validated.
+ * In the validate-records phase, ScalarDB runs the read validation of the OCC protocol for all the records read by the transaction and moves on to the next phase if all the records are successfully validated.
+* Commit phase (commit-state phase \+ commit-records phase):
+ * In the commit-state phase, ScalarDB commits the transaction by writing a COMMITTED state to a special table called a coordinator table.
+ * In the commit-records phase, ScalarDB runs the write phase of the OCC protocol for all the records written by the transaction by updating the statuses of the records to COMMITTED.
+
+:::note
+
+In case of deleting records, the statuses of the records are first changed to DELETED in the prepare phase and later physically deleted in the commit phase.
+
+:::
+
+#### How it works in more detail
+
+Let's see how the protocol works in each phase in more detail.
+
+##### Before the prepare phase
+
+First, a transaction begins when a client accesses ScalarDB (or a ScalarDB Cluster node) and issues a `begin` command. When a transaction begins, ScalarDB acts as a transaction coordinator, accessing the underlying databases, and first generates a transaction ID (TxID) with UUID version 4. Then, when the client is ready to commit the transaction after performing operations such as reading and writing records, it calls a `commit` command to request ScalarDB to commit the transaction and enters the prepare phase. As described previously, ScalarDB holds the read set (readSet) and write set (writeSet) of the transaction in its local workspace at the time of committing.
+
+##### Prepare phase
+
+ScalarDB first prepares the records of the write set by propagating the records, including transaction logs like TxID as described later, with PREPARED states to the underlying databases as the prepare-records phase. Here, we assume a write set maintains updated records composed of the original records and updated columns. If any preparation fails, it aborts the transaction by writing an ABORTED state record to a Coordinator table, where all the transactions’ final states are determined and managed. We explain the Coordinator table in more detail later in this section.
+
+:::note
+
+ScalarDB checks conflicting preparations by using linearizable conditional writes. A transaction updates a record if the record has not been updated by another transaction since the transaction read it by checking if the TxID of the record has not been changed.
+
+:::
+
+ScalarDB then moves on to the validate-records phase as necessary. The validate-records phase is only necessary if the isolation level is set to SERIALIZABLE. In this phase, ScalarDB re-reads all the records in the read set to see if other transactions have written the records that the transaction has read before. If the read set has not been changed, the transaction can go to the commit-state phase since there are no anti-dependencies; otherwise, it aborts the transaction.
+
+##### Commit phase
+
+If all the validations in the prepare phase are done successfully, ScalarDB commits the transaction by writing a COMMITTED state record to the Coordinator table as the commit-state phase.
+
+:::note
+
+* ScalarDB uses linearizable conditional writes to coordinate concurrent writes to the Coordinator table, creating a state record with a TxID if there is no record for the TxID. Once the COMMITTED state is correctly written to the Coordinator table, the transaction is regarded as committed.
+* By default, if a transaction contains only read operations, ScalarDB skips the commit-state phase. However, you can configure ScalarDB to write a COMMITTED state record to the Coordinator table even for read-only transactions by setting the following parameter to `false`:
+ * `scalar.db.consensus_commit.coordinator.write_omission_on_read_only.enabled`
+
+:::
+
+Then, ScalarDB commits all the validated (prepared) records by changing the states of the records to COMMITTED as the commit-records phase.
+
+#### Distributed WAL
+
+ScalarDB stores transaction logs, which are for write-ahead logging (WAL), in the underlying database records that it manages. Specifically, as shown in the following figure, ScalarDB manages special columns for the log information in a record in addition to the columns that an application manages. The log information comprises, for example, a transaction ID (TxID) that has updated the corresponding record most recently, a record version number (Version), a record state (TxState) (for example, COMMITTED or PREPARED), timestamps (not shown in the diagram), and a before image that comprises the previous version's application data and its metadata.
+
+ScalarDB also manages transaction states separately from the application records in the Coordinator table. The Coordinator table determines and manages transaction states as a single source of truth. The Coordinator table can be collocated with application-managed tables or located in a separate dedicated database.
+
+
+
+:::note
+
+The Coordinator table can be replicated for high availability by using the replication and consensus capabilities of the underlying databases. For example, if you manage the Coordinator table by using Cassandra with a replication factor of three, you can make the transaction coordination of ScalarDB tolerate one replica crash. Hence, you can make the atomic commitment protocol of ScalarDB perform like the Paxos Commit protocol; it could mitigate liveness issues (for example, blocking problems) without sacrificing safety.
+
+:::
+
+#### Lazy recovery
+
+Transactions can crash at any time and could leave records in an uncommitted state. ScalarDB recovers uncommitted records lazily when it reads them, depending on the transaction states of the Coordinator table. Specifically, if a record is in the PREPARED state, but the transaction that updated the record has expired or been aborted, the record will be rolled back. If a record is in the PREPARED state and the transaction that updated the record is committed, the record will be rolled forward.
+
+A transaction expires after a certain amount of time (currently 15 seconds). When ScalarDB observes a record that has been prepared by an expired transaction, ScalarDB writes the ABORTED state for the transaction to the Coordinator table (with retries). If ScalarDB successfully writes the ABORTED state to the Coordinator table, the transaction is aborted. Otherwise, the transaction will be committed by the original process that is slow but still alive for some reason, or it will remain in the UNKNOWN state until it is either aborted or committed.
+
+## Isolation levels
+
+The Consensus Commit protocol supports three isolation levels: read-committed snapshot isolation (a weaker variant of snapshot isolation), serializable, and read-committed, each of which has the following characteristics:
+
+* Read-committed snapshot isolation (SNAPSHOT - default)
+ * Possible anomalies: read skew, write skew, read only
+ * Faster than serializable, but guarantees weaker correctness.
+* Serializable (SERIALIZABLE)
+ * Possible anomalies: None
+ * Slower than read-committed snapshot isolation, but guarantees stronger (strongest) correctness.
+* Read-committed (READ_COMMITTED)
+ * Possible anomalies: read skew, write skew, read only
+ * Faster than read-committed snapshot isolation because it could return non-latest committed records.
+
+As described above, serializable is preferable from a correctness perspective, but slower than read-committed snapshot isolation. Choose the appropriate one based on your application requirements and workload. For details on how to configure read-committed snapshot isolation, serializable, and read-committed, see [ScalarDB Core Configurations](configurations.mdx#general-configurations).
+
+:::note
+
+The Consensus Commit protocol of ScalarDB requires each underlying database to provide linearizable operations, as described in [Configurations for the Underlying Databases of ScalarDB](database-configurations.mdx#transactions); thus, it guarantees strict serializability.
+
+:::
+
+:::warning
+
+Scanning records without specifying a partition key (for example, `SELECT * FROM table`) for non-JDBC databases does not always guarantee serializability, even if `SERIALIZABLE` is specified. Therefore, you should do so at your own discretion and consider updating the schemas if possible. For more details, refer to [Cross-partition scan configurations](configurations.mdx#cross-partition-scan-configurations).
+
+:::
+
+## Interfaces
+
+The Consensus Commit protocol provides two interfaces: [a one-phase commit interface and a two-phase commit interface](scalardb-cluster/run-transactions-through-scalardb-cluster.mdx#run-transactions).
+
+The one-phase commit interface is a simple interface that provides only a single `commit` method, where all the phases of the atomic commitment protocol are executed in the method. On the other hand, the two-phase commit interface exposes each phase of the protocol with `prepare`, `validate`, and `commit` methods.
+
+:::note
+
+The `prepare` method is for the prepare-records phase, and the `validate` method is for the validate-records phase.
+
+:::
+
+In most cases, using the one-phase commit interface is recommended since it is easier to use and handle errors. But the two-phase commit interface is useful when running a transaction across multiple applications or services without directly accessing databases from ScalarDB, such as maintaining the consistency of databases in microservices.
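+
+As a reference, the following is a minimal sketch of how the phases are exposed through the two-phase commit interface on the coordinator side; the configuration file path is a placeholder, and error handling and the participant-side `join` flow are omitted:
+
+```java
+TransactionFactory factory = TransactionFactory.create("<CONFIGURATION_FILE_PATH>");
+TwoPhaseCommitTransactionManager manager = factory.getTwoPhaseCommitTransactionManager();
+
+TwoPhaseCommitTransaction transaction = manager.begin();
+
+// ... execute CRUD operations ...
+
+// Prepare-records phase.
+transaction.prepare();
+
+// Validate-records phase (required only when the isolation level is SERIALIZABLE).
+transaction.validate();
+
+// Commit phase (commit-state phase + commit-records phase).
+transaction.commit();
+```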
+
+## Performance optimization
+
+The Consensus Commit protocol employs several performance optimizations.
+
+### Parallel execution
+
+Consensus Commit executes each phase of the atomic commitment protocol in parallel, using intra-transaction parallelism without sacrificing correctness. Specifically, it tries to execute the prepare-records phase by writing records with PREPARED status in parallel. Likewise, it uses a similar parallel execution for the validate-records phase, the commit-records phase, and the rollback process.
+
+You can enable respective parallel execution by using the following parameters:
+
+* Prepare-records phase
+ * `scalar.db.consensus_commit.parallel_preparation.enabled`
+* Validate-records phase
+ * `scalar.db.consensus_commit.parallel_validation.enabled`
+* Commit-records phase
+ * `scalar.db.consensus_commit.parallel_commit.enabled`
+* Rollback processing
+ * `scalar.db.consensus_commit.parallel_rollback.enabled`
+
+You can also configure the execution parallelism by using the following parameter:
+
+* `scalar.db.consensus_commit.parallel_executor_count`
+
+For details about the configuration, refer to [Performance-related configurations](configurations.mdx#performance-related-configurations).
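+
+For example, a configuration that turns on parallel execution for all phases might look like the following sketch (the values shown are illustrative rather than recommended defaults):
+
+```properties
+scalar.db.consensus_commit.parallel_preparation.enabled=true
+scalar.db.consensus_commit.parallel_validation.enabled=true
+scalar.db.consensus_commit.parallel_commit.enabled=true
+scalar.db.consensus_commit.parallel_rollback.enabled=true
+
+# Number of executor threads shared by the parallel executions above (illustrative value).
+scalar.db.consensus_commit.parallel_executor_count=128
+```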
+
+### Asynchronous execution
+
+Since a transaction is regarded as committed once the commit-state phase completes successfully, ScalarDB can return the result to the client without waiting for the commit-records phase to complete and instead execute that phase asynchronously. Likewise, when a transaction fails and is rolled back, the rollback process can be executed asynchronously without waiting for its completion.
+
+You can enable respective asynchronous execution by using the following parameters:
+
+* Commit-records phase
+ * `scalar.db.consensus_commit.async_commit.enabled`
+* Rollback processing
+ * `scalar.db.consensus_commit.async_rollback.enabled`
+
+### One-phase commit
+
+With one-phase commit optimization, ScalarDB can omit the prepare-records and commit-state phases without sacrificing correctness, provided that the transaction only updates records that the underlying database can atomically update.
+
+You can enable one-phase commit optimization by using the following parameter:
+
+* `scalar.db.consensus_commit.one_phase_commit.enabled`
+
+### Group commit
+
+Consensus Commit provides a group-commit feature that executes the commit-state phase of multiple transactions in a batch, reducing the number of writes in the commit-state phase. It is especially useful when writing to the Coordinator table is slow, for example, when the Coordinator table is deployed in a multi-region environment for high availability.
+
+You can enable group commit by using the following parameter:
+
+* `scalar.db.consensus_commit.coordinator.group_commit.enabled`
+
+Group commit has several other parameters. For more details, refer to [Performance-related configurations](configurations.mdx#performance-related-configurations).
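+
+As an illustration, the asynchronous execution, one-phase commit, and group commit optimizations described above can be enabled with the following properties. Whether each optimization actually benefits your workload depends on your environment, so treat this as a sketch rather than a recommended configuration:
+
+```properties
+# Return to the client without waiting for the commit-records phase or the rollback process.
+scalar.db.consensus_commit.async_commit.enabled=true
+scalar.db.consensus_commit.async_rollback.enabled=true
+
+# Omit the prepare-records and commit-state phases when the transaction qualifies.
+scalar.db.consensus_commit.one_phase_commit.enabled=true
+
+# Batch the commit-state writes to the Coordinator table.
+scalar.db.consensus_commit.coordinator.group_commit.enabled=true
+```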
+
+## Limitations
+
+ScalarDB has several limitations in achieving database-agnostic transactions.
+
+### Applications must access ScalarDB to access the underlying databases
+
+Since ScalarDB with the Consensus Commit protocol handles transactions in its own layer without depending on the transactional capability of the underlying databases, your applications cannot bypass ScalarDB. Bypassing ScalarDB causes unexpected behavior, most likely resulting in database anomalies. Even for read operations, directly accessing the underlying databases will give you data that is inconsistent with the transaction metadata, so it is not allowed.
+
+However, you can read from and write to tables that are not managed or touched by ScalarDB transactions. For example, it is OK to check table metadata, such as the information schema, by directly accessing the databases without going through ScalarDB. There are also several other cases where you can access the databases directly without going through ScalarDB. The basic criterion is whether or not you update the data of the underlying databases: if you are sure that you do not write to the databases, you can access them directly. For example, it is OK to take a backup of the databases by using database-native tools.
+
+:::note
+
+If you take backups from multiple databases or from non-transactional databases, you need to pause your applications or ScalarDB Cluster. For more details, refer to [How to Back Up and Restore Databases Used Through ScalarDB](backup-restore.mdx).
+
+:::
+
+### Executing particular operations in a certain sequence is prohibited for correctness
+
+In the current implementation, ScalarDB throws an exception in the following cases:
+
+* Executing scan operations after write (Put, Insert, Update, Upsert, and Delete) operations for the same record in a transaction.
+* Executing write (Put, Insert, Update, and Upsert) operations after Delete operations for the same record in a transaction.
+
+## See also
+
+You can learn more about the Consensus Commit protocol from the following presentation and YouTube video, which visually summarize how the protocol works:
+
+- **Speaker Deck presentation:** [ScalarDB: Universal Transaction Manager](https://speakerdeck.com/scalar/scalar-db-universal-transaction-manager)
+- **YouTube (Japanese):** [How ScalarDB runs transactions (a part of DBSJ lecture)](https://www.youtube.com/watch?v=s6Q7QQccDTc)
+
+In addition, more details about the protocol, including the background, the challenges, and the novelty, are discussed in the following research paper and its presentation:
+
+- **Research paper:** [ScalarDB: Universal Transaction Manager for Polystores](https://www.vldb.org/pvldb/vol16/p3768-yamada.pdf)
+- **Speaker Deck presentation:** [ScalarDB: Universal Transaction Manager for Polystores](https://speakerdeck.com/scalar/scalardb-universal-transaction-manager-for-polystores-vldb23)
diff --git a/versioned_docs/version-3.X/data-modeling.mdx b/versioned_docs/version-3.X/data-modeling.mdx
new file mode 100644
index 00000000..3995a8bb
--- /dev/null
+++ b/versioned_docs/version-3.X/data-modeling.mdx
@@ -0,0 +1,132 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Model Your Data
+
+Data modeling (or in other words, designing your database schemas) is the process of conceptualizing and visualizing how data will be stored and used by identifying the patterns used to access data and the types of queries to be performed within business operations.
+
+This page first explains the ScalarDB data model and then describes how to design your database schemas based on the data model.
+
+## ScalarDB data model
+
+ScalarDB's data model is an extended key-value model inspired by the Bigtable data model. It is similar to the relational model but differs in several ways, as described below. The data model is chosen to abstract various databases, such as relational databases, NoSQL databases, and NewSQL databases.
+
+The following diagram shows an example of ScalarDB tables, each of which is a collection of records. This section first explains what objects, such as tables and records, ScalarDB defines and then describes how to locate the records.
+
+
+
+### Objects in ScalarDB
+
+The ScalarDB data model has several objects.
+
+#### Namespace
+
+A namespace is a collection of tables analogous to an SQL namespace or database.
+
+#### Table
+
+A table is a collection of partitions. A namespace most often contains one or more tables, each identified by a name.
+
+#### Partition
+
+A partition is a collection of records and a unit of distribution to nodes, whether logical or physical. Therefore, records within the same partition are placed in the same node. ScalarDB assumes multiple partitions are distributed by hashing.
+
+#### Record / row
+
+A record or row is a set of columns that is uniquely identifiable among all other records.
+
+#### Column
+
+A column is a fundamental data element and does not need to be broken down any further. Each record is composed of one or more columns. Each column has a data type. For details about the data type, refer to [Data-type mapping between ScalarDB and other databases](schema-loader.mdx#data-type-mapping-between-scalardb-and-other-databases).
+
+#### Secondary index
+
+A secondary index is a sorted copy of a column in a single base table. Each index entry is linked to a corresponding table partition. ScalarDB currently doesn't support multi-column indexes, so it can create indexes with only one column.
+
+### How to locate records
+
+This section discusses how to locate records from a table.
+
+#### Primary key
+
+A primary key uniquely identifies each record; no two records can have the same primary key. Therefore, you can locate a record by specifying a primary key. A primary key comprises a partition key and, optionally, a clustering key.
+
+#### Partition key
+
+A partition key uniquely identifies a partition. A partition key comprises a set of columns, which are called partition key columns. When you specify only a partition key, you can get a set of records that belong to the partition.
+
+#### Clustering key
+
+A clustering key uniquely identifies a record within a partition. It comprises a set of columns called clustering-key columns. When you specify a clustering key, you should also specify a partition key for efficient lookups. If you specify a clustering key without a partition key, you end up scanning all the partitions, which is time-consuming, especially when the amount of data is large, so only do so at your own discretion.
+
+Records within a partition are assumed to be sorted by clustering-key columns, specified as a clustering order. Therefore, you can specify a part of clustering-key columns in the defined order to narrow down the results to be returned.
+
+#### Index key
+
+An index key identifies records by looking up the key in indexes. An index key lookup spans all the partitions, so it is not necessarily efficient, especially if the selectivity of the lookup is not low (that is, the lookup matches a large portion of the records).
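+
+For example, a Schema Loader table definition that uses all of these keys might look like the following sketch. The namespace, table, and column names are hypothetical, and the exact schema format is described in the Schema Loader documentation:
+
+```json
+{
+  "bank.statements": {
+    "transaction": true,
+    "partition-key": ["account_id"],
+    "clustering-key": ["statement_id ASC"],
+    "secondary-index": ["category"],
+    "columns": {
+      "account_id": "INT",
+      "statement_id": "INT",
+      "category": "TEXT",
+      "amount": "BIGINT"
+    }
+  }
+}
+```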
+
+## How to design your database schemas
+
+You can design your database schemas in a way that is similar to the relational model, but there is a basic principle and a few best practices to follow.
+
+### Query-driven data modeling
+
+In relational databases, data is organized in normalized tables, with foreign keys used to reference related data in other tables. The queries that the application makes are structured around those tables, and related data is retrieved by using table joins.
+
+Although ScalarDB supports join operations in ScalarDB SQL, data modeling should be more query-driven, as with NoSQL databases. The data access patterns and application queries should determine the structure and organization of tables.
+
+### Best practices
+
+This section describes best practices for designing your database schemas.
+
+#### Consider data distribution
+
+Preferably, you should try to balance loads to partitions by properly selecting partition and clustering keys.
+
+For example, in a banking application, if you choose an account ID as a partition key, you can perform any account operations for a specific account within the partition to which the account belongs. So, if you operate on different account IDs, you will access different partitions.
+
+On the other hand, if you choose a branch ID as a partition key and an account ID as a clustering key, all the accesses to a branch's account IDs go to the same partition, causing an imbalance in loads and data sizes. In addition, you should choose a high-cardinality column as a partition key because creating a small number of large partitions also causes an imbalance in loads and data sizes.
+
+#### Try to read a single partition
+
+Because of the characteristics of the data model, single-partition lookups are the most efficient. If you need to issue a scan or select request that requires multi-partition lookups or scans, which you can [enable with cross-partition scan](configurations.mdx#cross-partition-scan-configurations), do so at your own discretion and consider updating the schemas if possible.
+
+For example, in a banking application, if you choose email as a partition key and an account ID as a clustering key, and you issue a query that specifies only an account ID, the query will span all the partitions because it cannot identify the corresponding partition efficiently. In such a case, you should always specify the partition key (the email in this example) when looking up the table.
+
+:::note
+
+If you read multiple partitions on a relational database with proper indexes, your query might be efficient because the query is pushed down to the database.
+
+:::
+
+#### Try to avoid using secondary indexes
+
+Similar to the above, if you need to issue a scan or select request that uses a secondary index, the request will span all the partitions of a table. Therefore, you should try to avoid using secondary indexes. If you need to use a secondary index, use it with a low-selectivity query, that is, one that looks up only a small portion of the records.
+
+As an alternative to secondary indexes, you can create another table that works as a clustered index of a base table.
+
+For example, assume there is a table with three columns, `table1(A, B, C)`, with the primary key `A`. Then, you can create a table like `index-table1(C, A, B)` with `C` as the primary key so that you can look up a single partition by specifying a value for `C`. This approach could speed up read queries but might put more load on write queries because you need to write to both tables by using ScalarDB transactions, as sketched below.
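+
+The following is a minimal sketch of this pattern, assuming hypothetical namespace and table names and omitting error handling. It writes the same record to the base table and the index table in a single ScalarDB transaction:
+
+```java
+import com.scalar.db.api.DistributedTransaction;
+import com.scalar.db.api.DistributedTransactionManager;
+import com.scalar.db.api.Put;
+import com.scalar.db.io.Key;
+import com.scalar.db.service.TransactionFactory;
+
+public class IndexTableSketch {
+  public static void main(String[] args) throws Exception {
+    DistributedTransactionManager manager =
+        TransactionFactory.create("database.properties").getTransactionManager();
+
+    DistributedTransaction tx = manager.begin();
+
+    // Write the record to the base table, which is keyed by A.
+    tx.put(
+        Put.newBuilder()
+            .namespace("ns")           // hypothetical namespace
+            .table("table1")
+            .partitionKey(Key.ofText("A", "a1"))
+            .textValue("B", "b1")
+            .textValue("C", "c1")
+            .build());
+
+    // Write the same record to the index table, which is keyed by C,
+    // so that lookups by C hit a single partition.
+    tx.put(
+        Put.newBuilder()
+            .namespace("ns")
+            .table("index_table1")     // hypothetical name for index-table1
+            .partitionKey(Key.ofText("C", "c1"))
+            .textValue("A", "a1")
+            .textValue("B", "b1")
+            .build());
+
+    // Both writes are committed atomically.
+    tx.commit();
+    manager.close();
+  }
+}
+```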
+
+:::note
+
+There are plans to have a table-based secondary-index feature in ScalarDB in the future.
+
+:::
+
+#### Remember that data is assumed to be distributed by hashing
+
+In the current ScalarDB data model, data is assumed to be distributed by hashing. Therefore, you can't perform range queries efficiently without a partition key.
+
+If you want to issue range queries efficiently, you need to do so within a partition. However, if you follow this approach, you must specify a partition key. This can pose scalability issues because the range queries always go to the same partition, potentially overloading it. This limitation is not specific to ScalarDB; it applies to any database in which data is distributed by hashing for scalability.
+
+:::note
+
+If you run ScalarDB on a relational database with proper indexes, your range query might be efficient because the query is pushed down to the database.
+
+:::
+
diff --git a/versioned_docs/version-3.X/database-configurations.mdx b/versioned_docs/version-3.X/database-configurations.mdx
new file mode 100644
index 00000000..e2b599f2
--- /dev/null
+++ b/versioned_docs/version-3.X/database-configurations.mdx
@@ -0,0 +1,120 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Configurations for the Underlying Databases of ScalarDB
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This document explains how to configure the underlying databases of ScalarDB to make applications that use ScalarDB work correctly and efficiently.
+
+## General requirements for the underlying databases
+
+ScalarDB requires each underlying database to provide certain capabilities to run transactions and analytics on the databases. This document explains the general requirements and how to configure each database to achieve the requirements.
+
+### Transactions
+
+ScalarDB requires each underlying database to provide at least the following capabilities to run transactions on the databases:
+
+- Linearizable reads and conditional mutations (writes and deletes) on a single database record.
+- Durability of written database records.
+- Ability to store arbitrary data alongside the application data in each database record.
+
+### Analytics
+
+ScalarDB requires each underlying database to provide the following capability to run analytics on the databases:
+
+- Ability to return only committed records.
+
+:::note
+
+You need database accounts with sufficient privileges to access the databases through ScalarDB, since ScalarDB operates on the underlying databases not only to perform CRUD operations but also to perform operations like creating or altering schemas, tables, or indexes. ScalarDB basically requires a fully privileged account to access the underlying databases.
+
+:::
+
+## How to configure databases to achieve the general requirements
+
+Select your database for details on how to configure it to achieve the general requirements.
+
+
+
+ Transactions
+
+ - Use a single primary server or synchronized multi-primary servers for all operations (no read operations on read replicas that are asynchronously replicated from a primary database).
+ - Use read-committed or stricter isolation levels.
+
+ Analytics
+
+ - Use read-committed or stricter isolation levels.
+
+
+ Transactions
+
+ - Use a single primary region for all operations (no read or write operations on global tables in non-primary regions).
+ - DynamoDB has no concept of a primary region, so you must designate a primary region yourself.
+
+ Analytics
+
+ - Not applicable. DynamoDB always returns committed records, so there are no DynamoDB-specific requirements.
+
+
+ Transactions
+
+ - Use a single primary region for all operations with `Strong` or `Bounded Staleness` consistency.
+
+ Analytics
+
+ - Not applicable. Cosmos DB always returns committed records, so there are no Cosmos DB–specific requirements.
+
+
+ Transactions
+
+ - Use a single primary cluster for all operations (no read or write operations in non-primary clusters).
+ - Use `batch` or `group` for `commitlog_sync`.
+ - If you're using Cassandra-compatible databases, those databases must properly support lightweight transactions (LWT).
+
+ Analytics
+
+ - Not applicable. Cassandra always returns committed records, so there are no Cassandra-specific requirements.
+
+
+
+## Recommendations
+
+Properly configuring each underlying database of ScalarDB for high performance and high availability is recommended. The following recommendations include some knobs and configurations to update.
+
+:::note
+
+ScalarDB can be seen as an application that runs on top of the underlying databases, so you may also want to update other knobs and configurations that are commonly used to improve efficiency.
+
+:::
+
+
+
+ - Use read-committed isolation for better performance.
+ - Follow the performance optimization best practices for each database. For example, increasing the buffer size (for example, `shared_buffers` in PostgreSQL) and increasing the number of connections (for example, `max_connections` in PostgreSQL) are usually recommended for better performance.
+
+
+ - Increase the number of read capacity units (RCUs) and write capacity units (WCUs) for high throughput.
+ - Enable point-in-time recovery (PITR).
+
+:::note
+
+Since DynamoDB stores data in multiple availability zones by default, you don’t need to adjust any configurations to improve availability.
+
+:::
+
+
+ - Increase the number of Request Units (RUs) for high throughput.
+ - Enable point-in-time restore (PITR).
+ - Enable availability zones.
+
+
+ - Increase `concurrent_reads` and `concurrent_writes` for high throughput. For details, see the official Cassandra documentation about [`concurrent_writes`](https://cassandra.apache.org/doc/stable/cassandra/configuration/cass_yaml_file.html#concurrent_writes).
+
+
diff --git a/versioned_docs/version-3.X/deploy-overview.mdx b/versioned_docs/version-3.X/deploy-overview.mdx
new file mode 100644
index 00000000..d72a68d1
--- /dev/null
+++ b/versioned_docs/version-3.X/deploy-overview.mdx
@@ -0,0 +1,23 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Deploy Overview
+
+In this category, you can follow guides to help you become more familiar with deploying ScalarDB, specifically ScalarDB Cluster and ScalarDB Analytics, in local and cloud-based Kubernetes environments.
+
+## Deploy ScalarDB Cluster in a local Kubernetes environment
+
+To learn how to deploy ScalarDB Cluster in a local Kubernetes environment by using a Helm Chart and a PostgreSQL database, see [Deploy ScalarDB Cluster Locally](scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx).
+
+## Deploy ScalarDB Cluster in a cloud-based Kubernetes environment
+
+To learn how to deploy ScalarDB Cluster in a cloud-based Kubernetes environment by using a Helm Chart, see [Deploy ScalarDB Cluster on Amazon Elastic Kubernetes Service (EKS)](scalar-kubernetes/ManualDeploymentGuideScalarDBClusterOnEKS.mdx).
+
+## Deploy ScalarDB Analytics in a public cloud-based environment
+
+To learn how to deploy ScalarDB Analytics in a public cloud-based environment, see [Deploy ScalarDB Analytics in Public Cloud Environments](scalardb-analytics/deployment.mdx).
diff --git a/versioned_docs/version-3.X/design.mdx b/versioned_docs/version-3.X/design.mdx
new file mode 100644
index 00000000..34b85e49
--- /dev/null
+++ b/versioned_docs/version-3.X/design.mdx
@@ -0,0 +1,80 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Design
+
+This document briefly explains the design and implementation of ScalarDB. For what ScalarDB is and its use cases, see [ScalarDB Overview](./overview.mdx).
+
+## Overall architecture
+
+ScalarDB is hybrid transaction/analytical processing (HTAP) middleware that sits in between applications and databases. As shown in the following figure, ScalarDB consists of three components: Core, Cluster, and Analytics. ScalarDB basically employs a layered architecture, so the Cluster and Analytics components use the Core component to interact with underlying databases but sometimes bypass the Core component for performance optimization without sacrificing correctness. Likewise, each component also consists of several layers.
+
+
+
+## Components
+
+The following subsections explain each component one by one.
+
+### Core
+
+ScalarDB Core, which is provided as open-source software under the Apache 2 License, is an integral part of ScalarDB. Core provides a database manager with an abstraction layer that abstracts the underlying databases and adapters (or shims) that implement the abstraction for each database. In addition, it provides a transaction manager on top of the database abstraction that achieves database-agnostic transaction management based on Scalar's novel distributed transaction protocol called [Consensus Commit](./consensus-commit.mdx). Core is provided as a library that offers a simple CRUD interface.
+
+### Cluster
+
+ScalarDB Cluster, which is licensed under a commercial license, is a component that provides a clustering solution for the Core component to work as a clustered server. Cluster is mainly designed for OLTP workloads, which have many small, transactional and non-transactional reads and writes. In addition, it provides several enterprise features such as authentication, authorization, encryption at rest, and fine-grained access control (attribute-based access control). Not only does Cluster offer the same CRUD interface as the Core component, but it also offers SQL and GraphQL interfaces. Furthermore, it offers a vector store interface to interact with several vector stores. Since Cluster is provided as a container in a Kubernetes Pod, you can increase performance and availability by having more containers.
+
+### Analytics
+
+ScalarDB Analytics, which is licensed under a commercial license, is a component that provides scalable analytical processing for the data managed by the Core component or managed by applications that don’t use ScalarDB. Analytics is mainly designed for OLAP workloads, which have a small number of large, analytical read queries. In addition, it offers a SQL and DataSet API through Spark. Since the Analytics component is provided as a Java package that can be installed on Apache Spark engines, you can increase performance by having more Spark worker nodes.
+
+## Metadata tables
+
+ScalarDB manages various types of metadata in the underlying databases to provide its capabilities. The following table summarizes the metadata managed by each component.
+
+| Component | Metadata tables | Purpose | Location |
+| --------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- | --------------------------------------------------------------------------- |
+| Core | `scalardb.metadata` | For database schema information | In all the databases under ScalarDB |
+| Core | `coordinator.state` | For transaction statuses | In one designated database specified to store the Coordinator table |
+| Core | Application-managed tables | For WAL information | In all the tables accessed by Consensus Commit |
+| Cluster | `scalardb.users`, `scalardb.namespace_privileges`, `scalardb.table_privileges`, `scalardb.auth_tokens` | For [authentication and authorization](./scalardb-cluster/scalardb-auth-with-sql.mdx) | In one designated database specified to store the scalardb system namespace |
+| Cluster | `scalardb.encrypted_columns` | For [encryption at rest](./scalardb-cluster/encrypt-data-at-rest.mdx) | In one designated database specified to store the scalardb system namespace |
+| Cluster | `scalardb.abac_*` | For [attribute-based access control](./scalardb-cluster/authorize-with-abac.mdx) | In one designated database specified to store the scalardb system namespace |
+| Analytics | All the tables managed by the catalog server | For [data catalog](./scalardb-analytics/design.mdx#universal-data-catalog) | In the catalog server database |
+
+:::note
+
+If you need to take backups of the databases accessed by ScalarDB, you will also need to take backups of the metadata managed by ScalarDB. For more details, see [How to Back Up and Restore Databases Used Through ScalarDB](./backup-restore.mdx).
+
+:::
+
+## Limitations
+
+ScalarDB operates between applications and databases, which leads to certain limitations. This section summarizes the limitations of ScalarDB.
+
+### Applications cannot bypass ScalarDB to run transactions and analytical queries
+
+ScalarDB Core offers a database-agnostic transaction capability that operates outside of databases. Therefore, applications must interact with ScalarDB to execute transactions; otherwise, ScalarDB cannot ensure transaction correctness, such as snapshot and serializable isolation. For more details, see [Consensus Commit](./consensus-commit.mdx).
+
+Likewise, ScalarDB Analytics offers a scalable analytical query processing capability that operates outside of databases. Therefore, applications must interact with ScalarDB Analytics to execute analytical queries; otherwise, ScalarDB cannot ensure correctness, such as read-committed isolation. For more details, see [ScalarDB Analytics Design](./scalardb-analytics/design.mdx).
+
+### Applications cannot use all the capabilities of the underlying databases
+
+ScalarDB serves as an abstraction layer over the underlying databases, which means that applications cannot use all the capabilities and data types of these databases. For instance, ScalarDB does not support database-specific features such as Oracle PL/SQL.
+
+ScalarDB has been enhanced to provide features that are commonly found in most supported databases. For a list of features, see [ScalarDB Features](./features.mdx). To learn about the features planned for future releases, see [Roadmap](./roadmap.mdx).
+
+## Further reading
+
+For more details about the design and implementation of ScalarDB, see the following documents:
+
+- **Speaker Deck presentation:** [ScalarDB: Universal Transaction Manager](https://speakerdeck.com/scalar/scalar-db-universal-transaction-manager)
+
+In addition, the following materials were presented at the VLDB 2023 conference:
+
+- **Speaker Deck presentation:** [ScalarDB: Universal Transaction Manager for Polystores](https://speakerdeck.com/scalar/scalardb-universal-transaction-manager-for-polystores-vldb23)
+- **Detailed paper:** [ScalarDB: Universal Transaction Manager for Polystores](https://www.vldb.org/pvldb/vol16/p3768-yamada.pdf)
diff --git a/versioned_docs/version-3.X/develop-overview.mdx b/versioned_docs/version-3.X/develop-overview.mdx
new file mode 100644
index 00000000..652ca411
--- /dev/null
+++ b/versioned_docs/version-3.X/develop-overview.mdx
@@ -0,0 +1,35 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Develop Overview
+
+In this category, you can follow guides to help you become more familiar with ScalarDB, specifically with how to run transactions, analytical queries, and non-transactional storage operations.
+
+To get started with developing applications for ScalarDB, see the following sub-categories.
+
+## Run transactions
+
+In this sub-category, you can learn how to model your data based on the ScalarDB data model and create schemas. Then, you can learn how to run transactions through the ScalarDB Core library and ScalarDB Cluster, a gRPC server that wraps the Core library.
+
+For an overview of this sub-category, see [Run Transactions Overview](develop-run-transactions-overview.mdx).
+
+## Run non-transactional operations
+
+In this sub-category, you can learn how to run non-transactional storage operations.
+
+For an overview of this sub-category, see [Run Non-Transactional Operations Overview](develop-run-non-transactional-operations-overview.mdx).
+
+## Run analytical queries
+
+To learn how to run analytical queries by using ScalarDB Analytics, see [Run Analytical Queries Through ScalarDB Analytics](scalardb-analytics/run-analytical-queries.mdx).
+
+## Run sample applications
+
+In this sub-category, you can learn how to run various sample applications that take advantage of ScalarDB.
+
+For an overview of this sub-category, see [Run Sample Applications Overview](scalardb-samples/README.mdx).
diff --git a/versioned_docs/version-3.X/develop-run-non-transactional-operations-overview.mdx b/versioned_docs/version-3.X/develop-run-non-transactional-operations-overview.mdx
new file mode 100644
index 00000000..a4c22448
--- /dev/null
+++ b/versioned_docs/version-3.X/develop-run-non-transactional-operations-overview.mdx
@@ -0,0 +1,21 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Run Non-Transactional Storage Operations Overview
+
+ScalarDB was initially designed to provide a unified abstraction between diverse databases and transactions across such databases. However, there are cases where you only need the unified abstraction to simplify your applications that use multiple, possibly diverse, databases.
+
+ScalarDB can be configured to provide only the unified abstraction, without transaction capabilities, so that it only runs non-transactional operations on the underlying database and storage. Since ScalarDB in this configuration doesn't guarantee ACID across multiple operations, you can perform operations with better performance.
+
+In this sub-category, you can learn how to run such non-transactional storage operations.
+
+- Run Through the CRUD Interface
+ - [Use the ScalarDB Core Library](run-non-transactional-storage-operations-through-library.mdx)
+ - [Use ScalarDB Cluster](scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx)
+- [Run Through the SQL Interface](scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx)
+- [Run Through the Primitive CRUD Interface](run-non-transactional-storage-operations-through-primitive-crud-interface.mdx)
diff --git a/versioned_docs/version-3.X/develop-run-transactions-overview.mdx b/versioned_docs/version-3.X/develop-run-transactions-overview.mdx
new file mode 100644
index 00000000..17cc7831
--- /dev/null
+++ b/versioned_docs/version-3.X/develop-run-transactions-overview.mdx
@@ -0,0 +1,17 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Run Transactions Overview
+
+In this sub-category, you can learn how to model your data based on the ScalarDB data model and create schemas. Then, you can learn how to run transactions through the ScalarDB Core library and ScalarDB Cluster, a gRPC server that wraps the Core library.
+
+- [Model Your Data](data-modeling.mdx)
+- Run Through the CRUD Interface
+ - [Use the ScalarDB Core Library](run-transactions-through-scalardb-core-library.mdx)
+ - [Use ScalarDB Cluster](scalardb-cluster/run-transactions-through-scalardb-cluster.mdx)
+- [Run Through the SQL Interface](scalardb-cluster/run-transactions-through-scalardb-cluster-sql.mdx)
diff --git a/versioned_docs/version-3.X/features.mdx b/versioned_docs/version-3.X/features.mdx
new file mode 100644
index 00000000..0ba5766d
--- /dev/null
+++ b/versioned_docs/version-3.X/features.mdx
@@ -0,0 +1,29 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Features
+
+This document briefly explains which features are available in which editions of ScalarDB.
+
+| | ScalarDB Core (Community) | ScalarDB Cluster (Enterprise Standard) | ScalarDB Cluster (Enterprise Premium) | ScalarDB Analytics (Enterprise) |
+|-------------------------------------------------------------------------------------------------------------------------------------|---------------------------|----------------------------------------|------------------------------------------------------------|---------------------------------|
+| [Transaction processing across databases with primitive interfaces](getting-started-with-scalardb.mdx) | ✅ | ✅ | ✅ | – |
+| [Clustering](scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx) | - | ✅ | ✅ | – |
+| [Non-transactional storage operations](develop-run-non-transactional-operations-overview.mdx) | – | ✅ (3.14+) | ✅ (3.14+) | – |
+| [Authentication/authorization](scalardb-cluster/scalardb-auth-with-sql.mdx) | – | ✅ | ✅ | – |
+| [Encryption](scalardb-cluster/encrypt-data-at-rest.mdx) | – | – | ✅ (3.14+) | – |
+| [Attribute-based access control](scalardb-cluster/authorize-with-abac.mdx) | – | – | ✅ (3.15+) (Enterprise Premium Option*, Private Preview**) | – |
+| [SQL interface (SQL API, JDBC, Spring Data JDBC, and LINQ)](scalardb-sql/index.mdx) | – | – | ✅ | – |
+| [GraphQL interface](scalardb-graphql/index.mdx) | – | – | ✅ | – |
+| [Vector search interface](scalardb-cluster/getting-started-with-vector-search.mdx) | – | – | ✅ (3.15+) (Private Preview**) | – |
+| [Analytical query processing across ScalarDB-managed data sources](scalardb-samples/scalardb-analytics-spark-sample/README.mdx) | – | – | – | ✅ (3.14+) |
+| [Analytical query processing across non-ScalarDB-managed data sources](scalardb-samples/scalardb-analytics-spark-sample/README.mdx) | – | – | – | ✅ (3.15+) |
+
+\* This feature is not available in the Enterprise Premium edition. If you want to use this feature, please [contact us](https://www.scalar-labs.com/contact).
+
+\*\* This feature is currently in Private Preview. For details, please [contact us](https://www.scalar-labs.com/contact) or wait for this feature to become publicly available in a future version.
diff --git a/versioned_docs/version-3.X/getting-started-with-scalardb-by-using-kotlin.mdx b/versioned_docs/version-3.X/getting-started-with-scalardb-by-using-kotlin.mdx
new file mode 100644
index 00000000..2284966c
--- /dev/null
+++ b/versioned_docs/version-3.X/getting-started-with-scalardb-by-using-kotlin.mdx
@@ -0,0 +1,413 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with ScalarDB by Using Kotlin
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This getting started tutorial explains how to configure your preferred database in ScalarDB and set up a basic electronic money application by using Kotlin. Since Kotlin has Java interoperability, you can use ScalarDB directly from Kotlin.
+
+:::warning
+
+The electronic money application is simplified for this tutorial and isn't suitable for a production environment.
+
+:::
+
+## Prerequisites for this sample application
+
+- OpenJDK LTS version (8, 11, 17, or 21) from [Eclipse Temurin](https://adoptium.net/temurin/releases/)
+- [Docker](https://www.docker.com/get-started/) 20.10 or later with [Docker Compose](https://docs.docker.com/compose/install/) V2 or later
+
+:::note
+
+This sample application has been tested with OpenJDK from Eclipse Temurin. ScalarDB itself, however, has been tested with JDK distributions from various vendors. For details about the requirements for ScalarDB, including compatible JDK distributions, please see [Requirements](./requirements.mdx).
+
+:::
+
+## Clone the ScalarDB samples repository
+
+Open **Terminal**, then clone the ScalarDB samples repository by running the following command:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-samples
+```
+
+Then, go to the directory that contains the sample application by running the following command:
+
+```console
+cd scalardb-samples/scalardb-kotlin-sample
+```
+
+## Set up your database for ScalarDB
+
+Select your database, and follow the instructions to configure it for ScalarDB.
+
+For a list of databases that ScalarDB supports, see [Databases](requirements.mdx#databases).
+
+
+
+ Run MySQL locally
+
+ You can run MySQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+ To start MySQL, run the following command:
+
+ ```console
+ docker compose up -d mysql
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for MySQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For MySQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:mysql://localhost:3306/
+ scalar.db.username=root
+ scalar.db.password=mysql
+ ```
+
+
+ Run PostgreSQL locally
+
+ You can run PostgreSQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+ To start PostgreSQL, run the following command:
+
+ ```console
+ docker compose up -d postgres
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for PostgreSQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For PostgreSQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:postgresql://localhost:5432/
+ scalar.db.username=postgres
+ scalar.db.password=postgres
+ ```
+
+
+ Run Oracle Database locally
+
+ You can run Oracle Database in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+ To start Oracle Database, run the following command:
+
+ ```console
+ docker compose up -d oracle
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Oracle Database in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Oracle
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:oracle:thin:@//localhost:1521/FREEPDB1
+ scalar.db.username=SYSTEM
+ scalar.db.password=Oracle
+ ```
+
+
+ Run SQL Server locally
+
+ You can run SQL Server in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+ To start SQL Server, run the following command:
+
+ ```console
+ docker compose up -d sqlserver
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for SQL Server in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For SQL Server
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:sqlserver://localhost:1433;encrypt=true;trustServerCertificate=true
+ scalar.db.username=sa
+ scalar.db.password=SqlServer22
+ ```
+
+
+ Run Amazon DynamoDB Local
+
+ You can run Amazon DynamoDB Local in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+ To start Amazon DynamoDB Local, run the following command:
+
+ ```console
+ docker compose up -d dynamodb
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Amazon DynamoDB Local in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For DynamoDB Local
+ scalar.db.storage=dynamo
+ scalar.db.contact_points=sample
+ scalar.db.username=sample
+ scalar.db.password=sample
+ scalar.db.dynamo.endpoint_override=http://localhost:8000
+ ```
+
+
+ To use Azure Cosmos DB for NoSQL, you must have an Azure account. If you don't have an Azure account, visit [Create an Azure Cosmos DB account](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-portal#create-account).
+
+ Configure Cosmos DB for NoSQL
+
+ Set the **default consistency level** to **Strong** according to the official document at [Configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level).
+
+ Configure ScalarDB
+
+ The following instructions assume that you have properly installed and configured the JDK in your local environment and properly configured your Cosmos DB for NoSQL account in Azure.
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Be sure to change the values for `scalar.db.contact_points` and `scalar.db.password` as described.
+
+ ```properties
+ # For Cosmos DB
+ scalar.db.storage=cosmos
+ scalar.db.contact_points=
+ scalar.db.password=
+ ```
+
+:::note
+
+You can use the primary key or the secondary key in your Azure Cosmos DB account as the value for `scalar.db.password`.
+
+:::
+
+
+ Run Cassandra locally
+
+ You can run Apache Cassandra in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+ To start Apache Cassandra, run the following command:
+ ```console
+ docker compose up -d cassandra
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-kotlin-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Cassandra in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Cassandra
+ scalar.db.storage=cassandra
+ scalar.db.contact_points=localhost
+ scalar.db.username=cassandra
+ scalar.db.password=cassandra
+ ```
+
+
+
+## Load the database schema
+
+You need to define the database schema (the method in which the data will be organized) in the application. For details about the supported data types, see [Data type mapping between ScalarDB and other databases](schema-loader.mdx#data-type-mapping-between-scalardb-and-other-databases).
+
+For this tutorial, a file named **schema.json** already exists in the `scalardb-samples/scalardb-kotlin-sample` directory. To apply the schema, go to the [`scalardb` Releases](https://github.com/scalar-labs/scalardb/releases) page and download the ScalarDB Schema Loader that matches the version of ScalarDB that you are using, saving it in the `scalardb-samples/scalardb-kotlin-sample` directory.
+
+Then, based on your database, run the following command, replacing `` with the version of the ScalarDB Schema Loader that you downloaded:
+
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator --no-backup --no-scaling
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+Also, the `--no-backup` and `--no-scaling` options are specified because Amazon DynamoDB Local does not support continuous backup and auto-scaling.
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config database.properties --schema-file schema.json --coordinator --replication-factor=1
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+In addition, the `--replication-factor=1` option has an effect only when using Cassandra. The default replication factor is `3`, but to facilitate the setup in this tutorial, `1` is used so that you only need to prepare a cluster with one node instead of three nodes. However, keep in mind that a replication factor of `1` is not suited for production.
+
+:::
+
+
+
+## Execute transactions and retrieve data in the basic electronic money application
+
+After loading the schema, you can execute transactions and retrieve data in the basic electronic money application that is included in the repository that you cloned.
+
+The application supports the following types of transactions:
+
+- Create an account.
+- Add funds to an account.
+- Send funds between two accounts.
+- Get an account balance.
+
+:::note
+
+When you first execute a Gradle command, Gradle will automatically install the necessary libraries.
+
+:::
+
+### Create an account with a balance
+
+You need an account with a balance so that you can send funds between accounts.
+
+To create an account for **customer1** that has a balance of **500**, run the following command:
+
+```console
+./gradlew run --args="-action charge -amount 500 -to customer1"
+```
+
+### Create an account without a balance
+
+After setting up an account that has a balance, you need another account for sending funds to.
+
+To create an account for **merchant1** that has a balance of **0**, run the following command:
+
+```console
+./gradlew run --args="-action charge -amount 0 -to merchant1"
+```
+
+### Add funds to an account
+
+You can add funds to an account in the same way that you created and added funds to an account in [Create an account with a balance](#create-an-account-with-a-balance).
+
+To add **500** to the account for **customer1**, run the following command:
+
+```console
+./gradlew run --args="-action charge -amount 500 -to customer1"
+```
+
+The account for **customer1** will now have a balance of **1000**.
+
+### Send electronic money between two accounts
+
+Now that you have created two accounts, with at least one of those accounts having a balance, you can send funds from one account to the other account.
+
+To have **customer1** pay **100** to **merchant1**, run the following command:
+
+```console
+./gradlew run --args="-action pay -amount 100 -from customer1 -to merchant1"
+```
+
+### Get an account balance
+
+After sending funds from one account to the other, you can check the balance of each account.
+
+To get the balance of **customer1**, run the following command:
+
+```console
+./gradlew run --args="-action getBalance -id customer1"
+```
+
+You should see the following output:
+
+```console
+...
+The balance for customer1 is 900
+...
+```
+
+To get the balance of **merchant1**, run the following command:
+
+```console
+./gradlew run --args="-action getBalance -id merchant1"
+```
+
+You should see the following output:
+
+```console
+...
+The balance for merchant1 is 100
+...
+```
+
+## Stop the database
+
+To stop the database, stop the Docker container by running the following command:
+
+```console
+docker compose down
+```
+
+## Reference
+
+To see the source code for the electronic money application used in this tutorial, see [`ElectronicMoney.kt`](https://github.com/scalar-labs/scalardb-samples/blob/main/scalardb-kotlin-sample/src/main/kotlin/sample/ElectronicMoney.kt).
diff --git a/versioned_docs/version-3.X/getting-started-with-scalardb.mdx b/versioned_docs/version-3.X/getting-started-with-scalardb.mdx
new file mode 100644
index 00000000..0c7bcd8f
--- /dev/null
+++ b/versioned_docs/version-3.X/getting-started-with-scalardb.mdx
@@ -0,0 +1,556 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with ScalarDB
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This getting started tutorial explains how to configure your preferred database in ScalarDB and illustrates the process of creating a sample e-commerce application by using ScalarDB. The sample e-commerce application shows how users can order and pay for items by using a line of credit.
+
+:::warning
+
+Since the focus of the sample application is to demonstrate using ScalarDB, application-specific error handling, authentication processing, and similar functions are not included in the sample application. For details about exception handling in ScalarDB, see [How to handle exceptions](api-guide.mdx#how-to-handle-exceptions).
+
+:::
+
+## Prerequisites for this sample application
+
+- OpenJDK LTS version (8, 11, 17, or 21) from [Eclipse Temurin](https://adoptium.net/temurin/releases/)
+- [Docker](https://www.docker.com/get-started/) 20.10 or later with [Docker Compose](https://docs.docker.com/compose/install/) V2 or later
+
+:::note
+
+This sample application has been tested with OpenJDK from Eclipse Temurin. ScalarDB itself, however, has been tested with JDK distributions from various vendors. For details about the requirements for ScalarDB, including compatible JDK distributions, please see [Requirements](./requirements.mdx).
+
+:::
+
+## Clone the ScalarDB samples repository
+
+Open **Terminal**, then clone the ScalarDB samples repository by running the following command:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-samples
+```
+
+Then, go to the directory that contains the sample application by running the following command:
+
+```console
+cd scalardb-samples/scalardb-sample
+```
+
+## Set up your database for ScalarDB
+
+Select your database, and follow the instructions to configure it for ScalarDB.
+
+For a list of databases that ScalarDB supports, see [Databases](requirements.mdx#databases).
+
+
+
+ Run MySQL locally
+
+ You can run MySQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start MySQL, run the following command:
+
+ ```console
+ docker compose up -d mysql
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for MySQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For MySQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:mysql://localhost:3306/
+ scalar.db.username=root
+ scalar.db.password=mysql
+ ```
+
+
+ Run PostgreSQL locally
+
+ You can run PostgreSQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start PostgreSQL, run the following command:
+
+ ```console
+ docker compose up -d postgres
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for PostgreSQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For PostgreSQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:postgresql://localhost:5432/
+ scalar.db.username=postgres
+ scalar.db.password=postgres
+ ```
+
+
+ Run Oracle Database locally
+
+ You can run Oracle Database in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Oracle Database, run the following command:
+
+ ```console
+ docker compose up -d oracle
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Oracle Database in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Oracle
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:oracle:thin:@//localhost:1521/FREEPDB1
+ scalar.db.username=SYSTEM
+ scalar.db.password=Oracle
+ ```
+
+
+ Run SQL Server locally
+
+ You can run SQL Server in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start SQL Server, run the following command:
+
+ ```console
+ docker compose up -d sqlserver
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for SQL Server in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For SQL Server
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:sqlserver://localhost:1433;encrypt=true;trustServerCertificate=true
+ scalar.db.username=sa
+ scalar.db.password=SqlServer22
+ ```
+
+
+ Run Db2 locally
+
+ You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start IBM Db2, run the following command:
+
+ ```console
+ docker compose up -d db2
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Db2 in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Db2
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:db2://localhost:50000/sample
+ scalar.db.username=db2inst1
+ scalar.db.password=db2inst1
+ ```
+
+
+ Run Amazon DynamoDB Local
+
+ You can run Amazon DynamoDB Local in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Amazon DynamoDB Local, run the following command:
+
+ ```console
+ docker compose up -d dynamodb
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Amazon DynamoDB Local in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For DynamoDB Local
+ scalar.db.storage=dynamo
+ scalar.db.contact_points=sample
+ scalar.db.username=sample
+ scalar.db.password=sample
+ scalar.db.dynamo.endpoint_override=http://localhost:8000
+ ```
+
+
+ To use Azure Cosmos DB for NoSQL, you must have an Azure account. If you don't have an Azure account, visit [Create an Azure Cosmos DB account](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-portal#create-account).
+
+ Configure Cosmos DB for NoSQL
+
+ Set the **default consistency level** to **Strong** according to the official document at [Configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level).
+
+ Configure ScalarDB
+
+ The following instructions assume that you have properly installed and configured the JDK in your local environment and properly configured your Cosmos DB for NoSQL account in Azure.
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Be sure to set `scalar.db.contact_points` to your Cosmos DB for NoSQL account URI and `scalar.db.password` to your account key, as shown below:
+
+ ```properties
+ # For Cosmos DB
+ scalar.db.storage=cosmos
+ scalar.db.contact_points=<COSMOS_DB_FOR_NOSQL_URI>
+ scalar.db.password=<COSMOS_DB_FOR_NOSQL_KEY>
+ ```
+
+:::note
+
+You can use the primary key or the secondary key in your Azure Cosmos DB account as the value for `scalar.db.password`.
+
+:::
+
+
+ Run Cassandra locally
+
+ You can run Apache Cassandra in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Apache Cassandra, run the following command:
+
+ ```console
+ docker compose up -d cassandra
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Cassandra in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Cassandra
+ scalar.db.storage=cassandra
+ scalar.db.contact_points=localhost
+ scalar.db.username=cassandra
+ scalar.db.password=cassandra
+ ```
+
+
+
+## Load the database schema
+
+You need to define the database schema (the method in which the data will be organized) in the application. For details about the supported data types, see [Data type mapping between ScalarDB and other databases](schema-loader.mdx#data-type-mapping-between-scalardb-and-other-databases).
+
+For this tutorial, a file named **schema.json** already exists in the `scalardb-samples/scalardb-sample` directory. To apply the schema, go to the [`scalardb` Releases](https://github.com/scalar-labs/scalardb/releases) page and download the ScalarDB Schema Loader that matches the version of ScalarDB that you are using, saving it in the `scalardb-samples/scalardb-sample` directory.
+
+Then, run the following command, replacing `<VERSION>` with the version of the ScalarDB Schema Loader that you downloaded:
+
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator --no-backup --no-scaling
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+Also, `--no-backup` and `--no-scaling` options are specified because Amazon DynamoDB Local does not support continuous backup and auto-scaling.
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+:::
+
+
+ ```console
+ java -jar scalardb-schema-loader-<VERSION>.jar --config database.properties --schema-file schema.json --coordinator --replication-factor=1
+ ```
+
+:::note
+
+The `--coordinator` option is specified because a table with `transaction` set to `true` exists in the schema. For details about configuring and loading a schema, see [ScalarDB Schema Loader](schema-loader.mdx).
+
+In addition, the `--replication-factor=1` option has an effect only when using Cassandra. The default replication factor is `3`, but to facilitate the setup in this tutorial, `1` is used so that you only need to prepare a cluster with one node instead of three nodes. However, keep in mind that a replication factor of `1` is not suited for production.
+
+:::
+
+
+
+### Schema details
+
+As shown in [`schema.json`](https://github.com/scalar-labs/scalardb-samples/tree/main/scalardb-sample/schema.json) for the sample application, all the tables are created in the `sample` namespace.
+
+- `sample.customers`: a table that manages customer information
+ - `credit_limit`: the maximum amount of money that the lender will allow the customer to spend from their line of credit
+ - `credit_total`: the amount of money that the customer has spent from their line of credit
+- `sample.orders`: a table that manages order information
+- `sample.statements`: a table that manages order statement information
+- `sample.items`: a table that manages information for items to be ordered
+
+The Entity Relationship Diagram for the schema is as follows:
+
+![ERD for the schema](images/getting-started-ERD.png)
+
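+The tutorial uses Schema Loader and **schema.json** to create these tables. For reference only, a comparable table can also be defined programmatically through the Administrative API. The following is a minimal, hypothetical sketch for `sample.customers`; it assumes the column types used in **schema.json** and is not part of the sample application:
+
+```java
+import com.scalar.db.api.DistributedTransactionAdmin;
+import com.scalar.db.api.TableMetadata;
+import com.scalar.db.io.DataType;
+import com.scalar.db.service.TransactionFactory;
+
+public class CreateCustomersTableSketch {
+  public static void main(String[] args) throws Exception {
+    // Create an admin instance from the same configuration file used in this tutorial.
+    TransactionFactory factory = TransactionFactory.create("database.properties");
+    DistributedTransactionAdmin admin = factory.getTransactionAdmin();
+
+    // Define the sample.customers table with customer_id as the partition key.
+    TableMetadata customers =
+        TableMetadata.newBuilder()
+            .addColumn("customer_id", DataType.INT)
+            .addColumn("name", DataType.TEXT)
+            .addColumn("credit_limit", DataType.INT)
+            .addColumn("credit_total", DataType.INT)
+            .addPartitionKey("customer_id")
+            .build();
+
+    // Create the namespace, the table, and the Coordinator tables if they don't already exist.
+    admin.createNamespace("sample", true);
+    admin.createTable("sample", "customers", customers, true);
+    admin.createCoordinatorTables(true);
+
+    admin.close();
+  }
+}
+```
+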
+### Load the initial data
+
+Before running the sample application, you need to load the initial data by running the following command:
+
+```console
+./gradlew run --args="LoadInitialData"
+```
+
+After the initial data has been loaded, the following records should be stored in the tables.
+
+**`sample.customers` table**
+
+| customer_id | name | credit_limit | credit_total |
+|-------------|---------------|--------------|--------------|
+| 1 | Yamada Taro | 10000 | 0 |
+| 2 | Yamada Hanako | 10000 | 0 |
+| 3 | Suzuki Ichiro | 10000 | 0 |
+
+**`sample.items` table**
+
+| item_id | name | price |
+|---------|--------|-------|
+| 1 | Apple | 1000 |
+| 2 | Orange | 2000 |
+| 3 | Grape | 2500 |
+| 4 | Mango | 5000 |
+| 5 | Melon | 3000 |
+
+## Execute transactions and retrieve data in the sample application
+
+The following sections describe how to execute transactions and retrieve data in the sample e-commerce application.
+
+### Get customer information
+
+Start with getting information about the customer whose ID is `1` by running the following command:
+
+```console
+./gradlew run --args="GetCustomerInfo 1"
+```
+
+You should see the following output:
+
+```console
+...
+{"id": 1, "name": "Yamada Taro", "credit_limit": 10000, "credit_total": 0}
+...
+```
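+
+Under the hood, this lookup is a single `Get` on the `sample.customers` table. The sample's actual code is in `Sample.java`, linked in the Reference section at the end of this tutorial; as a rough sketch only, reading a customer record with the ScalarDB Java API might look like the following, assuming the schema described above:
+
+```java
+import com.scalar.db.api.DistributedTransaction;
+import com.scalar.db.api.DistributedTransactionManager;
+import com.scalar.db.api.Get;
+import com.scalar.db.api.Result;
+import com.scalar.db.io.Key;
+import com.scalar.db.service.TransactionFactory;
+import java.util.Optional;
+
+public class GetCustomerInfoSketch {
+  public static void main(String[] args) throws Exception {
+    TransactionFactory factory = TransactionFactory.create("database.properties");
+    DistributedTransactionManager manager = factory.getTransactionManager();
+
+    DistributedTransaction tx = manager.start();
+    try {
+      // Look up the customer whose ID is 1 by its partition key.
+      Get get =
+          Get.newBuilder()
+              .namespace("sample")
+              .table("customers")
+              .partitionKey(Key.ofInt("customer_id", 1))
+              .build();
+      Optional<Result> customer = tx.get(get);
+
+      customer.ifPresent(
+          r ->
+              System.out.printf(
+                  "{\"id\": %d, \"name\": \"%s\", \"credit_limit\": %d, \"credit_total\": %d}%n",
+                  r.getInt("customer_id"), r.getText("name"), r.getInt("credit_limit"), r.getInt("credit_total")));
+
+      tx.commit();
+    } catch (Exception e) {
+      tx.abort();
+      throw e;
+    } finally {
+      manager.close();
+    }
+  }
+}
+```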
+
+### Place an order
+
+Then, have customer ID `1` place an order for three apples and two oranges by running the following command:
+
+:::note
+
+The order format in this command is `./gradlew run --args="PlaceOrder <CUSTOMER_ID> <ITEM_ID>:<COUNT>,<ITEM_ID>:<COUNT>,..."`.
+
+:::
+
+```console
+./gradlew run --args="PlaceOrder 1 1:3,2:2"
+```
+
+You should see output similar to the following, with a different UUID for `order_id`, which confirms that the order was successful:
+
+```console
+...
+{"order_id": "dea4964a-ff50-4ecf-9201-027981a1566e"}
+...
+```
+
+### Check order details
+
+Check details about the order by running the following command, replacing `<ORDER_ID>` with the UUID for the `order_id` that was shown after running the previous command:
+
+```console
+./gradlew run --args="GetOrder <ORDER_ID>"
+```
+
+You should see output similar to the following, with a different UUID for `order_id` and a different `timestamp` value:
+
+```console
+...
+{"order": {"order_id": "dea4964a-ff50-4ecf-9201-027981a1566e","timestamp": 1650948340914,"customer_id": 1,"customer_name": "Yamada Taro","statement": [{"item_id": 1,"item_name": "Apple","price": 1000,"count": 3,"total": 3000},{"item_id": 2,"item_name": "Orange","price": 2000,"count": 2,"total": 4000}],"total": 7000}}
+...
+```
+
+### Place another order
+
+Place an order for one melon, which uses up the remaining credit for customer ID `1`, by running the following command:
+
+```console
+./gradlew run --args="PlaceOrder 1 5:1"
+```
+
+You should see output similar to the following, with a different UUID for `order_id`, which confirms that the order was successful:
+
+```console
+...
+{"order_id": "bcc34150-91fa-4bea-83db-d2dbe6f0f30d"}
+...
+```
+
+### Check order history
+
+Get the history of all orders for customer ID `1` by running the following command:
+
+```console
+./gradlew run --args="GetOrders 1"
+```
+
+You should see output similar to the following, with different `order_id` UUIDs and `timestamp` values, which shows the history of all orders for customer ID `1` in descending order by timestamp:
+
+```console
+...
+{"order": [{"order_id": "dea4964a-ff50-4ecf-9201-027981a1566e","timestamp": 1650948340914,"customer_id": 1,"customer_name": "Yamada Taro","statement": [{"item_id": 1,"item_name": "Apple","price": 1000,"count": 3,"total": 3000},{"item_id": 2,"item_name": "Orange","price": 2000,"count": 2,"total": 4000}],"total": 7000},{"order_id": "bcc34150-91fa-4bea-83db-d2dbe6f0f30d","timestamp": 1650948412766,"customer_id": 1,"customer_name": "Yamada Taro","statement": [{"item_id": 5,"item_name": "Melon","price": 3000,"count": 1,"total": 3000}],"total": 3000}]}
+...
+```
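+
+Listing a customer's orders like this maps naturally to a ScalarDB `Scan`. The following is a rough sketch only, assuming (as the ordering of the output above suggests) that `sample.orders` is partitioned by `customer_id` and ordered by `timestamp`; the sample's actual logic is in `Sample.java`:
+
+```java
+import com.scalar.db.api.DistributedTransaction;
+import com.scalar.db.api.Result;
+import com.scalar.db.api.Scan;
+import com.scalar.db.io.Key;
+import java.util.List;
+
+public class GetOrdersSketch {
+  // Hypothetical sketch: scan all orders for one customer, newest first.
+  public static List<Result> getOrders(DistributedTransaction tx, int customerId) throws Exception {
+    Scan scan =
+        Scan.newBuilder()
+            .namespace("sample")
+            .table("orders")
+            .partitionKey(Key.ofInt("customer_id", customerId))
+            .ordering(Scan.Ordering.desc("timestamp"))
+            .build();
+    return tx.scan(scan);
+  }
+}
+```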
+
+### Check credit total
+
+Get the credit total for customer ID `1` by running the following command:
+
+```console
+./gradlew run --args="GetCustomerInfo 1"
+```
+
+You should see the following output, which shows that the `credit_total` for customer ID `1` has reached the `credit_limit`, so the customer cannot place any more orders:
+
+```console
+...
+{"id": 1, "name": "Yamada Taro", "credit_limit": 10000, "credit_total": 10000}
+...
+```
+
+Try to place an order for one grape and one mango by running the following command:
+
+```console
+./gradlew run --args="PlaceOrder 1 3:1,4:1"
+```
+
+You should see the following output, which shows that the order failed because the `credit_total` amount would have exceeded the `credit_limit` amount:
+
+```console
+...
+java.lang.RuntimeException: Credit limit exceeded
+ at sample.Sample.placeOrder(Sample.java:205)
+ at sample.command.PlaceOrderCommand.call(PlaceOrderCommand.java:33)
+ at sample.command.PlaceOrderCommand.call(PlaceOrderCommand.java:8)
+ at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
+ at picocli.CommandLine.access$900(CommandLine.java:145)
+ at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
+ at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
+ at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
+ at picocli.CommandLine.execute(CommandLine.java:1904)
+ at sample.command.SampleCommand.main(SampleCommand.java:35)
+...
+```
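+
+This failure comes from the sample application's own business logic rather than from the database: the order is placed inside a ScalarDB transaction that reads the customer's `credit_limit` and `credit_total` and rejects the order before writing anything if the limit would be exceeded. A simplified, hypothetical sketch of that check (the actual implementation is in `Sample.java`) might look like this:
+
+```java
+import com.scalar.db.api.DistributedTransaction;
+import com.scalar.db.api.DistributedTransactionManager;
+import com.scalar.db.api.Get;
+import com.scalar.db.api.Put;
+import com.scalar.db.api.Result;
+import com.scalar.db.io.Key;
+import java.util.Optional;
+
+public class CreditLimitCheckSketch {
+  // Hypothetical sketch of the credit-limit check; the real logic lives in Sample.java.
+  public static void placeOrder(DistributedTransactionManager manager, int customerId, int orderTotal)
+      throws Exception {
+    DistributedTransaction tx = manager.start();
+    try {
+      // Read the customer's current credit information in the same transaction.
+      Optional<Result> customer =
+          tx.get(
+              Get.newBuilder()
+                  .namespace("sample")
+                  .table("customers")
+                  .partitionKey(Key.ofInt("customer_id", customerId))
+                  .build());
+
+      int creditLimit = customer.get().getInt("credit_limit");
+      int creditTotal = customer.get().getInt("credit_total");
+
+      // Reject the order before writing anything if the limit would be exceeded.
+      if (creditTotal + orderTotal > creditLimit) {
+        throw new RuntimeException("Credit limit exceeded");
+      }
+
+      // Record the new credit total; writing the order and statement records is omitted here.
+      tx.put(
+          Put.newBuilder()
+              .namespace("sample")
+              .table("customers")
+              .partitionKey(Key.ofInt("customer_id", customerId))
+              .intValue("credit_total", creditTotal + orderTotal)
+              .build());
+
+      tx.commit();
+    } catch (Exception e) {
+      tx.abort();
+      throw e;
+    }
+  }
+}
+```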
+
+### Make a payment
+
+To continue making orders, customer ID `1` must make a payment to reduce the `credit_total` amount.
+
+Make a payment by running the following command:
+
+```console
+./gradlew run --args="Repayment 1 8000"
+```
+
+Then, check the `credit_total` amount for customer ID `1` by running the following command:
+
+```console
+./gradlew run --args="GetCustomerInfo 1"
+```
+
+You should see the following output, which shows that a payment was applied to customer ID `1`, reducing the `credit_total` amount:
+
+```console
+...
+{"id": 1, "name": "Yamada Taro", "credit_limit": 10000, "credit_total": 2000}
+...
+```
+
+Now that customer ID `1` has made a payment, place an order for one grape and one mango by running the following command:
+
+```console
+./gradlew run --args="PlaceOrder 1 3:1,4:1"
+```
+
+You should see output similar to the following, with a different UUID for `order_id`, which confirms that the order was successful:
+
+```console
+...
+{"order_id": "8911cab3-1c2b-4322-9386-adb1c024e078"}
+...
+```
+
+## Stop the database
+
+To stop the database, stop the Docker container by running the following command:
+
+```console
+docker compose down
+```
+
+## Reference
+
+To see the source code for the e-commerce application used in this tutorial, see [`Sample.java`](https://github.com/scalar-labs/scalardb-samples/blob/main/scalardb-sample/src/main/java/sample/Sample.java).
diff --git a/versioned_docs/version-3.X/glossary.mdx b/versioned_docs/version-3.X/glossary.mdx
new file mode 100644
index 00000000..6cd569d6
--- /dev/null
+++ b/versioned_docs/version-3.X/glossary.mdx
@@ -0,0 +1,119 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Glossary
+
+This glossary includes database and distributed-system terms that are often used when using ScalarDB.
+
+## ACID
+
+Atomicity, consistency, isolation, and durability (ACID) is a set of properties that ensure database transactions are processed reliably, maintaining integrity even in cases of errors or system failures.
+
+## concurrency control
+
+Concurrency control in databases ensures that multiple transactions can occur simultaneously without causing data inconsistency, usually through mechanisms like locking or timestamp ordering.
+
+## consensus
+
+Consensus in distributed systems refers to the process of achieving agreement among multiple computers or nodes on a single data value or system state.
+
+## data federation
+
+Data federation is the process of integrating data from different sources without moving the data, creating a unified view for querying and analysis.
+
+## data mesh
+
+A data mesh is a decentralized data architecture that enables each business domain within a company to autonomously manage data and use it efficiently.
+
+## data virtualization
+
+Data virtualization is similar to data federation in many aspects, meaning that it virtualizes multiple data sources into a unified view, simplifying queries without moving the data.
+
+## database anomalies
+
+Database anomalies are inconsistencies or errors in data that can occur when operations such as insertions, updates, or deletions are performed without proper transaction management.
+
+## federation engine
+
+A federation engine facilitates data integration and querying across multiple disparate data sources, often as part of a data federation architecture.
+
+## global transaction
+
+A global transaction spans multiple databases or distributed systems and ensures that all involved systems commit or roll back changes as a single unit.
+
+## heterogeneous databases
+
+Heterogeneous databases refer to systems composed of different database technologies that may have distinct data models, query languages, and transaction mechanisms.
+
+## HTAP
+
+Hybrid transactional/analytical processing (HTAP) refers to a system that can handle both transactional and analytical workloads concurrently on the same data set, removing the need for separate databases.
+
+## JDBC
+
+Java Database Connectivity (JDBC) is an API that allows Java applications to interact with databases, providing methods for querying and updating data in relational databases.
+
+## linearizability
+
+Linearizability is a strong consistency model in distributed systems where operations appear to occur atomically in some order, and each operation takes effect between its start and end.
+
+## NoSQL database
+
+A NoSQL database is a non-relational database designed for specific data models, such as document, key-value, wide-column, or graph stores, often used for handling large-scale, distributed data.
+
+## Paxos
+
+Paxos is a family of protocols used in distributed systems to achieve consensus, even in the presence of node failures.
+
+## PITR
+
+Point-in-time recovery (PITR) allows a database to be restored to a previous state at any specific time, usually after an unintended event like data corruption.
+
+## polystores
+
+Polystores are database architectures that allow users to interact with multiple, heterogeneous data stores, each optimized for a specific workload or data type, as if they were a single system.
+
+## read-committed isolation
+
+Read-committed isolation is an isolation level where each transaction sees only committed data, preventing dirty reads but allowing non-repeatable reads.
+
+## relational database
+
+A relational database stores data in tables with rows and columns, using a structured query language (SQL) to define, query, and manipulate the data.
+
+## replication
+
+Replication in databases involves copying and distributing data across multiple machines or locations to ensure reliability, availability, and fault tolerance.
+
+## Saga
+
+The Saga pattern is a method for managing long-running transactions in a distributed system, where each operation in the transaction is followed by a compensating action in case of failure.
+
+## serializable isolation
+
+Serializable isolation (serializability) is the highest isolation level in transactional systems, ensuring that the outcome of concurrently executed transactions is the same as if they were executed sequentially.
+
+## snapshot isolation
+
+Snapshot isolation is an isolation level that allows transactions to read a consistent snapshot of the database, protecting them from seeing changes made by other transactions until they complete.
+
+## TCC
+
+Try-Confirm/Cancel (TCC) is a pattern for distributed transactions that splits an operation into three steps, allowing for coordination and recovery across multiple systems.
+
+## transaction
+
+A transaction in databases is a sequence of operations treated as a single logical unit of work, ensuring consistency and integrity, typically conforming to ACID properties.
+
+## transaction manager
+
+A transaction manager coordinates the execution of transactions across multiple systems or databases, ensuring that all steps of the transaction succeed or fail as a unit.
+
+## two-phase commit
+
+Two-phase commit is a protocol for ensuring all participants in a distributed transaction either commit or roll back the transaction, ensuring consistency across systems.
diff --git a/versioned_docs/version-3.X/images/data_model.png b/versioned_docs/version-3.X/images/data_model.png
new file mode 100644
index 00000000..15a0e4d4
Binary files /dev/null and b/versioned_docs/version-3.X/images/data_model.png differ
diff --git a/versioned_docs/version-3.X/images/getting-started-ERD.png b/versioned_docs/version-3.X/images/getting-started-ERD.png
new file mode 100644
index 00000000..1a6d13c5
Binary files /dev/null and b/versioned_docs/version-3.X/images/getting-started-ERD.png differ
diff --git a/versioned_docs/version-3.X/images/scalardb-architecture.png b/versioned_docs/version-3.X/images/scalardb-architecture.png
new file mode 100644
index 00000000..6f22111c
Binary files /dev/null and b/versioned_docs/version-3.X/images/scalardb-architecture.png differ
diff --git a/versioned_docs/version-3.X/images/scalardb-metadata.png b/versioned_docs/version-3.X/images/scalardb-metadata.png
new file mode 100644
index 00000000..49880267
Binary files /dev/null and b/versioned_docs/version-3.X/images/scalardb-metadata.png differ
diff --git a/versioned_docs/version-3.X/images/scalardb.png b/versioned_docs/version-3.X/images/scalardb.png
new file mode 100644
index 00000000..658486cb
Binary files /dev/null and b/versioned_docs/version-3.X/images/scalardb.png differ
diff --git a/versioned_docs/version-3.X/images/scalardb_data_model.png b/versioned_docs/version-3.X/images/scalardb_data_model.png
new file mode 100644
index 00000000..7a02fa23
Binary files /dev/null and b/versioned_docs/version-3.X/images/scalardb_data_model.png differ
diff --git a/versioned_docs/version-3.X/images/software_stack.png b/versioned_docs/version-3.X/images/software_stack.png
new file mode 100644
index 00000000..75fba6e6
Binary files /dev/null and b/versioned_docs/version-3.X/images/software_stack.png differ
diff --git a/versioned_docs/version-3.X/images/two_phase_commit_load_balancing.png b/versioned_docs/version-3.X/images/two_phase_commit_load_balancing.png
new file mode 100644
index 00000000..5cdc26f0
Binary files /dev/null and b/versioned_docs/version-3.X/images/two_phase_commit_load_balancing.png differ
diff --git a/versioned_docs/version-3.X/images/two_phase_commit_sequence_diagram.png b/versioned_docs/version-3.X/images/two_phase_commit_sequence_diagram.png
new file mode 100644
index 00000000..116ef635
Binary files /dev/null and b/versioned_docs/version-3.X/images/two_phase_commit_sequence_diagram.png differ
diff --git a/versioned_docs/version-3.X/manage-backup-and-restore.mdx b/versioned_docs/version-3.X/manage-backup-and-restore.mdx
new file mode 100644
index 00000000..a89dea81
--- /dev/null
+++ b/versioned_docs/version-3.X/manage-backup-and-restore.mdx
@@ -0,0 +1,23 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Back Up and Restore Databases
+
+This guide explains how to back up and restore databases that are used by ScalarDB.
+
+## Basic guidelines to back up and restore databases
+
+Before performing a backup, be sure to read [How to Back Up and Restore Databases Used Through ScalarDB](backup-restore.mdx).
+
+## Back up databases when using ScalarDB in a Kubernetes environment
+
+For details on how to back up databases in a Kubernetes environment, see [Back up a NoSQL database in a Kubernetes environment](scalar-kubernetes/BackupNoSQL.mdx).
+
+## Restore databases when using ScalarDB in a Kubernetes environment
+
+For details on how to restore databases in a Kubernetes environment, see [Restore databases in a Kubernetes environment](scalar-kubernetes/RestoreDatabase.mdx).
diff --git a/versioned_docs/version-3.X/manage-monitor-overview.mdx b/versioned_docs/version-3.X/manage-monitor-overview.mdx
new file mode 100644
index 00000000..d120478c
--- /dev/null
+++ b/versioned_docs/version-3.X/manage-monitor-overview.mdx
@@ -0,0 +1,21 @@
+---
+tags:
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+# Monitor Overview
+
+Scalar Manager is a centralized management and monitoring solution for ScalarDB within Kubernetes cluster environments that allows you to:
+
+- Check the availability of ScalarDB.
+- Schedule or execute pausing jobs that create transactionally consistent periods in the databases used by ScalarDB.
+- Check the time-series metrics and logs of ScalarDB through Grafana dashboards.
+
+For more details about Scalar Manager, see [Scalar Manager Overview](scalar-manager/overview.mdx).
+
+## Deploy Scalar Manager
+
+You can deploy Scalar Manager by using a Helm Chart.
+
+For details on how to deploy Scalar Manager, see [Deploy Scalar Manager](helm-charts/getting-started-scalar-manager.mdx).
diff --git a/versioned_docs/version-3.X/manage-overview.mdx b/versioned_docs/version-3.X/manage-overview.mdx
new file mode 100644
index 00000000..a58fdea0
--- /dev/null
+++ b/versioned_docs/version-3.X/manage-overview.mdx
@@ -0,0 +1,26 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Manage Overview
+
+In this category, you can follow guides to help you manage ScalarDB.
+
+- For details on how to scale ScalarDB, see [Scale](scalar-kubernetes/HowToScaleScalarDB.mdx).
+- For details on how to upgrade ScalarDB, see [Upgrade](scalar-kubernetes/HowToUpgradeScalarDB.mdx).
+
+## Monitor
+
+In this sub-category, you can learn how to monitor your ScalarDB deployment.
+
+For an overview of this sub-category, see [Monitor Overview](manage-monitor-overview.mdx).
+
+## Back up and restore
+
+In this sub-category, you can learn how to back up and restore the databases that are connected to your ScalarDB deployment.
+
+For an overview of this sub-category, see [Back Up and Restore Databases](manage-backup-and-restore.mdx).
diff --git a/versioned_docs/version-3.X/migrate-overview.mdx b/versioned_docs/version-3.X/migrate-overview.mdx
new file mode 100644
index 00000000..5b67453f
--- /dev/null
+++ b/versioned_docs/version-3.X/migrate-overview.mdx
@@ -0,0 +1,14 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Migrate Overview
+
+For details on importing your tables or migrating your applications and databases to a ScalarDB-based environment, see the following guides:
+
+- [Importing Existing Tables to ScalarDB by Using ScalarDB Schema Loader](schema-loader-import.mdx)
+- [How to Migrate Your Applications and Databases into a ScalarDB-Based Environment](scalardb-sql/migration-guide.mdx)
diff --git a/versioned_docs/version-3.X/multi-storage-transactions.mdx b/versioned_docs/version-3.X/multi-storage-transactions.mdx
new file mode 100644
index 00000000..de8da288
--- /dev/null
+++ b/versioned_docs/version-3.X/multi-storage-transactions.mdx
@@ -0,0 +1,68 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Multi-Storage Transactions
+
+ScalarDB transactions can span multiple storages or databases while maintaining ACID compliance by using a feature called *multi-storage transactions*.
+
+This page explains how multi-storage transactions work and how to configure the feature in ScalarDB.
+
+## How multi-storage transactions work in ScalarDB
+
+In ScalarDB, the `multi-storage` implementation holds multiple storage instances and maintains a mapping from each namespace name to a storage instance. When an operation is executed, the multi-storage transactions feature uses the namespace-storage mapping to choose the storage instance for the specified namespace and executes the operation on that instance.
+
+## How to configure ScalarDB to support multi-storage transactions
+
+To enable multi-storage transactions, you need to specify `consensus-commit` as the value for `scalar.db.transaction_manager`, `multi-storage` as the value for `scalar.db.storage`, and configure your databases in the ScalarDB properties file.
+
+The following is an example of configurations for multi-storage transactions:
+
+```properties
+# Consensus Commit is required to support multi-storage transactions.
+scalar.db.transaction_manager=consensus-commit
+
+# Multi-storage implementation is used for Consensus Commit.
+scalar.db.storage=multi-storage
+
+# Define storage names by using a comma-separated format.
+# In this case, "cassandra" and "mysql" are used.
+scalar.db.multi_storage.storages=cassandra,mysql
+
+# Define the "cassandra" storage.
+# When setting storage properties, such as `storage`, `contact_points`, `username`, and `password`, for multi-storage transactions, the format is `scalar.db.multi_storage.storages.<STORAGE_NAME>.<PROPERTY_NAME>`.
+# For example, to configure the `scalar.db.contact_points` property for Cassandra, specify `scalar.db.multi_storage.storages.cassandra.contact_points`.
+scalar.db.multi_storage.storages.cassandra.storage=cassandra
+scalar.db.multi_storage.storages.cassandra.contact_points=localhost
+scalar.db.multi_storage.storages.cassandra.username=cassandra
+scalar.db.multi_storage.storages.cassandra.password=cassandra
+
+# Define the "mysql" storage.
+# When defining JDBC-specific configurations for multi-storage transactions, you can follow a similar format of `scalar.db.multi_storage.storages.<STORAGE_NAME>.<PROPERTY_NAME>`.
+# For example, to configure the `scalar.db.jdbc.connection_pool.min_idle` property for MySQL, specify `scalar.db.multi_storage.storages.mysql.jdbc.connection_pool.min_idle`.
+scalar.db.multi_storage.storages.mysql.storage=jdbc
+scalar.db.multi_storage.storages.mysql.contact_points=jdbc:mysql://localhost:3306/
+scalar.db.multi_storage.storages.mysql.username=root
+scalar.db.multi_storage.storages.mysql.password=mysql
+# Define the JDBC-specific configurations for the "mysql" storage.
+scalar.db.multi_storage.storages.mysql.jdbc.connection_pool.min_idle=5
+scalar.db.multi_storage.storages.mysql.jdbc.connection_pool.max_idle=10
+scalar.db.multi_storage.storages.mysql.jdbc.connection_pool.max_total=25
+
+# Define namespace mapping from a namespace name to a storage.
+# The format is "<NAMESPACE_NAME>:<STORAGE_NAME>,...".
+scalar.db.multi_storage.namespace_mapping=user:cassandra,coordinator:mysql
+
+# Define the default storage that's used if a specified table doesn't have any mapping.
+scalar.db.multi_storage.default_storage=cassandra
+```
+
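+From the application's perspective, nothing changes when namespaces live on different storages: you use the same transaction API, and ScalarDB routes each operation to the appropriate storage instance. The following is a minimal sketch under the configuration above, assuming it is saved as `scalardb.properties`, that a hypothetical `orders` namespace has also been mapped to the `mysql` storage (for example, `user:cassandra,orders:mysql,coordinator:mysql`), and that the tables referenced below already exist:
+
+```java
+import com.scalar.db.api.DistributedTransaction;
+import com.scalar.db.api.DistributedTransactionManager;
+import com.scalar.db.api.Put;
+import com.scalar.db.io.Key;
+import com.scalar.db.service.TransactionFactory;
+
+public class MultiStorageTransactionSketch {
+  public static void main(String[] args) throws Exception {
+    // Load the multi-storage configuration shown above (assumed to be saved as scalardb.properties).
+    TransactionFactory factory = TransactionFactory.create("scalardb.properties");
+    DistributedTransactionManager manager = factory.getTransactionManager();
+
+    DistributedTransaction tx = manager.start();
+    try {
+      // This write goes to the "user" namespace, which is mapped to the Cassandra storage.
+      tx.put(
+          Put.newBuilder()
+              .namespace("user")
+              .table("profiles")
+              .partitionKey(Key.ofInt("user_id", 1))
+              .textValue("name", "Alice")
+              .build());
+
+      // This write goes to the hypothetical "orders" namespace, assumed to be mapped to the MySQL storage.
+      tx.put(
+          Put.newBuilder()
+              .namespace("orders")
+              .table("orders")
+              .partitionKey(Key.ofInt("order_id", 1))
+              .intValue("user_id", 1)
+              .build());
+
+      // Both writes commit or roll back atomically, even though they span two different databases.
+      tx.commit();
+    } catch (Exception e) {
+      tx.abort();
+      throw e;
+    } finally {
+      manager.close();
+    }
+  }
+}
+```
+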
+For additional configurations, see [ScalarDB Configurations](configurations.mdx).
+
+## Hands-on tutorial
+
+For a hands-on tutorial, see [Create a Sample Application That Supports Multi-Storage Transactions](scalardb-samples/multi-storage-transaction-sample/README.mdx).
diff --git a/versioned_docs/version-3.X/overview.mdx b/versioned_docs/version-3.X/overview.mdx
new file mode 100644
index 00000000..b51df6be
--- /dev/null
+++ b/versioned_docs/version-3.X/overview.mdx
@@ -0,0 +1,77 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Overview
+
+This page describes what ScalarDB is and its primary use cases.
+
+## What is ScalarDB?
+
+ScalarDB is a universal hybrid transaction/analytical processing (HTAP) engine for diverse databases. It runs as middleware on databases and virtually unifies diverse databases by achieving ACID transactions and real-time analytics across them to reduce the complexity of managing multiple databases or multiple instances of a single database.
+
+
+
+As a versatile solution, ScalarDB supports a range of databases, including:
+
+- Relational databases that support JDBC, such as IBM Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, PostgreSQL, SQLite, and their compatible databases, like Amazon Aurora and YugabyteDB.
+- NoSQL databases like Amazon DynamoDB, Apache Cassandra, and Azure Cosmos DB.
+
+For details on which databases ScalarDB supports, refer to [Databases](requirements.mdx#databases).
+
+## Why ScalarDB?
+
+Several solutions, such as global transaction managers, data federation engines, and HTAP systems, have similar goals, but they are limited in the following perspectives:
+
+- Global transaction managers (like Oracle MicroTx and Atomikos) are designed to run transactions across a limited set of heterogeneous databases (like only XA-compliant databases).
+- Data federation engines (like Denodo and Starburst) are designed to run analytical queries across heterogeneous databases.
+- HTAP systems (like TiDB and SingleStore) run both transactions and analytical queries only on homogeneous databases.
+
+In other words, they virtually unify databases, but with limitations. For example, with data federation engines, users can run read-only analytical queries on a virtualized view across multiple databases. However, they often need to run update queries separately for each database.
+
+Unlike other solutions, ScalarDB stands out by offering the ability to run both transactional and analytical queries on heterogeneous databases, which can significantly simplify database management.
+
+The following table summarizes how ScalarDB is different from the other solutions.
+
+| | Transactions across heterogeneous databases | Analytics across heterogeneous databases |
+| :------------------------------------------------------------: | :------------------------------------------------------------------: | :--------------------------------------: |
+| Global transaction managers (like Oracle MicroTx and Atomikos) | Yes (but existing solutions support only a limited set of databases) | No |
+| Data federation engines (like Denodo and Starburst) | No | Yes |
+| HTAP systems (like TiDB and SingleStore) | No (support homogeneous databases only) | No (support homogeneous databases only) |
+| **ScalarDB** | **Yes (supports various databases)** | **Yes** |
+
+
+## ScalarDB use cases
+
+ScalarDB can be used in various ways. Here are the primary use cases of ScalarDB.
+
+### Managing siloed databases easily
+
+Many enterprises comprise several organizations, departments, and business units to support agile business operations, which often leads to siloed information systems. In particular, different organizations likely manage different applications with different databases. Managing such siloed databases is challenging because applications must communicate with each database separately and properly deal with the differences between databases.
+
+ScalarDB simplifies the management of siloed databases with a unified interface, enabling users to treat the databases as if they were a single database. For example, users can run (analytical) join queries over multiple databases without interacting with each database individually.
+
+### Managing consistency between multiple databases
+
+Modern architectures, like the microservice architecture, encourage a system to separate a service and its database into smaller subsets to increase system modularity and development efficiency. However, managing diverse databases, especially of different kinds, is challenging because applications must ensure the correct states (or, in other words, consistencies) of those databases, even when using transaction management patterns like Saga and TCC.
+
+ScalarDB simplifies managing such diverse databases with a correctness guarantee (or, in other words, ACID with strict serializability), enabling you to focus on application development without worrying about guaranteeing consistency between databases.
+
+### Simplifying data management in a data mesh
+
+Enterprises have been investing their time in building [data meshes](https://martinfowler.com/articles/data-mesh-principles.html) to streamline and scale data utilization. However, constructing a data mesh is not necessarily easy. For example, there are many technical issues in how to manage decentralized data.
+
+ScalarDB simplifies the management of decentralized databases in a data mesh, for example, by providing a unified API for all the databases in a data mesh to align with the data-as-a-product principle easily.
+
+### Reducing database migration hurdles
+
+Applications tend to be locked into using a certain database because of the specific capabilities that the database provides. Such database lock-in discourages upgrading or changing the database because doing so often requires rewriting the application.
+
+ScalarDB provides a unified interface for diverse databases. Thus, once an application is written by using the ScalarDB interface, it becomes portable, which helps to achieve seamless database migration without rewriting the application.
+
+## Further reading
+
+- [ScalarDB Technical Overview](https://speakerdeck.com/scalar/scalar-db-universal-transaction-manager)
+- [ScalarDB Research Paper [VLDB'23]](https://dl.acm.org/doi/10.14778/3611540.3611563)
\ No newline at end of file
diff --git a/versioned_docs/version-3.X/quickstart-overview.mdx b/versioned_docs/version-3.X/quickstart-overview.mdx
new file mode 100644
index 00000000..50647c79
--- /dev/null
+++ b/versioned_docs/version-3.X/quickstart-overview.mdx
@@ -0,0 +1,41 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Quickstart Overview
+
+In this category, you can follow quickstart tutorials for how to get started with running transactions and queries through ScalarDB.
+
+## Try running transactions through the ScalarDB Core library
+
+In this sub-category, you can follow tutorials on how to run ACID transactions through the ScalarDB Core library, which is publicly available under the Apache 2 License.
+
+For an overview of this sub-category, see [ScalarDB Core Quickstart Overview](quickstart-scalardb-core-overview.mdx).
+
+## Try running transactions through ScalarDB Cluster
+
+In this sub-category, you can see tutorials on how to run ACID transactions through ScalarDB Cluster, which is a [gRPC](https://grpc.io/) server that wraps the ScalarDB Core library.
+
+For an overview of this sub-category, see [ScalarDB Cluster Quickstart Overview](quickstart-scalardb-cluster-overview.mdx).
+
+:::note
+
+ScalarDB Cluster is available only in the Enterprise edition.
+
+:::
+
+## Try running analytical queries through ScalarDB Analytics
+
+In this sub-category, you can see tutorials on how to run analytical queries over the databases that you write through ScalarDB by using a component called ScalarDB Analytics. ScalarDB Analytics targets both ScalarDB-managed databases, which are updated through ScalarDB transactions, and non-ScalarDB-managed databases.
+
+For an overview of this sub-category, see [ScalarDB Analytics Quickstart Overview](quickstart-scalardb-analytics-overview.mdx).
+
+:::note
+
+- ScalarDB Analytics with PostgreSQL is available only under the Apache 2 License and doesn't require a commercial license.
+
+:::
diff --git a/versioned_docs/version-3.X/quickstart-scalardb-analytics-overview.mdx b/versioned_docs/version-3.X/quickstart-scalardb-analytics-overview.mdx
new file mode 100644
index 00000000..6b3bc3ec
--- /dev/null
+++ b/versioned_docs/version-3.X/quickstart-scalardb-analytics-overview.mdx
@@ -0,0 +1,13 @@
+---
+tags:
+ - Community
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Analytics Quickstart Overview
+
+In this sub-category, you can see tutorials on how to run analytical queries over the databases that you write through ScalarDB by using a component called ScalarDB Analytics.
+
+- To try running analytical queries through PostgreSQL, see [Getting Started with ScalarDB Analytics with PostgreSQL](scalardb-analytics-postgresql/getting-started.mdx).
+- To try running analytical queries through Spark, see [Getting Started with ScalarDB Analytics](scalardb-samples/scalardb-analytics-spark-sample/README.mdx).
diff --git a/versioned_docs/version-3.X/quickstart-scalardb-cluster-overview.mdx b/versioned_docs/version-3.X/quickstart-scalardb-cluster-overview.mdx
new file mode 100644
index 00000000..6d5538ca
--- /dev/null
+++ b/versioned_docs/version-3.X/quickstart-scalardb-cluster-overview.mdx
@@ -0,0 +1,15 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Cluster Quickstart Overview
+
+In this sub-category, you can see tutorials on how to run ACID transactions through ScalarDB Cluster, which is a [gRPC](https://grpc.io/) server that wraps the ScalarDB Core library.
+
+- To try running transactions, see [Getting Started with ScalarDB Cluster](scalardb-cluster/getting-started-with-scalardb-cluster.mdx).
+- To try running transactions through the SQL interface via JDBC, see [Getting Started with ScalarDB Cluster SQL via JDBC](scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx).
+- To try running transactions through the SQL interface via Spring Data JDBC, see [Getting Started with ScalarDB Cluster SQL via Spring Data JDBC for ScalarDB](scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx).
+- To try running transactions through the GraphQL interface, see [Getting Started with ScalarDB Cluster GraphQL](scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx).
diff --git a/versioned_docs/version-3.X/quickstart-scalardb-core-overview.mdx b/versioned_docs/version-3.X/quickstart-scalardb-core-overview.mdx
new file mode 100644
index 00000000..36a59c55
--- /dev/null
+++ b/versioned_docs/version-3.X/quickstart-scalardb-core-overview.mdx
@@ -0,0 +1,14 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Core Quickstart Overview
+
+In this sub-category, you can follow tutorials on how to run ACID transactions through the ScalarDB Core library, which is publicly available under the Apache 2 license.
+
+- To try running transactions, see [Getting Started with ScalarDB](getting-started-with-scalardb.mdx).
+- To try running transactions by using Kotlin, see [Getting Started with ScalarDB by Using Kotlin](getting-started-with-scalardb-by-using-kotlin.mdx).
diff --git a/versioned_docs/version-3.X/requirements.mdx b/versioned_docs/version-3.X/requirements.mdx
new file mode 100644
index 00000000..a5fdb19b
--- /dev/null
+++ b/versioned_docs/version-3.X/requirements.mdx
@@ -0,0 +1,288 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Requirements
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This page describes the required tools and their versions to use ScalarDB correctly.
+
+## Client SDK
+
+Because ScalarDB is written in Java, the easiest way to interact with ScalarDB is to use the Java client SDKs:
+
+- [SDK for ScalarDB Core](add-scalardb-to-your-build.mdx)
+- [SDK for ScalarDB Cluster](scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx)
+
+### Java
+
+The following Java Development Kits (JDKs) are verified and supported.
+
+- **[Oracle JDK](https://www.oracle.com/java/):** 8, 11, 17, or 21 (LTS versions)
+- **[OpenJDK](https://openjdk.org/) ([Eclipse Temurin](https://adoptium.net/temurin/), [Amazon Corretto](https://aws.amazon.com/corretto/), or [Microsoft Build of OpenJDK](https://learn.microsoft.com/en-us/java/openjdk/)):** 8, 11, 17, or 21 (LTS versions)
+
+### .NET
+
+ScalarDB is provided as a gRPC server called ScalarDB Cluster, which also has a [.NET client SDK](scalardb-cluster-dotnet-client-sdk/index.mdx) that wraps the .NET client generated from the proto file.
+
+The following .NET versions are verified and supported:
+
+- [.NET 8.0](https://dotnet.microsoft.com/en-us/download/dotnet/8.0)
+- [.NET 6.0](https://dotnet.microsoft.com/en-us/download/dotnet/6.0)
+
+### Other languages
+
+ScalarDB Cluster uses gRPC version 1.65.0, so you can create your own client by using a gRPC-generated client in your preferred language.
+
+## Databases
+
+ScalarDB is middleware that runs on top of the following databases and their versions.
+
+### Relational databases
+
+
+
+
+| Version | Oracle Database 23ai | Oracle Database 21c | Oracle Database 19c |
+|:------------------|:--------------------|:------------------|:------------------|
+| **ScalarDB 3.16** | âś… | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… | âś… |
+
+
+
+
+| Version | Db2 12.1 | Db2 11.5 |
+|:------------------|:---------|:---------|
+| **ScalarDB 3.16** | âś… | âś… |
+| **ScalarDB 3.15** | ❌ | ❌ |
+| **ScalarDB 3.14** | ❌ | ❌ |
+| **ScalarDB 3.13** | ❌ | ❌ |
+| **ScalarDB 3.12** | ❌ | ❌ |
+| **ScalarDB 3.11** | ❌ | ❌ |
+| **ScalarDB 3.10** | ❌ | ❌ |
+| **ScalarDB 3.9** | ❌ | ❌ |
+| **ScalarDB 3.8** | ❌ | ❌ |
+| **ScalarDB 3.7** | ❌ | ❌ |
+
+:::note
+
+Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is not currently supported.
+
+:::
+
+
+
+
+| Version | MySQL 8.4 | MySQL 8.0 |
+|:------------------|:----------|:-----------|
+| **ScalarDB 3.16** | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… |
+
+
+
+
+| Version | PostgreSQL 17 | PostgreSQL 16 | PostgreSQL 15 | PostgreSQL 14 | PostgreSQL 13 |
+|:------------------|:--------------|:--------------|:--------------|:--------------|---------------|
+| **ScalarDB 3.16** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… | âś… | âś… | âś… |
+
+
+
+
+| Version | Aurora MySQL 3 | Aurora MySQL 2 |
+|:------------------|:----------------|:----------------|
+| **ScalarDB 3.16** | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… |
+
+
+
+
+| Version | Aurora PostgreSQL 16 | Aurora PostgreSQL 15 | Aurora PostgreSQL 14 | Aurora PostgreSQL 13 |
+|:------------------|:---------------------|:---------------------|:---------------------|:---------------------|
+| **ScalarDB 3.16** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… | âś… | âś… |
+
+
+
+
+| Version | MariaDB 11.4 | MariaDB 10.11 |
+|:------------------|:--------------|:--------------|
+| **ScalarDB 3.16** | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… |
+
+
+
+
+| Version | SQL Server 2022 | SQL Server 2019 | SQL Server 2017 |
+|:------------------|:-----------------|:-----------------|:-----------------|
+| **ScalarDB 3.16** | âś… | âś… | âś… |
+| **ScalarDB 3.15** | âś… | âś… | âś… |
+| **ScalarDB 3.14** | âś… | âś… | âś… |
+| **ScalarDB 3.13** | âś… | âś… | âś… |
+| **ScalarDB 3.12** | âś… | âś… | âś… |
+| **ScalarDB 3.11** | âś… | âś… | âś… |
+| **ScalarDB 3.10** | âś… | âś… | âś… |
+| **ScalarDB 3.9** | âś… | âś… | âś… |
+| **ScalarDB 3.8** | âś… | âś… | âś… |
+| **ScalarDB 3.7** | âś… | âś… | âś… |
+
+
+
+
+| Version | SQLite 3 |
+|:------------------|:----------|
+| **ScalarDB 3.16** | âś… |
+| **ScalarDB 3.15** | âś… |
+| **ScalarDB 3.14** | âś… |
+| **ScalarDB 3.13** | âś… |
+| **ScalarDB 3.12** | âś… |
+| **ScalarDB 3.11** | âś… |
+| **ScalarDB 3.10** | âś… |
+| **ScalarDB 3.9** | âś… |
+| **ScalarDB 3.8** | ❌ |
+| **ScalarDB 3.7** | ❌ |
+
+
+
+
+| Version | YugabyteDB 2 |
+|:------------------|:-------------|
+| **ScalarDB 3.16** | âś… |
+| **ScalarDB 3.15** | âś… |
+| **ScalarDB 3.14** | âś… |
+| **ScalarDB 3.13** | âś… |
+| **ScalarDB 3.12** | ❌ |
+| **ScalarDB 3.11** | ❌ |
+| **ScalarDB 3.10** | ❌ |
+| **ScalarDB 3.9** | ❌ |
+| **ScalarDB 3.8** | ❌ |
+| **ScalarDB 3.7** | ❌ |
+
+
+
+
+### NoSQL databases
+
+
+
+
+| Version | DynamoDB |
+|:------------------|:----------|
+| **ScalarDB 3.16** | âś… |
+| **ScalarDB 3.15** | âś… |
+| **ScalarDB 3.14** | âś… |
+| **ScalarDB 3.13** | âś… |
+| **ScalarDB 3.12** | âś… |
+| **ScalarDB 3.11** | âś… |
+| **ScalarDB 3.10** | âś… |
+| **ScalarDB 3.9** | âś… |
+| **ScalarDB 3.8** | âś… |
+| **ScalarDB 3.7** | âś… |
+
+
+
+
+| Version | Cassandra 4.1 | Cassandra 4.0 | Cassandra 3.11 | Cassandra 3.0 |
+|:------------------|:---------------|:---------------|:----------------|:---------------|
+| **ScalarDB 3.16** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.15** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.14** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.13** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.12** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.11** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.10** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.9** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.8** | ❌ | ❌ | ✅ | ✅ |
+| **ScalarDB 3.7** | ❌ | ❌ | ✅ | ✅ |
+
+
+
+
+| Version | Cosmos DB for NoSQL |
+|:------------------|:---------------------|
+| **ScalarDB 3.16** | âś… |
+| **ScalarDB 3.15** | âś… |
+| **ScalarDB 3.14** | âś… |
+| **ScalarDB 3.13** | âś… |
+| **ScalarDB 3.12** | âś… |
+| **ScalarDB 3.11** | âś… |
+| **ScalarDB 3.10** | âś… |
+| **ScalarDB 3.9** | âś… |
+| **ScalarDB 3.8** | âś… |
+| **ScalarDB 3.7** | âś… |
+
+
+
+
+:::note
+
+For details on how to configure each database, see [Configurations for the Underlying Databases of ScalarDB](./database-configurations.mdx).
+
+:::
+
+## Kubernetes
+
+ScalarDB is provided as a Pod on the Kubernetes platform in production environments. ScalarDB supports the following platforms and tools.
+
+### Platform
+- **[Kubernetes](https://kubernetes.io/):** 1.28 - 1.32
+ - **[Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)**
+ - **[Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/products/kubernetes-service)**
+- **[Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift):** TBD
+
+### Package manager
+- **[Helm](https://helm.sh/):** 3.5+
diff --git a/versioned_docs/version-3.X/roadmap.mdx b/versioned_docs/version-3.X/roadmap.mdx
new file mode 100644
index 00000000..b195d884
--- /dev/null
+++ b/versioned_docs/version-3.X/roadmap.mdx
@@ -0,0 +1,135 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Roadmap
+
+This roadmap provides a look into the proposed future of ScalarDB. The purpose of this roadmap is to provide visibility into what changes may be coming so that you can more closely follow progress, learn about key milestones, and give feedback during development. This roadmap will be updated as new versions of ScalarDB are released.
+
+:::warning
+
+During the course of development, this roadmap is subject to change based on user needs and feedback. **Do not schedule your release plans according to the contents of this roadmap.**
+
+If you have a feature request or want to prioritize feature development, please create an issue in [GitHub](https://github.com/scalar-labs/scalardb/issues).
+
+:::
+
+
+### CY2025 Q2
+
+
+#### Support for additional databases
+
+- **IBM Db2**
+ - Users will be able to use IBM Db2 as an underlying database through ScalarDB Cluster.
+- **TiDB**
+ - Users will be able to use TiDB as an underlying database through ScalarDB Cluster.
+- **Databricks**
+ - Users will be able to use Databricks as an underlying database through ScalarDB Cluster and ScalarDB Analytics.
+- **Snowflake**
+ - Users will be able to use Snowflake as an underlying database through ScalarDB Cluster and ScalarDB Analytics.
+
+#### Usability
+
+- **Addition of decimal data types**
+ - Users will be able to use decimal data types so that users can handle decimal numbers with high precision.
+- **Removal of extra-write strategy**
+ - Users will no longer be able to use the extra-write strategy to make transactions serializable. Although ScalarDB currently provides two strategies (extra-read and extra-write strategies) to make transactions serializable, the extra-write strategy has several limitations. For example, users can't issue write and scan operations in the same transaction. Therefore, the strategy will be removed so that users don't need to worry about such limitations when creating applications.
+- **Better governance in ScalarDB Analytics**
+ - Users will be able to be authenticated and authorized by using the ScalarDB Core features.
+
+#### Performance
+
+- **Addition of read-committed isolation**
+ - Users will be able to run transactions with a read-committed isolation to achieve better performance for applications that do not require strong correctness.
+- **One-phase commit optimization**
+ - Users will be able to run a transaction more efficiently by using one-phase commit if the operations of the transaction are all applied to a single database or a single partition.
+- **Optimization for multiple write operations per database**
+ - Users will be able to run transactions more efficiently with a batch preparation and commitment if there are multiple write operations for a database.
+- **Optimization for read-only transactions**
+ - Users will be able to run transactions more efficiently by avoiding coordinator writes when committing transactions.
+- **Removal of WAL-interpreted views in ScalarDB Analytics**
+ - Users will be able to read committed data by using ScalarDB Core instead of WAL-interpreted views, which will increase query performance.
+
+#### Cloud support
+
+- **Container offering in Azure Marketplace for ScalarDB Cluster**
+ - Users will be able to deploy ScalarDB Cluster by using the Azure container offering, which enables users to use a pay-as-you-go subscription model.
+- **Google Cloud Platform (GCP) support for ScalarDB Cluster**
+ - Users will be able to deploy ScalarDB Cluster in Google Kubernetes Engine (GKE) in GCP.
+- **Container offering in Amazon Marketplace for ScalarDB Analytics**
+ - Users will be able to deploy ScalarDB Analytics by using the container offering, which enables users to use a pay-as-you-go subscription model.
+
+### CY2025 Q3
+
+#### New capabilities
+
+- **Decoupled metadata management**
+ - Users will be able to start using ScalarDB Cluster without migrating or changing the schemas of existing applications by managing the transaction metadata of ScalarDB in a separate location.
+
+#### Usability
+
+- **Views**
+ - Users will be able to define views so that they can manage multiple different databases in an easier and simplified way.
+- **Addition of SQL operations for aggregation**
+ - Users will be able to issue aggregation operations in ScalarDB SQL.
+- **Elimination of out-of-memory errors due to large scans**
+ - Users will be able to issue large scans without experiencing out-of-memory errors.
+- **Enabling of read operations during a paused duration**
+ - Users will be able to issue read operations even during a paused duration so that users can still read data while taking backups.
+
+#### Scalability and availability
+
+- **Semi-synchronous replication**
+ - Users will be able to replicate the data of ScalarDB-based applications in a disaster-recoverable manner. For example, assume you provide a primary service in Tokyo and a standby service in Osaka. In case of catastrophic failure in Tokyo, you can switch the primary service to Osaka so that you can continue to provide the service without data loss and extended downtime.
+
+### CY2025 Q4
+
+#### New capabilities
+
+- **Native secondary index**
+ - Users will be able to define flexible secondary indexes. The existing secondary index is limited because it is implemented based on the common capabilities of the supported databases' secondary indexes. Therefore, for example, you cannot define a multi-column index. The new secondary index will be created at the ScalarDB layer so that you can create more flexible indexes, like a multi-column index.
+- **Universal catalog**
+ - Users will be able to manage metadata, including schemas and semantic information, for operational and analytical databases across separate business domains in a unified manner.
+- **Universal authentication and authorization**
+ - Users will be able to be given access to ScalarDB Cluster and ScalarDB Analytics by using a unified authentication and authorization method.
+
+#### Support for additional databases (object storage)
+
+- **Azure Blob Storage**
+ - Users will be able to use Azure Blob Storage as an underlying database through ScalarDB Cluster.
+- **Amazon S3**
+ - Users will be able to use Amazon S3 as an underlying database through ScalarDB Cluster.
+- **Google Cloud Storage**
+ - Users will be able to use Google Cloud Storage as an underlying database through ScalarDB Cluster and ScalarDB Analytics.
+
+#### Performance
+
+- **Reduction of storage space needed for managing ScalarDB metadata**
+  - Users will likely use less storage space to run ScalarDB. ScalarDB will remove the before images of transactions after they are committed. However, whether removing those before images actually reduces storage space depends on the underlying databases.
+
+#### Cloud support
+
+- **Red Hat OpenShift support**
+ - Users will be able to use Red Hat–certified Helm Charts for ScalarDB Cluster in OpenShift environments.
+- **Container offering in Google Cloud Marketplace**
+ - Users will be able to deploy ScalarDB Cluster by using the Google Cloud container offering, which enables users to use a pay-as-you-go subscription model.
+
+### CY2026
+
+- **Audit logging**
+ - Users will be able to view and manage the access logs of ScalarDB Cluster and Analytics, mainly for auditing purposes.
+- **Stored procedures**
+ - Users will be able to define stored procedures so that they can execute a set of operations with a complex logic inside ScalarDB Cluster.
+- **Triggers**
+ - Users will be able to define triggers so that they can automatically execute a set of operations when a specific event occurs in ScalarDB Cluster.
+- **User-defined functions (UDFs)**
+  - Users will be able to define functions so that they can use them in SQL statements to express complex logic in a simpler way.
+- **Addition of SQL operations for sorting**
+  - Users will be able to issue arbitrary sorting (ORDER BY) operations in ScalarDB SQL for multiple or non-JDBC databases. (Currently, ScalarDB supports sorting operations only by clustering keys, or arbitrary sorting operations only for a single underlying JDBC database.)
+- **Addition of more data types**
+ - Users will be able to use complex data types, such as JSON.
\ No newline at end of file
diff --git a/versioned_docs/version-3.X/run-non-transactional-storage-operations-through-library.mdx b/versioned_docs/version-3.X/run-non-transactional-storage-operations-through-library.mdx
new file mode 100644
index 00000000..ea97ee58
--- /dev/null
+++ b/versioned_docs/version-3.X/run-non-transactional-storage-operations-through-library.mdx
@@ -0,0 +1,295 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Run Non-Transactional Storage Operations Through the Core Library
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This guide explains how to run non-transactional storage operations through the ScalarDB Core library.
+
+## Preparation
+
+For the purpose of this guide, you will set up a database and ScalarDB by using a sample in the ScalarDB samples repository.
+
+### Clone the ScalarDB samples repository
+
+Open **Terminal**, then clone the ScalarDB samples repository by running the following command:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-samples
+```
+
+Then, go to the directory that contains the necessary files by running the following command:
+
+```console
+cd scalardb-samples/scalardb-sample
+```
+
+## Set up a database
+
+Select your database, and follow the instructions to configure it for ScalarDB.
+
+For a list of databases that ScalarDB supports, see [Databases](requirements.mdx#databases).
+
+
+
+ Run MySQL locally
+
+ You can run MySQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start MySQL, run the following command:
+
+ ```console
+ docker compose up -d mysql
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for MySQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For MySQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:mysql://localhost:3306/
+ scalar.db.username=root
+ scalar.db.password=mysql
+ ```
+
+
+ Run PostgreSQL locally
+
+ You can run PostgreSQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start PostgreSQL, run the following command:
+
+ ```console
+ docker compose up -d postgres
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for PostgreSQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For PostgreSQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:postgresql://localhost:5432/
+ scalar.db.username=postgres
+ scalar.db.password=postgres
+ ```
+
+
+ Run Oracle Database locally
+
+ You can run Oracle Database in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Oracle Database, run the following command:
+
+ ```console
+ docker compose up -d oracle
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Oracle Database in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Oracle
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:oracle:thin:@//localhost:1521/FREEPDB1
+ scalar.db.username=SYSTEM
+ scalar.db.password=Oracle
+ ```
+
+
+ Run SQL Server locally
+
+ You can run SQL Server in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start SQL Server, run the following command:
+
+ ```console
+ docker compose up -d sqlserver
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for SQL Server in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For SQL Server
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:sqlserver://localhost:1433;encrypt=true;trustServerCertificate=true
+ scalar.db.username=sa
+ scalar.db.password=SqlServer22
+ ```
+
+
+ Run Db2 locally
+
+ You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start IBM Db2, run the following command:
+
+ ```console
+ docker compose up -d db2
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Db2 in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Db2
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:db2://localhost:50000/sample
+ scalar.db.username=db2inst1
+ scalar.db.password=db2inst1
+ ```
+
+
+ Run Amazon DynamoDB Local
+
+ You can run Amazon DynamoDB Local in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Amazon DynamoDB Local, run the following command:
+
+ ```console
+ docker compose up -d dynamodb
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Amazon DynamoDB Local in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For DynamoDB Local
+ scalar.db.storage=dynamo
+ scalar.db.contact_points=sample
+ scalar.db.username=sample
+ scalar.db.password=sample
+ scalar.db.dynamo.endpoint_override=http://localhost:8000
+ ```
+
+
+ To use Azure Cosmos DB for NoSQL, you must have an Azure account. If you don't have an Azure account, visit [Create an Azure Cosmos DB account](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-portal#create-account).
+
+ Configure Cosmos DB for NoSQL
+
+ Set the **default consistency level** to **Strong** according to the official document at [Configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level).
+
+ Configure ScalarDB
+
+ The following instructions assume that you have properly installed and configured the JDK in your local environment and properly configured your Cosmos DB for NoSQL account in Azure.
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Be sure to change the values for `scalar.db.contact_points` and `scalar.db.password` as described.
+
+ ```properties
+ # For Cosmos DB
+ scalar.db.storage=cosmos
+    scalar.db.contact_points=<COSMOS_DB_FOR_NOSQL_URI>
+    scalar.db.password=<COSMOS_DB_FOR_NOSQL_KEY>
+ ```
+
+:::note
+
+You can use the primary key or the secondary key in your Azure Cosmos DB account as the value for `scalar.db.password`.
+
+:::
+
+
+ Run Cassandra locally
+
+ You can run Apache Cassandra in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Apache Cassandra, run the following command:
+ ```console
+ docker compose up -d cassandra
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Cassandra in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Cassandra
+ scalar.db.storage=cassandra
+ scalar.db.contact_points=localhost
+ scalar.db.username=cassandra
+ scalar.db.password=cassandra
+ ```
+
+
+
+For a comprehensive list of configurations for ScalarDB, see [ScalarDB Configurations](configurations.mdx).
+
+## Configure ScalarDB to run non-transactional storage operations
+
+To run non-transactional storage operations, you need to set the `scalar.db.transaction_manager` property to `single-crud-operation` in the **database.properties** configuration file:
+
+```properties
+scalar.db.transaction_manager=single-crud-operation
+```
+
+## Create or import a schema
+
+ScalarDB has its own data model and schema that maps to the implementation-specific data model and schema.
+
+- **Need to create a database schema?** See [ScalarDB Schema Loader](schema-loader.mdx).
+- **Need to import an existing database?** See [Importing Existing Tables to ScalarDB by Using ScalarDB Schema Loader](schema-loader-import.mdx).
+
+## Create your Java application
+
+This section describes how to add the ScalarDB Core library to your project and how to configure it to run non-transactional storage operations by using Java.
+
+### Add ScalarDB to your project
+
+The ScalarDB library is available on the [Maven Central Repository](https://mvnrepository.com/artifact/com.scalar-labs/scalardb). You can add the library as a build dependency to your application by using Gradle or Maven.
+
+Select your build tool, and follow the instructions to add the build dependency for ScalarDB to your application.
+
+
+
+ To add the build dependency for ScalarDB by using Gradle, add the following to `build.gradle` in your application:
+
+ ```gradle
+ dependencies {
+ implementation 'com.scalar-labs:scalardb:3.16.0'
+ }
+ ```
+
+
+ To add the build dependency for ScalarDB by using Maven, add the following to `pom.xml` in your application:
+
+ ```xml
+
+ com.scalar-labs
+ scalardb
+ 3.16.0
+
+ ```
+
+
+
+### Use the Java API
+
+For details about the Java API, see [ScalarDB Java API Guide](api-guide.mdx).
+
+:::note
+
+The following limitations apply to non-transactional storage operations:
+
+- Beginning a transaction is not supported. For more details, see [Execute transactions without beginning or starting a transaction](api-guide.mdx#execute-transactions-without-beginning-or-starting-a-transaction).
+- Executing multiple mutations in a single transaction is not supported.
+
+:::
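+
+For example, the following is a minimal sketch of running a single CRUD operation through the Core library with the `single-crud-operation` transaction manager. It assumes that the configuration above is saved as `database.properties` and that a table `ns.tbl` with an INT partition key `c1` has already been created by using Schema Loader:
+
+```java
+// Load the configuration that sets scalar.db.transaction_manager=single-crud-operation.
+TransactionFactory factory = TransactionFactory.create("database.properties");
+DistributedTransactionManager manager = factory.getTransactionManager();
+
+// Execute a single CRUD operation directly, without beginning a transaction.
+Get get =
+    Get.newBuilder()
+        .namespace("ns")
+        .table("tbl")
+        .partitionKey(Key.ofInt("c1", 10))
+        .build();
+Optional<Result> result = manager.get(get);
+
+manager.close();
+```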
+
+### Learn more
+
+- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.16.0/index.html)
diff --git a/versioned_docs/version-3.X/run-non-transactional-storage-operations-through-primitive-crud-interface.mdx b/versioned_docs/version-3.X/run-non-transactional-storage-operations-through-primitive-crud-interface.mdx
new file mode 100644
index 00000000..21d732d9
--- /dev/null
+++ b/versioned_docs/version-3.X/run-non-transactional-storage-operations-through-primitive-crud-interface.mdx
@@ -0,0 +1,860 @@
+---
+tags:
+ - Community
+displayed_sidebar: docsEnglish
+---
+
+# Run Non-Transactional Storage Operations Through the Primitive CRUD Interface
+
+This page explains how to run non-transactional storage operations through the primitive CRUD interface, also known as the Storage API. This guide assumes that you have an advanced understanding of ScalarDB.
+
+One of the keys to achieving storage-agnostic or database-agnostic ACID transactions on top of existing storage and database systems is the storage abstraction capabilities that ScalarDB provides. Storage abstraction defines a [data model](design.mdx#data-model) and the APIs (Storage API) that issue operations on the basis of the data model.
+
+Although you will likely use the [Transactional API](api-guide.mdx#transactional-api) in most cases, another option is to use the Storage API.
+
+The benefits of using the Storage API include the following:
+
+- As with the Transactional API, you can write your application code without worrying too much about the underlying storage implementation.
+- If you don't need transactions for some of the data in your application, you can use the Storage API to partially avoid transactions, which results in faster execution.
+
+:::warning
+
+Directly using the Storage API or mixing the Transactional API and the Storage API could cause unexpected behavior. For example, since the Storage API cannot provide transaction capability, the API could cause anomalies or data inconsistency if failures occur when executing operations.
+
+Therefore, you should be *very* careful about using the Storage API and use it only if you know exactly what you are doing.
+
+:::
+
+## Storage API Example
+
+This section explains how the Storage API can be used in a basic electronic money application.
+
+:::warning
+
+The electronic money application is simplified for this example and isn’t suitable for a production environment.
+
+:::
+
+### ScalarDB configuration
+
+Before you begin, you should configure ScalarDB in the same way mentioned in [Getting Started with ScalarDB](getting-started-with-scalardb.mdx).
+
+With that in mind, this Storage API example assumes that the configuration file `scalardb.properties` exists.
+
+### Set up the database schema
+
+You need to define the database schema (the method in which the data will be organized) in the application. For details about the supported data types, see [Data type mapping between ScalarDB and other databases](https://scalardb.scalar-labs.com/docs/latest/schema-loader/#data-type-mapping-between-scalardb-and-the-other-databases).
+
+For this example, create a file named `emoney-storage.json` in the `scalardb/docs/getting-started` directory. Then, add the following JSON code to define the schema.
+
+:::note
+
+In the following JSON, the `transaction` field is set to `false`, which indicates that you should use this table with the Storage API.
+
+:::
+
+```json
+{
+ "emoney.account": {
+ "transaction": false,
+ "partition-key": [
+ "id"
+ ],
+ "clustering-key": [],
+ "columns": {
+ "id": "TEXT",
+ "balance": "INT"
+ }
+ }
+}
+```
+
+To apply the schema, go to the [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases) page and download the ScalarDB Schema Loader that matches the version of ScalarDB that you are using to the `getting-started` folder.
+
+Then, run the following command, replacing `<VERSION>` with the version of the ScalarDB Schema Loader that you downloaded:
+
+```console
+java -jar scalardb-schema-loader-<VERSION>.jar --config scalardb.properties -f emoney-storage.json
+```
+
+### Example code
+
+The following is example source code for the electronic money application that uses the Storage API.
+
+:::warning
+
+As previously mentioned, since the Storage API cannot provide transaction capability, the API could cause anomalies or data inconsistency if failures occur when executing operations. Therefore, you should be *very* careful about using the Storage API and use it only if you know exactly what you are doing.
+
+:::
+
+```java
+import com.scalar.db.api.DistributedStorage;
+import com.scalar.db.api.Get;
+import com.scalar.db.api.Put;
+import com.scalar.db.api.Result;
+import com.scalar.db.exception.storage.ExecutionException;
+import com.scalar.db.io.Key;
+import com.scalar.db.service.StorageFactory;
+import java.io.File;
+import java.io.IOException;
+import java.util.Optional;
+
+public class ElectronicMoney {
+
+ private static final String SCALARDB_PROPERTIES =
+ System.getProperty("user.dir") + File.separator + "scalardb.properties";
+ private static final String NAMESPACE = "emoney";
+ private static final String TABLENAME = "account";
+ private static final String ID = "id";
+ private static final String BALANCE = "balance";
+
+ private final DistributedStorage storage;
+
+ public ElectronicMoney() throws IOException {
+ StorageFactory factory = StorageFactory.create(SCALARDB_PROPERTIES);
+ storage = factory.getStorage();
+ }
+
+ public void charge(String id, int amount) throws ExecutionException {
+ // Retrieve the current balance for id
+ Get get =
+ Get.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, id))
+ .build();
+    Optional<Result> result = storage.get(get);
+
+ // Calculate the balance
+ int balance = amount;
+ if (result.isPresent()) {
+ int current = result.get().getInt(BALANCE);
+ balance += current;
+ }
+
+ // Update the balance
+ Put put =
+ Put.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, id))
+ .intValue(BALANCE, balance)
+ .build();
+ storage.put(put);
+ }
+
+ public void pay(String fromId, String toId, int amount) throws ExecutionException {
+ // Retrieve the current balances for ids
+ Get fromGet =
+ Get.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, fromId))
+ .build();
+ Get toGet =
+ Get.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, toId))
+ .build();
+    Optional<Result> fromResult = storage.get(fromGet);
+    Optional<Result> toResult = storage.get(toGet);
+
+ // Calculate the balances (it assumes that both accounts exist)
+ int newFromBalance = fromResult.get().getInt(BALANCE) - amount;
+ int newToBalance = toResult.get().getInt(BALANCE) + amount;
+ if (newFromBalance < 0) {
+ throw new RuntimeException(fromId + " doesn't have enough balance.");
+ }
+
+ // Update the balances
+ Put fromPut =
+ Put.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, fromId))
+ .intValue(BALANCE, newFromBalance)
+ .build();
+ Put toPut =
+ Put.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, toId))
+ .intValue(BALANCE, newToBalance)
+ .build();
+ storage.put(fromPut);
+ storage.put(toPut);
+ }
+
+ public int getBalance(String id) throws ExecutionException {
+ // Retrieve the current balances for id
+ Get get =
+ Get.newBuilder()
+ .namespace(NAMESPACE)
+ .table(TABLENAME)
+ .partitionKey(Key.ofText(ID, id))
+ .build();
+    Optional<Result> result = storage.get(get);
+
+ int balance = -1;
+ if (result.isPresent()) {
+ balance = result.get().getInt(BALANCE);
+ }
+ return balance;
+ }
+
+ public void close() {
+ storage.close();
+ }
+}
+```
+
+## Storage API guide
+
+The Storage API is composed of the Administrative API and CRUD API.
+
+### Administrative API
+
+You can execute administrative operations programmatically as described in this section.
+
+:::note
+
+Another method that you could use to execute administrative operations is by using [Schema Loader](schema-loader.mdx).
+
+:::
+
+#### Get a `DistributedStorageAdmin` instance
+
+To execute administrative operations, you first need to get a `DistributedStorageAdmin` instance. You can obtain the `DistributedStorageAdmin` instance from `StorageFactory` as follows:
+
+```java
+StorageFactory storageFactory = StorageFactory.create("<CONFIGURATION_FILE_PATH>");
+DistributedStorageAdmin admin = storageFactory.getStorageAdmin();
+```
+
+For details about configurations, see [ScalarDB Configurations](configurations.mdx).
+
+After you have executed all administrative operations, you should close the `DistributedStorageAdmin` instance as follows:
+
+```java
+admin.close();
+```
+
+#### Create a namespace
+
+You must create a namespace before creating tables because a table belongs to one namespace.
+
+You can create a namespace as follows:
+
+```java
+// Create the namespace "ns". If the namespace already exists, an exception will be thrown.
+admin.createNamespace("ns");
+
+// Create the namespace only if it does not already exist.
+boolean ifNotExists = true;
+admin.createNamespace("ns", ifNotExists);
+
+// Create the namespace with options.
+Map<String, String> options = ...;
+admin.createNamespace("ns", options);
+```
+
+For details about creation options, see [Creation options](api-guide.mdx#creation-options).
+
+#### Create a table
+
+When creating a table, you should define the table metadata and then create the table.
+
+To define the table metadata, you can use `TableMetadata`. The following shows how to define the columns, partition key, clustering key including clustering orders, and secondary indexes of a table:
+
+```java
+// Define the table metadata.
+TableMetadata tableMetadata =
+ TableMetadata.newBuilder()
+ .addColumn("c1", DataType.INT)
+ .addColumn("c2", DataType.TEXT)
+ .addColumn("c3", DataType.BIGINT)
+ .addColumn("c4", DataType.FLOAT)
+ .addColumn("c5", DataType.DOUBLE)
+ .addPartitionKey("c1")
+ .addClusteringKey("c2", Scan.Ordering.Order.DESC)
+ .addClusteringKey("c3", Scan.Ordering.Order.ASC)
+ .addSecondaryIndex("c4")
+ .build();
+```
+
+For details about the data model of ScalarDB, see [Data Model](design.mdx#data-model).
+
+Then, create a table as follows:
+
+```java
+// Create the table "ns.tbl". If the table already exists, an exception will be thrown.
+admin.createTable("ns", "tbl", tableMetadata);
+
+// Create the table only if it does not already exist.
+boolean ifNotExists = true;
+admin.createTable("ns", "tbl", tableMetadata, ifNotExists);
+
+// Create the table with options.
+Map<String, String> options = ...;
+admin.createTable("ns", "tbl", tableMetadata, options);
+```
+
+#### Create a secondary index
+
+You can create a secondary index as follows:
+
+```java
+// Create a secondary index on column "c5" for table "ns.tbl". If a secondary index already exists, an exception will be thrown.
+admin.createIndex("ns", "tbl", "c5");
+
+// Create the secondary index only if it does not already exist.
+boolean ifNotExists = true;
+admin.createIndex("ns", "tbl", "c5", ifNotExists);
+
+// Create the secondary index with options.
+Map<String, String> options = ...;
+admin.createIndex("ns", "tbl", "c5", options);
+```
+
+#### Add a new column to a table
+
+You can add a new, non-partition key column to a table as follows:
+
+```java
+// Add a new column "c6" with the INT data type to the table "ns.tbl".
+admin.addNewColumnToTable("ns", "tbl", "c6", DataType.INT)
+```
+
+:::warning
+
+You should carefully consider adding a new column to a table because the execution time may vary greatly depending on the underlying storage. Please plan accordingly and consider the following, especially if the database runs in production:
+
+- **For Cosmos DB for NoSQL and DynamoDB:** Adding a column is almost instantaneous as the table schema is not modified. Only the table metadata stored in a separate table is updated.
+- **For Cassandra:** Adding a column will only update the schema metadata and will not modify the existing schema records. The cluster topology is the main factor for the execution time. Changes to the schema metadata are propagated to each cluster node via the gossip protocol. Because of this, the larger the cluster, the longer it will take for all nodes to be updated.
+- **For relational databases (MySQL, Oracle, etc.):** Adding a column shouldn't take a long time to execute.
+
+:::
+
+#### Truncate a table
+
+You can truncate a table as follows:
+
+```java
+// Truncate the table "ns.tbl".
+admin.truncateTable("ns", "tbl");
+```
+
+#### Drop a secondary index
+
+You can drop a secondary index as follows:
+
+```java
+// Drop the secondary index on column "c5" from table "ns.tbl". If the secondary index does not exist, an exception will be thrown.
+admin.dropIndex("ns", "tbl", "c5");
+
+// Drop the secondary index only if it exists.
+boolean ifExists = true;
+admin.dropIndex("ns", "tbl", "c5", ifExists);
+```
+
+#### Drop a table
+
+You can drop a table as follows:
+
+```java
+// Drop the table "ns.tbl". If the table does not exist, an exception will be thrown.
+admin.dropTable("ns", "tbl");
+
+// Drop the table only if it exists.
+boolean ifExists = true;
+admin.dropTable("ns", "tbl", ifExists);
+```
+
+#### Drop a namespace
+
+You can drop a namespace as follows:
+
+```java
+// Drop the namespace "ns". If the namespace does not exist, an exception will be thrown.
+admin.dropNamespace("ns");
+
+// Drop the namespace only if it exists.
+boolean ifExists = true;
+admin.dropNamespace("ns", ifExists);
+```
+
+#### Get existing namespaces
+
+You can get the existing namespaces as follows:
+
+```java
+Set<String> namespaces = admin.getNamespaceNames();
+```
+
+:::note
+
+This method extracts the namespace names of user tables dynamically. As a result, only namespaces that contain tables are returned. Starting from ScalarDB 4.0, we plan to improve the design to remove this limitation.
+
+:::
+
+#### Get the tables of a namespace
+
+You can get the tables of a namespace as follows:
+
+```java
+// Get the tables of the namespace "ns".
+Set<String> tables = admin.getNamespaceTableNames("ns");
+```
+
+#### Get table metadata
+
+You can get table metadata as follows:
+
+```java
+// Get the table metadata for "ns.tbl".
+TableMetadata tableMetadata = admin.getTableMetadata("ns", "tbl");
+```
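+
+For example, the following is a minimal sketch of inspecting the returned metadata, reusing the `tableMetadata` object from above:
+
+```java
+// Print each column name and its data type.
+for (String columnName : tableMetadata.getColumnNames()) {
+  DataType dataType = tableMetadata.getColumnDataType(columnName);
+  System.out.println(columnName + ": " + dataType);
+}
+```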
+
+#### Repair a table
+
+You can repair the table metadata of an existing table as follows:
+
+```java
+// Repair the table "ns.tbl" with options.
+TableMetadata tableMetadata =
+ TableMetadata.newBuilder()
+ ...
+ .build();
+Map<String, String> options = ...;
+admin.repairTable("ns", "tbl", tableMetadata, options);
+```
+
+### Implement CRUD operations
+
+The following sections describe CRUD operations.
+
+#### Get a `DistributedStorage` instance
+
+To execute CRUD operations in the Storage API, you need to get a `DistributedStorage` instance.
+
+You can get an instance as follows:
+
+```java
+StorageFactory storageFactory = StorageFactory.create("<CONFIGURATION_FILE_PATH>");
+DistributedStorage storage = storageFactory.getStorage();
+```
+
+After you have executed all CRUD operations, you should close the `DistributedStorage` instance as follows:
+
+```java
+storage.close();
+```
+
+#### `Get` operation
+
+`Get` is an operation to retrieve a single record specified by a primary key.
+
+You need to create a `Get` object first, and then you can execute the object by using the `storage.get()` method as follows:
+
+```java
+// Create a `Get` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .projections("c1", "c2", "c3", "c4")
+ .build();
+
+// Execute the `Get` operation.
+Optional<Result> result = storage.get(get);
+```
+
+You can also specify projections to choose which columns are returned.
+
+For details about how to construct `Key` objects, see [Key construction](api-guide.mdx#key-construction). And, for details about how to handle `Result` objects, see [Handle Result objects](api-guide.mdx#handle-result-objects).
+
+##### Specify a consistency level
+
+You can specify a consistency level in each operation (`Get`, `Scan`, `Put`, and `Delete`) in the Storage API as follows:
+
+```java
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .consistency(Consistency.LINEARIZABLE) // Consistency level
+ .build();
+```
+
+The following table describes the three consistency levels:
+
+| Consistency level | Description |
+| ------------------- | ----------- |
+| `SEQUENTIAL` | Sequential consistency assumes that the underlying storage implementation makes all operations appear to take effect in some sequential order and the operations of each individual process appear in this sequence. |
+| `EVENTUAL` | Eventual consistency assumes that the underlying storage implementation makes all operations take effect eventually. |
+| `LINEARIZABLE` | Linearizable consistency assumes that the underlying storage implementation makes each operation appear to take effect atomically at some point between its invocation and completion. |
+
+##### Execute `Get` by using a secondary index
+
+You can execute a `Get` operation by using a secondary index.
+
+Instead of specifying a partition key, you can specify an index key (indexed column) to use a secondary index as follows:
+
+```java
+// Create a `Get` operation by using a secondary index.
+Key indexKey = Key.ofFloat("c4", 1.23F);
+
+Get get =
+ Get.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .indexKey(indexKey)
+ .projections("c1", "c2", "c3", "c4")
+ .build();
+
+// Execute the `Get` operation.
+Optional<Result> result = storage.get(get);
+```
+
+:::note
+
+If the result has more than one record, `storage.get()` will throw an exception.
+
+:::
+
+#### `Scan` operation
+
+`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations.
+
+You need to create a `Scan` object first, and then you can execute the object by using the `storage.scan()` method as follows:
+
+```java
+// Create a `Scan` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key startClusteringKey = Key.of("c2", "aaa", "c3", 100L);
+Key endClusteringKey = Key.of("c2", "aaa", "c3", 300L);
+
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .start(startClusteringKey, true) // Include startClusteringKey
+ .end(endClusteringKey, false) // Exclude endClusteringKey
+ .projections("c1", "c2", "c3", "c4")
+ .orderings(Scan.Ordering.desc("c2"), Scan.Ordering.asc("c3"))
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation.
+Scanner scanner = storage.scan(scan);
+```
+
+You can omit the clustering-key boundaries or specify either a `start` boundary or an `end` boundary. If you don't specify `orderings`, you will get results ordered by the clustering order that you defined when creating the table.
+
+In addition, you can specify `projections` to choose which columns are returned and use `limit` to specify the number of records to return in `Scan` operations.
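+
+For example, the following is a minimal sketch of a `Scan` operation that specifies only a `start` boundary and relies on the default clustering order, reusing the keys defined above:
+
+```java
+// Scan from startClusteringKey (inclusive by default) to the end of the partition,
+// ordered by the clustering order defined when the table was created.
+Scan scanWithStartOnly =
+    Scan.newBuilder()
+        .namespace("ns")
+        .table("tbl")
+        .partitionKey(partitionKey)
+        .start(startClusteringKey)
+        .build();
+
+Scanner scanner = storage.scan(scanWithStartOnly);
+```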
+
+##### Handle `Scanner` objects
+
+A `Scan` operation in the Storage API returns a `Scanner` object.
+
+If you want to get results one by one from the `Scanner` object, you can use the `one()` method as follows:
+
+```java
+Optional<Result> result = scanner.one();
+```
+
+Or, if you want to get a list of all results, you can use the `all()` method as follows:
+
+```java
+List<Result> results = scanner.all();
+```
+
+In addition, since `Scanner` implements `Iterable<Result>`, you can use `Scanner` in a for-each loop as follows:
+
+```java
+for (Result result : scanner) {
+ ...
+}
+```
+
+Remember to close the `Scanner` object after getting the results:
+
+```java
+scanner.close();
+```
+
+Or you can use `try`-with-resources as follows:
+
+```java
+try (Scanner scanner = storage.scan(scan)) {
+ ...
+}
+```
+
+##### Execute `Scan` by using a secondary index
+
+You can execute a `Scan` operation by using a secondary index.
+
+Instead of specifying a partition key, you can specify an index key (indexed column) to use a secondary index as follows:
+
+```java
+// Create a `Scan` operation by using a secondary index.
+Key indexKey = Key.ofFloat("c4", 1.23F);
+
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .indexKey(indexKey)
+ .projections("c1", "c2", "c3", "c4")
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation.
+Scanner scanner = storage.scan(scan);
+```
+
+:::note
+
+You can't specify clustering-key boundaries and orderings in `Scan` by using a secondary index.
+
+:::
+
+##### Execute `Scan` without specifying a partition key to retrieve all the records of a table
+
+You can execute a `Scan` operation without specifying a partition key.
+
+Instead of calling the `partitionKey()` method in the builder, you can call the `all()` method to scan a table without specifying a partition key as follows:
+
+```java
+// Create a `Scan` operation without specifying a partition key.
+
+Scan scan =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .all()
+ .projections("c1", "c2", "c3", "c4")
+ .limit(10)
+ .build();
+
+// Execute the `Scan` operation.
+Scanner scanner = storage.scan(scan);
+```
+
+:::note
+
+You can't specify clustering-key boundaries and orderings in `Scan` without specifying a partition key.
+
+:::
+
+#### `Put` operation
+
+`Put` is an operation to put a record specified by a primary key. The operation behaves as an upsert operation for a record, in which the operation updates the record if the record exists or inserts the record if the record does not exist.
+
+You need to create a `Put` object first, and then you can execute the object by using the `storage.put()` method as follows:
+
+```java
+// Create a `Put` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+// Execute the `Put` operation.
+storage.put(put);
+```
+
+You can also put a record with `null` values as follows:
+
+```java
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", null)
+ .doubleValue("c5", null)
+ .build();
+```
+
+:::note
+
+If you specify `enableImplicitPreRead()`, `disableImplicitPreRead()`, or `implicitPreReadEnabled()` in the `Put` operation builder, they will be ignored.
+
+
+:::
+
+#### `Delete` operation
+
+`Delete` is an operation to delete a record specified by a primary key.
+
+You need to create a `Delete` object first, and then you can execute the object by using the `storage.delete()` method as follows:
+
+```java
+// Create a `Delete` operation.
+Key partitionKey = Key.ofInt("c1", 10);
+Key clusteringKey = Key.of("c2", "aaa", "c3", 100L);
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .build();
+
+// Execute the `Delete` operation.
+storage.delete(delete);
+```
+
+#### `Put` and `Delete` with a condition
+
+You can require an operation to meet arbitrary conditions before it is executed (for example, a bank account balance must be equal to or greater than zero) by implementing logic that checks the conditions yourself. Alternatively, you can write simple conditions directly in a mutation operation, such as `Put` or `Delete`.
+
+When a `Put` or `Delete` operation includes a condition, the operation is executed only if the specified condition is met. If the condition is not met when the operation is executed, an exception called `NoMutationException` will be thrown.
+
+##### Conditions for `Put`
+
+In a `Put` operation in the Storage API, you can specify a condition that causes the `Put` operation to be executed only when the specified condition matches. This operation is like a compare-and-swap operation where the condition is compared and the update is performed atomically.
+
+You can specify a condition in a `Put` operation as follows:
+
+```java
+// Build a condition.
+MutationCondition condition =
+ ConditionBuilder.putIf(ConditionBuilder.column("c4").isEqualToFloat(0.0F))
+ .and(ConditionBuilder.column("c5").isEqualToDouble(0.0))
+ .build();
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .condition(condition) // condition
+ .build();
+```
+
+Other than the `putIf` condition, you can specify the `putIfExists` and `putIfNotExists` conditions as follows:
+
+```java
+// Build a `putIfExists` condition.
+MutationCondition putIfExistsCondition = ConditionBuilder.putIfExists();
+
+// Build a `putIfNotExists` condition.
+MutationCondition putIfNotExistsCondition = ConditionBuilder.putIfNotExists();
+```
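+
+If the condition is not met when the operation is executed, `NoMutationException` will be thrown, as mentioned above. The following is a minimal sketch of handling that case for the conditional `Put` operation built earlier:
+
+```java
+try {
+  // Execute the conditional `Put` operation.
+  storage.put(put);
+} catch (NoMutationException e) {
+  // The condition was not met, so the record was not mutated.
+  // Handle the conflict here, for example, by re-reading the record and retrying.
+}
+```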
+
+##### Conditions for `Delete`
+
+Similar to a `Put` operation, you can specify a condition in a `Delete` operation in the Storage API.
+
+You can specify a condition in a `Delete` operation as follows:
+
+```java
+// Build a condition.
+MutationCondition condition =
+ ConditionBuilder.deleteIf(ConditionBuilder.column("c4").isEqualToFloat(0.0F))
+ .and(ConditionBuilder.column("c5").isEqualToDouble(0.0))
+ .build();
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKey)
+ .condition(condition) // condition
+ .build();
+```
+
+In addition to using the `deleteIf` condition, you can specify the `deleteIfExists` condition as follows:
+
+```java
+// Build a `deleteIfExists` condition.
+MutationCondition deleteIfExistsCondition = ConditionBuilder.deleteIfExists();
+```
+
+#### Mutate operation
+
+Mutate is an operation to execute multiple mutations (`Put` and `Delete` operations) in a single partition.
+
+You need to create mutation objects first, and then you can execute the objects by using the `storage.mutate()` method as follows:
+
+```java
+// Create `Put` and `Delete` operations.
+Key partitionKey = Key.ofInt("c1", 10);
+
+Key clusteringKeyForPut = Key.of("c2", "aaa", "c3", 100L);
+
+Put put =
+ Put.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKeyForPut)
+ .floatValue("c4", 1.23F)
+ .doubleValue("c5", 4.56)
+ .build();
+
+Key clusteringKeyForDelete = Key.of("c2", "bbb", "c3", 200L);
+
+Delete delete =
+ Delete.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .partitionKey(partitionKey)
+ .clusteringKey(clusteringKeyForDelete)
+ .build();
+
+// Execute the operations.
+storage.mutate(Arrays.asList(put, delete));
+```
+
+:::note
+
+A Mutate operation only accepts mutations for a single partition; otherwise, an exception will be thrown.
+
+In addition, if you specify multiple conditions in a Mutate operation, the operation will be executed only when all the conditions match.
+
+:::
+
+#### Default namespace for CRUD operations
+
+A default namespace for all CRUD operations can be set by using a property in the ScalarDB configuration.
+
+```properties
+scalar.db.default_namespace_name=<NAMESPACE_NAME>
+```
+
+Any operation that does not specify a namespace will use the default namespace set in the configuration.
+
+```java
+// This operation will target the default namespace.
+Scan scanUsingDefaultNamespace =
+ Scan.newBuilder()
+ .table("tbl")
+ .all()
+ .build();
+// This operation will target the "ns" namespace.
+Scan scanUsingSpecifiedNamespace =
+ Scan.newBuilder()
+ .namespace("ns")
+ .table("tbl")
+ .all()
+ .build();
+```
diff --git a/versioned_docs/version-3.X/run-transactions-through-scalardb-core-library.mdx b/versioned_docs/version-3.X/run-transactions-through-scalardb-core-library.mdx
new file mode 100644
index 00000000..b448d61f
--- /dev/null
+++ b/versioned_docs/version-3.X/run-transactions-through-scalardb-core-library.mdx
@@ -0,0 +1,242 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Run Transactions Through the ScalarDB Core Library
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This guide explains how to configure your ScalarDB properties file and create schemas to run transactions through a one-phase or a two-phase commit interface by using the ScalarDB Core library.
+
+## Preparation
+
+For the purpose of this guide, you will set up a database and ScalarDB by using a sample in the ScalarDB samples repository.
+
+### Clone the ScalarDB samples repository
+
+Open **Terminal**, then clone the ScalarDB samples repository by running the following command:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-samples
+```
+
+Then, go to the directory that contains the necessary files by running the following command:
+
+```console
+cd scalardb-samples/scalardb-sample
+```
+
+## Set up a database
+
+Select your database, and follow the instructions to configure it for ScalarDB.
+
+For a list of databases that ScalarDB supports, see [Databases](requirements.mdx#databases).
+
+
+
+ Run MySQL locally
+
+ You can run MySQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start MySQL, run the following command:
+
+ ```console
+ docker compose up -d mysql
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for MySQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For MySQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:mysql://localhost:3306/
+ scalar.db.username=root
+ scalar.db.password=mysql
+ ```
+
+
+ Run PostgreSQL locally
+
+ You can run PostgreSQL in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start PostgreSQL, run the following command:
+
+ ```console
+ docker compose up -d postgres
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for PostgreSQL in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For PostgreSQL
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:postgresql://localhost:5432/
+ scalar.db.username=postgres
+ scalar.db.password=postgres
+ ```
+
+
+ Run Oracle Database locally
+
+ You can run Oracle Database in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Oracle Database, run the following command:
+
+ ```console
+ docker compose up -d oracle
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Oracle Database in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Oracle
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:oracle:thin:@//localhost:1521/FREEPDB1
+ scalar.db.username=SYSTEM
+ scalar.db.password=Oracle
+ ```
+
+
+ Run SQL Server locally
+
+ You can run SQL Server in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start SQL Server, run the following command:
+
+ ```console
+ docker compose up -d sqlserver
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for SQL Server in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For SQL Server
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:sqlserver://localhost:1433;encrypt=true;trustServerCertificate=true
+ scalar.db.username=sa
+ scalar.db.password=SqlServer22
+ ```
+
+
+ Run Db2 locally
+
+ You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start IBM Db2, run the following command:
+
+ ```console
+ docker compose up -d db2
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Db2 in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Db2
+ scalar.db.storage=jdbc
+ scalar.db.contact_points=jdbc:db2://localhost:50000/sample
+ scalar.db.username=db2inst1
+ scalar.db.password=db2inst1
+ ```
+
+
+ Run Amazon DynamoDB Local
+
+ You can run Amazon DynamoDB Local in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Amazon DynamoDB Local, run the following command:
+
+ ```console
+ docker compose up -d dynamodb
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Amazon DynamoDB Local in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For DynamoDB Local
+ scalar.db.storage=dynamo
+ scalar.db.contact_points=sample
+ scalar.db.username=sample
+ scalar.db.password=sample
+ scalar.db.dynamo.endpoint_override=http://localhost:8000
+ ```
+
+
+ To use Azure Cosmos DB for NoSQL, you must have an Azure account. If you don't have an Azure account, visit [Create an Azure Cosmos DB account](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-portal#create-account).
+
+ Configure Cosmos DB for NoSQL
+
+ Set the **default consistency level** to **Strong** according to the official document at [Configure the default consistency level](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency#configure-the-default-consistency-level).
+
+ Configure ScalarDB
+
+ The following instructions assume that you have properly installed and configured the JDK in your local environment and properly configured your Cosmos DB for NoSQL account in Azure.
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Be sure to change the values for `scalar.db.contact_points` and `scalar.db.password` as described.
+
+ ```properties
+ # For Cosmos DB
+ scalar.db.storage=cosmos
+    scalar.db.contact_points=<COSMOS_DB_FOR_NOSQL_URI>
+    scalar.db.password=<COSMOS_DB_FOR_NOSQL_KEY>
+ ```
+
+:::note
+
+You can use the primary key or the secondary key in your Azure Cosmos DB account as the value for `scalar.db.password`.
+
+:::
+
+
+ Run Cassandra locally
+
+ You can run Apache Cassandra in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
+
+ To start Apache Cassandra, run the following command:
+ ```console
+ docker compose up -d cassandra
+ ```
+
+ Configure ScalarDB
+
+ The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Cassandra in the **database.properties** file so that the configuration looks as follows:
+
+ ```properties
+ # For Cassandra
+ scalar.db.storage=cassandra
+ scalar.db.contact_points=localhost
+ scalar.db.username=cassandra
+ scalar.db.password=cassandra
+ ```
+
+
+
+For a comprehensive list of configurations for ScalarDB, see [ScalarDB Configurations](configurations.mdx).
+
+## Create or import a schema
+
+ScalarDB has its own data model and schema that maps to the implementation-specific data model and schema.
+
+- **Need to create a database schema?** See [ScalarDB Schema Loader](schema-loader.mdx).
+- **Need to import an existing database?** See [Importing Existing Tables to ScalarDB by Using ScalarDB Schema Loader](schema-loader-import.mdx).
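+
+For reference, the following is a minimal example of a schema file that you could apply with Schema Loader. The namespace, table, and column names are only illustrative, and `"transaction"` is set to `true` so that the table can be used through the Transactional API:
+
+```json
+{
+  "ns.tbl": {
+    "transaction": true,
+    "partition-key": ["c1"],
+    "clustering-key": [],
+    "columns": {
+      "c1": "INT",
+      "c2": "INT"
+    }
+  }
+}
+```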
+
+## Run transactions by using Java
+
+- **Want to run transactions by using a one-phase commit interface?** See the [ScalarDB Java API Guide](api-guide.mdx#transactional-api).
+- **Want to run transactions by using a two-phase commit interface?** See [Transactions with a Two-Phase Commit Interface](two-phase-commit-transactions.mdx).
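+
+As a quick reference, the following is a minimal sketch of a transaction that uses the one-phase commit interface. It assumes that the example schema above has been applied, so a table `ns.tbl` with an INT partition key `c1` and an INT column `c2` exists:
+
+```java
+// Load the configuration and get a transaction manager.
+TransactionFactory factory = TransactionFactory.create("database.properties");
+DistributedTransactionManager manager = factory.getTransactionManager();
+
+// Begin a transaction, read and update a record, and then commit.
+DistributedTransaction transaction = manager.begin();
+try {
+  // Read the record first so that the subsequent `Put` can update it.
+  Get get =
+      Get.newBuilder()
+          .namespace("ns")
+          .table("tbl")
+          .partitionKey(Key.ofInt("c1", 10))
+          .build();
+  Optional<Result> result = transaction.get(get);
+
+  // Update the record if it exists, or insert it if it does not.
+  Put put =
+      Put.newBuilder()
+          .namespace("ns")
+          .table("tbl")
+          .partitionKey(Key.ofInt("c1", 10))
+          .intValue("c2", 100)
+          .build();
+  transaction.put(put);
+
+  transaction.commit();
+} catch (Exception e) {
+  // Abort the transaction if anything goes wrong.
+  transaction.abort();
+  throw e;
+}
+```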
diff --git a/versioned_docs/version-3.X/scalar-licensing/index.mdx b/versioned_docs/version-3.X/scalar-licensing/index.mdx
new file mode 100644
index 00000000..d6a813a6
--- /dev/null
+++ b/versioned_docs/version-3.X/scalar-licensing/index.mdx
@@ -0,0 +1,65 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# How to Configure a Product License Key
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+To run Scalar products, you must create a `.properties` file and add your product license key and a certificate to the file. Copy one of the following configurations, based on the product you're using, into your `.properties` file, replacing `<YOUR_LICENSE_KEY>` with your license key.
+
+:::note
+
+If you don't have a license key, please [contact us](https://www.scalar-labs.com/contact).
+
+:::
+
+:::warning
+
+If you're using a trial license, ScalarDB must be connected to the Internet. An Internet connection is required to check if the trial license is valid and hasn't expired.
+
+:::
+
+## ScalarDB Enterprise Edition
+
+
+
+ ```properties
+    scalar.db.cluster.node.licensing.license_key=<YOUR_LICENSE_KEY>
+ scalar.db.cluster.node.licensing.license_check_cert_pem=-----BEGIN CERTIFICATE-----\nMIICKzCCAdKgAwIBAgIIBXxj3s8NU+owCgYIKoZIzj0EAwIwbDELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMSMwIQYDVQQDExplbnRlcnByaXNlLnNjYWxhci1sYWJzLmNv\nbTAeFw0yMzExMTYwNzExNTdaFw0yNDAyMTUxMzE2NTdaMGwxCzAJBgNVBAYTAkpQ\nMQ4wDAYDVQQIEwVUb2t5bzERMA8GA1UEBxMIU2hpbmp1a3UxFTATBgNVBAoTDFNj\nYWxhciwgSW5jLjEjMCEGA1UEAxMaZW50ZXJwcmlzZS5zY2FsYXItbGFicy5jb20w\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATJx5gvAr+GZAHcBpUvDFDsUlFo4GNw\npRfsntzwStIP8ac3dew7HT4KbGBWei0BvIthleaqpv0AEP7JT6eYAkNvo14wXDAO\nBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwG\nA1UdEwEB/wQCMAAwHQYDVR0OBBYEFMIe+XuuZcnDX1c3TmUPlu3kNv/wMAoGCCqG\nSM49BAMCA0cAMEQCIGGlqKpgv+KW+Z1ZkjfMHjSGeUZKBLwfMtErVyc9aTdIAiAy\nvsZyZP6Or9o40x3l3pw/BT7wvy93Jm0T4vtVQH6Zuw==\n-----END CERTIFICATE-----
+ ```
+
+
+ ```properties
+    scalar.db.cluster.node.licensing.license_key=<YOUR_LICENSE_KEY>
+ scalar.db.cluster.node.licensing.license_check_cert_pem=-----BEGIN CERTIFICATE-----\nMIICKzCCAdKgAwIBAgIIBXxj3s8NU+owCgYIKoZIzj0EAwIwbDELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMSMwIQYDVQQDExplbnRlcnByaXNlLnNjYWxhci1sYWJzLmNv\nbTAeFw0yMzExMTYwNzExNTdaFw0yNDAyMTUxMzE2NTdaMGwxCzAJBgNVBAYTAkpQ\nMQ4wDAYDVQQIEwVUb2t5bzERMA8GA1UEBxMIU2hpbmp1a3UxFTATBgNVBAoTDFNj\nYWxhciwgSW5jLjEjMCEGA1UEAxMaZW50ZXJwcmlzZS5zY2FsYXItbGFicy5jb20w\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATJx5gvAr+GZAHcBpUvDFDsUlFo4GNw\npRfsntzwStIP8ac3dew7HT4KbGBWei0BvIthleaqpv0AEP7JT6eYAkNvo14wXDAO\nBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwG\nA1UdEwEB/wQCMAAwHQYDVR0OBBYEFMIe+XuuZcnDX1c3TmUPlu3kNv/wMAoGCCqG\nSM49BAMCA0cAMEQCIGGlqKpgv+KW+Z1ZkjfMHjSGeUZKBLwfMtErVyc9aTdIAiAy\nvsZyZP6Or9o40x3l3pw/BT7wvy93Jm0T4vtVQH6Zuw==\n-----END CERTIFICATE-----
+ ```
+
+
+ ```properties
+    scalar.db.cluster.node.licensing.license_key=<YOUR_LICENSE_KEY>
+ scalar.db.cluster.node.licensing.license_check_cert_pem=-----BEGIN CERTIFICATE-----\nMIICIzCCAcigAwIBAgIIKT9LIGX1TJQwCgYIKoZIzj0EAwIwZzELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMR4wHAYDVQQDExV0cmlhbC5zY2FsYXItbGFicy5jb20wHhcN\nMjMxMTE2MDcxMDM5WhcNMjQwMjE1MTMxNTM5WjBnMQswCQYDVQQGEwJKUDEOMAwG\nA1UECBMFVG9reW8xETAPBgNVBAcTCFNoaW5qdWt1MRUwEwYDVQQKEwxTY2FsYXIs\nIEluYy4xHjAcBgNVBAMTFXRyaWFsLnNjYWxhci1sYWJzLmNvbTBZMBMGByqGSM49\nAgEGCCqGSM49AwEHA0IABBSkIYAk7r5FRDf5qRQ7dbD3ib5g3fb643h4hqCtK+lC\nwM4AUr+PPRoquAy+Ey2sWEvYrWtl2ZjiYyyiZw8slGCjXjBcMA4GA1UdDwEB/wQE\nAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw\nADAdBgNVHQ4EFgQUbFyOWFrsjkkOvjw6vK3gGUADGOcwCgYIKoZIzj0EAwIDSQAw\nRgIhAKwigOb74z9BdX1+dUpeVG8WrzLTIqdIU0w+9jhAueXoAiEA6cniJ3qsP4j7\nsck62kHnFpH1fCUOc/b/B8ZtfeXI2Iw=\n-----END CERTIFICATE-----
+ ```
+
+
+
+## ScalarDB Analytics with Spark
+
+
+
+ ```apacheconf
+    spark.sql.catalog.scalardb_catalog.license.key <YOUR_LICENSE_KEY>
+ spark.sql.catalog.scalardb_catalog.license.cert_pem -----BEGIN CERTIFICATE-----\nMIICKzCCAdKgAwIBAgIIBXxj3s8NU+owCgYIKoZIzj0EAwIwbDELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMSMwIQYDVQQDExplbnRlcnByaXNlLnNjYWxhci1sYWJzLmNv\nbTAeFw0yMzExMTYwNzExNTdaFw0yNDAyMTUxMzE2NTdaMGwxCzAJBgNVBAYTAkpQ\nMQ4wDAYDVQQIEwVUb2t5bzERMA8GA1UEBxMIU2hpbmp1a3UxFTATBgNVBAoTDFNj\nYWxhciwgSW5jLjEjMCEGA1UEAxMaZW50ZXJwcmlzZS5zY2FsYXItbGFicy5jb20w\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATJx5gvAr+GZAHcBpUvDFDsUlFo4GNw\npRfsntzwStIP8ac3dew7HT4KbGBWei0BvIthleaqpv0AEP7JT6eYAkNvo14wXDAO\nBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwG\nA1UdEwEB/wQCMAAwHQYDVR0OBBYEFMIe+XuuZcnDX1c3TmUPlu3kNv/wMAoGCCqG\nSM49BAMCA0cAMEQCIGGlqKpgv+KW+Z1ZkjfMHjSGeUZKBLwfMtErVyc9aTdIAiAy\nvsZyZP6Or9o40x3l3pw/BT7wvy93Jm0T4vtVQH6Zuw==\n-----END CERTIFICATE-----
+ ```
+
+
+ ```apacheconf
+    spark.sql.catalog.scalardb_catalog.license.key <YOUR_LICENSE_KEY>
+ spark.sql.catalog.scalardb_catalog.license.cert_pem -----BEGIN CERTIFICATE-----\nMIICIzCCAcigAwIBAgIIKT9LIGX1TJQwCgYIKoZIzj0EAwIwZzELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMR4wHAYDVQQDExV0cmlhbC5zY2FsYXItbGFicy5jb20wHhcN\nMjMxMTE2MDcxMDM5WhcNMjQwMjE1MTMxNTM5WjBnMQswCQYDVQQGEwJKUDEOMAwG\nA1UECBMFVG9reW8xETAPBgNVBAcTCFNoaW5qdWt1MRUwEwYDVQQKEwxTY2FsYXIs\nIEluYy4xHjAcBgNVBAMTFXRyaWFsLnNjYWxhci1sYWJzLmNvbTBZMBMGByqGSM49\nAgEGCCqGSM49AwEHA0IABBSkIYAk7r5FRDf5qRQ7dbD3ib5g3fb643h4hqCtK+lC\nwM4AUr+PPRoquAy+Ey2sWEvYrWtl2ZjiYyyiZw8slGCjXjBcMA4GA1UdDwEB/wQE\nAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw\nADAdBgNVHQ4EFgQUbFyOWFrsjkkOvjw6vK3gGUADGOcwCgYIKoZIzj0EAwIDSQAw\nRgIhAKwigOb74z9BdX1+dUpeVG8WrzLTIqdIU0w+9jhAueXoAiEA6cniJ3qsP4j7\nsck62kHnFpH1fCUOc/b/B8ZtfeXI2Iw=\n-----END CERTIFICATE-----
+ ```
+
+
diff --git a/versioned_docs/version-3.X/scalar-manager/images/backup-and-restore-check-pauses.png b/versioned_docs/version-3.X/scalar-manager/images/backup-and-restore-check-pauses.png
new file mode 100644
index 00000000..d4f63157
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/backup-and-restore-check-pauses.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/images/backup-and-restore-create-pauses.png b/versioned_docs/version-3.X/scalar-manager/images/backup-and-restore-create-pauses.png
new file mode 100644
index 00000000..927f1910
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/backup-and-restore-create-pauses.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/images/dashboard-cluster.png b/versioned_docs/version-3.X/scalar-manager/images/dashboard-cluster.png
new file mode 100644
index 00000000..cdc5c5ab
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/dashboard-cluster.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/images/dashboard-pod-list.png b/versioned_docs/version-3.X/scalar-manager/images/dashboard-pod-list.png
new file mode 100644
index 00000000..ed247f0c
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/dashboard-pod-list.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/images/logs.png b/versioned_docs/version-3.X/scalar-manager/images/logs.png
new file mode 100644
index 00000000..1127bd71
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/logs.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/images/metrics.png b/versioned_docs/version-3.X/scalar-manager/images/metrics.png
new file mode 100644
index 00000000..e4f4d116
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/metrics.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/images/metrics2.png b/versioned_docs/version-3.X/scalar-manager/images/metrics2.png
new file mode 100644
index 00000000..6f76551b
Binary files /dev/null and b/versioned_docs/version-3.X/scalar-manager/images/metrics2.png differ
diff --git a/versioned_docs/version-3.X/scalar-manager/overview.mdx b/versioned_docs/version-3.X/scalar-manager/overview.mdx
new file mode 100644
index 00000000..525b9306
--- /dev/null
+++ b/versioned_docs/version-3.X/scalar-manager/overview.mdx
@@ -0,0 +1,54 @@
+---
+tags:
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+# Scalar Manager Overview
+
+Scalar Manager is a centralized management and monitoring solution for ScalarDB within Kubernetes cluster environments.
+It simplifies the operational tasks associated with ScalarDB by aggregating essential functionalities into a graphical user interface (GUI).
+
+## Why Scalar Manager?
+
+Before Scalar Manager was released, you would need to use various command-line tools and third-party solutions individually to manage and monitor ScalarDB deployments.
+For example, `kubectl` is often used to check deployment status, the Prometheus stack for monitoring metrics, the Loki stack for log analysis, and Scalar's proprietary CLI tool for pausing ScalarDB to ensure transactional consistency between multiple databases.
+This constellation of tools presented a steep learning curve and lacked a unified interface, resulting in inefficient workflows for performing routine management tasks or troubleshooting issues.
+
+Scalar Manager mitigates these pain points by aggregating essential functionalities into a single, user-friendly GUI.
+With Scalar Manager, you can reduce the time and effort needed for management and monitoring, allowing you to focus on business development and operations.
+
+## Key features
+
+At its core, Scalar Manager provides the following features.
+
+### Centralized cluster visualization
+
+You can quickly view real-time information about cluster health, pod logs, hardware usage, and performance metrics like requests per second, and gain deep visibility into time-series data via the Grafana dashboards.
+
+
+
+
+With the Grafana dashboards, you can also view pod logs and metrics in real-time or in time series.
+
+
+
+
+### Streamlined pausing job management
+
+You can execute or schedule pausing jobs to ensure transactional consistency, review and manage scheduled jobs, and monitor paused states within an intuitive GUI.
+
+
+
+
+### User management
+
+Scalar Manager includes authentication capabilities, allowing for secure access control to your deployment. The system provides user management functionalities that enable administrators to create, modify, and remove user accounts through an intuitive interface.
+
+### Authentication and authorization
+
+By using the authorization feature, administrators can define and assign specific roles to users, controlling their access permissions within the Scalar Manager environment. This control ensures that users only have access to the functionalities relevant to their responsibilities.
+
+### Integrated authentication with Grafana
+
+Scalar Manager now offers seamless authentication integration between your Grafana instance and other components of the system. This single-sign-on capability eliminates the need for multiple authentication processes, streamlining the user experience and enhancing security by reducing credential management overhead.
diff --git a/versioned_docs/version-3.X/scalardb-analytics-postgresql/getting-started.mdx b/versioned_docs/version-3.X/scalardb-analytics-postgresql/getting-started.mdx
new file mode 100644
index 00000000..29e34ce5
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics-postgresql/getting-started.mdx
@@ -0,0 +1,98 @@
+---
+tags:
+ - Community
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with ScalarDB Analytics with PostgreSQL
+
+This document explains how to get started with ScalarDB Analytics with PostgreSQL. We assume that you have already installed ScalarDB Analytics with PostgreSQL and that all required services are running. If you don't have such an environment, please follow the instructions in [How to Install ScalarDB Analytics with PostgreSQL in Your Local Environment by Using Docker](./installation.mdx). Because ScalarDB Analytics with PostgreSQL executes queries via PostgreSQL, we also assume that you already have a `psql` client or another PostgreSQL client to send queries to PostgreSQL.
+
+## What is ScalarDB Analytics with PostgreSQL?
+
+ScalarDB, as a universal transaction manager, targets mainly transactional workloads and therefore supports only a limited subset of relational queries.
+
+ScalarDB Analytics with PostgreSQL extends the functionality of ScalarDB to process analytical queries on ScalarDB-managed data by using PostgreSQL and its foreign data wrapper (FDW) extension.
+
+ScalarDB Analytics with PostgreSQL mainly consists of two components: PostgreSQL and Schema Importer.
+
+PostgreSQL runs as a service that accepts and processes queries from users. FDW extensions are used to read data from the back-end storages that ScalarDB manages. Schema Importer is a tool that imports the schema of the ScalarDB database into PostgreSQL so that users can see tables on the PostgreSQL side that are identical to the tables on the ScalarDB side.
+
+## Set up a ScalarDB database
+
+First, you need one or more ScalarDB databases to run analytical queries with ScalarDB Analytics with PostgreSQL. If you have your own ScalarDB database, you can skip this section and use your database instead. If you use the [scalardb-samples/scalardb-analytics-postgresql-sample](https://github.com/scalar-labs/scalardb-samples/tree/main/scalardb-analytics-postgresql-sample) project, you can set up a sample database by running the following command in the project directory.
+
+```console
+docker compose run --rm schema-loader \
+ -c /etc/scalardb.properties \
+ --schema-file /etc/schema.json \
+ --coordinator \
+ --no-backup \
+ --no-scaling
+```
+
+This command sets up [multiple storage instances](../multi-storage-transactions.mdx) that consist of DynamoDB, PostgreSQL, and Cassandra. Then, by using [ScalarDB Schema Loader](https://scalardb.scalar-labs.com/docs/latest/schema-loader/), the command creates the namespaces `dynamons`, `postgresns`, and `cassandrans`, which are mapped to those storages, and creates the tables `dynamons.customer`, `postgresns.orders`, and `cassandrans.lineitem`.
+
+
+
+You can load sample data into the created tables by running the following command.
+
+```console
+docker compose run --rm sample-data-loader
+```
+
+## Import the schemas from ScalarDB into PostgreSQL
+
+Next, let's import the schemas of the ScalarDB databases into the PostgreSQL instance that processes analytical queries. ScalarDB Analytics with PostgreSQL provides a tool, Schema Importer, for this purpose. It sets up everything you need to run analytical queries.
+
+```console
+docker compose run --rm schema-importer \
+ import \
+ --config /etc/scalardb.properties \
+ --host analytics \
+ --port 5432 \
+ --database test \
+ --user postgres \
+ --password postgres \
+ --namespace cassandrans \
+ --namespace postgresns \
+ --namespace dynamons \
+ --config-on-postgres-host /etc/scalardb.properties
+```
+
+If you use your own ScalarDB database, you must replace the `--config` and `--config-on-postgres-host` options with your ScalarDB configuration file and the `--namespace` options with your ScalarDB namespaces to import.
+
+This creates tables (views, to be precise) with the same names as the tables in the ScalarDB databases. In this example, the tables `dynamons.customer`, `postgresns.orders`, and `cassandrans.lineitem` are created. The column definitions are also identical to those in the ScalarDB databases. These tables are backed by [foreign tables](https://www.postgresql.org/docs/current/sql-createforeigntable.html) that are connected to the underlying storage of the ScalarDB databases through FDW. Therefore, you can equate those tables in PostgreSQL with the tables in the ScalarDB databases.
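+
+If you want to confirm what Schema Importer created, you can list the objects in those namespaces from PostgreSQL. The following query is just one way to check and assumes the sample namespaces created above:
+
+```sql
+-- List the objects that were imported into the sample namespaces.
+SELECT table_schema, table_name, table_type
+FROM information_schema.tables
+WHERE table_schema IN ('dynamons', 'postgresns', 'cassandrans')
+ORDER BY table_schema, table_name;
+```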
+
+
+
+## Run analytical queries
+
+Now, you have all the tables needed to read the same data that is in the ScalarDB databases, and you can run arbitrary analytical queries supported by PostgreSQL. To run queries, connect to PostgreSQL with `psql` or another client.
+
+```console
+psql -U postgres -h localhost test
+Password for user postgres:
+
+> select c_mktsegment, count(*) from dynamons.customer group by c_mktsegment;
+ c_mktsegment | count
+--------------+-------
+ AUTOMOBILE | 4
+ BUILDING | 2
+ FURNITURE | 1
+ HOUSEHOLD | 2
+ MACHINERY | 1
+(5 rows)
+```
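+
+Because all of the imported tables live in the same PostgreSQL instance, you can also join tables that originate from different back-end storages. The following is a minimal sketch of such a cross-storage join. The join columns (`c_custkey` and `o_custkey`) are assumptions based on the TPC-H-style sample schema, so adjust them to match your actual schema:
+
+```sql
+-- Join the DynamoDB-backed customer table with the PostgreSQL-backed orders table.
+SELECT c.c_mktsegment, count(*) AS order_count
+FROM dynamons.customer AS c
+JOIN postgresns.orders AS o ON c.c_custkey = o.o_custkey
+GROUP BY c.c_mktsegment
+ORDER BY order_count DESC;
+```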
+
+For details about the sample data and additional practical work, see the sample application page.
+
+## Caveats
+
+### Isolation level
+
+ScalarDB Analytics with PostgreSQL reads data with the **Read Committed** isolation level set. This isolation level ensures that the data you read has been committed in the past but does not guarantee that you can read consistent data at a particular point in time.
+
+### Write operations are not supported
+
+ScalarDB Analytics with PostgreSQL only supports read-only queries. `INSERT`, `UPDATE`, and other write operations are not supported.
diff --git a/versioned_docs/version-3.X/scalardb-analytics-postgresql/images/imported-schema.png b/versioned_docs/version-3.X/scalardb-analytics-postgresql/images/imported-schema.png
new file mode 100644
index 00000000..1cf8fea3
Binary files /dev/null and b/versioned_docs/version-3.X/scalardb-analytics-postgresql/images/imported-schema.png differ
diff --git a/versioned_docs/version-3.X/scalardb-analytics-postgresql/images/multi-storage-overview.png b/versioned_docs/version-3.X/scalardb-analytics-postgresql/images/multi-storage-overview.png
new file mode 100644
index 00000000..fc8df1cb
Binary files /dev/null and b/versioned_docs/version-3.X/scalardb-analytics-postgresql/images/multi-storage-overview.png differ
diff --git a/versioned_docs/version-3.X/scalardb-analytics-postgresql/installation.mdx b/versioned_docs/version-3.X/scalardb-analytics-postgresql/installation.mdx
new file mode 100644
index 00000000..ca3e82bf
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics-postgresql/installation.mdx
@@ -0,0 +1,61 @@
+---
+tags:
+ - Community
+displayed_sidebar: docsEnglish
+---
+
+# How to Install ScalarDB Analytics with PostgreSQL in Your Local Environment by Using Docker
+
+This document explains how to set up a local environment that runs ScalarDB Analytics with PostgreSQL, using Cassandra, PostgreSQL, and a DynamoDB local server as the multi-storage back end, by using [Docker Compose](https://docs.docker.com/compose/).
+
+## Prerequisites
+
+- [Docker Engine](https://docs.docker.com/engine/) and [Docker Compose](https://docs.docker.com/compose/).
+
+Follow the instructions on the Docker website according to your platform.
+
+## Step 1. Clone the `scalardb-samples` repository
+
+The [scalardb-samples/scalardb-analytics-postgresql-sample](https://github.com/scalar-labs/scalardb-samples/tree/main/scalardb-analytics-postgresql-sample) repository is a project that contains a sample configuration for setting up ScalarDB Analytics with PostgreSQL.
+
+Determine the location on your local machine where you want to run the scalardb-analytics-postgresql-sample app. Then, open Terminal, go to the location by using the `cd` command, and run the following commands:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-samples.git
+cd scalardb-samples/scalardb-analytics-postgresql-sample
+```
+
+## Step 2. Start up the ScalarDB Analytics with PostgreSQL services
+
+The following command starts up the PostgreSQL instance that serves ScalarDB Analytics with PostgreSQL along with the back-end servers of Cassandra, PostgreSQL, and DynamoDB local in the Docker containers. When you first run the command, the required Docker images will be downloaded from the GitHub Container Registry.
+
+```console
+docker-compose up
+```
+
+If you want to run the containers in the background, add the `-d` (--detach) option:
+
+```console
+docker-compose up -d
+```
+
+If you already have your own ScalarDB database and want to use it as a back-end service, you can launch only the PostgreSQL instance without starting additional back-end servers in the container.
+
+```console
+docker-compose up analytics
+```
+
+## Step 3. Run your analytical queries
+
+Now, you should have all the required services running. To run analytical queries, see [Getting Started with ScalarDB Analytics with PostgreSQL](./getting-started.mdx).
+
+## Step 4. Shut down the ScalarDB Analytics with PostgreSQL services
+
+To shut down the containers, do one of the following in Terminal, depending on how you started the containers:
+
+- If you started the containers in the foreground, press Ctrl+C where `docker-compose` is running.
+- If you started the containers in the background, run the following command.
+
+```console
+docker-compose down
+```
diff --git a/versioned_docs/version-3.X/scalardb-analytics-postgresql/scalardb-fdw.mdx b/versioned_docs/version-3.X/scalardb-analytics-postgresql/scalardb-fdw.mdx
new file mode 100644
index 00000000..d8583026
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics-postgresql/scalardb-fdw.mdx
@@ -0,0 +1,180 @@
+---
+tags:
+ - Community
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB FDW
+
+ScalarDB FDW is a PostgreSQL extension that implements a foreign data wrapper (FDW) for [ScalarDB](https://www.scalar-labs.com/scalardb/).
+
+ScalarDB FDW uses the Java Native Interface (JNI) to use ScalarDB as a library inside the FDW and reads data from external databases via scan operations in ScalarDB.
+
+## Prerequisites
+
+You must have the following prerequisites set up in your environment.
+
+### JDK
+
+You must install a version of the Java Development Kit (JDK) that is compatible with ScalarDB. In addition, you must set the `JAVA_HOME` environment variable, which points to your JDK installation directory.
+
+Note that since this extension uses JNI internally, you must include the dynamic library of the Java virtual machine (JVM), such as `libjvm.so`, in the library search path.
+
+### PostgreSQL
+
+This extension supports PostgreSQL 13 or later. For details on how to install PostgreSQL, see the official documentation at [Server Administration](https://www.postgresql.org/docs/current/admin.html).
+
+## Build and installation
+
+You can build and install this extension by running the following command.
+
+```console
+make install
+```
+
+### Common build errors
+
+This section describes some common build errors that you might encounter.
+
+#### ld: library not found for -ljvm
+
+Normally, the build script finds the path for `libjvm.so` and properly sets it as a library search path. However, if you encounter the error `ld: library not found for -ljvm`, place the `libjvm.so` file in the default library search path, for example, by creating a symbolic link as follows:
+
+```console
+ln -s //libjvm.so /usr/lib64/libjvm.so
+```
+
+## Usage
+
+This section provides a usage example and available options for FDW for ScalarDB.
+
+### Example
+
+The following example shows you how to install and create the necessary components, and then run a query by using the FDW extension.
+
+#### 1. Install the extension
+
+For details on how to install the extension, see the [Build and installation](#build-and-installation) section.
+
+#### 2. Create an extension
+
+To create an extension, run the following command:
+
+```sql
+CREATE EXTENSION scalardb_fdw;
+```
+
+#### 3. Create a foreign server
+
+To create a foreign server, run the following command:
+
+```sql
+CREATE SERVER scalardb FOREIGN DATA WRAPPER scalardb_fdw OPTIONS (
+ config_file_path '/path/to/scalardb.properties'
+);
+```
+
+#### 4. Create user mapping
+
+To create user mapping, run the following command:
+
+```sql
+CREATE USER MAPPING FOR PUBLIC SERVER scalardb;
+```
+
+#### 5. Create a foreign table
+
+To create a foreign table, run the following command:
+
+```sql
+CREATE FOREIGN TABLE sample_table (
+ pk int,
+ ck1 int,
+ ck2 int,
+ boolean_col boolean,
+ bigint_col bigint,
+ float_col double precision,
+ double_col double precision,
+ text_col text,
+ blob_col bytea
+) SERVER scalardb OPTIONS (
+ namespace 'ns',
+ table_name 'sample_table'
+);
+```
+
+#### 6. Run a query
+
+To run a query, run the following command:
+
+```sql
+select * from sample_table;
+```
+
+### Available options
+
+You can set the following options for ScalarDB FDW objects.
+
+#### `CREATE SERVER`
+
+You can set the following options on a ScalarDB foreign server object:
+
+| Name | Required | Type | Description |
+| ------------------ | -------- | -------- | --------------------------------------------------------------- |
+| `config_file_path` | **Yes** | `string` | The path to the ScalarDB config file. |
+| `max_heap_size` | No | `string` | The maximum heap size of JVM. The format is the same as `-Xmx`. |
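+
+For example, a foreign server definition that sets both options might look like the following. The heap size value here is only an illustration:
+
+```sql
+CREATE SERVER scalardb FOREIGN DATA WRAPPER scalardb_fdw OPTIONS (
+    config_file_path '/path/to/scalardb.properties',
+    max_heap_size '1g'
+);
+```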
+
+#### `CREATE USER MAPPING`
+
+Currently, no options exist for `CREATE USER MAPPING`.
+
+#### `CREATE FOREIGN TABLE`
+
+The following options can be set on a ScalarDB foreign table object:
+
+| Name | Required | Type | Description |
+| ------------ | -------- | -------- | ---------------------------------------------------------------- |
+| `namespace` | **Yes** | `string` | The name of the namespace of the table in the ScalarDB instance. |
+| `table_name` | **Yes** | `string` | The name of the table in the ScalarDB instance. |
+
+### Data-type mapping
+
+| ScalarDB | PostgreSQL |
+| -------- | ---------------- |
+| BOOLEAN | boolean |
+| INT | int |
+| BIGINT | bigint |
+| FLOAT | float |
+| DOUBLE | double precision |
+| TEXT | text |
+| BLOB | bytea |
+
+## Testing
+
+This section describes how to test FDW for ScalarDB.
+
+### Set up a ScalarDB instance for testing
+
+Before testing FDW for ScalarDB, you must have a running ScalarDB instance that contains test data. You can set up the instance and load the test data by running the following command:
+
+```console
+./test/setup.sh
+```
+
+If you want to reset the instance, run the following command and then run the setup command above again.
+
+```console
+./test/cleanup.sh
+```
+
+### Run regression tests
+
+You can run regression tests by running the following command **after** you have installed the FDW extension.
+
+```console
+make installcheck
+```
+
+## Limitations
+
+- This extension aims to enable analytical query processing on ScalarDB-managed databases. Therefore, this extension only supports reading data from ScalarDB.
diff --git a/versioned_docs/version-3.X/scalardb-analytics-postgresql/schema-importer.mdx b/versioned_docs/version-3.X/scalardb-analytics-postgresql/schema-importer.mdx
new file mode 100644
index 00000000..51457edc
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics-postgresql/schema-importer.mdx
@@ -0,0 +1,66 @@
+---
+tags:
+ - Community
+displayed_sidebar: docsEnglish
+---
+
+# Schema Importer
+
+Schema Importer is a CLI tool for automatically configuring PostgreSQL. By using this tool, your PostgreSQL database can have database objects, such as namespaces and tables, that are identical to those in your ScalarDB instance.
+
+Schema Importer reads the ScalarDB configuration file, retrieves the schemas of the tables defined in ScalarDB, and creates the corresponding foreign data wrapper external tables and views in that order. For more information, refer to [Getting Started with ScalarDB Analytics with PostgreSQL](getting-started.mdx).
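+
+As a rough illustration, the objects that Schema Importer generates for a single ScalarDB table conceptually look like the following sketch. The object names, column lists, and view body shown here are hypothetical and only illustrate the relationship between the generated foreign table and view; the actual DDL is produced by Schema Importer from your ScalarDB schema:
+
+```sql
+-- Hypothetical sketch: a foreign table that reads the ScalarDB-managed data through FDW,
+-- and a view with the same name as the ScalarDB table for users to query.
+CREATE FOREIGN TABLE postgresns._orders_raw (
+    o_orderkey int,
+    o_custkey int
+    -- ... remaining user columns and ScalarDB metadata columns
+) SERVER scalardb OPTIONS (namespace 'postgresns', table_name 'orders');
+
+CREATE VIEW postgresns.orders AS
+    SELECT o_orderkey, o_custkey -- user-visible columns only
+    FROM postgresns._orders_raw;
+```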
+
+## Build Schema Importer
+
+You can build Schema Importer by using [Gradle](https://gradle.org/). To build Schema Importer, run the following command:
+
+```console
+./gradlew build
+```
+
+You may want to build a fat JAR file so that you can launch Schema Importer by using `java -jar`. To build the fat JAR, run the following command:
+
+```console
+./gradlew shadowJar
+```
+
+After you build the fat JAR, you can find the fat JAR file in the `app/build/libs/` directory.
+
+## Run Schema Importer
+
+To run Schema Importer by using the fat JAR file, run the following command:
+
+```console
+java -jar
+```
+Available options are as follows:
+
+| Name | Required | Description | Default |
+| --------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
+| `--config` | **Yes** | Path to the ScalarDB configuration file | |
+| `--config-on-postgres-host` | No | Path to the ScalarDB configuration file on the PostgreSQL-running host | The same value as `--config` will be used. |
+| `--namespace`, `-n` | **Yes** | Namespaces to import into the analytics instance. You can specify the `--namespace` option multiple times if you have two or more namespaces. | |
+| `--host` | No | PostgreSQL host | localhost |
+| `--port` | No | PostgreSQL port | 5432 |
+| `--database`                | No       | PostgreSQL database name                                                                                                                      | postgres                                   |
+| `--user` | No | PostgreSQL user | postgres |
+| `--password` | No | PostgreSQL password | |
+| `--debug` | No | Enable debug mode | |
+
+
+## Test Schema Importer
+
+To test Schema Importer, run the following command:
+
+```console
+./gradlew test
+```
+
+## Build a Docker image of Schema Importer
+
+
+To build a Docker image of Schema Importer, run the following command, replacing `` with the tag version of Schema Importer that you want to use:
+
+```console
+docker build -t ghcr.io/scalar-labs/scalardb-analytics-postgresql-schema-importer: -f ./app/Dockerfile .
+```
diff --git a/versioned_docs/version-3.X/scalardb-analytics/README.mdx b/versioned_docs/version-3.X/scalardb-analytics/README.mdx
new file mode 100644
index 00000000..fa416e71
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics/README.mdx
@@ -0,0 +1,20 @@
+---
+tags:
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Analytics
+
+import WarningLicenseKeyContact from '/src/components/en-us/_warning-license-key-contact.mdx';
+
+**ScalarDB Analytics** is the analytical component of ScalarDB. Similar to ScalarDB, it unifies diverse data sources - ranging from RDBMSs like PostgreSQL and MySQL to NoSQL databases such as Cassandra and DynamoDB - into a single logical database. While ScalarDB focuses on operational workloads with strong transactional consistency across multiple databases, ScalarDB Analytics is optimized for analytical workloads. It supports a wide range of queries, including complex joins, aggregations, and window functions. ScalarDB Analytics operates seamlessly on both ScalarDB-managed data sources and non-ScalarDB-managed ones, enabling advanced analytical queries across various datasets.
+
+The current version of ScalarDB Analytics leverages **Apache Spark** as its execution engine. It provides a unified view of ScalarDB-managed and non-ScalarDB-managed data sources by utilizing a Spark custom catalog. Using ScalarDB Analytics, you can treat tables from these data sources as native Spark tables. This allows you to execute arbitrary Spark SQL queries seamlessly. For example, you can join a table stored in Cassandra with a table in PostgreSQL to perform a cross-database analysis with ease.
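+
+For instance, a cross-database join in Spark SQL might look like the following sketch. The catalog name, data source names, namespaces, tables, and join columns here are all hypothetical placeholders; the exact way to reference tables is covered in the tutorial listed under Further reading:
+
+```sql
+-- Hypothetical example: join a Cassandra-backed table with a PostgreSQL-backed table
+-- through the ScalarDB Analytics catalog.
+SELECT o.order_id, o.total_price, c.customer_name
+FROM my_catalog.cassandra_src.sales.orders AS o
+JOIN my_catalog.postgres_src.crm.customers AS c
+  ON o.customer_id = c.customer_id;
+```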
+
+
+
+## Further reading
+
+* For tutorials on how to use ScalarDB Analytics by using a sample dataset and application, see [Getting Started with ScalarDB Analytics](../scalardb-samples/scalardb-analytics-spark-sample/README.mdx).
+* For supported Spark and Scala versions, see [Version Compatibility of ScalarDB Analytics with Spark](./run-analytical-queries.mdx#version-compatibility).
diff --git a/versioned_docs/version-3.X/scalardb-analytics/deployment.mdx b/versioned_docs/version-3.X/scalardb-analytics/deployment.mdx
new file mode 100644
index 00000000..b1f5a54f
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics/deployment.mdx
@@ -0,0 +1,219 @@
+---
+tags:
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Deploy ScalarDB Analytics in Public Cloud Environments
+
+This guide explains how to deploy ScalarDB Analytics in a public cloud environment. ScalarDB Analytics currently uses Apache Spark as an execution engine and supports managed Spark services provided by public cloud providers, such as Amazon EMR and Databricks.
+
+## Supported managed Spark services and their application types
+
+ScalarDB Analytics supports the following managed Spark services and application types.
+
+| Public Cloud Service | Spark Driver | Spark Connect | JDBC |
+| -------------------------- | ------------ | ------------- | ---- |
+| Amazon EMR (EMR on EC2) | ✅ | ✅ | ❌ |
+| Databricks | ✅ | ❌ | ✅ |
+
+## Configure and deploy
+
+Select your public cloud environment, and follow the instructions to set up and deploy ScalarDB Analytics.
+
+
+
+
+Use Amazon EMR
+
+You can use Amazon EMR (EMR on EC2) to run analytical queries through ScalarDB Analytics. For the basics to launch an EMR cluster, please refer to the [AWS EMR on EC2 documentation](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan.html).
+
+ScalarDB Analytics configuration
+
+To enable ScalarDB Analytics, you need to add the following configuration to the Software setting when you launch an EMR cluster. Be sure to replace the content in the angle brackets:
+
+```json
+[
+ {
+ "Classification": "spark-defaults",
+ "Properties": {
+ "spark.jars.packages": "com.scalar-labs:scalardb-analytics-spark-all-_:",
+ "spark.sql.catalog.": "com.scalar.db.analytics.spark.ScalarDbAnalyticsCatalog",
+ "spark.sql.extensions": "com.scalar.db.analytics.spark.extension.ScalarDbAnalyticsExtensions",
+ "spark.sql.catalog..license.cert_pem": "",
+ "spark.sql.catalog..license.key": "",
+
+ // Add your data source configuration below
+ }
+ }
+]
+```
+
+The following describes what you should change the content in the angle brackets to:
+
+- ``: The version of Spark.
+- ``: The version of Scala used to build Spark.
+- ``: The version of ScalarDB Analytics.
+- ``: The name of the catalog.
+- ``: The PEM encoded license certificate.
+- ``: The license key.
+
+For more details, refer to [Set up ScalarDB Analytics in the Spark configuration](./run-analytical-queries.mdx#set-up-scalardb-analytics-in-the-spark-configuration).
+
+Run analytical queries via the Spark driver
+
+After the EMR Spark cluster has launched, you can use ssh to connect to the primary node of the EMR cluster and run your Spark application. For details on how to create a Spark Driver application, refer to [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver-application#develop-a-spark-application).
+
+Run analytical queries via Spark Connect
+
+You can use Spark Connect to run your Spark application remotely by using the EMR cluster that you launched.
+
+You first need to configure the Software setting in the same way as the [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver-application#develop-a-spark-application). You also need to set the following configuration to enable Spark Connect.
+
+Allow inbound traffic for a Spark Connect server
+
+1. Create a security group to allow inbound traffic for a Spark Connect server. (Port 15001 is the default).
+2. Allow the role of "Amazon EMR service role" to attach the security group to the primary node of the EMR cluster.
+3. Add the security group to the primary node of the EMR cluster as "Additional security groups" when you launch the EMR cluster.
+
+Launch the Spark Connect server via a bootstrap action
+
+1. Create a script file to launch the Spark Connect server as follows:
+
+```bash
+#!/usr/bin/env bash
+
+set -eu -o pipefail
+
+cd /var/lib/spark
+
+sudo -u spark /usr/lib/spark/sbin/start-connect-server.sh --packages org.apache.spark:spark-connect_:,com.scalar-labs:scalardb-analytics-spark-all-_:
+```
+
+The following describes what you should change the content in the angle brackets to:
+
+- ``: The major and minor version of Scala that matches your Spark installation (such as 2.12 or 2.13)
+- ``: The full version of Spark you are using (such as 3.5.3)
+- ``: The major and minor version of Spark you are using (such as 3.5)
+- ``: The version of ScalarDB Analytics
+
+2. Upload the script file to S3.
+3. Allow the role of "EC2 instance profile for Amazon EMR" to access the uploaded script file in S3.
+4. Add the uploaded script file to "Bootstrap actions" when you launch the EMR cluster.
+
+Run analytical queries
+
+You can run your Spark application via Spark Connect from anywhere by using the remote URL of the Spark Connect server, which is `sc://:15001`.
+
+For details on how to create a Spark application by using Spark Connect, refer to [Spark Connect application](./run-analytical-queries.mdx?spark-application-type=spark-connect#develop-a-spark-application).
+
+
+
+Use Databricks
+
+You can use Databricks to run analytical queries through ScalarDB Analytics.
+
+:::note
+
+Note that Databricks provides a modified version of Apache Spark, which works differently from the original Apache Spark.
+
+:::
+
+Launch Databricks cluster
+
+ScalarDB Analytics works with all-purpose and jobs-compute clusters on Databricks. When you launch the cluster, you need to configure the cluster to enable ScalarDB Analytics as follows:
+
+1. Store the license certificate and license key in the cluster by using the Databricks CLI.
+
+```console
+databricks secrets create-scope scalardb-analytics-secret # you can use any secret scope name
+cat license_key.json | databricks secrets put-secret scalardb-analytics-secret license-key
+cat license_cert.pem | databricks secrets put-secret scalardb-analytics-secret license-cert
+```
+
+:::note
+
+For details on how to install and use the Databricks CLI, refer to the [Databricks CLI documentation](https://docs.databricks.com/en/dev-tools/cli/index.html).
+
+:::
+
+2. Select "No isolation shared" for the cluster mode. (This is required. ScalarDB Analytics works only with this cluster mode.)
+3. Select an appropriate Databricks runtime version that supports Spark 3.4 or later.
+4. Configure "Advanced Options" > "Spark config" as follows, replacing `` with the name of the catalog that you want to use:
+
+```
+spark.sql.catalog. com.scalar.db.analytics.spark.ScalarDbAnalyticsCatalog
+spark.sql.extensions com.scalar.db.analytics.spark.extension.ScalarDbAnalyticsExtensions
+spark.sql.catalog..license.key {{secrets/scalardb-analytics-secret/license-key}}
+spark.sql.catalog..license.cert_pem {{secrets/scalardb-analytics-secret/license-cert}}
+```
+
+:::note
+
+You also need to configure the data source. For details, refer to [Set up ScalarDB Analytics in the Spark configuration](./run-analytical-queries.mdx#set-up-scalardb-analytics-in-the-spark-configuration).
+
+:::
+
+:::note
+
+If you specified different secret names in the previous step, be sure to replace the secret names in the configuration above.
+
+:::
+
+5. Add the library of ScalarDB Analytics to the launched cluster as a Maven dependency. For details on how to add the library, refer to the [Databricks cluster libraries documentation](https://docs.databricks.com/en/libraries/cluster-libraries.html).
+
+Run analytical queries via the Spark Driver
+
+You can run your Spark application on the properly configured Databricks cluster with Databricks Notebook or Databricks Jobs to access the tables in ScalarDB Analytics. To run the Spark application, you can migrate your PySpark, Scala, or Spark SQL application to Databricks Notebook, or use Databricks Jobs to run it. ScalarDB Analytics works with task types for Notebook, Python, JAR, and SQL.
+
+For more details on how to use Databricks Jobs, refer to the [Databricks Jobs documentation](https://docs.databricks.com/en/jobs/index.html).
+
+Run analytical queries via the JDBC driver
+
+Databricks supports JDBC to run SQL jobs on the cluster. You can use this feature to run your Spark application in SQL with ScalarDB Analytics by configuring extra settings as follows:
+
+1. Download the ScalarDB Analytics library JAR file from the Maven repository.
+2. Upload the JAR file to the Databricks workspace.
+3. Add the JAR file to the cluster as a library, instead of the Maven dependency.
+4. Create an init script as follows, replacing `` with the path to your JAR file in the Databricks workspace:
+
+```bash
+#!/bin/bash
+
+# Target directories
+TARGET_DIRECTORIES=("/databricks/jars" "/databricks/hive_metastore_jars")
+JAR_PATH="
+
+# Copy the JAR file to the target directories
+for TARGET_DIR in "${TARGET_DIRECTORIES[@]}"; do
+ mkdir -p "$TARGET_DIR"
+ cp "$JAR_PATH" "$TARGET_DIR/"
+done
+```
+
+5. Upload the init script to the Databricks workspace.
+6. Add the init script to the cluster in "Advanced Options" > "Init scripts" when you launch the cluster.
+
+After the cluster is launched, you can get the JDBC URL of the cluster in the "Advanced Options" > "JDBC/ODBC" tab on the cluster details page.
+
+To connect to the Databricks cluster by using JDBC, you need to add the Databricks JDBC driver to your application dependencies. For example, if you are using Gradle, you can add the following dependency to your `build.gradle` file:
+
+```groovy
+implementation("com.databricks:databricks-jdbc:0.9.6-oss")
+```
+
+Then, you can connect to the Databricks cluster by using JDBC with the JDBC URL (``), as is common with JDBC applications.
+
+```java
+// Requires imports for java.sql.Connection and java.sql.DriverManager.
+Class.forName("com.databricks.client.jdbc.Driver");
+String url = "";
+Connection conn = DriverManager.getConnection(url);
+```
+
+For more details on how to use JDBC with Databricks, refer to the [Databricks JDBC Driver documentation](https://docs.databricks.com/en/integrations/jdbc/index.html).
+
+
+
diff --git a/versioned_docs/version-3.X/scalardb-analytics/design.mdx b/versioned_docs/version-3.X/scalardb-analytics/design.mdx
new file mode 100644
index 00000000..e1f99d07
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics/design.mdx
@@ -0,0 +1,391 @@
+---
+tags:
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Analytics Design
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+ScalarDB Analytics is the analytical component of ScalarDB. Similar to ScalarDB, it unifies diverse data sources—ranging from RDBMSs like PostgreSQL and MySQL to NoSQL databases like Cassandra and DynamoDB—into a single logical database. This enables you to perform analytical queries across multiple databases seamlessly.
+
+ScalarDB Analytics consists of two main components: a universal data catalog and a query engine.
+
+- **Universal data catalog.** The universal data catalog is a flexible metadata management system that handles multiple catalog spaces. Each catalog space provides an independent logical grouping of data sources and views, enabling organized management of diverse data environments.
+- **Query engine.** The query engine executes queries against the universal data catalog. ScalarDB Analytics provides appropriate data connectors to interface with the underlying data sources.
+
+ScalarDB Analytics employs a decoupled architecture where the data catalog and query engine are separate components. This design allows for integration with various existing query engines through an extensible architecture. As a result, you can select different query engines to execute queries against the same data catalog based on your specific requirements.
+
+## Universal data catalog
+
+The universal data catalog is composed of several levels and is structured as follows:
+
+```mermaid
+graph TD
+ C[Catalog] --> D[Data Source]
+ C[Catalog] --> D2[Data Source]
+ subgraph " "
+ D --> N[Namespace]
+ D --> N2[Namespace]
+ N --> T[Table]
+ N --> T2[Table]
+ T --> TC[Column]
+ T --> TC2[Column]
+ D2
+ end
+
+ C --> VN[View Namespace]
+ C --> VN2[View Namespace]
+ subgraph " "
+ VN --> V[View]
+ VN --> V2[View]
+ V --> VC[Column]
+ V --> VC2[Column]
+ VN2
+ end
+```
+
+The following are definitions for those levels:
+
+- **Catalog** is a folder that contains all your data source information. For example, you might have one catalog called `analytics_catalog` for your analytics data and another called `operational_catalog` for your day-to-day operations.
+- **Data source** represents each data source you connect to. For each data source, we store important information like:
+ - What kind of data source it is (PostgreSQL, Cassandra, etc.)
+ - How to connect to it (connection details and passwords)
+ - Special features the data source supports (like transactions)
+- **Namespace** is like a subfolder within your data source that groups related tables together. In PostgreSQL, these are called schemas; in Cassandra, they're called keyspaces. You can have multiple levels of namespaces, similar to having folders within folders.
+- **Table** is where your actual data lives. For each table, we keep track of:
+ - What columns it has
+ - What type of data each column can store
+ - Whether columns can be empty (null)
+- **View namespace** is a special folder for views. Unlike regular namespaces that are tied to one data source, view namespaces can work with multiple data sources at once.
+- **View** is like a virtual table that, like a regular table, has its own columns with specific types and rules about empty values. A view can:
+  - Show your data in a simpler way (like hiding technical columns in ScalarDB tables)
+  - Combine data from different sources using SQL queries
+
+### Supported data types
+
+ScalarDB Analytics supports a wide range of data types across different data sources. The universal data catalog maps these data types to a common set of types to ensure compatibility and consistency across sources. The following list shows the supported data types in ScalarDB Analytics:
+
+- `BYTE`
+- `SMALLINT`
+- `INT`
+- `BIGINT`
+- `FLOAT`
+- `DOUBLE`
+- `DECIMAL`
+- `TEXT`
+- `BLOB`
+- `BOOLEAN`
+- `DATE`
+- `TIME`
+- `TIMESTAMP`
+- `TIMESTAMPTZ`
+- `DURATION`
+- `INTERVAL`
+
+### Catalog information mappings by data source
+
+When you register a data source to ScalarDB Analytics, the catalog information of the data source, that is, its namespaces, tables, and columns, is resolved and registered to the universal data catalog. To resolve this catalog information, particular objects on the data source side are mapped to universal data catalog objects. This mapping consists of two parts: catalog-level mappings and data-type mappings. The following sections describe how ScalarDB Analytics maps the catalog levels and data types from each data source into the universal data catalog.
+
+#### Catalog-level mappings
+
+The catalog-level mappings are the mappings of the namespace names, table names, and column names from the data sources to the universal data catalog. To see the catalog-level mappings in each data source, select a data source.
+
+
+
+ The catalog information of ScalarDB is automatically resolved by ScalarDB Analytics. The catalog-level objects are mapped as follows:
+
+ - The ScalarDB namespace is mapped to the namespace. Therefore, the namespace of the ScalarDB data source is always single level, consisting of only the namespace name.
+ - The ScalarDB table is mapped to the table.
+ - The ScalarDB column is mapped to the column.
+
+
+
+
+ The catalog information of PostgreSQL is automatically resolved by ScalarDB Analytics. The catalog-level objects are mapped as follows:
+
+ - The PostgreSQL schema is mapped to the namespace. Therefore, the namespace of the PostgreSQL data source is always single level, consisting of only the schema name.
+ - Only user-defined schemas are mapped to namespaces. The following system schemas are ignored:
+ - `information_schema`
+ - `pg_catalog`
+ - The PostgreSQL table is mapped to the table.
+ - The PostgreSQL column is mapped to the column.
+
+
+
+ The catalog information of MySQL is automatically resolved by ScalarDB Analytics. The catalog-level objects are mapped as follows:
+
+ - The MySQL database is mapped to the namespace. Therefore, the namespace of the MySQL data source is always single level, consisting of only the database name.
+ - Only user-defined databases are mapped to namespaces. The following system databases are ignored:
+ - `mysql`
+ - `sys`
+ - `information_schema`
+ - `performance_schema`
+ - The MySQL table is mapped to the table.
+ - The MySQL column is mapped to the column.
+
+
+
+ The catalog information of Oracle is automatically resolved by ScalarDB Analytics. The catalog-level objects are mapped as follows:
+
+ - The Oracle schema is mapped to the namespace. Therefore, the namespace of the Oracle data source is always single level, consisting of only the schema name.
+ - Only user-defined schemas are mapped to namespaces. The following system schemas are ignored:
+ - `ANONYMOUS`
+ - `APPQOSSYS`
+ - `AUDSYS`
+ - `CTXSYS`
+ - `DBSNMP`
+ - `DGPDB_INT`
+ - `DBSFWUSER`
+ - `DVF`
+ - `DVSYS`
+ - `GGSYS`
+ - `GSMADMIN_INTERNAL`
+ - `GSMCATUSER`
+ - `GSMROOTUSER`
+ - `GSMUSER`
+ - `LBACSYS`
+ - `MDSYS`
+ - `OJVMSYS`
+ - `ORDDATA`
+ - `ORDPLUGINS`
+ - `ORDSYS`
+ - `OUTLN`
+ - `REMOTE_SCHEDULER_AGENT`
+ - `SI_INFORMTN_SCHEMA`
+ - `SYS`
+ - `SYS$UMF`
+ - `SYSBACKUP`
+ - `SYSDG`
+ - `SYSKM`
+ - `SYSRAC`
+ - `SYSTEM`
+ - `WMSYS`
+ - `XDB`
+ - `DIP`
+ - `MDDATA`
+ - `ORACLE_OCM`
+ - `XS$NULL`
+
+
+
+ The catalog information of SQL Server is automatically resolved by ScalarDB Analytics. The catalog-level objects are mapped as follows:
+
+ - The SQL Server database and schema are mapped to the namespace together. Therefore, the namespace of the SQL Server data source is always two-level, consisting of the database name and the schema name.
+ - Only user-defined schemas are mapped to namespaces. The following system schemas are ignored:
+ - `sys`
+ - `guest`
+ - `INFORMATION_SCHEMA`
+ - `db_accessadmin`
+ - `db_backupoperator`
+ - `db_datareader`
+ - `db_datawriter`
+ - `db_ddladmin`
+ - `db_denydatareader`
+ - `db_denydatawriter`
+ - `db_owner`
+ - `db_securityadmin`
+ - Only user-defined databases are mapped to namespaces. The following system databases are ignored:
+ - `master`
+ - `model`
+ - `msdb`
+ - `tempdb`
+ - The SQL Server table is mapped to the table.
+ - The SQL Server column is mapped to the column.
+
+
+
+ Since DynamoDB is schema-less, you need to specify the catalog information explicitly when registering a DynamoDB data source, by using the following JSON format:
+
+ ```json
+ {
+ "namespaces": [
+ {
+ "name": "",
+ "tables": [
+ {
+ "name": "",
+ "columns": [
+ {
+ "name": "",
+ "type": ""
+ },
+ ...
+ ]
+ },
+ ...
+ ]
+ },
+ ...
+ ]
+ }
+ ```
+
+ In the specified JSON, you can use arbitrary namespace names, but the table names must match the table names in DynamoDB, and the column names and types must match the field names and types in DynamoDB.
+
+
+
+
+#### Data-type mappings
+
+The native data types of the underlying data sources are mapped to the data types in ScalarDB Analytics. To see the data-type mappings in each data source, select a data source.
+
+
+
+ | **ScalarDB Data Type** | **ScalarDB Analytics Data Type** |
+ |:------------------------------|:---------------------------------|
+ | `BOOLEAN` | `BOOLEAN` |
+ | `INT` | `INT` |
+ | `BIGINT` | `BIGINT` |
+ | `FLOAT` | `FLOAT` |
+ | `DOUBLE` | `DOUBLE` |
+ | `TEXT` | `TEXT` |
+ | `BLOB` | `BLOB` |
+ | `DATE` | `DATE` |
+ | `TIME` | `TIME` |
+ | `TIMESTAMP` | `TIMESTAMP` |
+ | `TIMESTAMPTZ` | `TIMESTAMPTZ` |
+
+
+ | **PostgreSQL Data Type** | **ScalarDB Analytics Data Type** |
+ |:------------------------------|:---------------------------------|
+ | `integer` | `INT` |
+ | `bigint` | `BIGINT` |
+ | `real` | `FLOAT` |
+ | `double precision` | `DOUBLE` |
+ | `smallserial` | `SMALLINT` |
+ | `serial` | `INT` |
+ | `bigserial` | `BIGINT` |
+ | `char` | `TEXT` |
+ | `varchar` | `TEXT` |
+ | `text` | `TEXT` |
+ | `bpchar` | `TEXT` |
+ | `boolean` | `BOOLEAN` |
+ | `bytea` | `BLOB` |
+ | `date` | `DATE` |
+ | `time` | `TIME` |
+ | `time with time zone` | `TIME` |
+ | `time without time zone` | `TIME` |
+ | `timestamp` | `TIMESTAMP` |
+ | `timestamp with time zone` | `TIMESTAMPTZ` |
+ | `timestamp without time zone` | `TIMESTAMP` |
+
+
+ | **MySQL Data Type** | **ScalarDB Analytics Data Type** |
+ |:-----------------------|:---------------------------------|
+ | `bit` | `BOOLEAN` |
+ | `bit(1)` | `BOOLEAN` |
+ | `bit(x)` if *x >= 2* | `BLOB` |
+ | `tinyint` | `SMALLINT` |
+ | `tinyint(1)` | `BOOLEAN` |
+ | `boolean` | `BOOLEAN` |
+ | `smallint` | `SMALLINT` |
+ | `smallint unsigned` | `INT` |
+ | `mediumint` | `INT` |
+ | `mediumint unsigned` | `INT` |
+ | `int` | `INT` |
+ | `int unsigned` | `BIGINT` |
+ | `bigint` | `BIGINT` |
+ | `float` | `FLOAT` |
+ | `double` | `DOUBLE` |
+ | `real` | `DOUBLE` |
+ | `char` | `TEXT` |
+ | `varchar` | `TEXT` |
+ | `text` | `TEXT` |
+ | `binary` | `BLOB` |
+ | `varbinary` | `BLOB` |
+ | `blob` | `BLOB` |
+ | `date` | `DATE` |
+ | `time` | `TIME` |
+ | `datetime` | `TIMESTAMP` |
+ | `timestamp` | `TIMESTAMPTZ` |
+
+
+ | **Oracle Data Type** | **ScalarDB Analytics Data Type** |
+ |:-----------------------------------|:---------------------------------|
+ | `NUMBER` if *scale = 0* | `BIGINT` |
+ | `NUMBER` if *scale > 0* | `DOUBLE` |
+ | `FLOAT` if *precision ≤ 53* | `DOUBLE` |
+ | `BINARY_FLOAT` | `FLOAT` |
+ | `BINARY_DOUBLE` | `DOUBLE` |
+ | `CHAR` | `TEXT` |
+ | `NCHAR` | `TEXT` |
+ | `VARCHAR2` | `TEXT` |
+ | `NVARCHAR2` | `TEXT` |
+ | `CLOB` | `TEXT` |
+ | `NCLOB` | `TEXT` |
+ | `BLOB` | `BLOB` |
+ | `BOOLEAN` | `BOOLEAN` |
+ | `DATE` | `DATE` |
+ | `TIMESTAMP` | `TIMESTAMPTZ` |
+ | `TIMESTAMP WITH TIME ZONE` | `TIMESTAMPTZ` |
+ | `TIMESTAMP WITH LOCAL TIME ZONE` | `TIMESTAMP` |
+ | `RAW` | `BLOB` |
+
+
+ | **SQL Server Data Type** | **ScalarDB Analytics Data Type** |
+ |:---------------------------|:---------------------------------|
+ | `bit` | `BOOLEAN` |
+ | `tinyint` | `SMALLINT` |
+ | `smallint` | `SMALLINT` |
+ | `int` | `INT` |
+ | `bigint` | `BIGINT` |
+ | `real` | `FLOAT` |
+ | `float` | `DOUBLE` |
+ | `float(n)` if *n ≤ 24* | `FLOAT` |
+ | `float(n)` if *n ≥ 25* | `DOUBLE` |
+ | `binary` | `BLOB` |
+ | `varbinary` | `BLOB` |
+ | `char` | `TEXT` |
+ | `varchar` | `TEXT` |
+ | `nchar` | `TEXT` |
+ | `nvarchar` | `TEXT` |
+ | `ntext` | `TEXT` |
+ | `text` | `TEXT` |
+ | `date` | `DATE` |
+ | `time` | `TIME` |
+ | `datetime` | `TIMESTAMP` |
+ | `datetime2` | `TIMESTAMP` |
+ | `smalldatetime` | `TIMESTAMP` |
+ | `datetimeoffset` | `TIMESTAMPTZ` |
+
+
+ | **DynamoDB Data Type** | **ScalarDB Analytics Data Type** |
+ |:-------------------------|:---------------------------------|
+ | `Number` | `BYTE` |
+ | `Number` | `SMALLINT` |
+ | `Number` | `INT` |
+ | `Number` | `BIGINT` |
+ | `Number` | `FLOAT` |
+ | `Number` | `DOUBLE` |
+ | `Number` | `DECIMAL` |
+ | `String` | `TEXT` |
+ | `Binary` | `BLOB` |
+ | `Boolean` | `BOOLEAN` |
+
+:::warning
+
+It is important to ensure that the field values of `Number` types are parsable as a specified data type for ScalarDB Analytics. For example, if a column that corresponds to a `Number`-type field is specified as an `INT` type, its value must be an integer. If the value is not an integer, an error will occur when running a query.
+
+:::
+
+
+
+
+## Query engine
+
+A query engine is an independent component that works alongside the universal data catalog. It is responsible for executing queries against the data sources registered in the universal data catalog and returning the results to the user. ScalarDB Analytics does not currently provide a built-in query engine. Instead, it is designed to be integrated with existing query engines and is normally provided as a plugin for the query engine.
+
+When you run a query, the ScalarDB Analytics query engine plugin works as follows:
+
+1. Fetches the catalog metadata, such as the data source location, the table object identifier, and the table schema, by calling the universal data catalog API.
+2. Sets up the data source connectors to the data sources by using the catalog metadata.
+3. Provides the query optimization information to the query engine based on the catalog metadata.
+4. Reads the data from the data sources by using the data source connectors.
+
+ScalarDB Analytics manages these processes internally. You can simply run a query against the universal data catalog by using the query engine API in the same way that you would normally run a query.
+
+ScalarDB Analytics currently supports Apache Spark as its query engine. For details on how to use ScalarDB Analytics with Spark, see [Run Analytical Queries Through ScalarDB Analytics](./run-analytical-queries.mdx).
diff --git a/versioned_docs/version-3.X/scalardb-analytics/run-analytical-queries.mdx b/versioned_docs/version-3.X/scalardb-analytics/run-analytical-queries.mdx
new file mode 100644
index 00000000..4f4b26aa
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-analytics/run-analytical-queries.mdx
@@ -0,0 +1,453 @@
+---
+tags:
+ - Enterprise Option
+displayed_sidebar: docsEnglish
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Run Analytical Queries Through ScalarDB Analytics
+
+This guide explains how to develop ScalarDB Analytics applications. For details on the architecture and design, see [ScalarDB Analytics Design](./design.mdx).
+
+ScalarDB Analytics currently uses Spark as an execution engine and offers a Spark custom catalog plugin that provides a unified view of ScalarDB-managed and non-ScalarDB-managed data sources as Spark tables. This allows you to execute arbitrary Spark SQL queries seamlessly.
+
+## Preparation
+
+This section describes the prerequisites, setting up ScalarDB Analytics in the Spark configuration, and adding the ScalarDB Analytics dependency.
+
+### Prerequisites
+
+ScalarDB Analytics works with Apache Spark 3.4 or later. If you don't have Spark installed yet, please download the Spark distribution from [Apache's website](https://spark.apache.org/downloads.html).
+
+:::note
+
+Apache Spark is built with either Scala 2.12 or Scala 2.13. ScalarDB Analytics supports both versions. Be sure to check which Scala version your Spark installation uses so that you can select the correct version of ScalarDB Analytics later. You can refer to [Version Compatibility](#version-compatibility) for more details.
+
+:::
+
+### Set up ScalarDB Analytics in the Spark configuration
+
+The following sections describe all available configuration options for ScalarDB Analytics. These configurations control:
+
+- How ScalarDB Analytics integrates with Spark
+- How data sources are connected and accessed
+- How license information is provided
+
+For example configurations in a practical scenario, see [the sample application configuration](../scalardb-samples/scalardb-analytics-spark-sample/README.mdx#scalardb-analytics-configuration).
+
+#### Spark plugin configurations
+
+| Configuration Key | Required | Description |
+|:-----------------|:---------|:------------|
+| `spark.jars.packages` | No | A comma-separated list of Maven coordinates for the required dependencies. You need to include the ScalarDB Analytics package that you are using; otherwise, specify it as a command-line argument when running the Spark application. For details about the Maven coordinates of ScalarDB Analytics, refer to [Add the ScalarDB Analytics dependency](#add-the-scalardb-analytics-dependency). |
+| `spark.sql.extensions` | Yes | Must be set to `com.scalar.db.analytics.spark.extension.ScalarDbAnalyticsExtensions`. |
+| `spark.sql.catalog.` | Yes | Must be set to `com.scalar.db.analytics.spark.ScalarDbAnalyticsCatalog`. |
+
+You can specify any name for ``. Be sure to use the same catalog name throughout your configuration.
+
+#### License configurations
+
+| Configuration Key | Required | Description |
+| :--------------------------------------------------- | :------- | :---------------------------------------------------------------------------------------------------------------------------- |
+| `spark.sql.catalog..license.key` | Yes | JSON string of the license key for ScalarDB Analytics |
+| `spark.sql.catalog..license.cert_pem` | Yes | A string of PEM-encoded certificate of ScalarDB Analytics license. Either `cert_pem` or `cert_path` must be set. |
+| `spark.sql.catalog..license.cert_path` | Yes | A path to the PEM-encoded certificate of ScalarDB Analytics license. Either `cert_pem` or `cert_path` must be set. |
+
+#### Data source configurations
+
+ScalarDB Analytics supports multiple types of data sources. Each type requires specific configuration parameters:
+
+
+
+
+:::note
+
+ScalarDB Analytics supports ScalarDB as a data source. This table describes how to configure ScalarDB as a data source.
+
+:::
+
+| Configuration Key | Required | Description |
+| :---------------------------------------------------------------------------- | :------- | :---------------------------------------------- |
+| `spark.sql.catalog..data_source..type` | Yes | Always set to `scalardb` |
+| `spark.sql.catalog..data_source..config_path` | Yes | The path to the configuration file for ScalarDB |
+
+:::tip
+
+You can use an arbitrary name for ``.
+
+:::
+
+
+
+
+| Configuration Key | Required | Description |
+| :------------------------------------------------------------------------- | :------- | :------------------------------------- |
+| `spark.sql.catalog..data_source..type` | Yes | Always set to `mysql` |
+| `spark.sql.catalog..data_source..host` | Yes | The host name of the MySQL server |
+| `spark.sql.catalog..data_source..port` | Yes | The port number of the MySQL server |
+| `spark.sql.catalog..data_source..username` | Yes | The username of the MySQL server |
+| `spark.sql.catalog..data_source..password` | Yes | The password of the MySQL server |
+| `spark.sql.catalog..data_source..database` | No | The name of the database to connect to |
+
+:::tip
+
+You can use an arbitrary name for ``.
+
+:::
+
+
+
+
+| Configuration Key | Required | Description |
+| :------------------------------------------------------------------------- | :------- | :--------------------------------------- |
+| `spark.sql.catalog..data_source..type` | Yes | Always set to `postgresql` or `postgres` |
+| `spark.sql.catalog..data_source..host` | Yes | The host name of the PostgreSQL server |
+| `spark.sql.catalog..data_source..port` | Yes | The port number of the PostgreSQL server |
+| `spark.sql.catalog..data_source..username` | Yes | The username of the PostgreSQL server |
+| `spark.sql.catalog..data_source..password` | Yes | The password of the PostgreSQL server |
+| `spark.sql.catalog..data_source..database` | Yes | The name of the database to connect to |
+
+:::tip
+
+You can use an arbitrary name for ``.
+
+:::
+
+
+
+
+| Configuration Key | Required | Description |
+| :----------------------------------------------------------------------------- | :------- | :------------------------------------ |
+| `spark.sql.catalog..data_source..type` | Yes | Always set to `oracle` |
+| `spark.sql.catalog..data_source..host` | Yes | The host name of the Oracle server |
+| `spark.sql.catalog..data_source..port` | Yes | The port number of the Oracle server |
+| `spark.sql.catalog..data_source..username` | Yes | The username of the Oracle server |
+| `spark.sql.catalog..data_source..password` | Yes | The password of the Oracle server |
+| `spark.sql.catalog..data_source..service_name` | Yes | The service name of the Oracle server |
+
+:::tip
+
+You can use an arbitrary name for ``.
+
+:::
+
+
+
+
+| Configuration Key | Required | Description |
+| :------------------------------------------------------------------------- | :------- | :----------------------------------------------------------------------------------------------------- |
+| `spark.sql.catalog..data_source..type` | Yes | Always set to `sqlserver` or `mssql` |
+| `spark.sql.catalog..data_source..host` | Yes | The host name of the SQL Server server |
+| `spark.sql.catalog..data_source..port` | Yes | The port number of the SQL Server server |
+| `spark.sql.catalog..data_source..username` | Yes | The username of the SQL Server server |
+| `spark.sql.catalog..data_source..password` | Yes | The password of the SQL Server server |
+| `spark.sql.catalog..data_source..database` | No | The name of the database to connect to |
+| `spark.sql.catalog..data_source..secure` | No | Whether to use a secure connection to the SQL Server server. Set to `true` to use a secure connection. |
+
+:::tip
+
+You can use an arbitrary name for ``.
+
+:::
+
+
+
+
+| Configuration Key | Required | Description |
+|:---------------------------------------------------------------------------|:------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `spark.sql.catalog..data_source..type` | Yes | Always set to `dynamodb` |
+| `spark.sql.catalog..data_source..region` | Either `region` or `endpoint` must be set | The AWS region of the DynamoDB instance |
+| `spark.sql.catalog..data_source..endpoint` | Either `region` or `endpoint` must be set | The AWS endpoint of the DynamoDB instance |
+| `spark.sql.catalog..data_source..schema` | Yes | A JSON object representing the schema of the catalog. For details on the format, see [Catalog-level mappings](./design.mdx#catalog-level-mappings). |
+
+
+:::tip
+
+You can use an arbitrary name for ``.
+
+:::
+
+
+
+
+#### Example configuration
+
+Below is an example configuration for ScalarDB Analytics that demonstrates how to set up a catalog named `scalardb` with multiple data sources:
+
+```conf
+# Spark plugin configurations
+spark.jars.packages com.scalar-labs:scalardb-analytics-spark-all-_:
+spark.sql.extensions com.scalar.db.analytics.spark.extension.ScalarDbAnalyticsExtensions
+spark.sql.catalog.scalardb com.scalar.db.analytics.spark.ScalarDbAnalyticsCatalog
+
+# License configurations
+spark.sql.catalog.scalardb.license.key
+spark.sql.catalog.scalardb.license.cert_pem
+
+# Data source configurations
+spark.sql.catalog.scalardb.data_source.scalardb.type scalardb
+spark.sql.catalog.scalardb.data_source.scalardb.config_path /path/to/scalardb.properties
+
+spark.sql.catalog.scalardb.data_source.mysql_source.type mysql
+spark.sql.catalog.scalardb.data_source.mysql_source.host localhost
+spark.sql.catalog.scalardb.data_source.mysql_source.port 3306
+spark.sql.catalog.scalardb.data_source.mysql_source.username root
+spark.sql.catalog.scalardb.data_source.mysql_source.password password
+spark.sql.catalog.scalardb.data_source.mysql_source.database mydb
+```
+
+The following describes what you should change the content in the angle brackets to:
+
+- ``: The license key for ScalarDB Analytics
+- ``: The PEM-encoded certificate of ScalarDB Analytics license
+- ``: The major and minor version of Spark you are using (such as 3.4)
+- ``: The major and minor version of Scala that matches your Spark installation (such as 2.12 or 2.13)
+- ``: The version of ScalarDB Analytics
+
+### Add the ScalarDB Analytics dependency
+
+ScalarDB Analytics is hosted in the Maven Central Repository. The name of the package is `scalardb-analytics-spark-all-_:`, where:
+
+- ``: The major and minor version of Spark you are using (such as 3.4)
+- ``: The major and minor version of Scala that matches your Spark installation (such as 2.12 or 2.13)
+- ``: The version of ScalarDB Analytics
+
+For details about version compatibility, refer to [Version Compatibility](#version-compatibility).
+
+You can add this dependency to your project by configuring the build settings of your project. For example, if you are using Gradle, you can add the following to your `build.gradle` file:
+
+```groovy
+dependencies {
+ implementation 'com.scalar-labs:scalardb-analytics-spark-all-_:'
+}
+```
+
+:::note
+
+If you want to bundle your application in a single fat JAR file by using a plugin like the Gradle Shadow plugin or the Maven Shade plugin, you need to exclude ScalarDB Analytics from the fat JAR file by choosing the appropriate configuration, such as `provided` or `shadow`, depending on the plugin you are using.
+
+:::
+
+## Develop a Spark application
+
+In this section, you will learn how to develop a Spark application that uses ScalarDB Analytics in Java.
+
+There are three ways to develop Spark applications with ScalarDB Analytics:
+
+1. **Spark driver application**: A traditional Spark application that runs within the cluster
+2. **Spark Connect application**: A remote application that uses the Spark Connect protocol
+3. **JDBC application**: A remote application that uses the JDBC interface
+
+:::note
+
+Depending on your environment, you may not be able to use all the methods mentioned above. For details about supported features and deployment options, refer to [Supported managed Spark services and their application types](./deployment.mdx#supported-managed-spark-services-and-their-application-types).
+
+:::
+
+With all these methods, you can refer to tables in ScalarDB Analytics using the same table identifier format. For details about how ScalarDB Analytics maps catalog information from data sources, refer to [Catalog information mappings by data source](./design.mdx#catalog-information-mappings-by-data-source).
+
+
+
+
+You can use the standard `SparkSession` class with ScalarDB Analytics. Additionally, you can use any type of cluster deployment that Spark supports, such as YARN, Kubernetes, standalone, or local mode.
+
+To read data from tables in ScalarDB Analytics, you can use the `spark.sql` or `spark.read.table` function in the same way as when reading a normal Spark table.
+
+First, you need to set up your Java project. For example, if you are using Gradle, you can add the following to your `build.gradle` file:
+
+```groovy
+dependencies {
+ implementation 'com.scalar-labs:scalardb-analytics-spark-_:'
+}
+```
+
+Below is an example of a Spark Driver application:
+
+```java
+import org.apache.spark.sql.SparkSession;
+
+public class MyApp {
+ public static void main(String[] args) {
+ // Create a SparkSession
+ try (SparkSession spark = SparkSession.builder().getOrCreate()) {
+ // Read data from a table in ScalarDB Analytics
+ spark.sql("SELECT * FROM my_catalog.my_data_source.my_namespace.my_table").show();
+ }
+ }
+}
+```
+
+Then, you can build and run your application by using the `spark-submit` command.
+
+:::note
+
+You may need to build a fat JAR file for your application, as is usual for normal Spark applications.
+
+:::
+
+```console
+spark-submit --class MyApp --master local[*] my-spark-application-all.jar
+```
+
+:::tip
+
+You can also use other CLI tools that Spark provides, such as `spark-sql` and `spark-shell`, to interact with ScalarDB Analytics tables.
+
+:::
+
+
+
+
+You can use [Spark Connect](https://spark.apache.org/spark-connect/) to interact with ScalarDB Analytics. By using Spark Connect, you can access a remote Spark cluster and read data in the same way as a Spark Driver application. The following briefly describes how to use Spark Connect.
+
+First, you need to start a Spark Connect server in the remote Spark cluster by running the following command:
+
+```console
+./sbin/start-connect-server.sh --packages org.apache.spark:spark-connect_:,com.scalar-labs:scalardb-analytics-spark-all-_:
+```
+
+The following describes what you should change the content in the angle brackets to:
+
+- ``: The major and minor version of Scala that matches your Spark installation (such as 2.12 or 2.13)
+- ``: The full version of Spark you are using (such as 3.5.3)
+- ``: The major and minor version of Spark you are using (such as 3.5)
+- ``: The version of ScalarDB Analytics
+
+:::note
+
+The versions of the packages must match the versions of Spark and ScalarDB Analytics that you are using.
+
+:::
+
+You also need to include the Spark Connect client package in your application. For example, if you are using Gradle, you can add the following to your `build.gradle` file:
+
+```kotlin
+implementation("org.apache.spark:spark-connect-client-jvm_2.12:3.5.3")
+```
+
+Then, you can write a Spark Connect client application to connect to the server and read data.
+
+```java
+import org.apache.spark.sql.SparkSession;
+
+public class MyApp {
+ public static void main(String[] args) {
+ try (SparkSession spark = SparkSession.builder()
+ .remote("sc://:")
+ .getOrCreate()) {
+
+ // Read data from a table in ScalarDB Analytics
+ spark.sql("SELECT * FROM my_catalog.my_data_source.my_namespace.my_table").show();
+ }
+ }
+}
+```
+
+You can run your Spark Connect client application as a normal Java application by running the following command:
+
+```console
+java -jar my-spark-connect-client.jar
+```
+
+For details about how you can use Spark Connect, refer to the [Spark Connect documentation](https://spark.apache.org/docs/latest/spark-connect-overview.html).
+
+
+
+
+Unfortunately, the Spark Thrift JDBC server does not support the Spark features that are necessary for ScalarDB Analytics, so you cannot use JDBC to read data from ScalarDB Analytics in your Apache Spark environment. The JDBC application type is mentioned here because some managed Spark services provide other ways to interact with a Spark cluster via the JDBC interface. For more details, refer to [Supported application types](./deployment.mdx#supported-managed-spark-services-and-their-application-types).
+
+
+
+
+## Catalog information mapping
+
+ScalarDB Analytics manages its own catalog, containing data sources, namespaces, tables, and columns. That information is automatically mapped to the Spark catalog. In this section, you will learn how ScalarDB Analytics maps its catalog information to the Spark catalog.
+
+For details about how information in the raw data sources is mapped to the ScalarDB Analytics catalog, refer to [Catalog information mappings by data source](./design.mdx#catalog-information-mappings-by-data-source).
+
+### Catalog level mapping
+
+Each catalog-level object in the ScalarDB Analytics catalog is mapped to a Spark catalog. The following sections describe how the catalog levels are mapped:
+
+#### Data source tables
+
+Tables from data sources in the ScalarDB Analytics catalog are mapped to Spark tables. The following format is used to represent the identity of the Spark tables that correspond to ScalarDB Analytics tables:
+
+```console
+...
+```
+
+The following describes what you should change the content in the angle brackets to:
+
+- ``: The name of the catalog.
+- ``: The name of the data source.
+- ``: The names of the namespaces. If the namespace names are multi-level, they are concatenated with a dot (`.`) as the separator.
+- ``: The name of the table.
+
+For example, if you have a ScalarDB catalog named `my_catalog` that contains a data source named `my_data_source` and a schema named `my_schema`, you can refer to the table named `my_table` in that schema as `my_catalog.my_data_source.my_schema.my_table`.
+
+#### Views
+
+Views in ScalarDB Analytics are provided as tables in the Spark catalog, not views. The following format is used to represent the identity of the Spark tables that correspond to ScalarDB Analytics views:
+
+```console
+.view..
+```
+
+The following describes what you should change the content in the angle brackets to:
+
+- ``: The name of the catalog.
+- ``: The names of the view namespaces. If the view namespace names are multi-level, they are concatenated with a dot (`.`) as the separator.
+- ``: The name of the view.
+
+For example, if you have a ScalarDB catalog named `my_catalog` and a view namespace named `my_view_namespace`, you can refer to the view named `my_view` in that namespace as `my_catalog.view.my_view_namespace.my_view`.
+
+:::note
+
+The `view` prefix is used to avoid conflicts with data source table identifiers.
+
+:::
+
+##### WAL-interpreted views
+
+As explained in [ScalarDB Analytics Design](./design.mdx), ScalarDB Analytics provides functionality called WAL-interpreted views, which are a special type of view. These views are automatically created for tables of ScalarDB data sources to provide a user-friendly view of the data by interpreting the WAL metadata in the tables.
+
+The data source name and the namespace names of the original ScalarDB tables are used as the view namespace names for WAL-interpreted views. For example, if you have a ScalarDB table named `my_table` in a namespace named `my_namespace` of a data source named `my_data_source`, you can refer to the WAL-interpreted view of that table as `my_catalog.view.my_data_source.my_namespace.my_table`.
+
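+For example, the following is a minimal sketch of querying such a WAL-interpreted view from the `spark-sql` CLI mentioned earlier (the catalog, data source, namespace, and table names are only illustrative):
+
+```console
+spark-sql> SELECT * FROM my_catalog.view.my_data_source.my_namespace.my_table LIMIT 10;
+```
+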
+### Data-type mapping
+
+ScalarDB Analytics maps data types in its catalog to Spark data types. The following table shows how the data types are mapped:
+
+| ScalarDB Data Type | Spark Data Type |
+| :----------------- | :----------------- |
+| `BYTE` | `Byte` |
+| `SMALLINT` | `Short` |
+| `INT` | `Integer` |
+| `BIGINT` | `Long` |
+| `FLOAT` | `Float` |
+| `DOUBLE` | `Double` |
+| `DECIMAL` | `Decimal` |
+| `TEXT` | `String` |
+| `BLOB` | `Binary` |
+| `BOOLEAN` | `Boolean` |
+| `DATE` | `Date` |
+| `TIME` | `TimestampNTZ` |
+| `TIMESTAMP` | `TimestampNTZ` |
+| `TIMESTAMPTZ` | `Timestamp` |
+| `DURATION` | `CalendarInterval` |
+| `INTERVAL` | `CalendarInterval` |
+
+## Version compatibility
+
+Since different minor versions of Spark and Scala may be incompatible with each other, ScalarDB Analytics offers separate artifacts for the various Spark and Scala versions, named in the format `scalardb-analytics-spark-all-_`. Make sure that you select the artifact matching the Spark and Scala versions you're using. For example, if you're using Spark 3.5 with Scala 2.13, you must specify `scalardb-analytics-spark-all-3.5_2.13`.
+
+Regarding the Java version, ScalarDB Analytics supports Java 8 or later.
+
+The following is a list of the Spark and Scala versions supported by each version of ScalarDB Analytics.
+
+| ScalarDB Analytics Version | ScalarDB Version | Spark Versions Supported | Scala Versions Supported | Minimum Java Version |
+|:---------------------------|:-----------------|:-------------------------|:-------------------------|:---------------------|
+| 3.16 | 3.16 | 3.5, 3.4 | 2.13, 2.12 | 8 |
+| 3.15 | 3.15 | 3.5, 3.4 | 2.13, 2.12 | 8 |
diff --git a/versioned_docs/version-3.X/scalardb-benchmarks/README.mdx b/versioned_docs/version-3.X/scalardb-benchmarks/README.mdx
new file mode 100644
index 00000000..010d3e9d
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-benchmarks/README.mdx
@@ -0,0 +1,236 @@
+---
+tags:
+ - Community
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Benchmarking Tools
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This tutorial describes how to run benchmarking tools for ScalarDB. Database benchmarking is helpful for evaluating how databases perform against a set of standards.
+
+## Benchmark workloads
+
+- TPC-C
+- YCSB (Workloads A, C, and F)
+- Multi-storage YCSB (Workloads C and F)
+ - This YCSB variant is for a multi-storage environment that uses ScalarDB.
+ - Workers in a multi-storage YCSB execute the same number of read and write operations in two namespaces: `ycsb_primary` and `ycsb_secondary`.
+
+## Prerequisites
+
+- One of the following Java Development Kits (JDKs):
+ - [Oracle JDK](https://www.oracle.com/java/technologies/downloads/) LTS version 8
+ - [OpenJDK](https://openjdk.org/install/) LTS version 8
+- Gradle
+- [Kelpie](https://github.com/scalar-labs/kelpie)
+ - Kelpie is a framework for performing end-to-end testing, such as system benchmarking and verification. Get the latest version from [Kelpie Releases](https://github.com/scalar-labs/kelpie), and unzip the archive file.
+- A client to run the benchmarking tools
+- A target database
+ - For a list of databases that ScalarDB supports, see [Databases](../requirements.mdx#databases).
+
+:::note
+
+Currently, only JDK 8 can be used when running the benchmarking tools.
+
+:::
+
+## Set up the benchmarking tools
+
+The following sections describe how to set up the benchmarking tools.
+
+### Clone the ScalarDB benchmarks repository
+
+Open **Terminal**, then clone the ScalarDB benchmarks repository by running the following command:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-benchmarks
+```
+
+Then, go to the directory that contains the benchmarking files by running the following command:
+
+```console
+cd scalardb-benchmarks
+```
+
+### Build the tools
+
+To build the benchmarking tools, run the following command:
+
+```console
+./gradlew shadowJar
+```
+
+### Load the schema
+
+Before loading the initial data, the tables must be defined by using the [ScalarDB Schema Loader](../schema-loader.mdx). You can download the ScalarDB Schema Loader on the [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases) page. Select the Schema Loader based on how you access ScalarDB:
+- **Using the ScalarDB Core library (Community edition)?:** Choose `scalardb-schema-loader-.jar` for the version of ScalarDB that you're using. Then, save the `.jar` file in the `scalardb-benchmarks` root directory.
+- **Using ScalarDB Cluster (Enterprise edition)?:** Choose `scalardb-cluster-schema-loader--all.jar` for the version of ScalarDB Cluster that you're using. Then, save the `.jar` file in the `scalardb-benchmarks` root directory.
+
+In addition, you need a properties file for accessing ScalarDB via the Java CRUD interface. For details about configuring the ScalarDB properties file, see [ScalarDB Configurations](../configurations.mdx) or [ScalarDB Cluster Client Configurations](../scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx#client-configurations).
+
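+The following is a minimal sketch of such a properties file for a Cassandra backend, using the same example connection settings that appear in the benchmark configuration example later in this guide (adjust the values for your own database):
+
+```properties
+# Storage implementation and connection settings (example values for a local Cassandra instance)
+scalar.db.storage=cassandra
+scalar.db.contact_points=localhost
+scalar.db.contact_port=9042
+scalar.db.username=cassandra
+scalar.db.password=cassandra
+```
+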
+After applying the schema and configuring the properties file, select a benchmark and follow the instructions to create the tables.
+
+
+
+ To create tables for TPC-C benchmarking ([`tpcc-schema.json`](https://github.com/scalar-labs/scalardb-benchmarks/blob/master/tpcc-schema.json)), run the following command, replacing the contents in the angle brackets as described:
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config -f tpcc-schema.json --coordinator
+ ```
+
+ If you are using ScalarDB Cluster, run the following command instead:
+
+ ```console
+ java -jar scalardb-cluster-schema-loader--all.jar --config -f tpcc-schema.json --coordinator
+ ```
+
+
+ To create tables for YCSB benchmarking ([`ycsb-schema.json`](https://github.com/scalar-labs/scalardb-benchmarks/blob/master/ycsb-schema.json)), run the following command, replacing the contents in the angle brackets as described:
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config -f ycsb-schema.json --coordinator
+ ```
+
+ If you are using ScalarDB Cluster, run the following command instead:
+
+ ```console
+ java -jar scalardb-cluster-schema-loader--all.jar --config -f ycsb-schema.json --coordinator
+ ```
+
+
+ To create tables for multi-storage YCSB benchmarking ([`ycsb-multi-storage-schema.json`](https://github.com/scalar-labs/scalardb-benchmarks/blob/master/ycsb-multi-storage-schema.json)), run the following command, replacing the contents in the angle brackets as described:
+
+ ```console
+ java -jar scalardb-schema-loader-.jar --config -f ycsb-multi-storage-schema.json --coordinator
+ ```
+
+ If you are using ScalarDB Cluster, run the following command instead:
+
+ ```console
+ java -jar scalardb-cluster-schema-loader--all.jar --config -f ycsb-multi-storage-schema.json --coordinator
+ ```
+
+
+
+### Prepare a benchmarking configuration file
+
+To run a benchmark, you must first prepare a benchmarking configuration file. The configuration file requires at least the locations of the workload modules to run and the database configuration.
+
+The following is an example configuration for running the TPC-C benchmark. The ScalarDB properties file specified for `config_file` should be the properties file that you created as one of the steps in [Load the schema](#load-the-schema).
+
+:::note
+
+Alternatively, instead of using the ScalarDB properties file, you can specify each database configuration item in the `.toml` file. If `config_file` is specified, all other configurations under `[database_config]` will be ignored even if they are uncommented.
+
+:::
+
+```toml
+[modules]
+[modules.preprocessor]
+name = "com.scalar.db.benchmarks.tpcc.TpccLoader"
+path = "./build/libs/scalardb-benchmarks-all.jar"
+[modules.processor]
+name = "com.scalar.db.benchmarks.tpcc.TpccBench"
+path = "./build/libs/scalardb-benchmarks-all.jar"
+[modules.postprocessor]
+name = "com.scalar.db.benchmarks.tpcc.TpccReporter"
+path = "./build/libs/scalardb-benchmarks-all.jar"
+
+[database_config]
+config_file = ""
+#contact_points = "localhost"
+#contact_port = 9042
+#username = "cassandra"
+#password = "cassandra"
+#storage = "cassandra"
+```
+
+You can define parameters to pass to modules in the configuration file. For details, see the sample configuration files below and available parameters in [Common parameters](#common-parameters):
+
+- **TPC-C:** [`tpcc-benchmark-config.toml`](https://github.com/scalar-labs/scalardb-benchmarks/blob/master/tpcc-benchmark-config.toml)
+- **YCSB:** [`ycsb-benchmark-config.toml`](https://github.com/scalar-labs/scalardb-benchmarks/blob/master/ycsb-benchmark-config.toml)
+- **Multi-storage YCSB:** [`ycsb-multi-storage-benchmark-config.toml`](https://github.com/scalar-labs/scalardb-benchmarks/blob/master/ycsb-multi-storage-benchmark-config.toml)
+
+## Run a benchmark
+
+Select a benchmark, and follow the instructions to run the benchmark.
+
+
+
+ To run the TPC-C benchmark, run the following command, replacing `` with the path to the Kelpie directory:
+
+ ```console
+ //bin/kelpie --config tpcc-benchmark-config.toml
+ ```
+
+
+ To run the YCSB benchmark, run the following command, replacing `` with the path to the Kelpie directory:
+
+ ```console
+ //bin/kelpie --config ycsb-benchmark-config.toml
+ ```
+
+
+ To run the multi-storage YCSB benchmark, run the following command, replacing `` with the path to the Kelpie directory:
+
+ ```console
+ //bin/kelpie --config ycsb-multi-storage-benchmark-config.toml
+ ```
+
+
+
+In addition, the following options are available:
+
+- `--only-pre`. Only loads the data.
+- `--only-process`. Only runs the benchmark.
+- `--except-pre`. Runs a job without loading the data.
+- `--except-process`. Runs a job without running the benchmark.
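+
+For example, the following is a minimal sketch of loading the data and running the benchmark as separate steps, assuming the TPC-C configuration file and a Kelpie installation under `/path/to/kelpie`:
+
+```console
+# Load the initial data only.
+/path/to/kelpie/bin/kelpie --config tpcc-benchmark-config.toml --only-pre
+
+# Run the benchmark against the previously loaded data.
+/path/to/kelpie/bin/kelpie --config tpcc-benchmark-config.toml --only-process
+```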
+
+## Common parameters
+
+| Name | Description | Default |
+|:---------------|:--------------------------------------------------------|:----------|
+| `concurrency` | Number of threads for benchmarking. | `1` |
+| `run_for_sec` | Duration of benchmark (in seconds). | `60` |
+| `ramp_for_sec` | Duration of ramp-up time before benchmark (in seconds). | `0` |
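+
+For example, the following is a minimal sketch of setting these parameters, assuming they are placed in a `[common]` section of the benchmarking configuration file as in the sample configuration files linked above:
+
+```toml
+[common]
+concurrency = 4    # Number of benchmarking threads
+run_for_sec = 300  # Run the benchmark for five minutes
+ramp_for_sec = 60  # Ramp up for one minute before measuring
+```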
+
+## Workload-specific parameters
+
+Select a benchmark to see its available workload parameters.
+
+
+
+ | Name | Description | Default |
+ |:-----------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------|
+ | `num_warehouses` | Number of warehouses (scale factor) for benchmarking. | `1` |
+ | `load_concurrency` | Number of threads for loading. | `1` |
+ | `load_start_warehouse` | Start ID of loading warehouse. This option can be useful with `--skip-item-load` when loading large-scale data with multiple clients or adding additional warehouses. | `1` |
+ | `load_end_warehouse` | End ID of loading warehouse. You can use either `--num-warehouses` or `--end-warehouse` to specify the number of loading warehouses. | `1` |
+ | `skip_item_load` | Whether or not to skip loading item table. | `false` |
+ | `use_table_index` | Whether or not to use a generic table-based secondary index instead of ScalarDB's secondary index. | `false` |
+ | `np_only` | Run benchmark with only new-order and payment transactions (50% each). | `false` |
+ | `rate_new_order` | Percentage of new-order transactions. When specifying this percentage based on your needs, you must specify the percentages for all other rate parameters. In that case, the total of all rate parameters must equal 100 percent. | N/A |
+ | `rate_payment` | Percentage of payment transactions. When specifying this percentage based on your needs, you must specify the percentages for all other rate parameters. In that case, the total of all rate parameters must equal 100 percent. | N/A |
+ | `rate_order_status` | Percentage of order-status transactions. When specifying this percentage based on your needs, you must specify the percentages for all other rate parameters. In that case, the total of all rate parameters must equal 100 percent. | N/A |
+ | `rate_delivery` | Percentage of delivery transactions. When specifying this percentage based on your needs, you must specify the percentages for all other rate parameters. In that case, the total of all rate parameters must equal 100 percent. | N/A |
+ | `rate_stock_level` | Percentage of stock-level transactions. When specifying this percentage based on your needs, you must specify the percentages for all other rate parameters. In that case, the total of all rate parameters must equal 100 percent. | N/A |
+ | `backoff` | Sleep time in milliseconds inserted after a transaction is aborted due to a conflict. | `0` |
+
+
+ | Name | Description | Default |
+ |:------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------|
+ | `load_concurrency` | Number of threads for loading. | `1` |
+ | `load_batch_size` | Number of put records in a single loading transaction. | `1` |
+ | `load_overwrite` | Whether or not to overwrite when loading records. | `false` |
+ | `ops_per_tx` | Number of operations in a single transaction. | `2` (Workloads A and C), `1` (Workload F) |
+ | `record_count` | Number of records in the target table. | `1000` |
+ | `use_read_modify_write` | Whether or not to use read-modify-writes instead of blind writes in Workload A. | `false`[^rmw] |
+
+ [^rmw]: The default value is `false` for `use_read_modify_write` since Workload A doesn't assume that the transaction reads the original record first. However, if you're using Consensus Commit as the transaction manager, you must set `use_read_modify_write` to `true`. This is because ScalarDB doesn't allow a blind write for an existing record.
+
+
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/common-reference.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/common-reference.mdx
new file mode 100644
index 00000000..a71540f7
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/common-reference.mdx
@@ -0,0 +1,194 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Cluster .NET Client SDK Reference
+
+This reference provides details on how the ScalarDB Cluster .NET Client SDK works.
+
+## Client configuration
+
+The client can be configured by using the following:
+
+- A settings file, like `appsettings.json` or a custom JSON file
+- Environment variables
+- The `ScalarDbOptions` object
+
+If you use the SDK with ASP.NET Core, you can configure an app in more ways. For details, see [Configuration in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-8.0).
+
+For a list of options that you can configure, see [Available options](common-reference.mdx#available-options).
+
+### Using a settings file
+
+The SDK supports both the standard `appsettings.json` and custom JSON setting files. To configure the client in a JSON file, add the `ScalarDbOptions` section and configure the options that you need. For example:
+
+```json
+{
+ "ScalarDbOptions": {
+ "Address": "http://localhost:60053",
+ "HopLimit": 10
+ }
+}
+```
+
+Then, create a configured `TransactionFactory` object as follows:
+
+```c#
+// If appsettings.json is used, call the Create() method without parameters.
+var factory = TransactionFactory.Create();
+
+// Or, if a custom file is used, call the Create() method that is passed in the path to the custom file as a parameter.
+factory = TransactionFactory.Create("scalardb-options.json");
+```
+
+If you use the SDK with ASP.NET Core, the settings from `appsettings.json` will be applied automatically when the registered transaction managers and/or `ScalarDbContext` are created. If you want to use a custom JSON file, add it to the configuration framework as follows:
+
+```c#
+var builder = WebApplication.CreateBuilder(args);
+
+// ...
+
+builder.Configuration.AddJsonFile("scalardb-options.json");
+```
+
+:::warning
+
+Because the custom JSON file is applied after all standard configuration providers, the values from the custom file will override values from other sources.
+
+:::
+
+### Using environment variables
+
+To configure the client to use environment variables, you can use the prefix `ScalarDbOptions__`. For example:
+
+```console
+export ScalarDbOptions__Address="http://localhost:60053"
+export ScalarDbOptions__HopLimit=10
+```
+
+:::warning
+
+Values from environment variables will override values from settings files.
+
+:::
+
+### Using the `ScalarDbOptions` object
+
+You can configure the client at runtime by using the `ScalarDbOptions` object as follows:
+
+```c#
+var options = new ScalarDbOptions()
+{
+ Address = "http://localhost:60053",
+ HopLimit = 10
+};
+
+var factory = TransactionFactory.Create(options);
+```
+
+You can also initialize the `ScalarDbOptions` object with values from JSON files and/or environment variables, and then set any remaining values at runtime as follows:
+
+```c#
+// If appsettings.json is used, call the Load() method without parameters.
+var options = ScalarDbOptions.Load();
+
+// Or, if a custom file is used, call the Load() method that is passed in the path to the custom file as a parameter.
+options = ScalarDbOptions.Load("scalardb-options.json");
+
+options.HopLimit = 10;
+
+var factory = TransactionFactory.Create(options);
+```
+
+If you use the SDK with ASP.NET Core, a lambda function of `AddScalarDb` and/or `AddScalarDbContext` can be used as follows:
+
+```c#
+var builder = WebApplication.CreateBuilder(args);
+
+//...
+
+builder.Services.AddScalarDb(options =>
+{
+ options.Address = "http://localhost:60053";
+ options.HopLimit = 10;
+});
+
+builder.Services.AddScalarDbContext(options =>
+{
+ options.Address = "http://localhost:60053";
+ options.HopLimit = 10;
+});
+```
+
+By using this configuration, the `ScalarDbOptions` object that is passed to the lambda function (named `options` in the example above) is initialized with values from the JSON files, environment variables, and other sources.
+
+### Available options
+
+The following options are available:
+
+| Name | Description | Default |
+|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|
+| `Address` | **Required:** Address of the cluster in the following format: `://:`. ``: `https` if wire encryption (TLS) is enabled; `http` otherwise. ``: The FQDN or the IP address of the cluster. ``: The port number (`60053` by default) of the cluster. | - |
+| `HopLimit` | Number of hops for a request to the cluster. The purpose of `HopLimit` is to prevent infinite loops within the cluster. Each time a request is forwarded to another cluster node, `HopLimit` decreases by one. If `HopLimit` reaches zero, the request will be rejected. | `3` |
+| `RetryCount` | How many times a client can try to connect to the cluster if it's unavailable. | `10` |
+| `AuthEnabled` | Whether authentication and authorization are enabled. | `false` |
+| `Username` | Username for authentication and authorization. | |
+| `Password` | Password for authentication. If this isn't set, authentication is conducted without a password. | |
+| `AuthTokenExpirationTime` | Time after which the authentication token should be refreshed. If the time set for `AuthTokenExpirationTime` is greater than the expiration time on the cluster, the authentication token will be refreshed when an authentication error is received. If the authentication token is successfully refreshed, the authentication error won't be propagated to the client code. Instead, the operation that has failed with the authentication error will be retried automatically. If more than one operation is running in parallel, all these operations will fail once with the authentication error before the authentication token is refreshed. | `00:00:00` (The authentication token expiration time received from the cluster is used.) |
+| `TlsRootCertPem` | Custom CA root certificate (PEM data) for TLS communication. | |
+| `TlsRootCertPath` | File path to the custom CA root certificate for TLS communication. | |
+| `TlsOverrideAuthority` | Custom authority for TLS communication. This doesn't change what host is actually connected. This is mainly intended for testing. For example, you can specify the hostname presented in the cluster's certificate (the `scalar.db.cluster.node.tls.cert_chain_path` parameter of the cluster). If there's more than one hostname in the cluster's certificate, only the first hostname will be checked. | |
+| `LogSensitiveData` | If set to `true`, information like username, password, and authentication token will be logged as is without masking when logging gRPC requests and responses. | `false` |
+| `GrpcRequestTimeout` | Timeout for gRPC requests. Internally, the timeout's value is used to calculate and set a deadline for each gRPC request to the cluster. If the set deadline is exceeded, the request is cancelled and `DeadlineExceededException` is thrown. If the timeout is set to `0`, no deadline will be set. | `00:01:00` |
+| `GrpcMaxReceiveMessageSize` | The maximum message size in bytes that can be received by the client. When set to `0`, the message size is unlimited. | `4 MB` |
+| `GrpcMaxSendMessageSize` | The maximum message size in bytes that can be sent from the client. When set to `0`, the message size is unlimited. | `0` (Unlimited) |
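+
+For example, the following is a sketch of an `appsettings.json` fragment that combines several of these options (the values are only illustrative):
+
+```json
+{
+  "ScalarDbOptions": {
+    "Address": "https://scalardb-cluster.example.com:60053",
+    "HopLimit": 10,
+    "AuthEnabled": true,
+    "Username": "admin",
+    "Password": "admin-password",
+    "GrpcRequestTimeout": "00:00:30"
+  }
+}
+```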
+
+## How ScalarDB column types are converted to and from .NET types
+
+When using [LINQ](getting-started-with-linq.mdx#set-up-classes) or extension methods for the [Transactional API](getting-started-with-scalardb-tables-as-csharp-classes.mdx#create-classes-for-all-scalardb-tables), [SQL API](getting-started-with-distributed-sql-transactions.mdx#execute-sql-queries), or [Administrative API](getting-started-with-scalardb-tables-as-csharp-classes.mdx#use-the-administrative-api), a column's value received from the cluster is automatically converted to a corresponding .NET type. Likewise, a value of a .NET property is automatically converted to a corresponding cluster's type when an object is being saved to the cluster.
+
+In the following table, you can find how types are converted:
+
+| ScalarDB type | .NET type | C# alias |
+|---------------|----------------------------|----------|
+| TEXT | System.String | string |
+| INT | System.Int32 | int |
+| BIGINT | System.Int64 | long |
+| FLOAT | System.Single | float |
+| DOUBLE | System.Double | double |
+| BOOLEAN | System.Boolean | bool |
+| BLOB | Google.Protobuf.ByteString | |
+| DATE | NodaTime.LocalDate | |
+| TIME | NodaTime.LocalTime | |
+| TIMESTAMP | NodaTime.LocalDateTime | |
+| TIMESTAMPTZ | NodaTime.Instant | |
+
+:::note
+
+The ScalarDB Cluster .NET Client SDK uses [Google.Protobuf](https://www.nuget.org/packages/Google.Protobuf) for the `BLOB` type and [NodaTime](https://www.nuget.org/packages/NodaTime) for time-related types.
+
+:::
+
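+For example, the following is a minimal sketch of constructing values for `BLOB` and `DATE` columns from plain .NET data (the variable names are only illustrative):
+
+```c#
+using Google.Protobuf;
+using NodaTime;
+
+// ...
+
+// Wrap a byte array for a BLOB column.
+var blobValue = ByteString.CopyFrom(new byte[] { 0x01, 0x02, 0x03 });
+
+// Create a date for a DATE column.
+var dateValue = new LocalDate(2024, 1, 1);
+```
+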
+:::warning
+
+The precision of time-related types in .NET is greater than that supported by ScalarDB. Therefore, you should be careful when saving time-related values received from external sources. The ScalarDB Cluster .NET Client SDK includes `WithScalarDbPrecision` extension methods that you can use to lower the precision of time-related values in the following manner:
+
+```c#
+using ScalarDB.Client.Extensions;
+
+// ...
+
+var updatedAt = Instant.FromDateTimeUtc(DateTime.UtcNow)
+ .WithScalarDbPrecision();
+
+// using NodaTime to get current instant
+updatedAt = clockInstance.GetCurrentInstant()
+ .WithScalarDbPrecision();
+```
+
+For details about value ranges and precision in ScalarDB, see [Data-type mapping between ScalarDB and other databases](../schema-loader.mdx#data-type-mapping-between-scalardb-and-other-databases).
+
+:::
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx
new file mode 100644
index 00000000..1767360f
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx
@@ -0,0 +1,175 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Exception Handling in the ScalarDB Cluster .NET Client SDK
+
+When executing a transaction, you will also need to handle exceptions properly.
+
+:::warning
+
+If you don't handle exceptions properly, you may face anomalies or data inconsistency.
+
+:::
+
+:::note
+
+The Transactional API is used in this example, but exceptions can be handled in the same way when using the SQL API or `ScalarDbContext`.
+
+:::
+
+The following sample code shows how to handle exceptions:
+
+```c#
+using System.ComponentModel.DataAnnotations.Schema;
+using ScalarDB.Client;
+using ScalarDB.Client.DataAnnotations;
+using ScalarDB.Client.Exceptions;
+using ScalarDB.Client.Extensions;
+
+var options = new ScalarDbOptions { Address = "http://:"};
+
+var factory = TransactionFactory.Create(options);
+using var manager = factory.GetTransactionManager();
+
+var retryCount = 0;
+TransactionException? lastException = null;
+
+while (true)
+{
+ if (retryCount++ > 0)
+ {
+ // Retry the transaction three times maximum in this sample code
+ if (retryCount > 3)
+ // Throw the last exception if the number of retries exceeds the maximum
+ throw lastException!;
+
+ // Sleep 100 milliseconds before retrying the transaction in this sample code
+ await Task.Delay(100);
+ }
+
+ // Begin a transaction
+ var tran = await manager.BeginAsync();
+ try
+ {
+ // Execute CRUD operations in the transaction
+        var getKeys = new Dictionary<string, object> { { nameof(Item.Id), 1 } };
+        var result = await tran.GetAsync<Item>(getKeys);
+
+        var scanKeys = new Dictionary<string, object> { { nameof(Item.Id), 1 } };
+        await foreach (var item in tran.ScanAsync<Item>(scanKeys, null))
+ Console.WriteLine($"{item.Id}, {item.Name}, {item.Price}");
+
+ await tran.InsertAsync(new Item { Id = 1, Name = "Watermelon", Price = 4500 });
+ await tran.DeleteAsync(new Item { Id = 1 });
+
+ // Commit the transaction
+ await tran.CommitAsync();
+
+ return;
+ }
+ catch (UnsatisfiedConditionException)
+ {
+ // You need to handle `UnsatisfiedConditionException` only if a mutation operation specifies
+ // a condition. This exception indicates the condition for the mutation operation is not met.
+        // InsertAsync/UpdateAsync implicitly set the IfNotExists/IfExists condition.
+
+ try
+ {
+ await tran.RollbackAsync();
+ }
+ catch (TransactionException ex)
+ {
+ // Rolling back the transaction failed. As the transaction should eventually recover, you
+ // don't need to do anything further. You can simply log the occurrence here
+ Console.WriteLine($"Rollback error: {ex}");
+ }
+
+ // You can handle the exception here, according to your application requirements
+
+ return;
+ }
+ catch (UnknownTransactionStatusException)
+ {
+ // If you catch `UnknownTransactionStatusException` when committing the transaction, it
+ // indicates that the status of the transaction, whether it has succeeded or not, is
+ // unknown. In such a case, you need to check if the transaction is committed successfully
+ // or not and retry it if it failed. How to identify a transaction status is delegated to users
+ return;
+ }
+ catch (TransactionException ex)
+ {
+ // For other exceptions, you can try retrying the transaction.
+
+ // For `TransactionConflictException` and `TransactionNotFoundException`,
+ // you can basically retry the transaction. However, for the other exceptions,
+ // the transaction may still fail if the cause of the exception is nontransient.
+ // In such a case, you will exhaust the number of retries and throw the last exception
+
+ try
+ {
+ await tran.RollbackAsync();
+ }
+ catch (TransactionException e)
+ {
+ // Rolling back the transaction failed. As the transaction should eventually recover,
+ // you don't need to do anything further. You can simply log the occurrence here
+ Console.WriteLine($"Rollback error: {e}");
+ }
+
+ lastException = ex;
+ }
+}
+
+[Table("order_service.items")]
+public class Item
+{
+ [PartitionKey]
+ [Column("item_id", Order = 0)]
+ public int Id { get; set; }
+
+ [Column("name", Order = 1)]
+ public string Name { get; set; } = String.Empty;
+
+ [Column("price", Order = 2)]
+ public int Price { get; set; }
+}
+
+```
+
+:::note
+
+In the sample code, the transaction is retried a maximum of three times and sleeps for 100 milliseconds before it is retried. You can choose a retry policy, such as exponential backoff, according to your application requirements.
+
+:::
+
+### Exception details
+
+The table below shows transaction exceptions that can occur when communicating with the cluster:
+
+| Exception | Operations | Description |
+|-----------------------------------|--------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| AuthenticationErrorException | All | The authentication failed because of a wrong username and/or password when calling the cluster. |
+| AuthorizationErrorException | Put, Insert, Update, Delete, Mutate, Execute, Administrative | The authorization failed because of a lack of permissions. |
+| HopLimitExceededException | All | The hop limit was exceeded. This occurs when the routing information between cluster nodes is inconsistent. The error is usually resolved in a short amount of time, so you can retry the transaction from the beginning after some time has passed since encountering this error. |
+| IllegalArgumentException | All | The argument in the request message is invalid. |
+| IllegalStateException | All | The RPC was called in an invalid state. |
+| InternalErrorException | All | The operation failed due to transient or nontransient faults. You can try retrying the transaction from the beginning, but the transaction may still fail if the cause is nontransient. |
+| TransactionConflictException | All except Begin, Join, Rollback | A transaction conflict occurred. If you encounter this error, please retry the transaction from the beginning. |
+| TransactionNotFoundException | All except Begin, Join | The transaction associated with the specified transaction ID was not found. This error indicates that the transaction has expired or the routing information has been updated due to cluster topology changes. In this case, please retry the transaction from the beginning. |
+| UnavailableException | All | ScalarDB Cluster is unavailable even after trying to connect multiple times. |
+| UnknownTransactionStatusException | Commit | The status of the transaction is unknown (it is uncertain whether the transaction was successfully committed or not). In this situation, you need to check whether the transaction was successfully committed, and if not, to retry it. You are responsible for determining the transaction status. You may benefit from creating a transaction status table and updating it in conjunction with other application data. Doing so may help you determine the status of a transaction from the table itself. |
+| UnsatisfiedConditionException | Put, Insert, Update, Delete, Mutate | The mutation condition is not satisfied. |
+
+If you encounter an exception, you should roll back the transaction, except in the case of `Begin`. After rolling back the transaction, you can retry the transaction from the beginning for the exceptions that can be resolved by retrying.
+
+Besides the exceptions listed above, you may encounter exceptions thrown by the gRPC library. In such cases, you can check the `RpcException` property for more information.
+
+Also, `ScalarDbContext` will throw a `TransactionException` type exception in the following cases:
+
+- If `BeginTransaction` or `JoinTransaction` were called when there was already an active transaction
+- If `CommitTransaction` or `RollbackTransaction` were called without an active transaction
+- If `PrepareTransaction` or `ValidateTransaction` were called without an active two-phase commit transaction
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-admin-api.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-admin-api.mdx
new file mode 100644
index 00000000..c7a81560
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-admin-api.mdx
@@ -0,0 +1,128 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with the Administrative API in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports the Administrative API of ScalarDB Cluster. By using this API, you can manage ScalarDB Cluster from .NET applications.
+
+:::note
+
+Although we recommend using asynchronous methods as in the following examples, you can use synchronous methods instead.
+
+:::
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `.` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '..*'
+```
+
+## Create a settings file
+
+Create a `scalardb-options.json` file and add the following, replacing `` with the FQDN or the IP address, and `` with the port number (`60053` by default) of your cluster:
+
+```json
+{
+ "ScalarDbOptions": {
+ "Address": "http://:",
+ "HopLimit": 10
+ }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Get a transaction manager
+
+You need to get an object for interacting with the Administrative API. To get the object, you can use `TransactionFactory` as follows:
+
+```c#
+// Pass the path to the settings file created in the previous step.
+var factory = TransactionFactory.Create("scalardb-options.json");
+
+using var admin = factory.GetTransactionAdmin();
+```
+
+## Manage ScalarDB Cluster
+
+The following operations can be performed by using the ScalarDB Cluster .NET Client SDK.
+
+### Create a new namespace
+
+```c#
+await admin.CreateNamespaceAsync("ns", ifNotExists: true);
+```
+
+### Drop a namespace
+
+```c#
+await admin.DropNamespaceAsync("ns", ifExists: true);
+```
+
+### Check if a namespace exists
+
+```c#
+var namespaceExists = await admin.IsNamespacePresentAsync("ns");
+```
+
+### Create a new table
+
+```c#
+// ...
+using ScalarDB.Client.Builders.Admin;
+using ScalarDB.Client.Core;
+
+// ...
+
+var tableMetadata =
+ new TableMetadataBuilder()
+ .AddPartitionKey("pk", DataType.Int)
+ .AddClusteringKey("ck", DataType.Double)
+ .AddSecondaryIndex("index", DataType.Float)
+ .AddColumn("ordinary", DataType.Text)
+ .Build();
+
+await admin.CreateTableAsync("ns", "table_name", tableMetadata, ifNotExists: true);
+```
+
+### Drop a table
+
+```c#
+await admin.DropTableAsync("ns", "table_name", ifExists: true);
+```
+
+### Check if a table exists
+
+```c#
+var tableExists = await admin.IsTablePresentAsync("ns", "table_name");
+```
+
+### Get the names of existing tables
+
+```c#
+var tablesList = await admin.GetTableNamesAsync("ns");
+```
+
+### Create the Coordinator table
+
+```c#
+await admin.CreateCoordinatorTablesAsync();
+```
+
+### Drop the Coordinator table
+
+```c#
+await admin.DropCoordinatorTablesAsync();
+```
+
+### Check if the Coordinator table exists
+
+```c#
+var exists = await admin.AreCoordinatorTablesPresentAsync();
+```
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-aspnet-and-di.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-aspnet-and-di.mdx
new file mode 100644
index 00000000..b3188720
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-aspnet-and-di.mdx
@@ -0,0 +1,84 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with ASP.NET Core and Dependency Injection in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports dependency injection (DI) in frameworks like ASP.NET Core.
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `.` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '..*'
+```
+
+## Add client settings
+
+Add the `ScalarDbOptions` section to the `appsettings.json` file of your ASP.NET Core app, replacing `` with the FQDN or the IP address, and `` with the port number (`60053` by default) of your cluster:
+
+```json
+{
+ "ScalarDbOptions": {
+ "Address": "http://:",
+ "HopLimit": 10
+ }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Set up the transaction managers
+
+You can register the ScalarDB transaction managers in the DI container as follows:
+
+```c#
+using ScalarDB.Client.Extensions;
+
+//...
+
+var builder = WebApplication.CreateBuilder(args);
+
+//...
+
+builder.Services.AddScalarDb();
+```
+
+:::note
+
+The ScalarDB transaction managers will be registered as transient services. For details about service lifetimes, see [.NET dependency injection - Service lifetimes](https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection#service-lifetimes).
+
+:::
+
+After registering the transaction managers, they can be injected into the controller's constructor as follows:
+
+```c#
+[ApiController]
+public class OrderController: ControllerBase
+{
+ private readonly IDistributedTransactionManager _manager;
+ private readonly ISqlTransactionManager _sqlManager;
+ private readonly ITwoPhaseCommitTransactionManager _twoPhaseManager;
+ private readonly ISqlTwoPhaseCommitTransactionManager _sqlTwoPhaseManager;
+ private readonly IDistributedTransactionAdmin _admin;
+
+ public OrderController(IDistributedTransactionManager manager,
+ ISqlTransactionManager sqlManager,
+ ITwoPhaseCommitTransactionManager twoPhaseManager,
+ ISqlTwoPhaseCommitTransactionManager sqlTwoPhaseManager,
+ IDistributedTransactionAdmin admin)
+ {
+ _manager = manager;
+ _sqlManager = sqlManager;
+ _twoPhaseManager = twoPhaseManager;
+ _sqlTwoPhaseManager = sqlTwoPhaseManager;
+ _admin = admin;
+ }
+}
+```
+
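+As a minimal sketch, an injected transaction manager can then be used inside an action method as follows. The `Item` class and its key are only illustrative, and the exception handling described in [Exception Handling in the ScalarDB Cluster .NET Client SDK](exception-handling.mdx) is omitted for brevity:
+
+```c#
+[HttpGet("items/{id}")]
+public async Task<IActionResult> GetItemAsync(int id)
+{
+    // Begin a transaction, read a single record, and commit.
+    var tran = await _manager.BeginAsync();
+
+    var keys = new Dictionary<string, object> { { nameof(Item.Id), id } };
+    var item = await tran.GetAsync<Item>(keys);
+
+    await tran.CommitAsync();
+
+    return Ok(item);
+}
+```
+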
+Although these examples are for WebApi projects, they work in a similar way in GrpcService projects.
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-auth.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-auth.mdx
new file mode 100644
index 00000000..196a2ac1
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-auth.mdx
@@ -0,0 +1,67 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with Authentication and Authorization by Using ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports [authentication and authorization](../scalardb-cluster/scalardb-auth-with-sql.mdx), which allows you to authenticate and authorize your requests to ScalarDB Cluster.
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `.` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '..*'
+```
+
+## Set credentials in the settings file
+
+You need to set credentials in the settings file as follows, replacing the contents in the angle brackets as described:
+
+```json
+{
+ "ScalarDbOptions": {
+ "Address": "http://:",
+ "HopLimit": 10,
+ "AuthEnabled": true,
+ "Username": "",
+ "Password": ""
+ }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Get a transaction manager
+
+You need to get a transaction manager or transaction admin object by using `TransactionFactory` as follows. Be sure to replace `` with `GetTransactionManager()`, `GetTwoPhaseCommitTransactionManager()`, `GetSqlTransactionManager()`, or `GetSqlTwoPhaseCommitTransactionManager()`.
+
+```c#
+// Pass the path to the settings file.
+var factory = TransactionFactory.Create("scalardb-options.json");
+
+// To get a transaction manager
+using var manager = factory.();
+
+// To get a transaction admin
+using var admin = factory.GetTransactionAdmin();
+```
+
+A transaction manager or transaction admin object created from `TransactionFactory` with the provided credentials will automatically log in to ScalarDB Cluster and can communicate with it.
+
+## Wire encryption
+
+[Wire encryption](../scalardb-cluster/scalardb-auth-with-sql.mdx#wire-encryption) is also supported. It can be turned on by setting `Address` to the URL starting with `https` as follows:
+
+```json
+{
+ "ScalarDbOptions": {
+ "Address": "https://:"
+ }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx
new file mode 100644
index 00000000..628ea26d
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx
@@ -0,0 +1,192 @@
+---
+tags:
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with Distributed SQL Transactions in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports the distributed SQL transaction functionality of ScalarDB Cluster. The SDK includes transaction and manager abstractions for easier communication within a cluster.
+
+:::note
+
+Although we recommend using asynchronous methods, as in the following examples, you can use synchronous methods instead.
+
+:::
+
+For details about distributed non-SQL transactions, see [Getting Started with Distributed Transactions in the ScalarDB Cluster .NET Client SDK](getting-started-with-distributed-transactions.mdx).
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `.` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '..*'
+```
+
+## Create a settings file
+
+Create a `scalardb-options.json` file and add the following, replacing `` with the FQDN or the IP address, and `` with the port number (`60053` by default) of your cluster:
+
+```json
+{
+ "ScalarDbOptions": {
+ "Address": "http://:",
+ "HopLimit": 10
+ }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Get a transaction manager
+
+You need to get a transaction manager object for distributed SQL transactions. To get the transaction manager object, you can use `TransactionFactory` as follows:
+
+```c#
+// Pass the path to the settings file created in the previous step.
+var factory = TransactionFactory.Create("scalardb-options.json");
+
+using var manager = factory.GetSqlTransactionManager();
+```
+
+## Execute SQL queries
+
+To execute a SQL statement, you need an `ISqlStatement` object, which can be created by using a builder as follows:
+
+```c#
+using ScalarDB.Client.Builders.Sql;
+
+// ...
+
+var sqlStatement =
+ new SqlStatementBuilder()
+ .SetSql("SELECT * FROM order_service.statements WHERE item_id = :item_id")
+ .AddParam("item_id", 2)
+ .Build();
+```
+
+A single SQL statement can be executed directly by using the transaction manager as follows:
+
+```c#
+var resultSet = await manager.ExecuteAsync(sqlStatement);
+```
+
+The result from the `ExecuteAsync` method will contain records received from the cluster. The value of a specific column can be retrieved in the following manner:
+
+```c#
+foreach (var record in resultSet.Records)
+{
+    // Getting an integer value from the "item_id" column.
+    // If it fails, an exception will be thrown.
+    var itemId = record.GetValue<int>("item_id");
+
+    // Trying to get a string value from the "order_id" column.
+    // If it fails, no exception will be thrown.
+    if (record.TryGetValue<string>("order_id", out var orderId))
+        Console.WriteLine($"order_id: {orderId}");
+
+    // Checking if the "count" column is null.
+    if (record.IsNull("count"))
+        Console.WriteLine("'count' is null");
+}
+```
+
+For details about which type should be used in `GetValue` and `TryGetValue`, see [How ScalarDB Column Types Are Converted to and from .NET Types](common-reference.mdx#how-scalardb-column-types-are-converted-to-and-from-net-types).
+
+## Execute SQL queries in a transaction
+
+To execute multiple SQL statements as part of a single transaction, you need a transaction object.
+
+You can create a transaction object by using the transaction manager as follows:
+
+```c#
+var transaction = await manager.BeginAsync();
+```
+
+You can also resume a transaction that has already been started as follows:
+
+```c#
+var transaction = manager.Resume(transactionIdString);
+```
+
+:::note
+
+The `Resume` method doesn't have an asynchronous version because it only creates a transaction object. Because of this, resuming a transaction by using the wrong ID is possible.
+
+:::
+
+The transaction has the same `ExecuteAsync` method as the transaction manager. That method can be used to execute SQL statements.
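+
+For example, the statement built earlier can be executed as part of the transaction. The following is a minimal sketch that reuses the `sqlStatement` object from above:
+
+```c#
+// Execute the previously built SQL statement within the transaction.
+var resultSet = await transaction.ExecuteAsync(sqlStatement);
+```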
+
+When a transaction is ready to be committed, you can call the `CommitAsync` method of the transaction as follows:
+
+```c#
+await transaction.CommitAsync();
+```
+
+To roll back the transaction, you can use the `RollbackAsync` method:
+
+```c#
+await transaction.RollbackAsync();
+```
+
+## Get Metadata
+
+You can retrieve ScalarDB's metadata with the `Metadata` property as follows:
+
+```c#
+// namespaces, tables metadata
+var namespaceNames = new List<string>();
+
+await foreach (var ns in manager.Metadata.GetNamespacesAsync())
+{
+ namespaceNames.Add(ns.Name);
+ Console.WriteLine($"Namespace: {ns.Name}");
+
+ await foreach (var tbl in ns.GetTablesAsync())
+ {
+ Console.WriteLine($" Table: {tbl.Name}");
+
+ Console.WriteLine($" Columns:");
+ foreach (var col in tbl.Columns)
+ Console.WriteLine($" {col.Name} [{col.DataType}]");
+
+ Console.WriteLine($" PartitionKey:");
+ foreach (var col in tbl.PartitionKey)
+ Console.WriteLine($" {col.Name}");
+
+ Console.WriteLine($" ClusteringKey:");
+ foreach (var col in tbl.ClusteringKey)
+ Console.WriteLine($" {col.Name} [{col.ClusteringOrder}]");
+
+ Console.WriteLine($" Indexes:");
+ foreach (var index in tbl.Indexes)
+ Console.WriteLine($" {index.ColumnName}");
+
+ Console.WriteLine();
+ }
+}
+
+// users metadata
+await foreach (var user in manager.Metadata.GetUsersAsync())
+{
+ Console.WriteLine($"User: {user.Name} [IsSuperuser: {user.IsSuperuser}]");
+
+ foreach (var nsName in namespaceNames)
+ {
+ Console.WriteLine($" Namespace: {nsName}");
+
+ Console.WriteLine($" Privileges:");
+ foreach (var privilege in await user.GetPrivilegesAsync(nsName))
+ Console.WriteLine($" {privilege}");
+ }
+
+ Console.WriteLine();
+}
+```
+
+:::note
+
+To use LINQ methods with `IAsyncEnumerable<T>`, you can install the [System.Linq.Async](https://www.nuget.org/packages/System.Linq.Async/) package, as shown in the sketch after this note.
+
+:::
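+
+For example, the namespace names above could be collected with LINQ operators. The following is a minimal sketch that assumes the package has been added to the project:
+
+```c#
+using System.Linq;
+
+// ...
+
+// Collect the namespace names by applying LINQ operators to the IAsyncEnumerable returned by GetNamespacesAsync.
+var names = await manager.Metadata.GetNamespacesAsync()
+    .Select(ns => ns.Name)
+    .ToArrayAsync();
+```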
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-transactions.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-transactions.mdx
new file mode 100644
index 00000000..30582abc
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-transactions.mdx
@@ -0,0 +1,329 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with Distributed Transactions in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports the distributed transaction functionality of ScalarDB Cluster. The SDK includes transaction and manager abstractions for easier communication within a cluster.
+
+:::note
+
+Although we recommend using asynchronous methods as in the following examples, you can use synchronous versions instead.
+
+:::
+
+For details about distributed SQL transactions, see [Getting Started with Distributed SQL Transactions in the ScalarDB Cluster .NET Client SDK](getting-started-with-distributed-sql-transactions.mdx).
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `<MAJOR>.<MINOR>` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '<MAJOR>.<MINOR>.*'
+```
+
+## Create a settings file
+
+Create a `scalardb-options.json` file and add the following, replacing `<HOSTNAME_OR_IP_ADDRESS>` with the FQDN or the IP address, and `<PORT>` with the port number (`60053` by default) of your cluster:
+
+```json
+{
+    "ScalarDbOptions": {
+        "Address": "http://<HOSTNAME_OR_IP_ADDRESS>:<PORT>",
+        "HopLimit": 10
+    }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Get a transaction manager
+
+You need to get a transaction manager for distributed transactions. To get the transaction manager, you can use `TransactionFactory` as follows:
+
+```c#
+// Pass the path to the settings file created in the previous step.
+var factory = TransactionFactory.Create("scalardb-options.json");
+
+using var manager = factory.GetTransactionManager();
+```
+
+## Manage transactions
+
+To execute multiple CRUD operations as part of a single transaction, first, you need to begin a transaction. You can begin a transaction by using the transaction manager as follows:
+
+```c#
+var transaction = await manager.BeginAsync();
+```
+
+You can also resume a transaction that is already being executed as follows:
+
+```c#
+var transaction = manager.Resume(transactionIdString);
+```
+
+:::note
+
+The `Resume` method doesn't have an asynchronous version because it only creates a transaction object. Because of this, resuming a transaction by using the wrong ID is possible.
+
+:::
+
+When a transaction is ready to be committed, you can call the `CommitAsync` method of the transaction as follows:
+
+```c#
+await transaction.CommitAsync();
+```
+
+To roll back the transaction, you can use the `RollbackAsync` method:
+
+```c#
+await transaction.RollbackAsync();
+```
+
+## Execute CRUD operations
+
+A transaction has `GetAsync`, `ScanAsync`, `InsertAsync`, `UpsertAsync`, `UpdateAsync`, `DeleteAsync`, and `MutateAsync` methods to execute CRUD operations against the cluster. As a parameter, these methods have an operation object. An operation object can be created by using the builders listed in this section.
+
+:::note
+
+CRUD operations can be executed as one-shot transactions without explicitly creating a transaction. To do this, use the manager object directly; it has the same CRUD methods as a transaction object (see the sketch after this note).
+
+:::
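+
+For example, a one-shot operation could look like the following minimal sketch, assuming a `get` operation object built with the builders described below:
+
+```c#
+// A minimal sketch: execute a single get operation without explicitly beginning a transaction.
+// The `get` object is assumed to be built with the builders described below.
+var result = await manager.GetAsync(get);
+```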
+
+To use builders, add the following namespace to the `using` section:
+
+```c#
+using ScalarDB.Client.Builders;
+```
+
+:::note
+
+The cluster does not support parallel execution of commands inside one transaction, so make sure to use `await` for asynchronous methods.
+
+:::
+
+### `GetAsync` method example
+
+To retrieve a single record, you can use the `GetAsync` method as follows:
+
+```c#
+var get =
+ new GetBuilder()
+ .SetNamespaceName("ns")
+ .SetTableName("statements")
+ .AddPartitionKey("order_id", "1")
+ .AddClusteringKey("item_id", 2)
+ .SetProjections("item_id", "count")
+ .Build();
+
+var getResult = await transaction.GetAsync(get);
+```
+
+It is possible to retrieve a record by using an index instead of a partition key. To do that, you need to set the type of operation to `GetWithIndex` as follows:
+
+```c#
+// ...
+using ScalarDB.Client.Core;
+
+// ...
+
+var get =
+ new GetBuilder()
+ // ...
+ .SetGetType(GetOperationType.GetWithIndex)
+ .AddPartitionKey("index_column", "1")
+ .Build();
+```
+
+You can also specify arbitrary conditions that a retrieved record must meet, or it won't be returned. The conditions can be set as conjunctions of conditions as follows:
+
+```c#
+var get =
+ new GetBuilder()
+ // ...
+ .AddConjunction(c => c.AddCondition("cost", 1000, Operator.LessThan))
+ .AddConjunction(c =>
+ {
+ c.AddCondition("cost", 10000, Operator.LessThan);
+ c.AddCondition("in_stock", true, Operator.Equal);
+ })
+ .Build();
+```
+
+In the above example, a record will be returned only if its `cost` is less than `1000`, or if its `cost` is less than `10000` and `in_stock` is true.
+
+#### Handle `IResult` objects
+
+The `GetAsync` and `ScanAsync` methods return `IResult` objects. An `IResult` object contains columns of the retrieved record. The value of a specific column can be retrieved in the following manner:
+
+```c#
+// Getting an integer value from the "item_id" column.
+// If it fails, an exception will be thrown.
+var itemId = result.GetValue<int>("item_id");
+
+// Trying to get a string value from the "order_id" column.
+// If it fails, no exception will be thrown.
+if (result.TryGetValue<string>("order_id", out var orderId))
+    Console.WriteLine($"order_id: {orderId}");
+
+// Checking if the "count" column is null.
+if (result.IsNull("count"))
+    Console.WriteLine("'count' is null");
+```
+
+For details about which type should be used in `GetValue` and `TryGetValue`, see [How ScalarDB Column Types Are Converted to and from .NET Types](common-reference.mdx#how-scalardb-column-types-are-converted-to-and-from-net-types).
+
+### `ScanAsync` method example
+
+To retrieve a range of records, you can use the `ScanAsync` method as follows:
+
+```c#
+var scan =
+ new ScanBuilder()
+ .SetNamespaceName("ns")
+ .SetTableName("statements")
+ .AddPartitionKey("order_id", "1")
+ .AddStartClusteringKey("item_id", 2)
+ .SetStartInclusive(true)
+ .AddEndClusteringKey("item_id", 8)
+ .SetEndInclusive(true)
+ .SetProjections("item_id", "count")
+ .Build();
+
+var scanResult = await transaction.ScanAsync(scan);
+```
+
+It is possible to retrieve a record by using an index instead of a partition key. To do that, you need to set the type of operation to `ScanWithIndex` as follows:
+
+```c#
+// ...
+using ScalarDB.Client.Core;
+
+// ...
+
+var scan =
+ new ScanBuilder()
+ // ...
+ .SetScanType(ScanOperationType.ScanWithIndex)
+ .AddPartitionKey("index_column", "1")
+ .Build();
+```
+
+The arbitrary conditions that a retrieved record must meet can also be set for a scan operation in the same way as for a [get operation](getting-started-with-distributed-transactions.mdx#getasync-method-example).
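+
+For example, the following minimal sketch mirrors the conjunction from the get example above:
+
+```c#
+// A minimal sketch: the same kind of condition applied to a scan operation.
+var scan =
+    new ScanBuilder()
+        // ...
+        .AddConjunction(c => c.AddCondition("cost", 1000, Operator.LessThan))
+        .Build();
+```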
+
+### `InsertAsync` method example
+
+To insert a new record, you can use the `InsertAsync` method as follows:
+
+```c#
+var insert =
+ new InsertBuilder()
+ .SetNamespaceName("ns")
+ .SetTableName("statements")
+ .AddPartitionKey("order_id", "1")
+ .AddClusteringKey("item_id", 2)
+ .AddColumn("count", 11)
+ .Build();
+
+await transaction.InsertAsync(insert);
+```
+
+### `UpsertAsync` method example
+
+To upsert a record (update an existing record or insert a new one), you can use the `UpsertAsync` method as follows:
+
+```c#
+var upsert =
+ new UpsertBuilder()
+ .SetNamespaceName("ns")
+ .SetTableName("statements")
+ .AddPartitionKey("order_id", "1")
+ .AddClusteringKey("item_id", 2)
+ .AddColumn("count", 11)
+ .Build();
+
+await transaction.UpsertAsync(upsert);
+```
+
+### `UpdateAsync` method example
+
+To update an existing record, you can use the `UpdateAsync` method as follows:
+
+```c#
+// ...
+using ScalarDB.Client.Core;
+
+// ...
+
+var update =
+ new UpdateBuilder()
+ .SetNamespaceName("ns")
+ .SetTableName("statements")
+ .AddPartitionKey("order_id", "1")
+ .AddClusteringKey("item_id", 2)
+ .AddColumn("count", 11)
+ .AddCondition("processed", false, Operator.Equal)
+ .Build();
+
+await transaction.UpdateAsync(update);
+```
+
+### `DeleteAsync` method example
+
+To delete a record, you can use the `DeleteAsync` method as follows:
+
+```c#
+// ...
+using ScalarDB.Client.Core;
+
+// ...
+
+var delete =
+ new DeleteBuilder()
+ .SetNamespaceName("ns")
+ .SetTableName("statements")
+ .AddPartitionKey("order_id", "1")
+ .AddClusteringKey("item_id", 2)
+ .AddCondition("processed", false, Operator.Equal)
+ .Build();
+
+await transaction.DeleteAsync(delete);
+```
+
+### `MutateAsync` method example
+
+The `MutateAsync` method allows you to execute more than one mutation operation in a single call to the cluster. You can do this in the following manner:
+
+```c#
+// ...
+using ScalarDB.Client.Core;
+
+// ...
+
+var mutations = new IMutation[]
+ {
+ new InsertBuilder()
+ // ...
+ .Build(),
+ new UpsertBuilder()
+ // ...
+ .Build(),
+ new UpdateBuilder()
+ // ...
+ .Build(),
+ new DeleteBuilder()
+ // ...
+ .Build()
+ };
+
+await transaction.MutateAsync(mutations);
+```
+
+:::note
+
+To modify data by using the `InsertAsync`, `UpsertAsync`, `UpdateAsync`, `DeleteAsync`, or `MutateAsync` method, the data must be retrieved first by using the `GetAsync` or `ScanAsync` method.
+
+:::
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-linq.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-linq.mdx
new file mode 100644
index 00000000..5acee088
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-linq.mdx
@@ -0,0 +1,369 @@
+---
+tags:
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with LINQ in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports querying the cluster with LINQ and some Entity Framework-like functionality.
+
+:::note
+
+This SDK doesn't support [Entity Framework](https://learn.microsoft.com/en-us/ef/). Instead, this SDK implements functionality that is similar to Entity Framework.
+
+:::
+
+:::note
+
+SQL support must be enabled on the cluster to use LINQ.
+
+:::
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `<MAJOR>.<MINOR>` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '<MAJOR>.<MINOR>.*'
+```
+
+## Add client settings
+
+Add the `ScalarDbOptions` section to the `appsettings.json` file of your ASP.NET Core app, replacing `<HOSTNAME_OR_IP_ADDRESS>` with the FQDN or the IP address, and `<PORT>` with the port number (`60053` by default) of your cluster:
+
+```json
+{
+    "ScalarDbOptions": {
+        "Address": "http://<HOSTNAME_OR_IP_ADDRESS>:<PORT>",
+        "HopLimit": 10
+    }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Set up classes
+
+After confirming that SQL support is enabled, create a C# class for each ScalarDB table that you want to use. For example:
+
+```c#
+using System.ComponentModel.DataAnnotations.Schema;
+using ScalarDB.Client.DataAnnotations;
+
+// ...
+
+[Table("ns.statements")]
+public class Statement
+{
+ [PartitionKey]
+ [Column("statement_id", Order = 0)]
+ public int Id { get; set; }
+
+ [SecondaryIndex]
+ [Column("order_id", Order = 1)]
+ public string OrderId { get; set; } = String.Empty;
+
+ [SecondaryIndex]
+ [Column("item_id", Order = 2)]
+ public int ItemId { get; set; }
+
+ [Column("count", Order = 3)]
+ public int Count { get; set; }
+}
+
+[Table("order_service.items")]
+public class Item
+{
+ [PartitionKey]
+ [Column("item_id", Order = 0)]
+ public int Id { get; set; }
+
+ [Column("name", Order = 1)]
+ public string Name { get; set; } = String.Empty;
+
+ [Column("price", Order = 2)]
+ public int Price { get; set; }
+}
+```
+
+If a partition key, clustering key, or secondary index consists of more than one column, the `Order` property of `ColumnAttribute` will decide the order inside the key or index.
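+
+For example, a hypothetical table (not part of the sample schema above) whose partition key consists of two columns could be mapped as follows:
+
+```c#
+// A hypothetical sketch: `Order` determines the position of each column inside the partition key.
+[Table("ns.order_items")]
+public class OrderItem
+{
+    [PartitionKey]
+    [Column("order_id", Order = 0)]   // First column of the partition key.
+    public string OrderId { get; set; } = String.Empty;
+
+    [PartitionKey]
+    [Column("item_id", Order = 1)]    // Second column of the partition key.
+    public int ItemId { get; set; }
+
+    [Column("count", Order = 2)]
+    public int Count { get; set; }
+}
+```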
+
+For details about which types should be used for properties, see [How ScalarDB Column Types Are Converted to and from .NET Types](common-reference.mdx#how-scalardb-column-types-are-converted-to-and-from-net-types).
+
+Create a context class that has properties for all the tables you want to use. For example:
+
+```c#
+public class MyDbContext: ScalarDbContext
+{
+    public ScalarDbSet<Statement> Statements { get; set; }
+    public ScalarDbSet<Item> Items { get; set; }
+}
+```
+
+After all the classes are created, you need to register the created context in the dependency injection container. For example:
+
+```c#
+using ScalarDB.Client.Extensions;
+
+//...
+
+var builder = WebApplication.CreateBuilder(args);
+
+//...
+
+builder.Services.AddScalarDbContext<MyDbContext>();
+```
+
+:::note
+
+The context class will be registered as a transient service. For details about service lifetimes, see [.NET dependency injection - Service lifetimes](https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection#service-lifetimes).
+
+:::
+
+The context can be injected into the controller's constructor as follows:
+
+```c#
+[ApiController]
+public class OrderController: ControllerBase
+{
+ private readonly MyDbContext _myDbContext;
+
+ public OrderController(MyDbContext myDbContext)
+ {
+ _myDbContext = myDbContext;
+ }
+}
+```
+
+## Use LINQ to query properties
+
+After receiving `MyDbContext` in your controller, you can query its properties by using LINQ. For example:
+
+### Use query syntax
+
+```c#
+from stat in _myDbContext.Statements
+join item in _myDbContext.Items on stat.ItemId equals item.Id
+where stat.Count > 2 && item.Name.Contains("apple")
+orderby stat.Count descending, stat.ItemId
+select new { item.Name, stat.Count };
+```
+
+### Use method syntax
+
+```c#
+_myDbContext.Statements
+ .Where(stat => stat.OrderId == "1")
+ .Skip(1)
+ .Take(2);
+```
+
+### Use the `First` method to get one `Statement` by its partition key
+
+```c#
+_myDbContext.Statements.First(stat => stat.OrderId == "1");
+```
+
+### Use the `DefaultIfEmpty` method to perform a left outer join
+
+```c#
+from stat in _myDbContext.Statements
+join item in _myDbContext.Items on stat.ItemId equals item.Id into items
+from i in items.DefaultIfEmpty()
+select new { ItemName = i != null ? i.Name : "" }
+```
+
+The following methods are supported:
+
+- `Select`
+- `Where`
+- `Join`
+- `GroupJoin`
+- `First`/`FirstOrDefault`
+- `Skip`
+- `Take`
+- `OrderBy`/`OrderByDescending`
+- `ThenBy`/`ThenByDescending`
+
+The following `String` methods are supported inside the predicates of `Where` and `First`/`FirstOrDefault` methods:
+
+- `Contains`
+- `StartsWith`
+- `EndsWith`
+
+Unsupported LINQ methods can be used after the supported methods. For example:
+
+```c#
+_myDbContext.Statements
+ .Where(stat => stat.OrderId == "1") // Will be executed remotely on the cluster.
+ .Distinct() // Will be executed locally in the app.
+ .Where(stat => stat.ItemId < 5); // Will be executed locally.
+```
+
+:::note
+
+If `Skip` is specified before `Take` or `First`/`FirstOrDefault`, the number that is passed to `Skip` will be added to the `LIMIT` number in the SQL query. By itself, `Skip` won't change the resulting SQL query.
+
+:::
+
+## Limitations when using LINQ against `ScalarDbSet{T}` objects
+
+- All method calls are supported inside `Select`. For example:
+
+```c#
+.Select(stat => convertToSomething(stat.ItemId))
+//...
+.Select(stat => stat.ItemId * getSomeNumber())
+```
+
+- Method calls, except for calls against the querying object, are also supported inside `Where` and `First`/`FirstOrDefault`. For example:
+
+```c#
+.Where(stat => stat.ItemId == getItemId()) // is OK
+//...
+.Where(stat => stat.ItemId.ToString() == "1") // is not supported
+```
+
+- All method calls are supported inside the result-selecting lambda of `Join` and `GroupJoin`. For example:
+
+```c#
+.Join(_myDbContext.Items,
+ stat => stat.ItemId,
+ item => item.Id,
+ (stat, item) => new { ItemName = convertToSomething(item.Name),
+ ItemCount = stat.Count.ToString() })
+```
+
+- Method calls are not supported inside the key-selecting lambdas of `Join` and `GroupJoin`.
+- Custom equality comparers are not supported. The `comparer` argument in `Join` and `GroupJoin` methods will be ignored if the argument has been passed.
+- More than one `from` clause in a single query is not supported, except when the `DefaultIfEmpty` method is used to perform a left outer join. Each subsequent `from` is treated as a separate query. For example:
+
+```c#
+var firstQuery = from stat in _myDbContext.Statements
+ where stat.Count > 2
+ select new { stat.Count };
+
+var secondQuery = from item in _myDbContext.Items
+ where item.Price > 6
+ select new { item.Name };
+
+var finalQuery = from first in firstQuery
+ from second in secondQuery
+ select new { first.Count, second.Name };
+
+// 1. firstQuery will be executed against the cluster.
+// 2. secondQuery will be executed against the cluster for each object (row) from 1.
+// 3. finalQuery will be executed locally with the results from 1 and 2.
+var result = finalQuery.ToArray();
+```
+
+- Method calls are not supported inside `OrderBy`/`OrderByDescending` or `ThenBy`/`ThenByDescending`.
+- Only overloads of `Contains`, `StartsWith`, and `EndsWith` methods that have a single string argument are supported inside `Where` and `First`/`FirstOrDefault`.
+
+## Modify data in a cluster by using `ScalarDbContext`
+
+The properties of the class inherited from `ScalarDbContext` can be used to modify data.
+
+### Add a new object by using the `AddAsync` method
+
+```c#
+var statement = new Statement
+ {
+ OrderId = "2",
+ ItemId = 4,
+ Count = 8
+ };
+await _myDbContext.Statements.AddAsync(statement);
+```
+
+### Update an object by using the `UpdateAsync` method
+
+```c#
+var statement = _myDbContext.Statements.First(stat => stat.Id == 1);
+
+// ...
+
+statement.Count = 10;
+await _myDbContext.Statements.UpdateAsync(statement);
+```
+
+### Remove an object by using the `RemoveAsync` method
+
+```c#
+var statement = _myDbContext.Statements.First(stat => stat.Id == 1);
+
+// ...
+
+await _myDbContext.Statements.RemoveAsync(statement);
+```
+
+## Manage transactions
+
+LINQ queries and `AddAsync`, `UpdateAsync`, and `RemoveAsync` methods can be executed without an explicitly started transaction. However, to execute multiple queries and methods as part of a single transaction, the transaction must be explicitly started and committed. `ScalarDbContext` supports both ordinary transactions and transactions with the two-phase commit interface in ScalarDB.
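+
+For example, the following minimal sketch wraps a query and an update in a single transaction by combining the methods shown in the rest of this section:
+
+```c#
+// A minimal sketch, assuming the _myDbContext and Statement class from the previous sections.
+await _myDbContext.BeginTransactionAsync();
+
+var statement = _myDbContext.Statements.First(stat => stat.Id == 1);
+statement.Count = 10;
+await _myDbContext.Statements.UpdateAsync(statement);
+
+await _myDbContext.CommitTransactionAsync();
+```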
+
+### Begin a new transaction
+
+```c#
+await _myDbContext.BeginTransactionAsync();
+```
+
+### Begin a new transaction with the two-phase commit interface
+
+```c#
+await _myDbContext.BeginTwoPhaseCommitTransactionAsync();
+```
+
+### Get the ID of a currently active transaction
+
+```c#
+var transactionId = _myDbContext.CurrentTransactionId;
+```
+
+### Join an existing transaction with the two-phase commit interface
+
+```c#
+await _myDbContext.JoinTwoPhaseCommitTransactionAsync(transactionId);
+```
+
+### Resume an existing transaction
+
+```c#
+_myDbContext.ResumeTransaction(transactionId);
+```
+
+### Resume an existing transaction with the two-phase commit interface
+
+```c#
+_myDbContext.ResumeTwoPhaseCommitTransaction(transactionId);
+```
+
+:::note
+
+The `ResumeTransaction`/`ResumeTwoPhaseCommitTransaction` methods don't have asynchronous versions because they only initialize the transaction data in the `ScalarDbContext` inheriting object without querying the cluster. Because of this, resuming a transaction by using the wrong ID is possible.
+
+:::
+
+### Commit a transaction (ordinary or two-phase commit)
+
+```c#
+await _myDbContext.CommitTransactionAsync();
+```
+
+### Roll back a transaction (ordinary or two-phase commit)
+
+```c#
+await _myDbContext.RollbackTransactionAsync();
+```
+
+### Prepare a transaction with the two-phase commit interface for the commit
+
+```c#
+await _myDbContext.PrepareTransactionAsync();
+```
+
+### Validate a transaction with the two-phase commit interface before the commit
+
+```c#
+await _myDbContext.ValidateTransactionAsync();
+```
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx
new file mode 100644
index 00000000..3b6357ab
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx
@@ -0,0 +1,204 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with Tables as C# Classes in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK helps you write code to access a cluster by abstracting ScalarDB tables as C# objects. After defining a class that represents a table in the cluster, you can ensure that a column name or its type won't be mixed up when querying the cluster. In addition, if a table's structure changes, you can apply the changes to the code by using the refactoring feature in your IDE.
+
+:::note
+
+Although we recommend using asynchronous methods, as in the following examples, you can use synchronous methods instead.
+
+:::
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `<MAJOR>.<MINOR>` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '<MAJOR>.<MINOR>.*'
+```
+
+## Create classes for all ScalarDB tables
+
+To work with ScalarDB tables as C# objects, you must create a class for each table that you want to use. For example:
+
+```c#
+using System.ComponentModel.DataAnnotations.Schema;
+using ScalarDB.Client.DataAnnotations;
+
+// ...
+
+[Table("ns.statements")]
+public class Statement
+{
+ [PartitionKey]
+ [Column("order_id", Order = 0)]
+ public string OrderId { get; set; } = String.Empty;
+
+ [ClusteringKey]
+ [Column("item_id", Order = 1)]
+ public int ItemId { get; set; }
+
+ [Column("count", Order = 2)]
+ public int Count { get; set; }
+}
+```
+
+For details about which types should be used for properties, see [How ScalarDB Column Types Are Converted to and from .NET Types](common-reference.mdx#how-scalardb-column-types-are-converted-to-and-from-net-types).
+
+## Execute CRUD operations
+
+After creating a class for each table, you can use the classes as objects by using the generic `GetAsync`, `ScanAsync`, `InsertAsync`, `UpdateAsync`, `DeleteAsync`, `UpsertAsync`, or `MutateAsync` method of `ITransactionCrudOperable`.
+
+To use these generic methods, add the following namespace to the `using` section:
+
+```c#
+using ScalarDB.Client.Extensions;
+```
+
+### Get one object by using the `GetAsync` method
+
+```c#
+var keys = new Dictionary<string, object>
+    {
+        { nameof(Statement.OrderId), "1" }
+    };
+var statement = await transaction.GetAsync<Statement>(keys);
+
+Console.WriteLine($"ItemId: {statement.ItemId}, Count: {statement.Count}");
+```
+
+### Get multiple objects by using the `ScanAsync` method
+
+```c#
+var startKeys = new Dictionary<string, object>
+    {
+        { nameof(Statement.OrderId), "1" },
+        { nameof(Statement.ItemId), 3 }
+    };
+var endKeys = new Dictionary<string, object>
+    {
+        { nameof(Statement.ItemId), 6 }
+    };
+
+await foreach (var s in transaction.ScanAsync<Statement>(startKeys, endKeys))
+    Console.WriteLine($"ItemId: {s.ItemId}, Count: {s.Count}");
+```
+
+:::note
+
+To use LINQ methods with `IAsyncEnumerable<T>`, you can install the [System.Linq.Async](https://www.nuget.org/packages/System.Linq.Async/) package.
+
+:::
+
+### Insert a new object by using the `InsertAsync` method
+
+```c#
+var statement = new Statement
+ {
+ OrderId = "2",
+ ItemId = 4,
+ Count = 8
+ };
+await transaction.InsertAsync(statement);
+```
+
+### Update an object by using the `UpdateAsync` method
+
+```c#
+// ...
+statement.ItemId = 4;
+statement.Count = 8;
+
+await transaction.UpdateAsync(statement);
+```
+
+### Delete an object by using the `DeleteAsync` method
+
+```c#
+// ...
+await transaction.DeleteAsync(statement);
+```
+
+### Upsert an object by using the `UpsertAsync` method
+
+```c#
+var statement = new Statement
+ {
+ OrderId = "2",
+ ItemId = 4,
+ Count = 8
+ };
+await transaction.UpsertAsync(statement);
+```
+
+### Upsert and delete multiple objects at once by using the `MutateAsync` method
+
+```c#
+var statement = new Statement
+ {
+ OrderId = "2",
+ ItemId = 4,
+ Count = 16
+ };
+
+// ...
+
+await transaction.MutateAsync(objectsToUpsert: new[] { statement },
+ objectsToDelete: new[] { statement2 });
+```
+
+:::note
+
+To modify objects by using the `UpdateAsync`, `DeleteAsync`, `UpsertAsync`, or `MutateAsync` method, the objects must be retrieved first by using the `GetAsync` or `ScanAsync` method, as shown in the sketch after this note.
+
+:::
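+
+For example, the following minimal sketch shows this retrieve-then-modify flow by using the `Statement` class defined earlier:
+
+```c#
+// A minimal sketch: retrieve a Statement first, then modify it within the same transaction.
+var keys = new Dictionary<string, object>
+    {
+        { nameof(Statement.OrderId), "1" },
+        { nameof(Statement.ItemId), 2 }
+    };
+var statement = await transaction.GetAsync<Statement>(keys);
+
+statement.Count = 10;
+await transaction.UpdateAsync(statement);
+```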
+
+## Use the Administrative API
+
+C# objects can also be used with the Administrative API. To use the generic Administrative API methods, add the following namespace to the `using` section:
+
+```c#
+using ScalarDB.Client.Extensions;
+```
+
+### Create a new namespace
+
+```c#
+await admin.CreateNamespaceAsync<Statement>();
+```
+
+### Drop an existing namespace
+
+```c#
+await admin.DropNamespaceAsync<Statement>();
+```
+
+### Check if a namespace exists
+
+```c#
+var namespaceExists = await admin.IsNamespacePresentAsync<Statement>();
+```
+
+### Create a new table
+
+```c#
+await admin.CreateTableAsync<Statement>();
+```
+
+### Drop an existing table
+
+```c#
+await admin.DropTableAsync<Statement>();
+```
+
+### Check if a table exists
+
+```c#
+var tableExists = await admin.IsTablePresentAsync<Statement>();
+```
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-two-phase-commit-transactions.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-two-phase-commit-transactions.mdx
new file mode 100644
index 00000000..7a684b2c
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/getting-started-with-two-phase-commit-transactions.mdx
@@ -0,0 +1,142 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with Distributed Transactions with a Two-Phase Commit Interface in the ScalarDB Cluster .NET Client SDK
+
+The ScalarDB Cluster .NET Client SDK supports transactions with the two-phase commit interface in ScalarDB. The SDK includes transaction and manager abstractions for enhanced communication within a cluster.
+
+:::note
+
+Although we recommend using asynchronous methods as in the following examples, you can use synchronous methods instead.
+
+:::
+
+## About transactions with the two-phase commit interface
+
+By using the SDK, you can execute transactions with the two-phase commit interface that span multiple applications. For example, if you have multiple microservices, you can create a transaction manager in each of them and execute a transaction that spans those microservices.
+
+In transactions with the two-phase commit interface, there are two roles, the coordinator and the participants, that collaboratively execute a single transaction.
+
+The coordinator process first begins a transaction and sends the ID of the transaction to all the participants, and the participant processes join the transaction. After executing CRUD or SQL operations, the coordinator process and the participant processes commit the transaction by using the two-phase interface.
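+
+The following minimal sketch condenses that flow into one place. The `coordinatorManager` and `participantManager` names are assumptions for illustration; in practice, the two roles run in separate microservices, exchange the transaction ID through their own APIs, and obtain their transaction managers as described later in this guide:
+
+```c#
+// Coordinator: begin the transaction and share its ID with the participants.
+var coordinatorTx = await coordinatorManager.BeginAsync();
+var transactionId = coordinatorTx.Id;
+
+// Participant: join the transaction by using the received ID.
+var participantTx = await participantManager.JoinAsync(transactionId);
+
+// ... both sides execute CRUD or SQL operations ...
+
+// Both sides prepare, optionally validate, and then commit.
+await coordinatorTx.PrepareAsync();
+await participantTx.PrepareAsync();
+
+await coordinatorTx.ValidateAsync();
+await participantTx.ValidateAsync();
+
+await coordinatorTx.CommitAsync();
+await participantTx.CommitAsync();
+```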
+
+## Install the SDK
+
+Install the same major and minor version of the [SDK](https://www.nuget.org/packages/ScalarDB.Client) as ScalarDB Cluster into the .NET project. You can do this by using the built-in NuGet package manager, replacing `<MAJOR>.<MINOR>` with the version that you're using:
+
+```console
+dotnet add package ScalarDB.Client --version '<MAJOR>.<MINOR>.*'
+```
+
+## Create a settings file
+
+Create a `scalardb-options.json` file and add the following, replacing `<HOSTNAME_OR_IP_ADDRESS>` with the FQDN or the IP address, and `<PORT>` with the port number (`60053` by default) of your cluster:
+
+```json
+{
+    "ScalarDbOptions": {
+        "Address": "http://<HOSTNAME_OR_IP_ADDRESS>:<PORT>",
+        "HopLimit": 10
+    }
+}
+```
+
+For details about settings files and other ways to configure the client, see [Client configuration](common-reference.mdx#client-configuration).
+
+## Get a transaction manager (for coordinator and participants)
+
+You need to get a transaction manager for distributed transactions with the two-phase commit interface. To get the transaction manager, you can use `TransactionFactory` as follows:
+
+```c#
+// Pass the path to the settings file created in the previous step.
+var factory = TransactionFactory.Create("scalardb-options.json");
+
+using var manager = factory.GetTwoPhaseCommitTransactionManager();
+```
+
+Alternatively, you can use SQL instead of CRUD operations for transactions with the two-phase commit interface by specifying the following transaction manager:
+
+```c#
+using var manager = factory.GetSqlTwoPhaseCommitTransactionManager();
+```
+
+## Begin a transaction (for coordinator)
+
+You can begin a transaction with the two-phase commit interface in the coordinator as follows:
+
+```c#
+var transaction = await manager.BeginAsync();
+```
+
+The ID of the started transaction can be obtained with the following code:
+
+```c#
+var transactionId = transaction.Id;
+```
+
+## Join a transaction (for participants)
+
+You can join a transaction with the two-phase commit interface in a participant as follows:
+
+```c#
+var transaction = await manager.JoinAsync(transactionId);
+```
+
+## Resume a transaction (for coordinator and participants)
+
+Usually, a transaction with the two-phase commit interface involves multiple request and response exchanges. In scenarios where you need to work with a transaction that was begun or joined in a previous request, you can resume such a transaction as follows:
+
+```c#
+var transaction = manager.Resume(transactionId);
+```
+
+:::note
+
+The `Resume` method doesn't have an asynchronous version because it only creates a transaction object. Because of this, resuming a transaction by using the wrong ID is possible.
+
+:::
+
+## Roll back a transaction
+
+If a transaction fails to commit, you can roll back the transaction as follows:
+
+```c#
+await transaction.RollbackAsync();
+```
+
+## Commit a transaction (for coordinator and participants)
+
+After completing CRUD or SQL operations, you must commit the transaction. However, for transactions with the two-phase commit interface, you must prepare the transaction in the coordinator and all the participants first.
+
+```c#
+await transaction.PrepareAsync();
+```
+
+Next, depending on the concurrency control protocol, you may need to validate the transaction in the coordinator and all the participants as follows:
+
+```c#
+await transaction.ValidateAsync();
+```
+
+Finally, you can commit the transaction in the coordinator and all the participants as follows:
+
+```c#
+await transaction.CommitAsync();
+```
+
+If the coordinator or any of the participants fails to prepare or validate the transaction, you will need to call `RollbackAsync` in the coordinator and all the participants.
+
+In addition, if the coordinator and all the participants fail to commit the transaction, you will need to call `RollbackAsync` in the coordinator and all the participants.
+
+However, if only the coordinator or only some of the participants fail to commit the transaction, the transaction will still be regarded as committed as long as the coordinator or at least one of the participants has succeeded in committing it.
+
+## Execute CRUD operations
+
+The two-phase commit interface of the transaction has the same methods for CRUD operations as ordinary transactions. For details, see [Execute CRUD operations](getting-started-with-distributed-transactions.mdx#execute-crud-operations).
+
+## Execute SQL statements
+
+The two-phase commit interface of the SQL transaction has the same methods for executing SQL queries as ordinary SQL transactions. For details, see [Execute SQL queries](getting-started-with-distributed-sql-transactions.mdx#execute-sql-queries).
diff --git a/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/index.mdx b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/index.mdx
new file mode 100644
index 00000000..f2562631
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster-dotnet-client-sdk/index.mdx
@@ -0,0 +1,22 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Cluster .NET Client SDK Overview
+
+The ScalarDB Cluster .NET Client SDK enables applications to connect to ScalarDB Cluster by using gRPC.
+
+To use the ScalarDB Cluster .NET Client SDK, see the following getting started guides:
+
+* [Getting Started with Distributed Transactions](getting-started-with-distributed-transactions.mdx)
+* [Getting Started with Distributed SQL Transactions](getting-started-with-distributed-sql-transactions.mdx)
+* [Getting Started with the Administrative API](getting-started-with-admin-api.mdx)
+* [Getting Started with ScalarDB Tables as C# Classes](getting-started-with-scalardb-tables-as-csharp-classes.mdx)
+* [Getting Started with ASP.NET Core and Dependency Injection](getting-started-with-aspnet-and-di.mdx)
+* [Getting Started with LINQ](getting-started-with-linq.mdx)
+* [Getting Started with Distributed Transactions with a Two-Phase Commit Interface](getting-started-with-two-phase-commit-transactions.mdx)
+* [Getting Started with Authentication and Authorization](getting-started-with-auth.mdx)
+* [Exception Handling](exception-handling.mdx)
diff --git a/versioned_docs/version-3.X/scalardb-cluster/authorize-with-abac.mdx b/versioned_docs/version-3.X/scalardb-cluster/authorize-with-abac.mdx
new file mode 100644
index 00000000..7bffd58c
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/authorize-with-abac.mdx
@@ -0,0 +1,27 @@
+---
+tags:
+ - Enterprise Premium Option
+ - Private Preview
+displayed_sidebar: docsEnglish
+---
+
+# Control User Access in a Fine-Grained Manner
+
+:::info
+
+- This feature is currently available only to customers in Japan. If you're a customer in Japan, please see the Japanese version of this page.
+- If you need more details about this feature in English, please [contact support](https://www.scalar-labs.com/support).
+
+:::
+
+ScalarDB Cluster can authorize users in a fine-grained manner with a mechanism called attribute-based access control (ABAC). This page explains how to use ABAC in ScalarDB Cluster.
+
+## What is ABAC?
+
+ABAC is a fine-grained access control mechanism in ScalarDB Cluster that allows for record-level access control instead of just the table-level access control provided by [simple authorization](./scalardb-auth-with-sql.mdx). With ABAC, a user can access a particular record only if the user's attributes and the record's attributes match. For example, you can restrict access to some highly confidential records to only users with the required privileges. This mechanism is also useful when multiple applications share the same table but need to access different segments based on their respective privileges.
+
+## Why use ABAC?
+
+Enterprise databases often provide row-level security or similar alternatives for controlling access to rows in a database table. However, if a system comprises several databases, you need to configure each database one by one in the same way. If different kinds of databases are used, you have to configure each database while accounting for the differences in their capabilities. Such configuration is burdensome and error-prone. With ABAC, you only need to configure access control once, even if you manage several databases under ScalarDB.
+
+Row-level security features in most databases often require you to implement matching logic through functions like stored procedures. This can sometimes lead to writing lots of code to achieve the desired logic, which can become burdensome. In contrast, ABAC allows you to configure matching logic by using attributes known as tags. With ABAC, you only need to define these tags and assign them to users and records, eliminating the need for coding. Tags consist of several components that enable you to specify matching logic in a flexible and straightforward manner.
diff --git a/versioned_docs/version-3.X/scalardb-cluster/compatibility.mdx b/versioned_docs/version-3.X/scalardb-cluster/compatibility.mdx
new file mode 100644
index 00000000..5aafdca9
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/compatibility.mdx
@@ -0,0 +1,49 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Cluster Compatibility Matrix
+
+This document shows the compatibility of ScalarDB Cluster versions among client SDK versions.
+
+## ScalarDB Cluster compatibility with client SDKs
+
+| ScalarDB Cluster version | ScalarDB Cluster Java Client SDK version | ScalarDB Cluster .NET Client SDK version |
+|:-------------------------|:-----------------------------------------|:-----------------------------------------|
+| 3.16 | 3.9 - 3.16 | 3.12* - 3.16 |
+| 3.15 | 3.9 - 3.15 | 3.12* - 3.15 |
+| 3.14 | 3.9 - 3.14 | 3.12* - 3.14 |
+| 3.13 | 3.9 - 3.13 | 3.12* - 3.13 |
+| 3.12 | 3.9 - 3.12 | 3.12* |
+| 3.11 | 3.9 - 3.11 | Not supported |
+| 3.10 | 3.9 - 3.10 | Not supported |
+| 3.9 | 3.9 | Not supported |
+
+\* This version is in private preview, which means that future versions might have backward-incompatible updates.
+
+:::note
+
+- You can consider the client tools (for example, [ScalarDB Cluster SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli) and [ScalarDB Cluster Schema Loader](developer-guide-for-scalardb-cluster-with-java-api.mdx#schema-loader-for-cluster)) to be the same as the ScalarDB Cluster Java Client SDK. In other words, you can apply the same compatibility rules to client tools as the ScalarDB Cluster Java Client SDK.
+- When you access backend databases by using ScalarDB Data Loader, you must use a version of ScalarDB Data Loader that is compatible with the version of ScalarDB Cluster that you're using. In this case, the supported version of ScalarDB Data Loader is the same as the version of the ScalarDB Cluster Java Client SDK shown in the matrix above. Note that ScalarDB Data Loader doesn't access ScalarDB Cluster directly.
+- If you use a new feature that ScalarDB Cluster provides in a new minor version, you may need to use the same or a later version of the client tools or re-create (or update) existing schemas. For details, please refer to the relevant documentation about each feature.
+
+:::
+
+## Version skew policy
+
+:::note
+
+Versions are expressed as `x.y.z`, where `x` represents the major version, `y` represents the minor version, and `z` represents the patch version. This format follows [Semantic Versioning](https://semver.org/).
+
+:::
+
+- If the **major** versions are different between ScalarDB Cluster and a client SDK, they are **not** compatible and are **not** supported.
+- If the **major** versions are the same and the **minor** versions are different between ScalarDB Cluster and a client SDK, the version of ScalarDB Cluster must be greater than or equal to the client SDK version. For example:
+ - **Supported:** Combination of ScalarDB Cluster 3.13 and client SDK 3.11
+ - **Not supported:** Combination of ScalarDB Cluster 3.11 and client SDK 3.13
+- If the **major** versions and the **minor** versions are the same, you can use different **patch** versions between ScalarDB Cluster and a client SDK. For example:
+ - **Supported:** Combination of ScalarDB Cluster 3.13.2 and client SDK 3.13.0
+ - **Supported:** Combination of ScalarDB Cluster 3.13.0 and client SDK 3.13.2
diff --git a/versioned_docs/version-3.X/scalardb-cluster/deployment-patterns-for-microservices.mdx b/versioned_docs/version-3.X/scalardb-cluster/deployment-patterns-for-microservices.mdx
new file mode 100644
index 00000000..d86a2d55
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/deployment-patterns-for-microservices.mdx
@@ -0,0 +1,72 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# ScalarDB Cluster Deployment Patterns for Microservices
+
+When building microservice applications that use ScalarDB Cluster, you can choose between two deployment patterns for ScalarDB Cluster: the shared-cluster pattern and the separated-cluster pattern.
+This document explains those patterns, how they differ, and basic guidelines for choosing between them.
+
+Also, this document assumes that your microservice applications are created based on the database-per-service pattern, where each microservice manages its database, and a microservice needs to access another microservice's database via APIs between the microservices.
+
+## ScalarDB Cluster deployment patterns
+
+In the shared-cluster pattern, microservices share one ScalarDB Cluster instance, which is a cluster of ScalarDB Cluster nodes, in a system, so they access the same ScalarDB Cluster instance to interact with their databases. On the other hand, in the separated-cluster pattern, microservices use several ScalarDB Cluster instances. Typically, one microservice accesses one ScalarDB Cluster instance to interact with its database.
+
+The following diagram shows the patterns. (MS stands for microservice.)
+
+
+
+:::note
+
+You also need to manage the Coordinator table in either pattern in addition to the databases required for microservices.
+
+:::
+
+## Pros and cons
+
+One obvious difference is the amount of resources for ScalarDB Cluster instances. With the separated-cluster pattern, you need more resources to manage your applications. This also incurs more maintenance burden and costs.
+
+In addition, the ScalarDB Cluster APIs that you would need to use are different. Specifically, for the shared-cluster pattern, you need to use the [one-phase commit interface](../api-guide.mdx#transactional-api), where only one microservice needs to call `commit` to commit a transaction after microservices read and write records. For the separated-cluster pattern, you need to use the [two-phase commit interface](../two-phase-commit-transactions.mdx), where all the microservices first need to call `prepare` and then call `commit` if all the prepare calls are successful. Therefore, microservices with the separated-cluster pattern will likely be more complex than microservices with the shared-cluster pattern because they need to handle transactions and their errors in a more fine-grained manner.
+
+Moreover, the level of resource isolation differs. Microservices should be well isolated for better maintainability and development efficiency, but the shared-cluster pattern provides weaker resource isolation. Weaker resource isolation might also weaken security. However, security risks can be mitigated by using the security features of ScalarDB Cluster, like authentication and authorization.
+
+Similarly, there is a difference in how systems are administered. Specifically, in the shared-cluster pattern, one team must be tasked with managing a ScalarDB Cluster instance on behalf of the other teams. Typically, a central data team can manage it, but issues may arise if no such team exists. With the separated-cluster pattern, administration is more balanced but has a similar issue for the Coordinator table. This issue can be addressed by having a microservice for coordination and making a team manage that microservice.
+
+The following is a summary of the pros and cons of the patterns.
+
+### Shared-cluster pattern
+
+- **Pros:**
+ - Simple transaction and error handling because of the one-phase commit interface. (Backup operations for databases can also be simple.)
+ - Less resource usage because it uses one ScalarDB Cluster instance.
+- **Cons:**
+ - Weak resource isolation between microservices.
+ - Unbalanced administration. (One team needs to manage a ScalarDB Cluster instance on behalf of the others.)
+
+### Separated-cluster pattern
+
+- **Pros:**
+ - Better resource isolation.
+ - More balanced administration. (A team manages one microservice and one ScalarDB Cluster instance. Also, a team must be tasked with managing the Coordinator table.)
+- **Cons:**
+ - Complex transaction and error handling due to the two-phase commit interface. (Backup operations for databases can also be complex.)
+ - More resource usage because of several ScalarDB Cluster instances.
+
+## Which pattern to choose
+
+Using the shared-cluster pattern is recommended whenever possible. Although the shared-cluster pattern has some disadvantages, as described above, its simplicity and ease of management outweigh those disadvantages. Moreover, since ScalarDB Cluster stores all critical states in the underlying databases and does not hold any critical state in memory, it can be seen as just a path to the databases. Therefore, we believe a system with the shared-cluster pattern still complies with the database-per-service pattern and does not violate the microservice philosophy much.
+
+If the cons of the shared-cluster pattern are not acceptable, you can still use the separated-cluster pattern. However, you should use that pattern only if you properly understand the mechanism and usage of the two-phase commit interface. Otherwise, you might face some issues, like database anomalies.
+
+## Limitations
+
+ScalarDB provides several APIs, such as CRUD, SQL, and Spring Data JDBC. Although the CRUD and SQL interfaces support both the shared-cluster and separated-cluster patterns, the Spring Data JDBC interface does not support the shared-cluster pattern. This is because its one-phase commit interface currently assumes that an application is monolithic, meaning it is not divided into microservices that interact with each other. The Spring Data JDBC interface supports the two-phase commit interface and the separated-cluster pattern, just as the other APIs do.
+
+## See also
+
+- [Transactions with a Two-Phase Commit Interface](../two-phase-commit-transactions.mdx)
+
diff --git a/versioned_docs/version-3.X/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx b/versioned_docs/version-3.X/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx
new file mode 100644
index 00000000..b4b0830a
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx
@@ -0,0 +1,261 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Developer Guide for ScalarDB Cluster with the Java API
+
+ScalarDB Cluster provides a Java API for developing applications.
+This document explains how to use the Java API.
+
+## Add ScalarDB Cluster Java Client SDK to your build
+
+The ScalarDB Cluster Java Client SDK is available in the [Maven Central Repository](https://mvnrepository.com/artifact/com.scalar-labs/scalardb-cluster-java-client-sdk).
+
+To add a dependency on the ScalarDB Cluster Java Client SDK by using Gradle, use the following:
+
+```gradle
+dependencies {
+ implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0'
+}
+```
+
+To add a dependency by using Maven, use the following:
+
+```xml
+<dependency>
+  <groupId>com.scalar-labs</groupId>
+  <artifactId>scalardb-cluster-java-client-sdk</artifactId>
+  <version>3.16.0</version>
+</dependency>
+```
+
+## Client modes
+
+The ScalarDB Cluster Java Client SDK supports two client modes: `indirect` and `direct-kubernetes`. The following describes the client modes.
+
+### `indirect` client mode
+
+This mode simply sends a request to any cluster node (typically via a load balancer, such as Envoy), and the cluster node receiving the request routes the request to the appropriate cluster node that has the transaction state.
+
+
+
+The advantage of this mode is that we can keep the client thin.
+The disadvantage is that we need an additional hop to reach the correct cluster node, which may affect performance.
+
+You can use this connection mode even if your application is running on a different Kubernetes cluster and your application can't access the Kubernetes API and each cluster node.
+If your application is running on the same Kubernetes cluster as your ScalarDB Cluster nodes, you can use the `direct-kubernetes` client mode.
+
+### `direct-kubernetes` client mode
+
+In this mode, the client uses the membership logic (using the Kubernetes API) and the distribution logic (consistent hashing algorithm) to find the right cluster node that has the transaction state.
+The client then sends a request to the cluster node directly.
+
+
+
+The advantage of this mode is that we can reduce the hop count to reach the proper cluster node, which improves performance.
+The disadvantage of this mode is that we need to make the client fat because the client needs to have membership logic and request-routing logic.
+
+Since this connection mode needs to access the Kubernetes API and each cluster node, you can use this connection mode only if your application is running on the same Kubernetes cluster as your ScalarDB Cluster nodes.
+If your application is running on a different Kubernetes cluster, use the `indirect` client mode.
+
+For details about how to deploy your application on Kubernetes with `direct-kubernetes` client mode, see [Deploy your client application on Kubernetes with `direct-kubernetes` mode](../helm-charts/how-to-deploy-scalardb-cluster.mdx#deploy-your-client-application-on-kubernetes-with-direct-kubernetes-mode).
+
+## ScalarDB Cluster Java API
+
+The ScalarDB Cluster Java Client SDK provides a Java API for applications to access ScalarDB Cluster. The following diagram shows the architecture of the ScalarDB Cluster Java API.
+
+```
+ +------------------+
+ | User/Application |
+ +------------------+
+ ↓ Java API
+ +--------------+
+ | ScalarDB API |
+ +--------------+
+ ↓ gRPC
+ +------------------+
+ | ScalarDB Cluster |
+ +------------------+
+ ↓ DB vendor–specific protocol
+ +----+
+ | DB |
+ +----+
+```
+
+Using the ScalarDB Cluster Java API is almost the same as using the ScalarDB Java API except that the client configurations and Schema Loader are different.
+For details, see [ScalarDB Java API Guide](../api-guide.mdx).
+
+The following section describes the Schema Loader for ScalarDB Cluster.
+
+### Schema Loader for Cluster
+
+To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster).
+Using the Schema Loader for Cluster is basically the same as using the [ScalarDB Schema Loader](../schema-loader.mdx) except that the name of the JAR file is different.
+You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0).
+After downloading the JAR file, you can run Schema Loader for Cluster with the following command:
+
+```console
+java -jar scalardb-cluster-schema-loader-3.16.0-all.jar --config <PATH_TO_CONFIG_FILE> --schema-file <PATH_TO_SCHEMA_FILE> --coordinator
+```
+
+You can also pull the Docker image from the [Scalar container registry](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-schema-loader) by running the following command, replacing the contents in the angle brackets as described:
+
+```console
+docker run --rm -v <PATH_TO_CONFIG_FILE>:/scalardb.properties -v <PATH_TO_SCHEMA_FILE>:/schema.json ghcr.io/scalar-labs/scalardb-cluster-schema-loader:3.16.0 --config /scalardb.properties --schema-file /schema.json --coordinator
+```
+
+## ScalarDB Cluster SQL
+
+ScalarDB Cluster SQL can be accessed via JDBC and Spring Data JDBC for ScalarDB in Java as follows:
+
+```
+ +-----------------------------------------+
+ | User/Application |
+ +-----------------------------------------+
+ ↓ ↓ Java API
+Java API ↓ +-------------------------------+
+ (JDBC) ↓ | Spring Data JDBC for ScalarDB |
+ ↓ +-------------------------------+
++----------------------------------------------+
+| ScalarDB JDBC (ScalarDB SQL) |
++----------------------------------------------+
+ ↓ gRPC
+ +----------------------+
+ | ScalarDB Cluster SQL |
+ +----------------------+
+ ↓ DB vendor–specific protocol
+ +----+
+ | DB |
+ +----+
+```
+
+This section describes how to use ScalarDB Cluster SQL through JDBC and Spring Data JDBC for ScalarDB.
+
+### ScalarDB Cluster SQL via JDBC
+
+Using ScalarDB Cluster SQL via JDBC is almost the same as using [ScalarDB JDBC](../scalardb-sql/jdbc-guide.mdx) except for how to add the JDBC driver to your project.
+
+In addition to adding the ScalarDB Cluster Java Client SDK as described in [Add ScalarDB Cluster Java Client SDK to your build](#add-scalardb-cluster-java-client-sdk-to-your-build), you need to add the following dependencies to your project.
+
+To add the dependencies for the ScalarDB Cluster JDBC driver by using Gradle, use the following:
+
+```gradle
+dependencies {
+ implementation 'com.scalar-labs:scalardb-sql-jdbc:3.16.0'
+ implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0'
+}
+```
+
+To add the dependencies by using Maven, use the following:
+
+```xml
+<dependencies>
+  <dependency>
+    <groupId>com.scalar-labs</groupId>
+    <artifactId>scalardb-sql-jdbc</artifactId>
+    <version>3.16.0</version>
+  </dependency>
+  <dependency>
+    <groupId>com.scalar-labs</groupId>
+    <artifactId>scalardb-cluster-java-client-sdk</artifactId>
+    <version>3.16.0</version>
+  </dependency>
+</dependencies>
+```
+
+Other than that, using ScalarDB Cluster SQL via JDBC is the same as using ScalarDB JDBC.
+For details about ScalarDB JDBC, see [ScalarDB JDBC Guide](../scalardb-sql/jdbc-guide.mdx).
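+
+As a minimal sketch of what this looks like, the following example connects through the ScalarDB JDBC driver and runs a query. It assumes a hypothetical configuration file named `scalardb-sql.properties` that contains your cluster connection settings and a table `ns.tbl` that already exists:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class ClusterSqlJdbcExample {
+  public static void main(String[] args) throws Exception {
+    // The ScalarDB JDBC URL points to a properties file that holds the cluster
+    // connection settings. For the URL format, see the ScalarDB JDBC Guide.
+    String url = "jdbc:scalardb:scalardb-sql.properties";
+
+    try (Connection connection = DriverManager.getConnection(url);
+        Statement statement = connection.createStatement();
+        ResultSet resultSet = statement.executeQuery("SELECT * FROM ns.tbl")) {
+      while (resultSet.next()) {
+        System.out.println(resultSet.getInt("id"));
+      }
+    }
+  }
+}
+```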
+
+### ScalarDB Cluster SQL via Spring Data JDBC for ScalarDB
+
+Similar to ScalarDB Cluster SQL via JDBC, using ScalarDB Cluster SQL via Spring Data JDBC for ScalarDB is almost the same as using [Spring Data JDBC for ScalarDB](../scalardb-sql/spring-data-guide.mdx) except for how to add it to your project.
+
+In addition to adding the ScalarDB Cluster Java Client SDK as described in [Add ScalarDB Cluster Java Client SDK to your build](#add-scalardb-cluster-java-client-sdk-to-your-build), you need to add the following dependencies to your project.
+
+To add the dependencies by using Gradle, use the following:
+
+```gradle
+dependencies {
+ implementation 'com.scalar-labs:scalardb-sql-spring-data:3.16.0'
+ implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0'
+}
+```
+
+To add the dependencies by using Maven, use the following:
+
+```xml
+<dependencies>
+  <dependency>
+    <groupId>com.scalar-labs</groupId>
+    <artifactId>scalardb-sql-spring-data</artifactId>
+    <version>3.16.0</version>
+  </dependency>
+  <dependency>
+    <groupId>com.scalar-labs</groupId>
+    <artifactId>scalardb-cluster-java-client-sdk</artifactId>
+    <version>3.16.0</version>
+  </dependency>
+</dependencies>
+```
+
+Other than that, using ScalarDB Cluster SQL via Spring Data JDBC for ScalarDB is the same as using Spring Data JDBC for ScalarDB.
+For details about Spring Data JDBC for ScalarDB, see [Guide of Spring Data JDBC for ScalarDB](../scalardb-sql/spring-data-guide.mdx).
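+
+As a rough sketch, the cluster-specific part of a Spring Boot configuration is typically the data source URL, which carries the same connection-mode settings used elsewhere in this guide. The snippet below is an assumption-based example, not the guide's exact configuration; configure the driver class name and the remaining data source settings as described in the Spring Data JDBC for ScalarDB guide, and replace the contact point with your own:
+
+```properties
+# Sketch only: the cluster connection mode and contact point are passed through the
+# ScalarDB JDBC URL. Other data source settings (including the driver class name)
+# should follow the Spring Data JDBC for ScalarDB guide.
+spring.datasource.url=jdbc:scalardb:\
+?scalar.db.sql.connection_mode=cluster\
+&scalar.db.sql.cluster_mode.contact_points=indirect:localhost
+```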
+
+### SQL CLI
+
+Like other SQL databases, ScalarDB SQL provides a CLI tool that lets you issue SQL statements interactively from a command-line shell.
+
+You can download the SQL CLI for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). After downloading the JAR file, you can run the SQL CLI with the following command, replacing `<PATH_TO_CONFIG_FILE>` with the path to your configuration file:
+
+```console
+java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config <PATH_TO_CONFIG_FILE>
+```
+
+You can also pull the Docker image from the [Scalar container registry](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-sql-cli) by running the following command, replacing the contents in the angle brackets as described:
+
+```console
+docker run --rm -it -v <PATH_TO_YOUR_PROPERTIES_FILE>:/scalardb-sql.properties ghcr.io/scalar-labs/scalardb-cluster-sql-cli:3.16.0 --config /scalardb-sql.properties
+```
+
+#### Usage
+
+You can see the CLI usage with the `-h` option as follows:
+
+```console
+java -jar scalardb-cluster-sql-cli-3.16.0-all.jar -h
+Usage: scalardb-sql-cli [-hs] -c=PROPERTIES_FILE [-e=COMMAND] [-f=FILE]
+ [-l=LOG_FILE] [-o=] [-p=PASSWORD]
+ [-u=USERNAME]
+Starts ScalarDB SQL CLI.
+ -c, --config=PROPERTIES_FILE
+ A configuration file in properties format.
+ -e, --execute=COMMAND A command to execute.
+ -f, --file=FILE A script file to execute.
+ -h, --help Display this help message.
+ -l, --log=LOG_FILE A file to write output.
+ -o, --output-format=
+ Format mode for result display. You can specify
+ table/vertical/csv/tsv/xmlattrs/xmlelements/json/a
+ nsiconsole.
+ -p, --password=PASSWORD A password to connect.
+ -s, --silent Reduce the amount of informational messages
+ displayed.
+ -u, --username=USERNAME A username to connect.
+```
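+
+For example, you can run a single statement without starting an interactive session by using the `-e` option shown above. The configuration file name and table name below are placeholders:
+
+```console
+java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-sql.properties -e "SELECT * FROM ns.tbl"
+```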
+
+## Further reading
+
+If you want to use ScalarDB Cluster in programming languages other than Java, you can use the ScalarDB Cluster gRPC API.
+For details about the ScalarDB Cluster gRPC API, refer to the following:
+
+* [ScalarDB Cluster gRPC API Guide](scalardb-cluster-grpc-api-guide.mdx)
+* [ScalarDB Cluster SQL gRPC API Guide](scalardb-cluster-sql-grpc-api-guide.mdx)
+
+JavaDocs are also available:
+
+* [ScalarDB Cluster Java Client SDK](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-java-client-sdk/3.16.0/index.html)
+* [ScalarDB Cluster Common](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-common/3.16.0/index.html)
+* [ScalarDB Cluster RPC](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-rpc/3.16.0/index.html)
diff --git a/versioned_docs/version-3.X/scalardb-cluster/encrypt-data-at-rest.mdx b/versioned_docs/version-3.X/scalardb-cluster/encrypt-data-at-rest.mdx
new file mode 100644
index 00000000..e7e49d7e
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/encrypt-data-at-rest.mdx
@@ -0,0 +1,325 @@
+---
+tags:
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Encrypt Data at Rest
+
+import WarningLicenseKeyContact from '/src/components/en-us/_warning-license-key-contact.mdx';
+
+This document explains how to encrypt data at rest in ScalarDB.
+
+## Overview
+
+ScalarDB can encrypt data stored through it. The encryption feature is similar to transparent data encryption (TDE) in major database systems; therefore, it is transparent to applications. ScalarDB encrypts data before writing it to the backend databases and decrypts it when reading from them.
+
+Currently, ScalarDB supports column-level encryption, allowing specific columns in a table to be encrypted.
+
+## Configurations
+
+To enable the encryption feature, you need to configure `scalar.db.cluster.encryption.enabled` to `true` in the ScalarDB Cluster node configuration file.
+
+| Name | Description | Default |
+|----------------------------------------|-----------------------------------------|---------|
+| `scalar.db.cluster.encryption.enabled` | Whether ScalarDB encrypts data at rest. | `false` |
+
+:::note
+
+Since encryption is transparent to the client, you don't need to change the client configuration.
+
+:::
+
+:::note
+
+If you enable the encryption feature, you will also need to set `scalar.db.cross_partition_scan.enabled` to `true` for the system namespace (`scalardb` by default) because the encryption feature performs cross-partition scans internally.
+
+:::
+
+The other configurations depend on the encryption implementation you choose. Currently, ScalarDB supports the following encryption implementations:
+
+- HashiCorp Vault encryption
+- Self-encryption
+
+The following sections explain how to configure each encryption implementation.
+
+### HashiCorp Vault encryption
+
+In HashiCorp Vault encryption, ScalarDB uses the [encryption as a service](https://developer.hashicorp.com/vault/tutorials/encryption-as-a-service/eaas-transit) of HashiCorp Vault to encrypt and decrypt data. In this implementation, ScalarDB delegates the management of encryption keys, as well as the encryption and decryption of data, to HashiCorp Vault.
+
+To use HashiCorp Vault encryption, you need to set the property `scalar.db.cluster.encryption.type` to `vault` in the ScalarDB Cluster node configuration file:
+
+| Name | Description | Default |
+|-------------------------------------|-------------------------------------------------------------|---------|
+| `scalar.db.cluster.encryption.type` | Should be set to `vault` to use HashiCorp Vault encryption. | |
+
+You also need to configure the following properties:
+
+| Name | Description | Default |
+|------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
+| `scalar.db.cluster.encryption.vault.key_type` | The key type. Currently, `aes128-gcm96`, `aes256-gcm96`, and `chacha20-poly1305` are supported. For details about the key types, see [Key types](https://developer.hashicorp.com/vault/docs/secrets/transit#key-types). | `aes128-gcm96` |
+| `scalar.db.cluster.encryption.vault.associated_data_required` | Whether associated data is required for AEAD encryption. | `false` |
+| `scalar.db.cluster.encryption.vault.address` | The address of the HashiCorp Vault server. | |
+| `scalar.db.cluster.encryption.vault.token` | The token to authenticate with HashiCorp Vault. | |
+| `scalar.db.cluster.encryption.vault.namespace` | The namespace of the HashiCorp Vault. This configuration is optional. | |
+| `scalar.db.cluster.encryption.vault.transit_secrets_engine_path` | The path of the transit secrets engine. | `transit` |
+| `scalar.db.cluster.encryption.vault.column_batch_size` | The number of columns to be included in a single request to the HashiCorp Vault server. | `64` |
+
+### Self-encryption
+
+In self-encryption, ScalarDB manages data encryption keys (DEKs) and performs encryption and decryption. ScalarDB generates a DEK for each table when creating the table and stores it in Kubernetes Secrets.
+
+To use self-encryption, you need to set the property `scalar.db.cluster.encryption.type` to `self` in the ScalarDB Cluster node configuration file:
+
+| Name | Description | Default |
+|-------------------------------------|-------------------------------------------------|---------|
+| `scalar.db.cluster.encryption.type` | Should be set to `self` to use self-encryption. | |
+
+You also need to configure the following properties:
+
+| Name | Description | Default |
+|-------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
+| `scalar.db.cluster.encryption.self.key_type` | The key type. Currently, `AES128_GCM`, `AES256_GCM`, `AES128_EAX`, `AES256_EAX`, `AES128_CTR_HMAC_SHA256`, `AES256_CTR_HMAC_SHA256`, `CHACHA20_POLY1305`, and `XCHACHA20_POLY1305` are supported. For details about the key types, see [Choose a key type](https://developers.google.com/tink/aead#choose_a_key_type). | `AES128_GCM` |
+| `scalar.db.cluster.encryption.self.associated_data_required` | Whether associated data is required for AEAD encryption. | `false` |
+| `scalar.db.cluster.encryption.self.kubernetes.secret.namespace_name` | The namespace name of the Kubernetes Secrets. | `default` |
+| `scalar.db.cluster.encryption.self.data_encryption_key_cache_expiration_time` | The expiration time of the DEK cache in milliseconds. | `60000` (60 seconds) |
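+
+For example, a minimal self-encryption setup in the ScalarDB Cluster node configuration file might look like the following. The key type and Kubernetes namespace name below are example values:
+
+```properties
+scalar.db.cluster.encryption.enabled=true
+scalar.db.cluster.encryption.type=self
+scalar.db.cluster.encryption.self.key_type=AES256_GCM
+scalar.db.cluster.encryption.self.kubernetes.secret.namespace_name=scalardb
+```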
+
+### Delete the DEK when dropping a table
+
+By default, ScalarDB does not delete the data encryption key (DEK) associated with a table when the table is dropped. However, you can configure ScalarDB to delete the DEK when dropping a table. To enable this, set the property `scalar.db.cluster.encryption.delete_data_encryption_key_on_drop_table.enabled` to `true` in the ScalarDB Cluster node configuration file:
+
+| Name | Description | Default |
+|---------------------------------------------------------------------------------|------------------------------------------------------------------|---------|
+| `scalar.db.cluster.encryption.delete_data_encryption_key_on_drop_table.enabled` | Whether to delete the DEK when dropping a table. | `false` |
+
+## Limitations
+
+There are some limitations to the encryption feature:
+
+- Primary-key columns (partition-key columns and clustering-key columns) cannot be encrypted.
+- Secondary-index columns cannot be encrypted.
+- Encrypted columns cannot be specified in the WHERE clauses or ORDER BY clauses.
+- Encrypted columns are stored in the underlying database as the BLOB type, so encrypted columns that are larger than the maximum size of the BLOB type cannot be stored. For the maximum size of the BLOB type, see [Data-type mapping between ScalarDB and other databases](../schema-loader.mdx#data-type-mapping-between-scalardb-and-other-databases).
+
+## Wire encryption
+
+If you enable the encryption feature, enabling wire encryption to protect your data is strongly recommended, especially in production environments. For details about wire encryption, see [Encrypt Wire Communications](encrypt-wire-communications.mdx).
+
+## Tutorial - Encrypt data by configuring HashiCorp Vault encryption
+
+This tutorial explains how to encrypt data stored through ScalarDB by using HashiCorp Vault encryption.
+
+### Prerequisites
+
+- OpenJDK LTS version (8, 11, 17, or 21) from [Eclipse Temurin](https://adoptium.net/temurin/releases/)
+- [Docker](https://www.docker.com/get-started/) 20.10 or later with [Docker Compose](https://docs.docker.com/compose/install/) V2 or later
+
+:::note
+
+This tutorial has been tested with OpenJDK from Eclipse Temurin. ScalarDB itself, however, has been tested with JDK distributions from various vendors. For details about the requirements for ScalarDB, including compatible JDK distributions, please see [Requirements](../requirements.mdx).
+
+:::
+
+<WarningLicenseKeyContact product="ScalarDB Cluster" />
+
+### Step 1. Install HashiCorp Vault
+
+Install HashiCorp Vault by referring to the official HashiCorp documentation, [Install Vault](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install).
+
+### Step 2. Create the ScalarDB Cluster configuration file
+
+Create the following configuration file as `scalardb-cluster-node.properties`, replacing `<YOUR_LICENSE_KEY>` and `<YOUR_LICENSE_CHECK_CERT_PEM>` with your ScalarDB license key and license check certificate values. For more information about the license key and certificate, see [How to Configure a Product License Key](../scalar-licensing/index.mdx).
+
+```properties
+scalar.db.storage=jdbc
+scalar.db.contact_points=jdbc:postgresql://postgresql:5432/postgres
+scalar.db.username=postgres
+scalar.db.password=postgres
+scalar.db.cluster.node.standalone_mode.enabled=true
+scalar.db.cross_partition_scan.enabled=true
+scalar.db.sql.enabled=true
+
+# Encryption configurations
+scalar.db.cluster.encryption.enabled=true
+scalar.db.cluster.encryption.type=vault
+scalar.db.cluster.encryption.vault.address=http://vault:8200
+scalar.db.cluster.encryption.vault.token=root
+
+# License key configurations
+scalar.db.cluster.node.licensing.license_key=<YOUR_LICENSE_KEY>
+scalar.db.cluster.node.licensing.license_check_cert_pem=<YOUR_LICENSE_CHECK_CERT_PEM>
+```
+
+### Step 3. Create the Docker Compose configuration file
+
+Create the following configuration file as `docker-compose.yaml`.
+
+```yaml
+services:
+ vault:
+ container_name: "vault"
+ image: "hashicorp/vault:1.17.3"
+ ports:
+ - 8200:8200
+ environment:
+ - VAULT_DEV_ROOT_TOKEN_ID=root
+ - VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200
+ cap_add:
+ - IPC_LOCK
+
+ postgresql:
+ container_name: "postgresql"
+ image: "postgres:15"
+ ports:
+ - 5432:5432
+ environment:
+ - POSTGRES_PASSWORD=postgres
+ healthcheck:
+ test: ["CMD-SHELL", "pg_isready || exit 1"]
+ interval: 1s
+ timeout: 10s
+ retries: 60
+ start_period: 30s
+
+ scalardb-cluster-standalone:
+ container_name: "scalardb-cluster-node"
+ image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.16.0"
+ ports:
+ - 60053:60053
+ - 9080:9080
+ volumes:
+ - ./scalardb-cluster-node.properties:/scalardb-cluster/node/scalardb-cluster-node.properties
+ depends_on:
+ postgresql:
+ condition: service_healthy
+```
+
+### Step 4. Start the HashiCorp Vault server
+
+Run the following command to start the HashiCorp Vault server in development mode.
+
+```console
+docker compose up vault -d
+```
+
+Once the HashiCorp Vault server is running, set its environment variables by running the following commands.
+
+```console
+export VAULT_ADDR="http://127.0.0.1:8200"
+export VAULT_TOKEN=root
+```
+
+### Step 5. Enable the transit secrets engine on the HashiCorp Vault server
+
+Run the following command to enable the transit secrets engine on the HashiCorp Vault server.
+
+```console
+vault secrets enable transit
+```
+
+### Step 6. Start PostgreSQL and ScalarDB Cluster
+
+Run the following command to start PostgreSQL and ScalarDB Cluster in standalone mode.
+
+```console
+docker compose up postgresql scalardb-cluster-standalone -d
+```
+
+It may take a few minutes for ScalarDB Cluster to fully start.
+
+### Step 7. Connect to ScalarDB Cluster
+
+To connect to ScalarDB Cluster, this tutorial uses the SQL CLI, a tool for executing SQL queries against ScalarDB Cluster. You can download the SQL CLI from the [ScalarDB releases page](https://github.com/scalar-labs/scalardb/releases).
+
+Create a configuration file named `scalardb-cluster-sql-cli.properties`. This file will be used to connect to ScalarDB Cluster by using the SQL CLI.
+
+```properties
+scalar.db.sql.connection_mode=cluster
+scalar.db.sql.cluster_mode.contact_points=indirect:localhost
+```
+
+Then, start the SQL CLI by running the following command.
+
+```console
+java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-cluster-sql-cli.properties
+```
+
+To begin, create the Coordinator tables required for ScalarDB transaction execution.
+
+```sql
+CREATE COORDINATOR TABLES IF NOT EXISTS;
+```
+
+Now you're ready to use the database with the encryption feature enabled in ScalarDB Cluster.
+
+### Step 8. Create a table
+
+Before creating a table, you need to create a namespace.
+
+```sql
+CREATE NAMESPACE ns;
+```
+
+Next, create a table.
+
+```sql
+CREATE TABLE ns.tbl (
+ id INT PRIMARY KEY,
+ col1 TEXT ENCRYPTED,
+ col2 INT ENCRYPTED,
+ col3 INT);
+```
+
+By using the `ENCRYPTED` keyword, you can encrypt the data in the specified columns. In this example, the data in `col1` and `col2` will be encrypted.
+
+### Step 9. Insert data into the table
+
+To insert data into the table, execute the following SQL query.
+
+```sql
+INSERT INTO ns.tbl (id, col1, col2, col3) VALUES (1, 'data1', 123, 456);
+```
+
+To verify the inserted data, run the following SQL query.
+
+```sql
+SELECT * FROM ns.tbl;
+```
+
+```console
++----+-------+------+------+
+| id | col1 | col2 | col3 |
++----+-------+------+------+
+| 1 | data1 | 123 | 456 |
++----+-------+------+------+
+```
+
+### Step 10. Verify data encryption
+
+To verify that the data is encrypted, connect directly to PostgreSQL and check the data.
+
+:::warning
+
+Reading or writing data in the backend database directly is not supported by ScalarDB; if you do so, ScalarDB cannot guarantee data consistency. This guide accesses the backend database directly only to verify the encryption; do not do this in a production environment.
+
+:::
+
+Run the following command to connect to PostgreSQL.
+
+```console
+docker exec -it postgresql psql -U postgres
+```
+
+Next, execute the following SQL query to check the data in the table.
+
+```sql
+SELECT id, col1, col2, col3 FROM ns.tbl;
+```
+
+You should see output similar to the following, which confirms that the data in `col1` and `col2` is encrypted.
+
+```console
+ id | col1 | col2 | col3
+----+--------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+------
+ 1 | \x7661756c743a76313a6b6f76455062316a676e6a4a596b643743765539315a49714d625564545a61697152666c7967367837336e66 | \x7661756c743a76313a4b6244543162764678676d44424b526d7037794f5176423569616e615635304c473079664354514b3866513d | 456
+```
diff --git a/versioned_docs/version-3.X/scalardb-cluster/encrypt-wire-communications.mdx b/versioned_docs/version-3.X/scalardb-cluster/encrypt-wire-communications.mdx
new file mode 100644
index 00000000..88dd7a97
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/encrypt-wire-communications.mdx
@@ -0,0 +1,64 @@
+---
+tags:
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Encrypt Wire Communications
+
+ScalarDB can encrypt wire communications by using Transport Layer Security (TLS). This document explains the configurations for wire encryption in ScalarDB.
+
+The wire encryption feature encrypts:
+
+* The communications between the ScalarDB Cluster node and clients.
+* The communications between all the ScalarDB Cluster nodes (the cluster's internal communications).
+
+This feature uses TLS support in gRPC. For details, see the official gRPC [Security Policy](https://github.com/grpc/grpc-java/blob/master/SECURITY.md).
+
+:::note
+
+Enabling wire encryption between the ScalarDB Cluster nodes and the underlying databases in production environments is strongly recommended. For instructions on how to enable wire encryption between the ScalarDB Cluster nodes and the underlying databases, please refer to the product documentation for your underlying databases.
+
+:::
+
+## Configurations
+
+This section describes the available configurations for wire encryption.
+
+### Enable wire encryption in the ScalarDB Cluster nodes
+
+To enable wire encryption in the ScalarDB Cluster nodes, you need to set `scalar.db.cluster.tls.enabled` to `true`.
+
+| Name | Description | Default |
+|---------------------------------|-------------------------------------------|---------|
+| `scalar.db.cluster.tls.enabled` | Whether wire encryption (TLS) is enabled. | `false` |
+
+You also need to set the following configurations:
+
+| Name | Description | Default |
+|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| `scalar.db.cluster.tls.ca_root_cert_pem` | The custom CA root certificate (PEM data) for TLS communication. | |
+| `scalar.db.cluster.tls.ca_root_cert_path` | The custom CA root certificate (file path) for TLS communication. | |
+| `scalar.db.cluster.tls.override_authority` | The custom authority for TLS communication. This doesn't change what host is actually connected. This is intended for testing, but may safely be used outside of tests as an alternative to DNS overrides. For example, you can specify the hostname presented in the certificate chain file that you set for `scalar.db.cluster.node.tls.cert_chain_path`. | |
+| `scalar.db.cluster.node.tls.cert_chain_path` | The certificate chain file used for TLS communication. | |
+| `scalar.db.cluster.node.tls.private_key_path` | The private key file used for TLS communication. | |
+
+To specify the certificate authority (CA) root certificate, you should set either `scalar.db.cluster.tls.ca_root_cert_pem` or `scalar.db.cluster.tls.ca_root_cert_path`. If you set both, `scalar.db.cluster.tls.ca_root_cert_pem` will be used.
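+
+For example, a minimal server-side TLS configuration in the ScalarDB Cluster node configuration file might look like the following. The file paths and authority below are placeholder values:
+
+```properties
+scalar.db.cluster.tls.enabled=true
+scalar.db.cluster.tls.ca_root_cert_path=/path/to/ca-root-cert.pem
+scalar.db.cluster.node.tls.cert_chain_path=/path/to/cert-chain.pem
+scalar.db.cluster.node.tls.private_key_path=/path/to/private-key.pem
+scalar.db.cluster.tls.override_authority=cluster.scalardb.example.com
+```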
+
+### Enable wire encryption on the client side
+
+To enable wire encryption on the client side by using the ScalarDB Cluster Java client SDK, you need to set `scalar.db.cluster.tls.enabled` to `true`.
+
+| Name | Description | Default |
+|---------------------------------|-------------------------------------------|---------|
+| `scalar.db.cluster.tls.enabled` | Whether wire encryption (TLS) is enabled. | `false` |
+
+You also need to set the following configurations:
+
+| Name | Description | Default |
+|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| `scalar.db.cluster.tls.ca_root_cert_pem` | The custom CA root certificate (PEM data) for TLS communication. | |
+| `scalar.db.cluster.tls.ca_root_cert_path` | The custom CA root certificate (file path) for TLS communication. | |
+| `scalar.db.cluster.tls.override_authority` | The custom authority for TLS communication. This doesn't change what host is actually connected. This is intended for testing, but may safely be used outside of tests as an alternative to DNS overrides. For example, you can specify the hostname presented in the certificate chain file that you set for `scalar.db.cluster.node.tls.cert_chain_path`. | |
+
+To specify the CA root certificate, you should set either `scalar.db.cluster.tls.ca_root_cert_pem` or `scalar.db.cluster.tls.ca_root_cert_path`. If you set both, `scalar.db.cluster.tls.ca_root_cert_pem` will be used.
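+
+For example, a minimal client-side TLS configuration might look like the following. The file path and authority below are placeholder values and must correspond to the certificates that you configured on the ScalarDB Cluster nodes:
+
+```properties
+scalar.db.cluster.tls.enabled=true
+scalar.db.cluster.tls.ca_root_cert_path=/path/to/ca-root-cert.pem
+scalar.db.cluster.tls.override_authority=cluster.scalardb.example.com
+```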
diff --git a/versioned_docs/version-3.X/scalardb-cluster/getting-started-with-scalardb-cluster-dotnet.mdx b/versioned_docs/version-3.X/scalardb-cluster/getting-started-with-scalardb-cluster-dotnet.mdx
new file mode 100644
index 00000000..d06000d8
--- /dev/null
+++ b/versioned_docs/version-3.X/scalardb-cluster/getting-started-with-scalardb-cluster-dotnet.mdx
@@ -0,0 +1,439 @@
+---
+tags:
+ - Enterprise Standard
+ - Enterprise Premium
+displayed_sidebar: docsEnglish
+---
+
+# Getting Started with ScalarDB Cluster via .NET
+
+This tutorial describes how to create a sample application that uses [ScalarDB Cluster](./index.mdx) through the .NET API.
+
+## Overview
+
+This tutorial illustrates the process of creating a sample e-commerce application, where items can be ordered and paid for with a line of credit by using ScalarDB.
+
+:::note
+
+Since the focus of the sample application is to demonstrate using ScalarDB, application-specific error handling, authentication processing, and similar functions are not included in the sample application. For details about exception handling, see [Exception Handling in the ScalarDB Cluster .NET Client SDK](../scalardb-cluster-dotnet-client-sdk/exception-handling.mdx).
+
+:::
+
+The following diagram shows the system architecture of the sample application:
+
+```mermaid
+stateDiagram-v2
+ state "Sample application using the .NET API" as SA
+ state "Kubernetes Cluster" as KC
+ state "Service (Envoy)" as SE
+ state "Pod" as P1
+ state "Pod" as P2
+ state "Pod" as P3
+ state "Envoy" as E1
+ state "Envoy" as E2
+ state "Envoy" as E3
+ state "Service (ScalarDB Cluster)" as SSC
+ state "ScalarDB Cluster" as SC1
+ state "ScalarDB Cluster" as SC2
+ state "ScalarDB Cluster" as SC3
+ state "PostgreSQL" as PSQL
+ SA --> SE
+ state KC {
+ SE --> E1
+ SE --> E2
+ SE --> E3
+ state P1 {
+ E1 --> SSC
+ E2 --> SSC
+ E3 --> SSC
+ }
+ SSC --> SC1
+ SSC --> SC2
+ SSC --> SC3
+ state P2 {
+ SC1 --> PSQL
+ SC1 --> SC2
+ SC1 --> SC3
+ SC2 --> PSQL
+ SC2 --> SC1
+ SC2 --> SC3
+ SC3 --> PSQL
+ SC3 --> SC1
+ SC3 --> SC2
+ }
+ state P3 {
+ PSQL
+ }
+ }
+```
+
+### What you can do in this sample application
+
+The sample application supports the following types of transactions:
+
+- Get customer information.
+- Place an order by using a line of credit.
+ - Checks if the cost of the order is below the customer's credit limit.
+ - If the check passes, records the order history and updates the amount the customer has spent.
+- Get order information by order ID.
+- Get order information by customer ID.
+- Make a payment.
+ - Reduces the amount the customer has spent.
+
+## Prerequisites for this sample application
+
+- [.NET SDK 8.0](https://dotnet.microsoft.com/en-us/download/dotnet/8.0)
+- ScalarDB Cluster running on a Kubernetes cluster
+ - We assume that you have a ScalarDB Cluster running on a Kubernetes cluster that you deployed by following the instructions in [Set Up ScalarDB Cluster on Kubernetes by Using a Helm Chart](setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx).
+
+:::note
+
+.NET SDK 8.0 is the version used to create the sample application. For information about all supported versions, see [Requirements](../requirements.mdx#net).
+
+:::
+
+## Set up ScalarDB Cluster
+
+The following sections describe how to set up the sample e-commerce application.
+
+### Clone the ScalarDB samples repository
+
+Open **Terminal**, then clone the ScalarDB samples repository by running the following command:
+
+```console
+git clone https://github.com/scalar-labs/scalardb-samples
+```
+
+Then, go to the directory that contains the sample application by running the following command:
+
+```console
+cd scalardb-samples/scalardb-dotnet-samples/scalardb-cluster-sample
+```
+
+### Update the referenced version of the ScalarDB.Client package
+
+To use ScalarDB Cluster, open `ScalarDbClusterSample.csproj` in your preferred text editor. Then, update the version of the referenced `ScalarDB.Client` package, replacing `<MAJOR>.<MINOR>` with the version of the deployed ScalarDB Cluster:
+
+```xml
+