diff --git a/docs/api-guide.mdx b/docs/api-guide.mdx
index 7be336f1..09b28598 100644
--- a/docs/api-guide.mdx
+++ b/docs/api-guide.mdx
@@ -370,14 +370,14 @@ DistributedTransaction transaction = transactionManager.start();
 Alternatively, you can use the `begin` method for a transaction by specifying a transaction ID as follows:

 ```java
-// Begin a transaction by specifying a transaction ID.
+// Begin a transaction by specifying a transaction ID.
 DistributedTransaction transaction = transactionManager.begin("<TRANSACTION_ID>");
 ```

 Or, you can use the `start` method for a transaction by specifying a transaction ID as follows:

 ```java
-// Start a transaction by specifying a transaction ID.
+// Start a transaction by specifying a transaction ID.
 DistributedTransaction transaction = transactionManager.start("<TRANSACTION_ID>");
 ```

@@ -389,48 +389,6 @@ When you specify a transaction ID, make sure you specify a unique ID (for exampl

 :::

-##### Begin or start a transaction in read-only mode
-
-You can also begin or start a transaction in read-only mode. In this case, the transaction will not allow any write operations, and it will be optimized for read operations.
-
-:::note
-
-Using read-only transactions for read-only operations is strongly recommended to improve performance and reduce resource usage.
-
-:::
-
-You can begin or start a transaction in read-only mode as follows:
-
-```java
-// Begin a transaction in read-only mode.
-DistributedTransaction transaction = transactionManager.beginReadOnly();
-```
-
-```java
-// Start a transaction in read-only mode.
-DistributedTransaction transaction = transactionManager.startReadOnly();
-```
-
-Alternatively, you can use the `beginReadOnly` and `startReadOnly` methods by specifying a transaction ID as follows:
-
-```java
-// Begin a transaction in read-only mode by specifying a transaction ID.
-DistributedTransaction transaction = transactionManager.beginReadOnly("<TRANSACTION_ID>");
-```
-
-```java
-// Start a transaction in read-only mode by specifying a transaction ID.
-DistributedTransaction transaction = transactionManager.startReadOnly("<TRANSACTION_ID>");
-```
-
-:::note
-
-Specifying a transaction ID is useful when you want to link external systems to ScalarDB. Otherwise, you should use the `beginReadOnly()` method or the `startReadOnly()` method.
-
-When you specify a transaction ID, make sure you specify a unique ID (for example, UUID v4) throughout the system since ScalarDB depends on the uniqueness of transaction IDs for correctness.
-
-:::
-
 #### Join a transaction

 Joining a transaction is particularly useful in a stateful application where a transaction spans multiple client requests. In such a scenario, the application can start a transaction during the first client request. Then, in subsequent client requests, the application can join the ongoing transaction by using the `join()` method.
@@ -671,14 +629,9 @@ If the result has more than one record, `transaction.get()` will throw an except

 ##### `Scan` operation

-`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations. To execute a `Scan` operation, you can use the `transaction.scan()` method or the `transaction.getScanner()` method:
-
-- `transaction.scan()`:
-  - This method immediately executes the given `Scan` operation and returns a list of all matching records. It is suitable when the result set is expected to be small enough to fit in memory.
-- `transaction.getScanner()`:
-  - This method returns a `Scanner` object that allows you to iterate over the result set lazily. It is useful when the result set may be large, as it avoids loading all records into memory at once.
+`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations.

-You need to create a `Scan` object first, and then you can execute the object by using the `transaction.scan()` method or the `transaction.getScanner()` method as follows:
+You need to create a `Scan` object first, and then you can execute the object by using the `transaction.scan()` method as follows:

 ```java
 // Create a `Scan` operation.
@@ -699,17 +652,8 @@ Scan scan =
     .limit(10)
     .build();

-// Execute the `Scan` operation by using the `transaction.scan()` method.
+// Execute the `Scan` operation.
 List<Result> results = transaction.scan(scan);
-
-// Or, execute the `Scan` operation by using the `transaction.getScanner()` method.
-try (TransactionCrudOperable.Scanner scanner = transaction.getScanner(scan)) {
-  // Fetch the next result from the scanner
-  Optional<Result> result = scanner.one();
-
-  // Fetch all remaining results from the scanner
-  List<Result> allResults = scanner.all();
-}
 ```

 You can omit the clustering-key boundaries or specify either a `start` boundary or an `end` boundary. If you don't specify `orderings`, you will get results ordered by the clustering order that you defined when creating the table.
@@ -1348,14 +1292,9 @@ For details about the `Get` operation, see [`Get` operation](#get-operation).

 #### Execute `Scan` operation

-`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations. To execute a `Scan` operation, you can use the `transactionManager.scan()` method or the `transactionManager.getScanner()` method:
-
-- `transactionManager.scan()`:
-  - This method immediately executes the given `Scan` operation and returns a list of all matching records. It is suitable when the result set is expected to be small enough to fit in memory.
-- `transactionManager.getScanner()`:
-  - This method returns a `Scanner` object that allows you to iterate over the result set lazily. It is useful when the result set may be large, as it avoids loading all records into memory at once.
+`Scan` is an operation to retrieve multiple records within a partition. You can specify clustering-key boundaries and orderings for clustering-key columns in `Scan` operations.

-You need to create a `Scan` object first, and then you can execute the object by using the `transactionManager.scan()` method or the `transactionManager.getScanner()` method as follows:
+You need to create a `Scan` object first, and then you can execute the object by using the `transactionManager.scan()` method as follows:

 ```java
 // Create a `Scan` operation.
@@ -1375,17 +1314,8 @@ Scan scan =
     .limit(10)
     .build();

-// Execute the `Scan` operation by using the `transactionManager.scan()` method.
+// Execute the `Scan` operation.
 List<Result> results = transactionManager.scan(scan);
-
-// Or, execute the `Scan` operation by using the `transactionManager.getScanner()` method.
-try (TransactionManagerCrudOperable.Scanner scanner = transactionManager.getScanner(scan)) {
-  // Fetch the next result from the scanner
-  Optional<Result> result = scanner.one();
-
-  // Fetch all remaining results from the scanner
-  List<Result> allResults = scanner.all();
-}
 ```

 For details about the `Scan` operation, see [`Scan` operation](#scan-operation).
diff --git a/docs/backup-restore.mdx b/docs/backup-restore.mdx
index 0efff032..55db1dbf 100644
--- a/docs/backup-restore.mdx
+++ b/docs/backup-restore.mdx
@@ -66,9 +66,6 @@ The backup methods by database listed below are just examples of some of the dat

 Clusters are backed up automatically based on the backup policy, and these backups are retained for a specific duration. You can also perform on-demand backups. For details on performing backups, see [YugabyteDB Managed: Back up and restore clusters](https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-clusters/backup-clusters/).

-
-Use the `backup` command. For details, on performing backups, see [Db2: Backup overview](https://www.ibm.com/docs/en/db2/12.1.0?topic=recovery-backup).
-

 ### Back up with explicit pausing

@@ -178,7 +175,4 @@ The restore methods by database listed below are just examples of some of the da

 You can restore from the scheduled or on-demand backup within the backup retention period. For details on performing backups, see [YugabyteDB Managed: Back up and restore clusters](https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-clusters/backup-clusters/).

-
-Use the `restore` command. For details, on restoring the database, see [Db2: Restore overview](https://www.ibm.com/docs/en/db2/12.1.0?topic=recovery-restore).
-
diff --git a/docs/configurations.mdx b/docs/configurations.mdx
index 3e36d53f..5a05fd51 100644
--- a/docs/configurations.mdx
+++ b/docs/configurations.mdx
@@ -23,12 +23,13 @@ If you are using ScalarDB Cluster, please refer to [ScalarDB Cluster Configurati

 The following configurations are available for the Consensus Commit transaction manager:

-| Name | Description | Default |
-|------|-------------|---------|
-| `scalar.db.transaction_manager` | Transaction manager of ScalarDB. Specify `consensus-commit` to use [Consensus Commit](./consensus-commit.mdx) or `single-crud-operation` to [run non-transactional storage operations](./run-non-transactional-storage-operations-through-library.mdx). Note that the configurations under the `scalar.db.consensus_commit` prefix are ignored if you use `single-crud-operation`. | `consensus-commit` |
-| `scalar.db.consensus_commit.isolation_level` | Isolation level used for Consensus Commit. Either `SNAPSHOT`, `SERIALIZABLE`, or `READ_COMMITTED` can be specified. | `SNAPSHOT` |
-| `scalar.db.consensus_commit.coordinator.namespace` | Namespace name of Coordinator tables. | `coordinator` |
-| `scalar.db.consensus_commit.include_metadata.enabled` | If set to `true`, `Get` and `Scan` operations results will contain transaction metadata. To see the transaction metadata columns details for a given table, you can use the `DistributedTransactionAdmin.getTableMetadata()` method, which will return the table metadata augmented with the transaction metadata columns. Using this configuration can be useful to investigate transaction-related issues. | `false` |
+| Name | Description | Default |
+|------|-------------|---------|
+| `scalar.db.transaction_manager` | Transaction manager of ScalarDB. Specify `consensus-commit` to use [Consensus Commit](./consensus-commit.mdx) or `single-crud-operation` to [run non-transactional storage operations](./run-non-transactional-storage-operations-through-library.mdx). Note that the configurations under the `scalar.db.consensus_commit` prefix are ignored if you use `single-crud-operation`. | `consensus-commit` |
+| `scalar.db.consensus_commit.isolation_level` | Isolation level used for Consensus Commit. Either `SNAPSHOT` or `SERIALIZABLE` can be specified. | `SNAPSHOT` |
+| `scalar.db.consensus_commit.serializable_strategy` | Serializable strategy used for Consensus Commit. Either `EXTRA_READ` or `EXTRA_WRITE` can be specified. If `SNAPSHOT` is specified in the property `scalar.db.consensus_commit.isolation_level`, this configuration will be ignored. | `EXTRA_READ` |
+| `scalar.db.consensus_commit.coordinator.namespace` | Namespace name of Coordinator tables. | `coordinator` |
+| `scalar.db.consensus_commit.include_metadata.enabled` | If set to `true`, the results of `Get` and `Scan` operations will contain transaction metadata. To see the transaction metadata column details for a given table, you can use the `DistributedTransactionAdmin.getTableMetadata()` method, which will return the table metadata augmented with the transaction metadata columns. Using this configuration can be useful to investigate transaction-related issues. | `false` |

 ## Performance-related configurations

@@ -44,8 +45,6 @@ The following performance-related configurations are available for the Consensus
 | `scalar.db.consensus_commit.async_commit.enabled` | Whether or not the commit phase is executed asynchronously. | `false` |
 | `scalar.db.consensus_commit.async_rollback.enabled` | Whether or not the rollback phase is executed asynchronously. | The value of `scalar.db.consensus_commit.async_commit.enabled` |
 | `scalar.db.consensus_commit.parallel_implicit_pre_read.enabled` | Whether or not implicit pre-read is executed in parallel. | `true` |
-| `scalar.db.consensus_commit.one_phase_commit.enabled` | Whether or not the one-phase commit optimization is enabled. | `false` |
-| `scalar.db.consensus_commit.coordinator.write_omission_on_read_only.enabled` | Whether or not the write omission optimization is enabled for read-only transactions. This optimization is useful for read-only transactions that do not modify any data, as it avoids unnecessary writes to the Coordinator tables. | `true` |
 | `scalar.db.consensus_commit.coordinator.group_commit.enabled` | Whether or not committing the transaction state is executed in batch mode. This feature can't be used with a two-phase commit interface. | `false` |
 | `scalar.db.consensus_commit.coordinator.group_commit.slot_capacity` | Maximum number of slots in a group for the group commit feature. A large value improves the efficiency of group commit, but may also increase latency and the likelihood of transaction conflicts.[^1] | `20` |
 | `scalar.db.consensus_commit.coordinator.group_commit.group_size_fix_timeout_millis` | Timeout to fix the size of slots in a group. A large value improves the efficiency of group commit, but may also increase latency and the likelihood of transaction conflicts.[^1] | `40` |
@@ -64,30 +63,28 @@ Select a database to see the configurations available for each storage.

 The following configurations are available for JDBC databases:

-| Name | Description | Default |
-|------|-------------|---------|
-| `scalar.db.storage` | `jdbc` must be specified. | - |
-| `scalar.db.contact_points` | JDBC connection URL. | |
-| `scalar.db.username` | Username to access the database. | |
-| `scalar.db.password` | Password to access the database. | |
-| `scalar.db.jdbc.connection_pool.min_idle` | Minimum number of idle connections in the connection pool. | `20` |
-| `scalar.db.jdbc.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool. | `50` |
-| `scalar.db.jdbc.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool. Use a negative value for no limit. | `100` |
-| `scalar.db.jdbc.prepared_statements_pool.enabled` | Setting this property to `true` enables prepared-statement pooling. | `false` |
-| `scalar.db.jdbc.prepared_statements_pool.max_open` | Maximum number of open statements that can be allocated from the statement pool at the same time. Use a negative value for no limit. | `-1` |
-| `scalar.db.jdbc.isolation_level` | Isolation level for JDBC. `READ_UNCOMMITTED`, `READ_COMMITTED`, `REPEATABLE_READ`, or `SERIALIZABLE` can be specified. | Underlying-database specific |
-| `scalar.db.jdbc.table_metadata.schema` | Schema name for the table metadata used for ScalarDB. | `scalardb` |
-| `scalar.db.jdbc.table_metadata.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for the table metadata. | `5` |
-| `scalar.db.jdbc.table_metadata.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for the table metadata. | `10` |
-| `scalar.db.jdbc.table_metadata.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for the table metadata. Use a negative value for no limit. | `25` |
-| `scalar.db.jdbc.admin.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for admin. | `5` |
-| `scalar.db.jdbc.admin.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for admin. | `10` |
-| `scalar.db.jdbc.admin.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for admin. Use a negative value for no limit. | `25` |
-| `scalar.db.jdbc.mysql.variable_key_column_size` | Column size for TEXT and BLOB columns in MySQL when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
-| `scalar.db.jdbc.oracle.variable_key_column_size` | Column size for TEXT and BLOB columns in Oracle when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
-| `scalar.db.jdbc.oracle.time_column.default_date_component` | Value of the date component used for storing `TIME` data in Oracle. Since Oracle has no data type to only store a time without a date component, ScalarDB stores `TIME` data with the same date component value for ease of comparison and sorting. | `1970-01-01` |
-| `scalar.db.jdbc.db2.variable_key_column_size` | Column size for TEXT and BLOB columns in IBM Db2 when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
-| `scalar.db.jdbc.db2.time_column.default_date_component` | Value of the date component used for storing `TIME` data in IBM Db2. Since the IBM Db2 TIMESTAMP type is used to store ScalarDB `TIME` type data because it provides fractional-second precision, ScalarDB stores `TIME` data with the same date component value for ease of comparison and sorting. | `1970-01-01` |
+| Name | Description | Default |
+|------|-------------|---------|
+| `scalar.db.storage` | `jdbc` must be specified. | - |
+| `scalar.db.contact_points` | JDBC connection URL. | |
+| `scalar.db.username` | Username to access the database. | |
+| `scalar.db.password` | Password to access the database. | |
+| `scalar.db.jdbc.connection_pool.min_idle` | Minimum number of idle connections in the connection pool. | `20` |
+| `scalar.db.jdbc.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool. | `50` |
+| `scalar.db.jdbc.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool. Use a negative value for no limit. | `100` |
+| `scalar.db.jdbc.prepared_statements_pool.enabled` | Setting this property to `true` enables prepared-statement pooling. | `false` |
+| `scalar.db.jdbc.prepared_statements_pool.max_open` | Maximum number of open statements that can be allocated from the statement pool at the same time. Use a negative value for no limit. | `-1` |
+| `scalar.db.jdbc.isolation_level` | Isolation level for JDBC. `READ_UNCOMMITTED`, `READ_COMMITTED`, `REPEATABLE_READ`, or `SERIALIZABLE` can be specified. | Underlying-database specific |
+| `scalar.db.jdbc.table_metadata.schema` | Schema name for the table metadata used for ScalarDB. | `scalardb` |
+| `scalar.db.jdbc.table_metadata.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for the table metadata. | `5` |
+| `scalar.db.jdbc.table_metadata.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for the table metadata. | `10` |
+| `scalar.db.jdbc.table_metadata.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for the table metadata. Use a negative value for no limit. | `25` |
+| `scalar.db.jdbc.admin.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for admin. | `5` |
+| `scalar.db.jdbc.admin.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for admin. | `10` |
+| `scalar.db.jdbc.admin.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for admin. Use a negative value for no limit. | `25` |
+| `scalar.db.jdbc.mysql.variable_key_column_size` | Column size for TEXT and BLOB columns in MySQL when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
+| `scalar.db.jdbc.oracle.variable_key_column_size` | Column size for TEXT and BLOB columns in Oracle when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
+| `scalar.db.jdbc.oracle.time_column.default_date_component` | Value of the date component used for storing `TIME` data in Oracle. Since Oracle has no data type to only store a time without a date component, ScalarDB stores `TIME` data with the same date component value for ease of comparison and sorting. | `1970-01-01` |

 :::note

@@ -177,23 +174,15 @@ For non-JDBC databases, transactions could be executed at read-committed snapsho
 | `scalar.db.cross_partition_scan.filtering.enabled` | Enable filtering in cross-partition scan. | `false` |
 | `scalar.db.cross_partition_scan.ordering.enabled` | Enable ordering in cross-partition scan. | `false` |

-##### Scan fetch size
-
-You can configure the fetch size for storage scan operations by using the following property:
-
-| Name | Description | Default |
-|------|-------------|---------|
-| `scalar.db.scan_fetch_size` | Specifies the number of records to fetch in a single batch during a storage scan operation. A larger value can improve performance for a large result set by reducing round trips to the storage, but it also increases memory usage. A smaller value uses less memory but may increase latency. | `10` |
-
 ## Other ScalarDB configurations

 The following are additional configurations available for ScalarDB:

-| Name | Description | Default |
-|------|-------------|---------|
-| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` |
-| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB maintains in-progress transactions, which can be resumed by using a transaction ID. This process expires transactions that have been idle for an extended period to prevent resource leaks. This setting specifies the expiration time of this transaction management feature in milliseconds. | `-1` (no expiration) |
-| `scalar.db.default_namespace_name` | The given namespace name will be used by operations that do not already specify a namespace. | |
+| Name | Description | Default |
+|------|-------------|---------|
+| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` |
+| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB maintains ongoing transactions, which can be resumed by using a transaction ID. This setting specifies the expiration time of this transaction management feature in milliseconds. | `-1` (no expiration) |
+| `scalar.db.default_namespace_name` | The given namespace name will be used by operations that do not already specify a namespace. | |

 ## Placeholder usage

diff --git a/docs/consensus-commit.mdx b/docs/consensus-commit.mdx
index 16df6dc7..d61d31fb 100644
--- a/docs/consensus-commit.mdx
+++ b/docs/consensus-commit.mdx
@@ -81,14 +81,11 @@ ScalarDB checks conflicting preparations by using linearizable conditional write

 ScalarDB then moves on to the validate-records phase as necessary. The validate-records phase is only necessary if the isolation level is set to SERIALIZABLE. In this phase, ScalarDB re-reads all the records in the read set to see if other transactions have written the records that the transaction has read before. If the read set has not been changed, the transaction can go to the commit-state phase since there are no anti-dependencies; otherwise, it aborts the transaction.
 ##### Commit phase
-
 If all the validations in the prepare phase are done successfully, ScalarDB commits the transaction by writing a COMMITTED state record to the Coordinator table as the commit-state phase.

 :::note

-* ScalarDB uses linearizable conditional writes to coordinate concurrent writes to the Coordinator table, creating a state record with a TxID if there is no record for the TxID. Once the COMMITTED state is correctly written to the Coordinator table, the transaction is regarded as committed.
-* By default, if a transaction contains only read operations, ScalarDB skips the commit-state phase. However, you can configure ScalarDB to write a COMMITTED state record to the Coordinator table even for read-only transactions by setting the following parameter to `false`:
-  * `scalar.db.consensus_commit.coordinator.write_omission_on_read_only.enabled`
+ScalarDB uses linearizable conditional writes to coordinate concurrent writes to the Coordinator table, creating a state record with a TxID if there is no record for the TxID. Once the COMMITTED state is correctly written to the Coordinator table, the transaction is regarded as committed.

 :::

@@ -116,7 +113,7 @@ A transaction expires after a certain amount of time (currently 15 seconds). Whe

 ## Isolation levels

-The Consensus Commit protocol supports three isolation levels: read-committed snapshot isolation (a weaker variant of snapshot isolation), serializable, and read-committed, each of which has the following characteristics:
+The Consensus Commit protocol supports two isolation levels: read-committed snapshot isolation (a weaker variant of snapshot isolation) and serializable, each of which has the following characteristics:

 * Read-committed snapshot isolation (SNAPSHOT - default)
   * Possible anomalies: read skew, write skew, read only
@@ -124,11 +121,8 @@ The Consensus Commit protocol supports three isolation levels: read-committed sn
 * Serializable (SERIALIZABLE)
   * Possible anomalies: None
   * Slower than read-committed snapshot isolation, but guarantees stronger (strongest) correctness.
-* Read-committed (READ_COMMITTED)
-  * Possible anomalies: read skew, write skew, read only
-  * Faster than read-committed snapshot isolation because it could return non-latest committed records.

-As described above, serializable is preferable from a correctness perspective, but slower than read-committed snapshot isolation. Choose the appropriate one based on your application requirements and workload. For details on how to configure read-committed snapshot isolation, serializable, and read-committed, see [ScalarDB Configuration](configurations.mdx#basic-configurations).
+As described above, serializable is preferable from a correctness perspective, but slower than read-committed snapshot isolation. Choose the appropriate one based on your application requirements and workload. For details on how to configure read-committed snapshot isolation and serializable, see [ScalarDB Configuration](configurations.mdx#basic-configurations).
:::note @@ -192,14 +186,6 @@ You can enable respective asynchronous execution by using the following paramete * Rollback processing * `scalar.db.consensus_commit.async_rollback.enabled` -### One-phase commit - -With one-phase commit optimization, ScalarDB can omit the prepare-records and commit-state phases without sacrificing correctness, provided that the transaction only updates records that the underlying database can atomically update. - -You can enable one-phase commit optimization by using the following parameter: - -* `scalar.db.consensus_commit.one_phase_commit.enabled` - ### Group commit Consensus Commit provides a group-commit feature to execute the commit-state phase of multiple transactions in a batch, reducing the number of writes for the commit-state phase. It is especially useful when writing to a Coordinator table is slow, for example, when the Coordinator table is deployed in a multi-region environment for high availability. diff --git a/docs/getting-started-with-scalardb.mdx b/docs/getting-started-with-scalardb.mdx index 0c7bcd8f..2e21cefc 100644 --- a/docs/getting-started-with-scalardb.mdx +++ b/docs/getting-started-with-scalardb.mdx @@ -143,29 +143,6 @@ For a list of databases that ScalarDB supports, see [Databases](requirements.mdx scalar.db.password=SqlServer22 ``` - -

Run Db2 locally

- - You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory. - - To start IBM Db2, run the following command: - - ```console - docker compose up -d db2 - ``` - -

Configure ScalarDB

- - The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Db2 in the **database.properties** file so that the configuration looks as follows: - - ```properties - # For Db2 - scalar.db.storage=jdbc - scalar.db.contact_points=jdbc:db2://localhost:50000/sample - scalar.db.username=db2inst1 - scalar.db.password=db2inst1 - ``` -

Run Amazon DynamoDB Local

diff --git a/docs/overview.mdx b/docs/overview.mdx index b51df6be..e7a2532e 100644 --- a/docs/overview.mdx +++ b/docs/overview.mdx @@ -18,7 +18,7 @@ ScalarDB is a universal hybrid transaction/analytical processing (HTAP) engine f As a versatile solution, ScalarDB supports a range of databases, including: -- Relational databases that support JDBC, such as IBM Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, PostgreSQL, SQLite, and their compatible databases, like Amazon Aurora and YugabyteDB. +- Relational databases that support JDBC, such as MariaDB, Microsoft SQL Server, MySQL, Oracle Database, PostgreSQL, SQLite, and their compatible databases, like Amazon Aurora and YugabyteDB. - NoSQL databases like Amazon DynamoDB, Apache Cassandra, and Azure Cosmos DB. For details on which databases ScalarDB supports, refer to [Databases](requirements.mdx#databases). diff --git a/docs/requirements.mdx b/docs/requirements.mdx index a5fdb19b..838b9715 100644 --- a/docs/requirements.mdx +++ b/docs/requirements.mdx @@ -51,7 +51,6 @@ ScalarDB is middleware that runs on top of the following databases and their ver | Version | Oracle Database 23ai | Oracle Database 21c | Oracle Database 19c | |:------------------|:--------------------|:------------------|:------------------| -| **ScalarDB 3.16** | ✅ | ✅ | ✅ | | **ScalarDB 3.15** | ✅ | ✅ | ✅ | | **ScalarDB 3.14** | ✅ | ✅ | ✅ | | **ScalarDB 3.13** | ✅ | ✅ | ✅ | @@ -62,34 +61,11 @@ ScalarDB is middleware that runs on top of the following databases and their ver | **ScalarDB 3.8** | ✅ | ✅ | ✅ | | **ScalarDB 3.7** | ✅ | ✅ | ✅ | -
-
-
-| Version | Db2 12.1 | Db2 11.5 |
-|:------------------|:---------|:---------|
-| **ScalarDB 3.16** | ✅ | ✅ |
-| **ScalarDB 3.15** | ❌ | ❌ |
-| **ScalarDB 3.14** | ❌ | ❌ |
-| **ScalarDB 3.13** | ❌ | ❌ |
-| **ScalarDB 3.12** | ❌ | ❌ |
-| **ScalarDB 3.11** | ❌ | ❌ |
-| **ScalarDB 3.10** | ❌ | ❌ |
-| **ScalarDB 3.9** | ❌ | ❌ |
-| **ScalarDB 3.8** | ❌ | ❌ |
-| **ScalarDB 3.7** | ❌ | ❌ |
-
-:::note
-
-Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is not currently supported.
-
-:::
-
 | Version | MySQL 8.4 | MySQL 8.0 |
 |:------------------|:----------|:-----------|
-| **ScalarDB 3.16** | ✅ | ✅ |
 | **ScalarDB 3.15** | ✅ | ✅ |
 | **ScalarDB 3.14** | ✅ | ✅ |
 | **ScalarDB 3.13** | ✅ | ✅ |
@@ -105,7 +81,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | PostgreSQL 17 | PostgreSQL 16 | PostgreSQL 15 | PostgreSQL 14 | PostgreSQL 13 |
 |:------------------|:--------------|:--------------|:--------------|:--------------|---------------|
-| **ScalarDB 3.16** | ✅ | ✅ | ✅ | ✅ | ✅ |
 | **ScalarDB 3.15** | ✅ | ✅ | ✅ | ✅ | ✅ |
 | **ScalarDB 3.14** | ✅ | ✅ | ✅ | ✅ | ✅ |
 | **ScalarDB 3.13** | ✅ | ✅ | ✅ | ✅ | ✅ |
@@ -121,7 +96,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | Aurora MySQL 3 | Aurora MySQL 2 |
 |:------------------|:----------------|:----------------|
-| **ScalarDB 3.16** | ✅ | ✅ |
 | **ScalarDB 3.15** | ✅ | ✅ |
 | **ScalarDB 3.14** | ✅ | ✅ |
 | **ScalarDB 3.13** | ✅ | ✅ |
@@ -137,7 +111,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | Aurora PostgreSQL 16 | Aurora PostgreSQL 15 | Aurora PostgreSQL 14 | Aurora PostgreSQL 13 |
 |:------------------|:---------------------|:---------------------|:---------------------|:---------------------|
-| **ScalarDB 3.16** | ✅ | ✅ | ✅ | ✅ |
 | **ScalarDB 3.15** | ✅ | ✅ | ✅ | ✅ |
 | **ScalarDB 3.14** | ✅ | ✅ | ✅ | ✅ |
 | **ScalarDB 3.13** | ✅ | ✅ | ✅ | ✅ |
@@ -153,7 +126,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | MariaDB 11.4 | MariaDB 10.11 |
 |:------------------|:--------------|:--------------|
-| **ScalarDB 3.16** | ✅ | ✅ |
 | **ScalarDB 3.15** | ✅ | ✅ |
 | **ScalarDB 3.14** | ✅ | ✅ |
 | **ScalarDB 3.13** | ✅ | ✅ |
@@ -169,7 +141,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | SQL Server 2022 | SQL Server 2019 | SQL Server 2017 |
 |:------------------|:-----------------|:-----------------|:-----------------|
-| **ScalarDB 3.16** | ✅ | ✅ | ✅ |
 | **ScalarDB 3.15** | ✅ | ✅ | ✅ |
 | **ScalarDB 3.14** | ✅ | ✅ | ✅ |
 | **ScalarDB 3.13** | ✅ | ✅ | ✅ |
@@ -185,7 +156,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | SQLite 3 |
 |:------------------|:----------|
-| **ScalarDB 3.16** | ✅ |
 | **ScalarDB 3.15** | ✅ |
 | **ScalarDB 3.14** | ✅ |
 | **ScalarDB 3.13** | ✅ |
@@ -201,7 +171,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | YugabyteDB 2 |
 |:------------------|:-------------|
-| **ScalarDB 3.16** | ✅ |
 | **ScalarDB 3.15** | ✅ |
 | **ScalarDB 3.14** | ✅ |
 | **ScalarDB 3.13** | ✅ |
@@ -222,7 +191,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | DynamoDB |
 |:------------------|:----------|
-| **ScalarDB 3.16** | ✅ |
 | **ScalarDB 3.15** | ✅ |
 | **ScalarDB 3.14** | ✅ |
 | **ScalarDB 3.13** | ✅ |
@@ -238,7 +206,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | Cassandra 4.1 | Cassandra 4.0 | Cassandra 3.11 | Cassandra 3.0 |
 |:------------------|:---------------|:---------------|:----------------|:---------------|
-| **ScalarDB 3.16** | ❌ | ❌ | ✅ | ✅ |
 | **ScalarDB 3.15** | ❌ | ❌ | ✅ | ✅ |
 | **ScalarDB 3.14** | ❌ | ❌ | ✅ | ✅ |
 | **ScalarDB 3.13** | ❌ | ❌ | ✅ | ✅ |
@@ -254,7 +221,6 @@ Only Linux, UNIX, and Windows versions of Db2 are supported. The z/OS version is
 
 | Version | Cosmos DB for NoSQL |
 |:------------------|:---------------------|
-| **ScalarDB 3.16** | ✅ |
 | **ScalarDB 3.15** | ✅ |
 | **ScalarDB 3.14** | ✅ |
 | **ScalarDB 3.13** | ✅ |
diff --git a/docs/run-non-transactional-storage-operations-through-library.mdx b/docs/run-non-transactional-storage-operations-through-library.mdx
index ea97ee58..4bee033f 100644
--- a/docs/run-non-transactional-storage-operations-through-library.mdx
+++ b/docs/run-non-transactional-storage-operations-through-library.mdx
@@ -130,29 +130,6 @@ For a list of databases that ScalarDB supports, see [Databases](requirements.mdx
     scalar.db.password=SqlServer22
     ```
 
-
-

Run Db2 locally

-
-  You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
-
-  To start IBM Db2, run the following command:
-
-  ```console
-  docker compose up -d db2
-  ```
-
-

Configure ScalarDB

-
-  The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Db2 in the **database.properties** file so that the configuration looks as follows:
-
-  ```properties
-  # For Db2
-  scalar.db.storage=jdbc
-  scalar.db.contact_points=jdbc:db2://localhost:50000/sample
-  scalar.db.username=db2inst1
-  scalar.db.password=db2inst1
-  ```
-
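The contact point in the removed block above is a standard Db2 JDBC URL of the form `jdbc:db2://<host>:<port>/<database>`. As a stdlib-only illustration (not ScalarDB code; the class name is made up), its parts can be recovered with `java.net.URI` once the `jdbc:` prefix is stripped:

```java
import java.net.URI;

public class Db2UrlParts {
    public static void main(String[] args) {
        String jdbcUrl = "jdbc:db2://localhost:50000/sample";
        // java.net.URI cannot parse the two-scheme jdbc:db2: form directly,
        // so drop the leading "jdbc:" first.
        URI uri = URI.create(jdbcUrl.substring("jdbc:".length()));
        System.out.println(uri.getHost());              // localhost
        System.out.println(uri.getPort());              // 50000
        System.out.println(uri.getPath().substring(1)); // sample
    }
}
```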

Run Amazon DynamoDB Local

@@ -260,7 +237,7 @@ Select your build tool, and follow the instructions to add the build dependency 
 ```gradle
 dependencies {
-    implementation 'com.scalar-labs:scalardb:3.16.0'
+    implementation 'com.scalar-labs:scalardb:3.15.4'
 }
 ```
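This patch pins `3.15.4` in place of `3.16.0` across every artifact. For anyone scripting a similar rollback check, dotted numeric release versions of this form compare segment by segment; the following is a stdlib-only sketch (the helper and class names are illustrative, not part of any ScalarDB tooling):

```java
import java.util.Arrays;

public class SemverCompare {
    // Compare dotted numeric versions such as "3.15.4" segment by segment;
    // missing trailing segments are treated as zero.
    static int compare(String a, String b) {
        int[] x = Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray();
        int[] y = Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray();
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? x[i] : 0;
            int yi = i < y.length ? y[i] : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    public static void main(String[] args) {
        // 3.15.4 sorts before 3.16.0: the second segment decides (15 < 16).
        System.out.println(compare("3.15.4", "3.16.0") < 0); // true
    }
}
```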
@@ -271,7 +248,7 @@ Select your build tool, and follow the instructions to add the build dependency 
 ```xml
 <dependency>
     <groupId>com.scalar-labs</groupId>
     <artifactId>scalardb</artifactId>
-    <version>3.16.0</version>
+    <version>3.15.4</version>
 </dependency>
 ```
@@ -292,4 +269,4 @@ The following limitations apply to non-transactional storage operations:
 
 ### Learn more
 
-- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.16.0/index.html)
+- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.15.4/index.html)
diff --git a/docs/run-transactions-through-scalardb-core-library.mdx b/docs/run-transactions-through-scalardb-core-library.mdx
index b448d61f..d6579d15 100644
--- a/docs/run-transactions-through-scalardb-core-library.mdx
+++ b/docs/run-transactions-through-scalardb-core-library.mdx
@@ -130,29 +130,6 @@ For a list of databases that ScalarDB supports, see [Databases](requirements.mdx
     scalar.db.password=SqlServer22
     ```
 
-
-

Run Db2 locally

-
-  You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-sample` directory.
-
-  To start IBM Db2, run the following command:
-
-  ```console
-  docker compose up -d db2
-  ```
-
-

Configure ScalarDB

-
-  The **database.properties** file in the `scalardb-samples/scalardb-sample` directory contains database configurations for ScalarDB. Please uncomment the properties for Db2 in the **database.properties** file so that the configuration looks as follows:
-
-  ```properties
-  # For Db2
-  scalar.db.storage=jdbc
-  scalar.db.contact_points=jdbc:db2://localhost:50000/sample
-  scalar.db.username=db2inst1
-  scalar.db.password=db2inst1
-  ```
-

Run Amazon DynamoDB Local

diff --git a/docs/scalardb-analytics/run-analytical-queries.mdx b/docs/scalardb-analytics/run-analytical-queries.mdx index 4f4b26aa..d13b86cc 100644 --- a/docs/scalardb-analytics/run-analytical-queries.mdx +++ b/docs/scalardb-analytics/run-analytical-queries.mdx @@ -449,5 +449,6 @@ The following is a list of Spark and Scalar versions supported by each version o | ScalarDB Analytics Version | ScalarDB Version | Spark Versions Supported | Scala Versions Supported | Minimum Java Version | |:---------------------------|:-----------------|:-------------------------|:-------------------------|:---------------------| -| 3.16 | 3.16 | 3.5, 3.4 | 2.13, 2.12 | 8 | | 3.15 | 3.15 | 3.5, 3.4 | 2.13, 2.12 | 8 | +| 3.14 | 3.14 | 3.5, 3.4 | 2.13, 2.12 | 8 | +| 3.12 | 3.12 | 3.5, 3.4 | 2.13, 2.12 | 8 | diff --git a/docs/scalardb-cluster/compatibility.mdx b/docs/scalardb-cluster/compatibility.mdx index 5aafdca9..0b469206 100644 --- a/docs/scalardb-cluster/compatibility.mdx +++ b/docs/scalardb-cluster/compatibility.mdx @@ -13,7 +13,6 @@ This document shows the compatibility of ScalarDB Cluster versions among client | ScalarDB Cluster version | ScalarDB Cluster Java Client SDK version | ScalarDB Cluster .NET Client SDK version | |:-------------------------|:-----------------------------------------|:-----------------------------------------| -| 3.16 | 3.9 - 3.16 | 3.12* - 3.16 | | 3.15 | 3.9 - 3.15 | 3.12* - 3.15 | | 3.14 | 3.9 - 3.14 | 3.12* - 3.14 | | 3.13 | 3.9 - 3.13 | 3.12* - 3.13 | diff --git a/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx b/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx index b4b0830a..964f3316 100644 --- a/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx +++ b/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx @@ -18,7 +18,7 @@ To add a dependency on the ScalarDB Cluster Java Client SDK by using Gradle, use ```gradle dependencies { - 
implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4' } ``` @@ -28,7 +28,7 @@ To add a dependency by using Maven, use the following: com.scalar-labs scalardb-cluster-java-client-sdk - 3.16.0 + 3.15.4 ``` @@ -94,17 +94,17 @@ The following section describes the Schema Loader for ScalarDB Cluster. To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [ScalarDB Schema Loader](../schema-loader.mdx) except the name of the JAR file is different. -You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). +You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). After downloading the JAR file, you can run Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.16.0-all.jar --config --schema-file --coordinator +java -jar scalardb-cluster-schema-loader-3.15.4-all.jar --config --schema-file --coordinator ``` You can also pull the Docker image from the [Scalar container registry](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-schema-loader) by running the following command, replacing the contents in the angle brackets as described: ```console -docker run --rm -v :/scalardb.properties -v :/schema.json ghcr.io/scalar-labs/scalardb-cluster-schema-loader:3.16.0 --config /scalardb.properties --schema-file /schema.json --coordinator +docker run --rm -v :/scalardb.properties -v :/schema.json ghcr.io/scalar-labs/scalardb-cluster-schema-loader:3.15.4 --config /scalardb.properties --schema-file /schema.json --coordinator ``` ## ScalarDB Cluster SQL @@ -144,8 +144,8 @@ To add the dependencies on the ScalarDB 
Cluster JDBC driver by using Gradle, use ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-sql-jdbc:3.16.0' - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0' + implementation 'com.scalar-labs:scalardb-sql-jdbc:3.15.4' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4' } ``` @@ -156,12 +156,12 @@ To add the dependencies by using Maven, use the following: com.scalar-labs scalardb-sql-jdbc - 3.16.0 + 3.15.4 com.scalar-labs scalardb-cluster-java-client-sdk - 3.16.0 + 3.15.4 ``` @@ -179,8 +179,8 @@ To add the dependencies by using Gradle, use the following: ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-sql-spring-data:3.16.0' - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0' + implementation 'com.scalar-labs:scalardb-sql-spring-data:3.15.4' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4' } ``` @@ -191,12 +191,12 @@ To add the dependencies by using Maven, use the following: com.scalar-labs scalardb-sql-spring-data - 3.16.0 + 3.15.4 com.scalar-labs scalardb-cluster-java-client-sdk - 3.16.0 + 3.15.4 ``` @@ -208,16 +208,16 @@ For details about Spring Data JDBC for ScalarDB, see [Guide of Spring Data JDBC Like other SQL databases, ScalarDB SQL also provides a CLI tool where you can issue SQL statements interactively in a command-line shell. -You can download the SQL CLI for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). After downloading the JAR file, you can run the SQL CLI with the following command: +You can download the SQL CLI for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). 
After downloading the JAR file, you can run the SQL CLI with the following command: ```console -java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config +java -jar scalardb-cluster-sql-cli-3.15.4-all.jar --config ``` You can also pull the Docker image from the [Scalar container registry](https://github.com/orgs/scalar-labs/packages/container/package/scalardb-cluster-sql-cli) by running the following command, replacing the contents in the angle brackets as described: ```console -docker run --rm -it -v :/scalardb-sql.properties ghcr.io/scalar-labs/scalardb-cluster-sql-cli:3.16.0 --config /scalardb-sql.properties +docker run --rm -it -v :/scalardb-sql.properties ghcr.io/scalar-labs/scalardb-cluster-sql-cli:3.15.4 --config /scalardb-sql.properties ``` #### Usage @@ -225,7 +225,7 @@ docker run --rm -it -v :/scalar You can see the CLI usage with the `-h` option as follows: ```console -java -jar scalardb-cluster-sql-cli-3.16.0-all.jar -h +java -jar scalardb-cluster-sql-cli-3.15.4-all.jar -h Usage: scalardb-sql-cli [-hs] -c=PROPERTIES_FILE [-e=COMMAND] [-f=FILE] [-l=LOG_FILE] [-o=] [-p=PASSWORD] [-u=USERNAME] @@ -256,6 +256,6 @@ For details about the ScalarDB Cluster gRPC API, refer to the following: JavaDocs are also available: -* [ScalarDB Cluster Java Client SDK](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-java-client-sdk/3.16.0/index.html) -* [ScalarDB Cluster Common](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-common/3.16.0/index.html) -* [ScalarDB Cluster RPC](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-rpc/3.16.0/index.html) +* [ScalarDB Cluster Java Client SDK](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-java-client-sdk/3.15.4/index.html) +* [ScalarDB Cluster Common](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-common/3.15.4/index.html) +* [ScalarDB Cluster RPC](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-rpc/3.15.4/index.html) diff --git a/docs/scalardb-cluster/encrypt-data-at-rest.mdx 
b/docs/scalardb-cluster/encrypt-data-at-rest.mdx index e7e49d7e..ea2389fa 100644 --- a/docs/scalardb-cluster/encrypt-data-at-rest.mdx +++ b/docs/scalardb-cluster/encrypt-data-at-rest.mdx @@ -183,7 +183,7 @@ services: scalardb-cluster-standalone: container_name: "scalardb-cluster-node" - image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.16.0" + image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.15.4" ports: - 60053:60053 - 9080:9080 @@ -241,7 +241,7 @@ scalar.db.sql.cluster_mode.contact_points=indirect:localhost Then, start the SQL CLI by running the following command. ```console -java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-cluster-sql-cli.properties +java -jar scalardb-cluster-sql-cli-3.15.4-all.jar --config scalardb-cluster-sql-cli.properties ``` To begin, create the Coordinator tables required for ScalarDB transaction execution. diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx index 30bf1fb3..1234c270 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx @@ -108,11 +108,11 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. -You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). +You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). 
After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.16.0-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.15.4-all.jar --config database.properties -f schema.json --coordinator ``` ## Step 4. Run operations from GraphiQL diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx index 5ed6d41c..d0910d07 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx @@ -86,10 +86,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: +To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: ```console -java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-sql.properties --file schema.sql +java -jar scalardb-cluster-sql-cli-3.15.4-all.jar --config scalardb-sql.properties --file schema.sql ``` ## Step 4. 
Load the initial data diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx index 076b66c0..2fa48c23 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx @@ -86,10 +86,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: +To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: ```console -java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-sql.properties --file schema.sql +java -jar scalardb-cluster-sql-cli-3.15.4-all.jar --config scalardb-sql.properties --file schema.sql ``` ## Step 4. Modify `application.properties` diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx index d8f1db4f..140afe2b 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx @@ -120,7 +120,7 @@ To use ScalarDB Cluster, open `build.gradle` in your preferred text editor. Then dependencies { ... 
- implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4' } ``` @@ -166,12 +166,12 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi The database schema (the method in which the data will be organized) for the sample application has already been defined in [`schema.json`](https://github.com/scalar-labs/scalardb-samples/tree/main/scalardb-sample/schema.json). -To apply the schema, go to [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0) and download the ScalarDB Cluster Schema Loader to the `scalardb-samples/scalardb-sample` folder. +To apply the schema, go to [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4) and download the ScalarDB Cluster Schema Loader to the `scalardb-samples/scalardb-sample` folder. Then, run the following command: ```console -java -jar scalardb-cluster-schema-loader-3.16.0-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.15.4-all.jar --config database.properties -f schema.json --coordinator ``` #### Schema details diff --git a/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx b/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx index 780a30c0..718b951e 100644 --- a/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx @@ -73,10 +73,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. 
You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: +To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.16.0-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.15.4-all.jar --config database.properties -f schema.json --coordinator ``` ## Step 4. Set up a Go environment diff --git a/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx b/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx index 868b3c9d..f51d3ba0 100644 --- a/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx @@ -73,10 +73,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. 
You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.16.0). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: +To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.15.4). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.16.0-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.15.4-all.jar --config database.properties -f schema.json --coordinator ``` ## Step 4. Set up a Python environment diff --git a/docs/scalardb-cluster/getting-started-with-vector-search.mdx b/docs/scalardb-cluster/getting-started-with-vector-search.mdx index b30b51e1..3432ceb2 100644 --- a/docs/scalardb-cluster/getting-started-with-vector-search.mdx +++ b/docs/scalardb-cluster/getting-started-with-vector-search.mdx @@ -331,7 +331,7 @@ Create the following configuration file as `docker-compose.yaml`. 
services: scalardb-cluster-standalone: container_name: "scalardb-cluster-node" - image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.16.0" + image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.15.4" ports: - 60053:60053 - 9080:9080 @@ -361,7 +361,7 @@ Select your build tool, and follow the instructions to add the build dependency ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-cluster-embedding-java-client-sdk:3.16.0' + implementation 'com.scalar-labs:scalardb-cluster-embedding-java-client-sdk:3.15.4' } ```
@@ -372,7 +372,7 @@ Select your build tool, and follow the instructions to add the build dependency com.scalar-labs scalardb-cluster-embedding-java-client-sdk - 3.16.0 + 3.15.4 ``` @@ -460,4 +460,4 @@ The `ScalarDbEmbeddingClientFactory` instance should be closed after use to rele The vector search feature is currently in Private Preview. For more details, please [contact us](https://www.scalar-labs.com/contact) or wait for this feature to become publicly available in a future version. -- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-embedding-java-client-sdk/3.16.0/index.html) +- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-embedding-java-client-sdk/3.15.4/index.html) diff --git a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx index 3fcdeffd..9261de2a 100644 --- a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx @@ -141,29 +141,6 @@ For a list of databases that ScalarDB supports, see [Databases](../requirements. scalar.db.password=SqlServer22 ``` - -

Run Db2 locally

-
-  You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-cluster-standalone-mode` directory.
-
-  To start IBM Db2, run the following command:
-
-  ```console
-  docker compose up -d db2
-  ```
-
-

Configure ScalarDB Cluster

-
-  The **scalardb-cluster-node.properties** file in the `scalardb-samples/scalardb-cluster-standalone-mode` directory contains database configurations for ScalarDB Cluster. Please uncomment the properties for Db2 in the **scalardb-cluster-node.properties** file so that the configuration looks as follows:
-
-  ```properties
-  # For Db2
-  scalar.db.storage=jdbc
-  scalar.db.contact_points=jdbc:db2://db2-1:50000/sample
-  scalar.db.username=db2inst1
-  scalar.db.password=db2inst1
-  ```
-

Run Amazon DynamoDB Local

@@ -294,7 +271,7 @@ Select your build tool, and follow the instructions to add the build dependency 
 ```gradle
 dependencies {
-    implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0'
+    implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4'
 }
 ```
@@ -305,7 +282,7 @@ Select your build tool, and follow the instructions to add the build dependency 
 ```xml
 <dependency>
     <groupId>com.scalar-labs</groupId>
     <artifactId>scalardb-cluster-java-client-sdk</artifactId>
-    <version>3.16.0</version>
+    <version>3.15.4</version>
 </dependency>
 ```
@@ -330,5 +307,5 @@ The following limitations apply to non-transactional storage operations:
 
 ### Learn more
 
-- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.16.0/index.html)
+- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.15.4/index.html)
 - [Developer Guide for ScalarDB Cluster with the Java API](developer-guide-for-scalardb-cluster-with-java-api.mdx)
diff --git a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx
index 71249c3b..9479c41f 100644
--- a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx
+++ b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx
@@ -276,8 +276,8 @@ Also, for a list of supported DDLs, see [ScalarDB SQL Grammar](../scalardb-sql/g
 ```gradle
 dependencies {
-    implementation 'com.scalar-labs:scalardb-sql-jdbc:3.16.0'
-    implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0'
+    implementation 'com.scalar-labs:scalardb-sql-jdbc:3.15.4'
+    implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4'
 }
 ```
@@ -289,12 +289,12 @@ Also, for a list of supported DDLs, see [ScalarDB SQL Grammar](../scalardb-sql/g
 <dependency>
     <groupId>com.scalar-labs</groupId>
     <artifactId>scalardb-sql-jdbc</artifactId>
-    <version>3.16.0</version>
+    <version>3.15.4</version>
 </dependency>
 <dependency>
     <groupId>com.scalar-labs</groupId>
     <artifactId>scalardb-cluster-java-client-sdk</artifactId>
-    <version>3.16.0</version>
+    <version>3.15.4</version>
 </dependency>
 ```
@@ -341,8 +341,8 @@ The following limitations apply to non-transactional storage operations:
 ```gradle
 dependencies {
-    implementation 'com.scalar-labs:scalardb-sql:3.16.0'
-    implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.16.0'
+    implementation 'com.scalar-labs:scalardb-sql:3.15.4'
+    implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.15.4'
 }
 ```
@@ -354,12 +354,12 @@ The following limitations apply to non-transactional storage operations:
 <dependency>
     <groupId>com.scalar-labs</groupId>
     <artifactId>scalardb-sql</artifactId>
-    <version>3.16.0</version>
+    <version>3.15.4</version>
 </dependency>
 <dependency>
     <groupId>com.scalar-labs</groupId>
     <artifactId>scalardb-cluster-java-client-sdk</artifactId>
-    <version>3.16.0</version>
+    <version>3.15.4</version>
 </dependency>
 ```
@@ -387,7 +387,7 @@ The following limitations apply to non-transactional storage operations:

Learn more

-    - [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb-sql/3.16.0/index.html)
+    - [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb-sql/3.15.4/index.html)
diff --git a/docs/scalardb-cluster/run-transactions-through-scalardb-cluster-sql.mdx b/docs/scalardb-cluster/run-transactions-through-scalardb-cluster-sql.mdx
index d4504ac0..f3677a25 100644
--- a/docs/scalardb-cluster/run-transactions-through-scalardb-cluster-sql.mdx
+++ b/docs/scalardb-cluster/run-transactions-through-scalardb-cluster-sql.mdx
@@ -140,29 +140,6 @@ For a list of databases that ScalarDB supports, see [Databases](../requirements.
     scalar.db.password=SqlServer22
     ```
 
-
-

Run Db2 locally

-
-  You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-cluster-standalone-mode` directory.
-
-  To start IBM Db2, run the following command:
-
-  ```console
-  docker compose up -d db2
-  ```
-
-

Configure ScalarDB Cluster

- - The **scalardb-cluster-node.properties** file in the `scalardb-samples/scalardb-cluster-standalone-mode` directory contains database configurations for ScalarDB Cluster. Please uncomment the properties for Db2 in the **scalardb-cluster-node.properties** file so that the configuration looks as follows: - - ```properties - # For Db2 - scalar.db.storage=jdbc - scalar.db.contact_points=jdbc:db2://db2-1:50000/sample - scalar.db.username=db2inst1 - scalar.db.password=db2inst1 - ``` -

Run Amazon DynamoDB Local

diff --git a/docs/scalardb-cluster/run-transactions-through-scalardb-cluster.mdx b/docs/scalardb-cluster/run-transactions-through-scalardb-cluster.mdx
index c4e7fb10..1d9ed0c1 100644
--- a/docs/scalardb-cluster/run-transactions-through-scalardb-cluster.mdx
+++ b/docs/scalardb-cluster/run-transactions-through-scalardb-cluster.mdx
@@ -141,29 +141,6 @@ For a list of databases that ScalarDB supports, see [Databases](../requirements.
     scalar.db.password=SqlServer22
     ```
-
-<h3>Run Db2 locally</h3>
-
-You can run IBM Db2 in Docker Compose by using the `docker-compose.yml` file in the `scalardb-samples/scalardb-cluster-standalone-mode` directory.
-
-To start IBM Db2, run the following command:
-
-```console
-docker compose up -d db2
-```
-
-<h3>Configure ScalarDB Cluster</h3>
-
-The **scalardb-cluster-node.properties** file in the `scalardb-samples/scalardb-cluster-standalone-mode` directory contains database configurations for ScalarDB Cluster. Please uncomment the properties for Db2 in the **scalardb-cluster-node.properties** file so that the configuration looks as follows:
-
-```properties
-# For Db2
-scalar.db.storage=jdbc
-scalar.db.contact_points=jdbc:db2://db2-1:50000/sample
-scalar.db.username=db2inst1
-scalar.db.password=db2inst1
-```
-
 <h3>Run Amazon DynamoDB Local</h3>
 
diff --git a/docs/scalardb-cluster/scalardb-abac-status-codes.mdx b/docs/scalardb-cluster/scalardb-abac-status-codes.mdx
index 22aa089a..35df2526 100644
--- a/docs/scalardb-cluster/scalardb-abac-status-codes.mdx
+++ b/docs/scalardb-cluster/scalardb-abac-status-codes.mdx
@@ -381,14 +381,6 @@ The namespace policy for the policy and namespace already exists. Policy: %s; Na
 The table policy for the policy and table already exists. Policy: %s; Table: %s
 ```
 
-### `DB-ABAC-10045`
-
-**Message**
-
-```markdown
-The user does not exist. Username: %s
-```
-
 ## `DB-ABAC-2xxxx` status codes
 
 The following are status codes and messages for the concurrency error category.
diff --git a/docs/scalardb-cluster/scalardb-auth-with-sql.mdx b/docs/scalardb-cluster/scalardb-auth-with-sql.mdx
index 953e97a8..afeb17f5 100644
--- a/docs/scalardb-cluster/scalardb-auth-with-sql.mdx
+++ b/docs/scalardb-cluster/scalardb-auth-with-sql.mdx
@@ -208,7 +208,7 @@ services:
   scalardb-cluster-standalone:
     container_name: "scalardb-cluster-node"
-    image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.16.0"
+    image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.15.4"
     ports:
       - 60053:60053
       - 9080:9080
@@ -246,7 +246,7 @@ scalar.db.cluster.auth.enabled=true
 Then, start the SQL CLI by running the following command.
 
 ```console
-java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-cluster-sql-cli.properties
+java -jar scalardb-cluster-sql-cli-3.15.4-all.jar --config scalardb-cluster-sql-cli.properties
 ```
 
 Enter the username and password as `admin` and `admin`, respectively.
@@ -335,7 +335,7 @@ You can see that `user1` has been granted the `SELECT`, `INSERT`, and `UPDATE` p
 Log in as `user1` and execute SQL statements.
 
 ```console
-java -jar scalardb-cluster-sql-cli-3.16.0-all.jar --config scalardb-cluster-sql-cli.properties
+java -jar scalardb-cluster-sql-cli-3.15.4-all.jar --config scalardb-cluster-sql-cli.properties
 ```
 
 Enter the username and password as `user1` and `user1`, respectively.
diff --git a/docs/scalardb-cluster/scalardb-cluster-configurations.mdx b/docs/scalardb-cluster/scalardb-cluster-configurations.mdx
index 4575bad3..87051421 100644
--- a/docs/scalardb-cluster/scalardb-cluster-configurations.mdx
+++ b/docs/scalardb-cluster/scalardb-cluster-configurations.mdx
@@ -22,32 +22,32 @@ The following general configurations are available for ScalarDB Cluster.
 
 #### Transaction management configurations
 
-| Name | Description | Default. |
+| Name | Description | Default |
 |-------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|
 | `scalar.db.transaction_manager` | Transaction manager of ScalarDB. Specify `consensus-commit` to use [Consensus Commit](../consensus-commit.mdx) or `single-crud-operation` to [run non-transactional storage operations](./run-non-transactional-storage-operations-through-scalardb-cluster.mdx). Note that the configurations under the `scalar.db.consensus_commit` prefix are ignored if you use `single-crud-operation`. | `consensus-commit` |
-| `scalar.db.consensus_commit.isolation_level` | Isolation level used for Consensus Commit. Either `SNAPSHOT`, `SERIALIZABLE`, or `READ_COMMITTED` can be specified. | `SNAPSHOT` |
+| `scalar.db.consensus_commit.isolation_level` | Isolation level used for Consensus Commit. Either `SNAPSHOT` or `SERIALIZABLE` can be specified. | `SNAPSHOT` |
+| `scalar.db.consensus_commit.serializable_strategy` | Serializable strategy used for Consensus Commit. Either `EXTRA_READ` or `EXTRA_WRITE` can be specified. If `SNAPSHOT` is specified in the property `scalar.db.consensus_commit.isolation_level`, this configuration will be ignored. | `EXTRA_READ` |
 | `scalar.db.consensus_commit.coordinator.namespace` | Namespace name of Coordinator tables. | `coordinator` |
 | `scalar.db.consensus_commit.include_metadata.enabled` | If set to `true`, `Get` and `Scan` operations results will contain transaction metadata. To see the transaction metadata columns details for a given table, you can use the `DistributedTransactionAdmin.getTableMetadata()` method, which will return the table metadata augmented with the transaction metadata columns. Using this configuration can be useful to investigate transaction-related issues. | `false` |
 
 #### Node configurations
 
-| Name | Description | Default |
-|--------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------|
-| `scalar.db.cluster.membership.type` | Membership type. Currently, only `KUBERNETES` can be specified. | `KUBERNETES` |
-| `scalar.db.cluster.membership.kubernetes.endpoint.namespace_name` | This configuration is for the `KUBERNETES` membership type. Namespace name for the [endpoint resource](https://kubernetes.io/docs/concepts/services-networking/service/#endpoints). | `default` |
-| `scalar.db.cluster.membership.kubernetes.endpoint.name` | This configuration is for the `KUBERNETES` membership type. Name of the [endpoint resource](https://kubernetes.io/docs/concepts/services-networking/service/#endpoints) to get the membership info. | |
-| `scalar.db.cluster.node.decommissioning_duration_secs` | Decommissioning duration in seconds. | `30` |
-| `scalar.db.cluster.node.grpc.max_inbound_message_size` | Maximum message size allowed to be received. | The gRPC default value |
-| `scalar.db.cluster.node.grpc.max_inbound_metadata_size` | Maximum size of metadata allowed to be received. | The gRPC default value |
-| `scalar.db.cluster.node.port` | Port number of the ScalarDB Cluster node. | `60053` |
-| `scalar.db.cluster.node.prometheus_exporter_port` | Port number of the Prometheus exporter. | `9080` |
-| `scalar.db.cluster.grpc.deadline_duration_millis` | Deadline duration for gRPC in milliseconds. | `60000` (60 seconds) |
-| `scalar.db.cluster.node.standalone_mode.enabled` | Whether standalone mode is enabled. Note that if standalone mode is enabled, the membership configurations (`scalar.db.cluster.membership.*`) will be ignored. | `false` |
-| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` |
-| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB Cluster nodes maintain in-progress transactions, which can be resumed by using a transaction ID. This process expires transactions that have been idle for an extended period to prevent resource leaks. This configuration specifies the expiration time of this transaction management feature in milliseconds. | `60000` (60 seconds) |
-| `scalar.db.system_namespace_name` | The given namespace name will be used by ScalarDB internally. | `scalardb` |
-| `scalar.db.transaction.enabled` | Whether the transaction feature is enabled. For example, if you use only the embedding feature, you can set this property to `false`. | `true` |
-| `scalar.db.cluster.node.scanner_management.expiration_time_millis` | ScalarDB Cluster nodes maintain in-progress scanners. This process expires scanners that have been idle for an extended period to prevent resource leaks. This configuration specifies the expiration time of this scanner management feature in milliseconds. | `60000` (60 seconds) |
+| Name | Description | Default |
+|-------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
+| `scalar.db.cluster.membership.type` | Membership type. Currently, only `KUBERNETES` can be specified. | `KUBERNETES` |
+| `scalar.db.cluster.membership.kubernetes.endpoint.namespace_name` | This configuration is for the `KUBERNETES` membership type. Namespace name for the [endpoint resource](https://kubernetes.io/docs/concepts/services-networking/service/#endpoints). | `default` |
+| `scalar.db.cluster.membership.kubernetes.endpoint.name` | This configuration is for the `KUBERNETES` membership type. Name of the [endpoint resource](https://kubernetes.io/docs/concepts/services-networking/service/#endpoints) to get the membership info. | |
+| `scalar.db.cluster.node.decommissioning_duration_secs` | Decommissioning duration in seconds. | `30` |
+| `scalar.db.cluster.node.grpc.max_inbound_message_size` | Maximum message size allowed to be received. | The gRPC default value |
+| `scalar.db.cluster.node.grpc.max_inbound_metadata_size` | Maximum size of metadata allowed to be received. | The gRPC default value |
+| `scalar.db.cluster.node.port` | Port number of the ScalarDB Cluster node. | `60053` |
+| `scalar.db.cluster.node.prometheus_exporter_port` | Port number of the Prometheus exporter. | `9080` |
+| `scalar.db.cluster.grpc.deadline_duration_millis` | Deadline duration for gRPC in milliseconds. | `60000` (60 seconds) |
+| `scalar.db.cluster.node.standalone_mode.enabled` | Whether standalone mode is enabled. Note that if standalone mode is enabled, the membership configurations (`scalar.db.cluster.membership.*`) will be ignored. | `false` |
+| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` |
+| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB Cluster nodes maintain ongoing transactions, which can be resumed by using a transaction ID. This configuration specifies the expiration time of this transaction management feature in milliseconds. | `60000` (60 seconds) |
+| `scalar.db.system_namespace_name` | The given namespace name will be used by ScalarDB internally. | `scalardb` |
+| `scalar.db.transaction.enabled` | Whether the transaction feature is enabled. For example, if you use only the embedding feature, you can set this property to `false`. | `true` |
 
 ### Performance-related configurations
 
@@ -63,8 +63,6 @@ The following performance-related configurations are available for the Consensus
 | `scalar.db.consensus_commit.async_commit.enabled` | Whether or not the commit phase is executed asynchronously. | `false` |
 | `scalar.db.consensus_commit.async_rollback.enabled` | Whether or not the rollback phase is executed asynchronously. | The value of `scalar.db.consensus_commit.async_commit.enabled` |
 | `scalar.db.consensus_commit.parallel_implicit_pre_read.enabled` | Whether or not implicit pre-read is executed in parallel. | `true` |
-| `scalar.db.consensus_commit.one_phase_commit.enabled` | Whether or not the one-phase commit optimization is enabled. | `false` |
-| `scalar.db.consensus_commit.coordinator.write_omission_on_read_only.enabled` | Whether or not the write omission optimization is enabled for read-only transactions. This optimization is useful for read-only transactions that do not modify any data, as it avoids unnecessary writes to the Coordinator tables. | `true` |
 | `scalar.db.consensus_commit.coordinator.group_commit.enabled` | Whether or not committing the transaction state is executed in batch mode. This feature can't be used with a two-phase commit interface. | `false` |
 | `scalar.db.consensus_commit.coordinator.group_commit.slot_capacity` | Maximum number of slots in a group for the group commit feature. A large value improves the efficiency of group commit, but may also increase latency and the likelihood of transaction conflicts.[^1] | `20` |
 | `scalar.db.consensus_commit.coordinator.group_commit.group_size_fix_timeout_millis` | Timeout to fix the size of slots in a group. A large value improves the efficiency of group commit, but may also increase latency and the likelihood of transaction conflicts.[^1] | `40` |
@@ -83,26 +81,25 @@ Select a database to see the configurations available for each storage.
   The following configurations are available for JDBC databases:
 
-  | Name | Description | Default |
-  |-----------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
-  | `scalar.db.storage` | `jdbc` must be specified. | - |
-  | `scalar.db.contact_points` | JDBC connection URL. | |
-  | `scalar.db.username` | Username to access the database. | |
-  | `scalar.db.password` | Password to access the database. | |
-  | `scalar.db.jdbc.connection_pool.min_idle` | Minimum number of idle connections in the connection pool. | `20` |
-  | `scalar.db.jdbc.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool. | `50` |
-  | `scalar.db.jdbc.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool. Use a negative value for no limit. | `100` |
-  | `scalar.db.jdbc.prepared_statements_pool.enabled` | Setting this property to `true` enables prepared-statement pooling. | `false` |
-  | `scalar.db.jdbc.prepared_statements_pool.max_open` | Maximum number of open statements that can be allocated from the statement pool at the same time. Use a negative value for no limit. | `-1` |
-  | `scalar.db.jdbc.isolation_level` | Isolation level for JDBC. `READ_UNCOMMITTED`, `READ_COMMITTED`, `REPEATABLE_READ`, or `SERIALIZABLE` can be specified. | Underlying-database specific |
-  | `scalar.db.jdbc.table_metadata.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for the table metadata. | `5` |
-  | `scalar.db.jdbc.table_metadata.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for the table metadata. | `10` |
-  | `scalar.db.jdbc.table_metadata.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for the table metadata. Use a negative value for no limit. | `25` |
-  | `scalar.db.jdbc.admin.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for admin. | `5` |
-  | `scalar.db.jdbc.admin.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for admin. | `10` |
-  | `scalar.db.jdbc.admin.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for admin. Use a negative value for no limit. | `25` |
-  | `scalar.db.jdbc.db2.variable_key_column_size` | Column size for TEXT and BLOB columns in IBM Db2 when they are used as a primary key or secondary key. Minimum 64 bytes. | `128` |
-  | `scalar.db.jdbc.db2.time_column.default_date_component` | Value of the date component used for storing `TIME` data in IBM Db2. Since the IBM Db2 TIMESTAMP type is used to store ScalarDB `TIME` type data because it provides fractional-second precision, ScalarDB stores `TIME` data with the same date component value for ease of comparison and sorting. | `1970-01-01` |
+  | Name | Description | Default |
+  |-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
+  | `scalar.db.storage` | `jdbc` must be specified. | - |
+  | `scalar.db.contact_points` | JDBC connection URL. | |
+  | `scalar.db.username` | Username to access the database. | |
+  | `scalar.db.password` | Password to access the database. | |
+  | `scalar.db.jdbc.connection_pool.min_idle` | Minimum number of idle connections in the connection pool. | `20` |
+  | `scalar.db.jdbc.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool. | `50` |
+  | `scalar.db.jdbc.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool. Use a negative value for no limit. | `100` |
+  | `scalar.db.jdbc.prepared_statements_pool.enabled` | Setting this property to `true` enables prepared-statement pooling. | `false` |
+  | `scalar.db.jdbc.prepared_statements_pool.max_open` | Maximum number of open statements that can be allocated from the statement pool at the same time. Use a negative value for no limit. | `-1` |
+  | `scalar.db.jdbc.isolation_level` | Isolation level for JDBC. `READ_UNCOMMITTED`, `READ_COMMITTED`, `REPEATABLE_READ`, or `SERIALIZABLE` can be specified. | Underlying-database specific |
+  | `scalar.db.jdbc.table_metadata.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for the table metadata. | `5` |
+  | `scalar.db.jdbc.table_metadata.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for the table metadata. | `10` |
+  | `scalar.db.jdbc.table_metadata.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for the table metadata. Use a negative value for no limit. | `25` |
+  | `scalar.db.jdbc.admin.connection_pool.min_idle` | Minimum number of idle connections in the connection pool for admin. | `5` |
+  | `scalar.db.jdbc.admin.connection_pool.max_idle` | Maximum number of connections that can remain idle in the connection pool for admin. | `10` |
+  | `scalar.db.jdbc.admin.connection_pool.max_total` | Maximum total number of idle and borrowed connections that can be active at the same time for the connection pool for admin. Use a negative value for no limit. | `25` |
 
   :::note
 
   If you're using SQLite3 as a JDBC database, you must set `scalar.db.contact_points` as follows:
@@ -175,14 +172,6 @@ For non-JDBC databases, we do not recommend enabling cross-partition scan with t
 | `scalar.db.cross_partition_scan.filtering.enabled` | Enable filtering in cross-partition scan. | `false` |
 | `scalar.db.cross_partition_scan.ordering.enabled` | Enable ordering in cross-partition scan. | `false` |
 
-#### Scan fetch size
-
-You can configure the fetch size for storage scan operations by using the following property:
-
-| Name | Description | Default |
-|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
-| `scalar.db.scan_fetch_size` | Specifies the number of records to fetch in a single batch during a storage scan operation. A larger value can improve performance for a large result set by reducing round trips to the storage, but it also increases memory usage. A smaller value uses less memory but may increase latency. | `10` |
-
 ### GraphQL-related configurations
 
 The configurations for ScalarDB Cluster GraphQL are as follows:
@@ -244,7 +233,6 @@ The following table shows the general configurations for the ScalarDB Cluster cl
 | `scalar.db.cluster.grpc.deadline_duration_millis` | Deadline duration for gRPC in millis. | `60000` (60 seconds) |
 | `scalar.db.cluster.grpc.max_inbound_message_size` | Maximum message size allowed for a single gRPC frame. | The gRPC default value |
 | `scalar.db.cluster.grpc.max_inbound_metadata_size` | Maximum size of metadata allowed to be received. | The gRPC default value |
-| `scalar.db.cluster.client.scan_fetch_size` | The fetch size used for `Scanner` to fetch data from the cluster. This is the number of records that `Scanner` fetches at once from the cluster. A larger value can improve performance by reducing the number of round trips to the cluster, but it may also increase memory usage. | `10` |
 
 For example, if you use the `indirect` client mode and the load balancer IP address is `192.168.10.1`, you can configure the client as follows:
diff --git a/docs/scalardb-cluster/scalardb-cluster-status-codes.mdx b/docs/scalardb-cluster/scalardb-cluster-status-codes.mdx
index 01693c3b..0005c1fa 100644
--- a/docs/scalardb-cluster/scalardb-cluster-status-codes.mdx
+++ b/docs/scalardb-cluster/scalardb-cluster-status-codes.mdx
@@ -45,6 +45,14 @@ The table does not exist. Table: %s
 The user does not exist. User: %s
 ```
 
+### `DB-CLUSTER-10003`
+
+**Message**
+
+```markdown
+ClusterConfig is not specified
+```
+
 ### `DB-CLUSTER-10004`
 
 **Message**
@@ -285,6 +293,14 @@ The property 'scalar.db.sql.cluster_mode.contact_points' must be prefixed with '
 The format of the property 'scalar.db.sql.cluster_mode.contact_points' for direct-kubernetes client mode is 'direct-kubernetes:<NAMESPACE_NAME>/<ENDPOINT_NAME>' or 'direct-kubernetes:<ENDPOINT_NAME>'
 ```
 
+### `DB-CLUSTER-10034`
+
+**Message**
+
+```markdown
+ClusterNodeManagerFactory is not specified
+```
+
 ### `DB-CLUSTER-10035`
 
 **Message**
@@ -309,6 +325,14 @@ The update condition type is unrecognized
 The two-phase commit interface is not supported
 ```
 
+### `DB-CLUSTER-10038`
+
+**Message**
+
+```markdown
+Membership is not specified
+```
+
 ### `DB-CLUSTER-10039`
 
 **Message**
@@ -473,14 +497,6 @@ The hop limit is exceeded
 A transaction associated with the specified transaction ID is not found. The transaction might have expired, or the cluster node that handled the transaction might have been restarted. Transaction ID: %s
 ```
 
-### `DB-CLUSTER-20002`
-
-**Message**
-
-```markdown
-A scanner associated with the specified scanner ID is not found. The scanner might have expired, or the cluster node that handled the scanner might have been restarted. Transaction ID: %s; Scanner ID: %s
-```
-
 ## `DB-CLUSTER-3xxxx` status codes
 
 The following are status codes and messages for the internal error category.
diff --git a/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx b/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx
index bb4294e1..a62e6578 100644
--- a/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx
+++ b/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx
@@ -170,7 +170,7 @@ You can deploy PostgreSQL on the Kubernetes cluster as follows.
 5. Set the chart version of ScalarDB Cluster.
 
    ```console
-   SCALAR_DB_CLUSTER_VERSION=3.16.0
+   SCALAR_DB_CLUSTER_VERSION=3.15.4
    SCALAR_DB_CLUSTER_CHART_VERSION=$(helm search repo scalar-labs/scalardb-cluster -l | grep -F "${SCALAR_DB_CLUSTER_VERSION}" | awk '{print $2}' | sort --version-sort -r | head -n 1)
    ```
diff --git a/docs/scalardb-core-status-codes.mdx b/docs/scalardb-core-status-codes.mdx
index bbb3a1f6..1bac0c42 100644
--- a/docs/scalardb-core-status-codes.mdx
+++ b/docs/scalardb-core-status-codes.mdx
@@ -180,7 +180,7 @@ The mutations are empty
 **Message**
 
 ```markdown
-The storage does not support mutations across multiple partitions. Storage: %s; Mutations: %s
+Mutations that span multiple partitions are not supported. Mutations: %s
 ```
 
 ### `DB-CORE-10020`
@@ -324,7 +324,7 @@ This operation is supported only when no conditions are specified. If you want t
 **Message**
 
 ```markdown
-One or more columns must be specified
+One or more columns must be specified.
``` ### `DB-CORE-10039` @@ -332,7 +332,7 @@ One or more columns must be specified **Message** ```markdown -One or more partition keys must be specified +One or more partition keys must be specified. ``` ### `DB-CORE-10040` @@ -372,7 +372,7 @@ The transaction is not active. Status: %s **Message** ```markdown -The transaction has already been committed. Status: %s +The transaction has already been committed or rolled back. Status: %s ``` ### `DB-CORE-10045` @@ -828,7 +828,15 @@ Put cannot have a condition when the target record is unread and implicit pre-re **Message** ```markdown -Writing data already-deleted by the same transaction is not allowed +Writing already-deleted data is not allowed +``` + +### `DB-CORE-10105` + +**Message** + +```markdown +Getting data neither in the read set nor the delete set is not allowed ``` ### `DB-CORE-10106` @@ -836,7 +844,7 @@ Writing data already-deleted by the same transaction is not allowed **Message** ```markdown -Scanning data already-written or already-deleted by the same transaction is not allowed +Reading already-written data is not allowed ``` ### `DB-CORE-10107` @@ -844,7 +852,7 @@ Scanning data already-written or already-deleted by the same transaction is not **Message** ```markdown -The transaction is not validated. When using the SERIALIZABLE isolation level, you need to call validate() before calling commit() +The transaction is not validated. When using the EXTRA_READ serializable strategy, you need to call validate() before calling commit() ``` ### `DB-CORE-10108` @@ -855,6 +863,142 @@ The transaction is not validated. 
When using the SERIALIZABLE isolation level, y DynamoDB cannot batch more than 100 mutations at once ``` +### `DB-CORE-10109` + +**Message** + +```markdown +The partition keys of the table %s.%s were modified, but altering partition keys is not supported +``` + +### `DB-CORE-10110` + +**Message** + +```markdown +The clustering keys of the table %s.%s were modified, but altering clustering keys is not supported +``` + +### `DB-CORE-10111` + +**Message** + +```markdown +The clustering ordering of the table %s.%s were modified, but altering clustering ordering is not supported +``` + +### `DB-CORE-10112` + +**Message** + +```markdown +The column %s of the table %s.%s has been deleted. Column deletion is not supported when altering a table +``` + +### `DB-CORE-10113` + +**Message** + +```markdown +The data type of the column %s of the table %s.%s was modified, but altering data types is not supported +``` + +### `DB-CORE-10114` + +**Message** + +```markdown +Specifying the '--schema-file' option is required when using the '--repair-all' option +``` + +### `DB-CORE-10115` + +**Message** + +```markdown +Specifying the '--schema-file' option is required when using the '--alter' option +``` + +### `DB-CORE-10116` + +**Message** + +```markdown +Specifying the '--schema-file' option is required when using the '--import' option +``` + +### `DB-CORE-10117` + +**Message** + +```markdown +Specifying the '--coordinator' option with the '--import' option is not allowed. Create Coordinator tables separately +``` + +### `DB-CORE-10118` + +**Message** + +```markdown +Reading the configuration file failed. File: %s +``` + +### `DB-CORE-10119` + +**Message** + +```markdown +Reading the schema file failed. File: %s +``` + +### `DB-CORE-10120` + +**Message** + +```markdown +Parsing the schema JSON failed. Details: %s +``` + +### `DB-CORE-10121` + +**Message** + +```markdown +The table name must contain the namespace and the table. 
Table: %s +``` + +### `DB-CORE-10122` + +**Message** + +```markdown +The partition key must be specified. Table: %s +``` + +### `DB-CORE-10123` + +**Message** + +```markdown +Invalid clustering-key format. The clustering key must be in the format of 'column_name' or 'column_name ASC/DESC'. Table: %s; Clustering key: %s +``` + +### `DB-CORE-10124` + +**Message** + +```markdown +Columns must be specified. Table: %s +``` + +### `DB-CORE-10125` + +**Message** + +```markdown +Invalid column type. Table: %s; Column: %s; Type: %s +``` + ### `DB-CORE-10126` **Message** @@ -895,6 +1039,46 @@ Cross-partition scan with ordering is not supported in Cosmos DB Cross-partition scan with ordering is not supported in DynamoDB ``` +### `DB-CORE-10131` + +**Message** + +```markdown +The directory '%s' does not have write permissions. Please ensure that the current user has write access to the directory. +``` + +### `DB-CORE-10132` + +**Message** + +```markdown +Failed to create the directory '%s'. Please check if you have sufficient permissions and if there are any file system restrictions. Details: %s +``` + +### `DB-CORE-10133` + +**Message** + +```markdown +Directory path cannot be null or empty. +``` + +### `DB-CORE-10134` + +**Message** + +```markdown +No file extension was found on the provided file name %s. +``` + +### `DB-CORE-10135` + +**Message** + +```markdown +Invalid file extension: %s. Allowed extensions are: %s +``` + ### `DB-CORE-10136` **Message** @@ -980,7 +1164,7 @@ The value of the column %s in the primary key contains an illegal character. 
Pri **Message** ```markdown -Inserting data already-written by the same transaction is not allowed +Inserting already-written data is not allowed ``` ### `DB-CORE-10147` @@ -988,7 +1172,39 @@ Inserting data already-written by the same transaction is not allowed **Message** ```markdown -Deleting data already-inserted by the same transaction is not allowed +Deleting already-inserted data is not allowed +``` + +### `DB-CORE-10148` + +**Message** + +```markdown +Invalid key: Column %s does not exist in the table %s in namespace %s. +``` + +### `DB-CORE-10149` + +**Message** + +```markdown +Invalid base64 encoding for blob value for column %s in table %s in namespace %s +``` + +### `DB-CORE-10150` + +**Message** + +```markdown +Invalid number specified for column %s in table %s in namespace %s +``` + +### `DB-CORE-10151` + +**Message** + +```markdown +Method null argument not allowed ``` ### `DB-CORE-10152` @@ -999,6 +1215,46 @@ Deleting data already-inserted by the same transaction is not allowed The attribute-based access control feature is not enabled. To use this feature, you must enable it. Note that this feature is supported only in the ScalarDB Enterprise edition ``` +### `DB-CORE-10153` + +**Message** + +```markdown +The provided clustering key %s was not found +``` + +### `DB-CORE-10154` + +**Message** + +```markdown +The column '%s' was not found +``` + +### `DB-CORE-10155` + +**Message** + +```markdown +The provided partition key is incomplete. Required key: %s +``` + +### `DB-CORE-10156` + +**Message** + +```markdown +The provided clustering key order does not match the table schema. Required order: %s +``` + +### `DB-CORE-10157` + +**Message** + +```markdown +The provided partition key order does not match the table schema. Required order: %s +``` + ### `DB-CORE-10158` **Message** @@ -1055,68 +1311,100 @@ This TIMESTAMPTZ column value precision cannot be shorter than one millisecond. 
The underlying-storage data type %s is not supported as the ScalarDB %s data type: %s
```

-### `DB-CORE-10188`
+### `DB-CORE-10165`
+
+**Message**
+
+```markdown
+Missing namespace or table: %s, %s
+```
+
+### `DB-CORE-10166`
+
+**Message**
+
+```markdown
+Failed to retrieve table metadata. Details: %s
+```
+
+### `DB-CORE-10167`
+
+**Message**
+
+```markdown
+Duplicate data mappings found for table '%s' in the control file
+```
+
+### `DB-CORE-10168`
+
+**Message**
+
+```markdown
+No mapping found for column '%s' in table '%s' in the control file. Control file validation set at 'FULL'. All columns need to be mapped.
+```
+
+### `DB-CORE-10169`

**Message**

```markdown
-The replication feature is not enabled. To use this feature, you must enable it. Note that this feature is supported only in the ScalarDB Enterprise edition
+The control file is missing data mappings
```

-### `DB-CORE-10205`
+### `DB-CORE-10170`

**Message**

```markdown
-Some scanners were not closed. All scanners must be closed before committing the transaction
+The target column '%s' for source field '%s' could not be found in table '%s'
```

-### `DB-CORE-10206`
+### `DB-CORE-10171`

**Message**

```markdown
-Some scanners were not closed. All scanners must be closed before preparing the transaction
+The required partition key '%s' is missing in the control file mapping for table '%s'
```

-### `DB-CORE-10211`
+### `DB-CORE-10172`

**Message**

```markdown
-Mutations are not allowed in read-only transactions. Transaction ID: %s
+The required clustering key '%s' is missing in the control file mapping for table '%s'
```

-### `DB-CORE-10212`
+### `DB-CORE-10173`

**Message**

```markdown
-The storage does not support mutations across multiple records. Storage: %s; Mutations: %s
+Duplicated data mappings found for column '%s' in table '%s'
```

-### `DB-CORE-10213`
+### `DB-CORE-10174`

**Message**

```markdown
-The storage does not support mutations across multiple tables. Storage: %s; Mutations: %s
+Missing required field or column mapping for clustering key %s
```

-### `DB-CORE-10214`
+### `DB-CORE-10175`

**Message**

```markdown
-The storage does not support mutations across multiple namespaces. Storage: %s; Mutations: %s
+Missing required field or column mapping for partition key %s
```

-### `DB-CORE-10215`
+### `DB-CORE-10176`

**Message**

```markdown
-Mutations across multiple storages are not allowed. Mutations: %s
+Missing field or column mapping for %s
```

## `DB-CORE-2xxxx` status codes
@@ -1291,6 +1579,14 @@ The record exists, so the %s condition is not satisfied
The condition on the column '%s' is not satisfied
```

+### `DB-CORE-20021`
+
+**Message**
+
+```markdown
+Reading empty records might cause a write skew anomaly, so the transaction has been aborted for safety purposes
+```
+
### `DB-CORE-20022`

**Message**
@@ -1323,14 +1619,6 @@ The %s condition of the %s operation is not satisfied. Targeting column(s): %s
A transaction conflict occurred in the Insert operation
```

-### `DB-CORE-20026`
-
-**Message**
-
-```markdown
-A conflict occurred when committing records
-```
-
## `DB-CORE-3xxxx` status codes

The following are status codes and messages for the internal error category.
@@ -1564,7 +1852,7 @@ An error occurred in the selection. Details: %s

**Message**

```markdown
-Fetching the next result failed. Details: %s
+Fetching the next result failed
```

### `DB-CORE-30029`

**Message**
@@ -1711,44 +1999,20 @@ The Update operation failed. Details: %s
Handling the before-preparation snapshot hook failed. Details: %s
```

-### `DB-CORE-30054`
-
-**Message**
-
-```markdown
-Getting the scanner failed. Details: %s
-```
-
-### `DB-CORE-30055`
-
-**Message**
-
-```markdown
-Closing the scanner failed. Details: %s
-```
-
-### `DB-CORE-30056`
-
-**Message**
-
-```markdown
-Getting the storage information failed. Namespace: %s
-```
-
-### `DB-CORE-30057`
+### `DB-CORE-30047`

**Message**

```markdown
-Recovering records failed. Details: %s
+Something went wrong while trying to save the data. Details: %s
```

-### `DB-CORE-30058`
+### `DB-CORE-30048`

**Message**

```markdown
-Committing records failed
+Something went wrong while scanning. Are you sure you are running in the correct transaction mode? Details: %s
```

## `DB-CORE-4xxxx` status codes
diff --git a/docs/scalardb-sql/grammar.mdx b/docs/scalardb-sql/grammar.mdx
index b0a19b4d..c7d5045f 100644
--- a/docs/scalardb-sql/grammar.mdx
+++ b/docs/scalardb-sql/grammar.mdx
@@ -2039,26 +2039,16 @@ This command returns the following column:

#### Grammar

```sql
-BEGIN [READ ONLY | READ WRITE]
+BEGIN
```

-- If you specify `READ ONLY`, the transaction will be started in read-only mode.
-- If you specify `READ WRITE`, the transaction will be started in read-write mode.
-- If you omit the `READ ONLY` or `READ WRITE` option, the transaction will be started as a read-write transaction by default.
-
#### Examples

An example of building statement objects for `BEGIN` is as follows:

```java
-// Begin a transaction.
-BeginStatement statement1 = StatementBuilder.begin().build();
-
-// Begin a transaction in read-only mode.
-BeginStatement statement2 = StatementBuilder.begin().readOnly().build();
-
-// Begin a transaction in read-write mode.
-BeginStatement statement3 = StatementBuilder.begin().readWrite().build();
+// Begin a transaction.
+BeginStatement statement = StatementBuilder.begin().build();
```

### START TRANSACTION
@@ -2072,26 +2062,16 @@ This command returns the following column:

#### Grammar

```sql
-START TRANSACTION [READ ONLY | READ WRITE]
+START TRANSACTION
```

-- If you specify `READ ONLY`, the transaction will be started in read-only mode.
-- If you specify `READ WRITE`, the transaction will be started in read-write mode.
-- If you omit the `READ ONLY` or `READ WRITE` option, the transaction will be started as a read-write transaction by default.
-
#### Examples

An example of building statement objects for `START TRANSACTION` is as follows:

```java
// Start a transaction.
-StartTransactionStatement statement1 = StatementBuilder.startTransaction().build();
-
-// Start a transaction in read-only mode.
-StartTransactionStatement statement2 = StatementBuilder.startTransaction().readOnly().build();
-
-// Start a transaction in read-write mode.
-StartTransactionStatement statement3 = StatementBuilder.startTransaction().readWrite().build();
+StartTransactionStatement statement = StatementBuilder.startTransaction().build();
```

### JOIN
diff --git a/docs/scalardb-sql/jdbc-guide.mdx b/docs/scalardb-sql/jdbc-guide.mdx
index 4e285c32..ef8180e2 100644
--- a/docs/scalardb-sql/jdbc-guide.mdx
+++ b/docs/scalardb-sql/jdbc-guide.mdx
@@ -71,11 +71,10 @@ Please see [ScalarDB Cluster SQL client configurations](../scalardb-cluster/deve

In addition, the ScalarDB JDBC specific configurations are as follows:

-| name | description | default |
-|---------------------------------------------------------------------|-----------------------------------------------------------------------------|---------|
-| scalar.db.sql.jdbc.default_auto_commit | The default auto-commit mode for connections. | true |
-| scalar.db.sql.jdbc.default_read_only | The default read-only state for connections. | false |
-| scalar.db.sql.jdbc.sql_session_factory_cache.expiration_time_millis | The expiration time in milliseconds for the cache of SQL session factories. | 10000 |
+| name | description | default |
+|---------------------------------------------------------------------|-----------------------------------------------------------------------------|---------|
+| scalar.db.sql.jdbc.default_auto_commit | The default auto-commit mode for connections. | true |
+| scalar.db.sql.jdbc.sql_session_factory_cache.expiration_time_millis | The expiration time in milliseconds for the cache of SQL session factories. | 10000 |

## Data type mapping between ScalarDB and JDBC
@@ -221,4 +220,4 @@ Please see also [ScalarDB SQL API Guide](sql-api-guide.mdx) for more details on

- [Java JDBC API](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/)
- [ScalarDB SQL API Guide](sql-api-guide.mdx)
-- [Javadoc for ScalarDB JDBC](https://javadoc.io/doc/com.scalar-labs/scalardb-sql-jdbc/3.16.0/index.html)
+- [Javadoc for ScalarDB JDBC](https://javadoc.io/doc/com.scalar-labs/scalardb-sql-jdbc/3.15.4/index.html)
diff --git a/docs/scalardb-sql/scalardb-sql-status-codes.mdx b/docs/scalardb-sql/scalardb-sql-status-codes.mdx
index 62cfd0f6..8dbe6bf5 100644
--- a/docs/scalardb-sql/scalardb-sql-status-codes.mdx
+++ b/docs/scalardb-sql/scalardb-sql-status-codes.mdx
@@ -641,27 +641,3 @@ Unmatched column type. The type of the column %s should be %s, but a TIMESTAMPTZ
```markdown
The policy %s does not exist
```
-
-### `DB-SQL-10078`
-
-**Message**
-
-```markdown
-Beginning a transaction in read-only mode is not supported in two-phase commit transaction mode
-```
-
-### `DB-SQL-10079`
-
-**Message**
-
-```markdown
-Starting a transaction in read-only mode is not supported in two-phase commit transaction mode
-```
-
-### `DB-SQL-10080`
-
-**Message**
-
-```markdown
-Cannot change read-only mode while a transaction is in progress
-```
diff --git a/docs/scalardb-sql/spring-data-guide.mdx b/docs/scalardb-sql/spring-data-guide.mdx
index 040e412a..970251c8 100644
--- a/docs/scalardb-sql/spring-data-guide.mdx
+++ b/docs/scalardb-sql/spring-data-guide.mdx
@@ -820,4 +820,4 @@ In order to use Spring Data JDBC for ScalarDB, the following features are implem

- [Spring Data JDBC - Reference Documentation](https://docs.spring.io/spring-data/jdbc/docs/3.0.x/reference/html/)
- [ScalarDB JDBC Guide](jdbc-guide.mdx)
-- [Javadoc for Spring Data JDBC for ScalarDB](https://javadoc.io/doc/com.scalar-labs/scalardb-sql-spring-data/3.16.0/index.html)
+- [Javadoc for Spring Data JDBC for ScalarDB](https://javadoc.io/doc/com.scalar-labs/scalardb-sql-spring-data/3.15.4/index.html)
diff --git a/docs/scalardb-sql/sql-api-guide.mdx b/docs/scalardb-sql/sql-api-guide.mdx
index 9301191a..d2203aad 100644
--- a/docs/scalardb-sql/sql-api-guide.mdx
+++ b/docs/scalardb-sql/sql-api-guide.mdx
@@ -374,4 +374,4 @@ For more details, see the
- | Db2 | ScalarDB | Notes |
- |-----------------------|----------------------------------------|----------------------------|
- | BIGINT | BIGINT | See warning [1](#1) below. |
- | BINARY | BLOB | |
- | BLOB | BLOB | |
- | BOOLEAN | BOOLEAN | |
- | CHAR | TEXT | |
- | CHAR FOR BIT DATA | BLOB | |
- | CLOB | TEXT | |
- | DATE | DATE | |
- | DOUBLE | DOUBLE | See warning [2](#2) below. |
- | FLOAT(p), with p ≤ 24 | FLOAT | See warning [2](#2) below. |
- | FLOAT(p), with p ≥ 25 | DOUBLE | See warning [2](#2) below. |
- | GRAPHIC | TEXT | |
- | INT | INT | |
- | NCHAR | TEXT | |
- | NCLOB | TEXT | |
- | NVARCHAR | TEXT | |
- | REAL | FLOAT | See warning [2](#2) below. |
- | SMALLINT | INT | |
- | TIME | TIME | |
- | TIMESTAMP | TIMESTAMP (default), TIME, TIMESTAMPTZ | See warning [6](#6) below. |
- | VARBINARY | BLOB | |
- | VARCHAR | TEXT | |
- | VARCHAR FOR BIT DATA | BLOB | |
- | VARGRAPHIC | TEXT | |
-
- Data types not listed above are not supported. The following are some common data types that are not supported:
-
- - decimal
- - decfloat
- - xml

:::warning
diff --git a/docs/schema-loader.mdx b/docs/schema-loader.mdx
index 5270bb5e..e98ebe05 100644
--- a/docs/schema-loader.mdx
+++ b/docs/schema-loader.mdx
@@ -532,19 +532,19 @@ Auto-scaling for Cosmos DB for NoSQL is enabled only when this option is set to

The following table shows the supported data types in ScalarDB and their mapping to the data types of other databases.

-| ScalarDB | Cassandra | Cosmos DB for NoSQL | Db2 | DynamoDB | MySQL/MariaDB | PostgreSQL/YugabyteDB | Oracle | SQL Server | SQLite |
-|-------------|----------------------|---------------------|------------------|----------|---------------|--------------------------|--------------------------|-----------------|---------|
-| BOOLEAN | boolean | boolean (JSON) | BOOLEAN | BOOL | boolean | boolean | number(1) | bit | boolean |
-| INT | int | number (JSON) | INT | N | int | int | number(10) | int | int |
-| BIGINT | bigint | number (JSON) | BIGINT | N | bigint | bigint | number(16) | bigint | bigint |
-| FLOAT | float | number (JSON) | REAL | N | real | real | binary_float | float(24) | float |
-| DOUBLE | double | number (JSON) | DOUBLE | N | double | double precision | binary_double | float | double |
-| TEXT | text | string (JSON) | VARCHAR(32672) | S | longtext | text | varchar2(4000) | varchar(8000) | text |
-| BLOB | blob | string (JSON) | VARBINARY(32672) | B | longblob | bytea | RAW(2000) | varbinary(8000) | blob |
-| DATE | date | number (JSON) | DATE | N | date | date | date | date | int |
-| TIME | time | number (JSON) | TIMESTAMP | N | time | time | timestamp | time | bigint |
-| TIMESTAMP | Unsupported | number (JSON) | TIMESTAMP | N | datetime | timestamp | timestamp | datetime2 | bigint |
-| TIMESTAMPTZ | timestamp | number (JSON) | TIMESTAMP | N | datetime | timestamp with time zone | timestamp with time zone | datetimeoffset | bigint |
+| ScalarDB | Cassandra | Cosmos DB for NoSQL | DynamoDB | MySQL/MariaDB | PostgreSQL/YugabyteDB | Oracle | SQL Server | SQLite |
+|-------------|----------------------|---------------------|----------|---------------|--------------------------|--------------------------|-----------------|---------|
+| BOOLEAN | boolean | boolean (JSON) | BOOL | boolean | boolean | number(1) | bit | boolean |
+| INT | int | number (JSON) | N | int | int | number(10) | int | int |
+| BIGINT | bigint | number (JSON) | N | bigint | bigint | number(16) | bigint | bigint |
+| FLOAT | float | number (JSON) | N | real | real | binary_float | float(24) | float |
+| DOUBLE | double | number (JSON) | N | double | double precision | binary_double | float | double |
+| TEXT | text | string (JSON) | S | longtext | text | varchar2(4000) | varchar(8000) | text |
+| BLOB | blob | string (JSON) | B | longblob | bytea | RAW(2000) | varbinary(8000) | blob |
+| DATE | date | number (JSON) | N | date | date | date | date | int |
+| TIME | time | number (JSON) | N | time | time | timestamp | time | bigint |
+| TIMESTAMP | Unsupported | number (JSON) | N | datetime | timestamp | timestamp | datetime2 | bigint |
+| TIMESTAMPTZ | timestamp | number (JSON) | N | datetime | timestamp with time zone | timestamp with time zone | datetimeoffset | bigint |

:::note

@@ -554,10 +554,10 @@ TIMESTAMP represents a date-time without time zone information, while TIMESTAMPT
However, the following data types in JDBC databases are converted differently when they are used as a primary key or a secondary index key. This is due to the limitations of RDB data types. For MySQL and Oracle, you can change the column size (minimum 64 bytes) as long as it meets the limitation of the total size of key columns. For details, see [Underlying storage or database configurations](configurations.mdx#underlying-storage-or-database-configurations).

-| ScalarDB | MySQL/MariaDB | PostgreSQL/YugabyteDB | Oracle | Db2 |
-|----------|----------------|-----------------------|---------------|----------------|
-| TEXT | VARCHAR(128) | VARCHAR(10485760) | VARCHAR2(128) | VARCHAR(128) |
-| BLOB | VARBINARY(128) | | RAW(128) | VARBINARY(128) |
+| ScalarDB | MySQL/MariaDB | PostgreSQL/YugabyteDB | Oracle |
+|----------|----------------|-----------------------|---------------|
+| TEXT | VARCHAR(128) | VARCHAR(10485760) | VARCHAR2(128) |
+| BLOB | VARBINARY(128) | | RAW(128) |

The following data types have a value range and precision regardless of the underlying database.