diff --git a/docs/api-guide.mdx b/docs/api-guide.mdx index d209a473..76efc2eb 100644 --- a/docs/api-guide.mdx +++ b/docs/api-guide.mdx @@ -580,7 +580,7 @@ And if you need to check if a value of a column is null, you can use the `isNull boolean isNull = result.isNull(""); ``` -For more details, see the `Result` page in the [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.13.1/index.html) of the version of ScalarDB that you're using. +For more details, see the `Result` page in the [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.14.0/index.html) of the version of ScalarDB that you're using. ###### Execute `Get` by using a secondary index diff --git a/docs/configurations.mdx b/docs/configurations.mdx index 0a013d3f..8ec447bb 100644 --- a/docs/configurations.mdx +++ b/docs/configurations.mdx @@ -203,11 +203,11 @@ For details about client configurations, see the [ScalarDB Cluster client config The following are additional configurations available for ScalarDB: -| Name | Description | Default | -|------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------| -| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. | `-1` (no expiration) | -| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB maintains ongoing transactions, which can be resumed by using a transaction ID. This setting specifies the expiration time of this transaction management feature in milliseconds. | `-1` (no expiration) | -| `scalar.db.default_namespace_name` | The given namespace name will be used by operations that do not already specify a namespace. 
| | +| Name | Description | Default | +|------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------| +| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` | +| `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB maintains ongoing transactions, which can be resumed by using a transaction ID. This setting specifies the expiration time of this transaction management feature in milliseconds. | `-1` (no expiration) | +| `scalar.db.default_namespace_name` | The given namespace name will be used by operations that do not already specify a namespace. | | ## Placeholder usage diff --git a/docs/design.mdx b/docs/design.mdx index d900065c..10474d91 100644 --- a/docs/design.mdx +++ b/docs/design.mdx @@ -5,9 +5,35 @@ tags: - Enterprise Premium --- -# ScalarDB Design Document +# ScalarDB Design -For details about the design and implementation of ScalarDB, please see the following documents, which we presented at the VLDB 2023 conference: +This document briefly explains the design and implementation of ScalarDB. For what ScalarDB is and its use cases, see [ScalarDB Overview](./overview.mdx). + +## Overall architecture + +ScalarDB is hybrid transaction/analytical processing (HTAP) middleware that sits between applications and databases. As shown in the following figure, ScalarDB consists of three components: Core, Cluster, and Analytics.
ScalarDB employs a layered architecture: the Cluster and Analytics components use the Core component to interact with underlying databases, but they sometimes bypass the Core component for performance optimization without sacrificing correctness. Likewise, each component also consists of several layers. + +![ScalarDB architecture](images/scalardb-architecture.png) + +## Components + +The following subsections explain each component one by one. + +### Core + +ScalarDB Core, which is provided as open-source software under the Apache 2 License, is an integral part of ScalarDB. Core provides a database manager with an abstraction layer over the underlying databases, along with adapters (or shims) that implement the abstraction for each database. In addition, it provides a transaction manager on top of the database abstraction that achieves database-agnostic transaction management based on Scalar's novel distributed transaction protocol called Consensus Commit. Core is provided as a library that offers a simple CRUD interface. + +### Cluster + +ScalarDB Cluster, which is licensed under a commercial license, is a component that provides a clustering solution so that the Core component can work as a clustered server. Cluster is mainly designed for OLTP workloads, which consist of many small, transactional and non-transactional reads and writes. In addition, it provides several enterprise features such as authentication, authorization, encryption at rest, and fine-grained access control (still under development). Not only does Cluster offer the same CRUD interface as the Core component, but it also offers SQL and GraphQL interfaces. Since Cluster is provided as a container in a Kubernetes Pod, you can increase performance and availability by having more containers.
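The layering idea behind Core can be illustrated with a toy sketch. The names below (`Storage`, `InMemoryAdapter`, `Transaction`) are hypothetical and are not ScalarDB's actual API; the sketch only shows how a transaction layer built solely on a storage abstraction stays database-agnostic, with one adapter per database implementing the abstraction:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy sketch (not the real ScalarDB API): a storage abstraction with
// per-database adapters, and a transaction layer built only on that
// abstraction so it remains database-agnostic.
public class LayeredSketch {

  /** Minimal storage abstraction that each database adapter implements. */
  interface Storage {
    Optional<String> get(String key);
    void put(String key, String value);
  }

  /** One adapter; real adapters would wrap Cassandra, JDBC, DynamoDB, etc. */
  static class InMemoryAdapter implements Storage {
    private final Map<String, String> data = new HashMap<>();
    @Override public Optional<String> get(String key) {
      return Optional.ofNullable(data.get(key));
    }
    @Override public void put(String key, String value) {
      data.put(key, value);
    }
  }

  /** Transaction layer: buffers writes and applies them only on commit. */
  static class Transaction {
    private final Storage storage;
    private final Map<String, String> writeSet = new HashMap<>();

    Transaction(Storage storage) { this.storage = storage; }

    Optional<String> get(String key) {
      // Read-your-writes: consult the local write set before the storage.
      if (writeSet.containsKey(key)) {
        return Optional.of(writeSet.get(key));
      }
      return storage.get(key);
    }

    void put(String key, String value) { writeSet.put(key, value); }

    void commit() { writeSet.forEach(storage::put); }
  }
}
```

Because `Transaction` talks only to the `Storage` interface, swapping the adapter swaps the backing database without touching the transaction logic, which is the property the layered design above is after.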
+ +### Analytics + +ScalarDB Analytics, which is licensed under a commercial license, is a component that provides scalable analytical processing for the data managed by the Core component or managed by applications that don’t use ScalarDB. Analytics is mainly designed for OLAP workloads, which have a small number of large, analytical read queries. In addition, it offers a SQL and DataSet API through Spark. Since the Analytics component is provided as a Java package that can be installed on Apache Spark engines, you can increase performance by having more Spark worker nodes. + +## See also + +For details about the design and implementation of ScalarDB, see the following documents, which were presented at the VLDB 2023 conference: - **Speakerdeck presentation:** [ScalarDB: Universal Transaction Manager for Polystores](https://speakerdeck.com/scalar/scalardb-universal-transaction-manager-for-polystores-vldb23) - **Detailed paper:** [ScalarDB: Universal Transaction Manager for Polystores](https://www.vldb.org/pvldb/vol16/p3768-yamada.pdf) diff --git a/docs/index.mdx b/docs/index.mdx index b7a715c3..f961f9bd 100644 --- a/docs/index.mdx +++ b/docs/index.mdx @@ -7,7 +7,7 @@ tags: # ScalarDB -import { CardRowAbout, CardRowGettingStarted, CardRowSamples, CardRowDevelop, CardRowDeploy, CardRowManage, CardRowReference } from '/src/components/Cards/3.13'; +import { CardRowAbout, CardRowGettingStarted, CardRowSamples, CardRowDevelop, CardRowDeploy, CardRowManage, CardRowReference } from '/src/components/Cards/3.14'; ScalarDB is a cross-database HTAP engine. It achieves ACID transactions and real-time analytics across diverse databases to simplify the complexity of managing multiple databases. diff --git a/docs/quick-start-overview.mdx b/docs/quick-start-overview.mdx index 1a3183f3..6a05e7af 100644 --- a/docs/quick-start-overview.mdx +++ b/docs/quick-start-overview.mdx @@ -25,12 +25,11 @@ ScalarDB Cluster is available only in the Enterprise edition. 
## Try running analytical queries through ScalarDB Analytics -In this sub-category, you can see tutorials on how to run analytical queries over the databases that you write through ScalarDB by using a component called ScalarDB Analytics. -ScalarDB Analytics currently targets only ScalarDB-managed databases, updated through ScalarDB transactions, but will target non-ScalarDB-managed databases in the future. +In this sub-category, you can see tutorials on how to run analytical queries over the databases that you write to, using a component called ScalarDB Analytics. ScalarDB Analytics targets both ScalarDB-managed databases, which are updated through ScalarDB transactions, and non-ScalarDB-managed databases. :::note - ScalarDB Analytics with PostgreSQL is available only under the Apache 2 License and doesn't require a commercial license. -- ScalarDB Analytics with Spark is in private preview. +- ScalarDB Analytics with Spark is in public preview. -::: \ No newline at end of file +::: diff --git a/docs/releases/release-notes.mdx b/docs/releases/release-notes.mdx index 5449db62..16ff956e 100644 --- a/docs/releases/release-notes.mdx +++ b/docs/releases/release-notes.mdx @@ -5,30 +5,35 @@ tags: - Enterprise Premium --- -# ScalarDB 3.13 Release Notes +# ScalarDB 3.14 Release Notes -This page includes a list of release notes for ScalarDB 3.13. +This page includes a list of release notes for ScalarDB 3.14. -## v3.13.1 +## v3.14.0 -**Release date:** October 13, 2024 +**Release date:** November 22, 2024 ### Summary -This release includes several bug fixes, and vulnerability fixes. +This release includes a lot of enhancements, improvements, bug fixes, and vulnerability fixes. ### Community edition #### Enhancements +- Added the encrypted column concept to ScalarDB. ([#1907](https://github.com/scalar-labs/scalardb/pull/1907) [#1975](https://github.com/scalar-labs/scalardb/pull/1975)) - Added support for MariaDB 11.4 and Oracle 19.
([#2061](https://github.com/scalar-labs/scalardb/pull/2061)) +#### Improvements + +- Added options for changing the key column size for MySQL and Oracle, using 128 bytes as the default. ([#2245](https://github.com/scalar-labs/scalardb/pull/2245)) +- Changed the default value of the metadata cache expiration time (`scalar.db.metadata.cache_expiration_time_secs`) to 60 seconds. ([#2274](https://github.com/scalar-labs/scalardb/pull/2274)) + #### Bug fixes - Fixed a bug where `NullPointerException` occurs when a table specified in a Get/Scan object is not found in Consensus Commit. ([#2083](https://github.com/scalar-labs/scalardb/pull/2083)) - Fixed a corner-case issue that causes inconsistent Coordinator states when lazy recovery happens before group commit. ([#2135](https://github.com/scalar-labs/scalardb/pull/2135)) - Upgraded the MySQL driver to fix security issues. [CVE-2023-22102](https://github.com/advisories/GHSA-m6vm-37g8-gqvh "CVE-2023-22102") ([#2238](https://github.com/scalar-labs/scalardb/pull/2238)) -- Upgraded the gRPC library, the Protocol Buffers library, grpc_health_probe, and scalar-admin to fix security issues. [CVE-2024-7254](https://github.com/advisories/GHSA-735f-pc8j-v9w8 "CVE-2024-7254"), [CVE-2024-25638](https://github.com/advisories/GHSA-cfxw-4h78-h7fw "CVE-2024-25638"), and [CVE-2024-34156](https://github.com/advisories/GHSA-crqm-pwhx-j97f "CVE-2024-34156") ([#2277](https://github.com/scalar-labs/scalardb/pull/2277))
+ +- Added support for `getCurrentUser()` in `DistributedTransactionAdmin` and `Metadata` to retrieve the current logged-in user. + +##### ScalarDB SQL + +- Added support for encrypted columns introduced in [#1907](https://github.com/scalar-labs/scalardb/pull/1907) for the Metadata API. +- Added support for encrypted columns for `CREATE TABLE` and `ALTER TABLE ADD COLUMN` statements. +- Added `SHOW USERS` and `SHOW GRANTS` commands, which list users and privileges for a specified user, respectively. #### Improvements @@ -48,112 +62,7 @@ This release includes several bug fixes, and vulnerability fixes. ##### ScalarDB Cluster -- Fix a bug where `NullPointerException` occurs when catching an exception without message. +- Fixed a bug where `NullPointerException` occurs when catching an exception without a message. - Upgraded `grpc_health_probe` to fix a security issue. [CVE-2024-34156](https://github.com/advisories/GHSA-crqm-pwhx-j97f "CVE-2024-34156") - Upgraded `scalar-admin` to fix a security issue. [CVE-2024-25638](https://github.com/advisories/GHSA-cfxw-4h78-h7fw "CVE-2024-25638") - Upgraded the Protobuf Java library to fix a security issue. [CVE-2024-7254](https://github.com/advisories/GHSA-735f-pc8j-v9w8 "CVE-2024-7254") - -## v3.13.0 - -### Summary - -This release includes a lot of enhancements, improvements, bug fixes, and vulnerability fixes. - -### Community edition - -#### Enhancements - -- Added dynamic arbitrary filtering for non-JDBC databases. ([#1682](https://github.com/scalar-labs/scalardb/pull/1682)) -- Added the Insert, Upsert, and Update operations to the transactional API. ([#1697](https://github.com/scalar-labs/scalardb/pull/1697)) -- Added YugabyteDB adapter as one of JDBC storages ([#1710](https://github.com/scalar-labs/scalardb/pull/1710)) -- Added Group Commit feature for Coordinator Table ([#1728](https://github.com/scalar-labs/scalardb/pull/1728)) -- Allowed directly executing CRUD operations with transaction managers.
([#1755](https://github.com/scalar-labs/scalardb/pull/1755)) -- Added support for arbitrary filtering for partition scan and index scan. ([#1763](https://github.com/scalar-labs/scalardb/pull/1763)) -- Added a single CRUD operation transaction manager. This transaction manager implementation does not allow beginning a transaction by calling `begin()`/`start()`. It only allows directly executing CRUD operations from the transaction manager. ([#1793](https://github.com/scalar-labs/scalardb/pull/1793)) -- Added support for arbitrary filtering for get operations. ([#1834](https://github.com/scalar-labs/scalardb/pull/1834)) -- In MySQL, ScalarDB `FLOAT` type is changed from `DOUBLE` to `REAL` (single-precision floating-point value) ([#2000](https://github.com/scalar-labs/scalardb/pull/2000)) -- Added a new Admin API `admin.getNamespacesNames()` to list the user namespaces. Though, this API won't return a namespace that does not contain a table. From ScalarDB 4.0, we plan to improve the design to suppress this limitation. ([#2002](https://github.com/scalar-labs/scalardb/pull/2002)) - -#### Improvements - -- Removed the hard-coded collation for MySQL and SQL Server in the JDBC adapter. As a result, the collation configured in the underlying database will be used when creating tables. ([#1518](https://github.com/scalar-labs/scalardb/pull/1518)) -- Added error codes to the error messages of Schema Loader. ([#1564](https://github.com/scalar-labs/scalardb/pull/1564)) -- Performance improvement of the group commit by using priority queue in the background worker. ([#1641](https://github.com/scalar-labs/scalardb/pull/1641)) -- Refactored scan with filtering. ([#1715](https://github.com/scalar-labs/scalardb/pull/1715)) -- Avoided creating an internal unique index as much as possible to reduce resource consumption and improve performance. 
([#1723](https://github.com/scalar-labs/scalardb/pull/1723)) -- Changed the hard-coded password for the Oracle user to a more secure one in the JDBC adapter. ([#1765](https://github.com/scalar-labs/scalardb/pull/1765)) -- Update base image of container image. This update fixes an OOM issue on a Kubernetes with cgroup v2 environment. In the previous versions, if you use a Kubernetes cluster with cgroup v2, you might face an OOM-killed issue. ([#1826](https://github.com/scalar-labs/scalardb/pull/1826)) -- Added capability to specify global properties for all storages in multi-storage. ([#1486](https://github.com/scalar-labs/scalardb/pull/1486)) - -#### Bug fixes - -- Upgraded the base image to fix security issues. [CVE-2023-47038](https://github.com/advisories/GHSA-96fh-9q43-rmjh "CVE-2023-47038") ([#1522](https://github.com/scalar-labs/scalardb/pull/1522) [#1521](https://github.com/scalar-labs/scalardb/pull/1521)) -- Upgraded the PostgresSQL lib to fix security issues. [CVE-2024-1597](https://github.com/advisories/GHSA-24rp-q3w6-vc56 "CVE-2024-1597") ([#1547](https://github.com/scalar-labs/scalardb/pull/1547)) -- Fixed a bug where `NullPointerException` occurs during the `EXTRA_READ` validation when scanning records in a transaction, but some of them are deleted by other transactions. ([#1624](https://github.com/scalar-labs/scalardb/pull/1624)) -- Fixed a bug where lazy recovery was not executed for the implicit pre-read of put and delete operations. ([#1681](https://github.com/scalar-labs/scalardb/pull/1681)) -- Fixed a bug where users could see inconsistent results when scanning records by an index key after putting the related records in Consensus Commit transactions. ([#1727](https://github.com/scalar-labs/scalardb/pull/1727)) -- Upgraded `grpc_health_probe` to fix security issues. 
[CVE-2024-24790](https://github.com/advisories/GHSA-49gw-vxvf-fc2g "CVE-2024-24790"), [CVE-2023-45283](https://github.com/advisories/GHSA-vvjp-q62m-2vph "CVE-2023-45283"), and [CVE-2023-45288](https://github.com/advisories/GHSA-4v7x-pqxf-cx7m "CVE-2023-45288") ([#1980](https://github.com/scalar-labs/scalardb/pull/1980)) -- Fixed snapshot management issues. ([#1976](https://github.com/scalar-labs/scalardb/pull/1976)) -- Fix a bug of the import-table feature that it could access tables in other namespace that have the same table name when using MySQL storage. ([#2001](https://github.com/scalar-labs/scalardb/pull/2001)) - -### Enterprise edition - -#### Enhancements - -##### ScalarDB Cluster - -- Added support for the insert mode of the Put operation introduced [#1679](https://github.com/scalar-labs/scalardb/pull/1679) in ScalarDB Cluster. -- Added support for insert, upsert, and update APIs introduced in [#1697](https://github.com/scalar-labs/scalardb/pull/1697) in ScalarDB Cluster. -- Added support executing a CRUD operations in a one-shot transaction. -- Added support for arbitrary filtering for partition scan and index scan introduced in [#1763](https://github.com/scalar-labs/scalardb/pull/1763) to ScalarDB Cluster. -- Added support for transaction managers other than Consensus Commit to ScalarDB Cluster. -- Added support for the single CRUD operation transaction manager introduced in [#1793](https://github.com/scalar-labs/scalardb/pull/1793) in ScalarDB Cluster. -- Added support for arbitrary filtering for get operations introduced in [#1834](https://github.com/scalar-labs/scalardb/pull/1834) to ScalarDB Cluster. -- Added support for `DistributedTransactionAdmin.getNamespaceNames()` - -##### ScalarDB SQL - -- Added support for the single CRUD operation transaction manager introduced in [#1793](https://github.com/scalar-labs/scalardb/pull/1793) to ScalarDB SQL. -- With this update, users now have several ways to access ScalarDB-managed namespaces in ScalarDB SQL. 
- -#### Improvements - -##### ScalarDB Cluster - -- Added error codes to the error messages of the authentication and authorization module. -- Added error codes to the error messages. -- Added TLS support for the Prometheus exporter. With this change, when enabling TLS (setting `scalar.db.cluster.tls.enabled` to `true`) in ScalarDB cluster nodes, the Prometheus exporter also starts with TLS (HTTPS). -- Update base image of container image. This update fixes an OOM issue on a Kubernetes with cgroup v2 environment. In the previous versions, if you use a Kubernetes cluster with cgroup v2, you might face an OOM-killed issue. - -##### ScalarDB GraphQL - -- Added error codes to the error messages. -- Update base image of container image. This update fixes an OOM issue on a Kubernetes with cgroup v2 environment. In the previous versions, if you use a Kubernetes cluster with cgroup v2, you might face an OOM-killed issue. - -##### ScalarDB SQL - -- Added error codes to the error messages. -- Changed the packages for `ConditionSetBuilder` and `AndConditionSet`. -- Allowed using the `EXISTS` keyword for the `CREATE/DROP COORDINATOR TABLES` statements. -- Update base image of container image. This update fixes an OOM issue on a Kubernetes with cgroup v2 environment. In the previous versions, if you use a Kubernetes cluster with cgroup v2, you might face an OOM-killed issue. -- Improved performance of selection queries with filtering by exploiting partition and index scans. - -#### Bug fixes - -##### ScalarDB Cluster - -- Upgraded the base image to fix security issues. [CVE-2023-47038](https://github.com/advisories/GHSA-96fh-9q43-rmjh "CVE-2023-47038") -- Upgraded the Kubernetes Client Java lib to fix security issues: [CVE-2024-25710](https://github.com/advisories/GHSA-4g9r-vxhx-9pgx "CVE-2024-25710") and [CVE-2024-26308](https://github.com/advisories/GHSA-4265-ccf5-phj5 "CVE-2024-26308"). -- Upgraded `grpc_health_probe` to fix security issues. 
[CVE-2024-24790](https://github.com/advisories/GHSA-49gw-vxvf-fc2g "CVE-2024-24790"), [CVE-2023-45283](https://github.com/advisories/GHSA-vvjp-q62m-2vph "CVE-2023-45283"), and [CVE-2023-45288](https://github.com/advisories/GHSA-4v7x-pqxf-cx7m "CVE-2023-45288") -- Fixed a bug where incorrect results are returned when executing SELECT queries with the same column names. - -##### ScalarDB GraphQL - -- Upgraded the base image to fix security issues. [CVE-2023-47038](https://github.com/advisories/GHSA-96fh-9q43-rmjh "CVE-2023-47038") - -##### ScalarDB SQL - -- Upgraded the base image to fix security issues. [CVE-2023-47038](https://github.com/advisories/GHSA-96fh-9q43-rmjh "CVE-2023-47038") -- Fixes a bug that Spring Data JDBC for ScalarDB doesn't work with Spring Boot 3 -- Fixed a bug where incorrect results are returned when executing SELECT queries with the same column names. -- Upgraded `grpc_health_probe` to fix security issues. [CVE-2024-24790](https://github.com/advisories/GHSA-49gw-vxvf-fc2g "CVE-2024-24790"), [CVE-2023-45283](https://github.com/advisories/GHSA-vvjp-q62m-2vph "CVE-2023-45283"), and [CVE-2023-45288](https://github.com/advisories/GHSA-4v7x-pqxf-cx7m "CVE-2023-45288") diff --git a/docs/releases/release-support-policy.mdx b/docs/releases/release-support-policy.mdx index ba9ff376..60e0e9f5 100644 --- a/docs/releases/release-support-policy.mdx +++ b/docs/releases/release-support-policy.mdx @@ -28,12 +28,19 @@ This page describes Scalar's support policy for major and minor version releases - 3.13 - 2024-07-08 + 3.14 + 2024-11-22 TBD* TBD* Contact us + + 3.13 + 2024-07-08 + 2025-11-22 + 2026-05-21 + Contact us + 3.12 2024-02-17 diff --git a/docs/run-non-transactional-storage-operations-through-library.mdx b/docs/run-non-transactional-storage-operations-through-library.mdx index e0ce8e2e..b8b913cb 100644 --- a/docs/run-non-transactional-storage-operations-through-library.mdx +++ b/docs/run-non-transactional-storage-operations-through-library.mdx @@ 
-236,7 +236,7 @@ Select your build tool, and follow the instructions to add the build dependency ```gradle dependencies { - implementation 'com.scalar-labs:scalardb:3.13.1' + implementation 'com.scalar-labs:scalardb:3.14.0' } ``` @@ -247,7 +247,7 @@ Select your build tool, and follow the instructions to add the build dependency com.scalar-labs scalardb - 3.13.1 + 3.14.0 ``` @@ -268,4 +268,4 @@ The following limitations apply to non-transactional storage operations: ### Learn more -- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.13.1/index.html) +- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.14.0/index.html) diff --git a/docs/scalar-licensing/README.mdx b/docs/scalar-licensing/README.mdx index d0cc7d99..f39c6977 100644 --- a/docs/scalar-licensing/README.mdx +++ b/docs/scalar-licensing/README.mdx @@ -54,15 +54,15 @@ Select the product you're using to see the product license key and a certificate - ```properties - spark.sql.catalog.scalardb_catalog.license.key= - spark.sql.catalog.scalardb_catalog.license.cert_pem=-----BEGIN CERTIFICATE-----\nMIICKzCCAdKgAwIBAgIIBXxj3s8NU+owCgYIKoZIzj0EAwIwbDELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMSMwIQYDVQQDExplbnRlcnByaXNlLnNjYWxhci1sYWJzLmNv\nbTAeFw0yMzExMTYwNzExNTdaFw0yNDAyMTUxMzE2NTdaMGwxCzAJBgNVBAYTAkpQ\nMQ4wDAYDVQQIEwVUb2t5bzERMA8GA1UEBxMIU2hpbmp1a3UxFTATBgNVBAoTDFNj\nYWxhciwgSW5jLjEjMCEGA1UEAxMaZW50ZXJwcmlzZS5zY2FsYXItbGFicy5jb20w\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATJx5gvAr+GZAHcBpUvDFDsUlFo4GNw\npRfsntzwStIP8ac3dew7HT4KbGBWei0BvIthleaqpv0AEP7JT6eYAkNvo14wXDAO\nBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwG\nA1UdEwEB/wQCMAAwHQYDVR0OBBYEFMIe+XuuZcnDX1c3TmUPlu3kNv/wMAoGCCqG\nSM49BAMCA0cAMEQCIGGlqKpgv+KW+Z1ZkjfMHjSGeUZKBLwfMtErVyc9aTdIAiAy\nvsZyZP6Or9o40x3l3pw/BT7wvy93Jm0T4vtVQH6Zuw==\n-----END CERTIFICATE----- + ```apacheconf + spark.sql.catalog.scalardb_catalog.license.key + 
spark.sql.catalog.scalardb_catalog.license.cert_pem -----BEGIN CERTIFICATE-----\nMIICKzCCAdKgAwIBAgIIBXxj3s8NU+owCgYIKoZIzj0EAwIwbDELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMSMwIQYDVQQDExplbnRlcnByaXNlLnNjYWxhci1sYWJzLmNv\nbTAeFw0yMzExMTYwNzExNTdaFw0yNDAyMTUxMzE2NTdaMGwxCzAJBgNVBAYTAkpQ\nMQ4wDAYDVQQIEwVUb2t5bzERMA8GA1UEBxMIU2hpbmp1a3UxFTATBgNVBAoTDFNj\nYWxhciwgSW5jLjEjMCEGA1UEAxMaZW50ZXJwcmlzZS5zY2FsYXItbGFicy5jb20w\nWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATJx5gvAr+GZAHcBpUvDFDsUlFo4GNw\npRfsntzwStIP8ac3dew7HT4KbGBWei0BvIthleaqpv0AEP7JT6eYAkNvo14wXDAO\nBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwG\nA1UdEwEB/wQCMAAwHQYDVR0OBBYEFMIe+XuuZcnDX1c3TmUPlu3kNv/wMAoGCCqG\nSM49BAMCA0cAMEQCIGGlqKpgv+KW+Z1ZkjfMHjSGeUZKBLwfMtErVyc9aTdIAiAy\nvsZyZP6Or9o40x3l3pw/BT7wvy93Jm0T4vtVQH6Zuw==\n-----END CERTIFICATE----- ``` - ```properties - spark.sql.catalog.scalardb_catalog.license.key= - spark.sql.catalog.scalardb_catalog.license.cert_pem=-----BEGIN CERTIFICATE-----\nMIICIzCCAcigAwIBAgIIKT9LIGX1TJQwCgYIKoZIzj0EAwIwZzELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMR4wHAYDVQQDExV0cmlhbC5zY2FsYXItbGFicy5jb20wHhcN\nMjMxMTE2MDcxMDM5WhcNMjQwMjE1MTMxNTM5WjBnMQswCQYDVQQGEwJKUDEOMAwG\nA1UECBMFVG9reW8xETAPBgNVBAcTCFNoaW5qdWt1MRUwEwYDVQQKEwxTY2FsYXIs\nIEluYy4xHjAcBgNVBAMTFXRyaWFsLnNjYWxhci1sYWJzLmNvbTBZMBMGByqGSM49\nAgEGCCqGSM49AwEHA0IABBSkIYAk7r5FRDf5qRQ7dbD3ib5g3fb643h4hqCtK+lC\nwM4AUr+PPRoquAy+Ey2sWEvYrWtl2ZjiYyyiZw8slGCjXjBcMA4GA1UdDwEB/wQE\nAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw\nADAdBgNVHQ4EFgQUbFyOWFrsjkkOvjw6vK3gGUADGOcwCgYIKoZIzj0EAwIDSQAw\nRgIhAKwigOb74z9BdX1+dUpeVG8WrzLTIqdIU0w+9jhAueXoAiEA6cniJ3qsP4j7\nsck62kHnFpH1fCUOc/b/B8ZtfeXI2Iw=\n-----END CERTIFICATE----- + ```apacheconf + spark.sql.catalog.scalardb_catalog.license.key + spark.sql.catalog.scalardb_catalog.license.cert_pem -----BEGIN 
CERTIFICATE-----\nMIICIzCCAcigAwIBAgIIKT9LIGX1TJQwCgYIKoZIzj0EAwIwZzELMAkGA1UEBhMC\nSlAxDjAMBgNVBAgTBVRva3lvMREwDwYDVQQHEwhTaGluanVrdTEVMBMGA1UEChMM\nU2NhbGFyLCBJbmMuMR4wHAYDVQQDExV0cmlhbC5zY2FsYXItbGFicy5jb20wHhcN\nMjMxMTE2MDcxMDM5WhcNMjQwMjE1MTMxNTM5WjBnMQswCQYDVQQGEwJKUDEOMAwG\nA1UECBMFVG9reW8xETAPBgNVBAcTCFNoaW5qdWt1MRUwEwYDVQQKEwxTY2FsYXIs\nIEluYy4xHjAcBgNVBAMTFXRyaWFsLnNjYWxhci1sYWJzLmNvbTBZMBMGByqGSM49\nAgEGCCqGSM49AwEHA0IABBSkIYAk7r5FRDf5qRQ7dbD3ib5g3fb643h4hqCtK+lC\nwM4AUr+PPRoquAy+Ey2sWEvYrWtl2ZjiYyyiZw8slGCjXjBcMA4GA1UdDwEB/wQE\nAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw\nADAdBgNVHQ4EFgQUbFyOWFrsjkkOvjw6vK3gGUADGOcwCgYIKoZIzj0EAwIDSQAw\nRgIhAKwigOb74z9BdX1+dUpeVG8WrzLTIqdIU0w+9jhAueXoAiEA6cniJ3qsP4j7\nsck62kHnFpH1fCUOc/b/B8ZtfeXI2Iw=\n-----END CERTIFICATE----- ``` diff --git a/docs/scalardb-analytics-spark/README.mdx b/docs/scalardb-analytics-spark/README.mdx index beb60e4e..79a294ea 100644 --- a/docs/scalardb-analytics-spark/README.mdx +++ b/docs/scalardb-analytics-spark/README.mdx @@ -1,19 +1,13 @@ --- tags: - Enterprise Option - - Private Preview + - Public Preview --- # ScalarDB Analytics with Spark import WarningLicenseKeyContact from '/src/components/en-us/_warning-license-key-contact.mdx'; -:::warning - -This version of ScalarDB Analytics with Spark was in private preview. Please use version 3.14 or later instead. - -::: - ScalarDB, as a universal transaction manager, targets mainly transactional workloads and therefore supports limited subsets of relational queries. ScalarDB Analytics with Spark extends the functionality of ScalarDB to process analytical queries on ScalarDB-managed data by using Apache Spark and Spark SQL. 
diff --git a/docs/scalardb-analytics-spark/version-compatibility.mdx b/docs/scalardb-analytics-spark/version-compatibility.mdx index d4caa449..41e0e0cb 100644 --- a/docs/scalardb-analytics-spark/version-compatibility.mdx +++ b/docs/scalardb-analytics-spark/version-compatibility.mdx @@ -1,17 +1,11 @@ --- tags: - Enterprise Option - - Private Preview + - Public Preview --- # Version Compatibility of ScalarDB Analytics with Spark -:::warning - -This version of ScalarDB Analytics with Spark was in private preview. Please use version 3.14 or later instead. - -::: - Since Spark and Scala may be incompatible among different minor versions, ScalarDB Analytics with Spark offers different artifacts for various Spark and Scala versions, named in the format `scalardb-analytics-spark-<SPARK_VERSION>_<SCALA_VERSION>`. Make sure that you select the artifact matching the Spark and Scala versions you're using. For example, if you're using Spark 3.5 with Scala 2.13, you must specify `scalardb-analytics-spark-3.5_2.13`. Regarding the Java version, ScalarDB Analytics with Spark supports Java 8 or later.
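As an illustration, selecting the matching artifact in a Gradle build might look like the following sketch. The version number is an assumption for illustration only; use the ScalarDB Analytics with Spark release that matches your deployment:

```gradle
dependencies {
    // Hypothetical coordinates for Spark 3.5 with Scala 2.13.
    // Replace 3.14.0 with the release you're actually using.
    implementation 'com.scalar-labs:scalardb-analytics-spark-3.5_2.13:3.14.0'
}
```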
diff --git a/docs/scalardb-cluster-dotnet-client-sdk/common-reference.mdx b/docs/scalardb-cluster-dotnet-client-sdk/common-reference.mdx index d6972311..a7c0918d 100644 --- a/docs/scalardb-cluster-dotnet-client-sdk/common-reference.mdx +++ b/docs/scalardb-cluster-dotnet-client-sdk/common-reference.mdx @@ -128,19 +128,22 @@ By using this configuration, the `ScalarDbOptions` object that is passed to the The following options are available: -| Name | Description | Default | -|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------| -| `Address` | **Required:** Address of the cluster in the following format: `://:`. ``: `https` if wire encryption (TLS) is enabled; `http` otherwise. ``: The FQDN or the IP address of the cluster. ``: The port number (`60053` by default) of the cluster. | - | -| `HopLimit` | Number of hops for a request to the cluster. The purpose of the `HopLimit` is to prevent infinite loops within the cluster. Each time a request is forwarded to another cluster node, the `HopLimit` decreases by one. If the `HopLimit` reaches zero, the request will be rejected. | `3` | -| `RetryCount` | How many times a client can try to connect to the cluster if it's unavailable. | `10` | -| `AuthEnabled` | Whether authentication and authorization are enabled. 
| `false` | -| `Username` | Username for authentication/authorization. | | -| `Password` | Password for authentication. If this isn't set, authentication is conducted without a password. | | -| `AuthTokenExpirationTime` | Time after which the authentication token should be refreshed. If the time set to `AuthTokenExpirationTime` is greater than the expiration time on the cluster, the authentication token will be refreshed when an authentication error is received. If the authentication token is successfully refreshed, the authentication error won't be propagated to the client code. Instead, the operation that has failed with the authentication error will be retried automatically. If more than one operation is running in parallel, all these operations will fail once with the authentication error before the authentication token is refreshed. | `00:00:00` (The authentication token expiration time received from the cluster is used.) | -| `TlsRootCertPem` | Custom CA root certificate (PEM data) for TLS communication. | | -| `TlsRootCertPath` | File path to the custom CA root certificate for TLS communication. | | -| `TlsOverrideAuthority` | Custom authority for TLS communication. This doesn't change what host is actually connected. This is mainly intended for testing. For example, you can specify the hostname presented in the cluster's certificate (the `scalar.db.cluster.node.tls.cert_chain_path` parameter of the cluster). If there's more than one hostname in the cluster's certificate, only the first hostname will be checked. | | -| `LogSensitiveData` | If set to `true`, information like username, password, and authentication token will be logged as is without masking when logging gRPC requests and responses. 
| `false` | +| Name | Description | Default | +|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------| +| `Address` | **Required:** Address of the cluster in the following format: `<protocol>://<hostname_or_ip_address>:<port>`. `<protocol>`: `https` if wire encryption (TLS) is enabled; `http` otherwise. `<hostname_or_ip_address>`: The FQDN or the IP address of the cluster. `<port>`: The port number (`60053` by default) of the cluster. | - | +| `HopLimit` | Number of hops for a request to the cluster. The purpose of `HopLimit` is to prevent infinite loops within the cluster. Each time a request is forwarded to another cluster node, `HopLimit` decreases by one. If `HopLimit` reaches zero, the request will be rejected. | `3` | +| `RetryCount` | How many times a client can try to connect to the cluster if it's unavailable. | `10` | +| `AuthEnabled` | Whether authentication and authorization are enabled. | `false` | +| `Username` | Username for authentication and authorization. | | +| `Password` | Password for authentication. If this isn't set, authentication is conducted without a password. | | +| `AuthTokenExpirationTime` | Time after which the authentication token should be refreshed.
If the time set for `AuthTokenExpirationTime` is greater than the expiration time on the cluster, the authentication token will be refreshed when an authentication error is received. If the authentication token is successfully refreshed, the authentication error won't be propagated to the client code. Instead, the operation that has failed with the authentication error will be retried automatically. If more than one operation is running in parallel, all these operations will fail once with the authentication error before the authentication token is refreshed. | `00:00:00` (The authentication token expiration time received from the cluster is used.) | +| `TlsRootCertPem` | Custom CA root certificate (PEM data) for TLS communication. | | +| `TlsRootCertPath` | File path to the custom CA root certificate for TLS communication. | | +| `TlsOverrideAuthority` | Custom authority for TLS communication. This doesn't change what host is actually connected. This is mainly intended for testing. For example, you can specify the hostname presented in the cluster's certificate (the `scalar.db.cluster.node.tls.cert_chain_path` parameter of the cluster). If there's more than one hostname in the cluster's certificate, only the first hostname will be checked. | | +| `LogSensitiveData` | If set to `true`, information like username, password, and authentication token will be logged as is without masking when logging gRPC requests and responses. | `false` | +| `GrpcRequestTimeout` | Timeout for gRPC requests. Internally, the timeout's value is used to calculate and set a deadline for each gRPC request to the cluster. If the set deadline is exceeded, the request is cancelled and `DeadlineExceededException` is thrown. If the timeout is set to `0`, no deadline will be set. | `00:01:00` | +| `GrpcMaxReceiveMessageSize` | The maximum message size in bytes that can be received by the client. When set to `0`, the message size is unlimited. 
| `4 MB` | +| `GrpcMaxSendMessageSize` | The maximum message size in bytes that can be sent from the client. When set to `0`, the message size is unlimited. | `0` (Unlimited) | ## How ScalarDB column types are converted to and from .NET types diff --git a/docs/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx b/docs/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx index 598d91f3..bba16cb6 100644 --- a/docs/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx +++ b/docs/scalardb-cluster-dotnet-client-sdk/exception-handling.mdx @@ -59,7 +59,8 @@ while (true) var result = await tran.GetAsync(getKeys); var scanKeys = new Dictionary<string, object> { { nameof(Item.Id), 1 } }; - var results = await tran.ScanAsync(scanKeys, null); + await foreach (var item in tran.ScanAsync(scanKeys, null)) + Console.WriteLine($"{item.Id}, {item.Name}, {item.Price}"); await tran.InsertAsync(new Item { Id = 1, Name = "Watermelon", Price = 4500 }); await tran.DeleteAsync(new Item { Id = 1 }); diff --git a/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx b/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx index 90f0fcf2..c4aba865 100644 --- a/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx +++ b/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-distributed-sql-transactions.mdx @@ -135,13 +135,14 @@ You can retrieve ScalarDB's metadata with the Metadata property as follows: ```c# // namespaces, tables metadata -var namespaces = (await manager.Metadata.GetNamespacesAsync()).ToList(); +var namespaceNames = new List<string>(); -foreach (var ns in namespaces) +await foreach (var ns in manager.Metadata.GetNamespacesAsync()) { + namespaceNames.Add(ns.Name); Console.WriteLine($"Namespace: {ns.Name}"); - foreach (var tbl in await ns.GetTablesAsync()) + await foreach (var tbl in ns.GetTablesAsync()) { Console.WriteLine($" Table: {tbl.Name}"); @@ -166,19
+167,25 @@ foreach (var ns in namespaces) } // users metadata -foreach (var user in await manager.Metadata.GetUsersAsync()) +await foreach (var user in manager.Metadata.GetUsersAsync()) { Console.WriteLine($"User: {user.Name} [IsSuperuser: {user.IsSuperuser}]"); - foreach (var ns in namespaces) + foreach (var nsName in namespaceNames) { - Console.WriteLine($" Namespace: {ns.Name}"); + Console.WriteLine($" Namespace: {nsName}"); Console.WriteLine($" Privileges:"); - foreach (var privilege in await user.GetPrivilegesAsync(ns.Name)) + foreach (var privilege in await user.GetPrivilegesAsync(nsName)) Console.WriteLine($" {privilege}"); } Console.WriteLine(); } ``` + +:::note + +To use LINQ methods with `IAsyncEnumerable<T>`, you can install the [System.Linq.Async](https://www.nuget.org/packages/System.Linq.Async/) package. + +::: diff --git a/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx b/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx index a93b9931..5014987b 100644 --- a/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx +++ b/docs/scalardb-cluster-dotnet-client-sdk/getting-started-with-scalardb-tables-as-csharp-classes.mdx @@ -84,12 +84,17 @@ var endKeys = new Dictionary<string, object> { { nameof(Statement.ItemId), 6 } }; -var statements = await transaction.ScanAsync(startKeys, endKeys); -foreach (var s in statements) +await foreach (var s in transaction.ScanAsync(startKeys, endKeys)) Console.WriteLine($"ItemId: {s.ItemId}, Count: {s.Count}"); ``` +:::note + +To use LINQ methods with `IAsyncEnumerable<T>`, you can install the [System.Linq.Async](https://www.nuget.org/packages/System.Linq.Async/) package.
+ +::: + ### Insert a new object by using the `InsertAsync` method ```c# diff --git a/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx b/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx index 41bca441..395c9038 100644 --- a/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx +++ b/docs/scalardb-cluster/developer-guide-for-scalardb-cluster-with-java-api.mdx @@ -17,7 +17,7 @@ To add a dependency on the ScalarDB Cluster Java Client SDK by using Gradle, use ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -27,7 +27,7 @@ To add a dependency by using Maven, use the following: ```xml <dependency> <groupId>com.scalar-labs</groupId> <artifactId>scalardb-cluster-java-client-sdk</artifactId> - <version>3.13.1</version> + <version>3.14.0</version> </dependency> ``` @@ -120,11 +120,11 @@ scalar.db.contact_points=direct-kubernetes:ns/scalardb-cluster To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [ScalarDB Schema Loader](../schema-loader.mdx) except the name of the JAR file is different. -You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1). +You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0).
After downloading the JAR file, you can run Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.13.1-all.jar --config <PATH_TO_CONFIG_FILE> -f <PATH_TO_SCHEMA_FILE> --coordinator +java -jar scalardb-cluster-schema-loader-3.14.0-all.jar --config <PATH_TO_CONFIG_FILE> -f <PATH_TO_SCHEMA_FILE> --coordinator ``` ## ScalarDB Cluster SQL @@ -164,8 +164,8 @@ To add the dependencies on the ScalarDB Cluster JDBC driver by using Gradle, use ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-sql-jdbc:3.13.1' - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-sql-jdbc:3.14.0' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -176,12 +176,12 @@ To add the dependencies by using Maven, use the following: ```xml <dependency> <groupId>com.scalar-labs</groupId> <artifactId>scalardb-sql-jdbc</artifactId> - <version>3.13.1</version> + <version>3.14.0</version> </dependency> <dependency> <groupId>com.scalar-labs</groupId> <artifactId>scalardb-cluster-java-client-sdk</artifactId> - <version>3.13.1</version> + <version>3.14.0</version> </dependency> ``` @@ -199,8 +199,8 @@ To add the dependencies by using Gradle, use the following: ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-sql-spring-data:3.13.1' - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-sql-spring-data:3.14.0' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -211,12 +211,12 @@ To add the dependencies by using Maven, use the following: ```xml <dependency> <groupId>com.scalar-labs</groupId> <artifactId>scalardb-sql-spring-data</artifactId> - <version>3.13.1</version> + <version>3.14.0</version> </dependency> <dependency> <groupId>com.scalar-labs</groupId> <artifactId>scalardb-cluster-java-client-sdk</artifactId> - <version>3.13.1</version> + <version>3.14.0</version> </dependency> ``` @@ -261,10 +261,10 @@ For details about how to configure Spring Data JDBC for ScalarDB, see [Configura Like other SQL databases, ScalarDB SQL also provides a CLI tool where you can issue SQL statements interactively in a command-line shell. -You can download the SQL CLI for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1).
After downloading the JAR file, you can run the SQL CLI with the following command: +You can download the SQL CLI for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0). After downloading the JAR file, you can run the SQL CLI with the following command: ```console -java -jar scalardb-cluster-sql-cli-3.13.1-all.jar --config <PATH_TO_CONFIG_FILE> +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar --config <PATH_TO_CONFIG_FILE> ``` #### Usage @@ -272,7 +272,7 @@ java -jar scalardb-cluster-sql-cli-3.13.1-all.jar --config <PATH_TO_CONFIG_FILE> You can see the CLI usage with the `-h` option as follows: ```console -java -jar scalardb-cluster-sql-cli-3.13.1-all.jar -h +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar -h Usage: scalardb-sql-cli [-hs] -c=PROPERTIES_FILE [-e=COMMAND] [-f=FILE] [-l=LOG_FILE] [-o=] [-p=PASSWORD] [-u=USERNAME] @@ -303,6 +303,6 @@ For details about the ScalarDB Cluster gRPC API, refer to the following: JavaDocs are also available: -* [ScalarDB Cluster Java Client SDK](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-java-client-sdk/3.13.1/index.html) -* [ScalarDB Cluster Common](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-common/3.13.1/index.html) -* [ScalarDB Cluster RPC](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-rpc/3.13.1/index.html) +* [ScalarDB Cluster Java Client SDK](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-java-client-sdk/3.14.0/index.html) +* [ScalarDB Cluster Common](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-common/3.14.0/index.html) +* [ScalarDB Cluster RPC](https://javadoc.io/doc/com.scalar-labs/scalardb-cluster-rpc/3.14.0/index.html) diff --git a/docs/scalardb-cluster/encrypt-data-at-rest.mdx b/docs/scalardb-cluster/encrypt-data-at-rest.mdx new file mode 100644 index 00000000..eaab14e3 --- /dev/null +++ b/docs/scalardb-cluster/encrypt-data-at-rest.mdx @@ -0,0 +1,313 @@ +--- +tags: + - Enterprise Premium +--- + +# Encrypt Data at Rest + +import WarningLicenseKeyContact from
'/src/components/en-us/_warning-license-key-contact.mdx'; + +This document explains how to encrypt data at rest in ScalarDB. + +## Overview + +ScalarDB can encrypt data stored through it. The encryption feature is similar to transparent data encryption (TDE) in major database systems; therefore, it is transparent to applications. ScalarDB encrypts data before writing it to the backend databases and decrypts it when reading from them. + +Currently, ScalarDB supports column-level encryption, allowing specific columns in a table to be encrypted. + +## Configurations + +To enable the encryption feature, you need to configure `scalar.db.cluster.encryption.enabled` to `true` in the ScalarDB Cluster node configuration file. + +| Name | Description | Default | +|----------------------------------------|-----------------------------------------|---------| +| `scalar.db.cluster.encryption.enabled` | Whether ScalarDB encrypts data at rest. | `false` | + +:::note + +Since encryption is transparent to the client, you don't need to change the client configuration. + +::: + +:::note + +If you enable the encryption feature, you will also need to set `scalar.db.cross_partition_scan.enabled` to `true` for the system namespace (`scalardb` by default) because it performs cross-partition scans internally. + +::: + +The other configurations depend on the encryption implementation you choose. Currently, ScalarDB supports the following encryption implementations: + +- HashiCorp Vault encryption +- Self-encryption + +The following sections explain how to configure each encryption implementation. + +### HashiCorp Vault encryption + +In HashiCorp Vault encryption, ScalarDB uses the [encryption as a service](https://developer.hashicorp.com/vault/tutorials/encryption-as-a-service/eaas-transit) of HashiCorp Vault to encrypt and decrypt data. In this implementation, ScalarDB delegates the management of encryption keys, as well as the encryption and decryption of data, to HashiCorp Vault. 
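To make this concrete, a minimal node-side sketch of a Vault-backed setup might look as follows. This is an illustration only: the address, token, and key type are placeholder values for a local dev-mode Vault server, and each property is described in the configuration tables of this document.

```properties
# Sketch only: enable Vault-backed encryption on a ScalarDB Cluster node.
# The address and token below assume a local dev-mode Vault server.
scalar.db.cluster.encryption.enabled=true
scalar.db.cluster.encryption.type=vault
scalar.db.cluster.encryption.vault.address=http://127.0.0.1:8200
scalar.db.cluster.encryption.vault.token=root
scalar.db.cluster.encryption.vault.key_type=aes256-gcm96
```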
+ +To use HashiCorp Vault encryption, you need to set the property `scalar.db.cluster.encryption.type` to `vault` in the ScalarDB Cluster node configuration file: + +| Name | Description | Default | +|-------------------------------------|-------------------------------------------------------------|---------| +| `scalar.db.cluster.encryption.type` | Should be set to `vault` to use HashiCorp Vault encryption. | | + +You also need to configure the following properties: + +| Name | Description | Default | +|------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------| +| `scalar.db.cluster.encryption.vault.key_type` | The key type. Currently, `aes128-gcm96`, `aes256-gcm96`, and `chacha20-poly1305` are supported. For details about the key types, see [Key types](https://developer.hashicorp.com/vault/docs/secrets/transit#key-types). | `aes128-gcm96` | +| `scalar.db.cluster.encryption.vault.associated_data_required` | Whether associated data is required for AEAD encryption. | `false` | +| `scalar.db.cluster.encryption.vault.address` | The address of the HashiCorp Vault server. | | +| `scalar.db.cluster.encryption.vault.token` | The token to authenticate with HashiCorp Vault. | | +| `scalar.db.cluster.encryption.vault.namespace` | The namespace of the HashiCorp Vault. This configuration is optional. | | +| `scalar.db.cluster.encryption.vault.transit_secrets_engine_path` | The path of the transit secrets engine. | `transit` | +| `scalar.db.cluster.encryption.vault.column_batch_size` | The number of columns to be included in a single request to the HashiCorp Vault server. | `64` | + +### Self-encryption + +In self-encryption, ScalarDB manages data encryption keys (DEKs) and performs encryption and decryption. 
ScalarDB generates a DEK for each table when creating the table and stores it in Kubernetes Secrets. + +To use self-encryption, you need to set the property `scalar.db.cluster.encryption.type` to `self` in the ScalarDB Cluster node configuration file: + +| Name | Description | Default | +|-------------------------------------|-------------------------------------------------|---------| +| `scalar.db.cluster.encryption.type` | Should be set to `self` to use self-encryption. | | + +You also need to configure the following properties: + +| Name | Description | Default | +|-------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------| +| `scalar.db.cluster.encryption.self.key_type` | The key type. Currently, `AES128_GCM`, `AES256_GCM`, `AES128_EAX`, `AES256_EAX`, `AES128_CTR_HMAC_SHA256`, `AES256_CTR_HMAC_SHA256`, `CHACHA20_POLY1305`, and `XCHACHA20_POLY1305` are supported. For details about the key types, see [Choose a key type](https://developers.google.com/tink/aead#choose_a_key_type). | `AES128_GCM` | +| `scalar.db.cluster.encryption.self.associated_data_required` | Whether associated data is required for AEAD encryption. | `false` | +| `scalar.db.cluster.encryption.self.kubernetes.secret.namespace_name` | The namespace name of the Kubernetes Secrets. | `default` | +| `scalar.db.cluster.encryption.self.data_encryption_key_cache_expiration_time` | The expiration time of the DEK cache in milliseconds. | `60000` (60 seconds) | + +### Delete the DEK when dropping a table + +By default, ScalarDB does not delete the data encryption key (DEK) associated with a table when the table is dropped. 
However, you can configure ScalarDB to delete the DEK when dropping a table. To enable this, set the property `scalar.db.cluster.encryption.delete_data_encryption_key_on_drop_table.enabled` to `true` in the ScalarDB Cluster node configuration file: + +| Name | Description | Default | +|---------------------------------------------------------------------------------|------------------------------------------------------------------|---------| +| `scalar.db.cluster.encryption.delete_data_encryption_key_on_drop_table.enabled` | Whether to delete the DEK when dropping a table. | `false` | + +## Limitations + +There are some limitations to the encryption feature: + +- Primary-key columns (partition-key columns and clustering-key columns) cannot be encrypted. +- Secondary-index columns cannot be encrypted. +- Encrypted columns cannot be specified in the WHERE clauses or ORDER BY clauses. +- Encrypted columns are stored in the underlying database as the BLOB type, so encrypted columns that are larger than the maximum size of the BLOB type cannot be stored. For the maximum size of the BLOB type, see [Data-type mapping between ScalarDB and other databases](../schema-loader.mdx#data-type-mapping-between-scalardb-and-other-databases). + +## Wire encryption + +If you enable the encryption feature, enabling wire encryption to protect your data is strongly recommended, especially in production environments. For details about wire encryption, see [Encrypt Wire Communications](encrypt-wire-communications.mdx). + +## Tutorial - Encrypt data by configuring HashiCorp Vault encryption + +This tutorial explains how to encrypt data stored through ScalarDB by using HashiCorp Vault encryption. + + + +### Step 1. Install HashiCorp Vault + +Install HashiCorp Vault by referring to the official HashiCorp documentation, [Install Vault](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install). + +### Step 2. 
Create the ScalarDB Cluster configuration file + +Create the following configuration file as `scalardb-cluster-node.properties`, replacing `<YOUR_LICENSE_KEY>` and `<YOUR_LICENSE_CHECK_CERT_PEM>` with your ScalarDB license key and license check certificate values. For more information about the license key and certificate, see [How to Configure a Product License Key](../scalar-licensing/README.mdx). + +```properties +scalar.db.storage=jdbc +scalar.db.contact_points=jdbc:postgresql://postgresql:5432/postgres +scalar.db.username=postgres +scalar.db.password=postgres +scalar.db.cluster.node.standalone_mode.enabled=true +scalar.db.cross_partition_scan.enabled=true +scalar.db.sql.enabled=true + +# Encryption configurations +scalar.db.cluster.encryption.enabled=true +scalar.db.cluster.encryption.type=vault +scalar.db.cluster.encryption.vault.address=http://vault:8200 +scalar.db.cluster.encryption.vault.token=root + +# License key configurations +scalar.db.cluster.node.licensing.license_key=<YOUR_LICENSE_KEY> +scalar.db.cluster.node.licensing.license_check_cert_pem=<YOUR_LICENSE_CHECK_CERT_PEM> +``` + +### Step 3. Create the Docker Compose configuration file + +Create the following configuration file as `docker-compose.yaml`.
+ +```yaml +services: + vault: + container_name: "vault" + image: "hashicorp/vault:1.17.3" + ports: + - 8200:8200 + environment: + - VAULT_DEV_ROOT_TOKEN_ID=root + - VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200 + cap_add: + - IPC_LOCK + + postgresql: + container_name: "postgresql" + image: "postgres:15" + ports: + - 5432:5432 + environment: + - POSTGRES_PASSWORD=postgres + healthcheck: + test: ["CMD-SHELL", "pg_isready || exit 1"] + interval: 1s + timeout: 10s + retries: 60 + start_period: 30s + + scalardb-cluster-standalone: + container_name: "scalardb-cluster-node" + image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.14.0" + ports: + - 60053:60053 + - 9080:9080 + volumes: + - ./scalardb-cluster-node.properties:/scalardb-cluster/node/scalardb-cluster-node.properties + depends_on: + postgresql: + condition: service_healthy +``` + +### Step 4. Start the HashiCorp Vault server + +Run the following command to start the HashiCorp Vault server in development mode. + +```console +docker compose up vault -d +``` + +Once the HashiCorp Vault server is running, set its environment variables by running the following commands. + +```console +export VAULT_ADDR="http://127.0.0.1:8200" +export VAULT_TOKEN=root +``` + +### Step 5. Enable the transit secrets engine on the HashiCorp Vault server + +Run the following command to enable the transit secrets engine on the HashiCorp Vault server. + +```console +vault secrets enable transit +``` + +### Step 6. Start PostgreSQL and ScalarDB Cluster + +Run the following command to start PostgreSQL and ScalarDB Cluster in standalone mode. + +```console +docker compose up postgresql scalardb-cluster-standalone -d +``` + +It may take a few minutes for ScalarDB Cluster to fully start. + +### Step 7. Connect to ScalarDB Cluster + +To connect to ScalarDB Cluster, this tutorial uses the SQL CLI, a tool for connecting to ScalarDB Cluster and executing SQL queries.
You can download the SQL CLI from the [ScalarDB releases page](https://github.com/scalar-labs/scalardb/releases). + +Create a configuration file named `scalardb-cluster-sql-cli.properties`. This file will be used to connect to ScalarDB Cluster by using the SQL CLI. + +```properties +scalar.db.sql.connection_mode=cluster +scalar.db.sql.cluster_mode.contact_points=indirect:localhost +``` + +Then, start the SQL CLI by running the following command. + +```console +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar --config scalardb-cluster-sql-cli.properties +``` + +To begin, create the Coordinator tables required for ScalarDB transaction execution. + +```sql +CREATE COORDINATOR TABLES IF NOT EXISTS; +``` + +Now you're ready to use the database with the encryption feature enabled in ScalarDB Cluster. + +### Step 8. Create a table + +Before creating a table, you need to create a namespace. + +```sql +CREATE NAMESPACE ns; +``` + +Next, create a table. + +```sql +CREATE TABLE ns.tbl ( + id INT PRIMARY KEY, + col1 TEXT ENCRYPTED, + col2 INT ENCRYPTED, + col3 INT); +``` + +By using the `ENCRYPTED` keyword, the data in the specified columns will be encrypted. In this example, the data in `col1` and `col2` will be encrypted. + +### Step 9. Insert data into the table + +To insert data into the table, execute the following SQL query. + +```sql +INSERT INTO ns.tbl (id, col1, col2, col3) VALUES (1, 'data1', 123, 456); +``` + +To verify the inserted data, run the following SQL query. + +```sql +SELECT * FROM ns.tbl; +``` + +```console ++----+-------+------+------+ +| id | col1 | col2 | col3 | ++----+-------+------+------+ +| 1 | data1 | 123 | 456 | ++----+-------+------+------+ +``` + +### Step 10. Verify data encryption + +To verify that the data is encrypted, connect directly to PostgreSQL and check the data. + +:::warning + +Reading or writing data from the backend database directly is not supported in ScalarDB. In such a case, ScalarDB cannot guarantee data consistency. 
This guide accesses the backend database directly for testing purposes; however, you should not do this in a production environment. + +::: + +Run the following command to connect to PostgreSQL. + +```console +docker exec -it postgresql psql -U postgres +``` + +Next, execute the following SQL query to check the data in the table. + +```sql +SELECT id, col1, col2, col3 FROM ns.tbl; +``` + +You should see output similar to the following, which confirms that the data in `col1` and `col2` is encrypted. + +```console + id | col1 | col2 | col3 +----+--------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------+------ + 1 | \x7661756c743a76313a6b6f76455062316a676e6a4a596b643743765539315a49714d625564545a61697152666c7967367837336e66 | \x7661756c743a76313a4b6244543162764678676d44424b526d7037794f5176423569616e615635304c473079664354514b3866513d | 456 +``` diff --git a/docs/scalardb-cluster/encrypt-wire-communications.mdx b/docs/scalardb-cluster/encrypt-wire-communications.mdx new file mode 100644 index 00000000..9e3ff4c8 --- /dev/null +++ b/docs/scalardb-cluster/encrypt-wire-communications.mdx @@ -0,0 +1,63 @@ +--- +tags: + - Enterprise Premium +--- + +# Encrypt Wire Communications + +ScalarDB can encrypt wire communications by using Transport Layer Security (TLS). This document explains the configurations for wire encryption in ScalarDB. + +The wire encryption feature encrypts: + +* The communications between the ScalarDB Cluster node and clients. +* The communications between all the ScalarDB Cluster nodes (the cluster's internal communications). + +This feature uses TLS support in gRPC. For details, see the official gRPC [Security Policy](https://github.com/grpc/grpc-java/blob/master/SECURITY.md).
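As a concrete illustration, a minimal node-side TLS setup might be sketched as follows. The certificate and key file paths are hypothetical examples; each property is described in the configuration sections of this document.

```properties
# Sketch only: enable TLS on a ScalarDB Cluster node.
# All file paths below are hypothetical examples.
scalar.db.cluster.tls.enabled=true
scalar.db.cluster.tls.ca_root_cert_path=/certs/ca-root-cert.pem
scalar.db.cluster.node.tls.cert_chain_path=/certs/cert-chain.pem
scalar.db.cluster.node.tls.private_key_path=/certs/private-key.pem
```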
+ +:::note + +Enabling wire encryption between the ScalarDB Cluster nodes and the underlying databases in production environments is strongly recommended. For instructions on how to enable wire encryption between the ScalarDB Cluster nodes and the underlying databases, please refer to the product documentation for your underlying databases. + +::: + +## Configurations + +This section describes the available configurations for wire encryption. + +### Enable wire encryption in the ScalarDB Cluster nodes + +To enable wire encryption in the ScalarDB Cluster nodes, you need to set `scalar.db.cluster.tls.enabled` to `true`. + +| Name | Description | Default | +|---------------------------------|-------------------------------------------|---------| +| `scalar.db.cluster.tls.enabled` | Whether wire encryption (TLS) is enabled. | `false` | + +You also need to set the following configurations: + +| Name | Description | Default | +|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------| +| `scalar.db.cluster.tls.ca_root_cert_pem` | The custom CA root certificate (PEM data) for TLS communication. | | +| `scalar.db.cluster.tls.ca_root_cert_path` | The custom CA root certificate (file path) for TLS communication. | | +| `scalar.db.cluster.tls.override_authority` | The custom authority for TLS communication. This doesn't change what host is actually connected. This is intended for testing, but may safely be used outside of tests as an alternative to DNS overrides. For example, you can specify the hostname presented in the certificate chain file that you set for `scalar.db.cluster.node.tls.cert_chain_path`. 
| | +| `scalar.db.cluster.node.tls.cert_chain_path` | The certificate chain file used for TLS communication. | | +| `scalar.db.cluster.node.tls.private_key_path` | The private key file used for TLS communication. | | + +To specify the certificate authority (CA) root certificate, you should set either `scalar.db.cluster.tls.ca_root_cert_pem` or `scalar.db.cluster.tls.ca_root_cert_path`. If you set both, `scalar.db.cluster.tls.ca_root_cert_pem` will be used. + +### Enable wire encryption on the client side + +To enable wire encryption on the client side by using the ScalarDB Cluster Java client SDK, you need to set `scalar.db.cluster.tls.enabled` to `true`. + +| Name | Description | Default | +|---------------------------------|-------------------------------------------|---------| +| `scalar.db.cluster.tls.enabled` | Whether wire encryption (TLS) is enabled. | `false` | + +You also need to set the following configurations: + +| Name | Description | Default | +|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------| +| `scalar.db.cluster.tls.ca_root_cert_pem` | The custom CA root certificate (PEM data) for TLS communication. | | +| `scalar.db.cluster.tls.ca_root_cert_path` | The custom CA root certificate (file path) for TLS communication. | | +| `scalar.db.cluster.tls.override_authority` | The custom authority for TLS communication. This doesn't change what host is actually connected. This is intended for testing, but may safely be used outside of tests as an alternative to DNS overrides. 
For example, you can specify the hostname presented in the certificate chain file that you set for `scalar.db.cluster.node.tls.cert_chain_path`. | | + +To specify the CA root certificate, you should set either `scalar.db.cluster.tls.ca_root_cert_pem` or `scalar.db.cluster.tls.ca_root_cert_path`. If you set both, `scalar.db.cluster.tls.ca_root_cert_pem` will be used. diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx index 99032cb4..b0bb5d29 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-graphql.mdx @@ -105,11 +105,11 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. -You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1). +You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.13.1-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.14.0-all.jar --config database.properties -f schema.json --coordinator ``` ## Step 4. 
Run operations from GraphiQL @@ -186,7 +186,7 @@ You should get the following result in the right pane: ### Mappings between GraphQL API and ScalarDB Java API The automatically generated GraphQL schema defines queries, mutations, and object types for input/output to allow you to run CRUD operations for all the tables in the target namespaces. -These operations are designed to match the ScalarDB APIs defined in the [`DistributedTransaction`](https://javadoc.io/doc/com.scalar-labs/scalardb/3.13.1/com/scalar/db/api/DistributedTransaction.html) interface. +These operations are designed to match the ScalarDB APIs defined in the [`DistributedTransaction`](https://javadoc.io/doc/com.scalar-labs/scalardb/3.14.0/com/scalar/db/api/DistributedTransaction.html) interface. Assuming you have an `account` table in a namespace, the following queries and mutations will be generated: diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx index 8c6fcfbb..26f39d54 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-jdbc.mdx @@ -85,10 +85,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: +To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0). 
After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: ```console -java -jar scalardb-cluster-sql-cli-3.13.1-all.jar --config scalardb-sql.properties --file schema.sql +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar --config scalardb-sql.properties --file schema.sql ``` ## Step 4. Load the initial data diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx index 8aa48301..b737a51b 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster-sql-spring-data-jdbc.mdx @@ -85,10 +85,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: +To load a schema, you need to use [the SQL CLI](developer-guide-for-scalardb-cluster-with-java-api.mdx#sql-cli). You can download the SQL CLI from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0). After downloading the JAR file, you can use SQL CLI for Cluster by running the following command: ```console -java -jar scalardb-cluster-sql-cli-3.13.1-all.jar --config scalardb-sql.properties --file schema.sql +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar --config scalardb-sql.properties --file schema.sql ``` ## Step 4. 
Modify `application.properties` diff --git a/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx b/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx index 5bd1dc1b..c2f615a5 100644 --- a/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/getting-started-with-scalardb-cluster.mdx @@ -119,7 +119,7 @@ To use ScalarDB Cluster, open `build.gradle` in your preferred text editor. Then dependencies { ... - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -165,12 +165,12 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi The database schema (the method in which the data will be organized) for the sample application has already been defined in [`schema.json`](https://github.com/scalar-labs/scalardb-samples/tree/main/scalardb-sample/schema.json). -To apply the schema, go to [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1) and download the ScalarDB Cluster Schema Loader to the `scalardb-samples/scalardb-sample` folder. +To apply the schema, go to [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0) and download the ScalarDB Cluster Schema Loader to the `scalardb-samples/scalardb-sample` folder. 
Then, run the following command: ```console -java -jar scalardb-cluster-schema-loader-3.13.1-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.14.0-all.jar --config database.properties -f schema.json --coordinator ``` #### Schema details diff --git a/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx b/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx index 301bffc7..e9dbfe0e 100644 --- a/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/getting-started-with-using-go-for-scalardb-cluster.mdx @@ -72,10 +72,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: +To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0). 
After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.13.1-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.14.0-all.jar --config database.properties -f schema.json --coordinator ``` ## Step 4. Set up a Go environment diff --git a/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx b/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx index a198fe9e..11b194d1 100644 --- a/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/getting-started-with-using-python-for-scalardb-cluster.mdx @@ -72,10 +72,10 @@ For details about the client modes, see [Developer Guide for ScalarDB Cluster wi ## Step 3. Load a schema -To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.13.1). After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: +To load a schema via ScalarDB Cluster, you need to use the dedicated Schema Loader for ScalarDB Cluster (Schema Loader for Cluster). Using the Schema Loader for Cluster is basically the same as using the [Schema Loader for ScalarDB](../schema-loader.mdx) except the name of the JAR file is different. You can download the Schema Loader for Cluster from [ScalarDB Releases](https://github.com/scalar-labs/scalardb/releases/tag/v3.14.0). 
After downloading the JAR file, you can run the Schema Loader for Cluster with the following command: ```console -java -jar scalardb-cluster-schema-loader-3.13.1-all.jar --config database.properties -f schema.json --coordinator +java -jar scalardb-cluster-schema-loader-3.14.0-all.jar --config database.properties -f schema.json --coordinator ``` ## Step 4. Set up a Python environment diff --git a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx index 89ca34d1..cd538abe 100644 --- a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx +++ b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-scalardb-cluster.mdx @@ -270,7 +270,7 @@ Select your build tool, and follow the instructions to add the build dependency ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -281,7 +281,7 @@ Select your build tool, and follow the instructions to add the build dependency com.scalar-labs scalardb-cluster-java-client-sdk - 3.13.1 + 3.14.0 ``` @@ -306,5 +306,5 @@ The following limitations apply to non-transactional storage operations: ### Learn more -- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.13.1/index.html) +- [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb/3.14.0/index.html) - [Developer Guide for ScalarDB Cluster with the Java API](developer-guide-for-scalardb-cluster-with-java-api.mdx) diff --git a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx index fd4442b2..fa3108d4 100644 --- a/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx +++ 
b/docs/scalardb-cluster/run-non-transactional-storage-operations-through-sql-interface.mdx @@ -275,8 +275,8 @@ Also, for a list of supported DDLs, see [ScalarDB SQL Grammar](../scalardb-sql/g ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-sql-jdbc:3.13.1' - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-sql-jdbc:3.14.0' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -288,12 +288,12 @@ Also, for a list of supported DDLs, see [ScalarDB SQL Grammar](../scalardb-sql/g com.scalar-labs scalardb-sql-jdbc - 3.13.1 + 3.14.0 com.scalar-labs scalardb-cluster-java-client-sdk - 3.13.1 + 3.14.0 ``` @@ -340,8 +340,8 @@ The following limitations apply to non-transactional storage operations: ```gradle dependencies { - implementation 'com.scalar-labs:scalardb-sql:3.13.1' - implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.13.1' + implementation 'com.scalar-labs:scalardb-sql:3.14.0' + implementation 'com.scalar-labs:scalardb-cluster-java-client-sdk:3.14.0' } ``` @@ -353,12 +353,12 @@ The following limitations apply to non-transactional storage operations: com.scalar-labs scalardb-sql - 3.13.1 + 3.14.0 com.scalar-labs scalardb-cluster-java-client-sdk - 3.13.1 + 3.14.0 ``` @@ -386,7 +386,7 @@ The following limitations apply to non-transactional storage operations:

### Learn more

- - [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb-sql/3.13.1/index.html) + - [Javadoc](https://javadoc.io/doc/com.scalar-labs/scalardb-sql/3.14.0/index.html) diff --git a/docs/scalardb-cluster/scalardb-auth-status-codes.mdx b/docs/scalardb-cluster/scalardb-auth-status-codes.mdx index a499f9c1..ff59e347 100644 --- a/docs/scalardb-cluster/scalardb-auth-status-codes.mdx +++ b/docs/scalardb-cluster/scalardb-auth-status-codes.mdx @@ -70,7 +70,7 @@ Access denied: Invalid auth token **Message** ```markdown -Access denied: You need the %s privilege for the namespace %s to execute this operation +Access denied: You need the %s privilege on the namespace %s to execute this operation ``` ### `AUTH-10008` @@ -78,7 +78,7 @@ Access denied: You need the %s privilege for the namespace %s to execute this op **Message** ```markdown -Access denied: You need the %s privilege for the table %s to execute this operation +Access denied: You need the %s privilege on the table %s to execute this operation ``` ### `AUTH-10009` @@ -134,7 +134,7 @@ You can't drop the current user %s **Message** ```markdown -Access denied: You can't grant the %s privilege because you don't have the same privilege for the table %s +Access denied: You can't grant the %s privilege because you don't have the same privilege on the table %s ``` ### `AUTH-10016` @@ -142,7 +142,7 @@ Access denied: You can't grant the %s privilege because you don't have the same **Message** ```markdown -Access denied: You can't grant the %s privilege because you don't have the same privilege for the namespace %s +Access denied: You can't grant the %s privilege because you don't have the same privilege on the namespace %s ``` ### `AUTH-10017` @@ -150,7 +150,7 @@ Access denied: You can't grant the %s privilege because you don't have the same **Message** ```markdown -Access denied: You can't revoke the %s privilege because you don't have the same privilege for the table %s +Access denied: You can't revoke the %s privilege 
because you don't have the same privilege on the table %s ``` ### `AUTH-10018` @@ -158,7 +158,7 @@ Access denied: You can't revoke the %s privilege because you don't have the same **Message** ```markdown -Access denied: You can't revoke the %s privilege because you don't have the same privilege for the namespace %s +Access denied: You can't revoke the %s privilege because you don't have the same privilege on the namespace %s ``` ### `AUTH-10019` diff --git a/docs/scalardb-cluster/scalardb-auth-with-sql.mdx b/docs/scalardb-cluster/scalardb-auth-with-sql.mdx index eefd9375..a0c0be6e 100644 --- a/docs/scalardb-cluster/scalardb-auth-with-sql.mdx +++ b/docs/scalardb-cluster/scalardb-auth-with-sql.mdx @@ -5,6 +5,8 @@ tags: # Authenticate and Authorize Users +import WarningLicenseKeyContact from '/src/components/en-us/_warning-license-key-contact.mdx'; + ScalarDB Cluster has a mechanism to authenticate and authorize users. This guide describes how to use authentication and authorization in ScalarDB Cluster. @@ -108,13 +110,13 @@ The following tables show which privileges are required for each type of operati ### DML -| Command | Superuser required | Required privileges | -|------------------|--------------------|-----------------------| -| `SELECT` | | `SELECT` | -| `INSERT` | | `INSERT` | -| `UPSERT` | | `INSERT` | -| `UPDATE` | | `SELECT` and `UPDATE` | -| `DELETE` | | `SELECT` and `DELETE` | +| Command | Superuser required | Required privileges | +|----------|--------------------|-----------------------| +| `SELECT` | | `SELECT` | +| `INSERT` | | `INSERT` | +| `UPSERT` | | `INSERT` | +| `UPDATE` | | `SELECT` and `UPDATE` | +| `DELETE` | | `SELECT` and `DELETE` | ### DCL @@ -126,55 +128,233 @@ The following tables show which privileges are required for each type of operati | `GRANT` | | `GRANT` (Users can grant only the privileges that they have.) | | `REVOKE` | | `GRANT` (Users can revoke only the privileges that they have.) 
| +## Limitations + +There are some limitations on granting and revoking privileges in authentication and authorization: + +- You must grant or revoke `INSERT` and `UPDATE` privileges together. +- To grant a user the `UPDATE` or `DELETE` privilege, the target user must have the `SELECT` privilege. +- If the target user has the `INSERT` or `UPDATE` privilege, you cannot revoke the `SELECT` privilege from them. + ## Wire encryption -ScalarDB Cluster also supports wire encryption by using Transport Layer Security (TLS). If you enable authentication and authorization, enabling wire encryption in production environments to protect the user credentials is strongly recommended. +If you enable authentication and authorization, enabling wire encryption to protect the user credentials is strongly recommended, especially in production environments. For details about wire encryption, see [Encrypt Wire Communications](encrypt-wire-communications.mdx). + +## Tutorial - Authenticate and authorize users + +This tutorial explains how to use authentication and authorization. + +<WarningLicenseKeyContact /> + +### 1. Create the ScalarDB Cluster configuration file + +Create the following configuration file as `scalardb-cluster-node.properties`, replacing `<YOUR_LICENSE_KEY>` and `<YOUR_LICENSE_CHECK_CERT_PEM>` with your ScalarDB license key and license check certificate values. For more information about the license key and certificate, see [How to Configure a Product License Key](../scalar-licensing/README.mdx). + +```properties +scalar.db.storage=jdbc +scalar.db.contact_points=jdbc:postgresql://postgresql:5432/postgres +scalar.db.username=postgres +scalar.db.password=postgres +scalar.db.cluster.node.standalone_mode.enabled=true +scalar.db.cross_partition_scan.enabled=true +scalar.db.sql.enabled=true + +# Enable authentication and authorization +scalar.db.cluster.auth.enabled=true + +# License key configurations +scalar.db.cluster.node.licensing.license_key=<YOUR_LICENSE_KEY> +scalar.db.cluster.node.licensing.license_check_cert_pem=<YOUR_LICENSE_CHECK_CERT_PEM> +``` + +### 2.
Create the Docker Compose file + +Create the following configuration file as `docker-compose.yaml`. + +```yaml +services: + postgresql: + container_name: "postgresql" + image: "postgres:15" + ports: + - 5432:5432 + environment: + - POSTGRES_PASSWORD=postgres + healthcheck: + test: ["CMD-SHELL", "pg_isready || exit 1"] + interval: 1s + timeout: 10s + retries: 60 + start_period: 30s + + scalardb-cluster-standalone: + container_name: "scalardb-cluster-node" + image: "ghcr.io/scalar-labs/scalardb-cluster-node-byol-premium:3.14.0" + ports: + - 60053:60053 + - 9080:9080 + volumes: + - ./scalardb-cluster-node.properties:/scalardb-cluster/node/scalardb-cluster-node.properties + depends_on: + postgresql: + condition: service_healthy +``` + +### 3. Start PostgreSQL and ScalarDB Cluster + +Run the following command to start PostgreSQL and ScalarDB Cluster in standalone mode. + +```console +docker compose up -d +``` + +It may take a few minutes for ScalarDB Cluster to fully start. + +### 4. Connect to ScalarDB Cluster + +To connect to ScalarDB Cluster, this tutorial uses the SQL CLI, a tool for connecting to ScalarDB Cluster and executing SQL queries. You can download the SQL CLI from the [ScalarDB releases page](https://github.com/scalar-labs/scalardb/releases). + +Create a configuration file named `scalardb-cluster-sql-cli.properties`. This file will be used to connect to ScalarDB Cluster by using the SQL CLI. + +```properties +scalar.db.sql.connection_mode=cluster +scalar.db.sql.cluster_mode.contact_points=indirect:localhost + +# Enable authentication and authorization +scalar.db.cluster.auth.enabled=true +``` + +Then, start the SQL CLI by running the following command. + +```console +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar --config scalardb-cluster-sql-cli.properties +``` + +Enter the username and password as `admin` and `admin`, respectively. + +Now you're ready to use the database with authentication and authorization enabled in ScalarDB Cluster. + +### 5.
Create namespaces and a table + +Create namespaces. + +```sql +CREATE NAMESPACE ns1; + +CREATE NAMESPACE ns2; +``` + +Next, create a table in the `ns1` namespace. + +```sql +CREATE TABLE ns1.tbl ( + id INT PRIMARY KEY, + col1 TEXT, + col2 INT); +``` + +### 6. Create a user + +Create a user named `user1`. + +```sql +CREATE USER user1 WITH PASSWORD 'user1'; +``` + +To check the user, run the following command. + +```sql +SHOW USERS; +``` + +```console ++----------+-------------+ +| username | isSuperuser | ++----------+-------------+ +| user1 | false | +| admin | true | ++----------+-------------+ +``` + +You can see that the `user1` user has been created. + +### 7. Grant privileges + +Grant the `SELECT`, `INSERT`, and `UPDATE` privileges to `user1` on the `ns1.tbl` table. + +```sql +GRANT SELECT, INSERT, UPDATE ON ns1.tbl TO user1; +``` + +Then, grant the `SELECT` privilege to `user1` on the `ns2` namespace. + +```sql +GRANT SELECT ON NAMESPACE ns2 TO user1; +``` + +To check the privileges, run the following command. + +```sql +SHOW GRANTS FOR user1; +``` + +```console ++---------+-----------+-----------+ +| name | type | privilege | ++---------+-----------+-----------+ +| ns2 | NAMESPACE | SELECT | +| ns1.tbl | TABLE | SELECT | +| ns1.tbl | TABLE | INSERT | +| ns1.tbl | TABLE | UPDATE | ++---------+-----------+-----------+ +``` -This wire encryption feature encrypts: +You can see that `user1` has been granted the `SELECT`, `INSERT`, and `UPDATE` privileges on the `ns1.tbl` table, and the `SELECT` privilege on the `ns2` namespace. -* The communications between the ScalarDB Cluster node and clients. -* The communications between all ScalarDB Cluster nodes (the cluster's internal communications). +### 8. Log in as `user1` -This feature uses gRPC's TLS support. For details, see the official gRPC [Security Policy](https://github.com/grpc/grpc-java/blob/master/SECURITY.md). +Log in as `user1` and execute SQL statements.
-### Configurations +```console +java -jar scalardb-cluster-sql-cli-3.14.0-all.jar --config scalardb-cluster-sql-cli.properties +``` -This section describes the available configurations for wire encryption. +Enter the username and password as `user1` and `user1`, respectively. -#### ScalarDB Cluster node configurations +Now you can execute SQL statements as `user1`. -To enable wire encryption, you need to set `scalar.db.cluster.tls.enabled` to `true`. +### 9. Execute DML statements -| Name | Description | Default | -|---------------------------------|-------------------------------------------|---------| -| `scalar.db.cluster.tls.enabled` | Whether wire encryption (TLS) is enabled. | `false` | +Execute the following `INSERT` statement as `user1`. -You also need to set the following configurations: +```sql +INSERT INTO ns1.tbl VALUES (1, 'a', 1); +``` -| Name | Description | Default | -|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------| -| `scalar.db.cluster.tls.ca_root_cert_pem` | The custom CA root certificate (PEM data) for TLS communication. | | -| `scalar.db.cluster.tls.ca_root_cert_path` | The custom CA root certificate (file path) for TLS communication. | | -| `scalar.db.cluster.tls.override_authority` | The custom authority for TLS communication. This doesn't change what host is actually connected. This is intended for testing, but may safely be used outside of tests as an alternative to DNS overrides. For example, you can specify the hostname presented in the certificate chain file that you set for `scalar.db.cluster.node.tls.cert_chain_path`. 
| | -| `scalar.db.cluster.node.tls.cert_chain_path` | The certificate chain file used for TLS communication. | | -| `scalar.db.cluster.node.tls.private_key_path` | The private key file used for TLS communication. | | +Then, execute the following `SELECT` statement as `user1`. -To specify the certificate authority (CA) root certificate, you should set either `scalar.db.cluster.tls.ca_root_cert_pem` or `scalar.db.cluster.tls.ca_root_cert_path`. If you set both, `scalar.db.cluster.tls.ca_root_cert_pem` will be used. +```sql +SELECT * FROM ns1.tbl; +``` -#### ScalarDB Cluster Java client SDK configurations +```console ++----+------+------+ +| id | col1 | col2 | ++----+------+------+ +| 1 | a | 1 | ++----+------+------+ +``` -To enable wire encryption on the client side, you need to set `scalar.db.cluster.tls.enabled` to `true`. +You can see that `user1` can execute `INSERT` and `SELECT` statements. -| Name | Description | Default | -|---------------------------------|-------------------------------------------|---------| -| `scalar.db.cluster.tls.enabled` | Whether wire encryption (TLS) is enabled. | `false` | +Next, try executing the following `DELETE` statement as `user1`. -You also need to set the following configurations: +```sql +DELETE FROM ns1.tbl WHERE id = 1; +``` -| Name | Description | Default | -|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------| -| `scalar.db.cluster.tls.ca_root_cert_pem` | The custom CA root certificate (PEM data) for TLS communication. | | -| `scalar.db.cluster.tls.ca_root_cert_path` | The custom CA root certificate (file path) for TLS communication. 
| | -| `scalar.db.cluster.tls.override_authority` | The custom authority for TLS communication. This doesn't change what host is actually connected. This is intended for testing, but may safely be used outside of tests as an alternative to DNS overrides. For example, you can specify the hostname presented in the certificate chain file that you set for `scalar.db.cluster.node.tls.cert_chain_path`. | | +```console +Error: Authorization error (PERMISSION_DENIED: SQL-10021: Access denied: You need the DELETE privilege on the table ns1.tbl to execute this operation) (state=SDB11,code=9911) +``` -To specify the CA root certificate, you should set either `scalar.db.cluster.tls.ca_root_cert_pem` or `scalar.db.cluster.tls.ca_root_cert_path`. If you set both, `scalar.db.cluster.tls.ca_root_cert_pem` will be used. +You will see the above error message because `user1` doesn't have the `DELETE` privilege on the `ns1.tbl` table. diff --git a/docs/scalardb-cluster/scalardb-cluster-configurations.mdx b/docs/scalardb-cluster/scalardb-cluster-configurations.mdx index e8d172ec..4f108418 100644 --- a/docs/scalardb-cluster/scalardb-cluster-configurations.mdx +++ b/docs/scalardb-cluster/scalardb-cluster-configurations.mdx @@ -28,7 +28,7 @@ The basic configurations for a cluster node are as follows: | `scalar.db.cluster.node.prometheus_exporter_port` | Port number of the Prometheus exporter. | `9080` | | `scalar.db.cluster.grpc.deadline_duration_millis` | Deadline duration for gRPC in milliseconds. | `60000` (60 seconds) | | `scalar.db.cluster.node.standalone_mode.enabled` | Whether standalone mode is enabled. Note that if standalone mode is enabled, the membership configurations (`scalar.db.cluster.membership.*`) will be ignored. | `false` | -| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. 
| `-1` (no expiration) | +| `scalar.db.metadata.cache_expiration_time_secs` | ScalarDB has a metadata cache to reduce the number of requests to the database. This setting specifies the expiration time of the cache in seconds. If you specify `-1`, the cache will never expire. | `60` | | `scalar.db.active_transaction_management.expiration_time_millis` | ScalarDB Cluster nodes maintain ongoing transactions, which can be resumed by using a transaction ID. This configuration specifies the expiration time of this transaction management feature in milliseconds. | `60000` (60 seconds) | | `scalar.db.system_namespace_name` | The given namespace name will be used by ScalarDB internally. | `scalardb` | diff --git a/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx b/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx index 085811d3..9cb60473 100644 --- a/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx +++ b/docs/scalardb-cluster/setup-scalardb-cluster-on-kubernetes-by-using-helm-chart.mdx @@ -169,7 +169,7 @@ You can deploy PostgreSQL on the Kubernetes cluster as follows. 5. Set the chart version of ScalarDB Cluster. ```console - SCALAR_DB_CLUSTER_VERSION=3.13.1 + SCALAR_DB_CLUSTER_VERSION=3.14.0 SCALAR_DB_CLUSTER_CHART_VERSION=$(helm search repo scalar-labs/scalardb-cluster -l | grep -F "${SCALAR_DB_CLUSTER_VERSION}" | awk '{print $2}' | sort --version-sort -r | head -n 1) ``` diff --git a/docs/scalardb-core-status-codes.mdx b/docs/scalardb-core-status-codes.mdx index b3d096f3..ab61f380 100644 --- a/docs/scalardb-core-status-codes.mdx +++ b/docs/scalardb-core-status-codes.mdx @@ -201,7 +201,7 @@ The clustering key is not properly specified. Operation: %s **Message** ```markdown -This feature is not supported in the ScalarDB Community edition +The authentication and authorization feature is not enabled. To use this feature, you must enable it. 
Note that this feature is supported only in the ScalarDB Enterprise edition ``` ### `CORE-10023` @@ -1132,6 +1132,22 @@ Using the group commit feature on the Coordinator table with a two-phase commit This operation is supported only when no conditions are specified. If you want to modify a condition, please use clearConditions() to remove all existing conditions first ``` +### `CORE-10143` + +**Message** + +```markdown +The encryption feature is not enabled. To encrypt data at rest, you must enable this feature. Note that this feature is supported only in the ScalarDB Enterprise edition +``` + +### `CORE-10144` + +**Message** + +```markdown +The variable key column size must be greater than or equal to 64 +``` + ### `CORE-10145` **Message** diff --git a/docs/scalardb-sql/grammar.mdx b/docs/scalardb-sql/grammar.mdx index c7966d18..4a758ff1 100644 --- a/docs/scalardb-sql/grammar.mdx +++ b/docs/scalardb-sql/grammar.mdx @@ -45,6 +45,8 @@ tags: - [DESCRIBE](#describe) - [SUSPEND](#suspend) - [RESUME](#resume) + - [SHOW USERS](#show-users) + - [SHOW GRANTS](#show-grants) ## DDL @@ -117,13 +119,15 @@ data_type: BOOLEAN | INT | BIGINT | FLOAT | DOUBLE | TEXT | BLOB creation_options: