diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 35a850e778..f94be13144 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -206,11 +206,9 @@ include::third-party:partial$nav.adoc[] ** xref:install:deployment-considerations-lt-3nodes.adoc[Two-Node and Single-Node Clusters] * xref:install:install-intro.adoc[Installation] ** xref:install:install-linux.adoc[Install on Linux] - *** xref:install:rhel-suse-install-intro.adoc[Red Hat] + *** xref:install:rhel-suse-install-intro.adoc[Red Hat, Oracle Linux, or Amazon Linux] *** xref:install:ubuntu-debian-install.adoc[Ubuntu & Debian] *** xref:install:install_suse.adoc[SUSE Enterprise] - *** xref:install:install-oracle.adoc[Oracle Enterprise] - *** xref:install:amazon-linux2-install.adoc[Amazon Linux 2] *** xref:install:non-root.adoc[Non-Root Install and Upgrade] ** xref:install:install-package-windows.adoc[Install on Windows] ** xref:install:macos-install.adoc[Install on macOS] diff --git a/modules/install/pages/amazon-linux2-install.adoc b/modules/install/pages/amazon-linux2-install.adoc deleted file mode 100644 index d2d47a71d5..0000000000 --- a/modules/install/pages/amazon-linux2-install.adoc +++ /dev/null @@ -1,73 +0,0 @@ -= Install Couchbase Server on Amazon Linux 2 -:description: Couchbase Server can be installed on Amazon Linux 2 for production and development use-cases. \ -Root and non-root installations are supported. -:page-edition: Enterprise Edition -:tabs: - -[abstract] -{description} - -Amazon Linux 2 is supported with Couchbase Server 6.0.1+. -See xref:install-platforms.adoc[Supported Operating Systems] for details. - -Use the instructions on this page to install Couchbase Server on Amazon Linux 2 using Couchbase-provided RPM packages. -The instructions support both Enterprise and Community https://www.couchbase.com/products/editions[editions^]. - -If you're upgrading an existing installation of Couchbase Server, refer to xref:upgrade.adoc[Upgrading Couchbase Server]. 
- -== Before You Install - -Couchbase Server works out-of-the-box with most OS configurations. -However, the procedures on this page assume the following: - -* Your system meets the xref:pre-install.adoc[minimum requirements] and that your operating system version is xref:install-platforms.adoc[supported]. -* You're working from a clean system and that you've xref:install-uninstalling.adoc[uninstalled] any previous versions of Couchbase Server. -+ -If you're upgrading an existing installation of Couchbase Server, refer to xref:upgrade.adoc[Upgrading Couchbase Server]. - -For production deployments, make sure to follow the xref:install-production-deployment.adoc[deployment guidelines] so that your systems and environment are properly sized and configured before installation. - -== Basic Installation - -You must be logged in as root (superuser) or use `sudo` to run the installation commands. - -=== Install Using RPM Package - -Install Couchbase Server on Amazon Linux 2 using a full RPM package provided by Couchbase. - -. Download the appropriate package from the Couchbase https://www.couchbase.com/downloads[downloads page^]. - -. Install Couchbase Server. -+ -[source,console,subs=+quotes] ----- -sudo rpm --install [.var]_package-name_.rpm ----- -+ -Once installation is complete, Couchbase Server will start automatically (and will continue to start automatically at run levels 2, 3, 4, and 5, and explicitly shut down at run levels 0, 1, and 6). -You can use the `systemctl` command (`service` on older operating systems) to start and stop the Couchbase Server service, as well as check the current status. -Refer to xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown] for more information. - -. Open a web browser and access the Couchbase Web Console to xref:testing.adoc[verify] that the installation was successful and the node is available. 
- -[#amzn-lnx2-nonroot-nonsudo-] -== Installing as Non-Root - -Non-root installation is performed identically for all supported Linux distributions, including Amazon Linux 2. -For instructions, see xref:install:non-root.adoc[Non-Root Install and Upgrade]. - -== Next Steps - -Following installation and start-up of Couchbase Server, a node must be _initialized_ and _provisioned_. - -* If it is the first node in a deployment, initialization and provisioning happens all at once when you create a _cluster of one_. -+ -Refer to xref:manage:manage-nodes/create-cluster.adoc[Create a Cluster] - -* If you already have an existing cluster, the node is initialized and provisioned when you add it to the cluster. -+ -Refer to xref:manage:manage-nodes/add-node-and-rebalance.adoc[Add a Node and Rebalance] -+ -* Optionally, initialization can be performed explicitly and independently of provisioning, as a prior process, in order to establish certain configurations, such as custom disk-paths. -+ -Refer to xref:manage:manage-nodes/initialize-node.adoc[Initialize a Node] diff --git a/modules/install/pages/deployment-considerations-lt-3nodes.adoc b/modules/install/pages/deployment-considerations-lt-3nodes.adoc index 7f7118fda6..a561dd9b1e 100644 --- a/modules/install/pages/deployment-considerations-lt-3nodes.adoc +++ b/modules/install/pages/deployment-considerations-lt-3nodes.adoc @@ -82,7 +82,7 @@ Therefore, continue to monitor the RAM and CPU use of your arbiter node, especia === Metadata Management -In Couchbase Server 7.0+, metadata is managed by means of _Chronicle_; which is a _consensus-based_ system, based on the https://raft.github.io/[Raft^] algorithm. +In Couchbase Server 7.0 or a later version, metadata is managed by _Chronicle_, a _consensus-based_ system based on the https://raft.github.io/[Raft^] algorithm.
Due to the strong consistency with which topology-related metadata is thus managed, in the event of a _quorum failure_ (meaning, the unresponsiveness of at least half of the cluster's nodes -- for example, the unresponsiveness of one node in a two-node cluster), no modification of nodes, buckets, scopes, and collections can take place until the quorum failure is resolved. diff --git a/modules/install/pages/install-linux.adoc b/modules/install/pages/install-linux.adoc index f97252413d..3b94362f0b 100644 --- a/modules/install/pages/install-linux.adoc +++ b/modules/install/pages/install-linux.adoc @@ -6,22 +6,18 @@ == Supported Linux Platforms -Couchbase Server can be installed and run on Red Hat-based distributions, Ubuntu, Debian, SUSE Enterprise, Oracle Enterprise, and Amazon Linux. +Couchbase Server can be installed and run on Red Hat-based distributions, Oracle Enterprise, Amazon Linux, Ubuntu, Debian, and SUSE Enterprise. -The following procedures use Couchbase packages, and require the user who performs the install to have _root_ or _sudo_ privileges: +The following procedures use Couchbase packages, and require the user who performs the install to have root or sudo privileges: -* xref:install:rhel-suse-install-intro.adoc[Install on Red Hat-based distributions]. +* xref:install:rhel-suse-install-intro.adoc[Install on Red Hat-based distributions, Oracle Linux, or Amazon Linux]. * xref:install:ubuntu-debian-install.adoc[Install on Ubuntu and Debian]. * xref:install:install_suse.adoc[Install on SUSE Enterprise]. -* xref:install:install-oracle.adoc[Install on Oracle Enterprise]. - -* xref:install:amazon-linux2-install.adoc[Install on Amazon Linux 2]. - -Additionally, a _non-package-based_ install is performed on all the above platforms. -Unlike the package-based install, this does _not_ require root or sudo privileges. +Additionally, a non-package-based install can be performed on all the above platforms.
+Unlike the package-based install, this does not require root or sudo privileges. After the non-package-based install, the same user can stop, start, and get status on the server; and can also perform upgrade. -The non-package-based procedure is the same for all the above platforms: see xref:install:non-root.adoc[Non-Root Install and Upgrade] for detailed information. +The non-package-based procedure is the same for all the above platforms: see xref:install:non-root.adoc[Non-Root Install and Upgrade] for detailed information. \ No newline at end of file diff --git a/modules/install/pages/install-oracle.adoc b/modules/install/pages/install-oracle.adoc deleted file mode 100644 index 8c40f6b6ff..0000000000 --- a/modules/install/pages/install-oracle.adoc +++ /dev/null @@ -1,193 +0,0 @@ -= Install Couchbase Server on Oracle Linux -:description: Couchbase Server can be installed on Oracle Linux for production and development use-cases. \ -Root and non-root installations are supported. -:tabs: - -[abstract] -{description} - -Use the instructions on this page to install Couchbase Server on Oracle Linux using Couchbase-provided RPM packages. -The instructions support both Enterprise and Community https://www.couchbase.com/products/editions[editions^]. - -If you're upgrading an existing installation of Couchbase Server, refer to xref:upgrade.adoc[Upgrading Couchbase Server]. - -== Before You Install - -Couchbase Server works out-of-the-box with most OS configurations. -However, the procedures on this page assume the following: - -* Your system meets the xref:pre-install.adoc[minimum requirements] and that your operating system version is xref:install-platforms.adoc[supported]. -* You're working from a clean system and that you've xref:install-uninstalling.adoc[uninstalled] any previous versions of Couchbase Server. -+ -If you're upgrading an existing installation of Couchbase Server, refer to xref:upgrade.adoc[Upgrading Couchbase Server]. 
-* Only the Red Hat Compatible Kernel (RHCK) is supported. -The Unbreakable Enterprise Kernel (UEK) is not supported. - -For production deployments, make sure to follow the xref:install-production-deployment.adoc[deployment guidelines] so that your systems and environment are properly sized and configured before installation. - -== Basic Installation - -You must be logged in as root (superuser) or use `sudo` to run the installation commands. - -=== Install Using Yum - -The Red Hat package manager (`yum`) provides the simplest and most comprehensive way to install Couchbase Server on Oracle Linux. -This method involves downloading and installing a small meta package from Couchbase, which `yum` can then use to automatically download and install Couchbase Server and all of its dependencies. - -. Download the meta package. -+ -[source,console] ----- -curl -O https://packages.couchbase.com/releases/couchbase-release/couchbase-release-1.0.noarch.rpm ----- - -. Install the meta package. -+ -[source,console] ----- -sudo rpm -i ./couchbase-release-1.0.noarch.rpm ----- -+ -The meta package installs the necessary information for `yum` to be able to retrieve all of the necessary Couchbase Server installation packages and dependencies. - -. Install Couchbase Server. -+ -[{tabs}] -==== -Enterprise:: -+ --- -.To install the latest release -[source,console] ----- -sudo yum install couchbase-server ----- -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. -For each of these prompts, type `y` to accept and continue. - -.To install a specific release -. List the available releases. -+ -[source,console] ----- -yum list --showduplicates couchbase-server ----- -+ -Available releases are listed with their full `version-build` number: -+ -[subs=+quotes] ----- -couchbase-server.x86_64 *6.0.0-1693* ----- -+ -. Specify a release to install it. 
-+ -[source,console,subs=+quotes] ----- -sudo yum install couchbase-server-[.var]_version-build_ ----- -+ -Using the example listing from the previous step, the resulting installation command would be: -+ -[subs=+quotes] ----- -sudo yum install couchbase-server-*6.0.0-1693* ----- -+ -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. -For each of these prompts, type `y` to accept and continue. --- - -Community:: -+ --- -.To install the latest release -[source,console] ----- -sudo yum install couchbase-server-community ----- -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. -For each of these prompts, type `y` to accept and continue. - -.To install a specific release -. List the available releases. -+ -[source,console] ----- -yum list --showduplicates couchbase-server-community ----- -+ -Available releases are listed with their full `version-build` number: -+ -[subs=+quotes] ----- -couchbase-server-community.x86_64 *6.0.0-1693* ----- -+ -. Specify a release to install it. -+ -[source,console,subs=+quotes] ----- -sudo yum install couchbase-server-community-[.var]_version-build_ ----- -+ -Using the example listing from the previous step, the resulting installation command would be: -+ -[subs=+quotes] ----- -sudo yum install couchbase-server-community-*6.0.0-1693* ----- -+ -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. -For each of these prompts, type `y` to accept and continue. --- -==== -+ -Once installation is complete, Couchbase Server will start automatically (and will continue to start automatically at run levels 2, 3, 4, and 5, and explicitly shut down at run levels 0, 1, and 6). -You can use the `systemctl` command (`service` on older operating systems) to start and stop the Couchbase Server service, as well as check the current status. 
-Refer to xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown] for more information. -+ -. Open a web browser and access the Couchbase Web Console to xref:testing.adoc[verify] that the installation was successful and that the node is available. - -=== Install Using RPM Package - -Install Couchbase Server on Oracle Linux using a full RPM package provided by Couchbase. - -. Download the appropriate package from the Couchbase https://www.couchbase.com/downloads[downloads page^]. - -. Install Couchbase Server. -+ -[source,console,subs=+quotes] ----- -sudo yum install ./[.var]_package-name_.rpm ----- -+ -If any Couchbase Server dependencies are missing on your system, `yum` will automatically download and install them as part of the installation process. -+ -Once installation is complete, Couchbase Server will start automatically (and will continue to start automatically at run levels 2, 3, 4, and 5, and explicitly shut down at run levels 0, 1, and 6). -You can use the `systemctl` command (`service` on older operating systems) to start and stop the Couchbase Server service, as well as check the current status. -Refer to xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown] for more information. - -. Open a web browser and access the Couchbase Web Console to xref:testing.adoc[verify] that the installation was successful and the node is available. - -[#ol-nonroot-nonsudo-] -== Installing as Non-Root - -Non-root installation is performed identically for all supported Linux distributions, including Oracle Linux. -For instructions, see xref:install:non-root.adoc[Non-Root Install and Upgrade]. - -== Next Steps - -Following installation and start-up of Couchbase Server, a node must be _initialized_ and _provisioned_. - -* If it is the first node in a deployment, initialization and provisioning happens all at once when you create a _cluster of one_. 
-+ -Refer to xref:manage:manage-nodes/create-cluster.adoc[Create a Cluster] - -* If you already have an existing cluster, the node is initialized and provisioned when you add it to the cluster. -+ -Refer to xref:manage:manage-nodes/add-node-and-rebalance.adoc[Add a Node and Rebalance] -+ -* Optionally, initialization can be performed explicitly and independently of provisioning, as a prior process, in order to establish certain configurations, such as custom disk-paths. -+ -Refer to xref:manage:manage-nodes/initialize-node.adoc[Initialize a Node] diff --git a/modules/install/pages/install-package-windows.adoc b/modules/install/pages/install-package-windows.adoc index 893252c0a0..26e105cf89 100644 --- a/modules/install/pages/install-package-windows.adoc +++ b/modules/install/pages/install-package-windows.adoc @@ -65,9 +65,16 @@ cd C:\Users\customer\Downloads . Start the Couchbase-Server install wizard; by using the `call` command, and specifying the `.msi` file that you have downloaded: + +[source,shell,subs=+quotes] +---- +call couchbase-server-enterprise_[.var]_version-build_-windows_amd64.msi +---- ++ +The following is only an example: ++ [source,shell] ---- -call couchbase-server-enterprise_7.1.0-windows_amd64.msi +call couchbase-server-enterprise_8.0.0-3777-windows_amd64.msi ---- + The install wizard now appears: diff --git a/modules/install/pages/migration.adoc b/modules/install/pages/migration.adoc index b75c11316c..56d2a3f83f 100644 --- a/modules/install/pages/migration.adoc +++ b/modules/install/pages/migration.adoc @@ -1,80 +1,93 @@ = Enabling Timestamp-based Conflict Resolution for Migrated Data -:description: The Timestamp-based Conflict Resolution is a new conflict resolution type added in version 4.6.0. +:description: The Timestamp-based Conflict Resolution method is the latest conflict resolution type available for a bucket. + +Use timestamp-based conflict resolution for transactions and the latest Couchbase features.
+You cannot change the conflict resolution type of an existing bucket. +If the bucket was not created with timestamp-based conflict resolution, migrate the data to use it. [abstract] {description} -This new feature is supported for *new* buckets that are created in Couchbase Server version 4.6.0. -You cannot change the conflict resolution mode to the _Timestamp-based Conflict Resolution_ for existing buckets after upgrading to version 4.6.0. +Couchbase Server supports this feature for new buckets. + +To enable timestamp-based conflict resolution for existing data, migrate the data to a new bucket with the timestamp-based conflict resolution type, +using the `cbbackupmgr` tool. +For more information about the `cbbackupmgr` tool, see xref:backup-restore:backup-restore.adoc[Backup and Restore]. -If you wish to enable the timestamp-based conflict resolution for your existing data, then you must migrate your data to version 4.6.0 cluster using the `cbbackupmgr` tool. -To learn more about the tool, See xref:backup-restore:backup-restore.adoc[Backup and Restore]. +IMPORTANT: This is a one-time migration. +The bucket without timestamp-based conflict resolution must be removed and a new bucket with the xref:learn:clusters-and-availability/xdcr-conflict-resolution.adoc#timestamp-based-conflict-resolution[timestamp-based conflict resolution] type must be created as part of the migration. -IMPORTANT: This is a one time migration and the bucket must be switched to a new conflict resolution type as part of the migration. +There are two scenarios: -To understand more about timestamp-based Conflict Resolution, see xref:learn:clusters-and-availability/xdcr-conflict-resolution.adoc#timestamp-based-conflict-resolution[Timestamp-Based Conflict Resolution]. +* Scenario 1: Unidirectional replication for disaster recovery - +Cluster 1 (Bucket A) continuously replicates data one way into Cluster 2 (Bucket A’).
-Here are two scenarios: +* Scenario 2: Bidirectional replication for data locality - +Cluster 1 (Bucket A) and Cluster 2 (Bucket A’) replicate data into each other. -* Scenario 1 - In case of unidirectional replication, for example, for a disaster recovery where Cluster 1 (Bucket A) has one way replication streams to Cluster 2 (Bucket A’). -* Scenario 2 - In case of bidirectional replication, for example, for data locality where Cluster 1 (Bucket A) has both ways replication streams to Cluster 2 (Bucket A’). +== Migration during Unidirectional Replication -For Scenario 1, perform the following steps to migrate your data to a new cluster: +Follow these steps to migrate your data to a new bucket during unidirectional replication: . Stop application traffic coming into Cluster 1 (Bucket A). + -NOTE: Allow __enough tim__e for the replication queues to drain. -Check the *outbound XDCR mutation* statistics to confirm. -For instructions, see xref:manage:monitor/ui-monitoring-statistics.adoc#outgoing_xdcr_stats[Monitoring Outgoing XDCR]. +NOTE: Allow enough time for the replication queues to drain. +Confirm by reviewing the XDCR Total Outbound Mutations statistic. +For more information, see xref:manage:monitor/monitor-intro.adoc#monitoring-with-the-ui[]. -. Stop and delete Replication stream to Cluster 2 (Bucket A’). +. Stop and delete the replication stream to Cluster 2 (Bucket A’). For instructions, see xref:manage:manage-xdcr/xdcr-management-overview.adoc[XDCR Management Overview]. -. Run the `cbbackupmgr` tool to back up the entire bucket’s (Bucket A) data. -For instructions, see xref:backup-restore:enterprise-backup-restore.adoc[Backup]. + +. Run the `cbbackupmgr` tool to back up the bucket’s (Bucket A) data. +For instructions, see xref:backup-restore:enterprise-backup-restore.adoc[Backup]. + . Delete Bucket A from Cluster 1. For instructions, see xref:manage:manage-buckets/delete-bucket.adoc[Delete a Bucket]. -. Upgrade Cluster 1 to Couchbase Server version 4.6.0.
-For instructions, see xref:upgrade.adoc[Upgrading Couchbase Server]. -. Create a new Bucket B with the conflict resolution type as *Timestamp* selected. + +. Create Bucket B on Cluster 1 and select the Timestamp-based Conflict Resolution type. For instructions, see xref:manage:manage-buckets/create-bucket.adoc[Create a Bucket]. -. Run the `cbbackupmgr` tool to restore data. -When restoring data from backup (use the [.cmd]`--force-updates` option). -Make sure to disable the *Conflict Resolution* option during the restore. -This is required since the conflict resolution types of the source and destination clusters do not match. -. Once the restore operation is completed on Cluster 1, delete Bucket A’ from Cluster 2. -. Upgrade Cluster 2 to Couchbase Server version 4.6.0. -. Create a new Bucket B’ with the Conflict Resolution type as *Timestamp* selected. + +. Run the `cbbackupmgr` tool on Cluster 1 to restore data to Bucket B. +Turn off the Conflict Resolution setting on the destination cluster and then use the `--force-updates` option to restore data from backup. +Do this because the source and destination clusters have different conflict resolution types. + +. After the data restore operation is complete on Cluster 1, delete Bucket A’ from Cluster 2. + +. Create Bucket B’ on Cluster 2 and select the Timestamp-based Conflict Resolution type. + . Create a new unidirectional replication for Bucket B. See xref:manage:manage-xdcr/xdcr-management-overview.adoc[XDCR Management Overview]. -For Scenario 2, perform the following steps to migrate your data to a new cluster: +== Migration during Bidirectional Replication + +Follow these steps to migrate your data to a new bucket during bidirectional replication: -. Stop application traffic coming to Cluster 1 (Bucket A) and Cluster 2 (Bucket A’). +. Stop application traffic coming into Cluster 1 (Bucket A) and Cluster 2 (Bucket A’). + -NOTE: Note: Allow _enough time_ for the replication queues to drain. 
-Check the *outbound XDCR mutation* statistics to confirm. -For instructions, see -xref:manage:monitor/ui-monitoring-statistics.adoc#outgoing_xdcr_stats[Monitoring Outgoing XDCR]. +NOTE: Allow enough time for the replication queues to drain. +Confirm by reviewing the XDCR Total Outbound Mutations statistic. +For more information, see xref:manage:monitor/monitor-intro.adoc#monitoring-with-the-ui[]. -. Stop and delete replication to both clusters. +. Stop and delete the replication streams to both clusters. For instructions, see xref:manage:manage-xdcr/xdcr-management-overview.adoc[XDCR Management Overview]. + +. Run the `cbbackupmgr` tool on both clusters to back up the bucket data. For instructions, see xref:backup-restore:enterprise-backup-restore.adoc[Backup]. + . Delete the Bucket A from Cluster 1 and Bucket A’ from Cluster 2. For instructions, see xref:manage:manage-buckets/delete-bucket.adoc[Delete a Bucket]. -. Upgrade both clusters to Couchbase Server version 4.6.0. -For instructions, see xref:upgrade.adoc[Upgrading Couchbase Server]. -. Create new buckets on both clusters with the conflict resolution type as *Timestamp* selected. + +. Create buckets on both clusters and select the Timestamp-based Conflict Resolution type. For instructions, see xref:manage:manage-buckets/create-bucket.adoc[Create a Bucket]. + . Run the `cbbackupmgr` tool to restore data. -When restoring data from backup (use the [.cmd]`--force-updates` option). -Make sure to disable *Conflict Resolution* option during the restore. -This is required because the conflict resolution types of the source and destination do not match. -. Once the restore operation is completed on both clusters, create replication streams both ways from Cluster 1 and Cluster 2. -See -xref:manage:manage-xdcr/xdcr-management-overview.adoc[XDCR Management Overview].
+Turn off the Conflict Resolution setting on the destination cluster and then use the `--force-updates` option to restore data from backup. +Do this because the source and destination clusters have different conflict resolution types. + +. After the restore operation is complete on both clusters, create replication streams both ways between Cluster 1 and Cluster 2. +For more information, see xref:manage:manage-xdcr/xdcr-management-overview.adoc[XDCR Management Overview]. diff --git a/modules/install/pages/non-root.adoc b/modules/install/pages/non-root.adoc index db459d364c..3576fc3eef 100644 --- a/modules/install/pages/non-root.adoc +++ b/modules/install/pages/non-root.adoc @@ -126,11 +126,12 @@ This concludes the procedure for establishing and verifying new limits for user To perform a non-root installation of Couchbase Server on any supported Linux distribution, proceed as follows: -. Download the Couchbase Server RPM, using `wget` or `curl`. +. Download the Couchbase Server RPM from the https://www.couchbase.com/downloads/[Couchbase Downloads] page. . Using `wget` or `curl`, download the appropriate binary for your platform (the URI is the same for all supported x86_64 or aarch64 Linux distributions): + {aarch64-install-location}[role=add-ext-icon] ++ {x86-install-location}[role=add-ext-icon] + NOTE: In the following examples, use the appropriate suffix (`"x86_64"` or `"aarch64"`) instead of `##` for the `cb-non-package-installer`. @@ -155,7 +156,7 @@ For example: [source, console, subs=+quotes] ---- ./cb-non-package-installer-## --install --install-location ./cb-install \ - --package ./couchbase-server-enterprise-7.1.0-amzn2.##.rpm + --package ./couchbase-server-enterprise-8.0.0-linux.##.rpm ---- + NOTE: the program performs dependency checking, prior to installation. 
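Taken together, the non-root download-and-install steps above amount to a short console session like the following. This is a sketch only: the `x86_64` suffix, the package file name, and the `./cb-install` target directory are assumptions that depend on your platform and on what you downloaded.

```shell
# Run as the non-root user that will own the installation.
# Use the aarch64 binary instead of x86_64 where appropriate.
chmod u+x ./cb-non-package-installer-x86_64

# Target directory for the install, as used in the examples on this page.
mkdir -p ./cb-install

# The installer performs dependency checking before installing the package.
./cb-non-package-installer-x86_64 --install \
    --install-location ./cb-install \
    --package ./couchbase-server-enterprise-8.0.0-linux.x86_64.rpm
```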
@@ -262,7 +263,7 @@ For example: [source, console, subs=+quotes] ---- ./cb-non-package-installer-## --upgrade --install-location ./cb-install \ ---package ./couchbase-server-enterprise-7.1.0-amzn2.##.rpm +--package ./couchbase-server-enterprise-8.0.0-linux.##.rpm ---- During upgrade, the following message may appear: diff --git a/modules/install/pages/rhel-suse-install-intro.adoc b/modules/install/pages/rhel-suse-install-intro.adoc index d1929e2a64..6f47eaa24b 100644 --- a/modules/install/pages/rhel-suse-install-intro.adoc +++ b/modules/install/pages/rhel-suse-install-intro.adoc @@ -1,36 +1,43 @@ -= Install Couchbase Server on Red Hat Enterprise -:description: Couchbase Server can be installed on Red Hat Enterprise Linux for production and development use-cases. += Install Couchbase Server on Red Hat Enterprise, Oracle Linux, or Amazon Linux 2023 +:description: Couchbase Server can be installed on Red Hat Enterprise Linux, Oracle Linux, or Amazon Linux 2023 for production and development use-cases. :tabs: [abstract] {description} -Root and non-root installations are supported. -Use the instructions on this page to install Couchbase Server on Red Hat Enterprise Linux using Couchbase-provided RPM packages. +Couchbase Server supports both root and non-root installations on Red Hat Enterprise Linux, Oracle Linux, and Amazon Linux 2023. + +Use the instructions on this page to install Couchbase Server on Red Hat Enterprise Linux, Oracle Linux, or Amazon Linux 2023 using Couchbase-provided RPM packages. The instructions support both Enterprise and Community https://www.couchbase.com/products/editions[editions^]. -If you're upgrading an existing installation of Couchbase Server, refer to xref:upgrade.adoc[Upgrading Couchbase Server]. +If you're upgrading an existing Couchbase Server instance, see xref:upgrade.adoc[Upgrading Couchbase Server]. == Before You Install Couchbase Server works out-of-the-box with most OS configurations.
-However, the procedures on this page assume the following: +However, the following are the prerequisites for installation: * Your system meets the xref:pre-install.adoc[minimum requirements] and that your operating system version is xref:install-platforms.adoc[supported]. -* You're working from a clean system and that you've xref:install-uninstalling.adoc[uninstalled] any previous versions of Couchbase Server. + +* You're working from a clean system and you have xref:install-uninstalling.adoc[uninstalled] any previous versions of Couchbase Server. + -If you're upgrading an existing installation of Couchbase Server, refer to xref:upgrade.adoc[Upgrading Couchbase Server]. +If you're upgrading an existing installation of Couchbase Server, see xref:upgrade.adoc[Upgrading Couchbase Server]. + +* For Oracle Linux, only the Red Hat Compatible Kernel (RHCK) is supported. +The Unbreakable Enterprise Kernel (UEK) is not supported. -For production deployments, make sure to follow the xref:install-production-deployment.adoc[deployment guidelines] so that your systems and environment are properly sized and configured before installation. +* For production deployments, make sure to follow the xref:install-production-deployment.adoc[deployment guidelines] so that your systems and environment are properly sized and configured before installation. == Basic Installation -You must be logged in as root (superuser) or use `sudo` to run the installation commands. +This section explains installation using both `yum` and RPM package methods. + +NOTE: You must be logged in as root user (superuser) or use `sudo` to run the installation commands. === Install Using Yum -The Red Hat package manager (`yum`) provides the simplest and most comprehensive way to install Couchbase Server on Red Hat Enterprise.
-This method involves downloading and installing a small meta package from Couchbase, which `yum` can then use to automatically download and install Couchbase Server and all of its dependencies. +The Red Hat package manager `yum` provides the simplest and most comprehensive way to install Couchbase Server on Red Hat Enterprise or Oracle Linux. +This method involves downloading and installing a small meta package from Couchbase, which is used by `yum` to automatically download and install Couchbase Server and all of its dependencies. . Download the meta package. + @@ -46,7 +53,7 @@ curl -O https://packages.couchbase.com/releases/couchbase-release/couchbase-rele sudo rpm -i ./couchbase-release-1.0.noarch.rpm ---- + -The meta package installs the necessary information for `yum` to be able to retrieve all of the necessary Couchbase Server installation packages and dependencies. +The meta package installs the necessary information for `yum` to retrieve all necessary Couchbase Server installation packages and dependencies. . Install Couchbase Server. + @@ -56,6 +63,9 @@ Enterprise:: + -- .To install the latest release + +Run the following command: + [source,console] ---- sudo yum install couchbase-server @@ -64,6 +74,7 @@ You'll be prompted to start the download of Couchbase Server (plus any dependenc For each of these prompts, type `y` to accept and continue. .To install a specific release + . List the available releases. + [source,console] @@ -71,28 +82,35 @@ For each of these prompts, type `y` to accept and continue. yum list --showduplicates couchbase-server ---- + -Available releases are listed with their full `version-build` number: +Available releases are listed with their complete `version-build` number: + [subs=+quotes] ---- -couchbase-server.x86_64 *6.0.0-1693* +couchbase-server.x86_64 [.var]_version-build_ ---- + +The following is an example of the listing: ++ +[subs=+quotes] +---- +couchbase-server.x86_64 *8.0.0-3777* +---- + . Specify a release to install it.
+ [source,console,subs=+quotes] ---- -sudo yum install couchbase-server-[.var]_version-build_ +sudo yum install couchbase-server-[.var]_version_-[.var]_build_ ---- + -Using the example listing from the previous step, the resulting installation command would be: +The following is an example of the installation command: + [subs=+quotes] ---- -sudo yum install couchbase-server-*6.0.0-1693* +sudo yum install couchbase-server-*8.0.0-3777* ---- + -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. +You'll be prompted to start the download of Couchbase Server and its dependencies, and import several GPG keys. For each of these prompts, type `y` to accept and continue. -- @@ -100,14 +118,18 @@ Community:: + -- .To install the latest release + +Run the following command: + [source,console] ---- sudo yum install couchbase-server-community ---- -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. +You'll be prompted to start the download of Couchbase Server and its dependencies, and import several GPG keys. For each of these prompts, type `y` to accept and continue. .To install a specific release + . List the available releases. + [source,console] @@ -119,66 +141,91 @@ Available releases are listed with their full `version-build` number: + [subs=+quotes] ---- -couchbase-server-community.x86_64 *6.0.0-1693* +couchbase-server-community.x86_64 [.var]_version_-[.var]_build_ ---- + +The following is an example of the listing: ++ +[subs=+quotes] +---- +couchbase-server-community.x86_64 *8.0.0-3777* +---- + . Specify a release to install it. 
+ [source,console,subs=+quotes] ---- -sudo yum install couchbase-server-community-[.var]_version-build_ +sudo yum install couchbase-server-community-[.var]_version_-[.var]_build_ ---- + -Using the example listing from the previous step, the resulting installation command would be: +The following is an example of the installation command: + [subs=+quotes] ---- -sudo yum install couchbase-server-community-*6.0.0-1693* +sudo yum install couchbase-server-community-*8.0.0-3777* ---- + -You'll be prompted to start the download of Couchbase Server (plus any dependencies), as well as import several GPG keys. +You'll be prompted to start the download of Couchbase Server and its dependencies, and import several GPG keys. For each of these prompts, type `y` to accept and continue. -- ==== + -Once installation is complete, Couchbase Server will start automatically (and will continue to start automatically at run levels 2, 3, 4, and 5, and explicitly shut down at run levels 0, 1, and 6). -You can use the `systemctl` command (`service` on older operating systems) to start and stop the Couchbase Server service, as well as check the current status. -Refer to xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown] for more information. -+ -. Open a web browser and access the Couchbase Web Console to xref:testing.adoc[verify] that the installation was successful and that the node is available. +After the installation is complete, Couchbase Server starts automatically. +Couchbase Server continues to start automatically at run levels 2, 3, 4, and 5, and explicitly shuts down at run levels 0, 1, and 6. +You can use the `systemctl` command (or `service` on older operating systems) to start and stop the Couchbase Server service, and to verify the current status. +For more information, see xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown]. + +. Open the Couchbase Web Console in a browser to xref:testing.adoc[verify] that the installation was successful and that the node is available. 
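The pin-to-a-release step above can also be scripted. The following is a minimal sketch, not part of the official procedure, which assumes GNU `sort -V` is available; the listing text is a hypothetical example of `yum list --showduplicates` output:

```shell
# Hypothetical output of: yum list --showduplicates couchbase-server
listing='couchbase-server.x86_64  7.2.3-1234  couchbase-server
couchbase-server.x86_64  8.0.0-3777  couchbase-server'

# Pick the newest version-build (second column), then print the pinned
# install command rather than running it.
latest=$(printf '%s\n' "$listing" | awk '{print $2}' | sort -V | tail -n 1)
echo "sudo yum install couchbase-server-$latest"
```

Running the printed command still prompts for the GPG-key imports described above.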
=== Install Using RPM Package -Install Couchbase Server on Red Hat Enterprise using a full RPM package provided by Couchbase. +Install Couchbase Server on Red Hat Enterprise, Oracle Linux, or Amazon Linux 2023 using a full RPM package provided by Couchbase. . Download the appropriate package from the Couchbase https://www.couchbase.com/downloads[downloads page^]. . Install Couchbase Server. + +** For Red Hat Enterprise, use the following command: ++ [source,console,subs=+quotes] ---- sudo yum upgrade ./[.var]_package-name_.rpm ---- + -If any Couchbase Server dependencies are missing on your system, `yum` will automatically download and install them as part of the installation process. +** For Oracle Linux, use the following command: + -Once installation is complete, Couchbase Server will start automatically (and will continue to start automatically at run levels 2, 3, 4, and 5, and explicitly shut down at run levels 0, 1, and 6). -You can use the `systemctl` command (`service` on older operating systems) to start and stop the Couchbase Server service, as well as check the current status. -Refer to xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown] for more information. +[source,console,subs=+quotes] +---- +sudo yum install ./[.var]_package-name_.rpm +---- ++ +** For Amazon Linux 2023, use the following command: ++ +[source,console,subs=+quotes] +---- +sudo rpm --install [.var]_package-name_.rpm +---- ++ +If any Couchbase Server dependencies are missing on your system, `yum` automatically downloads and installs them as part of the installation process. ++ +After the installation is complete, Couchbase Server starts automatically. +Couchbase Server starts automatically at run levels 2, 3, 4, and 5, and explicitly shuts down at run levels 0, 1, and 6. +You can use the `systemctl` command (or `service` on older operating systems) to start and stop the Couchbase Server service, and to verify the current status. 
+For more information, see xref:startup-shutdown.adoc[Couchbase Server Startup and Shutdown]. -. Open a web browser and access the Couchbase Web Console to xref:testing.adoc[verify] that the installation was successful and the node is available. +. Open the Couchbase Web Console in a browser to xref:testing.adoc[verify] that the installation was successful and that the node is available. [#rh-nonroot-nonsudo-] == Installing as Non-Root -Non-root installation is performed identically for all supported Linux distributions, including Red Hat Enterprise. +The non-root installation process is identical for all supported Linux distributions, including Red Hat Enterprise, Oracle Linux, and Amazon Linux 2023. For instructions, see xref:install:non-root.adoc[Non-Root Install and Upgrade]. == Setting Max Process Limits -On Red Hat Enterprise, it's recommended that you increase the maximum process limits for Couchbase. +Couchbase recommends that, on Red Hat Enterprise, you increase the maximum process limits for Couchbase. -To set the process limits, create a `.conf` file in the `/etc/security/limits.d` directory (such as `91-couchbase.conf`), and add the following values: +To set the process limits, create a `.conf` file, such as `91-couchbase.conf`, in the `/etc/security/limits.d` directory, and add the following values: [source,console] ---- @@ -186,20 +233,21 @@ couchbase soft nproc 4096 couchbase hard nproc 16384 ---- -For more information (provided in the context of _non-root_ install and upgrade), see xref:install:non-root.adoc#establish-limits-for-user-processes-and-file-descriptors[Establish Limits for User Processes and File Descriptors]. +For more information about non-root installation and upgrade, see xref:install:non-root.adoc#establish-limits-for-user-processes-and-file-descriptors[Establish Limits for User Processes and File Descriptors]. == Next Steps -Following installation and start-up of Couchbase Server, a node must be _initialized_ and _provisioned_. 
+After you install and start Couchbase Server, initialize and provision a node. -* If it is the first node in a deployment, initialization and provisioning happens all at once when you create a _cluster of one_. +* If it's the first node in a deployment, initialization and provisioning happen all at once when you create a cluster of one. + -Refer to xref:manage:manage-nodes/create-cluster.adoc[Create a Cluster] +For more information, see xref:manage:manage-nodes/create-cluster.adoc[Create a Cluster]. * If you already have an existing cluster, the node is initialized and provisioned when you add it to the cluster. -+ -Refer to xref:manage:manage-nodes/add-node-and-rebalance.adoc[Add a Node and Rebalance] -+ -* Optionally, initialization can be performed explicitly and independently of provisioning, as a prior process, in order to establish certain configurations, such as custom disk-paths. -+ -Refer to xref:manage:manage-nodes/initialize-node.adoc[Initialize a Node] + +For more information, see xref:manage:manage-nodes/add-node-and-rebalance.adoc[Add a Node and Rebalance]. + +* Optionally, you can initialize a node explicitly and independently of provisioning, as a prior process. +This lets you establish configurations such as custom disk-paths. + +For more information, see xref:manage:manage-nodes/initialize-node.adoc[Initialize a Node]. 
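The cluster-of-one step above can also be driven from the command line with `couchbase-cli`, which ships with the server. The following sketch only prints the call rather than running it; the credentials, service list, and the default Linux install path are placeholder assumptions:

```shell
# Sketch only: print (do not run) a cluster-init call for a new single node.
# /opt/couchbase/bin is the default install path on Linux; adjust as needed.
cmd="/opt/couchbase/bin/couchbase-cli cluster-init \
--cluster localhost:8091 \
--cluster-username Administrator \
--cluster-password password \
--services data,index,query"
echo "$cmd"
```

Choose the `--services` list to match the workload the node will carry; the same initialization can instead be performed in the Web Console as described in Create a Cluster.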
diff --git a/modules/install/pages/ubuntu-debian-install.adoc b/modules/install/pages/ubuntu-debian-install.adoc index 686b289fc1..15f76810d6 100644 --- a/modules/install/pages/ubuntu-debian-install.adoc +++ b/modules/install/pages/ubuntu-debian-install.adoc @@ -79,6 +79,13 @@ Available releases are listed with their full `version-build` number: + [subs=+quotes] ---- +couchbase-server/xenial [.var]_version_-[.var]_build_ amd64 +---- ++ +The following is an example of the listing: ++ +[subs=+quotes] +---- couchbase-server/xenial *8.0.0-3777* amd64 ---- + @@ -86,10 +93,10 @@ couchbase-server/xenial *8.0.0-3777* amd64 + [source,console,subs=+quotes] ---- -sudo apt-get install couchbase-server=[.var]_version-string_ +sudo apt-get install couchbase-server=[.var]_version_-[.var]_build_ ---- + -Using the example listing from the previous step, the resulting installation command would be: +The following is an example of the installation command: + [subs=+quotes] ---- @@ -117,21 +124,28 @@ Available releases are listed with their full `version-build` number: + [subs=+quotes] ---- -couchbase-server-community/xenial *6.0.0-1693-1* amd64 +couchbase-server-community/xenial [.var]_version_-[.var]_build_ amd64 +---- ++ +The following is an example of the listing: ++ +[subs=+quotes] +---- +couchbase-server-community/xenial *8.0.0-3777* amd64 ---- + . Specify a release to install it. 
+ [source,console,subs=+quotes] ---- -sudo apt-get install couchbase-server-community=[.var]_version-string_ +sudo apt-get install couchbase-server-community=[.var]_version_-[.var]_build_ ---- + -Using the example listing from the previous step, the resulting installation command would be: +The following is an example of the installation command: + [subs=+quotes] ---- -sudo apt-get install couchbase-server-community=*6.0.0-1693-1* +sudo apt-get install couchbase-server-community=*8.0.0-3777* ---- -- ==== diff --git a/modules/install/pages/upgrade-cluster-online.adoc b/modules/install/pages/upgrade-cluster-online.adoc index ed19739cf5..41c13031cb 100644 --- a/modules/install/pages/upgrade-cluster-online.adoc +++ b/modules/install/pages/upgrade-cluster-online.adoc @@ -23,7 +23,7 @@ include::partial$cannot-upgrade-docker-single-node.adoc[tag=cannot-upgrade-singl [#tls-address-family-restriction-and-node-addition] === TLS, Address-Family Restriction, and Node Addition -Couchbase Server Version 7.0.2+ allows TLS to be specified as mandatory for all internal and external cluster-communications -- see xref:manage:manage-security/manage-tls.adoc[Manage On-the-Wire Security]. +Couchbase Server 7.0.2 and later versions allow TLS to be specified as mandatory for all internal and external cluster-communications -- see xref:manage:manage-security/manage-tls.adoc[Manage On-the-Wire Security]. It also allows the cluster's address family to be specifically restricted to either IPv4 or IPv6 -- see xref:manage:manage-nodes/manage-address-families.adoc[Manage Address Families]. The procedures described in the current section both involve the introduction of upgraded nodes into an existing, online cluster. 
diff --git a/modules/install/pages/upgrade.adoc b/modules/install/pages/upgrade.adoc index 59115edf0f..d8ae042aaa 100644 --- a/modules/install/pages/upgrade.adoc +++ b/modules/install/pages/upgrade.adoc @@ -1,7 +1,7 @@ = Upgrade :description: To upgrade a Couchbase-Server cluster means to upgrade the version of Couchbase Server that's running on every node. -:erlang-upgrade-note: The upgrade to Erlang support in Couchbase Server 8.0 requires that you first upgrade Couchbase to version 7.2 before upgrading to version 8.0 +:erlang-upgrade-note: The upgrade to Erlang support in Couchbase Server 8.0 requires that you first upgrade Couchbase to version 7.2 before upgrading to version 8.0. :xrefstyle: short @@ -132,13 +132,13 @@ TIP: As far as is possible, you should aim to keep your cluster up to date with | Any 6.0.x / 6.5.x → 6.6 → 7.2.3 → 8.0 | 7.x -| Any 7.0.x / 7.1.x → 7.2.3 → 8.0{empty}xref:#erlang-7-2-4-footnote1[^+[1]+^] +| Any 7.0.x / 7.1.x → 7.2.3 → 8.0{empty}xref:#erlang-8-0-footnote1[^+[1]+^] Any 7.2.x / 7.6.x → 8.0 |=== -[#erlang-7-2-4-footnote1] +[#erlang-8-0-footnote1] ^1^{erlang-upgrade-note} [#table-upgrade-community] @@ -149,19 +149,19 @@ Any 7.2.x / 7.6.x → 8.0 | Starting Version | Path to Current Version | 5.x -| 5.x → 6.6 → 7.2.3 → 7.6.x[{empty}xref:erlang-7-2-4-footnote2[^+[1]+^] +| Any 5.x → 6.6.0 → 7.2.2 → 8.0{empty}xref:#erlang-8-0-footnote2[^+[1]+^] | 6.x -| 6.0 → 6.6 → 7.2.3 → 7.6.x{empty}xref:erlang-7-2-4-footnote2[^+[1]+^] +| Any 6.0.x / 6.5.x → 6.6.0 → 7.2.2 → 8.0{empty}xref:#erlang-8-0-footnote2[^+[1]+^] | 7.x -| 7.0 → 7.1 → 7.6.x{empty}xref:erlang-7-2-4-footnote2[^+[1]+^] - +| Any 7.0.x / 7.1.x → 7.2.2 → 8.0{empty}xref:#erlang-8-0-footnote2[^+[1]+^] +Any 7.2.x / 7.6.x → 8.0 |=== -[#erlang-7-2-4-footnote2]^1^{erlang-upgrade-note} +[#erlang-8-0-footnote2]^1^{erlang-upgrade-note} .Important note when upgrading from 7.0.4 to 7.2.x on Windows 2019 @@ -180,37 +180,30 @@ This 
will restore the missing files. == How to Upgrade Your Cluster If you are upgrading several nodes at once, then the version of the software on each node must be kept in step throughout the upgrade process. + -For example, if you are upgrading three enterprise nodes (`*Node{nbsp}1*`, `*Node{nbsp}2*` and `*Node{nbsp}3*`) from version 5.1x to 7.6.x, then you would use the following sequence: +For example, if you are upgrading three enterprise nodes (`*Node{nbsp}1*`, `*Node{nbsp}2*` and `*Node{nbsp}3*`) from version 6.6.x to 8.0.x, then you would use the following sequence: [#upgrade-example] -.Upgrading from version 5.1.x to 7.6.x +.Upgrading from version 6.6.x to 8.0.x ==== [cols="1,2,2"] |=== | Step | Description | Upgrades - | {counter: upgrade} -| Upgrade all nodes from 5.1x to 6.6 +| Upgrade all nodes from 6.6.x to 7.2.3 | -`*Node{nbsp}1*` => 5.1x -> 6.6 + -`*Node{nbsp}2*` => 5.1x -> 6.6 + -`*Node{nbsp}3*` => 5.1x -> 6.6 +`*Node{nbsp}1*` => 6.6.x -> 7.2.3 + +`*Node{nbsp}2*` => 6.6.x -> 7.2.3 + +`*Node{nbsp}3*` => 6.6.x -> 7.2.3 -| {counter: upgrade} -| Upgrade all nodes from 6.6 to 7.2.3 -| -`*Node{nbsp}1*` => 6.6 -> 7.2.3 + -`*Node{nbsp}2*` => 6.6 -> 7.2.3 + -`*Node{nbsp}3*` => 6.6 -> 7.2.3 | {counter: upgrade} -| Upgrade all nodes from 7.2.3 to 7.6.x +| Upgrade all nodes from 7.2.3 to 8.0.x | -`*Node{nbsp}1*` => 7.2.3 -> 7.6.x + -`*Node{nbsp}2*` => 7.2.3 -> 7.6.x + -`*Node{nbsp}3*` => 7.2.3 -> 7.6.x +`*Node{nbsp}1*` => 7.2.3 -> 8.0.x + +`*Node{nbsp}2*` => 7.2.3 -> 8.0.x + +`*Node{nbsp}3*` => 7.2.3 -> 8.0.x |=== @@ -219,10 +212,10 @@ For example, if you are upgrading three enterprise nodes (`*Node{nbsp}1*`, `*Nod .Upgrading between non-adjacent version numbers is usually _not_ supported. [NOTE] ==== -For example, to upgrade from *5.1.x* to *7.2.4*, then _three_ upgrades must be performed (as shown in <>): + -first, from **5.1.x** to** 6.6**, + -then, from *6.6* to *7.2.3* + -and finally, from *7.2.3* to *7.6.x*. 
+For example, to upgrade from *6.6.x* to *8.0.x*, two upgrades must be performed (as shown in <>): + +. First, from *6.6.x* to *7.2.3*. +. Then, from *7.2.3* to *8.0.x*. ==== diff --git a/modules/learn/pages/clusters-and-availability/groups.adoc b/modules/learn/pages/clusters-and-availability/groups.adoc index 17d2f0d65f..a64de803fe 100644 --- a/modules/learn/pages/clusters-and-availability/groups.adoc +++ b/modules/learn/pages/clusters-and-availability/groups.adoc @@ -19,9 +19,9 @@ A server group can be automatically _failed over_: thus, if the entire group goe NOTE: For the vBuckets and replica indexes to be automatically promoted to active, the conditions specified in xref:./automatic-failover.adoc#auto-failover-constraints[Auto-failover Constraints] must apply. -Note that in 7.1+, automatic failover can fail over more than three nodes concurrently: this has permitted the removal of pre-7.1 interfaces that were specific to triggering auto-failover for server groups. + -Consequently, for auto-failover of a server group to be possible, the maximum count for auto-failover must be established by the administrator as a value equal to or greater than the number of nodes in the server group. + -Due to the removal of the pre-7.1 interfaces, applications that attempt to use the interfaces with 7.1+ will _fail._ +NOTE: In 7.1 and later versions, automatic failover can fail over more than three nodes concurrently: this has permitted the removal of pre-7.1 interfaces that were specific to triggering auto-failover for server groups. +Consequently, for auto-failover of a server group to be possible, the maximum count for auto-failover must be established by the administrator as a value equal to or greater than the number of nodes in the server group. 
+Due to the removal of the pre-7.1 interfaces, applications that attempt to use the interfaces with 7.1 or a later version will _fail._ [IMPORTANT] diff --git a/modules/learn/pages/clusters-and-availability/hard-failover.adoc b/modules/learn/pages/clusters-and-availability/hard-failover.adoc index 5ca6267736..5eb0732f62 100644 --- a/modules/learn/pages/clusters-and-availability/hard-failover.adoc +++ b/modules/learn/pages/clusters-and-availability/hard-failover.adoc @@ -39,7 +39,7 @@ Note that this restriction also protects the cluster in situations where multipl [#performing-an-unsafe-failover] === Performing an Unsafe Failover -In Couchbase Server 7.0+, metadata is managed by a _consensus protocol_; which achieves strong consistency by _synchronously_ replicating metadata to a majority of the nodes before considering it committed. +In Couchbase Server 7.0 or a later version, metadata is managed by a _consensus protocol_, which achieves strong consistency by _synchronously_ replicating metadata to a majority of the nodes before considering it committed. (This is described in xref:learn:clusters-and-availability/metadata-management.adoc[Metadata Management].) The metadata so managed includes _topology information_. Consequently, in the event of a quorum failure, no topology changes can be made on the cluster-nodes that remain responsive. @@ -56,7 +56,7 @@ No unsafe failover should be attempted without a full understanding of the conse [#consequences-of-unsafe-failover] ==== Consequences of Unsafe Failover -In Couchbase Server Version 7.0+, the consequences of unsafe failover are: +In Couchbase Server 7.0 or a later version, the consequences of unsafe failover are: * Strong consistency of metadata is no longer guaranteed. @@ -68,7 +68,7 @@ They are, however, not informed of their removal; and so may continue to attempt * The failed over nodes _cannot be recovered_; and will therefore need to be _re-initialized_, if they are to be re-introduced into the cluster. 
(Note that a REST API is provided specifically for this purpose: see xref:rest-api:rest-reinitialize-node.adoc[Reinitializing Nodes]). -The consequences of unsafe failover in Couchbase Server Version 7.0+ are therefore significantly different from in previous versions: previously, the failed over nodes remained in the cluster, and could be recovered; but in 7.0+, they are removed from the cluster, are non-recoverable, and must be re-initialized before being added back to the cluster (see xref:rest-api:rest-reinitialize-node.adoc[Reinitializing Nodes]). +The consequences of unsafe failover in Couchbase Server 7.0 or a later version are therefore significantly different from those in previous versions: previously, the failed over nodes remained in the cluster, and could be recovered; but in 7.0 or a later version, they are removed from the cluster, are non-recoverable, and must be re-initialized before being added back to the cluster (see xref:rest-api:rest-reinitialize-node.adoc[Reinitializing Nodes]). [#client-communications-following-unsafe-failover] ==== Client Communications following Unsafe Failover diff --git a/modules/learn/pages/clusters-and-availability/nodes.adoc b/modules/learn/pages/clusters-and-availability/nodes.adoc index 0348415d24..749a6e8660 100644 --- a/modules/learn/pages/clusters-and-availability/nodes.adoc +++ b/modules/learn/pages/clusters-and-availability/nodes.adoc @@ -98,7 +98,7 @@ Therefore, provided that one node in the cluster is running the Data Service, th === Restricting the Addition and Joining of Nodes -To ensure cluster-security, in Couchbase Server Version 7.1.1+, restrictions can be placed on addition and joining, based on the establishment of _node-naming conventions_. +To ensure cluster-security, in Couchbase Server 7.1.1 and later versions, restrictions can be placed on addition and joining, based on the establishment of _node-naming conventions_. 
Only nodes whose names correspond to at least one of the stipulated conventions can be added or joined. For information, see xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition]. @@ -211,7 +211,7 @@ Whichever kind of node name is specified for the single-node cluster, if calls a Calls made from other hosts on the network must use either the IP address or the hostname. In all cases, the appropriate port number must also be specified, following the name, separated by a colon. -Note that in Couchbase Enterprise Server 7.2 and later, when certificates are used for cluster authentication, each node certificate must be configured with the node-name correctly specified as a Subject Alternative Name (SAN). +NOTE: In Couchbase Enterprise Server 7.2 and later versions, when certificates are used for cluster authentication, each node certificate must be configured with the node-name correctly specified as a Subject Alternative Name (SAN). For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node Certificate Validation]. [#specifying-the-cluster-name] @@ -288,7 +288,7 @@ If an attempt is made to incorporate a new node into the certificate-protected c Therefore, a new node should be appropriately certificate-protected, before any attempt is made to incorporate it into a certificate-protected cluster. -Note also that in Couchbase Enterprise Server Version 7.2+, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. +NOTE: In Couchbase Enterprise Server 7.2 and later versions, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. If such identification is not correctly configured, failure may occur when uploading the certificate, or when attempting to add or join the node to a cluster. For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node Certificate Validation]. 
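The SAN requirement described above can be checked with `openssl` before a node certificate is uploaded. The following is an illustrative sketch only, assuming OpenSSL 1.1.1 or later; the node name and file paths are hypothetical, and a throwaway self-signed certificate stands in for a real node certificate:

```shell
# Generate a throwaway cert whose SAN carries the node name (illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/node.key -out /tmp/node.pem \
  -subj "/CN=node1.example.com" \
  -addext "subjectAltName=DNS:node1.example.com" 2>/dev/null

# Confirm the node name appears as a Subject Alternative Name.
openssl x509 -in /tmp/node.pem -noout -ext subjectAltName
```

If the node name is missing from the output, uploading the certificate, or adding or joining the node, may fail as described above.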
diff --git a/modules/learn/pages/clusters-and-availability/rebalance.adoc b/modules/learn/pages/clusters-and-availability/rebalance.adoc index 02936faaaf..f958814932 100644 --- a/modules/learn/pages/clusters-and-availability/rebalance.adoc +++ b/modules/learn/pages/clusters-and-availability/rebalance.adoc @@ -234,10 +234,10 @@ See xref:rest-api:rest-modify-index-batch-size.adoc[Modify Index Batch Size]. The Search Service automatically partitions its indexes across all Search nodes in the cluster, ensuring optimal distribution, following rebalance. To achieve this, in versions of Couchbase Server prior to 7.1, by default, partitions needing to be newly created were entirely _built_, on their newly assigned nodes. -In 7.1+, by default, new partitions are instead created by the _transfer_ of partition files from old nodes to new nodes: this significantly enhances performance. +In 7.1 and later versions, by default, new partitions are instead created by the _transfer_ of partition files from old nodes to new nodes: this significantly enhances performance. This is an Enterprise-only feature, which requires all Search Service nodes _either_ to be running 7.1 or later; _or_ to be running 7.0.2, with the feature explicitly switched on. -Community Edition clusters that are upgraded to Enterprise Edition 7.1+ thus gain this feature in its default setting. +Community Edition clusters that are upgraded to Enterprise Edition 7.1 and later versions thus gain this feature in its default setting. Community Edition clusters that are upgraded to Enterprise Edition 7.0.2 can have this feature switched on, subsequent to upgrade. During file transfer, should an unresolvable error occur, file transfer is automatically abandoned, and _partition build_ is used instead. 
diff --git a/modules/learn/pages/clusters-and-availability/xdcr-active-active-sgw.adoc b/modules/learn/pages/clusters-and-availability/xdcr-active-active-sgw.adoc index aa736db412..d3d10d8953 100644 --- a/modules/learn/pages/clusters-and-availability/xdcr-active-active-sgw.adoc +++ b/modules/learn/pages/clusters-and-availability/xdcr-active-active-sgw.adoc @@ -12,7 +12,7 @@ NOTE: To set up XDCR bi-directional replication with Sync Gateway (SGW), the min In the versions earlier than Server 7.6.6 and Sync Gateway (SGW) 4.0.0, only an active-passive setup was supported with both XDCR and SGW. XDCR Active-Active replication with Sync Gateway for XDCR-Mobile interoperability configuration was introduced in the Server 7.6.6 version, where you can configure an active-active XDCR setup with Sync Gateway (SGW) and mobile applications both on the XDCR source and target clusters. -For more information about how Sync Gateway 4.0+ version works with Couchbase Server's XDCR, see xref:sync-gateway::server-compatibility-xdcr.adoc[XDCR - Server Compatibility]. +For more information about how Sync Gateway 4.0 and later versions work with Couchbase Server's XDCR, see xref:sync-gateway::server-compatibility-xdcr.adoc[XDCR - Server Compatibility]. [IMPORTANT] ==== @@ -63,7 +63,7 @@ Also, create an XDCR from B2 to B1 by setting `mobile=Active`. For information about creating an XDCR by setting `mobile=Active` through the REST API, see xref:rest-api:rest-xdcr-create-replication.adoc[Creating a Replication]. + For information about creating an XDCR by setting `mobile=Active` from the UI, see xref:manage:manage-xdcr/create-xdcr-replication.adoc#create-an-xdcr-replication-with-the-ui[Create an XDCR Replication with the UI]. -. Configure SGW 4.0+ version on each cluster, cluster A and cluster B. +. Configure SGW 4.0 or a later version on each cluster, cluster A and cluster B. 
This setup can handle application traffic on both buckets B1 and B2 of the respective clusters along with SGW import into both the buckets simultaneously. @@ -96,8 +96,8 @@ You can use the REST API or the XDCR UI to update an existing replication. For information about using the REST API to modify the replication settings for an existing replication, see xref:rest-api:rest-xdcr-adv-settings.adoc#change-existing-replication-with-mobile-active[Change Settings for an Existing Replication to Set mobile=Active] in xref:rest-api:rest-xdcr-adv-settings.adoc[Managing Advanced Settings]. + . Create an XDCR from B2 to B1 with the replication settings as `mobile=Active`. -. Upgrade SGW on cluster A to the version 4.0+. -. Connect SGW version 4.0+ to cluster B. +. Upgrade SGW on cluster A to version 4.0 or a later version. +. Connect SGW version 4.0 or a later version to cluster B. . Enable application active traffic on cluster B. This setup can handle application traffic on both buckets B1 and B2 of the respective clusters along with SGW import into both the buckets simultaneously. diff --git a/modules/learn/pages/clusters-and-availability/xdcr-enable-crossclusterversioning.adoc b/modules/learn/pages/clusters-and-availability/xdcr-enable-crossclusterversioning.adoc index c063ff7f78..da70d389f5 100644 --- a/modules/learn/pages/clusters-and-availability/xdcr-enable-crossclusterversioning.adoc +++ b/modules/learn/pages/clusters-and-availability/xdcr-enable-crossclusterversioning.adoc @@ -65,7 +65,7 @@ For information about modifying the bucket property `versionPruningWindowHrs` th + For more information, including important limitations, see xref:learn:clusters-and-availability/xdcr-active-active-sgw.adoc[XDCR Active-Active with Sync Gateway]. + -For more information about how Sync Gateway 4.0+ version works with Couchbase Server's XDCR, see xref:sync-gateway::server-compatibility-xdcr.adoc[XDCR - Server Compatibility]. 
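The B2-to-B1 replication step above maps to a single REST call against the `createReplication` endpoint referenced earlier. The following sketch only prints the call rather than issuing it; the hostnames, bucket names, remote-cluster reference, and credentials are placeholder assumptions:

```shell
# Print (do not run) the createReplication call with mobile=Active.
create_replication() {
  echo "curl -X POST -u Administrator:password \
http://$1:8091/controller/createReplication \
-d fromBucket=$2 -d toCluster=$3 -d toBucket=$4 \
-d replicationType=continuous -d mobile=Active"
}
create_replication cluster-b.example.com B2 cluster-a-ref B1
```

The `toCluster` value must name an existing remote-cluster reference on the source cluster; the equivalent can also be done in the XDCR UI as noted above.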
+For more information about how Sync Gateway 4.0 or a later version works with Couchbase Server's XDCR, see xref:sync-gateway::server-compatibility-xdcr.adoc[XDCR - Server Compatibility]. + NOTE: To set up XDCR bi-directional replication with Sync Gateway (SGW), the minimum required version for Server is 7.6.6 and SGW is 4.0.0. diff --git a/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc b/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc index 3cf6e4a57d..fa80fac04e 100644 --- a/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc +++ b/modules/learn/pages/clusters-and-availability/xdcr-overview.adoc @@ -121,7 +121,7 @@ For each such ID, the warning output `xdcr_error.*` is written to the log files [#xdcr-using-scopes-and-collections] == XDCR Using Scopes and Collections -XDCR supports _scopes_ and _collections_, which are provided with Couchbase Server Version 7.0 and after. +XDCR supports _scopes_ and _collections_, which are provided with Couchbase Server 7.0 or a later version. Scopes and collections are supported in the following ways: * Replication based on _implicit mapping_. @@ -144,10 +144,10 @@ In each case, _filtering_ can be applied. The source-bucket may be: -* A bucket on a 7.0+ cluster, housing its data in administrator-defined collections. +* A bucket on a cluster running 7.0 or a later version, housing its data in administrator-defined collections. Thus, data can be replicated (optionally using XDCR Advancing Filtering), from one collection to another within the same bucket; or from a collection in one bucket to a collection in another bucket. -* A bucket on a 7.0+ cluster, housing its data in the `_default` collection, within the `_default` scope (this being the default initial residence for all data in a bucket whose cluster has been upgraded from a pre-7.0 Couchbase Server version to a 7.0+ version). 
+* A bucket on a cluster running 7.0 or a later version, housing its data in the `_default` collection, within the `_default` scope (this being the default initial residence for all data in a bucket of a cluster which has been upgraded from a Couchbase Server version earlier than 7.0 to 7.0 or a later version). Thus, XDCR can subsequently be used to redistribute the data into administrator-defined collections, either within the same or within different buckets (again, optionally using XDCR Advancing Filtering). Note that whereas _implicit_ replication is available in both Couchbase Server Enterprise and Community Edition, _explicit_ replication and _migration_ are available only in Couchbase Server Enterprise Edition. diff --git a/modules/learn/pages/data/scopes-and-collections.adoc b/modules/learn/pages/data/scopes-and-collections.adoc index bb629ecfef..38a9fe7b30 100644 --- a/modules/learn/pages/data/scopes-and-collections.adoc +++ b/modules/learn/pages/data/scopes-and-collections.adoc @@ -81,7 +81,7 @@ Once dropped, the default collection cannot be recreated. == The `_system` Scope, and its Collections -In Couchbase Server Version 7.6+, in each user-created or sample bucket, a `_system` scope is created and maintained by default. +In Couchbase Server 7.6 and later versions, in each user-created or sample bucket, a `_system` scope is created and maintained by default. This scope contains collections used by Couchbase services, for service-specific data. The scope and its collections _cannot_ be dropped. 
diff --git a/modules/learn/pages/security/authentication-domains.adoc b/modules/learn/pages/security/authentication-domains.adoc
index 3e9906fcdb..9f6d3abedd 100644
--- a/modules/learn/pages/security/authentication-domains.adoc
+++ b/modules/learn/pages/security/authentication-domains.adoc
@@ -50,7 +50,7 @@ When a user attempts to authenticate, Couchbase Server always looks up their cre
LDAP-based authentication must be set up in one of the following ways;
* _Native LDAP Support_.
-For Couchbase Server Enterprise Edition 6.5+, this is the recommended way of setting up LDAP for external authentication.
+For Couchbase Server Enterprise Edition 6.5 and later versions, this is the recommended way of setting up LDAP for external authentication.
It provides support for encrypted communication, and for LDAP groups.
* _LDAP Support Based on saslauthd_.
@@ -155,7 +155,7 @@ LDAP authentication based on `saslauthd` is only available for the Enterprise Ed
It provides the benefits of centralized identity and security-policy management, and of simplified compliance.
It does not support LDAP groups.
-For LDAP authentication, _Native LDAP_ , rather than `saslauthd`, is recommended for Couchbase Server Enterprise Edition 6.5+.
+For LDAP authentication, _Native LDAP_, rather than `saslauthd`, is recommended for Couchbase Server Enterprise Edition 6.5 and later versions.
For details on configuring `saslauthd` to support external authentication by LDAP, see xref:manage:manage-security/configure-saslauthd.adoc[Configure `saslauthd`].
diff --git a/modules/learn/pages/security/using-multiple-cas.adoc b/modules/learn/pages/security/using-multiple-cas.adoc index a6e138b89e..38a5c3ce93 100644 --- a/modules/learn/pages/security/using-multiple-cas.adoc +++ b/modules/learn/pages/security/using-multiple-cas.adoc @@ -95,7 +95,7 @@ In versions of Couchbase Server prior to 7.1, as described in xref:learn:securit a node certificate references its CA by presenting, to the client, as the file _chain.pem_, all certificates whose signature-chain leads to that CA. Likewise, a client references its own CA by presenting, to the server, all certificates whose signature-chain leads to its own CA. -This way of mutually referencing CAs continues to be supported in 7.1+. +This way of mutually referencing CAs continues to be supported in 7.1 and later versions. Alternatively, however, the _intermediate_ certificates in the chain need _not_ be presented -- their existing presence in the recipient's _trust store_ being assumed instead. For example: diff --git a/modules/learn/pages/views/views-intro.adoc b/modules/learn/pages/views/views-intro.adoc index 33f4dd8382..6c8ae09375 100644 --- a/modules/learn/pages/views/views-intro.adoc +++ b/modules/learn/pages/views/views-intro.adoc @@ -32,5 +32,5 @@ By exposing specific fields from the stored information, views enable the follow The View Builder provides an interface for creating views within the web console. Views can be accessed by using a Couchbase client library to retrieve matching records. -NOTE: In Couchbase Server 6.0+, Spatial Views are no longer supported. +NOTE: In Couchbase Server 6.0 and later versions, Spatial Views are no longer supported. See the 5.5 documentation, https://docs-archive.couchbase.com/server/5.5/understanding-couchbase/views/sv-writing-views.html[Writing Spatial Views]. 
diff --git a/modules/learn/partials/views-deprecation-notice.adoc b/modules/learn/partials/views-deprecation-notice.adoc index 8224b9cca4..478fd8576b 100644 --- a/modules/learn/partials/views-deprecation-notice.adoc +++ b/modules/learn/partials/views-deprecation-notice.adoc @@ -1,4 +1,4 @@ -IMPORTANT: Views are deprecated in Couchbase Server 7.0+. +IMPORTANT: Views are deprecated in Couchbase Server 7.0 and later versions. Views support in Couchbase Server will be removed in a future release. Instead of views, use indexes and queries using the xref:learn:services-and-indexes/services/index-service.adoc[Index Service] (GSI) and the xref:learn:services-and-indexes/services/query-service.adoc[Query Service] ({sqlpp}). Views will not run on the newer xref:learn:buckets-memory-and-storage/storage-engines.adoc[Magma storage engine]. diff --git a/modules/manage/pages/manage-buckets/create-bucket.adoc b/modules/manage/pages/manage-buckets/create-bucket.adoc index e12dd14136..aa2cde94e6 100644 --- a/modules/manage/pages/manage-buckets/create-bucket.adoc +++ b/modules/manage/pages/manage-buckets/create-bucket.adoc @@ -194,7 +194,7 @@ For more information about flushing, see xref:manage-buckets/flush-bucket.adoc[F image::manage-buckets/addBucketWithMagmaOption.png[,400,align=center, alt="An image that displays the Add Data Bucket dialog, with a Couchbase Bucket Type and CouchStore Storage Backend selected. The Advanced bucket settings are expanded and to show the default selections for a Couchbase and Couchstore bucket."] NOTE: Enable Cross Cluster Versioning can be enabled only in the xref:manage:manage-buckets/edit-bucket.adoc[Edit a Bucket] mode. -Enabling Cross Cluster Versioning is a prerequisite for features like XDCR Conflict Logging and XDCR Active-Active with Sync Gateway 4.0+. +Enabling Cross Cluster Versioning is a prerequisite for features like XDCR Conflict Logging and XDCR Active-Active with Sync Gateway 4.0 and later versions. 
For more information, see xref:learn:clusters-and-availability/xdcr-enable-crossclusterversioning.adoc[XDCR enableCrossClusterVersioning]. [#ephemeral-bucket-settings] diff --git a/modules/manage/pages/manage-nodes/add-node-and-rebalance.adoc b/modules/manage/pages/manage-nodes/add-node-and-rebalance.adoc index 0a5b342ee9..d323773ea5 100644 --- a/modules/manage/pages/manage-nodes/add-node-and-rebalance.adoc +++ b/modules/manage/pages/manage-nodes/add-node-and-rebalance.adoc @@ -33,7 +33,7 @@ A complete overview of Couchbase-Server certificate-management is provided in xr [#node-certificate-validation] === Validating Node Certificates -In Couchbase Enterprise Server Version 7.2+, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. +In Couchbase Enterprise Server 7.2 and later versions, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. If such identification is not correctly configured, failure may occur when uploading the certificate, or when attempting to add or join the node to a cluster. For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node-Certificate Validation]. @@ -258,7 +258,7 @@ See xref:manage:manage-nodes/add-node-and-rebalance.adoc#cancel-retries-with-the ==== Restricting the Addition of Nodes -To ensure cluster-security, in Couchbase Server Version 7.1.1+, restrictions can be placed on addition, based on the establishment of _node-naming conventions_. +To ensure cluster-security, in Couchbase Server 7.1.1 and later versions, restrictions can be placed on addition, based on the establishment of _node-naming conventions_. Only nodes whose names correspond to at least one of the stipulated conventions can be added. For information, see xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition]. @@ -272,7 +272,7 @@ Therefore, specify placeholder arguments. 
Additionally, specify that the `data` service be run on the node, once it is part of the cluster.
Note that a server to be added (as specified by the value of the `server-add` parameter) can be prefixed with the scheme `https://`, and/or with the port `18091`): if no scheme and no port is specified, `https://` and `18091` are used as defaults.
-The scheme `http://` cannot be used, nor can the port `8091`: since in 7.1+, addition must occur over a secure connection.
+The scheme `http://` cannot be used, nor can the port `8091`, since in 7.1 and later versions, addition must occur over a secure connection.
----
couchbase-cli server-add -c 10.142.181.101:8091 \
@@ -382,7 +382,7 @@ To add a new Couchbase Server-node to an existing cluster, use the `/controller/
The following command adds node `10.142.181.102` to cluster `10.142.181.101`.
Note that a server to be added can be prefixed with the scheme `https://`, and/or can be suffixed with the port `18091`): if no scheme or port is specified, `https://` and `18091` are used as defaults.
-The scheme `http://` cannot be used; nor can the port `8091`, since in 7.1+, node-addition takes place only over a secure connection.
+The scheme `http://` cannot be used; nor can the port `8091`, since in 7.1 and later versions, node-addition takes place only over a secure connection.
----
curl -u Administrator:password -v -X POST \
diff --git a/modules/manage/pages/manage-nodes/create-cluster.adoc b/modules/manage/pages/manage-nodes/create-cluster.adoc
index f81ce7f178..140f934d5b 100644
--- a/modules/manage/pages/manage-nodes/create-cluster.adoc
+++ b/modules/manage/pages/manage-nodes/create-cluster.adoc
@@ -198,7 +198,7 @@ These are described in xref:manage:manage-ui/manage-ui.adoc#understanding-the-da
[#establishing-arbiter-nodes]
==== Establishing Arbiter Nodes
-In Couchbase Server 7.6+, you can deploy one or more arbiter nodes.
+In Couchbase Server 7.6 and later versions, you can deploy one or more arbiter nodes.
An arbiter node does not run any services. include::learn:partial$arbiter-node-benefits.adoc[] diff --git a/modules/manage/pages/manage-nodes/join-cluster-and-rebalance.adoc b/modules/manage/pages/manage-nodes/join-cluster-and-rebalance.adoc index e0aa3a50da..517e331e7d 100644 --- a/modules/manage/pages/manage-nodes/join-cluster-and-rebalance.adoc +++ b/modules/manage/pages/manage-nodes/join-cluster-and-rebalance.adoc @@ -36,7 +36,7 @@ A complete overview of Couchbase-Server certificate-management is provided in xr [#node-certificate-validation] === Validating Node Certificates -In Couchbase Enterprise Server Version 7.2+, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. +In Couchbase Enterprise Server 7.2 and later versions, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. If such identification is not correctly configured, failure may occur when uploading the certificate, or when attempting to add or join the node to a cluster. For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node-Certificate Validation]. @@ -142,7 +142,7 @@ See also the information provided on xref:manage:manage-nodes/add-node-and-rebal === Restricting the Joining of Nodes -To ensure cluster-security, in Couchbase Server Version 7.1.1+, restrictions can be placed on joining, based on the establishment of _node-naming conventions_. +To ensure cluster-security, in Couchbase Server 7.1.1 and later versions, restrictions can be placed on joining, based on the establishment of _node-naming conventions_. Only nodes whose names correspond to at least one of the stipulated conventions can be joined. For information, see xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition]. 
diff --git a/modules/manage/pages/manage-security/configure-ldap.adoc b/modules/manage/pages/manage-security/configure-ldap.adoc
index 78e212c959..f612340140 100644
--- a/modules/manage/pages/manage-security/configure-ldap.adoc
+++ b/modules/manage/pages/manage-security/configure-ldap.adoc
@@ -18,7 +18,7 @@ For information, see xref:learn:security/authentication-domains.adoc#ldap-users-
Note that Couchbase Server provides two different ways of setting up LDAP:
* _Native LDAP Support_.
-For Couchbase Server Enterprise Edition 6.5+, this is the recommended way of setting up LDAP for external authentication.
+For Couchbase Server Enterprise Edition 6.5 and later versions, this is the recommended way of setting up LDAP for external authentication.
It provides support for encrypted communication, and for LDAP groups.
* _LDAP Support Based on_ `saslauthd`.
diff --git a/modules/manage/pages/manage-xdcr/enable-half-secure-replication.adoc b/modules/manage/pages/manage-xdcr/enable-half-secure-replication.adoc
index 4bcbf559aa..2235a3e7c6 100644
--- a/modules/manage/pages/manage-xdcr/enable-half-secure-replication.adoc
+++ b/modules/manage/pages/manage-xdcr/enable-half-secure-replication.adoc
@@ -65,7 +65,7 @@ The *Half* radio button is checked by default: this means that half-secure repli
. If the half-secure connection is being made to a pre-5.5 version of Couchbase Enterprise Server, copy and paste the root certificate for the destination cluster into the interactive panel, below the radio buttons.
This certificate can be found under the *Root Certificate* tab of the Couchbase Web Console *Security* screen: copy it there, and paste it here.
-(If the connection is being made to a 5.5+ version, skip this step.)
+(If the connection is being made to 5.5 or a later version, skip this step.)
+
If the certificate has been added, the dialog now appears approximately as follows:
+
diff --git a/modules/manage/pages/manage-xdcr/secure-xdcr-replication.adoc b/modules/manage/pages/manage-xdcr/secure-xdcr-replication.adoc
index 40f658365f..51b0e74b49 100644
--- a/modules/manage/pages/manage-xdcr/secure-xdcr-replication.adoc
+++ b/modules/manage/pages/manage-xdcr/secure-xdcr-replication.adoc
@@ -53,7 +53,7 @@ Please note that because various XDCR processes make frequent calls to the targe
[#capella-trusted-cas]
== Capella Trusted CAs
-CAs used by the Couchbase cloud data-platform, _Capella_, are automatically trusted, when _fully secure_ XDCR connections are made to Capella databases from Couchbase Enterprise Server 7.2+, using Couchbase Web Console or the REST API.
+CAs used by the Couchbase cloud data-platform, _Capella_, are automatically trusted, when _fully secure_ XDCR connections are made to Capella databases from Couchbase Enterprise Server 7.2 and later versions, using Couchbase Web Console or the REST API.
This means that when a reference is configured by means of:
* Couchbase Web Console, the interactive pane, provided for specifying the CA, can be left blank.
diff --git a/modules/rest-api/pages/deprecated-security-apis/upload-retrieve-root-cert.adoc b/modules/rest-api/pages/deprecated-security-apis/upload-retrieve-root-cert.adoc
index 862f9e527d..139e0998f6 100644
--- a/modules/rest-api/pages/deprecated-security-apis/upload-retrieve-root-cert.adoc
+++ b/modules/rest-api/pages/deprecated-security-apis/upload-retrieve-root-cert.adoc
@@ -13,7 +13,7 @@ These methods are deprecated in Couchbase Server Version 7.1.
== Http Methods and URIs
WARNING: The APIs listed below for uploading and retrieving the cluster's root certificate are deprecated.
-Users of Couchbase Server Version 7.1+ should use instead the APIs described in xref:rest-api:rest-certificate-management.adoc[Certificate Management API].
+Users of Couchbase Server 7.1 and later versions should instead use the APIs described in xref:rest-api:rest-certificate-management.adoc[Certificate Management API].
----
POST /controller/uploadClusterCA
diff --git a/modules/rest-api/pages/rbac.adoc b/modules/rest-api/pages/rbac.adoc
index c62ab5edba..90c88669ec 100644
--- a/modules/rest-api/pages/rbac.adoc
+++ b/modules/rest-api/pages/rbac.adoc
@@ -346,7 +346,7 @@ The names of bucket, scope, and collection must be separated by colons.
If successful, the call returns `200 OK`.
No object is returned.
-Note that in Couchbase Server 7.1.1+, if an existing user's password is to be changed, and their existing role-assignments are to be kept unchanged, the `/settings/rbac/users/local` URI can be used with the `PATCH` method: this allows the `password` parameter to be used, specifying a new password; and the `username` and `roles` parameters to be omitted.
+NOTE: In Couchbase Server 7.1.1 and later versions, if an existing user's password is to be changed, and their existing role-assignments are to be kept unchanged, the `/settings/rbac/users/local` URI can be used with the `PATCH` method: this allows the `password` parameter to be used, specifying a new password; and the `username` and `roles` parameters to be omitted.
[#example-create-local-users]
==== Examples: Create Local Users, Assigning Roles
diff --git a/modules/rest-api/pages/rest-cluster-addnodes.adoc b/modules/rest-api/pages/rest-cluster-addnodes.adoc
index e16d432419..0a19047dde 100644
--- a/modules/rest-api/pages/rest-cluster-addnodes.adoc
+++ b/modules/rest-api/pages/rest-cluster-addnodes.adoc
@@ -20,21 +20,21 @@ One or more services can be specified to run on the added node.
These are `kv` (data), `index` (index), `n1ql` (query), `eventing` (eventing), `fts` (search), `cbas` (analytics), and `backup` (backup).
If no services are specified, the Data Service is enabled by default.
-In 7.1+, heightened security is provided for adding nodes to clusters: a node that is to be added must itself be provisioned with conformant certificates before addition can be successfully performed.
+In 7.1 and later versions, heightened security is provided for adding nodes to clusters: a node that is to be added must itself be provisioned with conformant certificates before addition can be successfully performed.
The new node is now always added over an encrypted connection.
See xref:manage:manage-security/configure-server-certificates.adoc#adding-new-nodes[Adding and Joining New Nodes].
In consequence, a server to be added can be prefixed with the scheme `https://`, and/or can be suffixed with the port `18091`): if no scheme or port is specified, `https://` and `18091` are used as defaults.
-The scheme `http://` cannot be used; nor can the port `8091`, since in 7.1+, addition takes place only over a secure connection.
+The scheme `http://` cannot be used; nor can the port `8091`, since in 7.1 and later versions, addition takes place only over a secure connection.
-Further to ensure cluster-security, in Couchbase Server Version 7.1.1+, restrictions can be placed on node-addition, based on the establishment of _node-naming conventions_.
+Further, to ensure cluster-security, in Couchbase Server 7.1.1 and later versions, restrictions can be placed on node-addition, based on the establishment of _node-naming conventions_.
Only nodes whose names correspond to at least one of the stipulated conventions can be added.
For information, see xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition].
[#node-certificate-validation]
=== Validating Node Certificates
-In Couchbase Enterprise Server Version 7.2 or later, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name.
+In Couchbase Enterprise Server 7.2 and later versions, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name.
If such identification is not correctly configured, failure may occur when attempting to add or join the node to a cluster.
For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node-Certificate Validation].
diff --git a/modules/rest-api/pages/rest-cluster-joinnode.adoc b/modules/rest-api/pages/rest-cluster-joinnode.adoc
index 5ab6ea6d49..1a25328c9f 100644
--- a/modules/rest-api/pages/rest-cluster-joinnode.adoc
+++ b/modules/rest-api/pages/rest-cluster-joinnode.adoc
@@ -35,14 +35,14 @@ This REST request adds an individual server node to a cluster.
Two clusters cannot be merged together into a single cluster, however, a single node can be added to an existing cluster.
The `clusterMemberHostIp` and `clusterMemberPort` parameters must be specified to add a node to a cluster.
-To ensure cluster-security, in Couchbase Server Version 7.1.1+, restrictions can be placed on joining, based on the establishment of _node-naming conventions_.
+To ensure cluster-security, in Couchbase Server 7.1.1 and later versions, restrictions can be placed on joining, based on the establishment of _node-naming conventions_.
Only nodes whose names correspond to at least one of the stipulated conventions can be joined.
For information, see xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition].
[#node-certificate-validation]
=== Validating Node Certificates
-In Couchbase Enterprise Server Version 7.2+, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name.
+In Couchbase Enterprise Server 7.2 and later versions, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name.
If such identification is not correctly configured, failure may occur when attempting to add or join the node to a cluster.
For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node-Certificate Validation]. diff --git a/modules/rest-api/pages/rest-initialize-cluster.adoc b/modules/rest-api/pages/rest-initialize-cluster.adoc index 05047ebe96..51abc2a56c 100644 --- a/modules/rest-api/pages/rest-initialize-cluster.adoc +++ b/modules/rest-api/pages/rest-initialize-cluster.adoc @@ -149,7 +149,7 @@ This parameter must be explicitly specified, even if `SAME` is to be used. A comma-separated list of the naming conventions that determine which hosts are allowed to be added or joined to the new cluster. The default is `"*"`, which determines that any hostname is acceptable. For information, see xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition]. -This parameter is optional, and is available only in Couchbase Server Version 7.1.1+. +This parameter is optional, and is available only in Couchbase Server 7.1.1 and later versions. == Responses diff --git a/modules/rest-api/pages/rest-setting-security.adoc b/modules/rest-api/pages/rest-setting-security.adoc index fa3fd697bf..383413d0e5 100644 --- a/modules/rest-api/pages/rest-setting-security.adoc +++ b/modules/rest-api/pages/rest-setting-security.adoc @@ -78,7 +78,7 @@ A list of response headers. * `allowedHosts`. Specifies a list of naming conventions that must be met by the name of any node that is to be added or joined to the cluster. -This parameter, which is available only in 7.1.1+, is described separately, in xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition]. +This parameter, which is available only in 7.1.1 and later versions, is described separately, in xref:rest-api:rest-specify-node-addition-conventions.adoc[Restrict Node-Addition]. This parameter can only be set globally. * `allowNonLocalCACertUpload`. 
diff --git a/modules/rest-api/pages/rest-specify-node-addition-conventions.adoc b/modules/rest-api/pages/rest-specify-node-addition-conventions.adoc
index c53f50c664..40dff1919c 100644
--- a/modules/rest-api/pages/rest-specify-node-addition-conventions.adoc
+++ b/modules/rest-api/pages/rest-specify-node-addition-conventions.adoc
@@ -28,7 +28,7 @@ For a description of other parameters configurable by `POST /settings/security`,
Full Admin or Local Security Admin permissions are required.
-This API is available only in Couchbase Server Version 7.1.1+.
+This API is available only in Couchbase Server 7.1.1 and later versions.
[#curl-syntax]
== Curl Syntax
diff --git a/modules/rest-api/pages/rest-xdcr-adv-settings.adoc b/modules/rest-api/pages/rest-xdcr-adv-settings.adoc
index caeb9613c2..40fa6a46fb 100644
--- a/modules/rest-api/pages/rest-xdcr-adv-settings.adoc
+++ b/modules/rest-api/pages/rest-xdcr-adv-settings.adoc
@@ -662,7 +662,7 @@ This setting can be established and retrieved either for an individual replicati
| `mobile`
| Active or Off
| Default: `Off`.
-When set to `Active`, enables the setting _XDCR Active-Active with Sync Gateway 4.0+_ on the clusters of both sides of the replication. The default value `Off` indicates that the replication setup supports either _XDCR Active-Passive with Sync Gateway_ or _XDCR Active-Active without Sync Gateway_. For more information, see xref:learn:clusters-and-availability/xdcr-active-active-sgw.adoc[XDCR Active-Active with Sync Gateway].
+When set to `Active`, enables the setting _XDCR Active-Active with Sync Gateway 4.0 and later versions_ on the clusters of both sides of the replication. The default value `Off` indicates that the replication setup supports either _XDCR Active-Passive with Sync Gateway_ or _XDCR Active-Active without Sync Gateway_. For more information, see xref:learn:clusters-and-availability/xdcr-active-active-sgw.adoc[XDCR Active-Active with Sync Gateway].
| `networkUsageLimit`
| Integer
diff --git a/modules/rest-api/pages/rest-xdcr-create-replication.adoc b/modules/rest-api/pages/rest-xdcr-create-replication.adoc
index 6ba8a51605..be5a048371 100644
--- a/modules/rest-api/pages/rest-xdcr-create-replication.adoc
+++ b/modules/rest-api/pages/rest-xdcr-create-replication.adoc
@@ -131,7 +131,7 @@ The value can be `High`, `Medium`, or `Low`.
The default value is `High`.
For information, see xref:learn:clusters-and-availability/xdcr-overview.adoc#xdcr-priority[XDCR Priority].
-Use the `mobile=[Off | Active]` flag to enable the setting _XDCR Active-Active with Sync Gateway 4.0+_ by changing the value to `Active` on the clusters of both sides of the replication. The default value is `Off` , which indicates that the setup supports either _XDCR Active-Passive with Sync Gateway_ or _XDCR Active-Active without Sync Gateway_.
+Use the `mobile=[Off | Active]` flag to enable the setting _XDCR Active-Active with Sync Gateway 4.0 and later versions_ by changing the value to `Active` on the clusters of both sides of the replication. The default value is `Off`, which indicates that the setup supports either _XDCR Active-Passive with Sync Gateway_ or _XDCR Active-Active without Sync Gateway_.
[NOTE]
To enable the setting `mobile=[Off | Active]`, ensure you have enabled the property `enableCrossClusterVersioning` on all the participating buckets, which is a prerequisite.
For information about the bucket property `enableCrossClusterVersioning`, see xref:learn:clusters-and-availability/xdcr-enable-crossclusterversioning.adoc[XDCR enableCrossClusterVersioning].
diff --git a/modules/rest-api/pages/upload-retrieve-node-cert.adoc b/modules/rest-api/pages/upload-retrieve-node-cert.adoc index 333f73c9f9..b59b427ad8 100644 --- a/modules/rest-api/pages/upload-retrieve-node-cert.adoc +++ b/modules/rest-api/pages/upload-retrieve-node-cert.adoc @@ -33,7 +33,7 @@ For the loading of the node-certificate to succeed, the private key and chain fi [#node-certificate-validation] === Validating Node Certificates -In Couchbase Enterprise Server Version 7.2+, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. +In Couchbase Enterprise Server 7.2 and later versions, the node-name _must_ be correctly identified in the node certificate as a Subject Alternative Name. If such identification is not correctly configured, failure may occur when uploading the certificate. For information, see xref:learn:security/certificates.adoc#node-certificate-validation[Node-Certificate Validation]. diff --git a/modules/xdcr-reference/pages/xdcr-security-and-networking.adoc b/modules/xdcr-reference/pages/xdcr-security-and-networking.adoc index 8d087da598..c854aabc4c 100644 --- a/modules/xdcr-reference/pages/xdcr-security-and-networking.adoc +++ b/modules/xdcr-reference/pages/xdcr-security-and-networking.adoc @@ -184,7 +184,7 @@ As explained above, this CA must have been loaded into the trust store for the t Additionally, this CA must be recognizable to the source cluster: therefore, XDCR allows the CA to be passed to the source cluster during the set-up of the secure connection. See xref:manage:manage-xdcr/enable-full-secure-replication.adoc[Enable Fully Secure Replications] for examples; covering the UI, the CLI, and the REST API. -Note that, in Couchbase Server 7.1+, _multiple root certificates_ are supported (see xref:learn:security/using-multiple-cas.adoc[Using Multiple Root Certificates]). 
+Note that, in Couchbase Server 7.1 and later versions, _multiple root certificates_ are supported (see xref:learn:security/using-multiple-cas.adoc[Using Multiple Root Certificates]). Therefore, source and target clusters need not rely on the authority of the same CA: however, each must trust the CA of the other, if the client is to perform certificate-based authentication -- and consequently, if the CAs are different, the CA of the client must have been loaded into the trust store of the server, for authentication to succeed. See xref:rest-api:load-trusted-cas.adoc[Load Root Certificates], for further information. diff --git a/preview/DOC_13681.yml b/preview/DOC_13681.yml new file mode 100644 index 0000000000..54050c7a59 --- /dev/null +++ b/preview/DOC_13681.yml @@ -0,0 +1,28 @@ +sources: + docs-server: + branches: DOC_13681_install_upgrade_updates + docs-analytics: + branches: release/8.0 + docs-devex: + url: https://github.com/couchbaselabs/docs-devex.git + branches: master + startPaths: docs/ + couchbase-cli: + # url: ../../docs-includes/couchbase-cli + url: https://github.com/couchbaselabs/couchbase-cli-doc + # branches: HEAD + branches: master + startPaths: docs/ + backup: + # url: ../../docs-includes/backup + url: https://github.com/couchbaselabs/backup-docs.git + #branches: HEAD + branches: master + startPaths: docs/ + #analytics: + # url: ../../docs-includes/docs-analytics + # branches: HEAD + #cb-swagger: + # url: https://github.com/couchbaselabs/cb-swagger + # branches: release/8.0 + # start_path: docs