diff --git a/docs/modules/hbase/pages/getting_started/first_steps.adoc b/docs/modules/hbase/pages/getting_started/first_steps.adoc
index 9fdf3bbc..b2e153f6 100644
--- a/docs/modules/hbase/pages/getting_started/first_steps.adoc
+++ b/docs/modules/hbase/pages/getting_started/first_steps.adoc
@@ -1,6 +1,9 @@
 = First steps
+:description: Deploy and verify an HBase cluster using ZooKeeper, HDFS, and HBase configurations. Test with REST API and Apache Phoenix for table creation and data querying.
+:phoenix: https://phoenix.apache.org/index.html
 
-Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
+Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you can now deploy an HBase cluster and its dependencies.
+Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
 
 == Setup
 
@@ -11,7 +14,8 @@ To deploy a ZooKeeper cluster create one file called `zk.yaml`:
 [source,yaml]
 include::example$getting_started/zk.yaml[]
 
-We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper. Create another file called `znode.yaml` and define a separate ZNode for each service:
+We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper.
+Create another file called `znode.yaml` and define a separate ZNode for each service:
 
 [source,yaml]
 include::example$getting_started/znode.yaml[]
@@ -28,7 +32,8 @@ include::example$getting_started/getting_started.sh[tag=watch-zk-rollout]
 
 === HDFS
 
-An HDFS cluster has three components: the `namenode`, the `datanode` and the `journalnode`. Create a file named `hdfs.yaml` defining 2 `namenodes` and one `datanode` and `journalnode` each:
+An HDFS cluster has three components: the `namenode`, the `datanode` and the `journalnode`.
+Create a file named `hdfs.yaml` defining two `namenodes`, one `datanode` and one `journalnode`:
 
 [source,yaml]
 ----
@@ -37,10 +42,12 @@ include::example$getting_started/hdfs.yaml[]
 
 Where:
 
-- `metadata.name` contains the name of the HDFS cluster
-- the HBase version in the Docker image provided by Stackable must be set in `spec.image.productVersion`
+* `metadata.name` contains the name of the HDFS cluster
+* the HDFS version in the Docker image provided by Stackable must be set in `spec.image.productVersion`
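+
+For orientation, a minimal `hdfs.yaml` might look like the sketch below.
+This is an illustrative assumption rather than the exact file included above: the name, version and replica counts are placeholders, and the ZNode discovery ConfigMap name must match the ZNode created in `znode.yaml`.
+
+[source,yaml]
+----
+apiVersion: hdfs.stackable.tech/v1alpha1
+kind: HdfsCluster
+metadata:
+  name: simple-hdfs  # referenced by `metadata.name` above
+spec:
+  image:
+    productVersion: 3.3.4  # the HDFS version, see the note below
+  clusterConfig:
+    zookeeperConfigMapName: simple-hdfs-znode  # ZNode discovery ConfigMap
+  nameNodes:
+    roleGroups:
+      default:
+        replicas: 2
+  dataNodes:
+    roleGroups:
+      default:
+        replicas: 1
+  journalNodes:
+    roleGroups:
+      default:
+        replicas: 1
+----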
 
-NOTE: Please note that the version you need to specify for `spec.image.productVersion` is the desired version of Apache HBase. You can optionally specify the `spec.image.stackableVersion` to a certain release like `23.11.0` but it is recommended to leave it out and use the default provided by the operator. For a list of available versions please check our https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fhbase%2Ftags[image registry].
+NOTE: The version you need to specify for `spec.image.productVersion` is the desired version of Apache HDFS.
+You can optionally pin `spec.image.stackableVersion` to a certain release like `24.7.0`, but it is recommended to leave it out and use the default provided by the operator.
+For a list of available versions please check our https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fhdfs%2Ftags[image registry].
 It should generally be safe to simply use the latest image version that is available.
 
 Create the actual HDFS cluster by applying the file:
 
@@ -56,7 +63,8 @@ include::example$getting_started/getting_started.sh[tag=watch-hdfs-rollout]
 
 === HBase
 
-You can now create the HBase cluster. Create a file called `hbase.yaml` containing the following:
+You can now create the HBase cluster.
+Create a file called `hbase.yaml` containing the following:
 
 [source,yaml]
 ----
@@ -65,7 +73,9 @@ include::example$getting_started/hbase.yaml[]
 
 == Verify that it works
 
-To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table. You will also use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase. These actions wil be carried out from one of the HBase components, the REST server.
+To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table.
+You will also use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase.
+These actions will be carried out from one of the HBase components, the REST server.
 
 First, check the cluster version with this callout:
 
@@ -124,7 +134,8 @@ You can now create a table like this:
 [source]
 include::example$getting_started/getting_started.sh[tag=create-table]
 
-This will create a table `users` with a single column family `cf`. Its creation can be verified by listing it:
+This will create a table `users` with a single column family `cf`.
+Its creation can be verified by listing it:
 
 [source]
 include::example$getting_started/getting_started.sh[tag=get-table]
@@ -138,7 +149,8 @@ include::example$getting_started/getting_started.sh[tag=get-table]
     ]
 }
 
-An alternative way to interact with HBase is to use the https://phoenix.apache.org/index.html[Phoenix] library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory). Use the python utility `psql.py` (found in /stackable/phoenix/bin) to create, populate and query a table called `WEB_STAT`:
+An alternative way to interact with HBase is to use the {phoenix}[Phoenix] library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory).
+Use the Python utility `psql.py` (found in /stackable/phoenix/bin) to create, populate and query a table called `WEB_STAT`:
 
 [source]
 include::example$getting_started/getting_started.sh[tag=phoenix-table]
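+
+Expanded, the tagged command above is roughly equivalent to the following sketch.
+The files are the standard Phoenix samples; their exact location under /stackable/phoenix is an assumption:
+
+[source,bash]
+----
+# Run from inside the REST server Pod. psql.py creates the WEB_STAT table,
+# loads the sample CSV and runs the bundled queries in a single invocation
+# (a ZooKeeper quorum argument may precede the files; defaults assumed here).
+/stackable/phoenix/bin/psql.py \
+  /stackable/phoenix/examples/WEB_STAT.sql \
+  /stackable/phoenix/examples/WEB_STAT.csv \
+  /stackable/phoenix/examples/WEB_STAT_QUERIES.sql
+----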
diff --git a/docs/modules/hbase/pages/getting_started/index.adoc b/docs/modules/hbase/pages/getting_started/index.adoc
index a6df8b15..6cbdc32e 100644
--- a/docs/modules/hbase/pages/getting_started/index.adoc
+++ b/docs/modules/hbase/pages/getting_started/index.adoc
@@ -1,10 +1,11 @@
 = Getting started
 
-This guide will get you started with HBase using the Stackable operator. It will guide you through the installation of the operator and its dependencies, setting up your first HBase cluster and verifying its operation.
+This guide will get you started with HBase using the Stackable operator.
+It guides you through the installation of the operator and its dependencies, setting up your first HBase cluster and verifying its operation.
 
 == Prerequisites
 
-You will need:
+To get started you need:
 
 * a Kubernetes cluster
 * kubectl
@@ -18,7 +19,7 @@ Resource sizing depends on cluster type(s), usage and scope, but as a starting p
 
 == What's next
 
-The Guide is divided into two steps:
+The guide is divided into two steps:
 
 * xref:getting_started/installation.adoc[Installing the Operators].
 * xref:getting_started/first_steps.adoc[Setting up the HBase cluster and verifying it works].
diff --git a/docs/modules/hbase/pages/getting_started/installation.adoc b/docs/modules/hbase/pages/getting_started/installation.adoc
index 3297624d..fca0cffa 100644
--- a/docs/modules/hbase/pages/getting_started/installation.adoc
+++ b/docs/modules/hbase/pages/getting_started/installation.adoc
@@ -1,7 +1,8 @@
 = Installation
+:description: Install Stackable HBase and required operators using stackablectl or Helm on Kubernetes. Follow setup and verification steps for a complete installation.
+:kind: https://kind.sigs.k8s.io/
 
-On this page you will install the Stackable HBase operator and its dependencies, the ZooKeeper and HDFS operators, as
-well as the commons, secret and listener operators which are required by all Stackable operators.
+On this page you will install the Stackable HBase operator and its dependencies, the ZooKeeper and HDFS operators, as well as the commons, secret and listener operators which are required by all Stackable operators.
 
 == Stackable Operators
 
@@ -12,8 +13,8 @@ There are 2 ways to run Stackable operators
 
 === stackablectl
 
-`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install
-operators. Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.
+`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install operators.
+Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.
 
 After you have installed stackablectl run the following command to install all operators necessary for the HBase cluster:
 
@@ -28,12 +29,13 @@ The tool will show
 
 include::example$getting_started/install_output.txt[]
 
-TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`. For
-example, you can use the `--cluster kind` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].
+TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`.
+For example, you can use the `--cluster kind` flag to create a Kubernetes cluster with {kind}[kind].
 
 === Helm
 
-You can also use Helm to install the operators. Add the Stackable Helm repository:
+You can also use Helm to install the operators.
+Add the Stackable Helm repository:
 [source,bash]
 ----
 include::example$getting_started/getting_started.sh[tag=helm-add-repo]
 ----
@@ -45,8 +47,8 @@ Then install the Stackable Operators:
 include::example$getting_started/getting_started.sh[tag=helm-install-operators]
 ----
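+
+If you are reading this without the included script, the two tagged snippets boil down to something like the following sketch.
+The repository name, URL and chart names reflect Stackable's public Helm repository, but treat them as assumptions and prefer the included, version-pinned commands:
+
+[source,bash]
+----
+# Add the Stackable Helm repository (stable releases).
+helm repo add stackable-stable https://repo.stackable.tech/repository/helm-stable/
+
+# Install the HBase operator and the operators it depends on.
+helm install zookeeper-operator stackable-stable/zookeeper-operator
+helm install hdfs-operator stackable-stable/hdfs-operator
+helm install commons-operator stackable-stable/commons-operator
+helm install secret-operator stackable-stable/secret-operator
+helm install listener-operator stackable-stable/listener-operator
+helm install hbase-operator stackable-stable/hbase-operator
+----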
 
-Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs
-for the required operators). You are now ready to deploy HBase in Kubernetes.
+Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs for the required operators).
+You are now ready to deploy HBase in Kubernetes.
 
 == What's next
 
diff --git a/docs/modules/hbase/pages/index.adoc b/docs/modules/hbase/pages/index.adoc
index 7ce8ec06..4a15ac44 100644
--- a/docs/modules/hbase/pages/index.adoc
+++ b/docs/modules/hbase/pages/index.adoc
@@ -1,5 +1,5 @@
 = Stackable Operator for Apache HBase
-:description: The Stackable Operator for Apache HBase is a Kubernetes operator that can manage Apache HBase clusters. Learn about its features, resources, dependencies, and demos, and see the list of supported HBase versions.
+:description: Manage Apache HBase clusters on Kubernetes with the Stackable Operator: supports multiple HBase versions, integrates with ZooKeeper and HDFS.
 :keywords: Stackable Operator, Apache HBase, Kubernetes, operator, ZooKeeper, HDFS
 :hbase: https://hbase.apache.org/
 :github: https://github.com/stackabletech/hbase-operator/
diff --git a/docs/modules/hbase/pages/usage-guide/compression.adoc b/docs/modules/hbase/pages/usage-guide/compression.adoc
index e461f557..d6813994 100644
--- a/docs/modules/hbase/pages/usage-guide/compression.adoc
+++ b/docs/modules/hbase/pages/usage-guide/compression.adoc
@@ -1,5 +1,6 @@
 = Compression support
 :hbase-docs-compression: https://hbase.apache.org/book.html#changing.compression
+:description: Stackable HBase supports GZip and Snappy compression. Learn to enable and use compression for column families via HBase Shell commands.
 
 Stackable images of Apache HBase support compressed column families.
 The following compression algorithms are supported for all HBase 2.4 versions:
diff --git a/docs/modules/hbase/pages/usage-guide/hbck2.adoc b/docs/modules/hbase/pages/usage-guide/hbck2.adoc
index a70a9f7d..c013ed21 100644
--- a/docs/modules/hbase/pages/usage-guide/hbck2.adoc
+++ b/docs/modules/hbase/pages/usage-guide/hbck2.adoc
@@ -1,4 +1,5 @@
 = Repairing a cluster with HBCK2
+:description: Use HBCK2 from hbase-operator-tools to repair HBase clusters. It helps fix issues like unknown RegionServers via the hbck2 script.
 :hbck2-github: https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
 :hbase-operator-tools-github: https://github.com/apache/hbase-operator-tools/
diff --git a/docs/modules/hbase/pages/usage-guide/logging.adoc b/docs/modules/hbase/pages/usage-guide/logging.adoc
index e8398840..2a07afdb 100644
--- a/docs/modules/hbase/pages/usage-guide/logging.adoc
+++ b/docs/modules/hbase/pages/usage-guide/logging.adoc
@@ -1,4 +1,5 @@
 = Log aggregation
+:description: The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent.
 
 The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:
diff --git a/docs/modules/hbase/pages/usage-guide/monitoring.adoc b/docs/modules/hbase/pages/usage-guide/monitoring.adoc
index dd264162..c9c06eec 100644
--- a/docs/modules/hbase/pages/usage-guide/monitoring.adoc
+++ b/docs/modules/hbase/pages/usage-guide/monitoring.adoc
@@ -1,4 +1,5 @@
 = Monitoring
+:description: The managed HBase instances are automatically configured to export Prometheus metrics.
 
 The managed HBase instances are automatically configured to export Prometheus metrics.
 See xref:operators:monitoring.adoc[] for more details.
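+
+As a quick check, you can port-forward to one of the Pods and fetch the metrics endpoint.
+The Pod name and port below are assumptions for illustration (a cluster named `simple-hbase` from the getting started guide, and a metrics port of 9100); the authoritative details are on the linked monitoring page:
+
+[source,bash]
+----
+# Forward a local port to the assumed metrics port of a RegionServer Pod,
+# then scrape the Prometheus metrics once.
+kubectl port-forward pod/simple-hbase-regionserver-default-0 9100 &
+curl http://localhost:9100/metrics
+----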
diff --git a/docs/modules/hbase/pages/usage-guide/overrides.adoc b/docs/modules/hbase/pages/usage-guide/overrides.adoc
index 7b56cf3f..b39fccb9 100644
--- a/docs/modules/hbase/pages/usage-guide/overrides.adoc
+++ b/docs/modules/hbase/pages/usage-guide/overrides.adoc
@@ -1,5 +1,6 @@
 = Configuration, environment and Pod overrides
+:description: Customize HBase with configuration, environment, and Pod overrides. Adjust properties in hbase-site.xml, hbase-env.sh, and security.properties as needed.
 
 The HBase xref:concepts:stacklet.adoc[Stacklet] definition also supports overriding configuration properties, environment variables and Pod specs, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).
diff --git a/docs/modules/hbase/pages/usage-guide/phoenix.adoc b/docs/modules/hbase/pages/usage-guide/phoenix.adoc
index 4c800138..ba86a3f7 100644
--- a/docs/modules/hbase/pages/usage-guide/phoenix.adoc
+++ b/docs/modules/hbase/pages/usage-guide/phoenix.adoc
@@ -1,4 +1,5 @@
 = Using Apache Phoenix
+:description: Apache Phoenix lets you use SQL with HBase via JDBC. Use the bundled psql.py or sqlline.py for table creation and querying; no separate installation is needed.
 :phoenix-installation: https://phoenix.apache.org/installation.html
 :sqlline-github: https://github.com/julianhyde/sqlline
diff --git a/docs/modules/hbase/pages/usage-guide/resource-requests.adoc b/docs/modules/hbase/pages/usage-guide/resource-requests.adoc
index bc193cdf..f3909ee3 100644
--- a/docs/modules/hbase/pages/usage-guide/resource-requests.adoc
+++ b/docs/modules/hbase/pages/usage-guide/resource-requests.adoc
@@ -1,4 +1,5 @@
 = Resource requests
+:description: Stackable managed HBase defaults to minimal resource requests. Adjust CPU and memory limits for production clusters to ensure proper performance.
 
 include::home:concepts:stackable_resource_requests.adoc[]
diff --git a/docs/modules/hbase/pages/usage-guide/security.adoc b/docs/modules/hbase/pages/usage-guide/security.adoc
index 91f54e3c..6353ac26 100644
--- a/docs/modules/hbase/pages/usage-guide/security.adoc
+++ b/docs/modules/hbase/pages/usage-guide/security.adoc
@@ -1,4 +1,5 @@
 = Security
+:description: Apache HBase supports Kerberos for authentication and Open Policy Agent (OPA) for authorization. Configure both in HDFS and HBase for secure access.
 
 == Authentication
 Currently the only supported authentication mechanism is Kerberos, which is disabled by default.
diff --git a/docs/modules/hbase/pages/usage-guide/snapshot-export.adoc b/docs/modules/hbase/pages/usage-guide/snapshot-export.adoc
index c8475a42..491b95d2 100644
--- a/docs/modules/hbase/pages/usage-guide/snapshot-export.adoc
+++ b/docs/modules/hbase/pages/usage-guide/snapshot-export.adoc
@@ -1,4 +1,5 @@
 = Exporting a snapshot to S3
+:description: Export HBase snapshots to S3 using export-snapshot-to-s3. Configure AWS settings, then use the script or create a job for efficient transfers.
 
 HBase snapshots can be exported with the command `hbase snapshot export`.
 To be able to export to S3, the AWS libraries from Hadoop must be on the classpath.
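+
+For illustration only, an export might then look like the following sketch.
+The snapshot name and bucket are placeholders, and the flags follow Hadoop's ExportSnapshot tool:
+
+[source,bash]
+----
+# Export an existing snapshot to an S3 bucket via the s3a:// filesystem
+# (assumes the AWS libraries are already on the classpath as described above).
+hbase snapshot export \
+  -snapshot users-snapshot \
+  -copy-to s3a://my-bucket/hbase-snapshots/
+----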