32 changes: 22 additions & 10 deletions docs/modules/hbase/pages/getting_started/first_steps.adoc
@@ -1,6 +1,9 @@
= First steps
:description: Deploy and verify an HBase cluster using ZooKeeper, HDFS, and HBase configurations. Test with REST API and Apache Phoenix for table creation and data querying.
+:phoenix: https://phoenix.apache.org/index.html

-Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
+Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies.
+Afterward you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).

== Setup

@@ -11,7 +14,8 @@
[source,yaml]
include::example$getting_started/zk.yaml[]

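For orientation, a minimal `zk.yaml` could look roughly like this sketch (the included example file above is authoritative; names and version are illustrative, field names assumed from the Stackable ZookeeperCluster CRD):

[source,yaml]
----
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  image:
    productVersion: 3.8.4  # illustrative version
  servers:
    roleGroups:
      default:
        replicas: 3
----
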
-We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper. Create another file called `znode.yaml` and define a separate ZNode for each service:
+We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper.
+Create another file called `znode.yaml` and define a separate ZNode for each service:

[source,yaml]
include::example$getting_started/znode.yaml[]
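
A sketch of what `znode.yaml` might define, one ZookeeperZnode per service (resource names are assumptions):

[source,yaml]
----
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-hdfs-znode
spec:
  clusterRef:
    name: simple-zk
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-hbase-znode
spec:
  clusterRef:
    name: simple-zk
----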
@@ -28,7 +32,8 @@

=== HDFS

-An HDFS cluster has three components: the `namenode`, the `datanode` and the `journalnode`. Create a file named `hdfs.yaml` defining 2 `namenodes` and one `datanode` and `journalnode` each:
+An HDFS cluster has three components: the `namenode`, the `datanode` and the `journalnode`.
+Create a file named `hdfs.yaml` defining 2 `namenodes` and one `datanode` and `journalnode` each:

[source,yaml]
----
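# The example manifest include is collapsed in this diff. As a rough sketch
# (field names assumed from the Stackable HdfsCluster CRD; names and version
# are illustrative), it defines something like:
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
  name: simple-hdfs
spec:
  image:
    productVersion: 3.3.6
  clusterConfig:
    zookeeperConfigMapName: simple-hdfs-znode  # discovery ConfigMap of the ZNode above
  nameNodes:
    roleGroups:
      default:
        replicas: 2
  dataNodes:
    roleGroups:
      default:
        replicas: 1
  journalNodes:
    roleGroups:
      default:
        replicas: 1
----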
@@ -37,10 +42,12 @@

Where:

-- `metadata.name` contains the name of the HDFS cluster
-- the HBase version in the Docker image provided by Stackable must be set in `spec.image.productVersion`
+* `metadata.name` contains the name of the HDFS cluster
+* the HDFS version in the Docker image provided by Stackable must be set in `spec.image.productVersion`

-NOTE: Please note that the version you need to specify for `spec.image.productVersion` is the desired version of Apache HBase. You can optionally specify the `spec.image.stackableVersion` to a certain release like `23.11.0` but it is recommended to leave it out and use the default provided by the operator. For a list of available versions please check our https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fhbase%2Ftags[image registry].
+NOTE: The version you need to specify for `spec.image.productVersion` is the desired version of Apache HDFS.
+You can optionally pin `spec.image.stackableVersion` to a certain release like `24.7.0`, but it is recommended to leave it out and use the default provided by the operator.
+For a list of available versions, please check our https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fhbase%2Ftags[image registry].
It should generally be safe to simply use the latest image version that is available.

Create the actual HDFS cluster by applying the file:
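
The apply step is collapsed in this diff; presumably it is the usual:

[source,bash]
----
kubectl apply -f hdfs.yaml
----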
@@ -56,7 +63,8 @@

=== HBase

-You can now create the HBase cluster. Create a file called `hbase.yaml` containing the following:
+You can now create the HBase cluster.
+Create a file called `hbase.yaml` containing the following:

[source,yaml]
----
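# The example manifest include is collapsed in this diff. A minimal sketch of
# what hbase.yaml might contain (field names assumed from the Stackable
# HbaseCluster CRD; names and version are illustrative):
apiVersion: hbase.stackable.tech/v1alpha1
kind: HbaseCluster
metadata:
  name: simple-hbase
spec:
  image:
    productVersion: 2.4.18
  clusterConfig:
    hdfsConfigMapName: simple-hdfs
    zookeeperConfigMapName: simple-hbase-znode
  masters:
    roleGroups:
      default:
        replicas: 1
  regionServers:
    roleGroups:
      default:
        replicas: 1
  restServers:
    roleGroups:
      default:
        replicas: 1
----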
@@ -65,7 +73,9 @@

== Verify that it works

-To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table. You will also use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase. These actions wil be carried out from one of the HBase components, the REST server.
+To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table.
+You will also use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase.
+These actions will be carried out from one of the HBase components, the REST server.

First, check the cluster version with this callout:

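The callout itself is collapsed here; against the standard HBase REST API it would look something like this (hostname and port are assumptions, adjust to your environment):

[source,bash]
----
# Query the cluster version via the HBase REST server.
curl -s -XGET -H "Accept: application/json" \
  http://simple-hbase-restserver-default:8080/version/cluster
----
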
@@ -124,7 +134,8 @@
[source]
include::example$getting_started/getting_started.sh[tag=create-table]

-This will create a table `users` with a single column family `cf`. Its creation can be verified by listing it:
+This will create a table `users` with a single column family `cf`.
+Its creation can be verified by listing it:

[source]
include::example$getting_started/getting_started.sh[tag=get-table]
@@ -138,7 +149,8 @@
]
}

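For reference, the two REST calls behind creating and listing the table might look like this sketch (standard HBase REST endpoints; host and port assumed as above):

[source,bash]
----
# Create a table 'users' with a single column family 'cf' ...
curl -s -XPUT -H "Content-Type: application/json" \
  -d '{"name": "users", "ColumnSchema": [{"name": "cf"}]}' \
  http://simple-hbase-restserver-default:8080/users/schema

# ... then read the schema back to verify the table exists.
curl -s -XGET -H "Accept: application/json" \
  http://simple-hbase-restserver-default:8080/users/schema
----
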
-An alternative way to interact with HBase is to use the https://phoenix.apache.org/index.html[Phoenix] library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory). Use the python utility `psql.py` (found in /stackable/phoenix/bin) to create, populate and query a table called `WEB_STAT`:
+An alternative way to interact with HBase is to use the {phoenix}[Phoenix] library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory).
+Use the Python utility `psql.py` (found in /stackable/phoenix/bin) to create, populate and query a table called `WEB_STAT`:

[source]
include::example$getting_started/getting_started.sh[tag=phoenix-table]
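
The invocation is collapsed above; with the sample files that ship with Phoenix it would be along these lines (the exact paths are assumptions):

[source,bash]
----
# Create the WEB_STAT table, load the sample rows, then run the bundled queries.
/stackable/phoenix/bin/psql.py \
  /stackable/phoenix/examples/WEB_STAT.sql \
  /stackable/phoenix/examples/WEB_STAT.csv \
  /stackable/phoenix/examples/WEB_STAT_QUERIES.sql
----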
7 changes: 4 additions & 3 deletions docs/modules/hbase/pages/getting_started/index.adoc
@@ -1,10 +1,11 @@
= Getting started

-This guide will get you started with HBase using the Stackable operator. It will guide you through the installation of the operator and its dependencies, setting up your first HBase cluster and verifying its operation.
+This guide will get you started with HBase using the Stackable operator.
+It guides you through the installation of the operator and its dependencies, setting up your first HBase cluster and verifying its operation.

== Prerequisites

-You will need:
+To get started you need:

* a Kubernetes cluster
* kubectl
@@ -18,7 +19,7 @@ Resource sizing depends on cluster type(s), usage and scope, but as a starting p

== What's next

-The Guide is divided into two steps:
+The guide is divided into two steps:

* xref:getting_started/installation.adoc[Installing the Operators].
* xref:getting_started/first_steps.adoc[Setting up the HBase cluster and verifying it works].
20 changes: 11 additions & 9 deletions docs/modules/hbase/pages/getting_started/installation.adoc
@@ -1,7 +1,8 @@
= Installation
+:description: Install Stackable HBase and required operators using stackablectl or Helm on Kubernetes. Follow setup and verification steps for a complete installation.
+:kind: https://kind.sigs.k8s.io/

-On this page you will install the Stackable HBase operator and its dependencies, the ZooKeeper and HDFS operators, as
-well as the commons, secret and listener operators which are required by all Stackable operators.
+On this page you will install the Stackable HBase operator and its dependencies, the ZooKeeper and HDFS operators, as well as the commons, secret and listener operators which are required by all Stackable operators.

== Stackable Operators

@@ -12,8 +13,8 @@ There are 2 ways to run Stackable operators

=== stackablectl

-`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install
-operators. Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.
+`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install operators.
+Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.

After you have installed stackablectl run the following command to install all operators necessary for the HBase cluster:

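The command is collapsed in this diff; it is presumably along these lines (the operator list is assumed from the dependencies named above):

[source,bash]
----
stackablectl operator install commons secret listener zookeeper hdfs hbase
----
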
@@ -28,12 +29,13 @@ The tool will show
include::example$getting_started/install_output.txt[]


-TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`. For
-example, you can use the `--cluster kind` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].
+TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`.
+For example, you can use the `--cluster kind` flag to create a Kubernetes cluster with {kind}[kind].

=== Helm

-You can also use Helm to install the operators. Add the Stackable Helm repository:
+You can also use Helm to install the operators.
+Add the Stackable Helm repository:
[source,bash]
----
include::example$getting_started/getting_started.sh[tag=helm-add-repo]
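# The include above resolves to the actual command; presumably:
helm repo add stackable-stable https://repo.stackable.tech/repository/helm-stable/
----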
@@ -45,8 +47,8 @@ Then install the Stackable Operators:
include::example$getting_started/getting_started.sh[tag=helm-install-operators]
----

-Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs
-for the required operators). You are now ready to deploy HBase in Kubernetes.
+Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs for the required operators).
+You are now ready to deploy HBase in Kubernetes.

== What's next

2 changes: 1 addition & 1 deletion docs/modules/hbase/pages/index.adoc
@@ -1,5 +1,5 @@
= Stackable Operator for Apache HBase
-:description: The Stackable Operator for Apache HBase is a Kubernetes operator that can manage Apache HBase clusters. Learn about its features, resources, dependencies, and demos, and see the list of supported HBase versions.
+:description: Manage Apache HBase clusters on Kubernetes with the Stackable Operator: supports multiple HBase versions, integrates with ZooKeeper and HDFS.
:keywords: Stackable Operator, Apache HBase, Kubernetes, operator, ZooKeeper, HDFS
:hbase: https://hbase.apache.org/
:github: https://github.com/stackabletech/hbase-operator/
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/compression.adoc
@@ -1,5 +1,6 @@
= Compression support
:hbase-docs-compression: https://hbase.apache.org/book.html#changing.compression
+:description: Stackable HBase supports GZip and Snappy compression. Learn to enable and use compression for column families via HBase Shell commands.

Stackable images of Apache HBase support compressed column families.
The following compression algorithms are supported for all HBase 2.4 versions:
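
The algorithm list itself is collapsed in this diff. Enabling compression for a column family is a one-liner in the HBase shell, for example (table and family names illustrative):

[source]
----
create 'compressed_table', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
----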
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/hbck2.adoc
@@ -1,4 +1,5 @@
= Repairing a cluster with HBCK2
+:description: Use HBCK2 from hbase-operator-tools to repair HBase clusters. It helps fix issues like unknown RegionServers via the hbck2 script.
:hbck2-github: https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
:hbase-operator-tools-github: https://github.com/apache/hbase-operator-tools/

1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/logging.adoc
@@ -1,4 +1,5 @@
= Log aggregation
+:description: The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent.

The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:

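The example itself is collapsed in this diff; a sketch of the relevant fields (the ConfigMap name is an assumption; the structure follows the common Stackable logging mechanism):

[source,yaml]
----
spec:
  clusterConfig:
    vectorAggregatorConfigMapName: vector-aggregator-discovery
  regionServers:
    config:
      logging:
        enableVectorAgent: true
----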
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/monitoring.adoc
@@ -1,4 +1,5 @@
= Monitoring
+:description: The managed HBase instances are automatically configured to export Prometheus metrics.

The managed HBase instances are automatically configured to export Prometheus metrics.
See xref:operators:monitoring.adoc[] for more details.
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/overrides.adoc
@@ -1,5 +1,6 @@

= Configuration, environment and Pod overrides
+:description: Customize HBase with configuration, environment, and Pod overrides. Adjust properties in hbase-site.xml, hbase-env.sh, and security.properties as needed.

The HBase xref:concepts:stacklet.adoc[Stacklet] definition also supports overriding configuration properties, environment variables and Pod specs, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).

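As a sketch, a configuration override on the role level might look like this (the property and value are purely illustrative):

[source,yaml]
----
regionServers:
  configOverrides:
    hbase-site.xml:
      hbase.hregion.majorcompaction: "86400000"
  roleGroups:
    default:
      replicas: 1
----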
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/phoenix.adoc
@@ -1,4 +1,5 @@
= Using Apache Phoenix
+:description: Apache Phoenix lets you use SQL with HBase via JDBC. Use bundled psql.py or sqlline.py for table creation and querying, no separate installation needed.
:phoenix-installation: https://phoenix.apache.org/installation.html
:sqlline-github: https://github.com/julianhyde/sqlline

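The page body is collapsed in this diff; the interactive entry point it mentions is `sqlline.py`, invoked roughly like this (the ZooKeeper connection string is environment-specific and assumed here):

[source,bash]
----
# Open an interactive SQL shell against HBase via Phoenix.
/stackable/phoenix/bin/sqlline.py simple-zk-server-default:2181
----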
@@ -1,4 +1,5 @@
= Resource requests
+:description: Stackable managed HBase defaults to minimal resource requests. Adjust CPU and memory limits for production clusters to ensure proper performance.

include::home:concepts:stackable_resource_requests.adoc[]

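For HBase roles this typically translates into settings like the following sketch (values are illustrative; the structure follows the common Stackable resources mechanism):

[source,yaml]
----
regionServers:
  config:
    resources:
      cpu:
        min: 500m
        max: "2"
      memory:
        limit: 2Gi
----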
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/security.adoc
@@ -1,4 +1,5 @@
= Security
+:description: Apache HBase supports Kerberos for authentication and Open Policy Agent (OPA) for authorization. Configure both in HDFS and HBase for secure access.

== Authentication
Currently the only supported authentication mechanism is Kerberos, which is disabled by default.
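
The configuration itself is collapsed in this diff; enabling Kerberos is sketched below (field names assumed from the Stackable authentication mechanism; the referenced SecretClass must exist):

[source,yaml]
----
spec:
  clusterConfig:
    authentication:
      kerberos:
        secretClass: kerberos-default  # SecretClass providing keytabs; name assumed
----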
1 change: 1 addition & 0 deletions docs/modules/hbase/pages/usage-guide/snapshot-export.adoc
@@ -1,4 +1,5 @@
= Exporting a snapshot to S3
+:description: Export HBase snapshots to S3 using export-snapshot-to-s3. Configure AWS settings, then use the script or create a job for efficient transfers.

HBase snapshots can be exported with the command `hbase snapshot export`.
To be able to export to S3, the AWS libraries from Hadoop must be on the classpath.
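
The remainder of the page is collapsed; conceptually the export boils down to something like this (classpath, snapshot name and bucket are placeholders; the `export-snapshot-to-s3` script mentioned in the description wraps these details):

[source,bash]
----
# Put Hadoop's AWS libraries on the classpath, then export a snapshot to S3.
export HADOOP_CLASSPATH="/stackable/hadoop/share/hadoop/tools/lib/*"
hbase snapshot export -snapshot my-snapshot -copy-to s3a://my-bucket/hbase-snapshots/
----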