From c3de841cf3cb3444c3088ca781d20ee531151507 Mon Sep 17 00:00:00 2001 From: xeniape Date: Fri, 22 Aug 2025 15:34:06 +0200 Subject: [PATCH] chore: improve usability of external links --- modules/ROOT/pages/export.adoc | 8 +-- modules/ROOT/pages/kubernetes/aks.adoc | 3 +- modules/ROOT/pages/kubernetes/eks.adoc | 2 +- modules/ROOT/pages/kubernetes/gke.adoc | 2 +- .../ROOT/pages/kubernetes/huawei-cloud.adoc | 2 +- modules/ROOT/pages/kubernetes/ibm-cloud.adoc | 2 +- modules/ROOT/pages/kubernetes/index.adoc | 6 +- .../pages/kubernetes/ionos-managed-k8s.adoc | 2 +- .../kubernetes/ionos-managed-stackable.adoc | 2 +- modules/ROOT/pages/kubernetes/kind.adoc | 2 +- modules/ROOT/pages/kubernetes/microk8s.adoc | 2 +- modules/ROOT/pages/kubernetes/oke.adoc | 2 +- modules/ROOT/pages/kubernetes/openshift.adoc | 5 +- modules/ROOT/pages/kubernetes/ovh-mks.adoc | 2 +- modules/ROOT/pages/kubernetes/plusserver.adoc | 2 +- modules/ROOT/pages/kubernetes/ske.adoc | 2 +- modules/ROOT/pages/kubernetes/suse-k3s.adoc | 2 +- .../ROOT/pages/kubernetes/suse-rancher.adoc | 2 +- .../ROOT/pages/kubernetes/vmware_tanzu.adoc | 2 +- modules/ROOT/pages/licenses.adoc | 36 ++++++------ modules/ROOT/pages/policies.adoc | 8 +-- modules/ROOT/pages/product-information.adoc | 6 +- modules/ROOT/pages/quickstart.adoc | 10 ++-- .../release-notes/release-template.adoc | 2 +- modules/concepts/pages/authentication.adoc | 8 +-- modules/concepts/pages/container-images.adoc | 6 +- .../pages/multi-platform-support.adoc | 2 +- .../pages/observability/containerdebug.adoc | 2 +- .../concepts/pages/observability/labels.adoc | 2 +- .../concepts/pages/observability/logging.adoc | 14 ++--- modules/concepts/pages/opa.adoc | 8 +-- .../pages/operations/graceful_shutdown.adoc | 2 +- modules/concepts/pages/operations/index.adoc | 6 +- .../pages/operations/pod_disruptions.adoc | 6 +- .../pages/operations/pod_placement.adoc | 2 +- modules/concepts/pages/overrides.adoc | 2 +- .../pages/product-image-selection.adoc | 10 ++-- modules/concepts/pages/resources.adoc | 14 ++--- modules/concepts/pages/s3.adoc | 2 +- .../pages/stackable_resource_requests.adoc | 2 +- .../contributor/pages/code-style-guide.adoc | 14 ++--- .../contributor/pages/contributing-code.adoc | 58 +++++++++---------- .../pages/docs/backporting-changes.adoc | 4 +- .../pages/docs/crd-documentation.adoc | 4 +- modules/contributor/pages/docs/overview.adoc | 30 +++++----- .../pages/docs/releasing-a-new-version.adoc | 4 +- .../contributor/pages/docs/style-guide.adoc | 50 ++++++++-------- .../pages/docs/troubleshooting-antora.adoc | 6 +- .../pages/docs/using-tab-blocks.adoc | 2 +- .../pages/guidelines/opa-configuration.adoc | 16 ++--- .../pages/guidelines/service-discovery.adoc | 8 +-- .../pages/guidelines/webhook-server.adoc | 2 +- modules/contributor/pages/index.adoc | 14 ++--- .../contributor/pages/project-overview.adoc | 44 +++++++------- .../pages/testing-infrastructure.adoc | 18 +++--- .../pages/testing-on-kubernetes.adoc | 20 +++---- modules/guides/pages/custom-images.adoc | 6 +- ...ling-verification-of-image-signatures.adoc | 34 +++++------ .../pages/kubernetes-cluster-domain.adoc | 4 +- .../pages/providing-resources-with-pvcs.adoc | 22 +++---- ...stackable-in-an-airgapped-environment.adoc | 8 +-- .../pages/viewing-and-verifying-sboms.adoc | 16 ++--- modules/operators/pages/index.adoc | 2 +- modules/operators/pages/monitoring.adoc | 6 +- modules/reference/pages/duration.adoc | 8 +-- .../pages/authentication_with_openldap.adoc | 6 +- modules/tutorials/pages/jupyterhub.adoc | 8 +-- 
.../pages/logging-vector-aggregator.adoc | 10 ++-- 68 files changed, 312 insertions(+), 314 deletions(-) diff --git a/modules/ROOT/pages/export.adoc b/modules/ROOT/pages/export.adoc index c99e19d92..2512735e3 100644 --- a/modules/ROOT/pages/export.adoc +++ b/modules/ROOT/pages/export.adoc @@ -3,7 +3,7 @@ == USA -The USA adopts the https://en.wikipedia.org/wiki/Export_Administration_Regulations[Export Administration Regulations (EAR)] as the primary regulation to control exports. +The USA adopts the https://en.wikipedia.org/wiki/Export_Administration_Regulations[Export Administration Regulations (EAR){external-link-icon}^] as the primary regulation to control exports. All of our products are outside of the scope of EAR because they fall under the _publicly available_ exemption and do not contain non-standard cryptography. @@ -11,7 +11,7 @@ NOTE: That means E-Mail notifications to the NSA and BIS are not required and ar In particular: -* We are exempt under https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-734/section-734.3[EAR 734.3(b)] because we publish according to https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-734/section-734.7[EAR 734.7] -* We are exempt under https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-742/section-742.15[EAR 742.15(b)] as our software includes only https://ecfr.io/Title-15/Section-772.1[standard encryption] +* We are exempt under https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-734/section-734.3[EAR 734.3(b){external-link-icon}^] because we publish according to https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-734/section-734.7[EAR 734.7{external-link-icon}^] +* We are exempt under https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-742/section-742.15[EAR 742.15(b){external-link-icon}^] as our software includes only https://ecfr.io/Title-15/Section-772.1[standard encryption{external-link-icon}^] -The Stackable Data Platform is open source, and the source code of all the components can be found on https://github.com/stackabletech/[GitHub]. +The Stackable Data Platform is open source, and the source code of all the components can be found on https://github.com/stackabletech/[GitHub{external-link-icon}^]. diff --git a/modules/ROOT/pages/kubernetes/aks.adoc b/modules/ROOT/pages/kubernetes/aks.adoc index 60309d98b..377a6c99d 100644 --- a/modules/ROOT/pages/kubernetes/aks.adoc +++ b/modules/ROOT/pages/kubernetes/aks.adoc @@ -1,6 +1,6 @@ = Azure Kubernetes Service (AKS) -https://azure.microsoft.com/en-us/products/kubernetes-service +https://azure.microsoft.com/en-us/products/kubernetes-service[https://azure.microsoft.com/en-us/products/kubernetes-service{external-link-icon}^] Automatic Kubernetes clusters are not supported, as the xref:secret-operator:index.adoc[secret-operator] requires special privileges that are not granted in automatic Kubernetes clusters. @@ -47,4 +47,3 @@ image::managed-k8s/aks/6.png[] Access your Kubernetes by clicking on the `Connect` button and following the instructions. 
+ image::managed-k8s/aks/7.png[] - diff --git a/modules/ROOT/pages/kubernetes/eks.adoc b/modules/ROOT/pages/kubernetes/eks.adoc index 75bc35d32..9dbd6935f 100644 --- a/modules/ROOT/pages/kubernetes/eks.adoc +++ b/modules/ROOT/pages/kubernetes/eks.adoc @@ -1,6 +1,6 @@ = Amazon Elastic Kubernetes Service (EKS) -https://aws.amazon.com/eks/ +https://aws.amazon.com/eks/[https://aws.amazon.com/eks/{external-link-icon}^] Please make sure that you have a default StorageClass in your cluster, so that PVCs will be provisioned. diff --git a/modules/ROOT/pages/kubernetes/gke.adoc b/modules/ROOT/pages/kubernetes/gke.adoc index 97169d4ee..841f5196b 100644 --- a/modules/ROOT/pages/kubernetes/gke.adoc +++ b/modules/ROOT/pages/kubernetes/gke.adoc @@ -1,6 +1,6 @@ = Google Kubernetes Engine (GKE) -https://cloud.google.com/kubernetes-engine +https://cloud.google.com/kubernetes-engine[https://cloud.google.com/kubernetes-engine{external-link-icon}^] Autopilot clusters are not suported, as the xref:secret-operator:index.adoc[secret-operator] requires special privileges that are not granted in Autopilot clusters. diff --git a/modules/ROOT/pages/kubernetes/huawei-cloud.adoc b/modules/ROOT/pages/kubernetes/huawei-cloud.adoc index 72505f93a..ba529b9b4 100644 --- a/modules/ROOT/pages/kubernetes/huawei-cloud.adoc +++ b/modules/ROOT/pages/kubernetes/huawei-cloud.adoc @@ -1,6 +1,6 @@ = Huawei Cloud Container Engine (CCE) -https://www.huaweicloud.com/intl/en-us/product/cce.html +https://www.huaweicloud.com/intl/en-us/product/cce.html[https://www.huaweicloud.com/intl/en-us/product/cce.html{external-link-icon}^] Huawei Cloud uses a non-standard Kubelet state directory. For this reason the secret-operator and listener-operator on Huawei Cloud require special handling. diff --git a/modules/ROOT/pages/kubernetes/ibm-cloud.adoc b/modules/ROOT/pages/kubernetes/ibm-cloud.adoc index 660a3cbea..9685cee0e 100644 --- a/modules/ROOT/pages/kubernetes/ibm-cloud.adoc +++ b/modules/ROOT/pages/kubernetes/ibm-cloud.adoc @@ -1,6 +1,6 @@ = IBM Cloud Kubernetes Service -https://www.ibm.com/products/kubernetes-service +https://www.ibm.com/products/kubernetes-service[https://www.ibm.com/products/kubernetes-service{external-link-icon}^] IBM Cloud Kubernetes Service uses a non-standard Kubelet state directory. For this reason the secret-operator and listener-operator on IBM Cloud Kubernetes Service require special handling. diff --git a/modules/ROOT/pages/kubernetes/index.adoc b/modules/ROOT/pages/kubernetes/index.adoc index a7d904186..12d02d425 100644 --- a/modules/ROOT/pages/kubernetes/index.adoc +++ b/modules/ROOT/pages/kubernetes/index.adoc @@ -28,12 +28,12 @@ Stackable's control plane is built around Kubernetes, and we'll give some brief === Installing kubectl -Stackable operators and their services are managed by applying manifest files to the Kubernetes cluster. For this purpose, you need to have the `kubectl` tool installed. Follow the instructions https://kubernetes.io/docs/tasks/tools/#kubectl[here] for your platform. +Stackable operators and their services are managed by applying manifest files to the Kubernetes cluster. For this purpose, you need to have the `kubectl` tool installed. Follow the instructions https://kubernetes.io/docs/tasks/tools/#kubectl[here{external-link-icon}^] for your platform. === Installing Kubernetes using Kind Kind offers a very quick and easy way to bootstrap your Kubernetes infrastructure in Docker. 
The big advantage of this is that you can simply remove the Docker containers when you're finished and clean up easily, making it great for testing and development. -If you don't already have Docker then visit https://docs.docker.com/get-docker/[Docker Website] to find out how to install Docker. Kind is a single executable that performs the tasks of installing and configuring Kubernetes for you within Docker containers. The https://kind.sigs.k8s.io/docs/user/quick-start/[Kind Website] has instructions for installing Kind on your system. +If you don't already have Docker then visit https://docs.docker.com/get-docker/[Docker Website{external-link-icon}^] to find out how to install Docker. Kind is a single executable that performs the tasks of installing and configuring Kubernetes for you within Docker containers. The https://kind.sigs.k8s.io/docs/user/quick-start/[Kind Website{external-link-icon}^] has instructions for installing Kind on your system. Once you have both of these installed then you can build a Kubernetes cluster in Docker. We're going to create a simple, single node cluster to test out Stackable, with the one node hosting both the Kubernetes control plane and the Stackable services. @@ -80,5 +80,5 @@ Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-syste === Configuring the cluster domain -In case a non-default cluster domain is used as described in https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/[Customizing DNS Service], +In case a non-default cluster domain is used as described in https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/[Customizing DNS Service{external-link-icon}^], Stackable operators can be configured accordingly. This is described in detail in the xref:guides:kubernetes-cluster-domain.adoc[Configuring the Kubernetes cluster domain] guide. diff --git a/modules/ROOT/pages/kubernetes/ionos-managed-k8s.adoc b/modules/ROOT/pages/kubernetes/ionos-managed-k8s.adoc index 9281a18e8..76026d515 100644 --- a/modules/ROOT/pages/kubernetes/ionos-managed-k8s.adoc +++ b/modules/ROOT/pages/kubernetes/ionos-managed-k8s.adoc @@ -1,6 +1,6 @@ = IONOS managed Kubernetes -https://cloud.ionos.com/managed/kubernetes +https://cloud.ionos.com/managed/kubernetes[https://cloud.ionos.com/managed/kubernetes{external-link-icon}^] TIP: IONOS also offers a xref:kubernetes/ionos-managed-stackable.adoc[managed Stackable service], which simplifies the usage of Stackable. 
diff --git a/modules/ROOT/pages/kubernetes/ionos-managed-stackable.adoc b/modules/ROOT/pages/kubernetes/ionos-managed-stackable.adoc index ae778d5f4..45b0a8f7f 100644 --- a/modules/ROOT/pages/kubernetes/ionos-managed-stackable.adoc +++ b/modules/ROOT/pages/kubernetes/ionos-managed-stackable.adoc @@ -1,5 +1,5 @@ = IONOS managed Stackable -https://cloud.ionos.com/managed/managed-stackable +https://cloud.ionos.com/managed/managed-stackable[https://cloud.ionos.com/managed/managed-stackable{external-link-icon}^] > The Managed Stackable Data Platform from IONOS Cloud is designed to enable you to work with maximum efficiency: Simply select the appropriate data management tools for your respective purpose, build individual stacks for yourself or your customers and make all your data productively usable as quickly as possible diff --git a/modules/ROOT/pages/kubernetes/kind.adoc b/modules/ROOT/pages/kubernetes/kind.adoc index c73db051f..6b00adc46 100644 --- a/modules/ROOT/pages/kubernetes/kind.adoc +++ b/modules/ROOT/pages/kubernetes/kind.adoc @@ -1,3 +1,3 @@ = kind -https://kind.sigs.k8s.io/ +https://kind.sigs.k8s.io/[https://kind.sigs.k8s.io/{external-link-icon}^] diff --git a/modules/ROOT/pages/kubernetes/microk8s.adoc b/modules/ROOT/pages/kubernetes/microk8s.adoc index 90c424ee9..2e143cc25 100644 --- a/modules/ROOT/pages/kubernetes/microk8s.adoc +++ b/modules/ROOT/pages/kubernetes/microk8s.adoc @@ -1,6 +1,6 @@ = Microk8s -https://microk8s.io/ +https://microk8s.io/[https://microk8s.io/{external-link-icon}^] Microk8s uses a non-standard Kubelet state directory. For this reason the secret-operator and listener-operator on Microk8s require special handling. diff --git a/modules/ROOT/pages/kubernetes/oke.adoc b/modules/ROOT/pages/kubernetes/oke.adoc index ba8830e04..2053a33f1 100644 --- a/modules/ROOT/pages/kubernetes/oke.adoc +++ b/modules/ROOT/pages/kubernetes/oke.adoc @@ -1,3 +1,3 @@ = Oracle Kubernetes Engine (OKE) -https://www.oracle.com/cloud/cloud-native/kubernetes-engine/ +https://www.oracle.com/cloud/cloud-native/kubernetes-engine/[https://www.oracle.com/cloud/cloud-native/kubernetes-engine/{external-link-icon}^] diff --git a/modules/ROOT/pages/kubernetes/openshift.adoc b/modules/ROOT/pages/kubernetes/openshift.adoc index 37df71deb..f5a726751 100644 --- a/modules/ROOT/pages/kubernetes/openshift.adoc +++ b/modules/ROOT/pages/kubernetes/openshift.adoc @@ -1,6 +1,6 @@ = Red Hat OpenShift -https://www.redhat.com/en/technologies/cloud-computing/openshift +https://www.redhat.com/en/technologies/cloud-computing/openshift[https://www.redhat.com/en/technologies/cloud-computing/openshift{external-link-icon}^] SDP operators are certified for the OpenShift platform and can be installed from the OperatorHub. @@ -8,7 +8,7 @@ IMPORTANT: OpenShift installations with FIPS mode enabled are not supported. Thi == Customizing operator installations -As described in the https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/subscription-config.md[Openshift Subscription documentation] you can configure the deployed operators. +As described in the https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/subscription-config.md[OpenShift Subscription documentation{external-link-icon}^] you can configure the deployed operators. E.g. depending on the cluster size, you may need to customize the resources requested by the SDP operator containers. This is possible when installing the operators via a Subscription CustomResource.
@@ -60,4 +60,3 @@ Starting with the release version `24.7.0`, all products run with the `nonroot-v Operators (with two exceptions) don't request a specific SCC to run with. Usually OpenShift will select the `restricted` or `restricted-v2` SCC unless the cluster admins have specifically assigned a different one to the namespace where the operators are running. The two exceptions are the secret and the listener operators. These need additional permissions not available in the `restricted` SCCs to propagate volume mounts to the requesting pods. - diff --git a/modules/ROOT/pages/kubernetes/ovh-mks.adoc b/modules/ROOT/pages/kubernetes/ovh-mks.adoc index 157542908..226cdb383 100644 --- a/modules/ROOT/pages/kubernetes/ovh-mks.adoc +++ b/modules/ROOT/pages/kubernetes/ovh-mks.adoc @@ -1,6 +1,6 @@ = OVH Managed Kubernetes Service (MKS) -https://www.ovhcloud.com/en/public-cloud/kubernetes/ +https://www.ovhcloud.com/en/public-cloud/kubernetes/[https://www.ovhcloud.com/en/public-cloud/kubernetes/{external-link-icon}^] The Stackable Data Platform should install normally on the OVH MKS out of the box. diff --git a/modules/ROOT/pages/kubernetes/plusserver.adoc b/modules/ROOT/pages/kubernetes/plusserver.adoc index b3b869718..0174c3473 100644 --- a/modules/ROOT/pages/kubernetes/plusserver.adoc +++ b/modules/ROOT/pages/kubernetes/plusserver.adoc @@ -1,3 +1,3 @@ = plusserver Kubernetes as a Service -https://www.plusserver.com/en/product/managed-kubernetes/ +https://www.plusserver.com/en/product/managed-kubernetes/[https://www.plusserver.com/en/product/managed-kubernetes/{external-link-icon}^] diff --git a/modules/ROOT/pages/kubernetes/ske.adoc b/modules/ROOT/pages/kubernetes/ske.adoc index f061d6829..13f945303 100644 --- a/modules/ROOT/pages/kubernetes/ske.adoc +++ b/modules/ROOT/pages/kubernetes/ske.adoc @@ -1,6 +1,6 @@ = STACKIT Kubernetes Engine (SKE) -https://www.stackit.de/de/produkt/stackit-kubernetes-engine/ +https://www.stackit.de/de/produkt/stackit-kubernetes-engine/[https://www.stackit.de/de/produkt/stackit-kubernetes-engine/{external-link-icon}^] SKE clusters by default have no public IPs assigned to the Kubernetes nodes. As of 2024-06-13 marking the nodes as public during the Kubernetes cluster creation is not supported. 
diff --git a/modules/ROOT/pages/kubernetes/suse-k3s.adoc b/modules/ROOT/pages/kubernetes/suse-k3s.adoc index e0345289f..ea1098c01 100644 --- a/modules/ROOT/pages/kubernetes/suse-k3s.adoc +++ b/modules/ROOT/pages/kubernetes/suse-k3s.adoc @@ -1,3 +1,3 @@ = SUSE K3S -https://www.suse.com/products/k3s/ +https://www.suse.com/products/k3s/[https://www.suse.com/products/k3s/{external-link-icon}^] diff --git a/modules/ROOT/pages/kubernetes/suse-rancher.adoc b/modules/ROOT/pages/kubernetes/suse-rancher.adoc index 380bf6f95..b3daa11f7 100644 --- a/modules/ROOT/pages/kubernetes/suse-rancher.adoc +++ b/modules/ROOT/pages/kubernetes/suse-rancher.adoc @@ -1,3 +1,3 @@ = SUSE Rancher -https://www.rancher.com/products/rancher +https://www.rancher.com/products/rancher[https://www.rancher.com/products/rancher{external-link-icon}^] diff --git a/modules/ROOT/pages/kubernetes/vmware_tanzu.adoc b/modules/ROOT/pages/kubernetes/vmware_tanzu.adoc index 962b1aca0..5b374fea0 100644 --- a/modules/ROOT/pages/kubernetes/vmware_tanzu.adoc +++ b/modules/ROOT/pages/kubernetes/vmware_tanzu.adoc @@ -1,6 +1,6 @@ = VMware Tanzu -https://www.vmware.com/products/app-platform/tanzu +https://www.vmware.com/products/app-platform/tanzu[https://www.vmware.com/products/app-platform/tanzu{external-link-icon}^] VMware Tanzu uses a non-standard Kubelet state directory. For this reason the secret-operator and listener-operator on VMware Tanzu require special handling. diff --git a/modules/ROOT/pages/licenses.adoc b/modules/ROOT/pages/licenses.adoc index 79b478f69..d98fd2c76 100644 --- a/modules/ROOT/pages/licenses.adoc +++ b/modules/ROOT/pages/licenses.adoc @@ -7,31 +7,31 @@ The Stackable Data Platform is open source, and the source code of all the compo Product Operators -* https://github.com/stackabletech/airflow-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache Airflow. -* https://github.com/stackabletech/druid-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache Druid. -* https://github.com/stackabletech/hbase-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache HBase. -* https://github.com/stackabletech/hdfs-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache Hadoop HDFS. -* https://github.com/stackabletech/hive-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache Hive. -* https://github.com/stackabletech/kafka-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache Kafka. -* https://github.com/stackabletech/nifi-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache NiFi. -* https://github.com/stackabletech/spark-k8s-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache Spark. -* https://github.com/stackabletech/trino-operator/blob/main/LICENSE[License] for the Stackable Operator for Trino. -* https://github.com/stackabletech/zookeeper-operator/blob/main/LICENSE[License] for the Stackable Operator for Apache ZooKeeper. -* https://github.com/stackabletech/opa-operator/blob/main/LICENSE[License] for the Stackable Operator for OpenPolicyAgent. +* https://github.com/stackabletech/airflow-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache Airflow. +* https://github.com/stackabletech/druid-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache Druid. +* https://github.com/stackabletech/hbase-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache HBase. 
+* https://github.com/stackabletech/hdfs-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache Hadoop HDFS. +* https://github.com/stackabletech/hive-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache Hive. +* https://github.com/stackabletech/kafka-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache Kafka. +* https://github.com/stackabletech/nifi-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache NiFi. +* https://github.com/stackabletech/spark-k8s-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache Spark. +* https://github.com/stackabletech/trino-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Trino. +* https://github.com/stackabletech/zookeeper-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for Apache ZooKeeper. +* https://github.com/stackabletech/opa-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Operator for OpenPolicyAgent. Additional Stackable Operators -* https://github.com/stackabletech/commons-operator/blob/main/LICENSE[License] for the Stackable Commons Operator. -* https://github.com/stackabletech/secret-operator/blob/main/LICENSE[License] for the Stackable Secret Operator. -* https://github.com/stackabletech/listener-operator/blob/main/LICENSE[License] for the Stackable Listener Operator. +* https://github.com/stackabletech/commons-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Commons Operator. +* https://github.com/stackabletech/secret-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Secret Operator. +* https://github.com/stackabletech/listener-operator/blob/main/LICENSE[License{external-link-icon}^] for the Stackable Listener Operator. == stackablectl -https://github.com/stackabletech/stackable-cockpit/blob/main/LICENSE[License] for stackablectl. +https://github.com/stackabletech/stackable-cockpit/blob/main/LICENSE[License{external-link-icon}^] for stackablectl. == Product images -https://github.com/stackabletech/docker-images/blob/main/LICENSE[License] for the product Docker images. +https://github.com/stackabletech/docker-images/blob/main/LICENSE[License{external-link-icon}^] for the product Docker images. -The Docker images are built on the https://catalog.redhat.com/software/containers/ubi9-minimal/61832888c0d15aff4912fe0d[Red Hat ubi9-minimal base image]. -It is https://www.redhat.com/licenses/EULA_Red_Hat_Universal_Base_Image_English_20190422.pdf[licensed seperately]. +The Docker images are built on the https://catalog.redhat.com/software/containers/ubi9-minimal/61832888c0d15aff4912fe0d[Red Hat ubi9-minimal base image{external-link-icon}^]. +It is https://www.redhat.com/licenses/EULA_Red_Hat_Universal_Base_Image_English_20190422.pdf[licensed separately{external-link-icon}^]. diff --git a/modules/ROOT/pages/policies.adoc b/modules/ROOT/pages/policies.adoc index ceaf2af27..9512419f1 100644 --- a/modules/ROOT/pages/policies.adoc +++ b/modules/ROOT/pages/policies.adoc @@ -25,13 +25,13 @@ These policies are from *July 2023*. NOTE: This policy concerns releases of our platform as a whole and how long and to which extent we support each version. We do releases of our Stackable Data Platform. -These releases get a name based on the year and month they have been released in (e.g.
`23.4`, `23.7`, also called https://calver.org/[CalVer]). This name does not follow Semantic Versioning (https://semver.org/[SemVer]). We may release patches for a release, which then follow the PATCH naming semantics of SemVer (e.g. `23.4.1`) or the _Micro_ name from CalVer. See below for our policy on patches for the SDP. +These releases get a name based on the year and month they have been released in (e.g. `23.4`, `23.7`, also called https://calver.org/[CalVer{external-link-icon}^]). This name does not follow Semantic Versioning (https://semver.org/[SemVer{external-link-icon}^]). We may release patches for a release, which then follow the PATCH naming semantics of SemVer (e.g. `23.4.1`) or the _Micro_ name from CalVer. See below for our policy on patches for the SDP. We support an SDP release for a specific amount of time after its initial release. An SDP release contains our operators and other code developed at Stackable as well as the product docker images. -TIP: Our policy is inspired by the https://kubernetes.io/releases/patch-releases/[Kubernetes] and the https://access.redhat.com/support/policy/updates/openshift#ocp4[OpenShift] policies. +TIP: Our policy is inspired by the https://kubernetes.io/releases/patch-releases/[Kubernetes{external-link-icon}^] and the https://access.redhat.com/support/policy/updates/openshift#ocp4[OpenShift{external-link-icon}^] policies. === Full support phase @@ -67,7 +67,7 @@ IMPORTANT: As of January 2024 all our CRDs are versioned as `alpha1`. We will st CustomResourceDefinitions at Stackable are versioned. -Our policies around CRD versioning are inspired by the https://kubernetes.io/docs/reference/using-api/deprecation-policy/[Kubernetes Deprecation Policy]. +Our policies around CRD versioning are inspired by the https://kubernetes.io/docs/reference/using-api/deprecation-policy/[Kubernetes Deprecation Policy{external-link-icon}^]. Specifically we try to follow these rules: @@ -141,7 +141,7 @@ Stackable will analyze published security vulnerabilities (e.g. CVEs but other s We take various sources into account when assigning a criticality. Among those sources is the NVD database, but we place higher value on the self-assessments by the projects themselves, and we will additionally evaluate vulnerabilities in the context of how they are used in the Stackable Data Platform. -We will then assign a criticality to each vulnerability according to similar rating categories that https://access.redhat.com/security/updates/classification[RedHat has established]: +We will then assign a criticality to each vulnerability according to similar rating categories that https://access.redhat.com/security/updates/classification[Red Hat has established{external-link-icon}^]: Critical:: This rating is given to flaws that could be easily exploited by a remote unauthenticated attacker and lead to system compromise (arbitrary code execution) without requiring user interaction. Flaws that require authentication, local or physical access to a system, or an unlikely configuration are not classified as Critical impact. These are the types of vulnerabilities that can be exploited by worms. diff --git a/modules/ROOT/pages/product-information.adoc b/modules/ROOT/pages/product-information.adoc index 24bc26cd8..4824b09d2 100644 --- a/modules/ROOT/pages/product-information.adoc +++ b/modules/ROOT/pages/product-information.adoc @@ -55,14 +55,14 @@ Stackable components. === Operators and products All operators are supplied in container images.
The products are also deployed in container images. The docker images -are available for download here: https://oci.stackable.tech/[]. Information on how to browse the images can be found xref:contributor:project-overview.adoc#docker-images[here]. +are available for download here: https://oci.stackable.tech/[https://oci.stackable.tech/{external-link-icon}^]. Information on how to browse the images can be found xref:contributor:project-overview.adoc#docker-images[here]. -Stackable supports installing the Operators via https://helm.sh/[Helm] or with <>. Every Operator includes +Stackable supports installing the Operators via https://helm.sh/[Helm{external-link-icon}^] or with <>. Every Operator includes installation instructions in the Getting started guide. ==== Helm Charts -The Helm Charts can be found here: https://oci.stackable.tech/api/v2.0/projects/sdp-charts[] Using the Helm Charts +The Helm Charts can be found here: https://oci.stackable.tech/api/v2.0/projects/sdp-charts[https://oci.stackable.tech/api/v2.0/projects/sdp-charts{external-link-icon}^]. Using the Helm Charts requires Helm version 3 or above. Information on how to browse the OCI repository for Helm Charts is described xref:contributor:project-overview.adoc#product-artifacts[here]. diff --git a/modules/ROOT/pages/quickstart.adoc b/modules/ROOT/pages/quickstart.adoc index 40e5887a8..f4ee37fd0 100644 --- a/modules/ROOT/pages/quickstart.adoc +++ b/modules/ROOT/pages/quickstart.adoc @@ -1,5 +1,5 @@ = Quickstart -:latest-release: https://github.com/stackabletech/stackable-cockpit/releases/tag/stackablectl-24.11.1 +:latest-release: https://github.com/stackabletech/stackable-cockpit/releases/tag/stackablectl-1.1.0 :cockpit-releases: https://github.com/stackabletech/stackable-cockpit/releases :description: Quickstart guide for Stackable: Install stackablectl, set up a demo, and connect to services like Superset and Trino with easy commands and links. @@ -12,14 +12,14 @@ Install `stackablectl`, the Stackable CLI utility. === Installation on Linux -Download the `stackablectl-x86_64-unknown-linux-gnu` binary file from the link:{latest-release}[latest release], then +Download the `stackablectl-x86_64-unknown-linux-gnu` binary file from the link:{latest-release}[latest release{external-link-icon}^], then rename the file to `stackablectl`.
You can also use the following command: [source,console] ---- -wget -O stackablectl https://github.com/stackabletech/stackable-cockpit/releases/download/stackablectl-24.11.1/stackablectl-x86_64-unknown-linux-gnu +wget -O stackablectl https://github.com/stackabletech/stackable-cockpit/releases/download/stackablectl-1.1.0/stackablectl-x86_64-unknown-linux-gnu # or -curl -L -o stackablectl https://github.com/stackabletech/stackable-cockpit/releases/download/stackablectl-24.11.1/stackablectl-x86_64-unknown-linux-gnu +curl -L -o stackablectl https://github.com/stackabletech/stackable-cockpit/releases/download/stackablectl-1.1.0/stackablectl-x86_64-unknown-linux-gnu ---- Mark the binary as executable: @@ -38,7 +38,7 @@ See the xref:management:stackablectl:installation.adoc[guide] for detailed infor == Install the Taxi data demo The xref:demos:trino-taxi-data.adoc[`trino-taxi-data`] Demo installs the latest Stackable platform release and a -visualization of https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page[New York City Taxi Data] using Trino and +visualization of https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page[New York City Taxi Data{external-link-icon}^] using Trino and Superset: [source,console] diff --git a/modules/ROOT/partials/release-notes/release-template.adoc b/modules/ROOT/partials/release-notes/release-template.adoc index 200ca0711..a0dc6bea6 100644 --- a/modules/ROOT/partials/release-notes/release-template.adoc +++ b/modules/ROOT/partials/release-notes/release-template.adoc @@ -35,7 +35,7 @@ The following product versions are deprecated and will be removed in a later rel ===== Removed versions -The following product versions are no longer supported (although images for released product versions remain available https://oci.stackable.tech/[here,window=_blank]. Information on how to browse the registry can be found xref:contributor:project-overview.adoc#docker-images[here,window=_blank].): +The following product versions are no longer supported (although images for released product versions remain available https://oci.stackable.tech/[here{external-link-icon}^]. Information on how to browse the registry can be found xref:contributor:project-overview.adoc#docker-images[here,window=_blank].): * ... diff --git a/modules/concepts/pages/authentication.adoc b/modules/concepts/pages/authentication.adoc index f345b2cf0..458acf6ff 100644 --- a/modules/concepts/pages/authentication.adoc +++ b/modules/concepts/pages/authentication.adoc @@ -38,12 +38,12 @@ In a diagram it would look like this: image::image$authentication-overview.drawio.svg[] -NOTE: Learn more in the xref:tutorials:authentication_with_openldap.adoc[OpenLDAP tutorial] and get a full overview of all the properties in the {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/#spec-provider-ldap[AuthenticationClass LDAP provider CRD reference]. +NOTE: Learn more in the xref:tutorials:authentication_with_openldap.adoc[OpenLDAP tutorial] and get a full overview of all the properties in the {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/#spec-provider-ldap[AuthenticationClass LDAP provider CRD reference{external-link-icon}^]. [#OIDC] === OpenID Connect -An OIDC provider like {keycloak}[Keycloak] could be configured as follows: +An OIDC provider like {keycloak}[Keycloak{external-link-icon}^] could be configured as follows: [source,yaml] ---- @@ -59,7 +59,7 @@ include::example$authenticationclass-keycloak.yaml[] <7> Optionally enable TLS and configure verification. 
When present, connections to the idP will use `https://` instead of `http://`. See xref:tls-server-verification.adoc[]. <8> Trust certificates signed by commonly trusted Certificate Authorities. -NOTE: Get a full overview of all the properties in the {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/#spec-provider-oidc[AuthenticationClass OIDC provider CRD reference]. +NOTE: Get a full overview of all the properties in the {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/#spec-provider-oidc[AuthenticationClass OIDC provider CRD reference{external-link-icon}^]. [#tls] === TLS @@ -113,4 +113,4 @@ include::example$authenticationclass-static-secret.yaml[] == Further reading * xref:tutorials:authentication_with_openldap.adoc[] tutorial -* {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/[AuthenticationClass CRD reference] +* {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/[AuthenticationClass CRD reference{external-link-icon}^] diff --git a/modules/concepts/pages/container-images.adoc b/modules/concepts/pages/container-images.adoc index 7415ddfe4..2520367dc 100644 --- a/modules/concepts/pages/container-images.adoc +++ b/modules/concepts/pages/container-images.adoc @@ -14,11 +14,11 @@ usually these products also only provide a single artifact that is used to run a Product images are built for xref:operators:supported_versions.adoc[] of products (Not all product versions are supported by all releases). -All images are stored in the {stackable-image-registry}[Stackable image registry]. +All images are stored in the {stackable-image-registry}[Stackable image registry{external-link-icon}^]. == Image structure and contents -All our images are built using the {ubi}[Red Hat Universal Base Image (UBI)] minimal as a base image. +All our images are built using the {ubi}[Red Hat Universal Base Image (UBI){external-link-icon}^] minimal as a base image. This is a requirement for the platform to achieve xref:ROOT:kubernetes/openshift.adoc[OpenShift] certification and be available in the RedHat Certified Operator catalog. The base image also contains only minimal dependencies and is vetted by RedHat. @@ -27,7 +27,7 @@ The file system structure is the same across all images, which makes the images Products are either built from source code or official artifacts are used. Beyond that, Stackable also adds plugins or extensions that are not shipped by default, to support features such as xref:operators:monitoring.adoc[] or xref:opa:index.adoc[OpenPolicyAgent] support. -Since Stackable release 24.3, {stackable-sboms}[SBOMs] for all images are provided. +Since Stackable release 24.3, {stackable-sboms}[SBOMs{external-link-icon}^] for all images are provided. Have a look at the xref:guides:viewing-and-verifying-sboms.adoc[] guide to learn how to use SBOMs. [#multi-platform-support] diff --git a/modules/concepts/pages/multi-platform-support.adoc b/modules/concepts/pages/multi-platform-support.adoc index 146fac564..dc0d5d6c6 100644 --- a/modules/concepts/pages/multi-platform-support.adoc +++ b/modules/concepts/pages/multi-platform-support.adoc @@ -5,7 +5,7 @@ WARNING: This status is still experimental, as we work to fine-tune the necessary workflows. -Starting with the Stackable Data Platform release 24.7, all images are {multi-platform-images}[multi-platform images], supporting the AMD64 and ARM64 architectures. 
+Starting with the Stackable Data Platform release 24.7, all images are {multi-platform-images}[multi-platform images{external-link-icon}^], supporting the AMD64 and ARM64 architectures. Each product image is built for each platform with an architecture-specific tag. For example, the Airflow images with tags `airflow:2.9.2-stackable24.7.0-amd64` and `airflow:2.9.2-stackable24.7.0-arm64` are bundled in the manifest list `airflow:2.9.2-stackable24.7.0` using an automated workflow. The appropriate image will then be transparently selected for the active platform/architecture. diff --git a/modules/concepts/pages/observability/containerdebug.adoc b/modules/concepts/pages/observability/containerdebug.adoc index f6ee03708..540522507 100644 --- a/modules/concepts/pages/observability/containerdebug.adoc +++ b/modules/concepts/pages/observability/containerdebug.adoc @@ -3,7 +3,7 @@ All Stackable-managed products regularly log information about their operating environment, such as available disk interfaces and network interfaces. -This logging is performed by our https://github.com/stackabletech/containerdebug[containerdebug] tool. +This logging is performed by our https://github.com/stackabletech/containerdebug[containerdebug{external-link-icon}^] tool. NOTE: This tool is intended as a debugging aid, and the particular information or format should not be considered stable. diff --git a/modules/concepts/pages/observability/labels.adoc b/modules/concepts/pages/observability/labels.adoc index e3c8aac88..cdd5609e7 100644 --- a/modules/concepts/pages/observability/labels.adoc +++ b/modules/concepts/pages/observability/labels.adoc @@ -9,7 +9,7 @@ The xref:management:stackablectl:index.adoc[`stackablectl`] tool, the cockpit, a == Resource labels added by the operators Every resource created by a Stackable operator has a common set of labels applied. -Some of these labels are {common-labels}[recommended] to use by the Kubernetes documentation. +Some of these labels are {common-labels}[recommended{external-link-icon}^] to use by the Kubernetes documentation. The following labels are added to resources created by our operators: * `app.kubernetes.io/name` - The name of the product, i.e. `druid` or `zookeeper`. diff --git a/modules/concepts/pages/observability/logging.adoc b/modules/concepts/pages/observability/logging.adoc index dc3b5249e..30f4bf186 100644 --- a/modules/concepts/pages/observability/logging.adoc +++ b/modules/concepts/pages/observability/logging.adoc @@ -35,7 +35,7 @@ For advanced log configurations, supplying custom product specific log configura == Architecture Below you can see the overall architecture using ZooKeeper as an example. -Stackable uses {vector}[Vector] for log aggregation and any of the supported {vector-sinks}[sinks] can be used to persist the logs. +Stackable uses {vector}[Vector{external-link-icon}^] for log aggregation and any of the supported {vector-sinks}[sinks{external-link-icon}^] can be used to persist the logs. image::logging_architecture.drawio.svg[An architecture diagram showing the Kubernetes resources involved in the logging stack] @@ -44,12 +44,12 @@ image::logging_architecture.drawio.svg[An architecture diagram showing the Kuber You configure your logging settings in the Stacklet definition (ZookeeperCluster in this example), seen in the top left in the diagram (see the <> section below). The operator reads this resource and creates the appropriate log configuration files in the ConfigMap which also holds other product configuration. 
The ConfigMap is then mounted into the containers. -The ZooKeeper Pod has three containers: The `prepare` sidecar container, the `zookeeper` container and the `vector` {vector-sidecar}[sidecar container]. -All logs get written into a shared mounted directory, from which the Vector agent reads them and sends them to the Vector {vector-aggregator}[aggregator]. +The ZooKeeper Pod has three containers: The `prepare` sidecar container, the `zookeeper` container and the `vector` {vector-sidecar}[sidecar container{external-link-icon}^]. +All logs get written into a shared mounted directory, from which the Vector agent reads them and sends them to the Vector {vector-aggregator}[aggregator{external-link-icon}^]. === Aggregator and sinks -The aggregator is configured to use one or multiple {vector-sinks}[sinks] (for example OpenSearch, Elasticsearch), it sends all logs to all sinks. +The aggregator is configured to use one or multiple {vector-sinks}[sinks{external-link-icon}^] (for example OpenSearch, Elasticsearch), it sends all logs to all sinks. If a sink is unavailable, the aggregator buffers the log messages. It is also the single point where the sinks are configured, so the sinks are decoupled from the Stacklet definitions and only need to be configured in this single location for the whole platform. @@ -122,8 +122,8 @@ Following the Stackable xref:stacklet.adoc#roles[roles] and xref::stacklet.adoc# === Configuring the Aggregator -Follow the {vector-agg-install}[installation instructions] for the aggregator. -Configure a {vector-source-vector}[Vector source] at adress `0.0.0.0:6000` and configure sinks and additional settings according to your needs. +Follow the {vector-agg-install}[installation instructions{external-link-icon}^] for the aggregator. +Configure a {vector-source-vector}[Vector source{external-link-icon}^] at address `0.0.0.0:6000` and configure sinks and additional settings according to your needs. === Configuring the aggregator location @@ -150,4 +150,4 @@ logging: == Further reading To get some hands on experience and see logging in action, try out the xref:demos:logging.adoc[logging demo] or follow the xref:tutorials:logging-vector-aggregator.adoc[logging tutorial]. -The Vector documentation contains more information about the {vector-topology-centralized}[deployment topology] and {vector-sinks}[sinks] that can be used. +The Vector documentation contains more information about the {vector-topology-centralized}[deployment topology{external-link-icon}^] and {vector-sinks}[sinks{external-link-icon}^] that can be used. diff --git a/modules/concepts/pages/opa.adoc b/modules/concepts/pages/opa.adoc index a67968109..6b275f53d 100644 --- a/modules/concepts/pages/opa.adoc +++ b/modules/concepts/pages/opa.adoc @@ -4,14 +4,14 @@ :opa-docs: https://www.openpolicyagent.org/docs/latest/#overview :description: Stackable Data Platform uses OpenPolicyAgent (OPA) for policy-based access control with Rego rules, ensuring efficient, local policy evaluation across nodes. -The Stackable Data Platform offers policy-based access control via the {opa}[OpenPolicyAgent] (OPA) operator. -Authorization policies are defined in the {rego}[Rego] language, divided into packages and supplied via ConfigMaps. +The Stackable Data Platform offers policy-based access control via the {opa}[OpenPolicyAgent{external-link-icon}^] (OPA) operator. +Authorization policies are defined in the {rego}[Rego{external-link-icon}^] language, divided into packages and supplied via ConfigMaps.
Every node is running an OPA instance for fast policy evaluation and products are connected to OPA with the xref:service_discovery.adoc[service discovery] mechanism. == What is OPA? // What's OPA? What are Rego Rules? OPA is an open-source, general-purpose policy engine. -It supports a high-level declarative language called {rego}[Rego]. +It supports a high-level declarative language called {rego}[Rego{external-link-icon}^]. Rego enables you to specify complex policies as code and transfer the decision-making processes from your software to OPA. Policies written in Rego are called _Rego rules_. @@ -72,7 +72,7 @@ data: The combination of arbitrary input data and the Rego rules enables you to specify and enforce almost any kind of policies. You can define powerful policies for e.g. user access for database tables, schemas, columns etc. You can enforce local network traffic, access time periods and many more. -See the {opa-docs}[OPA documentation] for further examples. +See the {opa-docs}[OPA documentation{external-link-icon}^] for further examples. === Connect a product diff --git a/modules/concepts/pages/operations/graceful_shutdown.adoc b/modules/concepts/pages/operations/graceful_shutdown.adoc index b09a60b1e..837bb1472 100644 --- a/modules/concepts/pages/operations/graceful_shutdown.adoc +++ b/modules/concepts/pages/operations/graceful_shutdown.adoc @@ -7,7 +7,7 @@ This could include closing open file handles, updating the instance state in the This contrasts with an uncontrolled shutdown where a process is terminated immediately and is unable to perform any of its normal shutdown activities. In the event that a service instance is unable to shut down in a reasonable amount of time, a timeout is set after which the process will be forcibly terminated to prevent a stuck server from remaining in the shutting down state indefinitely. -The article https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace[Kubernetes best practices: terminating with grace] describes how a graceful shutdown on Kubernetes works in detail. +The article https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace[Kubernetes best practices: terminating with grace{external-link-icon}^] describes how a graceful shutdown on Kubernetes works in detail. Our operators add the needed shutdown mechanism for their products that support graceful shutdown. They also configure a sensible amount of time Pods are granted to properly shut down without disrupting the availability of the product. diff --git a/modules/concepts/pages/operations/index.adoc b/modules/concepts/pages/operations/index.adoc index 949c75b6f..bcdab9aab 100644 --- a/modules/concepts/pages/operations/index.adoc +++ b/modules/concepts/pages/operations/index.adoc @@ -21,7 +21,7 @@ Make sure to go through the following checklist to achieve the maximum level of Details covering the graceful shutdown mechanism are described in xref:operations/graceful_shutdown.adoc[] as well as the actual operator documentation. 4. Spread workload across multiple Kubernetes nodes, racks, datacenter rooms or datacenters to guarantee availability in the case of e.g. power outages or fire in parts of the datacenter. 
All of this is supported by - configuring an https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[antiAffinity] as documented in + configuring an https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[antiAffinity{external-link-icon}^] as documented in xref:operations/pod_placement.adoc[] 5. Reduce the frequency of disruptions: Although we try our best to reduce the impact of disruptions, some tools simply don't support HA setups. @@ -42,7 +42,7 @@ You can achieve this using the following methods: 1. *Compute resources*: You can configure the available resource every product has using xref:concepts:resources.adoc[]. The defaults are very restrained, as you should be able to spin up multiple products running on your Laptop. 2. *Autoscaling*: Although not supported by the platform natively yet, you can use - https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale[HorizontalPodAutoscaler] to autoscale the number of Pods running for a given rolegroup dynamically based upon resource usage. + https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale[HorizontalPodAutoscaler{external-link-icon}^] to autoscale the number of Pods running for a given rolegroup dynamically based upon resource usage. To achieve this you need to omit the number of replicas on the rolegroup to be scaled, which in turn results in the created StatefulSet not having any replicas set as well. Afterwards you can deploy a HorizontalPodAutoscaler as usual. Please note that not all product-operators have implemented graceful shutdown, so the product might be disturbed during scale down. @@ -53,4 +53,4 @@ You can achieve this using the following methods: If you are not satisfied with the automatically created affinities you can use xref:operations/pod_placement.adoc[] to configure your own. 4. *Dedicated nodes*: If you want to have certain services running on dedicated nodes you can also use xref:operations/pod_placement.adoc[] to force the Pods to be scheduled on certain nodes. This is especially helpful if you e.g. have Kubernetes nodes with 16 cores and 64 GB, as you could allocate nearly 100% of these node resources to your Spark executors or Trino workers. - In this case it is important that you https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[taint] your Kubernetes nodes and use xref:overrides.adoc#pod-overrides[podOverrides] to add a `toleration` for the taint. + In this case it is important that you https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[taint{external-link-icon}^] your Kubernetes nodes and use xref:overrides.adoc#pod-overrides[podOverrides] to add a `toleration` for the taint. diff --git a/modules/concepts/pages/operations/pod_disruptions.adoc b/modules/concepts/pages/operations/pod_disruptions.adoc index 657986692..ae587f02d 100644 --- a/modules/concepts/pages/operations/pod_disruptions.adoc +++ b/modules/concepts/pages/operations/pod_disruptions.adoc @@ -9,7 +9,7 @@ Although downtime can't be prevented 100% of the time - especially if the produc Kubernetes provides mechanisms to ensure minimal *planned* downtime. Please keep in mind that this only affects planned (voluntary) downtime of Pods - unplanned Kubernetes node crashes can always occur. -Our operators will always deploy so-called {k8s-pdb}[PodDisruptionBudget (PDB)] resources as part of a xref:stacklet.adoc[]. 
+Our operators will always deploy so-called {k8s-pdb}[PodDisruptionBudget (PDB){external-link-icon}^] resources as part of a xref:stacklet.adoc[]. For every xref:stacklet.adoc#roles[role] that you specify (e.g. HDFS namenodes or Trino workers) a PDB is created. == Default values @@ -36,7 +36,7 @@ Otherwise, rolling re-deployments may take very long. IMPORTANT: The operators calculate the number of Pods for a given role by adding the number of replicas of every role group that is part of that role. In case there are no replicas defined on a role group, one Pod will be assumed for this role group, as the created Kubernetes objects (StatefulSets or Deployments) will default to a single replica as well. -However, in case there are https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[HorizontalPodAutoscaler] in place, the number of replicas of a rolegroup can change dynamically. +However, in case there are https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[HorizontalPodAutoscaler{external-link-icon}^] in place, the number of replicas of a rolegroup can change dynamically. In this case the operators might falsely assume that role groups have fewer Pods than they actually have. This is a pessimistic approach, as the number of allowed disruptions normally stays the same or even increases when the number of Pods increases. This should be safe, but in some cases more Pods *could* have been allowed to be unavailable which may increase the duration of rolling re-deployments. @@ -90,7 +90,7 @@ In case you are not satisfied with the PDBs that are written by the operators, y WARNING: In case you write custom PDBs, it is your responsibility to take care of the availability of the products -IMPORTANT: It is important to disable the PDBs created by the Stackable operators as described above before creating your own PDBs, as this is a https://github.com/kubernetes/kubernetes/issues/75957[limitation of Kubernetes]. +IMPORTANT: It is important to disable the PDBs created by the Stackable operators as described above before creating your own PDBs, as this is a https://github.com/kubernetes/kubernetes/issues/75957[limitation of Kubernetes{external-link-icon}^]. *After disabling the Stackable PDBs*, you can deploy you own PDB such as diff --git a/modules/concepts/pages/operations/pod_placement.adoc b/modules/concepts/pages/operations/pod_placement.adoc index ca8c17abd..3cbd5c417 100644 --- a/modules/concepts/pages/operations/pod_placement.adoc +++ b/modules/concepts/pages/operations/pod_placement.adoc @@ -2,7 +2,7 @@ :page-aliases: ../pod_placement.adoc :description: Configure pod affinity, anti-affinity, and node affinity for Stackable Data Platform operators using YAML definitions. -Several operators of the Stackable Data Platform permit the configuration of pod affinity as described in the Kubernetes https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[documentation]. +Several operators of the Stackable Data Platform permit the configuration of pod affinity as described in the Kubernetes https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[documentation{external-link-icon}^]. If no affinity is defined in the product's custom resource, the operators apply reasonable defaults that make use of the `preferred_during_scheduling_ignored_during_execution` property. Refer to the operator documentation for details. 
diff --git a/modules/concepts/pages/overrides.adoc b/modules/concepts/pages/overrides.adoc index 0c4375b83..94a497bfa 100644 --- a/modules/concepts/pages/overrides.adoc +++ b/modules/concepts/pages/overrides.adoc @@ -139,7 +139,7 @@ The priority of how to construct the final Pod submitted to Kubernetes looks as 3. PodTemplateSpec given in rolegroup level `podOverrides` Each of these are combined top to bottom using a deep merge. -The exact merge algorithm is described in the {k8s-openapi-deepmerge}[k8s-openapi docs], which basically tries to mimic the way Kubernetes merges patches onto objects. +The exact merge algorithm is described in the {k8s-openapi-deepmerge}[k8s-openapi docs{external-link-icon}^], which basically tries to mimic the way Kubernetes merges patches onto objects. The `podOverrides` will be merged onto the following resources the operators deploy: diff --git a/modules/concepts/pages/product-image-selection.adoc b/modules/concepts/pages/product-image-selection.adoc index 2d195ca73..392bc9791 100644 --- a/modules/concepts/pages/product-image-selection.adoc +++ b/modules/concepts/pages/product-image-selection.adoc @@ -63,9 +63,9 @@ At the bottom of this page, in the <<_common_scenarios, common scenarios>> secti == Stackable provided images -If your Kubernetes cluster has internet access, the easiest way is to use the publicly available images from the https://oci.stackable.tech/[Stackable Image Registry]. +If your Kubernetes cluster has internet access, the easiest way is to use the publicly available images from the https://oci.stackable.tech/[Stackable Image Registry{external-link-icon}^]. -TIP: All our images are also mirrored to our https://quay.io/organization/stackable[Stackable Quay.io organization]. +TIP: All our images are also mirrored to our https://quay.io/organization/stackable[Stackable Quay.io organization{external-link-icon}^]. [source,yaml] ---- @@ -91,7 +91,7 @@ Security updates within a release line will result in patch version bumps in the If you don't specify the Stackable version, the operator will use its own version, e.g., `25.7.0`. When using a nightly operator or a `pr` version, it will use the nightly `0.0.0-dev` image. -All the available images (with their product and Stackable versions) can be found in our https://oci.stackable.tech/api/v2.0/projects/sdp[Stackable OCI registry]. +All the available images (with their product and Stackable versions) can be found in our https://oci.stackable.tech/api/v2.0/projects/sdp[Stackable OCI registry{external-link-icon}^]. Information on how to browse the registry can be found in the xref:contributor:project-overview.adoc#docker-images[Docker images section of the project overview]. == Custom docker registry @@ -109,7 +109,7 @@ spec: stackableVersion: 25.7.0 # Optional repo: my.corp/myteam/stackable <.> ---- -<.> We recommend not including a slash at the end while we plan on https://github.com/stackabletech/operator-rs/issues/1020[improving the situation]. +<.> We recommend not including a slash at the end while we plan on https://github.com/stackabletech/operator-rs/issues/1020[improving the situation{external-link-icon}^]. This will change the image from the default Stackable repository `oci.stackable.tech/sdp/kafka:3.3.1-stackable23.7.0` to `my.corp/myteam/stackable/kafka:3.3.1-stackable23.7.0`. 
@@ -134,7 +134,7 @@ Only when the correct product version is given to the operator will the product Using custom images has a few limitations that users should be aware of: * The images must have the same structures that Stackable operators expect. -This should usually be ensured by specifying a Stackable image in the `FROM` clause of the Dockerfile (all the available images can be found in our https://oci.stackable.tech/api/v2.0/projects/sdp[Stackable OCI registry] - the schema is typically: `oci.stackable.tech/sdp/:-stackable`. +This should usually be ensured by specifying a Stackable image in the `FROM` clause of the Dockerfile (all the available images can be found in our https://oci.stackable.tech/api/v2.0/projects/sdp[Stackable OCI registry{external-link-icon}^] - the schema is typically: `oci.stackable.tech/sdp/:-stackable`. Information on how to browse the registry can be found in the xref:contributor:project-overview.adoc#docker-images[Docker images section of the project overview]). * Images will need to be upgraded for every new Stackable release to follow structural changes that Stackable may have made to their images. diff --git a/modules/concepts/pages/resources.adoc b/modules/concepts/pages/resources.adoc index a1b00fe78..85b0b551b 100644 --- a/modules/concepts/pages/resources.adoc +++ b/modules/concepts/pages/resources.adoc @@ -1,15 +1,15 @@ = Resource management :description: Learn how to manage CPU, memory, and storage resources for Stackable Data Platform products, including setting requests, limits, and StorageClasses. -The Stackable Data Platform and its xref:operators:index.adoc[operators] deploy their products in https://kubernetes.io/docs/concepts/containers/[containers] within https://kubernetes.io/docs/concepts/workloads/pods/[Pods] using https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSets] or https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSets]. -In order for the Kubernetes scheduler to select a suitable https://kubernetes.io/docs/concepts/architecture/nodes/[Node] for a Pod, https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[resource] requests and limits for CPU and memory can be specified. +The Stackable Data Platform and its xref:operators:index.adoc[operators] deploy their products in https://kubernetes.io/docs/concepts/containers/[containers{external-link-icon}^] within https://kubernetes.io/docs/concepts/workloads/pods/[Pods{external-link-icon}^] using https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSets{external-link-icon}^] or https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSets{external-link-icon}^]. +In order for the Kubernetes scheduler to select a suitable https://kubernetes.io/docs/concepts/architecture/nodes/[Node{external-link-icon}^] for a Pod, https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/[resource{external-link-icon}^] requests and limits for CPU and memory can be specified. The Kubernetes scheduler ensures that the sum of the CPU and memory requests does not exceed the capacity of a given Node. == Terminology The most commonly defined resources are CPU and memory (RAM). Keep in mind that there are other resources as well. -For more information have a look at the Kubernetes https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[documentation] on resources. 
+For more information, have a look at the Kubernetes https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[documentation{external-link-icon}^] on resources.

=== CPU

@@ -37,7 +37,7 @@ To avoid the restart it is critical to specify sufficient resources.
=== Storage

Some Stackable products require data storage.
-This is done using https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Persistent Volume Claims] where the size of storage can be specified.
+This is done using https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Persistent Volume Claims{external-link-icon}^], where the size of storage can be specified.

== Kubernetes resource requests

@@ -75,7 +75,7 @@ include::stackable_resource_requests.adoc[]

=== Storage

-This is an example on how to specify storage resources using the Stackable https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Custom Resources]:
+This is an example of how to specify storage resources using the Stackable https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Custom Resources{external-link-icon}^]:

[source, yaml]
----
@@ -116,9 +116,9 @@ IMPORTANT: Stackable operators use different names (`data` in this example) for
A StorageClass defines a type of storage with certain properties.
The StorageClasses that are available on a Kubernetes cluster are configured by the cluster administrator.
Different classes can be configured to provide different levels of reliability or speed, or to be more suited for read- or write-heavy loads.
-This configuration is either done in the storage backend or Kubernetes settings (find more information in the https://kubernetes.io/docs/concepts/storage/storage-classes/[Kubernetes documentation]).
+This configuration is done either in the storage backend or in the Kubernetes settings (find more information in the https://kubernetes.io/docs/concepts/storage/storage-classes/[Kubernetes documentation{external-link-icon}^]).

-For Stackable resources, setting a StorageClass is not mandatory; if a StorageClass is not set, the https://kubernetes.io/docs/concepts/storage/storage-classes/#default-storageclass[default StorageClass] will be used.
+For Stackable resources, setting a StorageClass is not mandatory; if a StorageClass is not set, the https://kubernetes.io/docs/concepts/storage/storage-classes/#default-storageclass[default StorageClass{external-link-icon}^] will be used.
If you want to use a specific StorageClass for a particular storage, the StorageClass can be set on the resource:

[source,yaml]
diff --git a/modules/concepts/pages/s3.adoc b/modules/concepts/pages/s3.adoc
index df6229fd2..2a135a900 100644
--- a/modules/concepts/pages/s3.adoc
+++ b/modules/concepts/pages/s3.adoc
@@ -283,4 +283,4 @@ region:

== What's next

-Read the {crd-docs}/s3.stackable.tech/s3bucket/v1alpha1/[S3Bucket CRD reference] and the {crd-docs}/s3.stackable.tech/s3connection/v1alpha1/[S3Connection CRD reference].
+Read the {crd-docs}/s3.stackable.tech/s3bucket/v1alpha1/[S3Bucket CRD reference{external-link-icon}^] and the {crd-docs}/s3.stackable.tech/s3connection/v1alpha1/[S3Connection CRD reference{external-link-icon}^].
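As a hedged sketch that puts the two storage settings discussed above (a capacity and an explicit StorageClass) side by side; the role name `workers` and the class name `fast-ssd` are assumptions:

[source,yaml]
----
spec:
  workers:            # assumed role name
    config:
      resources:
        storage:
          data:       # storage name as defined by the respective operator
            capacity: 2Gi
            storageClass: fast-ssd  # assumed class name; omit to fall back to the default StorageClass
----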
diff --git a/modules/concepts/pages/stackable_resource_requests.adoc b/modules/concepts/pages/stackable_resource_requests.adoc
index b51a89052..016127ac0 100644
--- a/modules/concepts/pages/stackable_resource_requests.adoc
+++ b/modules/concepts/pages/stackable_resource_requests.adoc
@@ -6,7 +6,7 @@ Resource requests are defined on xref:concepts:stacklet.adoc#roles[role] or xre
On a role level this means that by default, all workers will use the same resource requests and limits.
This can be further specified on role group level (which takes priority over the role level) to apply different resources.

-This is an example on how to specify CPU and memory resources using the Stackable https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Custom Resources]:
+This is an example of how to specify CPU and memory resources using the Stackable https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Custom Resources{external-link-icon}^]:

[source, yaml]
----
diff --git a/modules/contributor/pages/code-style-guide.adoc b/modules/contributor/pages/code-style-guide.adoc
index 696fbf09a..8ef400e97 100644
--- a/modules/contributor/pages/code-style-guide.adoc
+++ b/modules/contributor/pages/code-style-guide.adoc
@@ -2,11 +2,11 @@

== Cargo.toml

-Follow the https://doc.rust-lang.org/nightly/style-guide/cargo.html[official formatting conventions] for the `Cargo.toml` file.
+Follow the https://doc.rust-lang.org/nightly/style-guide/cargo.html[official formatting conventions{external-link-icon}^] for the `Cargo.toml` file.
This means:

* Put the `[package]` section at the top of the file.
-* Put the `name` and `version` keys in that order at the top of that section, followed by the remaining keys other than `description` in order (sort keys with https://www.gnu.org/software/coreutils/manual/html_node/Version-sort-overview.html[version-sort]; very similar to lexical sorting)), followed by the `description` at the end of that section.
+* Put the `name` and `version` keys in that order at the top of that section, followed by the remaining keys other than `description` in order (sort keys with https://www.gnu.org/software/coreutils/manual/html_node/Version-sort-overview.html[version-sort{external-link-icon}^]; very similar to lexical sorting), followed by the `description` at the end of that section.
* For other sections, sort keys with version-sort.

[TIP.code-rule,caption=Examples of correct code for this rule]
====
@@ -48,7 +48,7 @@
a_dependency = "1.2.3"
====

-NOTE: This formatting might be supported by `rustfmt` in the future, see the https://github.com/rust-lang/rustfmt/pull/5240[PR] here.
+NOTE: This formatting might be supported by `rustfmt` in the future; see the https://github.com/rust-lang/rustfmt/pull/5240[PR{external-link-icon}^].

== Identifier names

@@ -57,7 +57,7 @@

We use unabbreviated identifier names to avoid ambiguity.
Short (even single letter) variable names are allowed in lambdas (closures), in one-liners, and when the context allows it.

-[quote,Uncle Bob Martin, 'Source: https://twitter.com/unclebobmartin/status/360029878126514177[Twitter]']
+[quote,Uncle Bob Martin, 'Source: https://twitter.com/unclebobmartin/status/360029878126514177[Twitter,window=_blank]']
The shorter the scope the shorter the variable names, and the longer the function [...] names. And vice versa.

The usage of well-known acronyms like CPU, TLS or OIDC is allowed.
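Looping back to the resource-requests hunk at the top of this chunk, whose YAML example the diff context elides: as a hedged sketch, the CPU and memory shape in Stackable custom resources typically looks like the following (role name and values assumed):

[source,yaml]
----
spec:
  workers:            # assumed role name
    config:
      resources:
        cpu:
          min: 300m   # maps to the Kubernetes request
          max: "2"    # maps to the Kubernetes limit
        memory:
          limit: 2Gi  # a single value covering the memory limit
----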
@@ -344,7 +344,7 @@ Comments should always form complete sentences with full stops at the end. ==== -Additionally, doc comments should follow the structure outlined by the Rust project, which is described https://doc.rust-lang.org/rustdoc/how-to-write-documentation.html#documenting-components[here]: +Additionally, doc comments should follow the structure outlined by the Rust project, which is described https://doc.rust-lang.org/rustdoc/how-to-write-documentation.html#documenting-components[here{external-link-icon}^]: [source] ---- @@ -539,7 +539,7 @@ enum Error { The `unwrap` function must not be used in any code. Instead, proper error handling like above should be used, unless there is a valid reason to use `expect` described below. -Using link:{unwrap_or}[`unwrap_or`], link:{unwrap_or_default}[`unwrap_or_default`] or link:{unwrap_or_else}[`unwrap_or_else`] is allowed because these functions will not panic. +Using link:{unwrap_or}[`unwrap_or`{external-link-icon}^], link:{unwrap_or_default}[`unwrap_or_default`{external-link-icon}^] or link:{unwrap_or_else}[`unwrap_or_else`{external-link-icon}^] is allowed because these functions will not panic. The `expect` function can be used when external factors cannot influence whether a panic will happen. For example, when compiling regular expressions inside const/static environments. For such cases code must use `expect` instead of `unwrap` to provide additional context for why a particular piece of code should never fail. @@ -642,7 +642,7 @@ format!("Hello {name}, hello again {name}", name = greetee); == Specifying resources measured in bytes and CPU fractions -Follow the Kubernetes convention described https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/[here]. +Follow the Kubernetes convention described https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/[here{external-link-icon}^]. === Resources measured in bytes diff --git a/modules/contributor/pages/contributing-code.adoc b/modules/contributor/pages/contributing-code.adoc index 0f873c300..d3529c6bc 100644 --- a/modules/contributor/pages/contributing-code.adoc +++ b/modules/contributor/pages/contributing-code.adoc @@ -8,20 +8,20 @@ In order to contribute source code, you need an environment that is capable of running the following tools: -* https://git-scm.com/[Git] -* https://www.gnu.org/software/make/manual/make.html[make] -* https://www.docker.com/[Docker] -* https://kind.sigs.k8s.io/[Kind] -* https://helm.sh/[Helm] -* https://kuttl.dev/[Kuttl] -* https://www.rust-lang.org/[Rust] -* https://www.python.org/[Python] -* https://jqlang.github.io/jq/[jq] -* https://github.com/mikefarah/yq[yq] -* https://tilt.dev/[Tilt] -* https://pre-commit.com/[pre-commit] (optional) - -Depending on the repository, you might also need https://go.dev/[Go], https://www.java.com/en/[Java] or other programming language support. 
+* https://git-scm.com/[Git{external-link-icon}^]
+* https://www.gnu.org/software/make/manual/make.html[make{external-link-icon}^]
+* https://www.docker.com/[Docker{external-link-icon}^]
+* https://kind.sigs.k8s.io/[Kind{external-link-icon}^]
+* https://helm.sh/[Helm{external-link-icon}^]
+* https://kuttl.dev/[Kuttl{external-link-icon}^]
+* https://www.rust-lang.org/[Rust{external-link-icon}^]
+* https://www.python.org/[Python{external-link-icon}^]
+* https://jqlang.github.io/jq/[jq{external-link-icon}^]
+* https://github.com/mikefarah/yq[yq{external-link-icon}^]
+* https://tilt.dev/[Tilt{external-link-icon}^]
+* https://pre-commit.com/[pre-commit{external-link-icon}^] (optional)
+
+Depending on the repository, you might also need https://go.dev/[Go{external-link-icon}^], https://www.java.com/en/[Java{external-link-icon}^] or other programming language support.

Almost all build scripts assume a Unix based environment (preferably Linux).

@@ -29,19 +29,19 @@ Almost all build scripts assume a Unix based environment (preferably Linux).
Of course you are free to use whatever works best for you.
No editor is perfect, but we have positive experience with:

-* https://www.jetbrains.com/idea/[IntelliJ Idea] with the `Rust` plug-in
-* https://code.visualstudio.com/[VisualStudio Code] with the `rust-analyzer` extension
+* https://www.jetbrains.com/idea/[IntelliJ IDEA{external-link-icon}^] with the `Rust` plug-in
+* https://code.visualstudio.com/[Visual Studio Code{external-link-icon}^] with the `rust-analyzer` extension

For `Visual Studio Code` we also recommend the following extensions:

-* https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml[Even Better TOML]
-* https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb[CodeLLDB] (for debugging)
-* https://marketplace.visualstudio.com/items?itemName=usernamehw.errorlens[Error Lens] (inline error messages)
-* https://marketplace.visualstudio.com/items?itemName=asciidoctor.asciidoctor-vscode[AsciiDoc]
-* https://marketplace.visualstudio.com/items?itemName=GitHub.vscode-pull-request-github[GitHub Pull requests and Issues]
-* https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens[GitLens]
-* https://marketplace.visualstudio.com/items?itemName=ms-python.python[Python]
-* https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker[Docker]
+* https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml[Even Better TOML{external-link-icon}^]
+* https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb[CodeLLDB{external-link-icon}^] (for debugging)
+* https://marketplace.visualstudio.com/items?itemName=usernamehw.errorlens[Error Lens{external-link-icon}^] (inline error messages)
+* https://marketplace.visualstudio.com/items?itemName=asciidoctor.asciidoctor-vscode[AsciiDoc{external-link-icon}^]
+* https://marketplace.visualstudio.com/items?itemName=GitHub.vscode-pull-request-github[GitHub Pull Requests and Issues{external-link-icon}^]
+* https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens[GitLens{external-link-icon}^]
+* https://marketplace.visualstudio.com/items?itemName=ms-python.python[Python{external-link-icon}^]
+* https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker[Docker{external-link-icon}^]

== Steps

@@ -58,10 +58,10 @@ For `VisualStudio Code` we also recommend the following extensions:
 execute `make regenerate-charts`.
.
If it is useful for the users of the project to know about the change, then it must be added to the changelog. For instance,
 if only the dependencies in an operator are upgraded but nothing changes for the user then the upgrade
- should not be added to the changelog. Conversely, if the dependencies in the {operator-repo}[operator framework] are
+ should not be added to the changelog. Conversely, if the dependencies in the {operator-repo}[operator framework{external-link-icon}^] are
 upgraded then changes are probably required in the operators (which are the clients of the framework) and therefore
 the upgrade must be mentioned in the changelog. The changelog must be formatted according to
- https://keepachangelog.com/en/1.1.0/[keep a changelog].
+ https://keepachangelog.com/en/1.1.0/[Keep a Changelog{external-link-icon}^].

== Changes in the integration tests

@@ -69,8 +69,8 @@ For `VisualStudio Code` we also recommend the following extensions:
 operator repository in the `tests` directory.
. Create or adapt the tests. Try to mimic the style of the other tests.
- They are written with https://kuttl.dev/[KUTTL] and our own templating tool https://github.com/stackabletech/beku.py[beku.py] to template tests and test multiple product versions at once.
-. Start a test cluster using https://kind.sigs.k8s.io/[kind].
+ They are written with https://kuttl.dev/[KUTTL{external-link-icon}^] and templated with our own tool https://github.com/stackabletech/beku.py[beku.py{external-link-icon}^] so that multiple product versions can be tested at once.
+. Start a test cluster using https://kind.sigs.k8s.io/[kind{external-link-icon}^].
. Run your development version of the operator with `make run-dev` (see also xref:testing-on-kubernetes.adoc[] for more information on this).
 This will deploy the operator directly into the cluster, also using part of the Helm Chart definition and therefore the RBAC rules.
. When making changes to the Helm Chart, you should, however, test the Helm Chart explicitly.
@@ -89,7 +89,7 @@ helm install deploy/helm//

If a new version of a product is added, then the following tasks must be performed:

-* Add the new version in the https://github.com/stackabletech/docker-images[docker-images] repository.
+* Add the new version in the https://github.com/stackabletech/docker-images[docker-images{external-link-icon}^] repository.
* Update the operator to support the new version if necessary.
* Update the examples in the operator to use the new version.
* Update the integration tests.
diff --git a/modules/contributor/pages/docs/backporting-changes.adoc b/modules/contributor/pages/docs/backporting-changes.adoc
index 71ead16e1..c770ef36e 100644
--- a/modules/contributor/pages/docs/backporting-changes.adoc
+++ b/modules/contributor/pages/docs/backporting-changes.adoc
@@ -1,6 +1,6 @@
= Backporting changes

-The documentation uses https://trunkbaseddevelopment.com/[trunk based development], so any new content or fixes should first be applied to the `main` branch and then ported to the release branches where the feature/fix also applies.
+The documentation uses https://trunkbaseddevelopment.com/[trunk based development{external-link-icon}^], so any new content or fixes should first be applied to the `main` branch and then ported to the release branches where the feature/fix also applies.

== Prerequisites

@@ -19,4 +19,4 @@ The documentation uses https://trunkbaseddevelopment.com/[trunk based developmen

That's it, you're done!
The changes will become visible in the online documentation once the next build is triggered.
-You can either wait for the nightly build, or trigger a build yourself with the https://github.com/stackabletech/documentation/actions/workflows/deploy.yml[Build and deploy production site] GitHub action.
+You can either wait for the nightly build, or trigger a build yourself with the https://github.com/stackabletech/documentation/actions/workflows/deploy.yml[Build and deploy production site{external-link-icon}^] GitHub action.
diff --git a/modules/contributor/pages/docs/crd-documentation.adoc b/modules/contributor/pages/docs/crd-documentation.adoc
index af589f7ce..0840c5e0d 100644
--- a/modules/contributor/pages/docs/crd-documentation.adoc
+++ b/modules/contributor/pages/docs/crd-documentation.adoc
@@ -2,9 +2,9 @@
 :crds-docs: https://crds.stackable.tech/
 :crddocs-repo: https://github.com/stackabletech/crddocs

-The {crds-docs}[CRD documentation] is generated from the CRD manifest files, which are in turn generated from the operator source code.
+The {crds-docs}[CRD documentation{external-link-icon}^] is generated from the CRD manifest files, which are in turn generated from the operator source code.
All the documentation strings are doc strings in the Rust source code.
If you want to contribute documentation for a particular field, this needs to be done in the doc string of the property on the struct that makes up that part of the CRD.

-To change the UI, adjust the HTML template and CSS files in the {crddocs-repo}[crddocs repository].
+To change the UI, adjust the HTML template and CSS files in the {crddocs-repo}[crddocs repository{external-link-icon}^].
Also consult the README in that repository to learn more about how the site is generated.
diff --git a/modules/contributor/pages/docs/overview.adoc b/modules/contributor/pages/docs/overview.adoc
index 56efe799b..4abf8c5ac 100644
--- a/modules/contributor/pages/docs/overview.adoc
+++ b/modules/contributor/pages/docs/overview.adoc
@@ -15,15 +15,15 @@
 :stackable-docs-ui-repo: https://github.com/stackabletech/documentation-ui
 :trunk-based-development: https://trunkbaseddevelopment.com/

-We use {antora-docs}[Antora] to write our user facing documentation,
-{netlify}[Netlify] to host it and the {diataxis}[Diátaxis] framework as the guide for the structure of the content.
-The main repository for the documentation is the {stackable-docs-repo}[Documentation] repository and
+We use {antora-docs}[Antora{external-link-icon}^] to write our user-facing documentation,
+{netlify}[Netlify{external-link-icon}^] to host it and the {diataxis}[Diátaxis{external-link-icon}^] framework as the guide for the structure of the content.
+The main repository for the documentation is the {stackable-docs-repo}[Documentation{external-link-icon}^] repository and
each operator and tool repository has a `docs` directory from which content is pulled in.
Have a look at the xref:project-overview.adoc[] to learn about the repositories that are involved in providing the documentation.

== Content structure: Diátaxis

-The {diataxis}[Diátaxis] framework is a way to classify documentation content into four groups with distinct use cases.
+The {diataxis}[Diátaxis{external-link-icon}^] framework is a way to classify documentation content into four groups with distinct use cases.

.Source: {diataxis}
image::diataxis.png[]

@@ -46,7 +46,7 @@ The guide has to account for different use-cases (i.e.
the user is using their o
Since this kind of information is typically product specific, it is located in the usage guide section of individual operators.

**Reference** information for the Stackable platform entails all the settings and options in our YAMLs, which we generate.
-The reference is found at {crddocs-site} and generated from the {stackable-crddocs-repo}[crddocs repository].
+The reference is found at {crddocs-site}[{crddocs-site}{external-link-icon}^] and generated from the {stackable-crddocs-repo}[crddocs repository{external-link-icon}^].

=== Style guide

@@ -54,17 +54,17 @@ The xref:docs/style-guide.adoc[] contains all the information about the writing

== Technical bits: Antora, Netlify, Pagefind

-{antora-docs}[Antora] uses a {antora-playbook}[playbook] to build the documentation.
+{antora-docs}[Antora{external-link-icon}^] uses a {antora-playbook}[playbook{external-link-icon}^] to build the documentation.
It pulls documentation content from all the individual operator repositories, so an operator's documentation is maintained in the same repository as the code.
-Antora builds a static website which we serve over {netlify}[Netlify].
-The web template of the documentation is also custom made and is developed in the {stackable-docs-ui-repo}[documentation-ui] repository.
+Antora builds a static website which we serve over {netlify}[Netlify{external-link-icon}^].
+The web template of the documentation is also custom made and is developed in the {stackable-docs-ui-repo}[documentation-ui{external-link-icon}^] repository.

-For search, we use {pagefind}[pagefind] - a static search.
+For search, we use {pagefind}[pagefind{external-link-icon}^] - a static search library.
The search index is generated as part of the build process and no external index is queried during search.

-For Antora, the {antora-zulipchat}[Antora Zulip chatroom] is a good place to get help (besides the documentation)!
+For Antora, the {antora-zulipchat}[Antora Zulip chatroom{external-link-icon}^] is a good place to get help (besides the documentation)!

-Building the documentation and also the deployment process on Netlify are documented in the {stackable-docs-readme}[README] file of the documentation repository.
+Building the documentation and also the deployment process on Netlify are documented in the {stackable-docs-readme}[README{external-link-icon}^] file of the documentation repository.

== Executable tutorials

@@ -76,7 +76,7 @@ Have a look at the existing getting started guides on how to do this.

== Templating

There is a templating mechanism in the docs.
-This has been introduced to template in mostly version numbers, so the updating doesn't have to be done by hand. 
+This was introduced mostly to template in version numbers, so the updating doesn't have to be done by hand.

Every operator repo has a script `scripts/docs_templating.sh` and a file with templating variables `docs/templating_vars.yaml`.
The script applies the variables to all `.j2` files in the `docs` directory.

@@ -91,9 +91,9 @@ The documentation consists of two _components_, the `home` and `management` comp
All documentation for the operators is in the `home` component, and it is versioned with the platform versions (23.11, 24.3, 24.7 etc.).
The `management` component contains docs for `stackablectl` and the Stackable cockpit; it is not versioned (there is only a single, current version).

-The `home` component is actually a {antora-distributed-components}[distributed component], which means that it is split across multiple repositories.
+The `home` component is actually a {antora-distributed-components}[distributed component{external-link-icon}^], which means that it is split across multiple repositories.
Each operator repository has a `docs` directory where the docs are found, and in there the `antora.yml` file specifies that the component is `home`, which means that the `home` component is partially defined across all operator repositories.
-For versions, all Stackable repositories use {trunk-based-development}[trunk based development], and so the documentation is also pulled from different release branches in each repository.
+For versions, all Stackable repositories use {trunk-based-development}[trunk based development{external-link-icon}^], and so the documentation is also pulled from different release branches in each repository.
Each branch contains only a single version, and the `main` branch contains the `nightly` docs version.

-Using branches to structure component version is also {antora-content-branches}[recommended by Antora].
+Using branches to structure component versions is also {antora-content-branches}[recommended by Antora{external-link-icon}^].
diff --git a/modules/contributor/pages/docs/releasing-a-new-version.adoc b/modules/contributor/pages/docs/releasing-a-new-version.adoc
index 87365e714..434048018 100644
--- a/modules/contributor/pages/docs/releasing-a-new-version.adoc
+++ b/modules/contributor/pages/docs/releasing-a-new-version.adoc
@@ -3,9 +3,9 @@

NOTE: This guide is directed at internal contributors; as an external contributor, you cannot release a new documentation version.

Whenever there is a new Stackable Data Platform release, the documentation is also released with a new version.
-This process has been automated with scripts, which are found in the https://github.com/stackabletech/documentation/tree/main/scripts[`scripts`] directory of the documentation repository.
+This process has been automated with scripts, which are found in the https://github.com/stackabletech/documentation/tree/main/scripts[`scripts`{external-link-icon}^] directory of the documentation repository.

-The process consists of two steps: 
+The process consists of two steps:

. Making a new release branch (`make-release-branch.sh`).
. Publishing the new version by modifying the playbooks (`publish-new-version.sh`).
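To tie the branch-and-component scheme above together, a hedged sketch of the `antora.yml` an operator repository might carry; the concrete version value is an assumption:

[source,yaml]
----
# docs/antora.yml on an assumed release branch
name: home        # every operator repository contributes to the distributed home component
version: "24.7"   # on the main branch this would instead be: nightly
----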
diff --git a/modules/contributor/pages/docs/style-guide.adoc b/modules/contributor/pages/docs/style-guide.adoc
index 85eeebedb..87c4f1030 100644
--- a/modules/contributor/pages/docs/style-guide.adoc
+++ b/modules/contributor/pages/docs/style-guide.adoc
@@ -1,10 +1,10 @@
= Documentation style guide
:page-aliases: style_guide.adoc, style-guide.adoc, docs-style-guide.adoc
-:asciidoc-recommended-practices: https://asciidoctor.org/docs/asciidoc-recommended-practices[AsciiDoc recommended practices]
-:kubernetes-style-guide: https://kubernetes.io/docs/contribute/style/style-guide/[Kubernetes style guide]
-:google-style-guide: https://developers.google.com/style/[Google developer documentation style guide]
-:apache-product-name-usage-guide: https://www.apache.org/foundation/marks/guide[Apache product name usage guide]
+:asciidoc-recommended-practices: https://asciidoctor.org/docs/asciidoc-recommended-practices[AsciiDoc recommended practices{external-link-icon}^]
+:kubernetes-style-guide: https://kubernetes.io/docs/contribute/style/style-guide/[Kubernetes style guide{external-link-icon}^]
+:google-style-guide: https://developers.google.com/style/[Google developer documentation style guide{external-link-icon}^]
+:apache-product-name-usage-guide: https://www.apache.org/foundation/marks/guide[Apache product name usage guide{external-link-icon}^]

This page provides guidelines on how to write documentation for the Stackable platform.
The guidelines cover overall document structure, text appearance and formatting, as well as writing style, language and grammar.
@@ -24,10 +24,10 @@ If you are wondering about how to write, structure or format something and what

== File names

-We follow Googles recommendations on https://developers.google.com/search/docs/crawling-indexing/url-structure[URL structure].
+We follow Google's recommendations on https://developers.google.com/search/docs/crawling-indexing/url-structure[URL structure{external-link-icon}^].
This means we use hyphens (`-`) instead of underscores (`_`) in URLs.

-Existing files with underscores can be renamed, use https://docs.antora.org/antora/latest/page/page-aliases/[Antora page aliases] when renaming a file to ensure that old links to the file still work.
+Existing files with underscores can be renamed; use https://docs.antora.org/antora/latest/page/page-aliases/[Antora page aliases{external-link-icon}^] when renaming a file to ensure that old links to the file still work.

Keep file names stable; that means don't add _experimental_ or similar to the filenames, as otherwise the file name would have to change once a feature matures.

@@ -35,11 +35,11 @@ Keep file names stable, that means don't add _experimental_ or similar to the fi

We rely on the AsciiDoc recommended practices for the overall layout and formatting of the AsciiDoc documents that make up the documentation.
Here are the most important parts:

-* https://asciidoctor.org/docs/asciidoc-recommended-practices/#one-sentence-per-line[Write one sentence per line], i.e. do not use fixed length line breaks. This has multiple advantages outlined in the linked page, among them easier diffing in source control, easier swapping of sentences and avoiding reflow when changing a subsection of a paragraph.
-* https://asciidoctor.org/docs/asciidoc-recommended-practices/#document-attributes-i-e-variables[Use document attributes (variables) to improve text flow], especially for URLs.
-* https://asciidoctor.org/docs/asciidoc-recommended-practices/#lists[Use asterisks for unordered lists].
+* https://asciidoctor.org/docs/asciidoc-recommended-practices/#one-sentence-per-line[Write one sentence per line{external-link-icon}^], i.e. do not use fixed length line breaks. This has multiple advantages outlined in the linked page, among them easier diffing in source control, easier swapping of sentences and avoiding reflow when changing a subsection of a paragraph. +* https://asciidoctor.org/docs/asciidoc-recommended-practices/#document-attributes-i-e-variables[Use document attributes (variables) to improve text flow{external-link-icon}^], especially for URLs. +* https://asciidoctor.org/docs/asciidoc-recommended-practices/#lists[Use asterisks for unordered lists{external-link-icon}^]. -Also - but these recommendations are fairly obvious - https://asciidoctor.org/docs/asciidoc-recommended-practices/#document-extension[use the `.adoc` extension for AsciiDoc files], https://asciidoctor.org/docs/asciidoc-recommended-practices/#section-titles[use asymmetric Atx-style for section headings], https://asciidoctor.org/docs/asciidoc-recommended-practices/#delimited-blocks[use only four characters for block delimiters]. +Also - but these recommendations are fairly obvious - https://asciidoctor.org/docs/asciidoc-recommended-practices/#document-extension[use the `.adoc` extension for AsciiDoc files{external-link-icon}^], https://asciidoctor.org/docs/asciidoc-recommended-practices/#section-titles[use asymmetric Atx-style for section headings{external-link-icon}^], https://asciidoctor.org/docs/asciidoc-recommended-practices/#delimited-blocks[use only four characters for block delimiters{external-link-icon}^]. Read the {asciidoc-recommended-practices} for more. @@ -51,12 +51,12 @@ The description is used by search engines in the search result snippets. Since the Stackable Data Platform is built on Kubernetes, the resources mentioned in our documentation are very similar to the ones mentioned in the Kubernetes documentation, so we follow the {kubernetes-style-guide} for formatting of code, Kubernetes resources and objects. 
Some examples:

-* https://kubernetes.io/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects[Use PascalCase for API objects] such as ConfigMap or KafkaCluster
-* https://kubernetes.io/docs/contribute/style/style-guide/#use-italics-to-define-or-introduce-new-terms[Use _italics_ to define or introduce new terms]
-* https://kubernetes.io/docs/contribute/style/style-guide/#use-code-style-for-filenames-directories-and-paths[Use `code style` for filenames, directories and paths]
-* https://kubernetes.io/docs/contribute/style/style-guide/#use-code-style-for-object-field-names-and-namespaces[Use `code style` for object field names and namespaces]
-* https://kubernetes.io/docs/contribute/style/style-guide/#use-normal-style-for-string-and-integer-field-values[Use normal style for string and integer field values]
-* https://kubernetes.io/docs/contribute/style/style-guide/#use-code-style-for-kubernetes-command-tool-and-component-names[Use `code style` for command line tools] such as `stackablectl`
+* https://kubernetes.io/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects[Use PascalCase for API objects{external-link-icon}^] such as ConfigMap or KafkaCluster
+* https://kubernetes.io/docs/contribute/style/style-guide/#use-italics-to-define-or-introduce-new-terms[Use _italics_ to define or introduce new terms{external-link-icon}^]
+* https://kubernetes.io/docs/contribute/style/style-guide/#use-code-style-for-filenames-directories-and-paths[Use `code style` for filenames, directories and paths{external-link-icon}^]
+* https://kubernetes.io/docs/contribute/style/style-guide/#use-code-style-for-object-field-names-and-namespaces[Use `code style` for object field names and namespaces{external-link-icon}^]
+* https://kubernetes.io/docs/contribute/style/style-guide/#use-normal-style-for-string-and-integer-field-values[Use normal style for string and integer field values{external-link-icon}^]
+* https://kubernetes.io/docs/contribute/style/style-guide/#use-code-style-for-kubernetes-command-tool-and-component-names[Use `code style` for command line tools{external-link-icon}^] such as `stackablectl`

=== Code blocks (scripts, console instructions)

@@ -89,27 +89,27 @@ Hello World <.>
<.> Prefix the command line with the dollar sign (`$`) so that when the reader clicks the 'Copy' button, only the command lines are copied.
<.> Do _not_ prefix output lines, to prevent the lines from being copied.

-More information on code blocks in the https://docs.antora.org/antora/latest/asciidoc/source/[Antora documentation].
+More information on code blocks can be found in the https://docs.antora.org/antora/latest/asciidoc/source/[Antora documentation{external-link-icon}^].

== Tone and writing style: Google developer documentation style guide

For overall tone, writing style, language and grammar, the {google-style-guide} is a good source of guidelines.
Some highlights:

-* https://developers.google.com/style/tone[Be conversational and friendly]
-* https://developers.google.com/style/person[Use second person]: "You" rather than "we"
-* https://developers.google.com/style/voice[Use active voice]
-* https://developers.google.com/style/capitalization[Use sentence case for headings]
-* https://developers.google.com/style/future[Avoid talking about future features]
-* https://developers.google.com/style/timeless-documentation[Timeless documentation]
+* https://developers.google.com/style/tone[Be conversational and friendly{external-link-icon}^]
+* https://developers.google.com/style/person[Use second person{external-link-icon}^]: "You" rather than "we"
+* https://developers.google.com/style/voice[Use active voice{external-link-icon}^]
+* https://developers.google.com/style/capitalization[Use sentence case for headings{external-link-icon}^]
+* https://developers.google.com/style/future[Avoid talking about future features{external-link-icon}^]
+* https://developers.google.com/style/timeless-documentation[Timeless documentation{external-link-icon}^]

-The Google guide also includes it's own list of https://developers.google.com/style/highlights[highlights].
+The Google guide also includes its own list of https://developers.google.com/style/highlights[highlights{external-link-icon}^].

Lastly, these are guidelines and not strict rules to follow.
Use your own judgement to clearly communicate and explain - after all, this is what documentation is about.

== Images

-Please include an alt text when https://docs.asciidoctor.org/asciidoc/latest/macros/images/[embedding images].
+Please include an alt text when https://docs.asciidoctor.org/asciidoc/latest/macros/images/[embedding images{external-link-icon}^].
The alt text should describe what can be seen in the picture, to make the documentation more accessible.

== CRD documentation
diff --git a/modules/contributor/pages/docs/troubleshooting-antora.adoc b/modules/contributor/pages/docs/troubleshooting-antora.adoc
index 651d86cc4..1ae0512b6 100644
--- a/modules/contributor/pages/docs/troubleshooting-antora.adoc
+++ b/modules/contributor/pages/docs/troubleshooting-antora.adoc
@@ -7,10 +7,10 @@
Make sure that the release branches have the correct version set in their `antora.yml` files and also make sure that the `main` branch is still set to `nightly`.
* An `xref` or `include` in the `home` component is not found (starting with `xref:home...`)
** If the page containing the `xref` is also in the `home` component, do **not** include the component name in the `xref`.
-  Read the {antora-xref-docs}[`xref` documentation].
+  Read the {antora-xref-docs}[`xref` documentation{external-link-icon}^].
  Creating a reference to a page and _specifying a component without a version_ means that the link is pointing to the latest version.
  This is usually not what you want!
  Omitting the component entirely will instead link to the target page _within the same version_ as the page resides in.
* Other `xref` and `include` issues
-** Familiarize yourself with the {antora-xref-docs}[`xref` documentation] which explains the syntax in great detail.
-  Unfortunately the xref syntax is a common place to make mistakes, and it is easy to accidently link to the wrong place.
\ No newline at end of file
+** Familiarize yourself with the {antora-xref-docs}[`xref` documentation{external-link-icon}^], which explains the syntax in great detail.
+  Unfortunately, the xref syntax is a common place to make mistakes, and it is easy to accidentally link to the wrong place.
diff --git a/modules/contributor/pages/docs/using-tab-blocks.adoc b/modules/contributor/pages/docs/using-tab-blocks.adoc
index e8599042e..2bc77f817 100644
--- a/modules/contributor/pages/docs/using-tab-blocks.adoc
+++ b/modules/contributor/pages/docs/using-tab-blocks.adoc
@@ -1,7 +1,7 @@
= Using tab blocks
:asciidoctor-tabs-gh: https://github.com/asciidoctor/asciidoctor-tabs

-The {asciidoctor-tabs-gh}[`asciidoctor/tabs`] extension is installed, so you can use tab blocks.
+The {asciidoctor-tabs-gh}[`asciidoctor/tabs`{external-link-icon}^] extension is installed, so you can use tab blocks.
An example can be found in the xref:management:stackablectl:installation.adoc[stackablectl installation instructions].

TIP: Be sure to use different block indicators for nested admonition blocks so that they don't conflict with the tab blocks.
diff --git a/modules/contributor/pages/guidelines/opa-configuration.adoc b/modules/contributor/pages/guidelines/opa-configuration.adoc
index a7228f91f..5568ca102 100644
--- a/modules/contributor/pages/guidelines/opa-configuration.adoc
+++ b/modules/contributor/pages/guidelines/opa-configuration.adoc
@@ -5,11 +5,11 @@

== Introduction

-The Stackable Platform offers an https://www.openpolicyagent.org[OpenPolicyAgent] (OPA) operator for policy-based access control. This document shows how to configure a Stackable operator and its managed product to query OPA to enforce policy-based access control.
+The Stackable Platform offers an https://www.openpolicyagent.org[Open Policy Agent{external-link-icon}^] (OPA) operator for policy-based access control. This document shows how to configure a Stackable operator and its managed product to query OPA to enforce such policies.

== What is OPA?

-OPA is an open source, general purpose policy engine. It supports a high-level declarative language called `https://www.openpolicyagent.org/docs/latest/policy-language/[Rego]`. Rego enables you to specify complex policies as code and transfer the decision-making processes from your software to OPA. We refer to policies written in Rego as _Rego rules_.
+OPA is an open-source, general-purpose policy engine. It supports a high-level declarative language called `https://www.openpolicyagent.org/docs/latest/policy-language/[Rego{external-link-icon}^]`. Rego enables you to specify complex policies as code and transfer the decision-making processes from your software to OPA. We refer to policies written in Rego as _Rego rules_.

The provided OPA REST API allows you to enforce policies within microservices, Kubernetes, CI/CD pipelines and more.

@@ -20,11 +20,11 @@ OPA accepts arbitrary structured input data (e.g. JSON) when running queries aga
The combination of arbitrary input data and the Rego rules enables you to specify and enforce almost any kind of policy.
You can define powerful policies for e.g. user access to database tables, schemas, columns etc.
You can enforce rules on local network traffic, access time periods and more.
-See the https://www.openpolicyagent.org/docs/latest/#overview[OPA documentation] for further examples.
+See the https://www.openpolicyagent.org/docs/latest/#overview[OPA documentation{external-link-icon}^] for further examples.

== Stackable Operator for OPA

-The https://github.com/stackabletech/opa-operator[Stackable Operator for OPA] deploys OPA as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet] in Kubernetes.
This ensures that every registered https://kubernetes.io/de/docs/concepts/architecture/nodes/[Node] runs exactly one OPA instance. In order to reduce traffic and latency, deployed products querying OPA must use the local OPA provided on their respective Node.
+The https://github.com/stackabletech/opa-operator[Stackable Operator for OPA{external-link-icon}^] deploys OPA as a https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet{external-link-icon}^] in Kubernetes. This ensures that every registered https://kubernetes.io/docs/concepts/architecture/nodes/[Node{external-link-icon}^] runs exactly one OPA instance. In order to reduce traffic and latency, deployed products querying OPA must use the local OPA provided on their respective Node.

=== Service Discovery

@@ -79,7 +79,7 @@ In order to configure another operator and its product to query OPA, the service

=== Configure OPA access in the operator

-The `https://github.com/stackabletech/operator-rs[operator-rs]` framework has a module called `opa.rs` that offers a predefined struct OpaConfig and several helper methods to extract the OPA URL from the service discovery ConfigMap.
+The `https://github.com/stackabletech/operator-rs[operator-rs{external-link-icon}^]` framework has a module called `opa.rs` that offers a predefined struct `OpaConfig` and several helper methods to extract the OPA URL from the service discovery ConfigMap.

[source,rust]
----
@@ -101,9 +101,9 @@ The Stackable platform uses internal (written by us) and external authorizers to

=== Internal

-- https://github.com/stackabletech/trino-opa-authorizer[Trino]
-- https://github.com/stackabletech/druid-opa-authorizer[Druid]
+- https://github.com/stackabletech/trino-opa-authorizer[Trino{external-link-icon}^]
+- https://github.com/stackabletech/druid-opa-authorizer[Druid{external-link-icon}^]

=== External

-- https://github.com/anderseknert/opa-kafka-plugin[Apache Kafka]
+- https://github.com/anderseknert/opa-kafka-plugin[Apache Kafka{external-link-icon}^]
diff --git a/modules/contributor/pages/guidelines/service-discovery.adoc b/modules/contributor/pages/guidelines/service-discovery.adoc
index 8edc0adc4..f31b9c30a 100644
--- a/modules/contributor/pages/guidelines/service-discovery.adoc
+++ b/modules/contributor/pages/guidelines/service-discovery.adoc
@@ -24,7 +24,7 @@ The following section offers some Rust code snippets to get an idea on how to cr

=== Create a discovery ConfigMap

-Remember, per convention the discovery `ConfigMap` name of a cluster must be equal to the cluster name. The following code demonstrates how to create a discovery `ConfigMap` using the `ConfigMapBuilder` of the https://github.com/stackabletech/operator-rs[`operator-rs`] framework:
+Remember, per convention the discovery `ConfigMap` name of a cluster must be equal to the cluster name.
The following code demonstrates how to create a discovery `ConfigMap` using the `ConfigMapBuilder` of the https://github.com/stackabletech/operator-rs[`operator-rs`{external-link-icon}^] framework:

[source,rust]
----
@@ -83,7 +83,7 @@ fn env_var_from_cm(name: &str, configmap_name: &str) -> EnvVar {
}
----

-The returned `EnvVar` then can be added to a `Pod` container and used in the `command` or `args` field using the https://github.com/stackabletech/operator-rs[`operator-rs`] framework container builder:
+The returned `EnvVar` can then be added to a `Pod` container and used in the `command` or `args` field using the https://github.com/stackabletech/operator-rs[`operator-rs`{external-link-icon}^] framework container builder:

[source,rust]
----
@@ -129,9 +129,9 @@ The retrieved connection string can be used to configure the product to connect

== Existing libraries

-Currently, there is not much support from the https://github.com/stackabletech/operator-rs[`operator-rs`] framework to assist with service discovery. The related code is mostly contained in each operator and similar to the examples above.
+Currently, there is not much support from the https://github.com/stackabletech/operator-rs[`operator-rs`{external-link-icon}^] framework to assist with service discovery. The related code is mostly contained in each operator and is similar to the examples above.

The following list indicates support for certain products or helper methods:

- `ConfigMapBuilder` in combination with `ObjectMetaBuilder` assists with building the discovery `ConfigMap`
-- `OPA`: The https://github.com/stackabletech/operator-rs[`operator-rs`] framework has a module called `opa.rs` that supports the creation of the data API connection string
+- `OPA`: The https://github.com/stackabletech/operator-rs[`operator-rs`{external-link-icon}^] framework has a module called `opa.rs` that supports the creation of the data API connection string
diff --git a/modules/contributor/pages/guidelines/webhook-server.adoc b/modules/contributor/pages/guidelines/webhook-server.adoc
index 4fcdac025..88185ce8f 100644
--- a/modules/contributor/pages/guidelines/webhook-server.adoc
+++ b/modules/contributor/pages/guidelines/webhook-server.adoc
@@ -70,7 +70,7 @@ The `stackable-webhook` library provides a ready-to-use `ConversionWebhookServer

[NOTE]
====
-The `#[tokio::main]` attribute is only available when the https://docs.rs/tokio/latest/tokio/#feature-flags[`macros`] feature is enabled.
+The `#[tokio::main]` attribute is only available when the https://docs.rs/tokio/latest/tokio/#feature-flags[`macros`{external-link-icon}^] feature is enabled.
Using the `full` feature flag includes the `macros` flag.
====
diff --git a/modules/contributor/pages/index.adoc b/modules/contributor/pages/index.adoc
index a20b158ee..39e7fc1c9 100644
--- a/modules/contributor/pages/index.adoc
+++ b/modules/contributor/pages/index.adoc
@@ -6,16 +6,16 @@ Welcome to Stackable, we're happy to have your contributions!

Contributions can come in many different ways, and this document is the entry point that points you in the right direction to get your contribution posted as soon as possible.

-* First of all, if you have a **question** and something is unclear and you couldn't find the information you needed, ask a question on https://github.com/orgs/stackabletech/discussions[GitHub discussions].
+* First of all, if you have a **question** because something is unclear or you couldn't find the information you needed, ask it on https://github.com/orgs/stackabletech/discussions[GitHub discussions{external-link-icon}^].
This is the first place you should go to if you don't know where to start.
* If you found a **bug or a feature request** and you already know where it would need to go, _search for similar issues first_!
- If you cannot find anything, {gh-create-issue}[create an issue] in the repository that belongs to the component where you found the bug or have the feature request.
+ If you cannot find anything, {gh-create-issue}[create an issue{external-link-icon}^] in the repository that belongs to the component where you found the bug or have the feature request.
 You can also have a look at the xref:project-overview.adoc[] to find out which repository might be the right place to go to.
 An issue is also the right tool if you have a suggestion for a fix for a bug but first want to report the bug to raise awareness.
 When creating a new issue, please provide as much information as you consider relevant.
 Issues can be bug reports, feature requests and so on.
 The Stackable repositories provide templates to make it easier to submit high-quality issues.
-* If you are already familiar with the xref:project-overview.adoc[] and you have a **particular fix or feature to contribute in code**, you can {gh-pr}[create a pull request] in the specific repository.
+* If you are already familiar with the xref:project-overview.adoc[] and you have a **particular fix or feature to contribute in code**, you can {gh-pr}[create a pull request{external-link-icon}^] in the specific repository.
 Again, it is useful to first do a quick search to see if there is already an issue or other pull request that is similar to yours.
 If that is the case, consider contributing to the existing issue by either adding new feedback or code.
 The steps to contribute a pull request are outlined below, and you should also consult the xref:contributing-code.adoc[] guidelines to make sure your contribution isn't missing anything and follow the <> below.
@@ -28,8 +28,8 @@ Please see the xref:project-overview.adoc[] page to get an overview of the most

[[contributing-workflow]]
== General pull request guidelines

-All our development is done on https://github.com/stackabletech[GitHub] and contributions should be made through {gh-pr}[creating pull requests],
-follow the GitHub instructions on how to do this.
+All our development is done on https://github.com/stackabletech[GitHub{external-link-icon}^] and contributions should be made through {gh-pr}[creating pull requests{external-link-icon}^];
+follow the GitHub instructions on how to do this.
If you are an external contributor, you will need to fork the repository where you want your change to be made.

=== Signed commits

@@ -37,12 +37,12 @@ If you are an external contributor, you will need to fork the repository where y
As a supply chain security policy, all commits and tags in Stackable repositories need to be signed.
Signed commits ensure authenticity by verifying that a commit has indeed been made by a certain person, and integrity by making sure that no data has been changed after the fact.

-Read https://stackable.tech/en/notes-on-signed-commits-with-git-and-github/[Notes on Signed Commits with Git and Github] for more information on using signed commits.
+Read https://stackable.tech/en/notes-on-signed-commits-with-git-and-github/[Notes on Signed Commits with Git and Github{external-link-icon}^] for more information on using signed commits.

=== Instructions

Please make sure that you base your pull request on the latest changes in the `main` branch of the repository if it is a general change you want to see added to the platform, or off of a specific release branch (named `release-23.11` for example) if you want to contribute a fix that is specific to a release.
-At Stackable we use a branch structure based on https://trunkbaseddevelopment.com/[trunk based development].
+At Stackable we use a branch structure based on https://trunkbaseddevelopment.com/[trunk based development{external-link-icon}^].

==== Review preparation

diff --git a/modules/contributor/pages/project-overview.adoc b/modules/contributor/pages/project-overview.adoc
index 70363a0d0..ded323dfc 100644
--- a/modules/contributor/pages/project-overview.adoc
+++ b/modules/contributor/pages/project-overview.adoc
@@ -7,28 +7,28 @@ This page gives you a high-level overview of all the technical bits in the Stack

[[repositories]]
== Repositories

-On GitHub you can find more than a 100 repositories in the https://github.com/orgs/stackabletech/repositories[stackabletech organization].
+On GitHub you can find more than 100 repositories in the https://github.com/orgs/stackabletech/repositories[stackabletech organization{external-link-icon}^].
Below you will find an overview of the majority of these repositories and how they relate to each other.

[[operator-repositories]]
=== Operator repositories, templating, operator-rs

-At the core of the Stackable Platform are the Kubernetes operators used to install and manage various data products, like the https://github.com/stackabletech/nifi-operator[nifi-operator] for example.
-You can find all of the operators if you https://github.com/orgs/stackabletech/repositories?q=operator[search the organization repositories].
+At the core of the Stackable Platform are the Kubernetes operators used to install and manage various data products, such as the https://github.com/stackabletech/nifi-operator[nifi-operator{external-link-icon}^].
+You can find all of the operators if you https://github.com/orgs/stackabletech/repositories?q=operator[search the organization repositories{external-link-icon}^].

image::project-overview-operators.drawio.svg[]

-All the operators are written in https://www.rust-lang.org/[Rust] and the source code is found in the `rust` directory.
-`tests` contains the integration tests which use https://kuttl.dev/[kuttl] and our own test template https://github.com/stackabletech/beku.py[beku.py].
-Documentation is written in https://antora.org/[Antora] and found in the `docs` directory, see also <> further down the page.
+All the operators are written in https://www.rust-lang.org/[Rust{external-link-icon}^] and the source code is found in the `rust` directory.
+`tests` contains the integration tests, which use https://kuttl.dev/[kuttl{external-link-icon}^] and our own test templating tool https://github.com/stackabletech/beku.py[beku.py{external-link-icon}^].
+Documentation is written in https://antora.org/[Antora{external-link-icon}^] and found in the `docs` directory; see also <> further down the page.
`deploy` and `docker` contain files used to package the operator into a Docker image and Helm chart.
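Since beku.py comes up repeatedly in these hunks, here is a hedged sketch of the kind of test definition it expands; the file name, dimension names and versions are assumptions:

[source,yaml]
----
# tests/test-definition.yaml (shape and values assumed)
dimensions:
  - name: product-version
    values:
      - 3.9.1
      - 4.0.0
tests:
  - name: smoke
    dimensions:
      - product-version   # the smoke test is templated once per listed version
----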
Some files in these repositories are actually _templated_:
-The https://github.com/stackabletech/operator-templating[operator-templating] repository contains a template for all operator repositories, where shared files are distributed from.
+The https://github.com/stackabletech/operator-templating[operator-templating{external-link-icon}^] repository contains a template for all operator repositories, where shared files are distributed from.
You can read the README in that repository to find out more about the details.
Whenever common files are changed, a GitHub action is used to distribute the changes to all operator repositories.

-The https://github.com/stackabletech/operator-rs/[operator-rs] repository contains the common framework library for all operators.
+The https://github.com/stackabletech/operator-rs/[operator-rs{external-link-icon}^] repository contains the common framework library for all operators.
It is a Rust library that is used by all operators and contains shared structs and shared functionality.

[[docker-images-repository]]
@@ -36,14 +36,14 @@ It is a Rust library that is used by all operators and contains shared structs a

image::project-overview-docker-images.drawio.svg[]

-The https://github.com/stackabletech/docker-images/[docker-images] repository contains Dockerfiles for all the products that are supported by the SDP.
+The https://github.com/stackabletech/docker-images/[docker-images{external-link-icon}^] repository contains Dockerfiles for all the products that are supported by the SDP.
The actual product artifacts are pulled from the <> and packaged into images.
The images are pushed into an <>.

[[management-tooling]]
=== Management tooling: stackablectl, stackable-cockpit

-The `stackablectl` commandline tool and the Stackable Cockpit UI are both found in the https://github.com/stackabletech/stackable-cockpit[stackable-cockpit] repository, and they both share some code.
+The `stackablectl` command-line tool and the Stackable Cockpit UI are both found in the https://github.com/stackabletech/stackable-cockpit[stackable-cockpit{external-link-icon}^] repository, and they both share some code.
The structure of the repository is documented in its README.

[[documentation]]
@@ -51,29 +51,29 @@ The structure of the repository is documented in its README.

image::project-overview-documentation.drawio.svg[]

-The documentation is built with https://antora.org/[Antora] and the playbook file to build it is located in the https://github.com/stackabletech/documentation[documentation] repository, among some common platform documentation.
-The UI for the documentation is found in the https://github.com/stackabletech/documentation-ui[documentation-ui] repository; it is included as a submodule in the documentation repository.
+The documentation is built with https://antora.org/[Antora{external-link-icon}^] and the playbook file to build it is located in the https://github.com/stackabletech/documentation[documentation{external-link-icon}^] repository, alongside some common platform documentation.
+The UI for the documentation is found in the https://github.com/stackabletech/documentation-ui[documentation-ui{external-link-icon}^] repository; it is included as a submodule in the documentation repository.
The documentation pulls in operator documentation files from the operator repositories.
-The documentation is found at https://docs.stackable.tech/.
+The documentation is found at https://docs.stackable.tech/[https://docs.stackable.tech/{external-link-icon}^].
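As a rough sketch, building the documentation locally might look like the following; the playbook file name and the use of `npx` are assumptions, so check the repository README for the authoritative steps:

[source,bash]
----
git clone https://github.com/stackabletech/documentation.git
cd documentation

# Fetch the documentation-ui submodule mentioned above
git submodule update --init

# Run Antora against the playbook (the file name is an assumption)
npx antora antora-playbook.yml
----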
-There is also https://crds.stackable.tech/ where you can find generated documentation for all the CustomResourceDefinitions on the platform.
-The code to generate this page is found in the https://github.com/stackabletech/crddocs[crddocs] repository.
+There is also https://crds.stackable.tech/[https://crds.stackable.tech/{external-link-icon}^] where you can find generated documentation for all the CustomResourceDefinitions on the platform.
+The code to generate this page is found in the https://github.com/stackabletech/crddocs[crddocs{external-link-icon}^] repository.

[[tooling-repositories]]
=== Tooling repositories

On top of the mentioned repositories, there are various smaller tools or product extensions that Stackable developed; they are all linked to from relevant places.
-Two examples are the https://github.com/stackabletech/image-tools[image-tools] used to build Docker images and the https://github.com/stackabletech/druid-opa-authorizer/[druid-opa-authorizer] which is a Druid extension that enables OPA support for Druid.
+Two examples are the https://github.com/stackabletech/image-tools[image-tools{external-link-icon}^] used to build Docker images and the https://github.com/stackabletech/druid-opa-authorizer/[druid-opa-authorizer{external-link-icon}^], which is a Druid extension that enables OPA support for Druid.

[[infrastructure-repositories]]
=== Infrastructure: T2

-https://github.com/stackabletech/t2[T2 - Test & Troubleshoot Platform] is used for integration testing across different versions and cloud providers, find more information in the README of the repository.
+https://github.com/stackabletech/t2[T2 - Test & Troubleshoot Platform{external-link-icon}^] is used for integration testing across different versions and cloud providers; find more information in the README of the repository.

[[issues-repository]]
=== Issues

-The https://github.com/stackabletech/issues[issues] repository exists solely for the purpose of tracking issues related to the Stackable Platform in general.
+The https://github.com/stackabletech/issues[issues{external-link-icon}^] repository exists solely for the purpose of tracking issues related to the Stackable Platform in general.
Large topics that impact many or even all of the platform components are discussed here.
There is no code in this repository.
@@ -85,7 +85,7 @@ Where are binaries, Helm Charts and Docker images stored?

[[product-artifacts]]
=== Product artifacts

-A lot of artifacts are stored in the https://oci.stackable.tech[OCI registry]. Currently, those artifacts can be browsed by calling the API.
+A lot of artifacts are stored in the https://oci.stackable.tech[OCI registry{external-link-icon}^]. Currently, those artifacts can be browsed by calling the API.

The following command lists all the different projects the artifacts are distributed across:

@@ -101,7 +101,7 @@ stackable-charts
----

`sdp` contains the product and operator Docker images. The Helm Charts for the operators are found under `sdp-charts`. Some artifacts like the
-product binaries are stored in the https://repo.stackable.tech/#browse/browse[Nexus repo] under `packages`.
+product binaries are stored in the https://repo.stackable.tech/#browse/browse[Nexus repo{external-link-icon}^] under `packages`.

List the Helm Charts in `sdp-charts`:

@@ -130,7 +130,7 @@ $ curl -X GET --header 'Accept: application/json' 'https://oci.stackable.tech/ap

[[docker-images]]
=== Docker images

-Docker images are stored in https://oci.stackable.tech as mentioned above.
To list all the available repositories in a project, for example in
+Docker images are stored in https://oci.stackable.tech[https://oci.stackable.tech{external-link-icon}^] as mentioned above. To list all the available repositories in a project, for example in
the `sdp` project, run this command:

[source,console]
@@ -174,7 +174,7 @@ $ curl -X GET --header 'Accept: application/json' 'https://oci.stackable.tech/ap

Similar to the previous command, the API call uses pagination again. So the `page` value in the command can be incremented to see more results. Here the `page_size` parameter was also used to increase the results per page.

-Another possibility, instead of using `curl`, would be the https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md[crane tool], which can also be used
+Another possibility, instead of using `curl`, would be the https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md[crane tool{external-link-icon}^], which can also be used
to browse the tags when given the path to a repository.

[source,console]
diff --git a/modules/contributor/pages/testing-infrastructure.adoc b/modules/contributor/pages/testing-infrastructure.adoc
index a6380f9d1..a09e591eb 100644
--- a/modules/contributor/pages/testing-infrastructure.adoc
+++ b/modules/contributor/pages/testing-infrastructure.adoc
@@ -1,6 +1,6 @@
= Understanding the integration testing infrastructure
:beku: https://github.com/stackabletech/beku.py
-:ci: https://ci.stackable.tech/
+:ci: https://testing.stackable.tech/
:demos: https://github.com/stackabletech/demos
:kind: https://kind.sigs.k8s.io/
:kuttl: https://kuttl.dev/
@@ -9,8 +9,8 @@
Contributors are encouraged to write tests for their operators.
For small bug fixes and enhancements, unit tests are preferred.
Larger developments should have integration tests.
-Integration tests are declarative in form of manifest files that are executed by {kuttl}[KuTTL] on a Kubernetes cluster.
-Each test case is comprised of multiple steps, each with a setup and an assertion file.
+Integration tests are declarative in the form of manifest files that are executed by {kuttl}[KuTTL{external-link-icon}^] on a Kubernetes cluster.
+Each test case is composed of multiple steps, each with a setup and an assertion file.
Furthermore, to support a high-dimensional matrix of configurations, the manifest files are partly templated as Jinja2 files.
This allows, for example, testing the behaviour of multiple product versions with and without various features enabled.
Having the manifest files as Jinja2 templates avoids duplication and makes the tests more maintainable.
@@ -19,13 +19,13 @@ Having the manifest files as Jinja2 templates avoids duplication and makes the t

To run the integration tests from your local machine, you need to have the following tools installed:

-* {python}[Python] to run scripts.
-* {kuttl}[KuTTL] to run the tests.
-* {beku}[beku] our custom test expander.
+* {python}[Python{external-link-icon}^] to run scripts.
+* {kuttl}[KuTTL{external-link-icon}^] to run the tests.
+* {beku}[beku{external-link-icon}^], our custom test expander.
* xref:management:stackablectl:installation.adoc[stackablectl] to install operators.

A Kubernetes cluster is needed to run the tests.
You can use {kind}[kind{external-link-icon}^] to create one on your local machine.
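For example, a throwaway cluster for a test run can be created like this (the cluster name is arbitrary):

[source,bash]
----
# Create a local Kubernetes cluster to run the integration tests against
kind create cluster --name stackable-tests

# Verify that kubectl now points at the new cluster
kubectl cluster-info --context kind-stackable-tests
----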
== Running the integration tests

@@ -86,7 +86,7 @@ tests

== How to run tests in the CI (Jenkins)

-Stackable operators a {ci}[Jenkins instance] where the tests are ran every night.
+Stackable operates a {ci}[Jenkins instance{external-link-icon}^] where the tests are run every night.
You can also run the tests manually there, for a particular pull request, if you have an account (only internal contributors).

== Adding a new test
@@ -96,4 +96,4 @@ Have a look at other tests to familiarize yourself with the general pattern of s

== Further reading

-Also have a look at the {demos}[Demos] for another way to deploy Stackable components and test them.
+Also have a look at the {demos}[Demos{external-link-icon}^] for another way to deploy Stackable components and test them.
diff --git a/modules/contributor/pages/testing-on-kubernetes.adoc b/modules/contributor/pages/testing-on-kubernetes.adoc
index 1955ba6cc..7f2f2b7b0 100644
--- a/modules/contributor/pages/testing-on-kubernetes.adoc
+++ b/modules/contributor/pages/testing-on-kubernetes.adoc
@@ -9,21 +9,21 @@ Also, if you need to make changes to an operator and the framework at the same t

For these reasons we have created a developer-focused deployment mechanism that allows for easy local development, while still enabling full-scale testing in an actual Kubernetes cluster.

-The main tool that is used for enabling these short feedback loops is called https://tilt.dev/[Tilt].
+The main tool that is used for enabling these short feedback loops is called https://tilt.dev/[Tilt{external-link-icon}^].
Tilt is a tool that continuously monitors your local codebase and automatically deploys any changes you make to the Kubernetes cluster defined by your current kubeconfig.
Effectively this means that when you have reached a state in your code that you would like to deploy to Kubernetes to look at more in depth, all you need to do is ... nothing - it has already been built, packaged and deployed in the background.

A very important prerequisite for this, of course, is short build times!
-To shorten these, we have settled on a tool called https://github.com/kolloch/crate2nix[crate2nix].
-This tool uses the https://nixos.org/[Nix package manager] to cache intermediate build steps and only recompile what has actually changed, thus significantly shortening build times.
+To shorten these, we have settled on a tool called https://github.com/kolloch/crate2nix[crate2nix{external-link-icon}^].
+This tool uses the https://nixos.org/[Nix package manager{external-link-icon}^] to cache intermediate build steps and only recompile what has actually changed, thus significantly shortening build times.

== Installation

Due to the nature of how Nix works, all the setup steps are defined in the operator repositories and automatically applied when you start using this workflow.
-The only prerequisite you need to install is the actual Nix package manager - you can find installation instructions and additional documentation on the https://nixos.org/download.html[Nix website].
+The only prerequisite you need to install is the actual Nix package manager - you can find installation instructions and additional documentation on the https://nixos.org/download.html[Nix website{external-link-icon}^].
**TL/DR**

[source,bash]
----
sh <(curl -L https://nixos.org/nix/install) --daemon
----

-If you don't want to run an arbitrary shellscript directly from the web, have a look at how to https://nixos.org/manual/nix/stable/installation/installing-binary#installing-from-a-binary-tarball[install from a binary distribution] or the list of https://nix-community.github.io/nix-installers/[maintained packages].
+If you don't want to run an arbitrary shell script directly from the web, have a look at how to https://nixos.org/manual/nix/stable/installation/installing-binary#installing-from-a-binary-tarball[install from a binary distribution{external-link-icon}^] or the list of https://nix-community.github.io/nix-installers/[maintained packages{external-link-icon}^].

After this is done, you also need to add a setting to your Nix config in `/etc/nix/nix.conf`:

----
@@ -52,13 +52,13 @@ Just installing Nix does not affect your system much, as it keeps all its config

The Docker images need to be built on a Linux host.
Nix can automatically delegate the build to a remote worker, but it must be configured to do so.
-https://github.com/stackabletech/nix-docker-builder can set this up for you.
+https://github.com/stackabletech/nix-docker-builder[https://github.com/stackabletech/nix-docker-builder{external-link-icon}^] can set this up for you.

== Using

The build and deploy steps for installing and running the operator are defined in the `Tiltfile` in the operators repository.
We do encourage you to check out this file if you are interested in how things work under the hood, but you can also just use the command provided below and everything should _just work_.
-For more context on how to read this file please have a look at the https://docs.tilt.dev/api.html[Tiltfile API Reference], which is based on https://github.com/bazelbuild/starlark/blob/32993fa0d1f1e4f3af167d249be95885ba5014ad/spec.md[Starlark].
+For more context on how to read this file, please have a look at the https://docs.tilt.dev/api.html[Tiltfile API Reference{external-link-icon}^], which is based on https://github.com/bazelbuild/starlark/blob/32993fa0d1f1e4f3af167d249be95885ba5014ad/spec.md[Starlark{external-link-icon}^].

We provide a target in the Makefile to start everything up:

@@ -86,11 +86,11 @@ You can now either hit the spacebar to open the Tilt user interface, or manually

NOTE: The port used will be different for every repository from the Stackable organisation, in order to allow running multiple deployment workflows at the same time without getting port conflicts.

=== Configuring the Registry Used
-If you are using a local Kubernetes like Kind, K3s or similar for your development Tilt will work right out of the box for you, as it will directly push the images to your local Kubernetes cluster (see https://docs.tilt.dev/personal_registry.html for more information).
+If you are using a local Kubernetes like Kind, K3s or similar for your development, Tilt will work right out of the box for you, as it will directly push the images to your local Kubernetes cluster (see https://docs.tilt.dev/personal_registry.html[https://docs.tilt.dev/personal_registry.html{external-link-icon}^] for more information).
Due to the way that images are pushed to Kind this can be fairly inefficient, as the whole image will need to be pushed to every Kind node every time, not just changed layers once.
-To work around this, Kind can be https://kind.sigs.k8s.io/docs/user/local-registry/[set up] to use a local registry, to which Tilt can then push the images.
-The easiest way that we found to do this is by using https://github.com/tilt-dev/ctlptl[ctlptl] which enables you to easily set up local Kind, K3s or minikube clusters with a local registry.
+To work around this, Kind can be https://kind.sigs.k8s.io/docs/user/local-registry/[set up{external-link-icon}^] to use a local registry, to which Tilt can then push the images.
+The easiest way that we found to do this is by using https://github.com/tilt-dev/ctlptl[ctlptl{external-link-icon}^], which enables you to easily set up local Kind, K3s or minikube clusters with a local registry.
Tilt should then automatically discover this registry from the cluster config and push images there.

If you are using a remote cluster, Tilt will push the generated container images to a remote registry, in order for your Kubernetes to be able to access them.
diff --git a/modules/guides/pages/custom-images.adoc b/modules/guides/pages/custom-images.adoc
index d12c3fd14..6d32d9303 100644
--- a/modules/guides/pages/custom-images.adoc
+++ b/modules/guides/pages/custom-images.adoc
@@ -21,7 +21,7 @@ To use a customized image, you need to:

The Stackable operators rely on the structure and contents of the product images, so any modifications need to be done using the Stackable images as base images.

-You can find the Stackable Docker images in the {stackable-oci-registry}[Stackable OCI registry].
+You can find the Stackable Docker images in the {stackable-oci-registry}[Stackable OCI registry{external-link-icon}^].
Images follow a naming schema: `oci.stackable.tech/sdp/<product>:<productVersion>-stackable<stackableVersion>` where `<product>` includes products like `druid`, `trino`, and `opa`, `<productVersion>` are product versions like `28.0.1` (i.e. Apache Druid 28.0.1), `414`, or `0.61.0`, and `<stackableVersion>` is a Stackable platform version like `25.3.0` or `25.7.0`.
The Stackable version can also be `0.0.0-dev` for nightly images.
You can use this naming schema together with the xref:operators:supported_versions.adoc[] list to quickly find the base image you need.
@@ -40,7 +40,7 @@ For example, for a custom image with a MySQL driver added, you might tag your image

To deploy containers using this image, the Kubernetes cluster needs to be able to access the image.
You can either upload the image into a custom registry and pull it from there — refer to your registry documentation on how to do this — or make the image available to the Kubernetes cluster directly.
-For example, in {kind}[`kind`], you can use the {kind-load-image}[`kind load docker-image`] command to load a local image into the Kind cluster.
+For example, in {kind}[`kind`{external-link-icon}^], you can use the {kind-load-image}[`kind load docker-image`{external-link-icon}^] command to load a local image into the Kind cluster.

=== Use your customized image in your Stacklet definition
@@ -62,5 +62,5 @@ With this configuration, the operator deploys your Stacklet using your custom im

== Further reading and useful links

* Read about xref:concepts:product-image-selection.adoc[] to learn about other ways of specifying a product version or images, for example, how to use a custom registry when mirroring Stackable images.
-* Have a look at the {stackable-oci-registry}[Stackable OCI registry] to find out which images are available to use as a base.
+* Have a look at the {stackable-oci-registry}[Stackable OCI registry{external-link-icon}^] to find out which images are available to use as a base.
Information on how to browse the registry can be found xref:contributor:project-overview.adoc#docker-images[here].
diff --git a/modules/guides/pages/enabling-verification-of-image-signatures.adoc b/modules/guides/pages/enabling-verification-of-image-signatures.adoc
index df0171028..02d227b23 100644
--- a/modules/guides/pages/enabling-verification-of-image-signatures.adoc
+++ b/modules/guides/pages/enabling-verification-of-image-signatures.adoc
@@ -2,8 +2,8 @@
:page-aliases: tutorials:enabling-verification-of-image-signatures.adoc
:description: Learn to enable and verify image signatures in Kubernetes using Sigstore’s Policy Controller, ensuring image authenticity and security in your cluster.

-Image signing is a security measure that helps ensure the authenticity and integrity of container images. Starting with SDP 23.11, all our images are signed https://docs.sigstore.dev/cosign/openid_signing/["keyless"]. By verifying these signatures, cluster administrators can ensure that the images pulled from Stackable's container registry are authentic and have not been tampered with.
-Since Kubernetes does not have native support for verifying image signatures yet, we will use Sigstore's https://docs.sigstore.dev/policy-controller/overview/[Policy Controller] in this tutorial.
+Image signing is a security measure that helps ensure the authenticity and integrity of container images. Starting with SDP 23.11, all our images are signed "https://docs.sigstore.dev/cosign/openid_signing/[keyless{external-link-icon}^]". By verifying these signatures, cluster administrators can ensure that the images pulled from Stackable's container registry are authentic and have not been tampered with.
+Since Kubernetes does not have native support for verifying image signatures yet, we will use Sigstore's https://docs.sigstore.dev/policy-controller/overview/[Policy Controller{external-link-icon}^] in this tutorial.

IMPORTANT: Releases prior to SDP 23.11 do not have signed images. If you are using an older release and enforce image signature verification, Pods with Stackable images will be prevented from starting.

@@ -17,7 +17,7 @@ helm repo update
helm install policy-controller sigstore/policy-controller
----

-The default settings might not be appropriate for your environment, please refer to the https://artifacthub.io/packages/helm/sigstore/policy-controller[configurable values for the Helm chart] for more information.
+The default settings might not be appropriate for your environment; please refer to the https://artifacthub.io/packages/helm/sigstore/policy-controller[configurable values for the Helm chart{external-link-icon}^] for more information.

== Creating a policy to verify image signatures

@@ -41,26 +41,26 @@ kubectl label namespace stackable policy.sigstore.dev/include=true
----

The Policy Controller checks all newly created Pods in this namespace that run any image matching `+++**+++.stackable.tech/+++**+++` (this matches images provided by Stackable) and ensures that these images have been signed by a Stackable Github Action that was tagged with a version number (meaning that this was a release version). If the signature of an image is invalid or missing, the policy will deny the pod creation.
-For a more detailed explanation of the policy options, please refer to the https://docs.sigstore.dev/policy-controller/overview/#configuring-image-patterns[Sigstore documentation].
+For a more detailed explanation of the policy options, please refer to the https://docs.sigstore.dev/policy-controller/overview/#configuring-image-patterns[Sigstore documentation{external-link-icon}^]. If the `subjectRegExp` field in the policy is changed to something like `https://github.com/test/.+`, the policy will deny the creation of pods with Stackable images because the identity of the subject that signed the image (a Stackable Github Action Workflow) will no longer match the expression specified in the policy. NOTE: If for some reason you are using our `0.0.0-dev` images, the example policy will deny the creation of Pods with these images. To allow creation of these Pods, you can for example relax the policy by changing the `subjectRegExp` field to `^https://github.com/stackabletech/.+/.github/workflows/.+@refs/tags/.+$`. This will only check if an image has been signed by any Github Action of Stackable, regardless of the version. However, this is not recommended for production. == Verifying image signatures in an air-gapped environment -As mentioned before, our images and Helm charts for SDP are signed keyless. Keyless signing is more complex than "classic" signing with a private and public key, especially when you want to verify signatures in an air-gapped environment. However, it brings several https://www.chainguard.dev/unchained/benefits-of-keyless-software-signing[benefits] and by signing our images keyless, we're also in line with Kubernetes, https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/[which uses keyless signing as well]. +As mentioned before, our images and Helm charts for SDP are signed keyless. Keyless signing is more complex than "classic" signing with a private and public key, especially when you want to verify signatures in an air-gapped environment. However, it brings several https://www.chainguard.dev/unchained/benefits-of-keyless-software-signing[benefits{external-link-icon}^] and by signing our images keyless, we're also in line with Kubernetes, https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/[which uses keyless signing as well{external-link-icon}^]. === The general setup -To verify keyless signatures, the Policy Controller needs an up-to-date version of the root of trust, which is distributed as a collection of files (to put it simply). In an online setting, these files are automatically fetched via HTTP, by default from the https://tuf-repo-cdn.sigstore.dev/[Sigstore TUF Repo CDN]. +To verify keyless signatures, the Policy Controller needs an up-to-date version of the root of trust, which is distributed as a collection of files (to put it simply). In an online setting, these files are automatically fetched via HTTP, by default from the https://tuf-repo-cdn.sigstore.dev/[Sigstore TUF Repo CDN{external-link-icon}^]. -NOTE: https://docs.sigstore.dev/signing/overview/#root-of-trust[The Update Framework (TUF)] is the mechanism used by the Policy Controller to initialize and update the root of trust. +NOTE: https://docs.sigstore.dev/signing/overview/#root-of-trust[The Update Framework (TUF){external-link-icon}^] is the mechanism used by the Policy Controller to initialize and update the root of trust. -In an air-gapped environment, this CDN is not reachable, so instead you have to provide those files yourself. You can get these files from https://github.com/sigstore/root-signing/tree/main/repository/repository[GitHub]. 
+In an air-gapped environment, this CDN is not reachable, so instead you have to provide those files yourself. You can get these files from https://github.com/sigstore/root-signing/tree/main/repository/repository[GitHub{external-link-icon}^].

There are multiple ways to provide these files to the Policy Controller; please pick the one that works best for your air-gapped environment:

* Serve them via an HTTP server that is reachable by the Policy Controller.
+
-  If you can reach a bastion host from your air-gapped environment that has internet access, configuring a reverse proxy to https://tuf-repo-cdn.sigstore.dev/ will most likely be the easiest way to go for you. This avoids the need to manually update files periodically.
+  If you can reach a bastion host from your air-gapped environment that has internet access, configuring a reverse proxy to https://tuf-repo-cdn.sigstore.dev/[https://tuf-repo-cdn.sigstore.dev/{external-link-icon}^] will most likely be the easiest way to go for you. This avoids the need to manually update files periodically.
+
If that's not possible, you can clone the TUF repository and serve it via HTTP, like so:
+
[source,bash]
@@ -70,14 +70,14 @@
cd root-signing/repository/repository
python3 -m http.server 8081
----
+
-In both cases, you can provide the host's IP address and port as the mirror URL to the policy controller. For how to do this exactly, we refer to the https://docs.sigstore.dev/policy-controller/overview/#configuring-trustroot-for-custom-tuf-root[Policy Controller's documentation].
+In both cases, you can provide the host's IP address and port as the mirror URL to the policy controller. For how to do this exactly, please refer to the https://docs.sigstore.dev/policy-controller/overview/#configuring-trustroot-for-custom-tuf-root[Policy Controller's documentation{external-link-icon}^].

-* Packing the files into an archive, serializing them and putting them directly into a the `TrustRoot` resource. This is explained in the https://docs.sigstore.dev/policy-controller/overview/#configuring-trustroot-for-custom-tuf-repository[Policy Controller's documentation] as well.
+* Packing the files into an archive, serializing them and putting them directly into the `TrustRoot` resource. This is explained in the https://docs.sigstore.dev/policy-controller/overview/#configuring-trustroot-for-custom-tuf-repository[Policy Controller's documentation{external-link-icon}^] as well.

Both options yield a `TrustRoot` custom resource, which you then need to configure in your `ClusterImagePolicy`.
This is done via the `trustRootRef` attribute, as shown https://docs.sigstore.dev/policy-controller/overview/#configuring-verification-against-different-sigstore-instances[in the Policy Controller's documentation{external-link-icon}^].

-Now there's one problem left: When starting, the Policy Controller tries to fetch the root of trust from https://tuf-repo-cdn.sigstore.dev/ by default. This will obviously fail in an air-gapped environment. To circumvent this, you can either set `.webhook.extraArgs.disable-tuf` to `true` in the Helm chart values, which disables the default initialization of the TUF repository.
Or, if you configured a TUF mirror that's reachable via HTTP anyway, you can set `.webhook.extraArgs.tuf-mirror` to the URL of your mirror, to use it as the default TUF repository. In that case, you also don't have to create and configure the `TrustRoot` resource anymore. +Now there's one problem left: When starting, the Policy Controller tries to fetch the root of trust from https://tuf-repo-cdn.sigstore.dev/[https://tuf-repo-cdn.sigstore.dev/{external-link-icon}^] by default. This will obviously fail in an air-gapped environment. To circumvent this, you can either set `.webhook.extraArgs.disable-tuf` to `true` in the Helm chart values, which disables the default initialization of the TUF repository. Or, if you configured a TUF mirror that's reachable via HTTP anyway, you can set `.webhook.extraArgs.tuf-mirror` to the URL of your mirror, to use it as the default TUF repository. In that case, you also don't have to create and configure the `TrustRoot` resource anymore. === Updating the root of trust @@ -91,7 +91,7 @@ If you provide the files as serialized repository in the `TrustRoot` resource, t There's a lot more to learn about how keyless signing and verification works. We recommend the following resources: -* https://docs.sigstore.dev/signing/overview/ -* https://docs.sigstore.dev/policy-controller/overview/ -* https://www.chainguard.dev/unchained/life-of-a-sigstore-signature -* https://blog.sigstore.dev/why-you-cant-use-sigstore-without-sigstore-de1ed745f6fc/ +* https://docs.sigstore.dev/signing/overview/[https://docs.sigstore.dev/signing/overview/{external-link-icon}^] +* https://docs.sigstore.dev/policy-controller/overview/[https://docs.sigstore.dev/policy-controller/overview/{external-link-icon}^] +* https://www.chainguard.dev/unchained/life-of-a-sigstore-signature[https://www.chainguard.dev/unchained/life-of-a-sigstore-signature{external-link-icon}^] +* https://blog.sigstore.dev/why-you-cant-use-sigstore-without-sigstore-de1ed745f6fc/[https://blog.sigstore.dev/why-you-cant-use-sigstore-without-sigstore-de1ed745f6fc/{external-link-icon}^] diff --git a/modules/guides/pages/kubernetes-cluster-domain.adoc b/modules/guides/pages/kubernetes-cluster-domain.adoc index e967fb390..d53ecde1d 100644 --- a/modules/guides/pages/kubernetes-cluster-domain.adoc +++ b/modules/guides/pages/kubernetes-cluster-domain.adoc @@ -3,7 +3,7 @@ :dns-custom-nameservers: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ :dns-pod-service: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ -Stackable operators allow the configuration of a non-default cluster domain as described in {dns-custom-nameservers}[Customizing DNS Service] (and more in {dns-pod-service}[DNS for Services and Pods]). +Stackable operators allow the configuration of a non-default cluster domain as described in {dns-custom-nameservers}[Customizing DNS Service{external-link-icon}^] (and more in {dns-pod-service}[DNS for Services and Pods{external-link-icon}^]). The cluster domain can be configured using an environment variable `KUBERNETES_CLUSTER_DOMAIN` set on the operators. This environment variable can be configured via the helm values property `kubernetesClusterDomain` during the installation of the operators. 
@@ -12,6 +12,6 @@ This environment variable can be configured via the helm values property `kubern

```
helm install <product>-operator stackable-stable/<product>-operator --set kubernetesClusterDomain="my.domain"
```

-You can also specify a custom cluster domain with a trailing dot (`my.domain.` instead of `my.domain`) to reduce the number of DNS requests under certain conditions (see https://github.com/stackabletech/issues/issues/656 for details). Note however that support for this is still considered experimental.
+You can also specify a custom cluster domain with a trailing dot (`my.domain.` instead of `my.domain`) to reduce the number of DNS requests under certain conditions (see https://github.com/stackabletech/issues/issues/656[https://github.com/stackabletech/issues/issues/656{external-link-icon}^] for details). Note, however, that support for this is still considered experimental.

If the environment variable `KUBERNETES_CLUSTER_DOMAIN` (or the helm property `kubernetesClusterDomain`) is not set / overridden, the operator will default the cluster domain to `cluster.local`.
diff --git a/modules/guides/pages/providing-resources-with-pvcs.adoc b/modules/guides/pages/providing-resources-with-pvcs.adoc
index 7c6cd82d8..97578980b 100644
--- a/modules/guides/pages/providing-resources-with-pvcs.adoc
+++ b/modules/guides/pages/providing-resources-with-pvcs.adoc
@@ -9,9 +9,9 @@ Several of the tools on the Stackable platform can use external resources that t
Airflow users can access DAG jobs this way, and Spark users can do the same for data or other job dependencies, to name just two examples.

A PersistentVolume will usually be provisioned by the Kubernetes Container Storage Interface (CSI) on behalf of the cluster administrator, who will take into account the type of storage that is required.
-This will include, for example, an {pvc-capacity}[appropriate sizing], and relevant access modes (which in turn are dependent on the StorageClass chosen to back the PersistentVolume).
+This will include, for example, an {pvc-capacity}[appropriate sizing{external-link-icon}^], and relevant access modes (which in turn are dependent on the StorageClass chosen to back the PersistentVolume).
-The relationship between a PersistentVolume and a PersistentVolumeClaim can be illustrated by these https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume[two] https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim[examples]:
+The relationship between a PersistentVolume and a PersistentVolumeClaim can be illustrated by these https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume[two{external-link-icon}^] https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim[examples{external-link-icon}^]:

[source,yaml]
----
@@ -49,14 +49,14 @@ spec:
----
<1> The name of the storage class, which will be used by the PersistentVolumeClaim
<2> The capacity of the PersistentVolume
-<3> a list of https://kubernetes.io/docs/concepts/storage/persistent-volumes/?force_isolation=true#access-modes[access modes]
+<3> A list of https://kubernetes.io/docs/concepts/storage/persistent-volumes/?force_isolation=true#access-modes[access modes{external-link-icon}^]
<4> The storageClassName which is used to match a PersistentVolume to a claim
<5> The specific quantity of the resource that is being claimed

== Access modes and the StorageClass

-Not all storage classes support all {pvc-access-modes}[access modes].
-The supported access modes also depend on the Kubernetes implementation, see for example the compatiblity table https://docs.openshift.com/container-platform/4.8/storage/understanding-persistent-storage.html#pv-access-modes_understanding-persistent-storage[Supported access modes for PVs] in the OpenShift documentation. Other managed Kubernetes implementations will be similar, albeit with different default storage class names.
+Not all storage classes support all {pvc-access-modes}[access modes{external-link-icon}^].
+The supported access modes also depend on the Kubernetes implementation; see for example the compatibility table https://docs.openshift.com/container-platform/4.8/storage/understanding-persistent-storage.html#pv-access-modes_understanding-persistent-storage[Supported access modes for PVs{external-link-icon}^] in the OpenShift documentation. Other managed Kubernetes implementations will be similar, albeit with different default storage class names.

The important point is that the default StorageClass only supports `ReadWriteOnce`, which limits access to the PersistentVolumeClaim to a single node.
A strategy governing PersistentVolumeClaim resources will thus address the following:
@@ -68,11 +68,11 @@ If a PersistentVolumeClaim should be mounted on a single node for the applicatio

== Node selection

-The Kubernetes https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[documentation] states the following with regard to assigning pods to specific nodes:
+The Kubernetes https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[documentation{external-link-icon}^] states the following with regard to assigning pods to specific nodes:

____
the scheduler will automatically do a reasonable placement (for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
____ -This suggests that resources are automatically considered when pods are assigned to nodes, but it is not clear if the same is true for implicit dependencies, such as PersistentVolumeClaim usage by multiple pods. The https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/[scheduler] will take various factors into account, such as +This suggests that resources are automatically considered when pods are assigned to nodes, but it is not clear if the same is true for implicit dependencies, such as PersistentVolumeClaim usage by multiple pods. The https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/[scheduler{external-link-icon}^] will take various factors into account, such as ____ ...individual and collective resource requirements, hardware / software / policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference.... @@ -91,10 +91,10 @@ Managed Kubernetes clusters will normally have a default storage implementation == Operator usage === Spark-k8s -Users of the xref:spark-k8s:index.adoc[Spark-k8s operator] have a variety of ways to manage SparkApplication dependencies, one of which is to xref:spark-k8s:usage-guide/examples.adoc#_pyspark_externally_located_dataset_artifact_available_via_pvcvolume_mount[mount resources on a PersistentVolumeClaim]. An example is shown https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report.yaml[here]. +Users of the xref:spark-k8s:index.adoc[Spark-k8s operator] have a variety of ways to manage SparkApplication dependencies, one of which is to xref:spark-k8s:usage-guide/examples.adoc#_pyspark_externally_located_dataset_artifact_available_via_pvcvolume_mount[mount resources on a PersistentVolumeClaim]. An example is shown https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report.yaml[here{external-link-icon}^]. == Further reading -* {pvcs}[Persistent Volumes] -* https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim[PV/PVC example] -* https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and selectors] +* {pvcs}[Persistent Volumes{external-link-icon}^] +* https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim[PV/PVC example{external-link-icon}^] +* https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and selectors{external-link-icon}^] diff --git a/modules/guides/pages/running-stackable-in-an-airgapped-environment.adoc b/modules/guides/pages/running-stackable-in-an-airgapped-environment.adoc index dd8839d4c..051955195 100644 --- a/modules/guides/pages/running-stackable-in-an-airgapped-environment.adoc +++ b/modules/guides/pages/running-stackable-in-an-airgapped-environment.adoc @@ -4,13 +4,13 @@ The main challenge with running Stackable in an air-gapped environment is how to get the artifacts (container images and Helm charts) into the environment. There are a few ways to do this: -* Mirror our images and Helm charts into a registry in the air-gapped environment. You need to find out what images are relevant for your specific Stackable deployment and transfer them to the target registry. Feel free to browse through the images in https://oci.stackable.tech/api/v2.0/projects/sdp[our registry] by using this xref:contributor:project-overview.adoc#docker-images[guide]. 
-* If possible, setup a reverse proxy to Stackable's registry on a node with internet connection that is reachable from all nodes within your air-gapped environment. You could, for example, use https://distribution.github.io/distribution/[distribution] for this. Here's a command to spin up a pull-through cache to `oci.stackable.tech` on port 5001: `docker run -d --name proxy-stackable -p 5001:5000 --restart=always -e REGISTRY_PROXY_REMOTEURL=https://oci.stackable.tech registry:2`. The registry is now available on localhost:5001 via HTTP. Once an image has been loaded, it will be cached by the proxy.
+* Mirror our images and Helm charts into a registry in the air-gapped environment. You need to find out what images are relevant for your specific Stackable deployment and transfer them to the target registry. Feel free to browse through the images in https://oci.stackable.tech/api/v2.0/projects/sdp[our registry{external-link-icon}^] by using this xref:contributor:project-overview.adoc#docker-images[guide].
+* If possible, set up a reverse proxy to Stackable's registry on a node with an internet connection that is reachable from all nodes within your air-gapped environment. You could, for example, use https://distribution.github.io/distribution/[distribution{external-link-icon}^] for this. Here's a command to spin up a pull-through cache to `oci.stackable.tech` on port 5001: `docker run -d --name proxy-stackable -p 5001:5000 --restart=always -e REGISTRY_PROXY_REMOTEURL=https://oci.stackable.tech registry:2`. The registry is now available on localhost:5001 via HTTP. Once an image has been loaded, it will be cached by the proxy.
* Download our images (e.g. using `docker save`) on a machine with internet access, copy them onto the nodes in your air-gapped environment and load them (e.g. using `ctr images import`). Then render the Helm charts using the `helm template` subcommand, copy the rendered YAML files to your air-gapped environment and apply them.

In the first two scenarios, you need to make sure that the nodes load the images from your local registry mirror. Again, there are several ways to do this:

-* Specify the image repository in the CRDs (see https://docs.stackable.tech/home/nightly/concepts/product-image-selection#_custom_docker_registry["Custom docker registry"]) and in the values of the Helm charts when installing the operators (`helm install --set image.repository="my.custom.registry/stackable/nifi-operator" ...`).
+* Specify the image repository in the CRDs (see "https://docs.stackable.tech/home/nightly/concepts/product-image-selection#_custom_docker_registry[Custom docker registry{external-link-icon}^]") and in the values of the Helm charts when installing the operators (`helm install --set image.repository="my.custom.registry/stackable/nifi-operator" ...`).
* If you use `containerd` as your container runtime, you can patch the `containerd` config on every node to use the mirrored registry instead of `oci.stackable.tech`.
+
Example: Let's assume you have a registry mirror running on `10.7.228.12`, reachable via HTTPS on port 443 using a self-signed certificate. Now copy the certificate over to your Kubernetes node; in this example we'll place it in the `/etc/pki/tls/certs` folder.
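To sketch where the next step is heading: the `hosts.toml` created below follows containerd's hosts configuration format, and for the mirror from this example it could look roughly like this (the CA file name is hypothetical):

[source,toml]
----
server = "https://oci.stackable.tech"

[host."https://10.7.228.12"]
  capabilities = ["pull", "resolve"]
  # the certificate copied to the node earlier; the file name is an example
  ca = "/etc/pki/tls/certs/mirror-ca.crt"
----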
Create the file `/etc/containerd/certs.d/oci.stackable.tech/hosts.toml` on the node, with the following contents: @@ -30,7 +30,7 @@ Modify your containerd config (usually located at `/etc/containerd/config.toml`) [plugins."io.containerd.grpc.v1.cri".registry] config_path = "/etc/containerd/certs.d" ---- -Then restart the `containerd` service. Now `containerd` will fetch all images that would normally be fetched from `oci.stackable.tech` from `10.7.228.12` instead. The registry host name is determined by the path `hosts.toml` is located in, so other registry hosts are not affected. For further information, see https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration["Registry configuration"]. +Then restart the `containerd` service. Now `containerd` will fetch all images that would normally be fetched from `oci.stackable.tech` from `10.7.228.12` instead. The registry host name is determined by the path `hosts.toml` is located in, so other registry hosts are not affected. For further information, see "https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration[Registry configuration{external-link-icon}^]". * Add an alias for `oci.stackable.tech` to the `/etc/hosts` file on every node (like `10.7.228.12 oci.stackable.tech`), issue a self-signed certificate for `oci.stackable.tech` to your registry and add the certificate to the trusted certificates on every node. Note that if you also want to enforce signature checks for Stackable's images via a policy controller, you will need to add this host alias to the Pod of the policy controller as well (and make it trust the certificate). diff --git a/modules/guides/pages/viewing-and-verifying-sboms.adoc b/modules/guides/pages/viewing-and-verifying-sboms.adoc index 6e623c19c..cf7d4bc36 100644 --- a/modules/guides/pages/viewing-and-verifying-sboms.adoc +++ b/modules/guides/pages/viewing-and-verifying-sboms.adoc @@ -2,19 +2,19 @@ :page-aliases: tutorials:viewing-and-verifying-sboms.adoc :description: Learn to view and verify SBOMs for Stackable Data Platform using the CycloneDX standards and cosign. Ensure SBOM authenticity with Policy Controller. -With release 24.3 of SDP, we started providing SBOMs (Software Bill of Materials) for our container images. Please note that they currently are in a draft stage and we are continually working on improving them. As a first step, we aim to provide a list of all primary (top level) components and their versions included in each container image. Our SBOMs follow the https://cyclonedx.org/[CycloneDX] standard and are available in JSON format. +With release 24.3 of SDP, we started providing SBOMs (Software Bill of Materials) for our container images. Please note that they currently are in a draft stage and we are continually working on improving them. As a first step, we aim to provide a list of all primary (top level) components and their versions included in each container image. Our SBOMs follow the https://cyclonedx.org/[CycloneDX{external-link-icon}^] standard and are available in JSON format. NOTE: Starting with SDP 25.7, we now embed the exact source code used to build each product directly into our images. You can find the source code in files ending with `-src.tar.gz` within the `/stackable` directory of each image. -You can browse through our SBOMs at https://sboms.stackable.tech/. +You can browse through our SBOMs at https://sboms.stackable.tech/[https://sboms.stackable.tech/{external-link-icon}^]. 
You will find a simple hierarchical structure, one directory per release, containing a list of all container images included in that release. For each container image, one SBOM per version of the image is listed.

-This page is a simple wrapper on top of the Stackable OCI registry, where the SBOMs are attached as signed https://github.com/in-toto/attestation[attestations] to the container images. When you click on a link in the SBOM browser, the SBOM is validated, extracted from the container registry, and then downloaded to your device.
+This page is a simple wrapper on top of the Stackable OCI registry, where the SBOMs are attached as signed https://github.com/in-toto/attestation[attestations{external-link-icon}^] to the container images. When you click on a link in the SBOM browser, the SBOM is validated, extracted from the container registry, and then downloaded to your device.
The next section of this guide explains the individual steps that happen under the hood when a link is clicked, and how to perform them manually.

== Verifying and extracting an SBOM manually with cosign

-To verify and extract the SBOM, a tool called https://github.com/sigstore/cosign[cosign] is needed. Please have a look at the https://docs.sigstore.dev/system_config/installation/[installation instructions] in the cosign documentation and choose your preferred installation method. Additionally, https://github.com/jqlang/jq[jq] is used to parse the JSON output of cosign.
+To verify and extract the SBOM, a tool called https://github.com/sigstore/cosign[cosign{external-link-icon}^] is needed. Please have a look at the https://docs.sigstore.dev/system_config/installation/[installation instructions{external-link-icon}^] in the cosign documentation and choose your preferred installation method. Additionally, https://github.com/jqlang/jq[jq{external-link-icon}^] is used to parse the JSON output of cosign.

With the following chain of commands, the SBOM of `airflow-operator` version `24.3.0` is verified and extracted:

@@ -34,14 +34,14 @@ Explanation of the commands and parameters:

The `--type` parameter specifies the type of the predicate, in this case `cyclonedx` for CycloneDX SBOMs.
The `--certificate-identity-regexp` parameter specifies a regular expression that is used to match the identity of the signer of the attestation. In this case, that means: The attestation must be signed by a GitHub Actions workflow run by the `stackabletech` organization (the *identity*). Now, because in general anyone could claim to be a `stackabletech` workflow run, the `--certificate-oidc-issuer` parameter ensures that this identity was actually verified by GitHub.
-If the identity of the signer matches, you can be sure the contents of the attestation are authentic and were created by one of Stackable's Github Action Workflows. `cosign verify-attestation` then prints the signed attestation to `stdout`, which is an https://github.com/in-toto/attestation[in-toto attestation] wrapped in a https://github.com/secure-systems-lab/dsse[DSSE]. The next command (`jq '.payload'`) gets the payload of the envelope, which is the base64 encoded attestation. `base64 -d` decodes it and returns the attestation in JSON format. The attestation has a `subject` attribute, which provides information about the container image the SBOM belongs to. `predicate` is the actual SBOM, which is extrated by the `jq '.predicate'` command and then printed to `stdout` in JSON format.
+If the identity of the signer matches, you can be sure the contents of the attestation are authentic and were created by one of Stackable's Github Action Workflows. `cosign verify-attestation` then prints the signed attestation to `stdout`, which is an https://github.com/in-toto/attestation[in-toto attestation{external-link-icon}^] wrapped in a https://github.com/secure-systems-lab/dsse[DSSE{external-link-icon}^]. The next command (`jq '.payload'`) gets the payload of the envelope, which is the base64 encoded attestation. `base64 -d` decodes it and returns the attestation in JSON format. The attestation has a `subject` attribute, which provides information about the container image the SBOM belongs to. `predicate` is the actual SBOM, which is extracted by the `jq '.predicate'` command and then printed to `stdout` in JSON format.

`cosign` also prints information to `stderr`, which can be used to determine further details about the verification results and the exact Github Action workflow that was used to create this attestation. You can now be sure that the SBOM was attested to the container image you're interested in by a Stackable Github Action workflow; it's even possible to look at the workflow to see how exactly this happened.

== Enabling automatic verification of SBOMs

-Similar to our xref:enabling-verification-of-image-signatures.adoc[image signature verification] guide, it's possible to enforce that only container images with SBOMs that are signed by Stackable are allowed to run in your cluster. Sigstore's https://docs.sigstore.dev/policy-controller/overview/[Policy Controller] can be used to achieve this.
+Similar to our xref:enabling-verification-of-image-signatures.adoc[image signature verification] guide, it's possible to enforce that only container images with SBOMs that are signed by Stackable are allowed to run in your cluster. Sigstore's https://docs.sigstore.dev/policy-controller/overview/[Policy Controller{external-link-icon}^] can be used to achieve this.

IMPORTANT: Releases prior to SDP 24.3 do not have signed SBOMs. If you are using an older release and enforce SBOM verification, Pods with Stackable images will be prevented from starting.

@@ -55,7 +55,7 @@ helm repo update
helm install policy-controller sigstore/policy-controller
----

-The default settings might not be appropriate for your environment, please refer to the https://artifacthub.io/packages/helm/sigstore/policy-controller[configurable values for the Helm chart] for more information.
+The default settings might not be appropriate for your environment; please refer to the https://artifacthub.io/packages/helm/sigstore/policy-controller[configurable values for the Helm chart{external-link-icon}^] for more information.

=== Creating a policy to verify SBOMs

@@ -79,4 +79,4 @@ kubectl label namespace stackable policy.sigstore.dev/include=true
----

The Policy Controller checks all newly created Pods in this namespace that run any image matching `+++**+++.stackable.tech/+++**+++` (this matches images provided by Stackable) and ensures that these images have an attested SBOM that's been signed by a Stackable Github Action. If no SBOM is present or its signature is invalid or missing, the policy will deny the pod creation.
-For a more detailed explanation of the policy options, please refer to the https://docs.sigstore.dev/policy-controller/overview/#configuring-image-patterns[Sigstore documentation].
+For a more detailed explanation of the policy options, please refer to the https://docs.sigstore.dev/policy-controller/overview/#configuring-image-patterns[Sigstore documentation{external-link-icon}^].
diff --git a/modules/operators/pages/index.adoc b/modules/operators/pages/index.adoc
index 3e7c8b15e..6de3902bf 100644
--- a/modules/operators/pages/index.adoc
+++ b/modules/operators/pages/index.adoc
@@ -3,7 +3,7 @@
:keywords: Stackable Operator, Kubernetes, operator
:k8s-operators: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/

-This section of the Stackable documentation contains information about the individual {k8s-operators}[operators] that make up the Stackable Data Platform.
+This section of the Stackable documentation contains information about the individual {k8s-operators}[operators{external-link-icon}^] that make up the Stackable Data Platform.
You can find an overview of the <> as well as <> below.
This section also contains an overview of the xref:supported_versions.adoc[supported product versions] and how to enable xref:monitoring.adoc[monitoring] in all operators.
diff --git a/modules/operators/pages/monitoring.adoc b/modules/operators/pages/monitoring.adoc
index cdeec2e00..561c35566 100644
--- a/modules/operators/pages/monitoring.adoc
+++ b/modules/operators/pages/monitoring.adoc
@@ -3,11 +3,11 @@
:prometheus-operator: https://prometheus-operator.dev/
:description: Monitor Stackable services with Prometheus. Install via Prometheus Operator or use an existing setup. Configure scraping for Kubernetes services.

-Services managed by Stackable support monitoring via {prometheus}[Prometheus].
+Services managed by Stackable support monitoring via {prometheus}[Prometheus{external-link-icon}^].

== Prometheus operator

-Stackable does not currently provide Prometheus, instead we suggest using {prometheus-operator}[Prometheus operator].
+Stackable does not currently provide Prometheus; instead, we suggest using the {prometheus-operator}[Prometheus operator{external-link-icon}^].

=== Installing Prometheus

@@ -40,4 +40,4 @@ An existing Prometheus installation can also be used to monitor Stackable servic
In this case, it should be configured to scrape Kubernetes `Service` objects with the label `prometheus.io/scrape: "true"`.
For more details, see their official documentation for
-https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config[`kubernetes_sd_config`].
+https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config[`kubernetes_sd_config`{external-link-icon}^].
diff --git a/modules/reference/pages/duration.adoc b/modules/reference/pages/duration.adoc
index 3811daf35..1fdb5bd27 100644
--- a/modules/reference/pages/duration.adoc
+++ b/modules/reference/pages/duration.adoc
@@ -6,9 +6,9 @@
:go: https://go.dev/
:description: Understand the human-readable duration format used by Stackable operators, based on Go's time.ParseDuration, with units like days, hours, minutes, and seconds.

-All Stackable operators use a human-readable duration format. It very closely resembles the format used by the {go}[Go] programming language - which Kubernetes uses internally.
-Every duration field of a {k8s-cr}[CustomResource], for example, the xref:trino:usage-guide/operations/graceful-shutdown.adoc[`spec.workers.roleConfig.gracefulShutdownTimeout`] field, supports this format.
-There is no official format specification, but the source code of {go-std-time}[`time.ParseDuration`] in the Go standard library can be used as an implementation reference.
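For the existing-Prometheus setup mentioned above, a scrape job along the following lines keeps exactly those `Service` objects that carry the `prometheus.io/scrape: "true"` label. The job name and the configuration file layout are assumptions for illustration.

[source,shell]
----
# Sketch of a scrape job for an existing Prometheus installation;
# the job name and file layout are assumptions.
cat >> prometheus.yml <<'EOF'
scrape_configs:
  - job_name: stackable-services
    kubernetes_sd_configs:
      - role: service  # discover Kubernetes Service objects
    relabel_configs:
      # keep only Services labeled prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_label_prometheus_io_scrape]
        action: keep
        regex: "true"
EOF
----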
+All Stackable operators use a human-readable duration format. It very closely resembles the format used by the {go}[Go{external-link-icon}^] programming language - which Kubernetes uses internally.
+Every duration field of a {k8s-cr}[CustomResource{external-link-icon}^], for example, the xref:trino:usage-guide/operations/graceful-shutdown.adoc[`spec.workers.roleConfig.gracefulShutdownTimeout`] field, supports this format.
+There is no official format specification, but the source code of {go-std-time}[`time.ParseDuration`{external-link-icon}^] in the Go standard library can be used as an implementation reference.

The format looks like this: `15d18h34m42s`. xref:trino:index.adoc[Trino], for example, uses it in the following way:

@@ -25,4 +25,4 @@ spec:
Valid time units are: `d`, `h`, `m`, `s`, and `ms`.
Separating the duration fragments, each of which is a tuple of time value and time unit (`15d`), by spaces is **not** supported and will result in an error.
The maximum amount of time which can safely be represented without precision loss or integer overflow is 584,942,417,355 years.
-See {rust-duration-max}[here] for more information.
+See {rust-duration-max}[here{external-link-icon}^] for more information.
diff --git a/modules/tutorials/pages/authentication_with_openldap.adoc b/modules/tutorials/pages/authentication_with_openldap.adoc
index 0dcd548b0..d2d1d537b 100644
--- a/modules/tutorials/pages/authentication_with_openldap.adoc
+++ b/modules/tutorials/pages/authentication_with_openldap.adoc
@@ -9,10 +9,10 @@ more about authentication in the Stackable Platform on the xref:concepts:authent

Prerequisites:

-* a k8s cluster available, or {kind}[kind] installed
+* a k8s cluster available, or {kind}[kind{external-link-icon}^] installed
* xref:management:stackablectl:index.adoc[] installed
* basic knowledge of how to create resources in Kubernetes (i.e. `kubectl apply -f <filename>.yaml`) and inspect them
-  (`kubectl get` or a tool like {k9s}[k9s])
+  (`kubectl get` or a tool like {k9s}[k9s{external-link-icon}^])

== Setup

@@ -354,7 +354,7 @@ The LDAP connection details only need to be written down once, in the Authentica

== Further Reading

- xref:concepts:authentication.adoc[Authentication concepts page]
-* {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/[AuthenticationClass CRD reference]
+* {crd-docs}/authentication.stackable.tech/authenticationclass/v1alpha1/[AuthenticationClass CRD reference{external-link-icon}^]
- xref:superset:getting_started/index.adoc[Getting started with the Stackable Operator for Apache Superset]
- xref:trino:getting_started/index.adoc[Getting started with the Stackable Operator for Trino]
// TODO Operator docs for LDAP
diff --git a/modules/tutorials/pages/jupyterhub.adoc b/modules/tutorials/pages/jupyterhub.adoc
index a4a8e29df..dfe97bfa3 100644
--- a/modules/tutorials/pages/jupyterhub.adoc
+++ b/modules/tutorials/pages/jupyterhub.adoc
@@ -3,12 +3,12 @@
:keywords: notebook, JupyterHub, Kubernetes, k8s, Apache Spark, HDFS, S3

This tutorial illustrates various scenarios and configuration options when using JupyterHub on Kubernetes.
-The Custom Resources and configuration settings that are discussed here are based on the xref:demos:jupyterhub-keycloak.adoc[JupyterHub-Keycloak demo], so you may find it helpful to have that demo running to reference the various https://github.com/stackabletech/demos/blob/main/stacks/jupyterhub-keycloak[Resource definitions] as you read through this tutorial.
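As a quick illustration of the duration format described above, the graceful shutdown timeout of a Trino cluster could be changed with a merge patch as sketched below; the cluster name `simple-trino` is hypothetical.

[source,shell]
----
# Sketch: set a graceful shutdown timeout using the duration format.
# The cluster name "simple-trino" is a hypothetical example.
kubectl patch trinocluster simple-trino --type merge \
  -p '{"spec":{"workers":{"roleConfig":{"gracefulShutdownTimeout":"15d18h34m42s"}}}}'
# A value like "15d 18h 34m 42s" (with spaces) would be rejected.
----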
+The Custom Resources and configuration settings that are discussed here are based on the xref:demos:jupyterhub-keycloak.adoc[JupyterHub-Keycloak demo], so you may find it helpful to have that demo running to reference the various https://github.com/stackabletech/demos/blob/main/stacks/jupyterhub-keycloak[Resource definitions{external-link-icon}^] as you read through this tutorial.
The example notebook is used to demonstrate simple read/write interactions with an S3 storage backend using Apache Spark.

== Keycloak

-Keycloak is installed using a https://github.com/stackabletech/demos/blob/main/stacks/jupyterhub-keycloak/keycloak.yaml[Deployment] that loads its realm configuration mounted as a ConfigMap.
+Keycloak is installed using a https://github.com/stackabletech/demos/blob/main/stacks/jupyterhub-keycloak/keycloak.yaml[Deployment{external-link-icon}^] that loads its realm configuration from a mounted ConfigMap.

[#services]
=== Services
@@ -207,7 +207,7 @@ For the self-signed certificate to be accepted during the handshake between Jupy

=== Realm

-The Keycloak https://github.com/stackabletech/demos/blob/main/stacks/jupyterhub-keycloak/keycloak-realm-config.yaml[realm configuration] for the demo basically contains a set of users and groups, along with a JupyterHub client definition:
+The Keycloak https://github.com/stackabletech/demos/blob/main/stacks/jupyterhub-keycloak/keycloak-realm-config.yaml[realm configuration{external-link-icon}^] for the demo contains a set of users and groups, along with a JupyterHub client definition:

[source,yaml]
----
@@ -231,7 +231,7 @@ Wildcards are used for `redirectUris` and `webOrigins`, mainly for the sake of s

=== Authentication

This tutorial covers two methods of authentication: Native and OAuth.
-Other implementations are documented https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html[here].
+Other implementations are documented https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html[here{external-link-icon}^].

==== Native Authenticator
diff --git a/modules/tutorials/pages/logging-vector-aggregator.adoc b/modules/tutorials/pages/logging-vector-aggregator.adoc
index 2d78c7a46..7ac903fe1 100644
--- a/modules/tutorials/pages/logging-vector-aggregator.adoc
+++ b/modules/tutorials/pages/logging-vector-aggregator.adoc
@@ -7,11 +7,11 @@ Logging on the Stackable Data Platform is always configured in the same way, so

Prerequisites:

-* a k8s cluster available, or https://kind.sigs.k8s.io/[kind] installed
+* a k8s cluster available, or https://kind.sigs.k8s.io/[kind{external-link-icon}^] installed
* xref:management:stackablectl:index.adoc[] installed
-* https://helm.sh/[Helm] installed to deploy Vector
+* https://helm.sh/[Helm{external-link-icon}^] installed to deploy Vector
* basic knowledge of how to create resources in Kubernetes (i.e. `kubectl apply -f <filename>.yaml`) and inspect them
-  (`kubectl get` or a tool like https://k9scli.io/[k9s])
+  (`kubectl get` or a tool like https://k9scli.io/[k9s{external-link-icon}^])

== Install the ZooKeeper operator

@@ -42,7 +42,7 @@ Deploy Vector with these values using Helm:
include::example$logging-aggregator/main.sh[tag=vector-agg]

This is a minimal working configuration. The source should be defined in this way, but you can configure different sinks, depending on your needs.
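To make the sink side of such a configuration concrete, aggregator Helm values with an Elasticsearch-compatible sink pointed at OpenSearch might look roughly like the following sketch. The chart repository alias `vector`, the release name, and the OpenSearch endpoint are assumptions.

[source,shell]
----
# Sketch of aggregator Helm values; the endpoint and chart alias are assumptions.
cat > vector-aggregator-values.yaml <<'EOF'
role: Aggregator
customConfig:
  sources:
    vector:
      type: vector          # receive log events sent by the Vector agents
      address: 0.0.0.0:6000
  sinks:
    opensearch:
      type: elasticsearch   # the Elasticsearch sink also works with OpenSearch
      inputs:
        - vector
      endpoints:
        - https://opensearch.default.svc.cluster.local:9200
      mode: bulk
EOF
helm upgrade --install vector-aggregator vector/vector \
  --values vector-aggregator-values.yaml
----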
-You can find an https://vector.dev/docs/reference/configuration/sinks/[overview] of all sinks in the Vector documentation, specifically the https://vector.dev/docs/reference/configuration/sinks/elasticsearch/[Elasticsearch] sink might be useful, it also works when configured with OpenSearch.
+You can find an https://vector.dev/docs/reference/configuration/sinks/[overview{external-link-icon}^] of all sinks in the Vector documentation. The https://vector.dev/docs/reference/configuration/sinks/elasticsearch/[Elasticsearch{external-link-icon}^] sink in particular might be useful; it also works when configured with OpenSearch.

To make the Vector aggregator discoverable to ZooKeeper, deploy a xref:concepts:service_discovery.adoc[discovery ConfigMap] called `vector-aggregator-discovery`.
Create a file called `vector-aggregator-discovery.yaml`:

@@ -117,4 +117,4 @@ Congratulations, this concludes the tutorial!

== What's next?

-Look into different sink configurations which are more suited to production use in the https://vector.dev/docs/reference/configuration/sinks/[sinks overview documetation] or learn more about how logging works on the platform in the xref:concepts:observability/logging.adoc[concepts documentation].
+Look into different sink configurations that are better suited to production use in the https://vector.dev/docs/reference/configuration/sinks/[sinks overview documentation{external-link-icon}^], or learn more about how logging works on the platform in the xref:concepts:observability/logging.adoc[concepts documentation].
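For completeness, the `vector-aggregator-discovery` ConfigMap mentioned earlier in this tutorial generally follows this shape. This is a sketch: the `ADDRESS` key and the aggregator address are assumptions that mirror the source definition used when deploying Vector.

[source,shell]
----
# Sketch of the discovery ConfigMap; the key name and address are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: vector-aggregator-discovery
data:
  ADDRESS: vector-aggregator:6000  # where the products send their log events
EOF
----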