diff --git a/.github/workflows/hugo.yaml b/.github/workflows/hugo.yaml
new file mode 100644
index 000000000..f546d1b5a
--- /dev/null
+++ b/.github/workflows/hugo.yaml
@@ -0,0 +1,47 @@
+name: 'Release(hugo): GitHub Pages'
+
+on:
+ release:
+ types: [published]
+
+env:
+ HUGO_DIR: 'docs/hugo'
+
+jobs:
+ gh-pages:
+ runs-on: ubuntu-latest
+ timeout-minutes: 10
+ concurrency:
+ group: ${{ github.workflow }}-${{ github.ref }}
+ defaults:
+ run:
+ working-directory: ${{ env.HUGO_DIR }}
+
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ lfs: true
+ fetch-depth: 0
+
+ - uses: actions/setup-go@v5
+ with:
+ go-version: '^1.23'
+
+ - name: Setup Hugo
+ uses: peaceiris/actions-hugo@v3
+ with:
+ hugo-version: '0.143.1'
+ extended: true
+
+ - name: Get hugo dependencies (theme)
+ run: hugo mod get
+
+ - name: Build
+ run: hugo --minify
+
+ - name: Deploy to gh-pages
+ uses: peaceiris/actions-gh-pages@v4
+ with:
+ github_token: ${{ secrets.GITHUB_TOKEN }}
+ publish_branch: gh-pages
+ publish_dir: docs/hugo/public
\ No newline at end of file
diff --git a/docs/hugo/.hugo_build.lock b/docs/hugo/.hugo_build.lock
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/hugo/content/__to_do__/_index.md b/docs/hugo/content/__to_do__/_index.md
new file mode 100644
index 000000000..9f8d2bc65
--- /dev/null
+++ b/docs/hugo/content/__to_do__/_index.md
@@ -0,0 +1,7 @@
+---
+title: "How to use"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 320
+---
+
diff --git a/docs/hugo/content/__to_do__/clone.md b/docs/hugo/content/__to_do__/clone.md
new file mode 100644
index 000000000..451404ee6
--- /dev/null
+++ b/docs/hugo/content/__to_do__/clone.md
@@ -0,0 +1,116 @@
+---
+title: "Single Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: true
+---
+
+Setting up a basic cluster is pretty easy; we just need a minimal cluster manifest, which can also be found in the operator-tutorials repo on GitHub.
+We need the following definitions for the basic cluster.
+## Minimal Single Cluster
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-16.1-6-dev"
+ numberOfInstances: 1
+ postgresql:
+ version: "16"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+Based on this manifest, the operator will deploy a single-node cluster from the defined dockerImage and start the included PostgreSQL 16 server.
+A volume based on your default StorageClass is also created. The resource definition means that we reserve half a CPU and 500Mi of memory for this cluster, with the same values as limits.
+
+After a few seconds we should see the operator create our cluster based on the declared definitions.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 50s
+
+```
+
+We can now start to modify our cluster with some more definitions.
+### Use a specific StorageClass
+```
+spec:
+ ...
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+Using the storageClass definition allows us to specify a StorageClass for this cluster. Please ensure that the StorageClass exists and is usable. If the volume cannot be provisioned, it will remain in the Pending state, as will the database pod.
+
+### Expanding Volume
+The operator allows you to expand your volume if the storage system is able to do this.
+```
+spec:
+ ...
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This will trigger the expansion of your cluster volumes. It will take some time; you can check the current state in the PVC.
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+
+```
+
+### Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example will create an emptyDir volume and mount it into all containers inside the database pod.
+
+
+### Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define further settings:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput values are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS defines a cooldown time as a limitation: you need to wait 6 hours before making further changes.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that the settings are updated properly, change the operator configuration 'storage_resize_mode' from its default to 'mixed', as sketched below.
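+
+A minimal sketch of this change, assuming the `OperatorConfiguration` CRD form (the section and key placement follow the Zalando-style configuration CPO derives from and are assumptions; the option can also be set as a ConfigMap key):
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: OperatorConfiguration
+metadata:
+  name: postgres-operator
+configuration:
+  kubernetes:
+    storage_resize_mode: mixed  # changed from the default so that gp3 IOPS/throughput updates are applied
+```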
diff --git a/docs/hugo/content/__to_do__/examples.md b/docs/hugo/content/__to_do__/examples.md
new file mode 100644
index 000000000..451404ee6
--- /dev/null
+++ b/docs/hugo/content/__to_do__/examples.md
@@ -0,0 +1,116 @@
+---
+title: "Single Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: true
+---
+
+Setting up a basic cluster is pretty easy; we just need a minimal cluster manifest, which can also be found in the operator-tutorials repo on GitHub.
+We need the following definitions for the basic cluster.
+## Minimal Single Cluster
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-16.1-6-dev"
+ numberOfInstances: 1
+ postgresql:
+ version: "16"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+Based on this manifest, the operator will deploy a single-node cluster from the defined dockerImage and start the included PostgreSQL 16 server.
+A volume based on your default StorageClass is also created. The resource definition means that we reserve half a CPU and 500Mi of memory for this cluster, with the same values as limits.
+
+After a few seconds we should see the operator create our cluster based on the declared definitions.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 50s
+
+```
+
+We can now start to modify our cluster with some more definitions.
+### Use a specific StorageClass
+```
+spec:
+ ...
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+Using the storageClass definition allows us to specify a StorageClass for this cluster. Please ensure that the StorageClass exists and is usable. If the volume cannot be provisioned, it will remain in the Pending state, as will the database pod.
+
+### Expanding Volume
+The operator allows you to expand your volume if the storage system is able to do this.
+```
+spec:
+ ...
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This will trigger the expansion of your cluster volumes. It will take some time; you can check the current state in the PVC.
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+
+```
+
+### Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example will create an emptyDir volume and mount it into all containers inside the database pod.
+
+
+### Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define further settings:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput values are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS defines a cooldown time as a limitation: you need to wait 6 hours before making further changes.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that the settings are updated properly, change the operator configuration 'storage_resize_mode' from its default to 'mixed', as sketched below.
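+
+A minimal sketch of this change, assuming the `OperatorConfiguration` CRD form (the section and key placement follow the Zalando-style configuration CPO derives from and are assumptions; the option can also be set as a ConfigMap key):
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: OperatorConfiguration
+metadata:
+  name: postgres-operator
+configuration:
+  kubernetes:
+    storage_resize_mode: mixed  # changed from the default so that gp3 IOPS/throughput updates are applied
+```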
diff --git a/docs/hugo/content/__to_do__/sidecars.md b/docs/hugo/content/__to_do__/sidecars.md
new file mode 100644
index 000000000..451404ee6
--- /dev/null
+++ b/docs/hugo/content/__to_do__/sidecars.md
@@ -0,0 +1,116 @@
+---
+title: "Single Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: true
+---
+
+Setting up a basic cluster is pretty easy; we just need a minimal cluster manifest, which can also be found in the operator-tutorials repo on GitHub.
+We need the following definitions for the basic cluster.
+## Minimal Single Cluster
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-16.1-6-dev"
+ numberOfInstances: 1
+ postgresql:
+ version: "16"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+Based on this manifest, the operator will deploy a single-node cluster from the defined dockerImage and start the included PostgreSQL 16 server.
+A volume based on your default StorageClass is also created. The resource definition means that we reserve half a CPU and 500Mi of memory for this cluster, with the same values as limits.
+
+After a few seconds we should see the operator create our cluster based on the declared definitions.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 50s
+
+```
+
+We can now start to modify our cluster with some more definitions.
+### Use a specific StorageClass
+```
+spec:
+ ...
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+Using the storageClass definition allows us to specify a StorageClass for this cluster. Please ensure that the StorageClass exists and is usable. If the volume cannot be provisioned, it will remain in the Pending state, as will the database pod.
+
+### Expanding Volume
+The operator allows you to expand your volume if the storage system is able to do this.
+```
+spec:
+ ...
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This will trigger the expansion of your cluster volumes. It will take some time; you can check the current state in the PVC.
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+
+```
+
+### Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example will create an emptyDir volume and mount it into all containers inside the database pod.
+
+
+### Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define further settings:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput values are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS defines a cooldown time as a limitation: you need to wait 6 hours before making further changes.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that the settings are updated properly, change the operator configuration 'storage_resize_mode' from its default to 'mixed', as sketched below.
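+
+A minimal sketch of this change, assuming the `OperatorConfiguration` CRD form (the section and key placement follow the Zalando-style configuration CPO derives from and are assumptions; the option can also be set as a ConfigMap key):
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: OperatorConfiguration
+metadata:
+  name: postgres-operator
+configuration:
+  kubernetes:
+    storage_resize_mode: mixed  # changed from the default so that gp3 IOPS/throughput updates are applied
+```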
diff --git a/docs/hugo/content/__to_do__/slots.md b/docs/hugo/content/__to_do__/slots.md
new file mode 100644
index 000000000..451404ee6
--- /dev/null
+++ b/docs/hugo/content/__to_do__/slots.md
@@ -0,0 +1,116 @@
+---
+title: "Single Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: true
+---
+
+Setting up a basic cluster is pretty easy; we just need a minimal cluster manifest, which can also be found in the operator-tutorials repo on GitHub.
+We need the following definitions for the basic cluster.
+## Minimal Single Cluster
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-16.1-6-dev"
+ numberOfInstances: 1
+ postgresql:
+ version: "16"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+Based on this manifest, the operator will deploy a single-node cluster from the defined dockerImage and start the included PostgreSQL 16 server.
+A volume based on your default StorageClass is also created. The resource definition means that we reserve half a CPU and 500Mi of memory for this cluster, with the same values as limits.
+
+After a few seconds we should see the operator create our cluster based on the declared definitions.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 50s
+
+```
+
+We can now start to modify our cluster with some more definitions.
+### Use a specific StorageClass
+```
+spec:
+ ...
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+Using the storageClass definition allows us to specify a StorageClass for this cluster. Please ensure that the StorageClass exists and is usable. If the volume cannot be provisioned, it will remain in the Pending state, as will the database pod.
+
+### Expanding Volume
+The operator allows you to expand your volume if the storage system is able to do this.
+```
+spec:
+ ...
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This will trigger the expansion of your cluster volumes. It will take some time; you can check the current state in the PVC.
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+
+```
+
+### Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example will create an emptyDir volume and mount it into all containers inside the database pod.
+
+
+### Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define further settings:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput values are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS defines a cooldown time as a limitation: you need to wait 6 hours before making further changes.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that the settings are updated properly, change the operator configuration 'storage_resize_mode' from its default to 'mixed', as sketched below.
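+
+A minimal sketch of this change, assuming the `OperatorConfiguration` CRD form (the section and key placement follow the Zalando-style configuration CPO derives from and are assumptions; the option can also be set as a ConfigMap key):
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: OperatorConfiguration
+metadata:
+  name: postgres-operator
+configuration:
+  kubernetes:
+    storage_resize_mode: mixed  # changed from the default so that gp3 IOPS/throughput updates are applied
+```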
diff --git a/docs/hugo/content/__to_do__/standby.md b/docs/hugo/content/__to_do__/standby.md
new file mode 100644
index 000000000..451404ee6
--- /dev/null
+++ b/docs/hugo/content/__to_do__/standby.md
@@ -0,0 +1,116 @@
+---
+title: "Single Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: true
+---
+
+Setting up a basic cluster is pretty easy; we just need a minimal cluster manifest, which can also be found in the operator-tutorials repo on GitHub.
+We need the following definitions for the basic cluster.
+## Minimal Single Cluster
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-16.1-6-dev"
+ numberOfInstances: 1
+ postgresql:
+ version: "16"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+Based on this manifest, the operator will deploy a single-node cluster from the defined dockerImage and start the included PostgreSQL 16 server.
+A volume based on your default StorageClass is also created. The resource definition means that we reserve half a CPU and 500Mi of memory for this cluster, with the same values as limits.
+
+After a few seconds we should see the operator create our cluster based on the declared definitions.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 50s
+
+```
+
+We can now start to modify our cluster with some more definitions.
+### Use a specific StorageClass
+```
+spec:
+ ...
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+Using the storageClass definition allows us to specify a StorageClass for this cluster. Please ensure that the StorageClass exists and is usable. If the volume cannot be provisioned, it will remain in the Pending state, as will the database pod.
+
+### Expanding Volume
+The operator allows you to expand your volume if the storage system is able to do this.
+```
+spec:
+ ...
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This will trigger the expansion of your cluster volumes. It will take some time; you can check the current state in the PVC.
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+
+```
+
+### Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example will create an emptyDir volume and mount it into all containers inside the database pod.
+
+
+### Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define further settings:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput values are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS defines a cooldown time as a limitation: you need to wait 6 hours before making further changes.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that the settings are updated properly, change the operator configuration 'storage_resize_mode' from its default to 'mixed', as sketched below.
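+
+A minimal sketch of this change, assuming the `OperatorConfiguration` CRD form (the section and key placement follow the Zalando-style configuration CPO derives from and are assumptions; the option can also be set as a ConfigMap key):
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: OperatorConfiguration
+metadata:
+  name: postgres-operator
+configuration:
+  kubernetes:
+    storage_resize_mode: mixed  # changed from the default so that gp3 IOPS/throughput updates are applied
+```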
diff --git a/docs/hugo/content/en/_index.md b/docs/hugo/content/en/_index.md
new file mode 100644
index 000000000..9e88cf57a
--- /dev/null
+++ b/docs/hugo/content/en/_index.md
@@ -0,0 +1,32 @@
+---
+title: "CPO (CYBERTEC-PG-Operator)"
+date: 2024-03-11T14:26:51+01:00
+draft: false
+weight: 1
+---
+Current Release: 0.8.3 (04.04.2025) [Release Notes](release_notes)
+
+
+
+CPO (CYBERTEC PG Operator) allows you to create and run PostgreSQL clusters on Kubernetes.
+
+The operator reduces your efforts and simplifies the administration of your PostgreSQL clusters so that you can concentrate on other things.
+
+The following features characterise our operator:
+- Declarative mode of operation
+- Takes over all the necessary steps for setting up and managing the PG cluster.
+- Integrated backup solution, automatic backups and very easy restore (snapshot & PITR)
+- Rolling update procedure for adjustments to the pods and minor updates
+- Major upgrade with minimum interruption time
+- Reduction of downtime thanks to redundancy, pod anti-affinity, auto-failover and self-healing
+
+CPO is tested on the following platforms:
+- Kubernetes: 1.21 - 1.28
+- OpenShift: 4.8 - 4.13
+- Rancher
+- AWS EKS
+- Azure AKS
+- Google GKE
+
+Furthermore, CPO is basically executable on any [CNCF-certified](https://www.cncf.io/certification/software-conformance/) Kubernetes platform.
+
diff --git a/docs/hugo/content/en/architecture/_index.md b/docs/hugo/content/en/architecture/_index.md
new file mode 100644
index 000000000..4d21d2560
--- /dev/null
+++ b/docs/hugo/content/en/architecture/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Architecture"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 300
+---
+
diff --git a/docs/hugo/content/en/architecture/architecture.md b/docs/hugo/content/en/architecture/architecture.md
new file mode 100644
index 000000000..54e488e97
--- /dev/null
+++ b/docs/hugo/content/en/architecture/architecture.md
@@ -0,0 +1,40 @@
+---
+title: "Architecture"
+date: 2023-03-07T14:26:51+01:00
+draft: true
+weight: 1
+---
+This chapter covers all important aspects relating to the architecture of CPO and the associated components. In addition to the underlying Kubernetes, the various components and their interaction for the operation of a PostgreSQL cluster are analysed.
+
+### Brief overview of the components
+
+
+
+
+### Network-Traffic
+
+
+#### PG cluster-internal traffic
+PG cluster-internal traffic covers all traffic that is necessary for the operation of the cluster itself. This includes:
+- Communication for the sync of the replicas:
+ - pg_basebackup & streaming replication
+- Communication with pgBackRest (if configured)
+ - Backups
+ - WAL archiving
+ - replica-create for new replicas
+
+The figure below shows the internal traffic flows with pgBackRest based on block storage (left) or cloud storage (right).
+
+
+
+
+
+
+
+#### External Traffic
+
+External traffic, i.e. the connection to the database for the user or the application, takes place via defined Kubernetes services. A distinction must be made here between read/write and read only traffic.
+
+##### read/write
+
+##### read-only
\ No newline at end of file
diff --git a/docs/hugo/content/en/architecture/compontens.md b/docs/hugo/content/en/architecture/compontens.md
new file mode 100644
index 000000000..0ae491233
--- /dev/null
+++ b/docs/hugo/content/en/architecture/compontens.md
@@ -0,0 +1,44 @@
+---
+title: "Software-Components"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 2
+---
+
+Various software components are used to operate CPO. This chapter lists the most important components and their respective purposes.
+
+Basically, the CPO project focusses on the main tasks of each individual component. This means that each component does what it does best and only that.
+In addition to reliable operation, this should also ensure efficient development and project management that utilises existing approaches rather than fighting against them.
+
+### 1. CYBERTEC-pg-operator
+The CYBERTEC-pg-operator is a Kubernetes operator that automates the operation and management of PostgreSQL databases on Kubernetes clusters. It facilitates the provisioning, scaling, backup and recovery of PostgreSQL clusters and integrates tools such as Patroni and pgBackRest for high availability and backup management.
+
+The main focus of the operator is the creation of the necessary templates and objects for Kubernetes, the regular check of whether the declarative description of the cluster is still up to date, and the implementation of various tasks in the cluster commissioned by the user.
+
+### 2. Kubernetes
+
+Kubernetes is an open source platform for automating the deployment, scaling and management of containerised applications. It enables the management of container clusters in different environments and offers functions such as automatic load balancing, self-healing and rollouts. Kubernetes ensures that applications are always available and scalable and provides a framework for managing infrastructure in a cloud-native environment.
+
+The focus of Kubernetes in the context of CPO is the use of the operator's templates to create the required objects.
+For example, the statefulset controller creates the desired pods based on the template. Kubernetes or the respective controllers monitor the generated objects independently and react if they are missing or do not correspond to the template.
+This means, for example, that pods that have been removed are automatically regenerated even if the operator is not currently running. This avoids the operator becoming a single point of failure.
+
+### 3. Patroni
+Patroni is an open source tool for managing PostgreSQL high availability clusters. It uses a distributed consensus mechanism, often based on Etcd, Consul or Zookeeper, to manage the role of the PostgreSQL primary node and perform automatic failovers. Patroni ensures that only one primary database server is active at a time, enabling consistency and availability of PostgreSQL databases in a cluster.
+
+The focus of Patroni is to build, configure and monitor the PostgreSQL cluster based on the configuration created by the operator. Patroni therefore takes over all tasks such as leader selection, cluster monitoring, auto-failover and much more independently.
+Patroni is included in every PostgreSQL container, and therefore in every pod, and is focussed on the individual cluster.
+This means that cluster management is guaranteed even without a currently running operator and therefore runs independently of the operator. This avoids the operator becoming a single point of failure.
+
+### 4. PostgreSQL
+PostgreSQL is a powerful, open source object-relational database management system (ORDBMS). It is known for its reliability, robustness and compliance with SQL standards. PostgreSQL supports advanced data types, functions and offers extensive customisation options. It is suitable for applications of any size and offers strong support for ACID transactions and Multi-Version Concurrency Control (MVCC).
+
+The main role of PostgreSQL in the context of CPO is quite clear. Controlled by Patroni, PostgreSQL takes care of its task as a DBMS.
+
+### 5. pgBackRest
+pgBackRest is a reliable backup and restore tool for PostgreSQL databases. It offers features such as incremental backups, parallel backup and restore, compression and encryption. pgBackRest is designed for use in large databases and offers both local and remote backup options. It integrates well into Kubernetes environments and enables automated and efficient backup strategies.
+
+pgBackRest is configured based on the cluster manifest and therefore via the operator. Automatic backups, on the other hand, are based on Kubernetes cron jobs and are therefore independent of the operator, apart from the template generation by the operator.
+
+### 6. pgBouncer
+PgBouncer is a lightweight connection pooler for PostgreSQL. It reduces the load on the database server by consolidating and efficiently managing incoming client connections. PgBouncer improves the performance and scalability of PostgreSQL-based applications by reducing the number of active connections while enabling fast switching times between different connections.
\ No newline at end of file
diff --git a/docs/hugo/content/en/architecture/rolling_update.md b/docs/hugo/content/en/architecture/rolling_update.md
new file mode 100644
index 000000000..ee2a144ef
--- /dev/null
+++ b/docs/hugo/content/en/architecture/rolling_update.md
@@ -0,0 +1,17 @@
+---
+title: "Rolling-Updates"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 3
+---
+
+Whether updating the minor version, changing the hardware definitions of the cluster or other adjustments that require a pod restart, CPO ensures that the update is as uninterrupted as possible.
+
+This means that adjustments are carried out on the various pods of a particular cluster one after the other and in a sensible sequence. This happens as soon as a cluster consists of more than 1 PostgreSQL node.
+
+In the event of a necessary restart, the operator independently stops the pods and does not leave this to Kubernetes. The idea behind this is that all replica pods are restarted one after the other first. The operator recognises these by the label `cpo.opensource.cybertec.at/role=replica` set by Patroni.
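+
+To see which role each pod currently holds, this label can be displayed when listing the pods (a quick sketch using the label named above):
+```
+kubectl get pods -L cpo.opensource.cybertec.at/role
+```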
+
+As soon as all replicas are ready again, the operator checks whether one of the replicas is able to take over cluster operation and performs a switchover. Only then is the former leader pod stopped and restarted.
+
+This ensures that the only effect on the application is the switchover.
+{{< hint type=info >}} A completely uninterrupted handover of operation is not possible due to the architecture and connection handling of PostgreSQL. {{< /hint >}}
\ No newline at end of file
diff --git a/docs/hugo/content/en/backup/_index.md b/docs/hugo/content/en/backup/_index.md
new file mode 100644
index 000000000..e8e5a3db4
--- /dev/null
+++ b/docs/hugo/content/en/backup/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Backup"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1300
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/backup/aws.md b/docs/hugo/content/en/backup/aws.md
new file mode 100644
index 000000000..88d693743
--- /dev/null
+++ b/docs/hugo/content/en/backup/aws.md
@@ -0,0 +1,74 @@
+---
+title: "via S3"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2
+---
+
+This chapter describes the use of pgBackRest in combination with AWS S3 or S3-compatible storage such as MinIO, Cloudian HyperStore or SwiftStack. It is not absolutely necessary to operate Kubernetes on the AWS Cloud Platform. However, as with any cloud storage, the efficiency and therefore the duration of a backup depends on the connection.
+
+This chapter uses AWS S3 for its examples; the usage of other S3-compatible storage is similar.
+
+{{< hint type=important >}} Precondition: an S3 bucket and a privileged role with credentials are needed for this chapter. {{< /hint >}}
+
+### Create an S3 bucket on the AWS console
+
+### Create a privileged service role
+
+### Modifying the Cluster
+As soon as all requirements are met:
+
+- An S3 bucket
+- Access-Token and Secret-Access-Key for the service role with the required authorisations for the bucket
+
+the cluster can be modified. Firstly, a secret containing the credentials is created, and then the cluster manifest is adapted accordingly.
+
+The first step is to create the required secret. This is most easily done by storing the needed data in a file called s3.conf and using a `kubectl` command.
+
+```
+# Create a file named s3.conf and add the following settings. Replace the placeholders with your credentials
+[global]
+repo1-s3-key=YOUR_S3_ACCESS_KEY
+repo1-s3-key-secret=YOUR_S3_KEY_SECRET
+repo1-cipher-pass=YOUR_ENCRYPTION_PASSPHRASE
+
+# Create the secret with the credentials
+kubectl create secret generic cluster-1-s3-credentials --from-file=s3.conf=s3.conf
+```
+
+In the next step, the secret name is stored in the cluster manifest. In addition, global settings, such as the retention time of the backups, are defined in the global object, the image for `pgBackRest` is specified and the necessary information for the repository is added. This includes both the desired storage path in the bucket and the times for automatic backups based on the cron syntax.
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster
+ namespace: cpo
+spec:
+ backup:
+ pgbackrest:
+ image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
+ repos:
+ - endpoint: 'https://s3-zurich.cyberlink.cloud:443'
+ name: repo1
+ region: zurich
+ resource: cpo-cluster-bucket
+ schedule:
+ full: 30 2 * * *
+ incr: '*/30 * * * *'
+ storage: s3
+ configuration:
+        secret: cluster-1-s3-credentials
+ global:
+ repo1-path: /cluster/repo1/
+ repo1-retention-full: '7'
+ repo1-retention-full-type: count
+```
+
+This example creates a backup in the defined S3 bucket. In addition to the above configurations, a secret is also required which contains the access data for the S3 storage. The name of the secret must be stored in the `spec.backup.pgbackrest.configuration.secret` object and the secret must be located in the same namespace as the cluster.
+Information required to address the S3 bucket:
+- `endpoint`: S3 API endpoint
+- `region`: region of the bucket
+- `resource`: name of the bucket
+
+An [example](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3) with a secret generator is also available in the tutorials. Enter your access data in the s3.conf file and apply the tutorial to your Kubernetes cluster with `kubectl apply -k cluster-tutorials/pgbackrest_with_s3/`.
diff --git a/docs/hugo/content/en/backup/azure_blob.md b/docs/hugo/content/en/backup/azure_blob.md
new file mode 100644
index 000000000..110136f6e
--- /dev/null
+++ b/docs/hugo/content/en/backup/azure_blob.md
@@ -0,0 +1,55 @@
+---
+title: "via Azure-Blob"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 4
+---
+
+This chapter describes the use of pgBackRest in combination with Azure Blob Storage. It is not absolutely necessary to operate Kubernetes on the Azure Cloud Platform. However, as with any cloud storage, the efficiency and therefore the duration of a backup depends on the connection.
+
+{{< hint type=important >}} Precondition: an Azure Blob Storage container and a privileged role are needed for this chapter. {{< /hint >}}
+
+### Create a Blob Storage container on the Azure console
+
+### Create a privileged service role
+
+### Modifying the Cluster
+As soon as all requirements are met:
+
+- An Azure Blob Storage container
+- The storage account name and an access key with the required authorisations for the container
+
+the cluster can be modified. Firstly, a secret containing these credentials is created and the cluster manifest is adapted accordingly.
+
+The first step is to create the required secret. Analogous to the S3 setup, this is most easily done by storing the pgBackRest settings in a file called azure.conf and using a `kubectl` command.
+
+```
+# Create a file named azure.conf and add the following settings. Replace the placeholders with your credentials
+[global]
+repo1-azure-account=YOUR_STORAGE_ACCOUNT_NAME
+repo1-azure-key=YOUR_STORAGE_ACCOUNT_KEY
+
+# Create the secret with the credentials
+kubectl create secret generic cluster-1-azure-credentials --from-file=azure.conf=azure.conf
+```
+
+In the next step, the secret name is stored in the cluster manifest. In addition, global settings, such as the retention time of the backups, are defined in the global object, the image for `pgBackRest` is specified and the necessary information for the repository is added. This includes both the desired storage path in the container and the times for automatic backups based on the cron syntax.
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ backup:
+ pgbackrest:
+      configuration:
+        secret: cluster-1-azure-credentials
+      global:
+        repo1-path: /cluster-1/repo1/
+        repo1-retention-full: '7'
+        repo1-retention-full-type: count
+      image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
+      repos:
+        - name: repo1
+          resource: YOUR_BLOB_CONTAINER_NAME
+          schedule:
+            full: 30 2 * * *
+          storage: azure
+```
diff --git a/docs/hugo/content/en/backup/check_backups.md b/docs/hugo/content/en/backup/check_backups.md
new file mode 100644
index 000000000..9b0cf31d2
--- /dev/null
+++ b/docs/hugo/content/en/backup/check_backups.md
@@ -0,0 +1,79 @@
+---
+title: "Check/Monitor Backups"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 7
+---
+There are several ways to gain an insight into the current status of pgBackRest.
+One of these is to use pgBackRest within the container. This can be done both via the repo host and the Postgres pod.
+
+### pgbackrest via terminal (Repo-Host-Pod)
+```
+kubectl exec cluster-5-pgbackrest-repo-host-0 --stdin --tty -- pgbackrest info
+stanza: db
+ status: ok
+ cipher: none
+
+ db (current)
+ wal archive min/max (16): 00000006000000000000005C/000000070000000000000092
+
+ full backup: 20240517-125730F
+ timestamp start/stop: 2024-05-17 12:57:30+00 / 2024-05-17 12:57:41+00
+ wal start/stop: 00000007000000000000005E / 00000007000000000000005E
+ database size: 22.9MB, database backup size: 22.9MB
+ repo1: backup set size: 3MB, backup size: 3MB
+
+ incr backup: 20240517-125730F_20240517-130003I
+ timestamp start/stop: 2024-05-17 13:00:03+00 / 2024-05-17 13:00:05+00
+ wal start/stop: 000000070000000000000060 / 000000070000000000000060
+ database size: 22.9MB, database backup size: 904.3KB
+ repo1: backup set size: 3MB, backup size: 149.4KB
+ backup reference list: 20240517-125730F
+
+ incr backup: 20240517-125730F_20240517-131503I
+ timestamp start/stop: 2024-05-17 13:15:03+00 / 2024-05-17 13:15:04+00
+ wal start/stop: 000000070000000000000062 / 000000070000000000000062
+ database size: 22.9MB, database backup size: 24.3KB
+ repo1: backup set size: 3MB, backup size: 2.9KB
+ backup reference list: 20240517-125730F, 20240517-125730F_20240517-130003I
+```
+### pgbackrest via terminal (Postgres-Pod)
+```
+kubectl exec cluster-5-0 --stdin --tty -- pgbackrest info
+Defaulted container "postgres" out of: postgres, postgres-exporter, pgbackrest-restore (init)
+stanza: db
+ status: ok
+ cipher: none
+
+ db (current)
+ wal archive min/max (16): 00000006000000000000005C/000000070000000000000092
+
+ full backup: 20240517-125730F
+ timestamp start/stop: 2024-05-17 12:57:30+00 / 2024-05-17 12:57:41+00
+ wal start/stop: 00000007000000000000005E / 00000007000000000000005E
+ database size: 22.9MB, database backup size: 22.9MB
+ repo1: backup set size: 3MB, backup size: 3MB
+
+ incr backup: 20240517-125730F_20240517-130003I
+ timestamp start/stop: 2024-05-17 13:00:03+00 / 2024-05-17 13:00:05+00
+ wal start/stop: 000000070000000000000060 / 000000070000000000000060
+ database size: 22.9MB, database backup size: 904.3KB
+ repo1: backup set size: 3MB, backup size: 149.4KB
+ backup reference list: 20240517-125730F
+
+ incr backup: 20240517-125730F_20240517-131503I
+ timestamp start/stop: 2024-05-17 13:15:03+00 / 2024-05-17 13:15:04+00
+ wal start/stop: 000000070000000000000062 / 000000070000000000000062
+ database size: 22.9MB, database backup size: 24.3KB
+ repo1: backup set size: 3MB, backup size: 2.9KB
+ backup reference list: 20240517-125730F, 20240517-125730F_20240517-130003I
+```
+Besides the "normal" output there is also the JSON output format, which can be processed directly in the terminal.
+
+```
+kubectl exec cluster-5-0 --stdin --tty -- pgbackrest info --output=json
+```
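+
+If `jq` is installed, the JSON output can be filtered directly. A small sketch, assuming the usual structure of `pgbackrest info --output=json` (an array of stanzas, each with a `backup` list):
+```
+# Print the label of the most recent backup of the first stanza
+kubectl exec cluster-5-0 --stdin --tty -- pgbackrest info --output=json | jq -r '.[0].backup[-1].label'
+```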
+
+### Check pgBackRest via Monitoring
+
+In addition to reading the status via the containers, pgBackRest can also be analysed and monitored via the monitoring stack. Information on setting up the monitoring stack can be found [here](documentation/how-to-use/monitoring).
\ No newline at end of file
diff --git a/docs/hugo/content/en/backup/encryption.md b/docs/hugo/content/en/backup/encryption.md
new file mode 100644
index 000000000..9d1a76541
--- /dev/null
+++ b/docs/hugo/content/en/backup/encryption.md
@@ -0,0 +1,52 @@
+---
+title: "Encrypted Backups"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 6
+---
+pgBackRest also allows you to encrypt your backups on the client side before uploading them. This is possible with any type of storage and is very easy to activate.
+
+Firstly, we need to define an encryption key. This must be specified separately for each repo and stored in the same secret that is defined in the `spec.backup.pgbackrest.configuration.secret` object.
+```
+kind: Secret
+apiVersion: v1
+metadata:
+ name: cluster-1-s3-credential
+ namespace: cpo
+stringData:
+  s3.conf: |
+ [global]
+ repo1-s3-key=YOUR_S3_KEY
+ repo1-s3-key-secret=YOUR_S3_KEY_SECRET
+ repo1-cipher-pass=YOUR_ENCRYPTION_KEY
+```
+
+We also need to configure the type of encryption for pgBackRest. This is done via the cipher-type parameter, which must also be specified for each repo. You can find the available values for the parameter [here](https://pgbackrest.org/configuration.html#section-repository/option-repo-cipher-type)
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster
+ namespace: cpo
+spec:
+ backup:
+ pgbackrest:
+ configuration:
+ secret: cluster-1-s3-credential
+ global:
+ repo1-path: /cluster/repo1/
+ repo1-retention-full: '7'
+ repo1-retention-full-type: count
+ repo1-cipher-type: aes-256-cbc
+ image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
+ repos:
+ - endpoint: 'https://s3-zurich.cyberlink.cloud:443'
+ name: repo1
+ region: zurich
+ resource: cpo-cluster-bucket
+ schedule:
+ full: 30 2 * * *
+ incr: '*/30 * * * *'
+ storage: s3
+```
\ No newline at end of file
diff --git a/docs/hugo/content/en/backup/gcs.md b/docs/hugo/content/en/backup/gcs.md
new file mode 100644
index 000000000..bd231df20
--- /dev/null
+++ b/docs/hugo/content/en/backup/gcs.md
@@ -0,0 +1,55 @@
+---
+title: "via GCS"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 3
+---
+
+This chapter describes the use of pgBackRest in combination with Google Cloud Storage (GCS). It is not absolutely necessary to operate Kubernetes on the Google Cloud Platform. However, as with any cloud storage, the efficiency and therefore the duration of a backup depends on the connection.
+
+{{< hint type=important >}} Precondition: a GCS bucket and a privileged role are needed for this chapter. {{< /hint >}}
+
+### Create a GCS bucket on the Google Cloud console
+
+### Create a privileged service role
+
+### Modifying the Cluster
+As soon as all requirements are met:
+
+- A GCS bucket
+- A JSON token for the service role with the required authorisations for the bucket
+
+the cluster can be modified. Firstly, a secret containing the JSON token is created and the cluster manifest is adapted accordingly.
+
+The first step is to create the required secret. This is most easily done using a `kubectl` command.
+
+```
+kubectl create secret generic cluster-1-gcs-credentials --from-file=gcs.json=fluent.json
+```
+
+In the next step, both the secret name and the file name of the JSON token are stored in the cluster manifest. In addition, global settings, such as the retention time of the backups, are defined in the global object, the image for `pgBackRest` is specified and the necessary information for the repository is added. This includes both the desired storage path in the bucket and the times for automatic backups based on the cron syntax.
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ backup:
+ pgbackrest:
+ configuration:
+ secret: cluster-1-gcs-credentials
+ global:
+ repo1-path: /cluster-1/repo1/
+ repo1-retention-full: '7'
+ repo1-retention-full-type: count
+      image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
+ repos:
+ - name: repo1
+ resource: postgresql-backup-bucket
+ key: gcs.json
+ keyType: service
+ schedule:
+ full: 30 2 * * *
+ storage: gcs
+```
diff --git a/docs/hugo/content/en/backup/introduction.md b/docs/hugo/content/en/backup/introduction.md
new file mode 100644
index 000000000..162259b73
--- /dev/null
+++ b/docs/hugo/content/en/backup/introduction.md
@@ -0,0 +1,83 @@
+---
+title: "Introduction"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1
+---
+Backups are essential for databases. From broken storage to deployments gone wrong, backups often save the day. Starting with pg_dump, which was released in the late 1990s, to the archiving of WAL files (PostgreSQL 8.0 / 2005) and pg_basebackup (PostgreSQL 9.0 / 2010), PostgreSQL already offers built-in options for backups and restores based on logical and physical backups.
+
+### Backups with pgBackRest
+
+CPO relies on [pgBackRest](https://pgbackrest.org) as its backup solution, a tried-and-tested tool with extensive backup and restore options.
+The backup is based on two elements:
+- Snapshots in the form of physical backups
+- WAL archive: Continuous archiving of the WAL files
+
+### Backups
+
+Backups represent a snapshot of the database in the form of physical files. These contain all relevant information that PostgreSQL holds in its data folder.
+With pgBackRest it is possible to create different types of backups:
+- Full backup: captures and saves all files at the time of the backup
+- Differential backup: only captures the files that have been changed since the last full backup
+- Incremental backup: only records the files that have been changed since the last backup (of any kind)
+
+When restoring from a differential or incremental backup, the previous backups that form the basis of the selected backup are also required.
+
+{{< hint type=info >}}The choice of backup types depends on factors such as the size of the database and the time available for backups and restores.{{< /hint >}}
+
+### WAL-Archive
+
+The WAL (Write-Ahead Log) refers to log files which record all changes to the database data before they are written to the actual data files. The basic idea is to guarantee the consistency and recoverability of the committed data even in the event of failures.
+
+PostgreSQL normally cleans up or recycles the WAL files that are no longer required. By using WAL archiving, the WAL files are saved to a different location before this process so that they can be used for various activities in the future.
+These activities include:
+- Providing the WAL files to replicas to keep them up to date
+- Restoring instances that have lost parts of their WAL files in the event of a failure and could not otherwise return to a consistent state without losing data
+- Point-in-Time Recovery (PITR): in contrast to backups, which capture a fixed point in time, WAL files make it possible to jump dynamically to a desired point in time and restore the database to the closest available consistent data point
+
+{{< hint type=info >}}WAL archiving is an indispensable tool for data availability, recoverability and the continuous availability of PostgreSQL.{{< /hint >}}
+
+### Backup your Cluster
+
+With pgBackRest, backups can be stored on different types of storage:
+- Block storage (PVC)
+- S3 / S3-compatible storage
+- Azure blob storage
+- GCS
+
+### How a Backup works
+
+The operator creates a cronjob object on Kubernetes based on the defined times for automatic backups. This means that the Kubernetes core ([CronJob Controller](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)) will take care of processing the automatic backups and create a job and thus a pod at the appropriate time.
+The pod will send the backup command to the primary or, if block storage is used, to the repo host and monitor it. As soon as the backup is successfully completed, the pod stops with Completed and thus completes the job.
+
+```
+kubectl get cronjobs
+---------------------------------------------------------------------------------------
+NAME | SCHEDULE | SUSPEND | ACTIVE | LAST SCHEDULE | AGE
+pgbackrest-cluster-repo1-full | 30 2 * * * | False | 0 | 4h46m | 14h
+pgbackrest-cluster-repo1-incr | */30 * * * * | False | 1 | 81s | 106m
+
+kubectl get jobs
+-----------------------------------------------------------------------
+NAME | COMPLETIONS | DURATION | AGE
+pgbackrest-cluster-repo1-full-28597110 | 1/1 | 52s | 140m
+pgbackrest-cluster-repo1-incr-28597365 | 1/1 | 2m37s | 32m
+pgbackrest-cluster-repo1-incr-28597380 | 1/1 | 2m38s | 17m
+pgbackrest-cluster-repo1-incr-28597395 | 0/1 | 2m3s | 2m3s
+
+```
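+
+Because these are ordinary Kubernetes CronJobs, a one-off backup can also be triggered manually from an existing definition without waiting for the schedule (a sketch; the cronjob name follows the pattern shown above):
+```
+kubectl create job --from=cronjob/pgbackrest-cluster-repo1-full manual-backup-1
+```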
+
+If there are problems such as a timeout, the pod will stop with exit code 1 and thus indicate an error. In this case, a new pod will be created which will attempt to complete the backup. The maximum number of attempts is 6, so if the backup fails six times, the job is deemed to have failed and will not be attempted again until the next cronjob execution. The job pod log provides information about the problems.
+
+```
+kubectl get pods
+-----------------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-0 | 2/2 | Running | 2 | 14h
+cluster-pgbackrest-repo-host-0 | 1/1 | Running | 0 | 107m
+pgbackrest-cluster-repo1-full-28597110-x8zpw | 0/1 | Completed | 0 | 143m
+pgbackrest-cluster-repo1-incr-28597365-7bb5l | 0/1 | Completed | 0 | 34m
+pgbackrest-cluster-repo1-incr-28597380-j76rr | 0/1 | Completed | 0 | 19m
+pgbackrest-cluster-repo1-incr-28597395-rh86t | 0/1 | Completed | 0 | 4m27s
+postgres-operator-66bbff5c54-5sjmk | 1/1 | Running | 0 | 47m
+```
diff --git a/docs/hugo/content/en/backup/pvc.md b/docs/hugo/content/en/backup/pvc.md
new file mode 100644
index 000000000..90d96ab9d
--- /dev/null
+++ b/docs/hugo/content/en/backup/pvc.md
@@ -0,0 +1,40 @@
+---
+title: "via Blockstorage (pvc)"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1
+---
+
+### Backups on PVC (PersistentVolumeClaim)
+
+When using block storage, the operator creates an additional pod that acts as a repo host. Over a TLS connection, the repo host obtains the data for the backup from the current primary of the cluster; the data is compressed before being sent.
+WAL archives are pushed from the primary pod to the repo host.
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster
+ namespace: cpo
+spec:
+ backup:
+ pgbackrest:
+ image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
+ repos:
+ - name: repo1
+ schedule:
+ full: 30 2 * * *
+ storage: pvc
+ volume:
+ size: 15Gi
+ storageClass: default
+ global:
+ repo1-retention-full: '7'
+ repo1-retention-full-type: count
+```
+
+This example creates backups based on a repo host, with a daily full backup at 2:30 am. In addition, pgBackRest is instructed to keep a maximum of 7 full backups; the oldest one is removed whenever a new backup is created. You can increase the PVC size at any time if needed by updating the `size` value to a higher amount of Gi. Please be aware that shrinking the volume is not possible.
+
+{{< hint type=info >}} In addition, further configurations for pgBackRest can be defined in the global object. Information on possible configurations can be found in the [pgBackRest documentation](https://pgbackrest.org/configuration.html) {{< /hint >}}
+
+
diff --git a/docs/hugo/content/en/certs/_index.md b/docs/hugo/content/en/certs/_index.md
new file mode 100644
index 000000000..7fb8a1e9c
--- /dev/null
+++ b/docs/hugo/content/en/certs/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Certificates"
+date: 2023-12-28T14:26:51+01:00
+draft: true
+weight: 1500
+---
+tbd
\ No newline at end of file
diff --git a/docs/hugo/content/en/clone-cluster/_index.md b/docs/hugo/content/en/clone-cluster/_index.md
new file mode 100644
index 000000000..6a6c1b220
--- /dev/null
+++ b/docs/hugo/content/en/clone-cluster/_index.md
@@ -0,0 +1,87 @@
+---
+title: "Clone Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2050
+---
+The cluster clone function was implemented to make it possible to duplicate the current state of a cluster, for example in order to carry out tests such as a major upgrade.
+It creates an autonomous and independent cluster based on an existing local cluster or from cloud storage via pgBackRest (S3, GCS or Azure Blob).
+
+### Preconditions
+The source cluster must either:
+- be accessible from the clone via streaming replication, or
+- have its backup storage (S3, GCS or Azure Blob) accessible for the clone
+
+The passwords for the Postgres user, the replication user and the exporter user (if monitoring is active) must be created as secrets for the cloned cluster. Otherwise connection problems will occur.
+
+### Clone a cluster via pvc
+
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1-clone
+spec:
+ dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1'
+ numberOfInstances: 1
+ postgresql:
+ version: '17'
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ teamId: acid
+ volume:
+ size: 5Gi
+ clone:
+ cluster: cluster-1
+ pgbackrest:
+ configuration:
+ secret: cluster-1-pvc-configuration
+ repo:
+ storage: pvc
+```
+
+### Clone a cluster via s3
+
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1-clone
+spec:
+ dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1'
+ numberOfInstances: 1
+ postgresql:
+ version: '17'
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ teamId: acid
+ volume:
+ size: 5Gi
+ clone:
+ cluster: cluster-1 # A random cluster name can be used if the source cluster is not present on the k8s.
+ pgbackrest:
+ configuration:
+ secret: cluster-1-s3-credentials
+ options:
+ repo1-path: /YOUR_PATH_INSIDE_THE_BUCKET_TO_THE_SOURCE_STANZA/repo1/
+ repo:
+ endpoint: YOUR_SOURCE_S3_ENDPOINT
+ name: repo1
+ region: YOUR_SOURCE_S3_REGION
+ resource: YOUR_SOURCE_BUCKET_NAME
+ storage: s3
+```
+
+### Limitations
+A primary cluster cannot be demoted to a standby cluster.
+If necessary, the recommendation is to create a new cluster as a standby cluster.
\ No newline at end of file
diff --git a/docs/hugo/content/en/config_cluster/_index.md b/docs/hugo/content/en/config_cluster/_index.md
new file mode 100644
index 000000000..e196fc3bb
--- /dev/null
+++ b/docs/hugo/content/en/config_cluster/_index.md
@@ -0,0 +1,84 @@
+---
+title: "PostgreSQL Configuration"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 1200
+---
+
+Users who are used to working with PostgreSQL on bare metal or VMs are already familiar with the need for various files to configure PostgreSQL. These include:
+- postgresql.conf
+- pg_hba.conf
+- ...
+
+Although these files are present in the container, direct modification is not intended. In line with the declarative mode of operation of the operator, these files are defined via the operator. Modifying them inside the container would also contradict the immutability of the container.
+
+For these reasons, the operator provides a way to make adjustments to the various files, from PostgreSQL to Patroni.
+
+We differentiate between two main objects in the cluster manifest:
+- [`postgresql`](documentation/how-to-use/configuration/#postgresql) with the child objects `version` and `parameters`
+- [`patroni`](documentation/how-to-use/configuration/#patroni) with objects for `pg_hba`, `slots` and much more
+
+## postgresql
+
+The `postgresql` object consists of the following elements:
+- `version` - allows you to select the PostgreSQL major version used
+- `parameters` - allows the postgresql.conf to be changed
+
+```
+spec:
+ postgresql:
+ parameters:
+ shared_preload_libraries: 'pg_stat_statements,pgnodemx, timescaledb'
+ shared_buffers: '512MB'
+ version: '16'
+```
+
+Any known PostgreSQL parameter from postgresql.conf can be entered here and will be delivered by the operator to all nodes of the cluster accordingly.
+
+You can find more information about the parameters in the [PostgreSQL documentation](https://www.postgresql.org/docs/)
+
+## patroni
+
+The patroni object contains numerous options for customising the Patroni setup, and the pg_hba.conf is also configured here. A complete list of all available elements can be found here.
+
+The most important elements include:
+- `pg_hba` - pg_hba.conf
+- `slots`
+- `synchronous_mode` - enables synchronous mode in the cluster. The default is `false`
+- `maximum_lag_on_failover` - specifies the maximum lag at which a pod is still considered healthy in the event of a failover.
+- `failsafe_mode` - prevents the demotion of the leader when the DCS is unreachable, as long as all cluster members can be reached via the Patroni REST API.
+You can find more information on this in the [Patroni documentation](https://patroni.readthedocs.io/en/master/dcs_failsafe_mode.html)
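+
+As an illustration, these settings go directly into the `patroni` object of the cluster manifest; the values shown below are examples, not recommendations:
+
+```
+spec:
+  patroni:
+    synchronous_mode: true
+    failsafe_mode: true
+```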
+
+### pg_hba
+
+The pg_hba.conf contains all defined authentication rules for PostgreSQL.
+
+When customising this configuration, it is important that the complete pg_hba content is written to the manifest, not just the changed lines.
+The current configuration can be read from within the database via the view `pg_hba_file_rules`.
+
+Further information can be found in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
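+
+A minimal sketch of a custom pg_hba configuration in the cluster manifest (the rules themselves are examples and must be adapted to your environment):
+
+```
+spec:
+  patroni:
+    pg_hba:
+      - local   all all           trust
+      - hostssl all all 0.0.0.0/0 scram-sha-256
+      - host    all all 0.0.0.0/0 scram-sha-256
+```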
+
+
+### slots
+
+When using user-defined slots, for example for the use of CDC using Debezium, there are problems when interacting with Patroni, as the slot and its current status are not automatically synchronised to the replicas.
+
+In the event of a failover, the client cannot start replication as both the entire slot and the information about the data that has already been synchronised are missing.
+
+To resolve this problem, slots must be defined in the cluster manifest rather than in PostgreSQL.
+
+```
+spec:
+  patroni:
+    slots:
+      cdc_example:
+        database: app_db
+        plugin: pgoutput
+        type: logical
+```
+This example creates a logical replication slot with the name `cdc_example` within the `app_db` database and uses the `pgoutput` plugin for the slot. Note that replication slot names may only contain lower-case letters, numbers and the underscore character.
+
+
+{{< hint type=Info >}}Slots are only synchronised from the leader/standby leader to the replicas. This means that using the slots read-only on the replicas will cause a problem in the event of a failover.{{< /hint >}}
+
+
diff --git a/docs/hugo/content/en/connection_pooler/_index.md b/docs/hugo/content/en/connection_pooler/_index.md
new file mode 100644
index 000000000..f62488b20
--- /dev/null
+++ b/docs/hugo/content/en/connection_pooler/_index.md
@@ -0,0 +1,56 @@
+---
+title: "connection pooler"
+date: 2024-05-31T14:26:51+01:00
+draft: false
+weight: 1700
+---
+
+A connection pooler is a tool that acts as a proxy between the application and the database; it improves application performance and reduces the load on the database. The reason for this lies in PostgreSQL's connection handling.
+
+## How PostgreSQL handles connections
+PostgreSQL uses a dedicated process for every database connection, forked by the postmaster; this process handles the connection. On the positive side, this provides stable connections and isolation, but it is not particularly efficient for short-lived connections because of the effort required to create each process.
+
+## How Connection Pooling solves this problem
+
+With connection pooling, the application connects to the pooler, which in turn maintains a number of connections to the PostgreSQL database.
+This makes it possible to use the connections from the pooler to the database for a long time instead of short-lived connections and to recycle them accordingly.
+
+In addition to utilising long-lived connections, a connection pooler also makes it possible to reduce the number of connections required to the database. For example, if you have 3 application nodes, each of which maintains 100 connections to the database at the same time, that is 300 connections in total. The application usually does not come close to utilising this number of connections. With pgBouncer, this can be optimised so that the applications open their 300 connections to pgBouncer, but pgBouncer only opens, for example, 100 connections to PostgreSQL, thus reducing the load by two thirds.
+
+{{< hint type=Info >}}It is important to correctly configure the bouncer and thus the connections to be created between pgBouncer and PostgreSQL so that enough connections are available for the workload. {{< /hint >}}
+
+## How does this work with CPO
+CPO relies on pgBouncer, a popular and above all lightweight open-source tool. pgBouncer maintains a pool of database connections for each configured user, which can be used immediately for incoming client connections.
+
+## How do I create a pooler for a cluster?
+
+- connection_pooler.number_of_instances - How many connection pooler instances to create. Default is 2, which is also the required minimum.
+- connection_pooler.schema - Database schema in which the credentials lookup function for the connection pooler is created. It is created in every database of the Postgres cluster. You can also choose an existing schema. Default schema is pooler.
+- connection_pooler.user - User to create for the connection pooler to be able to connect to a database. You can also choose an existing role, but make sure it has the LOGIN privilege. Default role is pooler.
+- connection_pooler.image - Docker image to use for the connection pooler deployment. Default: "registry.opensource.zalan.do/acid/pgbouncer"
+- connection_pooler.max_db_connections - The maximum number of connections the pooler can hold. This value is divided among the pooler pods. Default is 60, which makes 30 connections per pod for the default setup with two instances.
+- connection_pooler.mode - Defines the pooling mode. Available values: `session`, `transaction` or `statement`. Default is `transaction`.
+- connection_pooler.resources - Hardware definition for the pooler pods
+
+- enableConnectionPooler - Defines whether a pooler for read/write access should be created based on the spec.connectionPooler definition.
+- enableReplicaConnectionPooler - Defines whether a pooler for read-only access should be created based on the spec.connectionPooler definition.
+
+```
+spec:
+ connectionPooler:
+ mode: transaction
+ numberOfInstances: 2
+ resources:
+ limits:
+ cpu: '1'
+ memory: 100Mi
+ requests:
+ cpu: 500m
+ memory: 100Mi
+ schema: pooler
+ user: pooler
+ enableConnectionPooler: true
+ enableReplicaConnectionPooler: true
+```
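+
+Assuming the pooler services follow the upstream naming convention (`<clusterName>-pooler` for read/write and `<clusterName>-pooler-repl` for read-only; this is an assumption, so check the services created in your namespace), an application can be pointed at them like this:
+
+```
+# Sketch: container env of an application Deployment (names are hypothetical)
+env:
+  - name: DB_HOST
+    value: cluster-1-pooler        # read/write traffic via the primary pooler
+  - name: DB_HOST_RO
+    value: cluster-1-pooler-repl   # read-only traffic via the replica poolers
+  - name: DB_PORT
+    value: "5432"
+```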
+
+
diff --git a/docs/hugo/content/en/crd/_index.md b/docs/hugo/content/en/crd/_index.md
new file mode 100644
index 000000000..1dae5a992
--- /dev/null
+++ b/docs/hugo/content/en/crd/_index.md
@@ -0,0 +1,7 @@
+---
+title: "References"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 2400
+---
+
diff --git a/docs/hugo/content/en/crd/crd-operator-configurator.md b/docs/hugo/content/en/crd/crd-operator-configurator.md
new file mode 100644
index 000000000..21d6cca50
--- /dev/null
+++ b/docs/hugo/content/en/crd/crd-operator-configurator.md
@@ -0,0 +1,89 @@
+---
+title: "Operator-Configuration"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 332
+---
+
+| Name | Type | default | Description |
+| -------------------------------- |:-------:| --------:| ------------------:|
+| enable_crd_registration | boolean | true | |
+| crd_categories | string | all | |
+| enable_lazy_spilo_upgrade | boolean | false | |
+| enable_pgversion_env_var | boolean | true | |
+| enable_spilo_wal_path_combat | boolean | false | |
+| etcd_host | string | | |
+| kubernetes_use_configmaps | boolean | false | |
+| docker_image | string | | |
+| sidecars | list | | |
+| enable_shm_volume | boolean | true | |
+| workers | int | 8 | |
+| max_instances | int | -1 | |
+| min_instances | int | -1 | |
+| resync_period | string | 30m | |
+| repair_period | string | 5m | |
+| set_memory_request_to_limit | boolean | false | |
+| debug_logging | boolean | true | |
+| enable_db_access | boolean | true | |
+| spilo_privileged | boolean | false | |
+| spilo_allow_privilege_escalation | boolean | true | |
+| watched_namespace | string | * | |
+
+#### major-upgrade-specific
+
+| Name | Type | default | Description |
+| ------------------------------------- |:-------:| --------:| ------------------:|
+| major_version_upgrade_mode | string | off | |
+| major_version_upgrade_team_allow_list | string | | |
+| minimal_major_version | string | 9.6 | |
+| target_major_version | string | 14 | |
+
+#### aws-specific
+
+| Name | Type | default | Description |
+| ------------------------------------- |:-------:| --------:| ------------------:|
+| wal_s3_bucket | string | | |
+| log_s3_bucket | string | | |
+| kube_iam_role | string | | |
+| aws_region | string | | |
+| additional_secret_mount | string | | |
+| additional_secret_mount_path | string | | |
+| enable_ebs_gp3_migration | boolean | | |
+| enable_ebs_gp3_migration_max_size | int | | |
+
+#### logical-backup-specific
+
+| Name | Type | default | Description |
+| ------------------------------------- |:-------:| --------:| ------------------:|
+| logical_backup_docker_image | string | | |
+| logical_backup_google_application_credentials | string | | |
+| logical_backup_job_prefix | string | | |
+| logical_backup_provider | string | | |
+| logical_backup_s3_access_key_id | string | | |
+| logical_backup_s3_bucket | string | | |
+| logical_backup_s3_endpoint | string | | |
+| logical_backup_s3_region | string | | |
+| logical_backup_s3_secret_access_key | string | | |
+| logical_backup_s3_sse | string | | |
+| logical_backup_s3_retention_time | string | | |
+| logical_backup_schedule | string | | (Cron-Syntax) |
+
+#### team-api-specific
+
+| Name | Type | default | Description |
+| ------------------------------------- |:-------:| --------:| ------------------:|
+| enable_teams_api | string | | |
+| teams_api_url | string | | |
+| teams_api_role_configuration | string | | |
+| enable_team_superuser | boolean | | |
+| team_admin_role | boolean | | |
+| enable_admin_role_for_users | boolean | | |
+| pam_role_name | string | | |
+| pam_configuration | string | | |
+| protected_role_names | list | | |
+| postgres_superuser_teams | string | | |
+| role_deletion_suffix | string | | |
+| enable_team_member_deprecation | boolean | | |
+| enable_postgres_team_crd | boolean | | |
+| enable_postgres_team_crd_superusers | boolean | | |
+| enable_team_id_clustername_prefix | boolean | | |
\ No newline at end of file
diff --git a/docs/hugo/content/en/crd/crd-postgresql.md b/docs/hugo/content/en/crd/crd-postgresql.md
new file mode 100644
index 000000000..97ba2544a
--- /dev/null
+++ b/docs/hugo/content/en/crd/crd-postgresql.md
@@ -0,0 +1,467 @@
+---
+title: "PostgreSQL"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 331
+---
+#### CRD for kind postgresql
+
+| Name | Type | required | Description |
+| ----------- |:--------------:| ---------:| ------------------:|
+| apiVersion | string | true | cpo.opensource.cybertec.at/v1 |
+| kind | string | true | |
+| metadata | object | true | |
+| [spec](#spec) | object | true | |
+| [status](#status) | object | false | |
+
+---
+
+#### spec
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| [additionalVolumes](#additionalvolumes) | array | false | List of additional volumes to mount in each container of the statefulset pod |
+| allowedSourceRanges | array | false | The corresponding load balancer is accessible only to the networks defined by this parameter |
+| [backup](#backup) | object | false | Enables the definition of a customised backup solution for the cluster |
+| [clone](#clone) | object | false | Defines the clone-target for the Cluster |
+| [connectionPooler](#connectionpooler) | object | false | Defines the configuration and settings for both types of connection pooler (primary and replica). |
+| databases | map | false | Defines the databases to be created by the operator, as a map of database name to owner. See [tutorial](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/configure_users_and_databases) |
+| dockerImage | string | true | Defines the used PostgreSQL-Container-Image for this cluster |
+| enableLogicalBackup | boolean | false | Enable logical Backups for this Cluster (Stored on S3) - s3-configuration for Operator is needed (Not for pgBackRest) |
+| enableConnectionPooler | boolean | false | creates a ConnectionPooler for the primary Pod |
+| enableReplicaConnectionPooler | boolean | false | creates a ConnectionPooler for the replica Pods |
+| enableMasterLoadBalancer | boolean | false | Define whether to enable the load balancer pointing to the Postgres primary |
+| enableReplicaLoadBalancer | boolean | false | Define whether to enable the load balancer pointing to the Postgres replicas |
+| enableMasterPoolerLoadBalancer | boolean | false | Define whether to enable the load balancer pointing to the primary ConnectionPooler |
+| enableReplicaPoolerLoadBalancer| boolean | false | Define whether to enable the load balancer pointing to the Replica-ConnectionPooler |
+| enableShmVolume | boolean | false | Start a database pod without limitations on shm memory. By default, Docker limits /dev/shm to 64MB, which may not be enough if PostgreSQL uses parallel workers heavily. If this option is set to true, a tmpfs volume is mounted into the target database pod to remove this limitation. |
+| [env](#env) | array | false | Allows adding custom environment variables to the PostgreSQL containers |
+| [initContainers](#initcontainers) | array | false | Enables the definition of init-containers |
+| logicalBackupSchedule | string | false | Enables the scheduling of logical backups based on cron-syntax. Example: `30 00 * * *` |
+| maintenanceWindows | array | false | Enables the definition of maintenance windows for the cluster. Example: `Sat:00:00-04:00` |
+| masterServiceAnnotations | map | false | Enables the definition of annotations for the Primary Service |
+| [monitor](#monitor) | map | false | Enables monitoring on the basis of the defined image |
+| nodeAffinity | map | false | Enables overwriting of the nodeAffinity |
+| numberOfInstances | int | true | Number of nodes of the cluster |
+| [patroni](#patroni) | map | false | Enables the customisation of patroni settings |
+| podPriorityClassName | string | false | a name of the priority class that should be assigned to the cluster pods. If not set then the default priority class is taken. The priority class itself must be defined in advance |
+| podAnnotations | map | false | A map of key value pairs that gets attached as annotations to each pod created for the database. |
+| [postgresql](#postgresql) | map | false | Enables the customisation of PostgreSQL settings and parameters |
+| [preparedDatabases](#prepareddatabases) | map | false | Allows you to define databases including owner, schemas and extensions and have the operator create them. See [tutorial](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/prepared_databases) |
+| replicaServiceAnnotations | map | false | Enables the definition of annotations for the Replica Service |
+| [resources](#resources) | map | true | CPU & Memory (Limit & Request) definition for the Postgres container |
+| ServiceAnnotations | map | false | A map of key value pairs that gets attached as annotations to each Service created for the database. |
+| [sidecars](#sidecars) | array | false | Enables the definition of custom sidecars |
+| spiloFSGroup | int | false | the Persistent Volumes for the Spilo pods in the StatefulSet will be owned and writable by the group ID specified. This will override the spilo_fsgroup operator parameter |
+| spiloRunAsGroup | int | false | sets the group ID which should be used in the container to run the process. |
+| spiloRunAsUser | int | false | Sets the user ID which should be used in the container to run the process. This must be set to run the container without root. |
+| [standby](#standby) | map | false | Enables the creation of a standby cluster at the time of the creation of a new cluster |
+| [streams](#streams) | array | false | Enables change data capture streams for defined database tables |
+| [tde](#tde) | map | false | Enables the activation of TDE if a new cluster is created |
+| teamId | string | true | name of the team the cluster belongs to. Will be removed soon |
+| [tls](#tls) | map | false | Custom TLS certificate |
+| [tolerations](#tolerations) | array | false | a list of tolerations that apply to the cluster pods. Each element of that list is a dictionary with the following fields: key, operator, value, effect and tolerationSeconds |
+| [topologySpreadConstraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) | map | false | Enables the definition of a topologySpreadConstraint. See [K8s-Documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) |
+| users | map | false | a map of usernames to user flags for the users that should be created in the cluster by the operator. See [tutorial](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/configure_users_and_databases) |
+| usersWithSecretRotation | list | false | list of users to enable credential rotation in K8s secrets. The rotation interval can only be configured globally. |
+| usersWithInPlaceSecretRotation | list | false | list of users to enable in-place password rotation in K8s secrets. The rotation interval can only be configured globally. |
+| [volume](#volume) | map | true | define the properties of the persistent storage that stores Postgres data |
+
+
+{{< back >}}
+
+---
+
+#### additionalVolumes
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| name | string | true | Name of the additional volume |
+| mountPath | string | true | Path at which the volume is mounted inside the container(s) |
+| targetContainers | array | true | List of container names to mount the volume into, or `all` for every container |
+| subPath | string | false | Sub-path within the volume to mount |
+| isSubPathExpr | boolean | false | Whether the subPath should be treated as an expandable expression |
+| [volumeSource](#volumeSource) | map | true | Kubernetes volume source definition (emptyDir, PersistentVolumeClaim or configMap) |
+
+{{< back >}}
+
+---
+
+#### backup
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| [pgbackrest](#pgbackrest) | object | false | Enables the definition of a pgbackrest-setup for the cluster |
+
+{{< back >}}
+
+---
+
+#### clone
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| cluster | string | true | Name of the cluster to be cloned. Random value if the cluster does not exist locally. |
+| [pgbackrest](#pgbackrest) | object | false | Defines the pgBackRest repository of the source cluster to restore the clone from |
+
+{{< back >}}
+
+---
+
+#### connectionPooler
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| numberOfInstances | int | true | Number of Pods per Pooler |
+| mode | string | true | pooling mode for pgBouncer (session, transaction, statement) |
+| schema | string | true | Schema for Pooler (Default: pooler) |
+| user | string | true | Username for Pooler (Default: pooler) |
+| maxDBConnections | string | true | Maximum number of connections the pooler opens to the database pod(s) |
+| [resources](#resources) | map | true | CPU & Memory (Limit & Request) definition for the Pooler |
+
+{{< back >}}
+---
+
+#### env
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| name | string | true | Keyfield for the ENV-Entry |
+| value | string | true | Valuefield for the ENV-Entry |
+
+{{< back >}}
+
+---
+
+#### initContainers
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| name | string | true | Name for the container |
+| image | string | true | Docker-Image for container |
+| command | string | false | to override CMD inside the container |
+| [env](#env) | array | false | Allows to add own Envs to the container |
+| [resources](#resources) | map | false | CPU & Memory (Limit & Request) definition for the container |
+| [ports](#ports) | array | false | Define open ports for the container |
+
+{{< back >}}
+
+---
+
+#### monitor
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| image | string | true | Docker-Image for the metric exporter |
+
+{{< back >}}
+
+---
+
+#### patroni
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| failsafe_mode | boolean | false | Patroni failsafe_mode parameter value. See the [Patroni documentation](https://patroni.readthedocs.io/en/master/dcs_failsafe_mode.html) for more details. |
+| initdb | map | false | a map of key-value pairs describing initdb parameters |
+| loop_wait | string | false | Patroni `loop_wait` parameter value, optional. The default is set by the PostgreSQL image. |
+| maximum_lag_on_failover | string | false | Patroni `maximum_lag_on_failover` parameter value, optional. The default is set by the PostgreSQL image. |
+| [multisite](#multisite) | map | false | Multisite configuration - Check the [Documentation](CYBERTEC-pg-operator/multisite/) first |
+| pg_hba | array | false | list of custom pg_hba lines to replace default ones. One entry per item (example: - hostssl all all 0.0.0.0/0 scram-sha-256) |
+| retry_timeout | int | false | Patroni `retry_timeout` parameter value, optional. The default is set by the PostgreSQL image. |
+| [slots](#slots) | map | false | permanent replication slots that Patroni preserves after failover by re-creating them on the new primary immediately after promotion. Use the preferred slot name as the map item |
+| synchronous_mode | boolean | false | Patroni `synchronous_mode` parameter value, optional. The default is false. |
+| synchronous_mode_strict | boolean | false | Patroni `synchronous_mode_strict` parameter value, optional. The default is false. |
+| synchronous_node_count | int | false | Patroni `synchronous_node_count` parameter value, optional. The default is set to 1. Only used if `synchronous_mode_strict` is true |
+| ttl | int | false | Patroni `ttl` parameter value, optional. The default is set by the PostgreSQL image. |
+
+{{< back >}}
+
+---
+
+#### PostgreSQL
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| parameters | map | false | PostgreSQL-Parameter as item (Example: max_connections: "100"). For help check out the [CYBERTEC PostgreSQL Configurator](https://pgconfigurator.cybertec.at) |
+| version | string | false | the PostgreSQL major version to use |
+
+{{< back >}}
+
+---
+
+#### preparedDatabases
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| defaultUsers | boolean | false | Creates roles with `LOGIN` permission and `_user` suffix. Default: false |
+| extensions | map | false | Includes the extensions as items (key: value), where the key is the name of the extension and the value is the target schema. Example: pgcrypto: public |
+| [schemas](#schemas) | map | false | Includes the schema names as items. |
+
+{{< back >}}
+
+---
+
+#### resources
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| [requests](#requests) | map | true | cpu and memory definitions (requests.cpu / requests.memory) |
+| [limits](#limits) | map | true | cpu and memory definitions (limits.cpu / limits.memory) |
+
+{{< back >}}
+
+---
+
+#### sidecars
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| name | string | true | Name for the container |
+| image | string | true | Docker-Image for container |
+| command | string | false | to override CMD inside the container |
+| [env](#env) | array | false | Allows to add own Envs to the container |
+| [resources](#resources) | map | false | CPU & Memory (Limit & Request) definition for the container |
+| [ports](#ports) | array | false | Define open ports for the container |
+
+{{< back >}}
+
+---
+
+#### standby
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| standby_host | string | true | Endpoint of the primary cluster |
+| standby_port | string | true | PostgreSQL port of the primary cluster |
+
+{{< back >}}
+
+---
+
+#### streams
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| applicationId | string | true | The application name to which the database and CDC belong. |
+| database | string | true | Name of the database from where events will be published via Postgres' logical decoding feature. |
+| tables | map | true | Defines a map of table names and their properties (eventType, idColumn and payloadColumn). |
+| batchSize | int | false | Defines the size of batches in which events are consumed. Defaults to 1 |
+| enableRecovery | boolean | false | Flag to enable a dead letter queue recovery for all streams tables. |
+| filter | string | false | Streamed events can be filtered by a jsonpath expression for each table. |
+
+{{< back >}}
+
+---
+
+#### tde
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| enable | boolean | true | enable TDE during initDB |
+
+{{< back >}}
+
+---
+
+#### tolerations
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| key | string | false | Key for the taint attribute of the node |
+| operator | string | false | Comparison operator (Equal or Exists). |
+| value | string | false | Value of the taint (only relevant for ‘Equal’). |
+| effect | string | false | Specifies how the node handles the pod (NoExecute, NoSchedule, PreferNoSchedule) |
+| tolerationSeconds | int | false | Specifies how long the pod tolerates the taint (only for NoExecute). |
+
+
+{{< back >}}
+
+---
+
+#### volume
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| size | string | true | the size of the target volume. Usual Kubernetes size modifiers, i.e. Gi or Mi, apply |
+| storageClass | string | false | the name of the Kubernetes storage class to draw the persistent volume from. If empty K8s will choose the default StorageClass |
+| subPath | string | false | Subpath to use when mounting volume into PostgreSQL container. |
+| iops | int | false | When running the operator on AWS the latest generation of EBS volumes (gp3) allows for configuring the number of IOPS. Maximum is 16000 |
+| throughput | int | false | When running the operator on AWS the latest generation of EBS volumes (gp3) allows for configuring the throughput in MB/s. Maximum is 1000 |
+| selector | map | false | A label query over PVs to consider for binding. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) for details on using matchLabels and matchExpressions |
+
+{{< back >}}
+
+---
+
+#### volumeSource
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| emptyDir | string | false | emptyDir: {} |
+| [PersistentVolumeClaim](#volumeSource-PersistentVolumeClaim) | map | false | PersistentVolumeClaim object |
+| [configMap](#volumeSource-configMap) | map | false | configMap object |
+
+{{< back >}}
+
+---
+
+#### volumeSource-PersistentVolumeClaim
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| claimName | string | true | Name of the PersistentVolumeClaim |
+| readyOnly | boolean | false | Mount the volume read-only. Default: false |
+
+{{< back >}}
+
+---
+
+#### volumeSource-configMap
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| name | string | true | Name of the ConfigMap |
+
+{{< back >}}
+
+---
+
+#### multisite
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| enable | boolean | true | Enable multisite-feature |
+| [etcd](#etcd) | map | true | Connection settings for the global etcd cluster |
+| retry_timeout | int | true | Patroni `retry_timeout` parameter value for the global etcd, optional. The default is set by the PostgreSQL image. |
+| site | string | true | Name for the site of this cluster |
+| ttl | int | true | Patroni `ttl` parameter value for the global etcd, optional. The default is set by the PostgreSQL image. |
+
+{{< back >}}
+
+---
+
+#### slots
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| type | string | true | Slot-Type (`physical` or `logical`) |
+| database | string | false | Databasename - for logical replication only |
+| plugin | string | false | Plugin - for logical replication only |
+
+{{< back >}}
+---
+
+#### schemas
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| defaultRoles | boolean | false | Creates schema-exclusive roles with `NOLOGIN` permission. Default: true |
+| defaultUsers | boolean | false | Creates schema-exclusive users with `LOGIN` permission and `_user` suffix. Default: false |
+
+{{< back >}}
+
+---
+
+#### etcd
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| hosts | string | true | list of etcd hosts, including etcd-client-port (default: `2379`), comma separated like in the etcd config |
+| password | string | false | Password for the global etcd |
+| protocol | string | true | Protocol for the global etcd (http or https) |
+| user | string | false | Username for the global etcd |
+
+{{< back >}}
+
+---
+
+#### requests
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| cpu | string | true | CPU definition. Example: 1000m |
+| memory | string | true | Memory definition. Example: 1000Mi |
+
+{{< back >}}
+
+---
+
+#### limits
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| cpu | string | true | CPU definition. Example: 1000m |
+| memory | string | true | Memory definition. Example: 1000Mi |
+
+{{< back >}}
+
+---
+
+#### pgbackrest
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| [configuration](#configuration)| object | false | Additional pgBackRest configuration, e.g. a secret with S3 credentials |
+| global | object | false | |
+| image | string | true | Docker image for the pgBackRest container |
+| [repos](#repos) | array | true | List of backup repositories (see [repos](#repos)) |
+| [resources](#resources) | object | false | CPU & Memory (Limit & Request) definition for the pgBackRest container|
+
+{{< back >}}
+
+---
+
+#### configuration
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| secret | object | false | Name of the secret containing the S3 credentials (AccessKey & SecretAccessKey). Note: it must be placed in the same namespace as the cluster |
+| [protection](#protection) | object | false | Enable Protection-Options |
+
+{{< back >}}
+
+---
+
+
+
+#### protection
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| restore | boolean | false | A restore is ignored as long as this option is set to true. |
+
+{{< back >}}
+
+---
+
+#### repos
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| name | string | true | Name of the repository. Required format: repo[1-4] |
+| storage | string | true | Defines the backup storage to use (one of: pvc, s3, blob, gcs) |
+| resource | string | true | Bucket, instance, storage or PVC name |
+| endpoint | string | false | The endpoint for the chosen storage (not required for local storage) |
+| region | string | false | Region for the chosen storage (S3 only) |
+| [schedule](#schedule) | object | false | Object for defining automatic backups |
+
+{{< back >}}
+
+---
+
+#### schedule
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| full | string | false | (Cron-Syntax) Define full backup |
+| incr | string | false | (Cron-Syntax) Define incremental backup |
+| diff | string | false | (Cron-Syntax) Define differential backup |
+
+{{< back >}}
+
+---
+
+#### status
+
+| Name | Type | required | Description |
+| ------------------------------ |:-------:| ---------:| ------------------:|
+| PostgresClusterStatus | string | false | Shows the cluster status. Filled by the Operator |
+
+{{< back >}}
diff --git a/docs/hugo/content/en/customize_cluster/_index.md b/docs/hugo/content/en/customize_cluster/_index.md
new file mode 100644
index 000000000..c8329e684
--- /dev/null
+++ b/docs/hugo/content/en/customize_cluster/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Customize Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1000
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/customize_cluster/additional-volumes.md b/docs/hugo/content/en/customize_cluster/additional-volumes.md
new file mode 100644
index 000000000..8127841e6
--- /dev/null
+++ b/docs/hugo/content/en/customize_cluster/additional-volumes.md
@@ -0,0 +1,32 @@
+---
+title: "Additional Volumes"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 3
+---
+
+Additional volumes can be mounted into the cluster pods via the `additionalVolumes` section of the cluster manifest (nested under `spec`). Each entry defines a volume name, a mount path, the target containers and a Kubernetes volume source:
+
+```
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+# - name: data
+# mountPath: /home/postgres/pgdata/partitions
+# targetContainers:
+# - postgres
+# volumeSource:
+# PersistentVolumeClaim:
+# claimName: pvc-postgresql-data-partitions
+# readyOnly: false
+# - name: conf
+# mountPath: /etc/telegraf
+# subPath: telegraf.conf
+# targetContainers:
+# - telegraf-sidecar
+# volumeSource:
+# configMap:
+# name: my-config-map
+```
\ No newline at end of file
diff --git a/docs/hugo/content/en/customize_cluster/sidecars.md b/docs/hugo/content/en/customize_cluster/sidecars.md
new file mode 100644
index 000000000..babbec38b
--- /dev/null
+++ b/docs/hugo/content/en/customize_cluster/sidecars.md
@@ -0,0 +1,110 @@
+---
+title: "Sidecars"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1
+---
+Starting with the single-node cluster from the previous section, we want to modify the instance step by step.
+## CPU and Memory
+```
+spec:
+  resources:
+    limits:
+      cpu: 1000m
+      memory: 500Mi
+    requests:
+      cpu: 500m
+      memory: 500Mi
+```
+The resources definition lets us set both the reserved hardware (requests) and the limits, which allow the pod to consume more than the reserved amount if the Kubernetes worker has that hardware available. There are some restrictions when modifying the limits section: because of how databases behave, we should never define a difference between requests.memory and limits.memory. After some time, a database will use all memory available to it, for caching and other purposes. Limits are optional, and the worker node can reclaim memory above the request; reclaiming memory creates serious problems inside a database, such as corruption or the OutOfMemory killer terminating processes.
+CPU, on the other hand, is a resource we can safely raise in the limits definition to allow our database to use more CPU when needed and available.
+
+## Sidecars
+Sidecars are additional containers running in the same pod as the database. We can use them for several different jobs.
+The operator allows us to define them directly inside the cluster manifest.
+```
+spec:
+ sidecars:
+ - name: "telegraf-sidecar"
+ image: "telegraf:latest"
+ ports:
+ - name: metrics
+ containerPort: 8094
+ protocol: TCP
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ env:
+ - name: "USEFUL_VAR"
+ value: "perhaps-true"
+```
+This example adds a second container to our pods. The change triggers a restart, which causes downtime if you're not running an HA cluster.
+
+## Init-Containers
+We can do exactly the same as for sidecars with init containers.
+The difference is that a sidecar runs permanently alongside the database container,
+while an init container runs first, when the pod is created, and terminates once its job is done.
+The "normal" containers have to wait until all init containers have finished their jobs and exited.
+```
+spec:
+ initContainers:
+ - name: date
+ image: busybox
+ command: [ "/bin/date" ]
+```
+
+## TLS-Certificates
+On startup, the containers create a custom TLS certificate which allows TLS-secured connections to the database.
+However, these certificates cannot be verified, because the application has no information about the CA. The certificates are therefore no protection against MITM attacks.
+You can configure your own certificates and CA to ensure secured and verified connections between your application and your database.
+```
+spec:
+ tls:
+ secretName: "" # should correspond to a Kubernetes Secret resource to load
+ certificateFile: "tls.crt"
+ privateKeyFile: "tls.key"
+ caFile: "" # optionally configure Postgres with a CA certificate
+ caSecretName: "" # optionally the ca.crt can come from this secret instead.
+```
+You need to store the required values from tls.crt, tls.key and ca.crt in a secret and define the secret name inside the tls object.
+If you want, you can create a separate secret just for the CA and use this secret for every cluster inside the namespace.
+For information about creating the certificates and secrets, check the tutorial in the additional section or click [here](additonal/tutorials/tls)
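+
+As an illustration, a TLS secret of the standard `kubernetes.io/tls` type could look like the following sketch; the secret name is hypothetical and must match `secretName` in the manifest:
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-cluster-tls   # hypothetical; referenced via spec.tls.secretName
+type: kubernetes.io/tls
+data:
+  tls.crt: <base64-encoded server certificate>
+  tls.key: <base64-encoded private key>
+```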
+
+## Node-Affinity
+Node affinity ensures that the cluster pods are only deployed on Kubernetes nodes that carry the defined label key and value.
+```
+spec:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: cpo
+ operator: In
+ values:
+ - enabled
+```
+This allows you, for example, to use dedicated database nodes in a mixed cluster.
+In the example above, the cluster pods are only deployed on nodes with the key `cpo` and the value `enabled`,
+so you're able to separate your workload.
+
+## PostgreSQL-Configuration
+Every cluster starts with the default PostgreSQL configuration. Every parameter can be overridden via definitions inside the cluster manifest.
+To do so, we just need to add the `parameters` section to the `postgresql` object.
+```
+spec:
+  postgresql:
+    version: "16"
+    parameters:
+      max_connections: "53"
+      log_statement: "all"
+      track_io_timing: "true"
+```
+These definitions change the PostgreSQL configuration. Depending on the parameters changed, the pods may need a restart, which causes downtime if it's not an HA cluster.
+You can check the parameters and their allowed values in these sources to ensure a correct value:
+- [PostgreSQL documentation](https://www.postgresql.org/docs/)
+- [PostgreSQLco.nf](https://postgresqlco.nf/)
diff --git a/docs/hugo/content/en/db_users/_index.md b/docs/hugo/content/en/db_users/_index.md
new file mode 100644
index 000000000..8a7a54d9e
--- /dev/null
+++ b/docs/hugo/content/en/db_users/_index.md
@@ -0,0 +1,153 @@
+---
+title: "Databases & Users"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 900
+---
+
+CPO not only supports you in deploying your cluster, it also supports you in setting it up in terms of the database and users.
+CPO offers you three different options for this:
+- Create roles
+- Create databases
+- prepared databases
+
+## Create Roles
+The creation of users is based on the definition of the user name and of the required rights for this user. Available rights are:
+- `superuser`
+- `inherit`
+- `login`
+- `nologin`
+- `createrole`
+- `createdb`
+- `replication`
+- `bypassrls`
+
+Unless explicitly defined via `NOLOGIN`, a created user automatically receives the `LOGIN` permission.
+
+```
+spec:
+ users:
+ db_owner:
+ - login
+ - createdb
+ appl_user:
+ - login
+```
+
+For each user created, CPO automatically creates a secret with `username` and `password` in the namespace of the cluster, which follows the following naming convention:
+[USERNAME].[CLUSTERNAME].credentials.postgresql.cpo.opensource.cybertec.at
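+
+As an illustration (assuming a cluster named `cluster-1` and a user named `appuser`, both hypothetical), the generated secret follows the convention above and can be decoded with standard tooling:
+
+```
+# e.g. kubectl get secret appuser.cluster-1.credentials.postgresql.cpo.opensource.cybertec.at \
+#        -o jsonpath='{.data.password}' | base64 -d
+apiVersion: v1
+kind: Secret
+metadata:
+  name: appuser.cluster-1.credentials.postgresql.cpo.opensource.cybertec.at
+type: Opaque
+data:
+  username: <base64-encoded username>
+  password: <base64-encoded password>
+```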
+
+If the secrets for an application are to be stored in a different namespace, it is necessary to set `enable_cross_namespace_secret` to true in the operator configuration. You can find more information about the operator configuration [here](documentation/how-to-use/operator_configuration/).
+
+The namespace must then be written before the user name.
+```
+spec:
+ users:
+ db_owner:
+ - login
+ - createdb
+ app_namespace.appl_user:
+ - login
+```
+
+## Create Databases
+
+Databases are basically created in a very similar way to users.
+The definition is based on the database name and the database owner.
+
+```
+spec:
+ users:
+ db_owner:
+ - login
+ - createdb
+ app_namespace.appl_user:
+ - login
+  databases:
+ app_db: app_namespace.appl_user
+```
+
+{{< hint type=Info >}}Be aware that the user name must be defined for the database owner in the same way as it is done in the users object. {{< /hint >}}
+
+## Prepared Databases
+
+The `preparedDatabases` object is available for a much more extensive setup of databases and users.
+In addition to the creation of `databases` and `users`, this also enables the creation of `schemas` and `extensions`. A more detailed rights management is also available.
+
+### Databases and Schema
+
+Creating the preparedDatabases object already creates a database whose name is based on the cluster name. `preparedDatabases: {}`
+
+{{< hint type=Info >}}For the database name, `-` is replaced with `_` in the cluster name{{< /hint >}}
+
+To create your own database names and elements such as schemas and extensions within the database, an object must be created within preparedDatabases for each database.
+
+```
+spec:
+ preparedDatabases:
+ appl_db:
+ extensions:
+ dblink: public
+ schemas:
+ data: {}
+```
+
+This example creates a database with the name `appl_db` and creates a schema with the name `data` in it, as well as creating the `dblink` extension in the schema `public`.
+
+### Management of users and Permissions
+
+For rights management, we distinguish between `NOLOGIN` roles and `LOGIN` roles. `Users` have login rights and inherit the other rights from the `NOLOGIN` role.
+
+#### NoLogin roles (defaultRoles)
+
+The roles are created unless `defaultRoles` is explicitly set to false.
+```
+spec:
+ preparedDatabases:
+ appl_db:
+ extensions:
+ dblink: public
+ schemas:
+ data: {}
+```
+This creates `NOLOGIN` roles for the schema owner, writer and reader.
+
+#### Login roles (defaultUsers)
+
+The roles described in the previous paragraph can be assigned to `LOGIN` roles via the users section in the manifest. Optionally, the operator can also create standard `LOGIN` roles for the database and each individual schema. These roles are given the suffix `_user` and inherit all rights from their `NOLOGIN` counterparts. Therefore, you cannot set defaultRoles to false and activate defaultUsers at the same time.
+
+```
+spec:
+ preparedDatabases:
+ appl_db:
+ defaultUsers: true
+ extensions:
+ dblink: public
+ schemas:
+ data: {}
+ history:
+ defaultRoles: true
+ defaultUsers: false
+```
+This example creates the following users and inheritances
+
+Role name | Attributes | inherits from
+------------------------|---------------------------|--------------------------------
+ appl_db_owner           | Cannot login              | appl_db_reader, appl_db_writer, appl_db_data_owner, ...
+ appl_db_owner_user | | appl_db_owner
+ appl_db_reader | Cannot login |
+ appl_db_reader_user | | appl_db_reader
+ appl_db_writer | Cannot login | appl_db_reader
+ appl_db_writer_user | | appl_db_writer
+ appl_db_data_owner | Cannot login | appl_db_data_reader,appl_db_data_writer
+ appl_db_data_reader | Cannot login |
+ appl_db_data_writer | Cannot login | appl_db_data_reader
+ appl_db_history_owner | Cannot login | appl_db_history_reader,appl_db_history_writer
+ appl_db_history_reader | Cannot login |
+ appl_db_history_writer | Cannot login | appl_db_history_reader
+
+Default access permissions are also defined for LOGIN roles when databases and schemas are created. This means that they are not currently set if defaultUsers (or defaultRoles for schemas) are activated at a later time.
+
+#### User Secrets
+
+For each user created by cpo with `LOGIN` permissions, the operator also creates a secret with username and password, as with the creation of roles via the `users` object.
\ No newline at end of file
diff --git a/docs/hugo/content/en/extensions/_index.md b/docs/hugo/content/en/extensions/_index.md
new file mode 100644
index 000000000..cbe16db49
--- /dev/null
+++ b/docs/hugo/content/en/extensions/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Extensions"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1900
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/extensions/pg13.md b/docs/hugo/content/en/extensions/pg13.md
new file mode 100644
index 000000000..9b57bb925
--- /dev/null
+++ b/docs/hugo/content/en/extensions/pg13.md
@@ -0,0 +1,86 @@
+---
+title: "PostgreSQL 13"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1813
+---
+
+{{< hint type=info >}} The extensions listed are included in the standard images. This list refers to PostgreSQL 13. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|---------------------|-----------------|----------------------------------------------------------------------------|
+| adminpack | 2.1 | Administrative functions for PostgreSQL |
+| amcheck | 1.2 | Functions for verifying relation integrity |
+| autoinc | 1.0 | Functions for autoincrementing fields |
+| bloom | 1.0 | Bloom access method - signature file based index |
+| btree_gin | 1.3 | Support for indexing common datatypes in GIN |
+| btree_gist | 1.5 | Support for indexing common datatypes in GiST |
+| citext | 1.6 | Data type for case-insensitive character strings |
+| credcheck | 3.0.0 | credcheck - PostgreSQL plain text credential checker |
+| cube | 1.4 | Data type for multidimensional cubes |
+| dblink | 1.2 | Connect to other PostgreSQL databases from within a database |
+| dict_int | 1.0 | Text search dictionary template for integers |
+| dict_xsyn | 1.0 | Text search dictionary template for extended synonym processing |
+| earthdistance | 1.1 | Calculate great-circle distances on the surface of the Earth |
+| file_fdw | 1.0 | Foreign-data wrapper for flat file access |
+| fuzzystrmatch | 1.1 | Determine similarities and distance between strings |
+| hstore | 1.7 | Data type for storing sets of (key, value) pairs |
+| hstore_plperl | 1.0 | Transform between hstore and plperl |
+| hstore_plperlu | 1.0 | Transform between hstore and plperlu |
+| hstore_plpython3u | 1.0 | Transform between hstore and plpython3u |
+| insert_username | 1.0 | Functions for tracking who changed a table |
+| intagg | 1.1 | Integer aggregator and enumerator (obsolete) |
+| intarray | 1.3 | Functions, operators, and index support for 1-D arrays of integers |
+| isn | 1.2 | Data types for international product numbering standards |
+| jsonb_plperl | 1.0 | Transform between jsonb and plperl |
+| jsonb_plperlu | 1.0 | Transform between jsonb and plperlu |
+| jsonb_plpython3u | 1.0 | Transform between jsonb and plpython3u |
+| lo | 1.1 | Large Object maintenance |
+| ltree | 1.2 | Data type for hierarchical tree-like structures |
+| ltree_plpython3u | 1.0 | Transform between ltree and plpython3u |
+| moddatetime | 1.0 | Functions for tracking last modification time |
+| pageinspect | 1.8 | Inspect the contents of database pages at a low level |
+| pg_buffercache | 1.3 | Examine the shared buffer cache |
+| pg_cron | 1.6 | Job scheduler for PostgreSQL |
+| pg_freespacemap | 1.2 | Examine the free space map (FSM) |
+| pg_permissions | 1.3 | View object permissions and compare them with the desired state |
+| pg_prewarm | 1.2 | Prewarm relation data |
+| pg_proctab | | Placeholder - see pg_proctab--0.0.10-compat.control |
+| pg_stat_statements | 1.8 | Track planning and execution statistics of all SQL statements executed |
+| pg_trgm | 1.5 | Text similarity measurement and index searching based on trigrams |
+| pg_visibility | 1.2 | Examine the visibility map (VM) and page-level visibility info |
+| pgaudit | 1.5.3 | Provides auditing functionality |
+| pgauditlogtofile | 1.6 | pgAudit addon to redirect audit entries to an independent file |
+| pgcrypto | 1.3 | Cryptographic functions |
+| pgnodemx | 1.7 | SQL functions that allow capture of node OS metrics from PostgreSQL |
+| pgrowlocks | 1.2 | Show row-level locking information |
+| pgstattuple | 1.5 | Show tuple-level statistics |
+| plpgsql | 1.0 | PL/pgSQL procedural language |
+| plpython3u | 1.0 | PL/Python3U untrusted procedural language |
+| pltcl | 1.0 | PL/Tcl procedural language |
+| pltclu | 1.0 | PL/TclU untrusted procedural language |
+| postgres_fdw | 1.0 | Foreign-data wrapper for remote PostgreSQL servers |
+| refint | 1.0 | Functions for implementing referential integrity (obsolete) |
+| seg | 1.3 | Data type for representing line segments or floating-point intervals |
+| set_user | 4.1.0 | Similar to SET ROLE but with added logging |
+| sslinfo | 1.2 | Information about SSL certificates |
+| tablefunc | 1.0 | Functions that manipulate whole tables, including crosstab |
+| tcn | 1.0 | Triggered change notifications |
+| timescaledb | 2.15.3 | Enables scalable inserts and complex queries for time-series data (Apache 2 Edition) |
+| tsm_system_rows | 1.0 | TABLESAMPLE method which accepts number of rows as a limit |
+| tsm_system_time | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit |
+| unaccent | 1.1 | Text search dictionary that removes accents |
+| uuid-ossp | 1.1 | Generate universally unique identifiers (UUIDs) |
+| xml2 | 1.1 | XPath querying and XSLT |
+
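+To make one of these extensions available in a database managed by CPO, it can, for example, be declared in the cluster manifest via `preparedDatabases` (a minimal sketch, assuming a database named `app_db`):
+
+```
+spec:
+  preparedDatabases:
+    app_db:
+      extensions:
+        pg_stat_statements: public
+```
+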
+{{< hint type=info >}} The following extensions are additionally included in the Postgis images. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|----------------------------|-----------------|-----------------------------------------------------------------------------------------------------|
+| address_standardizer | 3.4.4 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
+| address_standardizer_data_us | 3.4.4 | Address Standardizer US dataset example |
+| postgis | 3.4.4 | PostGIS geometry and geography spatial types and functions |
+| postgis_raster | 3.4.4 | PostGIS raster types and functions |
+| postgis_sfcgal | 3.4.4 | PostGIS SFCGAL functions |
+| postgis_tiger_geocoder | 3.4.4 | PostGIS tiger geocoder and reverse geocoder |
+| postgis_topology | 3.4.4 | PostGIS topology spatial types and functions |
diff --git a/docs/hugo/content/en/extensions/pg14.md b/docs/hugo/content/en/extensions/pg14.md
new file mode 100644
index 000000000..857061e05
--- /dev/null
+++ b/docs/hugo/content/en/extensions/pg14.md
@@ -0,0 +1,88 @@
+---
+title: "PostgreSQL 14"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1814
+---
+
+{{< hint type=info >}} The extensions listed are included in the standard images. This list refers to PostgreSQL 14. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|---------------------|-----------------|----------------------------------------------------------------------------|
+| adminpack | 2.1 | Administrative functions for PostgreSQL |
+| amcheck | 1.3 | Functions for verifying relation integrity |
+| autoinc | 1.0 | Functions for autoincrementing fields |
+| bloom | 1.0 | Bloom access method - signature file based index |
+| btree_gin | 1.3 | Support for indexing common datatypes in GIN |
+| btree_gist | 1.6 | Support for indexing common datatypes in GiST |
+| citext | 1.6 | Data type for case-insensitive character strings |
+| credcheck | 3.0.0 | credcheck - PostgreSQL plain text credential checker |
+| cube | 1.5 | Data type for multidimensional cubes |
+| dblink | 1.2 | Connect to other PostgreSQL databases from within a database |
+| dict_int | 1.0 | Text search dictionary template for integers |
+| dict_xsyn | 1.0 | Text search dictionary template for extended synonym processing |
+| earthdistance | 1.1 | Calculate great-circle distances on the surface of the Earth |
+| file_fdw | 1.0 | Foreign-data wrapper for flat file access |
+| fuzzystrmatch | 1.1 | Determine similarities and distance between strings |
+| hstore | 1.8 | Data type for storing sets of (key, value) pairs |
+| hstore_plperl | 1.0 | Transform between hstore and plperl |
+| hstore_plperlu | 1.0 | Transform between hstore and plperlu |
+| hstore_plpython3u | 1.0 | Transform between hstore and plpython3u |
+| insert_username | 1.0 | Functions for tracking who changed a table |
+| intagg | 1.1 | Integer aggregator and enumerator (obsolete) |
+| intarray | 1.5 | Functions, operators, and index support for 1-D arrays of integers |
+| isn | 1.2 | Data types for international product numbering standards |
+| jsonb_plperl | 1.0 | Transform between jsonb and plperl |
+| jsonb_plperlu | 1.0 | Transform between jsonb and plperlu |
+| jsonb_plpython3u | 1.0 | Transform between jsonb and plpython3u |
+| lo | 1.1 | Large Object maintenance |
+| ltree | 1.2 | Data type for hierarchical tree-like structures |
+| ltree_plpython3u | 1.0 | Transform between ltree and plpython3u |
+| moddatetime | 1.0 | Functions for tracking last modification time |
+| old_snapshot | 1.0 | Utilities in support of old_snapshot_threshold |
+| pageinspect | 1.9 | Inspect the contents of database pages at a low level |
+| pg_buffercache | 1.3 | Examine the shared buffer cache |
+| pg_cron | 1.6 | Job scheduler for PostgreSQL |
+| pg_freespacemap | 1.2 | Examine the free space map (FSM) |
+| pg_permissions | 1.3 | View object permissions and compare them with the desired state |
+| pg_prewarm | 1.2 | Prewarm relation data |
+| pg_proctab | | Placeholder - see pg_proctab--0.0.10-compat.control |
+| pg_stat_statements | 1.9 | Track planning and execution statistics of all SQL statements executed |
+| pg_surgery | 1.0 | Extension to perform surgery on a damaged relation |
+| pg_trgm | 1.6 | Text similarity measurement and index searching based on trigrams |
+| pg_visibility | 1.2 | Examine the visibility map (VM) and page-level visibility info |
+| pgaudit | 1.6.3 | Provides auditing functionality |
+| pgauditlogtofile | 1.6 | pgAudit addon to redirect audit entries to an independent file |
+| pgcrypto | 1.3 | Cryptographic functions |
+| pgnodemx | 1.7 | SQL functions that allow capture of node OS metrics from PostgreSQL |
+| pgrowlocks | 1.2 | Show row-level locking information |
+| pgstattuple | 1.5 | Show tuple-level statistics |
+| plpgsql | 1.0 | PL/pgSQL procedural language |
+| plpython3u | 1.0 | PL/Python3U untrusted procedural language |
+| pltcl | 1.0 | PL/Tcl procedural language |
+| pltclu | 1.0 | PL/TclU untrusted procedural language |
+| postgres_fdw | 1.1 | Foreign-data wrapper for remote PostgreSQL servers |
+| refint | 1.0 | Functions for implementing referential integrity (obsolete) |
+| seg | 1.4 | Data type for representing line segments or floating-point intervals |
+| set_user | 4.1.0 | Similar to SET ROLE but with added logging |
+| sslinfo | 1.2 | Information about SSL certificates |
+| tablefunc | 1.0 | Functions that manipulate whole tables, including crosstab |
+| tcn | 1.0 | Triggered change notifications |
+| timescaledb | 2.18.2 | Enables scalable inserts and complex queries for time-series data (Apache 2 Edition) |
+| tsm_system_rows | 1.0 | TABLESAMPLE method which accepts number of rows as a limit |
+| tsm_system_time | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit |
+| unaccent | 1.1 | Text search dictionary that removes accents |
+| uuid-ossp | 1.1 | Generate universally unique identifiers (UUIDs) |
+| xml2 | 1.1 | XPath querying and XSLT |
+
+{{< hint type=info >}} The following extensions are additionally included in the Postgis images. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|----------------------------|-----------------|-----------------------------------------------------------------------------------------------------|
+| address_standardizer | 3.4.4 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
+| address_standardizer_data_us | 3.4.4 | Address Standardizer US dataset example |
+| postgis | 3.4.4 | PostGIS geometry and geography spatial types and functions |
+| postgis_raster | 3.4.4 | PostGIS raster types and functions |
+| postgis_sfcgal | 3.4.4 | PostGIS SFCGAL functions |
+| postgis_tiger_geocoder | 3.4.4 | PostGIS tiger geocoder and reverse geocoder |
+| postgis_topology | 3.4.4 | PostGIS topology spatial types and functions |
diff --git a/docs/hugo/content/en/extensions/pg15.md b/docs/hugo/content/en/extensions/pg15.md
new file mode 100644
index 000000000..1bbd33a7b
--- /dev/null
+++ b/docs/hugo/content/en/extensions/pg15.md
@@ -0,0 +1,89 @@
+---
+title: "PostgreSQL 15"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1815
+---
+
+{{< hint type=info >}} The extensions listed are included in the standard images. This list refers to PostgreSQL 15. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|---------------------|-----------------|----------------------------------------------------------------------------|
+| adminpack | 2.1 | Administrative functions for PostgreSQL |
+| amcheck | 1.3 | Functions for verifying relation integrity |
+| autoinc | 1.0 | Functions for autoincrementing fields |
+| bloom | 1.0 | Bloom access method - signature file based index |
+| btree_gin | 1.3 | Support for indexing common datatypes in GIN |
+| btree_gist | 1.7 | Support for indexing common datatypes in GiST |
+| citext | 1.6 | Data type for case-insensitive character strings |
+| credcheck | 3.0.0 | credcheck - PostgreSQL plain text credential checker |
+| cube | 1.5 | Data type for multidimensional cubes |
+| dblink | 1.2 | Connect to other PostgreSQL databases from within a database |
+| dict_int | 1.0 | Text search dictionary template for integers |
+| dict_xsyn | 1.0 | Text search dictionary template for extended synonym processing |
+| earthdistance | 1.1 | Calculate great-circle distances on the surface of the Earth |
+| file_fdw | 1.0 | Foreign-data wrapper for flat file access |
+| fuzzystrmatch | 1.1 | Determine similarities and distance between strings |
+| hstore | 1.8 | Data type for storing sets of (key, value) pairs |
+| hstore_plperl | 1.0 | Transform between hstore and plperl |
+| hstore_plperlu | 1.0 | Transform between hstore and plperlu |
+| hstore_plpython3u | 1.0 | Transform between hstore and plpython3u |
+| insert_username | 1.0 | Functions for tracking who changed a table |
+| intagg | 1.1 | Integer aggregator and enumerator (obsolete) |
+| intarray | 1.5 | Functions, operators, and index support for 1-D arrays of integers |
+| isn | 1.2 | Data types for international product numbering standards |
+| jsonb_plperl | 1.0 | Transform between jsonb and plperl |
+| jsonb_plperlu | 1.0 | Transform between jsonb and plperlu |
+| jsonb_plpython3u | 1.0 | Transform between jsonb and plpython3u |
+| lo | 1.1 | Large Object maintenance |
+| ltree | 1.2 | Data type for hierarchical tree-like structures |
+| ltree_plpython3u | 1.0 | Transform between ltree and plpython3u |
+| moddatetime | 1.0 | Functions for tracking last modification time |
+| old_snapshot | 1.0 | Utilities in support of old_snapshot_threshold |
+| pageinspect | 1.11 | Inspect the contents of database pages at a low level |
+| pg_buffercache | 1.3 | Examine the shared buffer cache |
+| pg_cron | 1.6 | Job scheduler for PostgreSQL |
+| pg_freespacemap | 1.2 | Examine the free space map (FSM) |
+| pg_permissions | 1.3 | View object permissions and compare them with the desired state |
+| pg_prewarm | 1.2 | Prewarm relation data |
+| pg_proctab | | Placeholder - see pg_proctab--0.0.10-compat.control |
+| pg_stat_statements| 1.10 | Track planning and execution statistics of all SQL statements executed |
+| pg_surgery | 1.0 | Extension to perform surgery on a damaged relation |
+| pg_trgm | 1.6 | Text similarity measurement and index searching based on trigrams |
+| pg_visibility | 1.2 | Examine the visibility map (VM) and page-level visibility info |
+| pg_walinspect | 1.0 | Functions to inspect contents of PostgreSQL Write-Ahead Log |
+| pgaudit | 1.7 | Provides auditing functionality |
+| pgauditlogtofile | 1.6 | pgAudit addon to redirect audit entries to an independent file |
+| pgcrypto | 1.3 | Cryptographic functions |
+| pgnodemx | 1.7 | SQL functions that allow capture of node OS metrics from PostgreSQL |
+| pgrowlocks | 1.2 | Show row-level locking information |
+| pgstattuple | 1.5 | Show tuple-level statistics |
+| plpgsql | 1.0 | PL/pgSQL procedural language |
+| plpython3u | 1.0 | PL/Python3U untrusted procedural language |
+| pltcl | 1.0 | PL/Tcl procedural language |
+| pltclu | 1.0 | PL/TclU untrusted procedural language |
+| postgres_fdw | 1.1 | Foreign-data wrapper for remote PostgreSQL servers |
+| refint | 1.0 | Functions for implementing referential integrity (obsolete) |
+| seg | 1.4 | Data type for representing line segments or floating-point intervals |
+| set_user | 4.1.0 | Similar to SET ROLE but with added logging |
+| sslinfo | 1.2 | Information about SSL certificates |
+| tablefunc | 1.0 | Functions that manipulate whole tables, including crosstab |
+| tcn | 1.0 | Triggered change notifications |
+| timescaledb | 2.18.2 | Enables scalable inserts and complex queries for time-series data (Apache 2 Edition) |
+| tsm_system_rows | 1.0 | TABLESAMPLE method which accepts number of rows as a limit |
+| tsm_system_time | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit |
+| unaccent | 1.1 | Text search dictionary that removes accents |
+| uuid-ossp | 1.1 | Generate universally unique identifiers (UUIDs) |
+| xml2 | 1.1 | XPath querying and XSLT |
+
+{{< hint type=info >}} The following extensions are additionally included in the PostGIS images. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|----------------------------|-----------------|-----------------------------------------------------------------------------------------------------|
+| address_standardizer | 3.4.4 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
+| address_standardizer_data_us | 3.4.4 | Address Standardizer US dataset example |
+| postgis | 3.4.4 | PostGIS geometry and geography spatial types and functions |
+| postgis_raster | 3.4.4 | PostGIS raster types and functions |
+| postgis_sfcgal | 3.4.4 | PostGIS SFCGAL functions |
+| postgis_tiger_geocoder | 3.4.4 | PostGIS tiger geocoder and reverse geocoder |
+| postgis_topology | 3.4.4 | PostGIS topology spatial types and functions |
diff --git a/docs/hugo/content/en/extensions/pg16.md b/docs/hugo/content/en/extensions/pg16.md
new file mode 100644
index 000000000..1fa318580
--- /dev/null
+++ b/docs/hugo/content/en/extensions/pg16.md
@@ -0,0 +1,89 @@
+---
+title: "PostgreSQL 16"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1816
+---
+
+{{< hint type=info >}} The extensions listed are included in the standard images. This list refers to PostgreSQL 16. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|---------------------|-----------------|----------------------------------------------------------------------------|
+| adminpack | 2.1 | Administrative functions for PostgreSQL |
+| amcheck | 1.3 | Functions for verifying relation integrity |
+| autoinc | 1.0 | Functions for autoincrementing fields |
+| bloom | 1.0 | Bloom access method - signature file based index |
+| btree_gin | 1.3 | Support for indexing common datatypes in GIN |
+| btree_gist | 1.7 | Support for indexing common datatypes in GiST |
+| citext | 1.6 | Data type for case-insensitive character strings |
+| credcheck | 3.0.0 | credcheck - PostgreSQL plain text credential checker |
+| cube | 1.5 | Data type for multidimensional cubes |
+| dblink | 1.2 | Connect to other PostgreSQL databases from within a database |
+| dict_int | 1.0 | Text search dictionary template for integers |
+| dict_xsyn | 1.0 | Text search dictionary template for extended synonym processing |
+| earthdistance | 1.2 | Calculate great-circle distances on the surface of the Earth |
+| file_fdw | 1.0 | Foreign-data wrapper for flat file access |
+| fuzzystrmatch | 1.2 | Determine similarities and distance between strings |
+| hstore | 1.8 | Data type for storing sets of (key, value) pairs |
+| hstore_plperl | 1.0 | Transform between hstore and plperl |
+| hstore_plperlu | 1.0 | Transform between hstore and plperlu |
+| hstore_plpython3u | 1.0 | Transform between hstore and plpython3u |
+| insert_username | 1.0 | Functions for tracking who changed a table |
+| intagg | 1.1 | Integer aggregator and enumerator (obsolete) |
+| intarray | 1.5 | Functions, operators, and index support for 1-D arrays of integers |
+| isn | 1.2 | Data types for international product numbering standards |
+| jsonb_plperl | 1.0 | Transform between jsonb and plperl |
+| jsonb_plperlu | 1.0 | Transform between jsonb and plperlu |
+| jsonb_plpython3u | 1.0 | Transform between jsonb and plpython3u |
+| lo | 1.1 | Large Object maintenance |
+| ltree | 1.2 | Data type for hierarchical tree-like structures |
+| ltree_plpython3u | 1.0 | Transform between ltree and plpython3u |
+| moddatetime | 1.0 | Functions for tracking last modification time |
+| old_snapshot | 1.0 | Utilities in support of old_snapshot_threshold |
+| pageinspect | 1.12 | Inspect the contents of database pages at a low level |
+| pg_buffercache | 1.4 | Examine the shared buffer cache |
+| pg_cron | 1.6 | Job scheduler for PostgreSQL |
+| pg_freespacemap | 1.2 | Examine the free space map (FSM) |
+| pg_permissions | 1.3 | View object permissions and compare them with the desired state |
+| pg_prewarm | 1.2 | Prewarm relation data |
+| pg_proctab | | Placeholder - see pg_proctab--0.0.10-compat.control |
+| pg_stat_statements| 1.10 | Track planning and execution statistics of all SQL statements executed |
+| pg_surgery | 1.0 | Extension to perform surgery on a damaged relation |
+| pg_trgm | 1.6 | Text similarity measurement and index searching based on trigrams |
+| pg_visibility | 1.2 | Examine the visibility map (VM) and page-level visibility info |
+| pg_walinspect | 1.1 | Functions to inspect contents of PostgreSQL Write-Ahead Log |
+| pgaudit | 16.1 | Provides auditing functionality |
+| pgauditlogtofile | 1.6 | pgAudit addon to redirect audit entries to an independent file |
+| pgcrypto | 1.3 | Cryptographic functions |
+| pgnodemx | 1.7 | SQL functions that allow capture of node OS metrics from PostgreSQL |
+| pgrowlocks | 1.2 | Show row-level locking information |
+| pgstattuple | 1.5 | Show tuple-level statistics |
+| plpgsql | 1.0 | PL/pgSQL procedural language |
+| plpython3u | 1.0 | PL/Python3U untrusted procedural language |
+| pltcl | 1.0 | PL/Tcl procedural language |
+| pltclu | 1.0 | PL/TclU untrusted procedural language |
+| postgres_fdw | 1.1 | Foreign-data wrapper for remote PostgreSQL servers |
+| refint | 1.0 | Functions for implementing referential integrity (obsolete) |
+| seg | 1.4 | Data type for representing line segments or floating-point intervals |
+| set_user | 4.1.0 | Similar to SET ROLE but with added logging |
+| sslinfo | 1.2 | Information about SSL certificates |
+| tablefunc | 1.0 | Functions that manipulate whole tables, including crosstab |
+| tcn | 1.0 | Triggered change notifications |
+| timescaledb | 2.18.2 | Enables scalable inserts and complex queries for time-series data (Apache 2 Edition) |
+| tsm_system_rows | 1.0 | TABLESAMPLE method which accepts number of rows as a limit |
+| tsm_system_time | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit |
+| unaccent | 1.1 | Text search dictionary that removes accents |
+| uuid-ossp | 1.1 | Generate universally unique identifiers (UUIDs) |
+| xml2 | 1.1 | XPath querying and XSLT |
+
+{{< hint type=info >}} The following extensions are additionally included in the PostGIS images. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|----------------------------|-----------------|-----------------------------------------------------------------------------------------------------|
+| address_standardizer | 3.4.4 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
+| address_standardizer_data_us | 3.4.4 | Address Standardizer US dataset example |
+| postgis | 3.4.4 | PostGIS geometry and geography spatial types and functions |
+| postgis_raster | 3.4.4 | PostGIS raster types and functions |
+| postgis_sfcgal | 3.4.4 | PostGIS SFCGAL functions |
+| postgis_tiger_geocoder | 3.4.4 | PostGIS tiger geocoder and reverse geocoder |
+| postgis_topology | 3.4.4 | PostGIS topology spatial types and functions |
diff --git a/docs/hugo/content/en/extensions/pg17.md b/docs/hugo/content/en/extensions/pg17.md
new file mode 100644
index 000000000..6de7df126
--- /dev/null
+++ b/docs/hugo/content/en/extensions/pg17.md
@@ -0,0 +1,87 @@
+---
+title: "PostgreSQL 17"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1817
+---
+
+{{< hint type=info >}} The extensions listed are included in the standard images. This list refers to PostgreSQL 17. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|---------------------|-----------------|----------------------------------------------------------------------------|
+| amcheck | 1.4 | Functions for verifying relation integrity |
+| autoinc | 1.0 | Functions for autoincrementing fields |
+| bloom | 1.0 | Bloom access method - signature file based index |
+| btree_gin | 1.3 | Support for indexing common datatypes in GIN |
+| btree_gist | 1.7 | Support for indexing common datatypes in GiST |
+| citext | 1.6 | Data type for case-insensitive character strings |
+| credcheck | 3.0.0 | credcheck - PostgreSQL plain text credential checker |
+| cube | 1.5 | Data type for multidimensional cubes |
+| dblink | 1.2 | Connect to other PostgreSQL databases from within a database |
+| dict_int | 1.0 | Text search dictionary template for integers |
+| dict_xsyn | 1.0 | Text search dictionary template for extended synonym processing |
+| earthdistance | 1.2 | Calculate great-circle distances on the surface of the Earth |
+| file_fdw | 1.0 | Foreign-data wrapper for flat file access |
+| fuzzystrmatch | 1.2 | Determine similarities and distance between strings |
+| hstore | 1.8 | Data type for storing sets of (key, value) pairs |
+| hstore_plperl | 1.0 | Transform between hstore and plperl |
+| hstore_plperlu | 1.0 | Transform between hstore and plperlu |
+| hstore_plpython3u | 1.0 | Transform between hstore and plpython3u |
+| insert_username | 1.0 | Functions for tracking who changed a table |
+| intagg | 1.1 | Integer aggregator and enumerator (obsolete) |
+| intarray | 1.5 | Functions, operators, and index support for 1-D arrays of integers |
+| isn | 1.2 | Data types for international product numbering standards |
+| jsonb_plperl | 1.0 | Transform between jsonb and plperl |
+| jsonb_plperlu | 1.0 | Transform between jsonb and plperlu |
+| jsonb_plpython3u | 1.0 | Transform between jsonb and plpython3u |
+| lo | 1.1 | Large Object maintenance |
+| ltree | 1.3 | Data type for hierarchical tree-like structures |
+| ltree_plpython3u | 1.0 | Transform between ltree and plpython3u |
+| moddatetime | 1.0 | Functions for tracking last modification time |
+| pageinspect | 1.12 | Inspect the contents of database pages at a low level |
+| pg_buffercache | 1.5 | Examine the shared buffer cache |
+| pg_cron | 1.6 | Job scheduler for PostgreSQL |
+| pg_freespacemap | 1.2 | Examine the free space map (FSM) |
+| pg_permissions | 1.3 | View object permissions and compare them with the desired state |
+| pg_prewarm | 1.2 | Prewarm relation data |
+| pg_proctab | | Placeholder - see pg_proctab--0.0.10-compat.control |
+| pg_stat_statements| 1.11 | Track planning and execution statistics of all SQL statements executed |
+| pg_surgery | 1.0 | Extension to perform surgery on a damaged relation |
+| pg_trgm | 1.6 | Text similarity measurement and index searching based on trigrams |
+| pg_visibility | 1.2 | Examine the visibility map (VM) and page-level visibility info |
+| pg_walinspect | 1.1 | Functions to inspect contents of PostgreSQL Write-Ahead Log |
+| pgaudit | 17.1 | Provides auditing functionality |
+| pgauditlogtofile | 1.6 | pgAudit addon to redirect audit entries to an independent file |
+| pgcrypto | 1.3 | Cryptographic functions |
+| pgnodemx | 1.7 | SQL functions that allow capture of node OS metrics from PostgreSQL |
+| pgrowlocks | 1.2 | Show row-level locking information |
+| pgstattuple | 1.5 | Show tuple-level statistics |
+| plpgsql | 1.0 | PL/pgSQL procedural language |
+| plpython3u | 1.0 | PL/Python3U untrusted procedural language |
+| pltcl | 1.0 | PL/Tcl procedural language |
+| pltclu | 1.0 | PL/TclU untrusted procedural language |
+| postgres_fdw | 1.1 | Foreign-data wrapper for remote PostgreSQL servers |
+| refint | 1.0 | Functions for implementing referential integrity (obsolete) |
+| seg | 1.4 | Data type for representing line segments or floating-point intervals |
+| set_user | 4.1.0 | Similar to SET ROLE but with added logging |
+| sslinfo | 1.2 | Information about SSL certificates |
+| tablefunc | 1.0 | Functions that manipulate whole tables, including crosstab |
+| tcn | 1.0 | Triggered change notifications |
+| timescaledb | 2.18.2 | Enables scalable inserts and complex queries for time-series data (Apache 2 Edition) |
+| tsm_system_rows | 1.0 | TABLESAMPLE method which accepts number of rows as a limit |
+| tsm_system_time | 1.0 | TABLESAMPLE method which accepts time in milliseconds as a limit |
+| unaccent | 1.1 | Text search dictionary that removes accents |
+| uuid-ossp | 1.1 | Generate universally unique identifiers (UUIDs) |
+| xml2 | 1.1 | XPath querying and XSLT |
+
+{{< hint type=info >}} The following extensions are additionally included in the PostGIS images. {{< /hint >}}
+
+| Name | Default Version | Comment |
+|----------------------------|-----------------|-----------------------------------------------------------------------------------------------------|
+| address_standardizer | 3.4.4 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
+| address_standardizer_data_us | 3.4.4 | Address Standardizer US dataset example |
+| postgis | 3.4.4 | PostGIS geometry and geography spatial types and functions |
+| postgis_raster | 3.4.4 | PostGIS raster types and functions |
+| postgis_sfcgal | 3.4.4 | PostGIS SFCGAL functions |
+| postgis_tiger_geocoder | 3.4.4 | PostGIS tiger geocoder and reverse geocoder |
+| postgis_topology | 3.4.4 | PostGIS topology spatial types and functions |
diff --git a/docs/hugo/content/en/first_cluster/_index.md b/docs/hugo/content/en/first_cluster/_index.md
new file mode 100644
index 000000000..fdadc11a1
--- /dev/null
+++ b/docs/hugo/content/en/first_cluster/_index.md
@@ -0,0 +1,120 @@
+---
+title: "Create a Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 600
+---
+
+As with other Kubernetes deployments, setting up a cluster is based on a declarative description. For this, the operator uses a document of type `postgresql`.
+
+You can also find the basic minimum specification for a single-node cluster in our [tutorial project on GitHub](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/blob/main/cluster-tutorials/single-cluster/postgres.yaml).
+
+## Minimal single-node cluster
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1"
+ numberOfInstances: 1
+ postgresql:
+ version: "17"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+Based on this manifest, the operator deploys a single-node cluster from the defined `dockerImage` and starts the included PostgreSQL 17 server.
+A volume is also created, based on your default storage class. The resource definition reserves half a CPU core and 500Mi of memory for this cluster, with the limits set to the same values.
+
+After a few seconds, you should see that the operator has created the cluster according to the declared definitions.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 50s
+
+```
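+
+Besides the pods, you can also inspect the `postgresql` resource itself; the operator records the cluster state on it (the exact columns shown depend on the CRD's printer columns):
+```
+kubectl get postgresqls.cpo.opensource.cybertec.at cluster-1
+```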
+
+{{< hint type=Info >}}[Here](documentation/crd/crd-postgresql/) you will find a complete overview of the available options within the cluster manifest.{{< /hint >}}
+
+### Use a specific storage class
+```
+spec:
+ ...
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+The `storageClass` definition allows you to choose a specific storage class for this cluster. Please ensure that the storage class exists and is usable. If the volume cannot be provisioned, the PersistentVolumeClaim remains in the Pending state, and so does the database pod.
+
+### Expanding the volume
+The operator allows you to expand your volume if the storage system supports this.
+```
+spec:
+ ...
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This triggers the expansion of your cluster volumes. It may take some time; you can check the current state in the PVC.
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+
+```
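+
+In the output above, the requested size in `spec` is already 10Gi while `status.capacity` still reports the old size; once the provisioner has finished, the capacity is updated as well. Kubernetes records resize progress as events on the claim, which you can follow with:
+```
+kubectl describe pvc pgdata-cluster-1-0
+```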
+
+### Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example creates an emptyDir volume and mounts it at `/opt/empty` in all containers of the database pod.
+
+
+### Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define additional parameters:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput values are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS imposes a cooldown time as a limitation: after a change, you need to wait six hours before making the next one.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that these settings are updated properly, change the operator configuration option `storage_resize_mode` from its default to `mixed`.
diff --git a/docs/hugo/content/en/ha_cluster/_index.md b/docs/hugo/content/en/ha_cluster/_index.md
new file mode 100644
index 000000000..ed2a1e6b0
--- /dev/null
+++ b/docs/hugo/content/en/ha_cluster/_index.md
@@ -0,0 +1,59 @@
+---
+title: "High Availability"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1100
+---
+
+High availability (HA) is a critical aspect of running database systems, especially in mission-critical applications where downtime is unacceptable. This section explains why high availability is important for PostgreSQL and how Patroni acts as a solution to ensure HA.
+
+#### Why high availability (HA) for PostgreSQL?
+1. To minimise downtime: In modern, data-driven applications, downtime can cause significant financial and reputational losses. High availability ensures that the database remains available even in the event of hardware failures or network problems.
+2. Data integrity and security: A database failure can lead to data loss or data inconsistencies. High-availability solutions protect against such scenarios through continuous data replication and automatic failover.
+3. Scalability and load balancing: HA setups make it possible to distribute the load across multiple nodes, resulting in better performance and faster response times. This is particularly important in environments with high data traffic.
+4. Ease of maintenance: By setting up high availability, database maintenance can be performed without interrupting services. Nodes can be maintained incrementally while the database remains available.
+
+#### Patroni - the cluster manager
+In our PostgreSQL environment, we use [Patroni](../../patroni) in the PG containers by default. This has the advantage that even single-node instances basically function as Patroni clusters. This configuration offers several important advantages:
+- Easy scalability: by using Patroni in all PG containers, scaling pods up and down is possible at any time. You can easily add additional pods as needed to improve performance or increase capacity, or remove pods to free up resources. This flexibility is particularly useful in dynamic environments where requirements can change quickly.
+- Automated cluster management: Patroni automatically takes over the management of the cluster. When a new pod is added to an existing cluster, Patroni takes care of setting up the new node itself, including initialising and starting replication. This means you don't have to perform any manual steps to configure or manage new nodes - Patroni does it all for you automatically.
+- Seamless integration: As Patroni is active in every PG container by default, you don't have to worry about compatibility or manual configuration. This makes deployment and maintenance much easier, as all the necessary components are already preconfigured.
+- Optimisation of resources: Even with a minimal setup (single-node instance), you benefit from the advantages of a Patroni cluster, including the possibility of easy expansion and automatic failover in the event of a failure. This ensures optimal resource utilisation and minimises downtime.
+
+#### Upgrade the cluster to high availability
+
+Upgrading a cluster to high availability requires very few changes:
+only the number of desired instances needs to be increased.
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+spec:
+ dockerImage: "docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1"
+ numberOfInstances: 2
+ postgresql:
+ version: "17"
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ volume:
+ size: 5Gi
+```
+
+You can either create a new cluster with the document or update an existing cluster with it.
+This makes it possible to scale the cluster up and down during operation.
+
+The example above creates an HA cluster consisting of two nodes.
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 3d
+cluster-1-1 | 1/1 | Running | 0 | 31s
+
+```
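+
+To see which pod currently holds the leader role, you can query Patroni directly; a quick check, assuming the `patronictl` client is available inside the database container (as it is in standard Patroni setups):
+```
+kubectl exec -it cluster-1-0 -- patronictl list
+```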
diff --git a/docs/hugo/content/en/images/architecture_cluster_backup_cloud_storage.png b/docs/hugo/content/en/images/architecture_cluster_backup_cloud_storage.png
new file mode 100644
index 000000000..082032dc9
Binary files /dev/null and b/docs/hugo/content/en/images/architecture_cluster_backup_cloud_storage.png differ
diff --git a/docs/hugo/content/en/images/architecture_cluster_backup_pvc.png b/docs/hugo/content/en/images/architecture_cluster_backup_pvc.png
new file mode 100644
index 000000000..1cdbf5bad
Binary files /dev/null and b/docs/hugo/content/en/images/architecture_cluster_backup_pvc.png differ
diff --git a/docs/hugo/content/en/images/architecture_overview.png b/docs/hugo/content/en/images/architecture_overview.png
new file mode 100644
index 000000000..d51e3c12d
Binary files /dev/null and b/docs/hugo/content/en/images/architecture_overview.png differ
diff --git a/docs/hugo/content/en/installation/_index.md b/docs/hugo/content/en/installation/_index.md
new file mode 100644
index 000000000..685fe460a
--- /dev/null
+++ b/docs/hugo/content/en/installation/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Installation"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 500
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/installation/configuration_operator.md b/docs/hugo/content/en/installation/configuration_operator.md
new file mode 100644
index 000000000..189aea392
--- /dev/null
+++ b/docs/hugo/content/en/installation/configuration_operator.md
@@ -0,0 +1,84 @@
+---
+title: "Operator-Configuration"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 503
+---
+
+Users who are used to working with PostgreSQL on bare metal or VMs are already familiar with the various files needed to configure PostgreSQL. These include:
+- postgresql.conf
+- pg_hba.conf
+- ...
+
+Although these files are available in the container, they are not intended to be modified directly. In line with the operator's declarative mode of operation, these files are defined via the operator. Modifying them inside the container would also contradict the immutability of the container.
+
+For these reasons, the operator provides a way to make adjustments to the various files, from PostgreSQL to Patroni.
+
+We differentiate between two main objects in the cluster manifest:
+- [`postgresql`](documentation/how-to-use/configuration/#postgresql) with the child objects `version` and `parameters`
+- [`patroni`](documentation/how-to-use/configuration/#patroni) with objects for `pg_hba`, `slots` and much more
+
+## postgresql
+
+The `postgresql` object consists of the following elements:
+- `version` - allows you to select the PostgreSQL major version to be used
+- `parameters` - allows changing settings in postgresql.conf
+
+```
+spec:
+ postgresql:
+ parameters:
+ shared_preload_libraries: 'pg_stat_statements,pgnodemx, timescaledb'
+ shared_buffers: '512MB'
+ version: '16'
+```
+
+Any known PostgreSQL parameter from postgresql.conf can be entered here and will be delivered by the operator to all nodes of the cluster accordingly.
+
+You can find more information about the parameters in the [PostgreSQL documentation](https://www.postgresql.org/docs/)
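+
+To verify that a parameter has been applied, you can query the running server; a quick check, assuming the default `postgres` superuser:
+```
+kubectl exec -it cluster-1-0 -- psql -U postgres -c 'SHOW shared_buffers;'
+```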
+
+## patroni
+
+The `patroni` object contains numerous options for customising the Patroni setup; the pg_hba.conf is also configured here. A complete list of all available elements can be found here.
+
+The most important elements include:
+- `pg_hba` - the contents of pg_hba.conf
+- `slots` - replication slots managed via the cluster manifest
+- `synchronous_mode` - enables synchronous mode in the cluster. The default is `false`.
+- `maximum_lag_on_failover` - specifies the maximum lag at which a pod is still considered healthy in the event of a failover.
+- `failsafe_mode` - allows the demotion of the leader to be avoided as long as all cluster members can be reached via the Patroni REST API.
+You can find more information on this in the [Patroni documentation](https://patroni.readthedocs.io/en/master/dcs_failsafe_mode.html).
+
+### pg_hba
+
+The pg_hba.conf contains all defined authentication rules for PostgreSQL.
+
+When customising this configuration, it is important that the complete pg_hba content is written to the manifest.
+The currently active configuration can be read from the database using `TABLE pg_hba_file_rules;`.
+
+Further information can be found in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
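+
+As an illustration, a `pg_hba` definition in the cluster manifest could look like the following sketch. The rules shown are examples only; since the list replaces the whole file, it must contain every rule you need:
+```
+spec:
+  patroni:
+    pg_hba:
+      - local all all           trust
+      - host  all all 0.0.0.0/0 md5
+```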
+
+
+### slots
+
+When using user-defined slots, for example for the use of CDC using Debezium, there are problems when interacting with Patroni, as the slot and its current status are not automatically synchronised to the replicas.
+
+In the event of a failover, the client cannot start replication as both the entire slot and the information about the data that has already been synchronised are missing.
+
+To resolve this problem, slots must be defined in the cluster manifest rather than in PostgreSQL.
+
+```
+spec:
+ patroni:
+ slots:
+ cdc-example:
+ database: app_db
+ plugin: pgoutput
+ type: logical
+```
+This example creates a logical replication slot with the name `cdc-example` within the `app_db` database and uses the `pgoutput` plugin for the slot.
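+
+On the leader, you can verify that the slot exists and is active:
+```
+SELECT slot_name, plugin, active FROM pg_replication_slots;
+```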
+
+
+{{< hint type=Info >}}Slots are only synchronised from the leader/standby leader to the replicas. This means that using the slots read-only on the replicas will cause a problem in the event of a failover.{{< /hint >}}
+
+
diff --git a/docs/hugo/content/en/installation/dev-k8s.md b/docs/hugo/content/en/installation/dev-k8s.md
new file mode 100644
index 000000000..54bd65266
--- /dev/null
+++ b/docs/hugo/content/en/installation/dev-k8s.md
@@ -0,0 +1,73 @@
+---
+title: "Setup local Kubernetes"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 501
+---
+
+There are various options for setting up a local Kubernetes environment. This chapter deals with the following two variants:
+- minikube
+- CRC (CodeReady Containers from Red Hat)
+
+### Minikube
+Minikube is a tool that makes it possible to run Kubernetes locally on a single computer. It sets up a minimal but functional Kubernetes environment suitable for development and testing purposes. Minikube supports most Kubernetes features and provides an easy way to launch and manage Kubernetes clusters on local machines without the need for a complex cloud infrastructure.
+
+#### Install Kubectl & Minikube
+To use Minikube, it is essential to install the Kubectl client.
+
+[Here](https://kubernetes.io/docs/tasks/tools/) you will find all the information you need to install kubectl on your Linux, Mac or Windows device.
+
+You can install Minikube on your Linux, Mac or Windows device using [this documentation](https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fbinary+download).
+
+#### Use Minikube
+
+Before starting minikube, it is advisable to define a path for the kubeconfig.
+```bash
+export KUBECONFIG=/home/USERNAME/kubeconfig_minikube.conf
+```
+You can then start Minikube, and all the necessary data is written directly to this config file. Defining a custom path ensures that other configs are not inadvertently overwritten.
+The path must be set again via the environment variable in each new user session; alternatively, it can be defined permanently via .bashrc.
+If the default path is not used for any other purpose, the environment variable does not need to be set.
+```bash
+# Start minikube
+minikube start
+
+# get pods from default namespace
+kubectl get pods
+
+# change default namespace to cpo
+kubectl config set-context --current --namespace=cpo
+```
+
+### CRC
+CRC (CodeReady Containers) is a tool from Red Hat that provides a local OpenShift environment. It is specifically designed to run a compact version of OpenShift on a local machine to provide developers and testers with an easy way to develop and test applications optimised for use in OpenShift. CRC includes all the necessary OpenShift components and makes it possible to use Red Hat's container platform locally without building a full cloud infrastructure.
+
+#### Install oc-client & CRC
+To use CRC, it is essential to install the oc-client or the kubectl-client.
+
+[Here](https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html) you will find all the information you need to install oc on your Linux, Mac or Windows device.
+
+You can download and install CRC on your Linux, Mac or Windows device using [this information](https://developers.redhat.com/products/openshift-local/overview).
+
+#### Use CRC
+
+Before installing crc, it is advisable to define a path for the kubeconfig.
+```bash
+export KUBECONFIG=/home/USERNAME/kubeconfig_crc.conf
+```
+You can then install and start CRC, and all the necessary data is written directly to this config file. Defining a custom path ensures that other configs are not inadvertently overwritten.
+The path must be set again via the environment variable in each new user session; alternatively, it can be defined permanently via .bashrc.
+If the default path is not used for any other purpose, the environment variable does not need to be set.
+```bash
+# Install crc
+crc setup
+
+# Start crc
+crc start
+
+# get pods from default namespace
+oc get pods
+
+# change default namespace to cpo
+oc project cpo
+```
\ No newline at end of file
diff --git a/docs/hugo/content/en/installation/install_operator.md b/docs/hugo/content/en/installation/install_operator.md
new file mode 100644
index 000000000..c36f5d2ce
--- /dev/null
+++ b/docs/hugo/content/en/installation/install_operator.md
@@ -0,0 +1,61 @@
+---
+title: "Install CPO"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 502
+---
+
+## Prerequisites
+
+For the installation, you either need our CPO tutorial repository or you install CPO directly from our registry.
+Exception: installation via OperatorHub (OpenShift only).
+
+### CPO-Tutorial-Repository
+
+To get started, you can fork our tutorial repository on GitHub and then clone it:
+[CYBERTEC-operator-tutorials](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/fork)
+
+```
+GITHUB_USER='[YOUR_USERNAME]'
+git clone https://github.com/$GITHUB_USER/CYBERTEC-operator-tutorials.git
+cd CYBERTEC-operator-tutorials
+```
+
+### Helm-Registry
+
+```
+helm repo add cpo https://cybertec-postgresql.github.io/CYBERTEC-operator-tutorials
+```
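+
+After adding the repository, the operator chart can be installed from it. A sketch, assuming the chart is published under the name `postgres-operator` in this repository:
+```
+helm repo update
+helm install -n cpo cpo cpo/postgres-operator
+```
+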
+### Create Namespace
+
+```
+# kubectl
+kubectl create namespace cpo
+
+# oc
+oc create namespace cpo
+```
+
+## Install CPO
+
+There are several ways to install CPO:
+- [Use Helm](#helm)
+- [Use apply](#apply)
+- [Use Operatorhub (On Openshift only)](#operatorhub)
+
+### Helm
+
+You can check and change the values.yaml of the Helm chart under the path helm/operator/values.yaml.
+By default, the operator is set up to be configured via a CRD-based configuration. If you wish, you can change this to a ConfigMap. There are also some other default settings.
+
+```
+helm install -n cpo cpo helm/operator/.
+```
+
+The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
+
+### Apply
+
+The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
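+
+A minimal sketch of an apply-based installation; the manifest path is hypothetical and must be adapted to the actual location in the tutorials repository:
+```
+kubectl apply -n cpo -f <path-to-operator-manifests>
+```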
+
+### Operatorhub
+
+The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
\ No newline at end of file
diff --git a/docs/hugo/content/en/monitoring/_index.md b/docs/hugo/content/en/monitoring/_index.md
new file mode 100644
index 000000000..48d1366df
--- /dev/null
+++ b/docs/hugo/content/en/monitoring/_index.md
@@ -0,0 +1,143 @@
+---
+title: "Monitoring"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2000
+---
+The CPO project provides several tools that allow you to set up a monitoring stack including alerting and a metric viewer.
+This stack is based on:
+- Prometheus
+- Alertmanager
+- Grafana
+- exporter-container
+
+CPO provides its own exporter for the PostgreSQL pod, which can be used as a sidecar.
+
+#### Setting up the Monitoring Stack
+To set up the monitoring stack, we suggest creating a dedicated namespace and using the prepared kustomization file from the operator tutorials.
+```
+$ kubectl create namespace cpo-monitoring
+namespace/cpo-monitoring created
+$ kubectl get pods -n cpo-monitoring
+No resources found in cpo-monitoring namespace.
+
+git clone https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials
+cd CYBERTEC-operator-tutorials/setup/monitoring
+
+# Hint: If you want to use a specific storage class, check the file pvcs.yaml and add your storage class in the commented section. Please ensure that you remove the comment character.
+
+$ kubectl apply -n cpo-monitoring -k .
+serviceaccount/cpo-monitoring created
+serviceaccount/cpo-monitoring-tools created
+clusterrole.rbac.authorization.k8s.io/cpo-monitoring unchanged
+clusterrolebinding.rbac.authorization.k8s.io/cpo-monitoring unchanged
+configmap/alertmanager-config created
+configmap/alertmanager-rules-config created
+configmap/cpo-prometheus-cm created
+configmap/grafana-dashboards created
+configmap/grafana-datasources created
+secret/grafana-secret created
+service/cpo-monitoring-alertmanager created
+service/cpo-monitoring-grafana created
+service/cpo-monitoring-prometheus created
+persistentvolumeclaim/alertmanager-pvc created
+persistentvolumeclaim/grafana-pvc created
+persistentvolumeclaim/prometheus-pvc created
+deployment.apps/cpo-monitoring-alertmanager created
+deployment.apps/cpo-monitoring-grafana created
+deployment.apps/cpo-monitoring-prometheus created
+
+Hint: If you're not running OpenShift, you will get an error like this:
+error: resource mapping not found for name: "grafana" namespace: "" from ".":
+no matches for kind "Route" in version "route.openshift.io/v1" ensure CRDs are installed first
+
+You can ignore this error: it refers to an object of type Route, which is OpenShift-specific.
+It is not needed and can be replaced by ingress rules or a LoadBalancer service.
+```
+
+After installing the monitoring stack, we can check the created pods inside the namespace:
+```
+$ kubectl get pods -n cpo-monitoring
+----------------------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cpo-monitoring-alertmanager-5bb8bc79f7-8pdv4 | 1/1 | Running | 0 | 3m35s
+cpo-monitoring-grafana-7c7c4f787b-jbj2f | 1/1 | Running | 0 | 3m35s
+cpo-monitoring-prometheus-67969b757f-k26jd | 1/1 | Running | 0 | 3m35s
+
+```
+The configuration of this monitoring stack is based on several ConfigMaps, which can be modified.
+
+#### Prometheus-Configuration
+
+
+#### Alertmanager-Configuration
+
+
+#### Grafana-Configuration
+
+
+#### Configure a PostgreSQL-Cluster to allow Prometheus to gather metrics
+
+To allow Prometheus to gather metrics from your cluster, you need to make some small modifications to the cluster manifest.
+To do this, create the `monitor` object:
+```
+kubectl edit postgresqls.cpo.opensource.cybertec.at cluster-1
+
+...
+spec:
+ ...
+ monitor:
+ image: docker.io/cybertecpostgresql/cybertec-pg-container:exporter-16.2-1
+```
+
+The operator automatically adds the monitoring sidecar to your pods, creates a new Postgres user and sets up the structures inside the postgres database that are needed for monitoring. In addition, every resource of your cluster gets a new label, `cpo_monitoring_stack=true`, which Prometheus uses to identify the clusters that should be added to the monitoring.
+Removing this label stops Prometheus from gathering data from this cluster.
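+
+For example, you can list all pods that are part of the monitoring via this label:
+```
+kubectl get pods -l cpo_monitoring_stack=true
+```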
+
+After changing the cluster manifest, the pods need to be recreated, which is done by a rolling update.
+Afterwards, you can see that each pod now consists of more than one container.
+
+```
+kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 2/2 | Running | 0 | 54s
+cluster-1-1 | 2/2 | Running | 0 | 31s
+
+```
+You can check the logs to see that the exporter is working, and with curl you can inspect the exporter's output:
+
+```
+kubectl logs cluster-1-0 -c postgres-exporter
+kubectl exec --stdin --tty cluster-1-0 -c postgres-exporter -- /bin/bash
+[exporter@cluster-1-0 /]# curl http://127.0.0.1:9187/metrics
+
+```
+You can now set up a LoadBalancer service or create an ingress rule to allow access to Grafana from outside. Alternatively, you can use a port forward.
+
+##### LoadBalancer or Nodeport
+
+##### Ingress-Rule
+
+##### Port-Forwarding
+```
+$ kubectl get pods -n cpo-monitoring
+----------------------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cpo-monitoring-alertmanager-5bb8bc79f7-8pdv4 | 1/1 | Running | 0 | 6m42s
+cpo-monitoring-grafana-7c7c4f787b-jbj2f | 1/1 | Running | 0 | 6m42s
+cpo-monitoring-prometheus-67969b757f-k26jd | 1/1 | Running | 0 | 6m42s
+
+$ kubectl port-forward cpo-monitoring-grafana-7c7c4f787b-jbj2f -n cpo-monitoring 9000:9000
+Forwarding from 127.0.0.1:9000 -> 9000
+Forwarding from [::1]:9000 -> 9000
+
+```
+Open [http://localhost:9000](http://localhost:9000) in your browser.
+
+##### Use a Route (Openshift only)
+
+```
+kubectl get route -n cpo-monitoring
+
+```
+Use the route address to access Grafana.
\ No newline at end of file
diff --git a/docs/hugo/content/en/multisite/_index.md b/docs/hugo/content/en/multisite/_index.md
new file mode 100644
index 000000000..6fc188b96
--- /dev/null
+++ b/docs/hugo/content/en/multisite/_index.md
@@ -0,0 +1,613 @@
+---
+title: "Multisite"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2200
+---
+Multisite is a function specially developed for Patroni that makes it possible to combine two separate Patroni clusters into a common cluster unit. ‘Separate’ in this context means that the clusters run independently of each other and can even be located on different Kubernetes clusters.
+With Multisite, both clusters benefit from the well-known Patroni features such as automatic failover and demotion of members, resulting in a significant extension compared to a conventional standby cluster.
+This feature significantly improves high availability and redundancy by managing multiple geographically or infrastructurally separated clusters as one logical unit. This allows one cluster to seamlessly transition to another in the event of a failure without having to rely on manual switchovers or third-party replication solutions.
+
+### Prerequisites
+
+In order to set up the multisite PostgreSQL operator you will need the following:
+
+- Two or more Kubernetes or OpenShift clusters (also possible with bare metal or VMs)
+ - Kubernetes version 1.25+, OpenShift version 4.12+.
+ - Support for defining LoadBalancer services with external IP addresses that are accessible from the other cluster(s).
+ - Persistent volumes must be available (only ReadWriteOnce capability is needed).
+- A separate VM or Kubernetes/OpenShift cluster to provide quorum (if using fewer than three Kubernetes or OpenShift clusters).
+ - For high availability there should not be a shared point of failure between the quorum and the two Kubernetes clusters.
+ - VM or a LoadBalancer IP must be accessible on ports 2379/2380 to the two other clusters.
+ - 2 vCPUs, 2 GB of memory and 20 GB of persistent storage are needed for the quorum site.
+- An etcd cluster set up across the three sites and accessible from each of them. etcd needs to support API version 3.
+- For backups, an object storage system with an S3-compatible API is needed. MinIO, Ceph and the major cloud providers' object storages are known to work.
+
+{{< hint type=important >}} An additional etcd is set up for Multisite, which spans the Kubernetes or Openshift clusters and must contain the quorum. {{< /hint >}}
+
+
+### Architecture
+Helm-based deployment of the multisite operator consists of two Helm charts, postgres-operator and postgres-cluster. The first is used to deploy the operator and associated objects to a single Kubernetes cluster. The operator is responsible for managing PostgreSQL clusters based on Custom Resource Definitions (CRDs) of type postgresqls/pg.
+
+
+
+The diagram contains in green the Helm charts that are used to deploy operator and clusters, in blue the objects
+deployed by the operator helm chart and in gold the objects deployed by the cluster chart.
+
+Operator helm chart deployed objects have the following purposes:
+
+* `deployments/postgres-operator` - Deployment for the operator itself.
+* `opconfig/postgres-operator` - Operator configuration parameters that are read on operator startup. These apply to
+ all clusters managed by this operator.
+* `crd/operatorconfigurations.cpo.opensource.cybertec.at` - Schema for the operator configuration.
+* `clusterrole/postgres-operator` - Defines the Kubernetes API resource access used by the operator. Assigned to
+ postgres-operator service account.
+* `clusterrole/postgres-pod` - The Kubernetes API access needed by database pods. Access is needed to access leader
+ status, config and other things. This is assigned to postgres-pod service account used by database pods.
+* `crd/postgresqls.cpo.opensource.cybertec.at` - Schema for PostgreSQL cluster definitions.
+* `clusterrole/postgres-operator:users:{admin,edit,view}` - If `rbac.createAggregateClusterRoles` is set then user
+ facing roles are added for accessing the postgresqls CRDs.
+
+The cluster chart creates an instance of the postgresqls CRD, which will be called the cluster manifest from here on. When this
+cluster manifest is created, the operator will create the needed resources for the cluster. These include:
+
+* `statefulset/$clustername` - StatefulSet is responsible for creating and managing database pods and their associated
+ PersistentVolumeClaims for storing the databases. Each database pod will run internally an instance of Patroni
+ process, which will coordinate over the Kubernetes API initialization of the database, startup, leader election
+ and other control plane actions.
+* `service/$clustername`,`endpoints/$clustername` - The main access point for users accessing the database. When load
+ balancer is enabled in the CRD or multisite mode is enabled, this service will be set to be a LoadBalancer service and
+ accessible from outside the Kubernetes cluster. The service is created without a selector. Instead, for leader
+ elections database pods will update the IP address of this endpoint to point to the current leader.
+
+ The endpoint also holds annotations that determine the duration of the leader lease.
+
+ In multicluster operation mode the standby site leader will be in read-only mode.
+* `service/$clustername-repl` - Service that points to non-leader (read-only) instances of the database cluster.
+* `service/$clustername-config` - A headless service with an endpoint that holds Patroni configuration in annotations.
+* `poddisruptionbudget/postgres-$clustername-pdb` - A pod disruption budget that does not allow Kubernetes to shut
+ down pods in leader role. On some Kubernetes clusters `kubernetes.enable_pod_disruption_budgets` may need to be
+ turned off to allow nodes to be drained for upgrades.
+
+### Multisite mode
+
+In multisite operation mode there are multiple independent Kubernetes clusters with operators capable of independent
+operation. To coordinate which site has the current leader process the database pods use a shared etcd cluster to
+store a leader lease.
+
+
+
+
+During bootstrap, the first site to acquire the leader lease gets to initialize the database contents. Secondary
+sites are configured to replicate from the primary site using Patroni's standby_cluster mechanism.
+
+To enable communication between Kubernetes clusters, a LoadBalancer service is needed. For this, the operator
+automatically turns the cluster's primary service into a service of kind LoadBalancer. The operator waits for an
+external IP address to be assigned to this service and passes this information to the database pods. The leader of
+each site, whether primary or standby, periodically advertises the externally visible IP address of its site in
+etcd. Based on this, the standby site can configure the standby cluster mechanism to replicate from the primary
+site.
+
+## Deployment
+
+In multisite mode, postgres-operator can manage a replicated PostgreSQL cluster that is deployed across multiple
+Kubernetes clusters. Multisite operation can be turned on on a cluster-by-cluster basis, or can be configured to
+default to on for all clusters managed by a single operator.
+
+Setting up a multisite deployment consists of the following steps:
+
+1. Creating a shared etcd cluster.
+2. Configuring multisite operation parameters for the postgres-operator.
+3. Creating a multisite enabled cluster.
+
+## Etcd deployment
+
+Multisite operation mode requires an etcd cluster to achieve consensus on which site gets to accept write
+transactions. This functionality is critical to avoid situations where multiple sites accept incompatible writes that
+cannot be reconciled, also known as a split-brain scenario.
+
+A highly available etcd cluster consists of an odd number of nodes, at least 3. It is very important that a quorum
+of etcd instances (for 3-node clusters, any two instances) does not share a single point of failure. Otherwise the
+write availability of database clusters is limited by this single point of failure. Effectively this means that to
+protect a 3-node etcd cluster from whole-site failure, each site can only contain 1 etcd node and there need to be
+at least 3 sites.
+
+Postgres-operator is agnostic to the exact method of etcd setup, but for ease of use there is a
+[Helm chart packaged](https://github.com/cybertec-postgresql/ansible-hpe/tree/main/etcd-helm) that demonstrates the
+setup.
+
+### Example etcd setup
+
+This example uses one etcd instance deployed outside the Kubernetes clusters as quorum. This etcd needs to be started
+with the following configuration. Note that the advertised IP address must be routed to the host that runs this
+etcd.
+
+```
+ETCD_NAME=quorum
+ETCD_INITIAL_CLUSTER=quorum=http://10.100.1.100:2380
+ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.100.1.100:2380
+ETCD_INITIAL_CLUSTER_TOKEN=hpe_etcd
+ETCD_ADVERTISE_CLIENT_URLS=http://10.100.1.100:2379
+ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
+ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
+```
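+
+Once this instance is running, you can verify that it is reachable; a quick check, assuming an etcdctl v3 client on the host:
+```
+etcdctl --endpoints=http://10.100.1.100:2379 member list
+```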
+
+Kubernetes clusters can then be joined to this node. This needs to be a two-step process, as typically the externally
+visible IP address or port is not known before creating the LoadBalancer service. For this, first create a free-standing
+LoadBalancer service that will be overwritten by the Helm chart.
+
+```
+helm template global-etcd ./etcd-helm/ -f etcd-helm/site_a.yaml \
+| awk '/service.yaml/{flag=1;next}/---/{flag=0}flag' \
+| kubectl apply -f -
+```
+
+Then check which external IP address was assigned to the LoadBalancer service.
+
+```
+$ kubectl get svc -l app.kubernetes.io/instance=global-etcd
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+global-etcd-etcd-helm LoadBalancer 10.109.171.146 10.200.1.101 2379:32488/TCP,2380:30336/TCP 17h
+```
+
+Then set the following in the values for the Helm chart:
+
+1. Unique name of the site.
+2. Externally visible IP address of this service
+3. IP address of existing etcd service.
+4. Peer list that includes both existing and to be added etcd instance.
+
+Example:
+
+```
+site:
+ name: site_a
+ host: 10.100.2.101
+
+etcd:
+ existing_etcd_cluster_hostname: 10.100.1.100
+ token: hpe_etcd
+ state: existing
+ peers:
+ #Peers should only include working peers and the current one
+ - quorum=http://10.100.1.100:2380
+ - site_a_etcd0=http://10.100.2.101:2380
+ client_port: 2379
+ peer_port: 2380
+```
+
+Then install the helm chart:
+
+```
+helm install global-etcd ./etcd-helm/ -f etcd-helm/site_a.yaml
+```
+
+This then needs to be repeated for the other site.
+
+## Configuring operator for multisite operations
+
+Multisite operation needs at a minimum the configuration options `multisite.etcd_host`, `multisite.site`
+and `multisite.enabled`. All of them can be configured either in operator configuration or per cluster.
+
+`multisite.etcd_host` needs to point at the global etcd. The port is currently assumed to be 2379. Normally
+all clusters under one operator would use the same etcd cluster, so it makes sense to configure this
+in the operator configuration. At runtime, database pods will discover the whole etcd cluster member list
+and will also take notice of any membership changes. It is enough to use the local etcd instance's service
+name here.
+
+`multisite.site` is a unique identifier for this site. It will be prefixed to globally advertised database pod names
+to distinguish them from pods in other sites. This also makes sense in the operator configuration.
+
+`multisite.enabled` turns on the multisite behavior. Typically it would make sense to control this at the
+cluster level, but the default could be turned on globally.
+
+These parameters are exposed in Helm chart values file as `configMultisite.*`.
+
+Example config:
+
+```
+$ kubectl get opconfig/postgres-operator -o yaml | grep multisite -B1 -A3
+ min_instances: -1
+ multisite:
+ etcd_host: global-etcd-etcd-helm.default.svc.cluster.local
+ site: s1
+ postgres_pod_resources:
+```
+
+This needs to be repeated with a different site name in the second Kubernetes cluster.
+
+## Creating a multisite enabled postgres cluster
+
+If the operator is configured for multisite operation, then creating a multisite cluster only requires
+enabling multisite mode.
+
+Here is an example manifest for creating multisite-enabled clusters:
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: multisite-cluster
+ namespace: cpo
+ labels:
+ app.kubernetes.io/name: postgres-cluster
+ app.kubernetes.io/instance: multisite-cluster
+spec:
+ dockerImage: docker.io/cybertecpostgresql/cybertec-pg-container:postgres-multisite-17.4-1
+ numberOfInstances: 1
+ postgresql:
+ version: '17'
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ teamId: acid
+ volume:
+ size: 5Gi
+ patroni:
+ ttl: 30
+ loop_wait: 10
+ retry_timeout: 10
+ multisite:
+ enable: true
+```
+
+There is no coordination needed between creating the two or more sites, and they can use identical
+configuration. The clusters need to be in the same namespace and have the same name to be considered
+the same cluster. The first cluster to boot up will acquire multisite leader status and will bootstrap
+the database. The other clusters will automatically fetch a copy from the leader cluster and start
+replicating.
+
+Multisite operation requires that the database clusters are able to communicate with each other.
+To achieve this, a load balancer service is created in each cluster for the cluster leader. The operator
+then waits for an external IP to be assigned and injects it into the database pods, which use it to
+advertise their identity.
+
+## Observing operations
+
+If database pods have not been created, the first place to check for information is the operator logs. They can
+be checked with the following command (add `--follow` if you want to observe in real time):
+
+```shell
+kubectl logs $(kubectl get po -l 'app.kubernetes.io/name=postgres-operator' -o name)
+```
+
+The logs for a successful cluster creation look like this:
+
+```
+time="2023-02-22T15:24:12Z" level=info msg="ADD event has been queued" cluster-name=cpo/multisite-cluster pkg=controller worker=1
+time="2023-02-22T15:24:12Z" level=info msg="creating a new Postgres cluster" cluster-name=cpo/multisite-cluster pkg=controller worker=1
+time="2023-02-22T15:24:12Z" level=warning msg="master is not running, generated master endpoint does not contain any addresses" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=info msg="endpoint \"cpo/multisite-cluster\" has been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=debug msg="final load balancer source ranges as seen in a service spec (not necessarily applied): [\"0.0.0.0/0\"]" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=info msg="master service \"cpo/multisite-cluster\" has been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=info msg="replica service \"cpo/multisite-cluster-repl\" has been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=debug msg="team API is disabled" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=debug msg="team API is disabled" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=info msg="users have been initialized" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=info msg="syncing secrets" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:12Z" level=debug msg="created new secret cpo/postgres.multisite-cluster.credentials.postgresql.cpo.opensource.cybertec.at, namespace: default, uid: 75ded2eb-a2c9-4968-a1d7-50d2996baeb3" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=debug msg="created new secret cpo/standby.multisite-cluster.credentials.postgresql.cpo.opensource.cybertec.at, namespace: default, uid: 45a2560a-65a8-4bd5-954f-34d80d8a1894" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=info msg="secrets have been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=info msg="pod disruption budget \"cpo/postgres-multisite-cluster-pdb\" has been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=info msg="waiting for load balancer IP to be assigned" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=debug msg="created new statefulset \"cpo/multisite-cluster\", uid: \"b83647ea-17f6-40aa-aa0c-b1111e76cdc0\"" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=info msg="statefulset \"cpo/multisite-cluster\" has been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:13Z" level=info msg="waiting for the cluster being ready" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:16Z" level=debug msg="Waiting for 1 pods to become ready" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="pods are ready" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="Create roles" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=debug msg="closing database connection" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="users have been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=debug msg="closing database connection" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="databases have been successfully created" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found pod disruption budget: \"cpo/postgres-multisite-cluster-pdb\" (uid: \"986a0118-83e7-4736-9843-ec80c0ea9270\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found statefulset: \"cpo/multisite-cluster\" (uid: \"b83647ea-17f6-40aa-aa0c-b1111e76cdc0\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found secret: \"cpo/postgres.multisite-cluster.credentials.postgresql.cpo.opensource.cybertec.at\" (uid: \"75ded2eb-a2c9-4968-a1d7-50d2996baeb3\") namesapce: default" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found secret: \"cpo/standby.multisite-cluster.credentials.postgresql.cpo.opensource.cybertec.at\" (uid: \"45a2560a-65a8-4bd5-954f-34d80d8a1894\") namesapce: default" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found master endpoint: \"cpo/multisite-cluster\" (uid: \"d9f7870e-dd51-4a88-a36a-1c2eb258a31c\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found master service: \"cpo/multisite-cluster\" (uid: \"4b30df50-ca53-4def-8171-b792c4eefc17\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found replica service: \"cpo/multisite-cluster-repl\" (uid: \"a77c3a49-3eea-4b6b-92b1-032e13d78f02\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found pod: \"cpo/multisite-cluster-0\" (uid: \"9b31d378-c9eb-4c1a-8637-e78933187ed7\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="found PVC: \"cpo/pgdata-multisite-cluster-0\" (uid: \"03e66572-27ed-42b4-87bd-825d32131d36\")" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=debug msg="syncing connection pooler (master, replica) from (false, nil) to (false, false)" cluster-name=cpo/multisite-cluster pkg=cluster worker=1
+time="2023-02-22T15:24:28Z" level=info msg="cluster has been created" cluster-name=cpo/multisite-cluster pkg=controller worker=1
+```
+
+Once the database pods have been created, Patroni logs can be checked via the pod logs:
+
+```
+kubectl logs multisite-cluster-0
+```
+
+A successful start of the first database pod will include, among other output, the following lines:
+
+```
+. . .
+# Kubernetes API access
+2023-02-22 15:24:21,061 INFO: Selected new K8s API server endpoint https://192.168.49.2:8443
+. . .
+# Set ourselves as multisite leader
+2023-02-22 15:24:21,218 INFO: Selected new etcd server http://192.168.50.101:2379
+2023-02-22 15:24:21,348 INFO: Running multisite consensus.
+2023-02-22 15:24:21,349 INFO: Touching member s1-multisite-cluster with {'host': '192.168.49.103', 'port': 5432}
+2023-02-22 15:24:21,447 INFO: Became multisite leader
+. . .
+# Initializing a new empty database
+2023-02-22 15:24:21,563 INFO: trying to bootstrap a new cluster
+. . .
+Success. You can now start the database server using:
+. . .
+# Database started
+2023-02-22 15:24:23,934 INFO: postmaster pid=73
+. . .
+# Running global database initialization script
+2023-02-22 15:24:25,061 INFO: running post_bootstrap
+. . .
+# Bootstrap done
+2023-02-22 15:24:26,310 INFO: initialized a new cluster
+. . .
+# Repeated information about health every 10s
+2023-02-22 15:24:26,313 INFO: Lock owner: multisite-cluster-0; I am multisite-cluster-0
+2023-02-22 15:24:26,361 INFO: Triggering multisite hearbeat
+2023-02-22 15:24:26,364 INFO: Running multisite consensus.
+2023-02-22 15:24:26,364 INFO: Multisite has leader and it is us
+2023-02-22 15:24:26,409 INFO: Updated multisite leader lease
+2023-02-22 15:24:26,409 INFO: Touching member s1-multisite-cluster with {'host': '192.168.49.103', 'port': 5432}
+2023-02-22 15:24:26,422 INFO: no action. I am (multisite-cluster-0), the leader with the lock
+. . .
+```
+
+Bootstrap of a standby on the primary site will include these lines:
+
+```
+. . .
+# Determine leader
+2023-02-22 15:47:04,552 INFO: Lock owner: multisite-cluster-0; I am multisite-cluster-1
+2023-02-22 15:47:04,677 INFO: trying to bootstrap from leader 'multisite-cluster-0'
+. . .
+# Data copied to replica successfully
+2023-02-22 15:47:06,805 INFO: replica has been created using basebackup_fast_xlog
+2023-02-22 15:47:06,807 INFO: bootstrapped from leader 'multisite-cluster-0'
+# Postgres up
+2023-02-22 15:47:07,205 INFO: postmaster pid=73
+. . .
+# Normal operation
+2023-02-22 15:47:08,380 INFO: no action. I am (multisite-cluster-1), a secondary, and following a leader (multisite-cluster-0)
+```
+
+The standby cluster will log the following:
+
+```
+. . .
+# Discovering multisite status
+2023-02-22 15:49:58,406 INFO: Running multisite consensus.
+2023-02-22 15:49:58,407 INFO: Touching member s2-multisite-cluster with {'host': '192.168.50.103', 'port': 5432}
+2023-02-22 15:49:58,454 INFO: Multisite has leader and it is s1-multisite-cluster
+2023-02-22 15:49:58,454 INFO: Multisite replicate from Member(index='118', name='s1-multisite-cluster', session='4113060022582527194', data={'host': '192.168.49.103', 'port': 5432})
+2023-02-22 15:49:58,454 INFO: Setting standby configuration to: {'host': '192.168.49.103', 'port': 5432}
+2023-02-22 15:49:58,455 INFO: Touching member s2-multisite-cluster with {'host': '192.168.50.103', 'port': 5432}
+. . .
+# Acquiring standby site leader status and starting copy from primary site
+2023-02-22 15:49:58,290 INFO: Lock owner: None; I am multisite-cluster-0
+2023-02-22 15:49:58,566 INFO: trying to bootstrap a new standby leader
+. . .
+# Replica creation successful
+2023-02-22 15:50:00,326 INFO: replica has been created using basebackup
+2023-02-22 15:50:00,327 INFO: bootstrapped clone from remote master postgresql://192.168.49.103:5432
+# Postgres started
+2023-02-22 15:50:00,577 INFO: postmaster pid=58
+. . .
+# Normal operation output of standby leader
+2023-02-22 15:50:01,835 INFO: Lock owner: multisite-cluster-0; I am multisite-cluster-0
+2023-02-22 15:50:01,886 INFO: Triggering multisite hearbeat
+2023-02-22 15:50:01,888 INFO: Running multisite consensus.
+2023-02-22 15:50:01,888 INFO: Multisite has leader and it is s1-multisite-cluster
+2023-02-22 15:50:01,888 INFO: Multisite replicate from Member(index='118', name='s1-multisite-cluster', session='4113060022582527194', data={'host': '192.168.49.103', 'port': 5432})
+2023-02-22 15:50:01,888 INFO: Touching member s2-multisite-cluster with {'host': '192.168.50.103', 'port': 5432}
+2023-02-22 15:50:01,899 INFO: no action. I am (multisite-cluster-0), the standby leader with the lock
+```
+
+In case access to PostgreSQL logs is needed, the easiest way is to exec into a running database pod
+with `kubectl exec -it multisite-cluster-0 -- bash` and view the files there. Logs are stored
+as `/home/postgres/pgdata/pgroot/pg_log/postgresql-*.csv`, with one file per weekday.
+
+Replication state can be queried from PostgreSQL:
+
+```
+kubectl exec -it $(kubectl get -o name po -l 'spilo-role=master,cluster-name=multisite-cluster') -- su postgres -c \
+ 'psql -xc "SELECT application_name, client_addr, backend_start, write_lag FROM pg_stat_replication"'
+```
+
+To check how multisite mode is doing, one option is to inspect the etcd state, for example by executing
+the following in any one of your database pods:
+
+```
+kubectl exec multisite-cluster-0 -- bash -c \
+'ETCDCTL_API=3 etcdctl --endpoints=http://${MULTISITE_ETCD_HOST}:2379 \
+ get /multisite/${POD_NAMESPACE}/${SCOPE}/{leader,members0}'
+```
+
+This will output state stored in etcd. Example:
+
+```
+/multisite/cpo/multisite-cluster/leader
+s1-multisite-cluster
+/multisite/cpo/multisite-cluster/members/s1-multisite-cluster
+{"host":"192.168.49.102","port":5432}
+/multisite/cpo/multisite-cluster/members/s2-multisite-cluster
+{"host":"192.168.50.102","port":5432}
+```
+
+Each cluster's state is stored under the prefix `/multisite/$NAMESPACE/$CLUSTER_NAME`. Within this prefix
+there is a `/leader` key storing the current leader of the cluster, and a `/members/$SITE_$CLUSTER_NAME` key
+for each site's externally visible service.
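+
+To follow leadership changes live, you can also watch the leader key; a sketch reusing the environment
+variables from the command above:
+
+```
+kubectl exec multisite-cluster-0 -- bash -c \
+'ETCDCTL_API=3 etcdctl --endpoints=http://${MULTISITE_ETCD_HOST}:2379 \
+ watch /multisite/${POD_NAMESPACE}/${SCOPE}/leader'
+```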
+
+### Triggering switchover manually
+
+Sometimes it is necessary to move the leader role from one site to another. For this the operator REST API has an endpoint
+named `/clusters/$namespace/$cluster/multisite/`. It accepts a POST request with a JSON body that has the
+following attributes:
+
+* **switchover_to**: name of the site that should become the new multisite leader.
+
+Example:
+
+```shell
+curl --data-raw '{"switchover_to": "s1"}' -H "Content-type: application/json" \
+ http://postgres-operator.default.svc.cluster.local:8080/clusters/cpo/multisite-cluster/multisite/
+```
+
+The POST request to this endpoint returns as soon as the switchover request has been registered. The
+actual switchover process will take some time to coordinate.
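+
+To verify that the switchover has completed, you can poll the `Multisite` status field described in the next
+section, for example (the exact JSON field casing in the status subresource is an assumption):
+
+```shell
+kubectl get postgresqls.cpo.opensource.cybertec.at multisite-cluster -n cpo \
+  -o jsonpath='{.status.Multisite}'
+```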
+
+### Observing multisite status
+
+The current multisite status is published to the cluster CRD's status subresource in the `Multisite` field. The possible
+values are `Leader` and `Standby`. When the role changes, an event is also published.
+
+Example output from a kubectl describe on the cluster CRD resource:
+
+```
+Status:
+ Multisite: Leader
+ Postgres Cluster Status: Running
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Promote 13s patroni Acquired multisite leader status
+```
+
+Output from the standby side:
+
+```
+Status:
+  Multisite: Standby
+  Postgres Cluster Status: Running
+Events:
+  Type    Reason     Age  From               Message
+  ----    ------     ---- ----               -------
+  Normal  Demote     62s  patroni            Lost leader lock to s1-multisite-cluster
+  Normal  Multisite  97s  postgres-operator  Multisite switching over to "multisite-cluster" at site "s1"
+  Normal  Multisite  97s  postgres-operator  Successfully started switchover to "multisite-cluster" at "s1"
+```
+
+
+## Development environment tips
+
+### MetalLB based cross cluster communication with minikube
+
+Minikube is a useful distribution for deploying development Kubernetes clusters. With a bit of configuration it is
+possible to set up two Minikube clusters with MetalLB deployed so that MetalLB-assigned IP addresses are accessible
+from the other cluster.
+
+The prerequisite is two virtual machines that are either in the same L2 network or have a subnet routed to
+them.
+
+This example is based on a Docker-based deployment; the same approach might work with other deployment options (e.g.
+VirtualBox), but may require some extra configuration tuning.
+
+Start minikube on the two hosts using different internal subnets, and configure and enable the MetalLB addon to
+assign IP addresses from these subnets. The chosen subnets should not be in use by services needed by these two VMs;
+other hosts are not affected by the choice of subnets.
+
+```
+# Host A
+minikube start --subnet=192.168.49.2
+minikube addons configure metallb
+-- Enter Load Balancer Start IP: 192.168.49.100
+-- Enter Load Balancer End IP: 192.168.49.200
+ ▪ Using image docker.io/metallb/speaker:v0.9.6
+ ▪ Using image docker.io/metallb/controller:v0.9.6
+✅ metallb was successfully configured
+minikube addons enable metallb
+
+# Host B
+minikube start --subnet=192.168.50.2
+minikube addons configure metallb
+-- Enter Load Balancer Start IP: 192.168.50.100
+-- Enter Load Balancer End IP: 192.168.50.200
+ ▪ Using image docker.io/metallb/speaker:v0.9.6
+ ▪ Using image docker.io/metallb/controller:v0.9.6
+✅ metallb was successfully configured
+minikube addons enable metallb
+```
+
+On both hosts, turn on IP forwarding in `sysctl.conf` and reload it with `sysctl -p`:
+
+```
+net.ipv4.ip_forward=1
+```
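+
+For example:
+
+```
+echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
+sudo sysctl -p
+```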
+
+In iptables, allow forwarding:
+
+```
+sudo iptables -A FORWARD -j ACCEPT
+```
+
+On each host, configure routing to reach the other cluster's MetalLB IP range via the other VM's IP address (replace
+the IP addresses and network interfaces with the actual ones from your VMs):
+
+```
+# Host A
+sudo ip route add 192.168.50.0/24 via 192.168.2.12 dev eth1
+# Host B
+sudo ip route add 192.168.49.0/24 via 192.168.2.11 dev eth1
+```
+
+To check whether the load balancer works, here's an example HTTP service:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: hello-blue-whale
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: hello-blue-whale-app
+ template:
+ metadata:
+ name: hello-blue-whale-pod
+ labels:
+ app: hello-blue-whale-app
+ spec:
+ containers:
+ - name: hello-blue-whale-container
+ image: vamsijakkula/hello-blue-whale:v1
+ ports:
+ - containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: hello-blue-whale-svc
+ labels:
+ app: hello-blue-whale-app
+spec:
+ selector:
+ app: hello-blue-whale-app
+ type: LoadBalancer
+ ports:
+ - port: 80
+ targetPort: 80
+```
+
+Then check which external IP was assigned to the service (it should be the first IP from the range given above).
+
+```
+kubectl get svc/hello-blue-whale-svc
+```
+
+Then, from the other host, use curl to check whether the service can be accessed.
+
+```
+curl -v http://192.168.49.100/
+```
+
+Other hosts on the same network can have the same routes added to access services in the clusters. If access from
+other networks is needed, the chosen subnets need to be routed to these VMs across your network.
diff --git a/docs/hugo/content/en/pg_versioning/_index.md b/docs/hugo/content/en/pg_versioning/_index.md
new file mode 100644
index 000000000..5abc8a950
--- /dev/null
+++ b/docs/hugo/content/en/pg_versioning/_index.md
@@ -0,0 +1,6 @@
+---
+title: "PG versioning"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2100
+---
diff --git a/docs/hugo/content/en/pg_versioning/major_upgrades.md b/docs/hugo/content/en/pg_versioning/major_upgrades.md
new file mode 100644
index 000000000..8d77b3712
--- /dev/null
+++ b/docs/hugo/content/en/pg_versioning/major_upgrades.md
@@ -0,0 +1,87 @@
+---
+title: "Major version upgrade"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2120
+---
+
+CPO supports in-place upgrades, which make it possible to upgrade a cluster to a new PostgreSQL major version. For this purpose, `pg_upgrade` is used in the background.
+
+{{< hint type=info >}}Note that an in-place upgrade causes both a pod restart in the form of a rolling update and an operational interruption of the cluster during the actual execution of the upgrade.{{< /hint >}}
+
+
+## How does the upgrade work?
+
+### Preconditions:
+1. Pod restart - the rolling update strategy replaces all pods so that the new ENV `PGVERSION` matches the version you want to upgrade to.
+2. Check - the new `PGVERSION` must be higher than the previously used one.
+3. The maintenance mode of the cluster must be deactivated. In addition, the replicas should not have a high lag.
+
+### Preliminary checks
+
+1. Use initdb to prepare a new data directory (`data_new`) based on the new `PGVERSION`.
+2. Check the upgrade feasibility with `pg_upgrade --check`.
+
+{{< hint type=info >}}If one of the steps aborts, a cleanup is performed.{{< /hint >}}
+
+### Prepare the Upgrade
+1. Remove dependencies that can cause problems, for example the extensions `pg_stat_statements` and `pgaudit`.
+2. Activate the maintenance mode of the cluster.
+3. Terminate PostgreSQL in an orderly manner.
+4. Check pg_controldata for the checkpoint position and wait until all replicas have applied the latest checkpoint location.
+5. Use port `5432` for rsyncd and start it.
+
+### Start the Upgrade
+
+1. Call `pg_upgrade -k` to start the upgrade.
+{{< hint type=info >}}If the process fails, we need to roll back; if it was successful, we have reached the point of no return.{{< /hint >}}
+2. Rename the directories: `data -> data_old` and `data_new -> data`.
+3. Update the Patroni config (`postgres.yml`).
+4. Issue a checkpoint on every replica and trigger rsync on the replicas.
+5. Wait for the replicas to complete the rsync. `Timeout: 300`
+6. Stop rsyncd on the primary and remove the initialize key from the DCS, because it is based on the old system identifier.
+7. Start Patroni on the primary and start Postgres locally.
+8. Reset custom statistics, warm up the memory and start ANALYZE in stages in separate threads.
+9. Wait for every replica to become ready.
+10. Disable the maintenance mode for the cluster.
+11. Restore custom statistics, analyze these tables and restore the objects dropped in `Prepare the Upgrade`.
+
+### Completion of the upgrade
+1. Drop the directory `data_old`.
+2. Trigger a new backup.
+
+### How does a rollback work?
+1. Stop rsyncd if it is running.
+2. Disable the maintenance mode for the cluster.
+3. Drop the directory `data_new`.
+
+
+## How to trigger an In-Place-Upgrade with CPO?
+
+```
+spec:
+ postgresql:
+ version: "17"
+```
+To trigger an In-Place-Upgrade, you just have to increase the parameter `spec.postgresql.version`. If you choose a valid version number, the operator will start the procedure described above.
+
+```sh
+kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
+'{"spec":{"postgresql":{"version":"17"}}}'
+```
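+
+Once the rolling update and the upgrade have finished, you can verify the new server version from within the
+primary pod, for example (assuming the pod is named `cluster-1-0`):
+
+```sh
+kubectl exec -it cluster-1-0 -- su postgres -c 'psql -c "SHOW server_version;"'
+```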
+
+## Upgrade on cloning
+
+When cloning, the new cluster manifest must have a higher version number than the source cluster; the clone is created from a base backup. Depending on the cluster size, the downtime can be considerable in this case, as write operations in the database should be stopped and all WAL files archived before cloning is started. Therefore, only use cloning to test major version upgrades and to check the compatibility of your application with the higher Postgres server version.
+
+## Manual upgrade via the PostgreSQL container
+
+In this scenario, the major version upgrade can be run manually from within the primary pod. Exec into the container and run:
+
+```
+python3 /scripts/inplace_upgrade.py N
+```
+
+where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most DBs.
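+
+For example, for a three-member cluster, selecting the primary pod via labels (the label names follow the
+conventions used in the examples above and may differ in your setup):
+
+```
+kubectl exec -it $(kubectl get po -o name -l 'spilo-role=master,cluster-name=cluster-1') -- \
+  python3 /scripts/inplace_upgrade.py 3
+```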
+
+{{< hint type=info >}}Note that the changes become irreversible once pg_upgrade is called.{{< /hint >}}
diff --git a/docs/hugo/content/en/pg_versioning/minor_updates.md b/docs/hugo/content/en/pg_versioning/minor_updates.md
new file mode 100644
index 000000000..b584be323
--- /dev/null
+++ b/docs/hugo/content/en/pg_versioning/minor_updates.md
@@ -0,0 +1,81 @@
+---
+title: "Minor version update"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2110
+---
+
+Minor version updates for PostgreSQL are performed by updating the PostgreSQL container image in use.
+When the object `spec.dockerImage` of the cluster manifest is updated, the operator carries out the update based on the rolling update strategy. This means that the pods are replaced one after the other, with the replicas being updated first and the old primary last, after a switchover. The operational interruption should generally last less than 5 seconds (switchover time), but clients must still reconnect.
+
+If necessary, the operator also supports the downgrade of minor releases in the same way.
+
+To install minor version updates, PostgreSQL only requires the binaries to be replaced and the database to be restarted. For more information see [PostgreSQL - Versioning Policy](https://www.postgresql.org/support/versioning/).
+
+{{< hint type=info >}}This procedure can also be used for all other containers in a cluster, whether sidecars, exporter, pooler or backup images.{{< /hint >}}
+
+
+### Preconditions:
+1. Check whether there is a newer image for the PostgreSQL container - [check on Docker Hub](https://hub.docker.com/repository/docker/cybertecpostgresql/cybertec-pg-container/general)
+2. Check that the new image version is newer than the previously used one.
+3. The maintenance mode of the cluster must be deactivated. In addition, the replicas should not have a high lag.
+
+### Updating the PostgreSQL container image
+Old manifest:
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+ namespace: cpo
+spec:
+ dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.3-1'
+```
+New manifest:
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+ namespace: cpo
+spec:
+ dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1'
+```
+#### Updating via kubectl/oc-client
+```sh
+kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
+'{"spec":{"dockerImage":"docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1"}}'
+```
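+
+You can watch the rolling update replace the pods one after the other (the `cluster-name` label and namespace
+follow the conventions used elsewhere in this documentation):
+
+```sh
+kubectl get pods -n cpo --selector cluster-name=cluster-1 -w
+```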
+
+### Updating the exporter container image
+
+#### Updating the cluster manifest
+Old manifest:
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+ namespace: cpo
+spec:
+ monitor:
+ image: 'docker.io/cybertecpostgresql/cybertec-pg-container:exporter-17.3-1'
+```
+New manifest:
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-1
+ namespace: cpo
+spec:
+ monitor:
+ image: 'docker.io/cybertecpostgresql/cybertec-pg-container:exporter-17.4-1'
+```
+
+#### Updating via kubectl/oc-client
+```sh
+kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
+'{"spec":{"monitor":{"image":"docker.io/cybertecpostgresql/cybertec-pg-container:exporter-17.4-1"}}}'
+```
+
diff --git a/docs/hugo/content/en/postgis/_index.md b/docs/hugo/content/en/postgis/_index.md
new file mode 100644
index 000000000..c5f1739d2
--- /dev/null
+++ b/docs/hugo/content/en/postgis/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Postgis"
+date: 2024-03-11T14:26:51+01:00
+draft: true
+weight: 1800
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/postgis/introduction.md b/docs/hugo/content/en/postgis/introduction.md
new file mode 100644
index 000000000..45c588525
--- /dev/null
+++ b/docs/hugo/content/en/postgis/introduction.md
@@ -0,0 +1,7 @@
+---
+title: "Introduction"
+date: 2024-03-11T14:26:51+01:00
+draft: true
+weight: 1
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/project/_index.md b/docs/hugo/content/en/project/_index.md
new file mode 100644
index 000000000..e3d86e9fd
--- /dev/null
+++ b/docs/hugo/content/en/project/_index.md
@@ -0,0 +1,6 @@
+---
+title: "CPO"
+date: 2024-03-11T14:26:51+01:00
+draft: false
+weight: 200
+---
\ No newline at end of file
diff --git a/docs/hugo/content/en/project/container_images.md b/docs/hugo/content/en/project/container_images.md
new file mode 100644
index 000000000..28c5f2c57
--- /dev/null
+++ b/docs/hugo/content/en/project/container_images.md
@@ -0,0 +1,32 @@
+---
+title: "Container Images"
+date: 2024-03-11T14:26:51+01:00
+draft: false
+weight: 202
+---
+
+For each version of the operator, as well as for PostgreSQL and the other required components, a corresponding container image is provided on Docker Hub.
+
+#### Operator container images
+The operator images are the central components that control the operation and administration of the PostgreSQL databases. These images are available in the following repository on DockerHub:
+
+[Operator Images](https://hub.docker.com/repository/docker/cybertecpostgresql/cybertec-pg-operator)
+
+The repository contains all the necessary images for running the Cybertec PG Operator in a Kubernetes environment. These images are updated regularly to ensure the latest features and security updates.
+
+#### Additional container images
+In addition to the operator images, various container images are required to support a complete PostgreSQL environment. These images are available in the following repository:
+[CYBERTEC-PG-Container Images](https://hub.docker.com/repository/docker/cybertecpostgresql/cybertec-pg-container/general)
+
+This repository contains images for the following components:
+
+- PostgreSQL: The main database image, which is provided for all supported major versions of PostgreSQL. The tag name always reflects the latest release, e.g. currently `17.4` for PostgreSQL `17.4`. For the other major versions, the corresponding minor versions released by the PostgreSQL community at the same time are included.
+- Postgres-GIS: A specialised image that combines PostgreSQL with the PostGIS extension to support spatial data processing. You can find more information about PostGIS [here](../../postgis).
+The tag for Postgres-GIS also encodes the included PostGIS version. Example: `postgres-gis-17.4-34-1` ships PostGIS `3.4.x`.
+- PGBackRest: A backup and restore tool developed specifically for PostgreSQL and available as a separate container image.
+- Exporter: Images for monitoring PostgreSQL databases that collect metrics and make them available for monitoring tools such as Prometheus.
+- PgBouncer: A lightweight connection pooler for PostgreSQL that manages and optimises the number of concurrent connections.
+
+
+#### Extensions
+You can view the versions included in the [Extensions](../../extensions/pg17/) section.
\ No newline at end of file
diff --git a/docs/hugo/content/en/project/project.md b/docs/hugo/content/en/project/project.md
new file mode 100644
index 000000000..19a227e6b
--- /dev/null
+++ b/docs/hugo/content/en/project/project.md
@@ -0,0 +1,56 @@
+---
+title: "The Project"
+date: 2024-03-11T14:26:51+01:00
+draft: false
+weight: 201
+---
+The CYBERTEC PostgreSQL Operator (CPO) enables the simple provision and management of PostgreSQL clusters on Kubernetes. It reduces the administration effort and facilitates the management of single-node and HA clusters.
+## Main components
+- [CYBERTEC-pg-operator](https://github.com/cybertec-postgresql/CYBERTEC-pg-operator): Kubernetes operator for the automation of PostgreSQL clusters.
+- [CYBERTEC-pg-container](https://github.com/cybertec-postgresql/CYBERTEC-pg-container): Docker container suite for PostgreSQL, Patroni and etcd for the provision of HA clusters.
+- [CYBERTEC-operator-tutorials](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials): Tutorials and instructions for installing and using the operator.
+## Features
+- Cluster management:
+ - Single-node and HA (High Availability) clusters via [Patroni](https://patroni.readthedocs.io/en/latest/)
+ - Reduction of downtime thanks to redundancy, pod anti-affinity, auto-failover and self-healing
+ - Automated failover
+ - Live volume resize without pod restarts
+ - Basic credential and user management on K8s, eases application deployments
+ - Compatible with OpenShift and Rancher
+- PostgreSQL compatibility:
+ - Supports PostgreSQL versions 13 to 17
+ - Inplace upgrades for smooth version changes and minimal downtime
+ - Extensive extension support, including pgAudit, TimescaleDB and PostGIS
+ - Standby-Cluster
+- Backup & Restore:
+ - Integrated pgBackRest support
+ - Automatic backups
+ - Point-in-Time- and Snapshot-based Restores / Disaster Recovery
+- Connection management:
+ - pgBouncer for connection pooling
+- Monitoring & alerting stack
+ - Integrated metrics exporter
+ - Prometheus, alert manager for metrics collection and alerting
+ - Grafana for visual monitoring of the clusters
+- Operator UI:
+ - Web interface for managing clusters
+
+## Installation
+Detailed instructions on installation and configuration can be found in the CYBERTEC operator tutorials and in the following chapters.
+Example of an installation via Helm:
+```
+helm repo add cybertec https://cybertec-postgresql.github.io/helm-charts/
+helm install pg-operator cybertec/cybertec-pg-operator
+```
+
+More information: [Installation]({{< relref "installation/install_operator" >}})
+
+## Contribution
+This project is open source, and contributions to its further development are expressly encouraged.
+Possible forms of contribution:
+- Bug reports and feature requests
+- Code contributions (pull requests welcome)
+- Improvement of the documentation
+
+Further details on contributions can be found in the respective GitHub repositories.
+## Licence
+The CYBERTEC PostgreSQL Operator is licensed under the Apache 2.0 licence.
\ No newline at end of file
diff --git a/docs/hugo/content/en/quickstart/_index.md b/docs/hugo/content/en/quickstart/_index.md
new file mode 100644
index 000000000..4419cb715
--- /dev/null
+++ b/docs/hugo/content/en/quickstart/_index.md
@@ -0,0 +1,112 @@
+---
+title: "Quickstart"
+date: 2023-03-07T14:26:51+01:00
+draft: false
+weight: 400
+---
+
+There is a lot we could tell and document about our project, but it seems you just want to get started. Let us show you the fastest way to use CPO.
+
+## Preconditions
+
+- git
+- helm (optional)
+- kubectl or oc
+
+## Let's start
+
+### Step 1 - Preparations
+To get started, you can fork our tutorial repository on GitHub and then clone it.
+[CYBERTEC-operator-tutorials](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/fork)
+
+```
+git clone https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials.git
+cd CYBERTEC-operator-tutorials
+```
+
+### Step 2 - Install the Operator
+Two options are available for the installation:
+- Installation via Helm chart (local or via Helm repo)
+- Installation via apply
+
+#### Installation via Helm-Chart
+
+If you want to use the Helm chart, you can decide for yourself whether to use the Helm chart from the operator tutorials on GitHub or to add the Helm repo of the CPO project and install the Helm chart from it.
+
+```
+#add helm-repo (optional)
+ helm repo add cpo https://cybertec-postgresql.github.io/CYBERTEC-operator-tutorials
+ kubectl apply -n cpo -k setup/namespace/.
+ helm install -n cpo cpo cpo/postgres-operator
+
+or
+
+# use local helm-chart from git
+ kubectl apply -n cpo -k setup/namespace/.
+ helm install cpo -n cpo setup/helm/operator/
+```
+
+#### Installation via apply
+```
+kubectl apply -n cpo -k setup/namespace/.
+kubectl apply -n cpo -k setup/helm/operator/.
+```
+
+You can check whether the operator pod is up and running:
+```
+kubectl get pods -n cpo --selector=cpo.cybertec.at/pod/type=postgres-operator
+```
+The result should look like this:
+```
+NAME READY STATUS RESTARTS AGE
+postgres-operator-599688d948-fw8pw 1/1 Running 0 41s
+```
+
+The operator is ready and the setup is complete. The next step is the creation of a Postgres cluster.
+
+### Step 3 - Create a Cluster
+To create a simple cluster, the following command is sufficient:
+```
+kubectl apply -n cpo -f cluster-tutorials/single-cluster
+```
+
+```
+watch kubectl get pods -n cpo --selector cluster-name=cluster-1
+```
+The result should look like this:
+```
+Every 2.0s: kubectl get pods -n cpo --selector cluster-name=cluster-1
+
+NAME READY STATUS RESTARTS AGE
+cluster-1-0 2/2 Running 0 28s
+cluster-1-1 0/2 PodInitializing 0 9s
+```
+
+### Step 4 - Connect to the Database
+Get your login information from the secret.
+```
+kubectl get secret -n cpo postgres.cluster-1.credentials.postgresql.cpo.opensource.cybertec.at -o jsonpath='{.data}' | jq '.|map_values(@base64d)'
+```
+The result should look like this:
+```
+{
+ "password": "2rZG1Kx9asdHscswQGzff4Ru0xW6uasacy3GQ0sjdCH3wWr0kguUXUZek6dkemsf",
+ "username": "postgres"
+}
+```
+#### Connection via port-forward
+
+```
+kubectl port-forward -n cpo cluster-1-0 5432:5432
+```
+
+```
+# using psql
+PGPASSWORD=2rZG1Kx9asdHscswQGzffjdCH3wWr0kguUXUZek6dkemsf psql -h 127.0.0.1 -p 5432 -U postgres
+
+# using usql
+PGPASSWORD=2rZG1Kx9asdHscswQGzffjdCH3wWr0kguUXUZek6dkemsf usql postgresql://postgres@127.0.0.1/postgres
+```
+
+## Next Steps
+Congratulations, your first cluster is ready and you were able to connect to it. On the following pages we have put together an introduction with lots of information and details to show you the different possibilities and components of CPO.
\ No newline at end of file
diff --git a/docs/hugo/content/en/release_notes/_index.md b/docs/hugo/content/en/release_notes/_index.md
new file mode 100644
index 000000000..b62aa7778
--- /dev/null
+++ b/docs/hugo/content/en/release_notes/_index.md
@@ -0,0 +1,200 @@
+---
+title: "Release-Notes"
+date: 2024-03-11T14:26:51+01:00
+draft: false
+weight: 2500
+---
+
+### 0.8.3
+
+#### Fixes
+- Major upgrade updated for Patroni 4.x.x
+- Fixes for PGEE
+- Fix for Monitoring-User
+- Dependency updates and several small changes
+
+#### Supported Versions
+
+- PG: 13 - 17
+- Patroni: 4.0.5
+- pgBackRest: 2.54.2
+- Kubernetes: 1.21 - 1.32
+- Openshift: 4.8 - 4.18
+
+### 0.8.2
+
+#### Features
+- Added Clone-Functionality with pgBackRest
+
+#### Supported Versions
+
+- PG: 13 - 17
+- Patroni: 3.3.2
+- pgBackRest: 2.54.0
+- Kubernetes: 1.21 - 1.32
+- Openshift: 4.8 - 4.18
+
+### 0.8.1
+
+#### Features
+- Added pgbackrest to Monitoring
+
+#### Fixes
+- Fixed role creation for monitoring
+
+#### Supported Versions
+
+- PG: 13 - 17
+- Patroni: 3.3.2
+- pgBackRest: 2.53
+- Kubernetes: 1.21 - 1.32
+- Openshift: 4.8 - 4.18
+
+### 0.8.0
+
+#### Features
+- Multisite - Support
+- use icu as default for pg > 14
+
+#### Fixes
+- Fixed role creation for monitoring.
+- Fix for the use of gcs with pgBackRest
+
+#### Supported Versions
+
+- PG: 13 - 16 & 17Beta2
+- Patroni: 3.3.2
+- pgBackRest: 2.53
+- Kubernetes: 1.21 - 1.32
+- Openshift: 4.8 - 4.18
+
+### 0.7.1
+
+#### Fixes
+- Fixed role creation for monitoring.
+- Fix for the use of gcs with pgBackRest
+
+#### Supported Versions
+
+- PG: 13 - 16 & 17Beta2
+- Patroni: 3.3.2
+- pgBackRest: 2.53
+- Kubernetes: 1.21 - 1.28
+- Openshift: 4.8 - 4.13
+
+### 0.7.0
+
+#### Features
+- Monitoring-Sidecar integrated via CRD [Start with Monitoring](documentation/cluster/monitoring)
+- Password-Hash per default set to scram-sha-256
+- pgBackRest with blockstorage using RepoHost
+- Internal Certification-Management for RepoHost-Certificates
+- Compatible with PG17Beta2
+
+#### Changes
+- API change: acid.zalan.do is replaced by cpo.opensource.cybertec.at - If you're updating your operator from a previous version, please check this [HowTo Migrate to new API](documentation/operator/migrateToNewApi/)
+- Patroni compatibility has been increased to version 3.3.2
+- pgBackRest compatibility has been increased to version 2.52.1
+- Revision of the restore process
+- Revision of the backup jobs
+- Operator now uses Rocky 9 as base image
+- Updated Go package to 1.22.5
+
+#### Fixes
+- PDB bug fixed - single-node clusters no longer create PDBs, which could break Kubernetes updates
+- Wrong templates inside cronjobs fixed
+
+#### Supported Versions
+
+- PG: 13 - 16 & 17Beta2
+- Patroni: 3.3.2
+- pgBackRest: 2.52.1
+- Kubernetes: 1.21 - 1.28
+- Openshift: 4.8 - 4.13
+
+### 0.6.1
+
+Release with fixes
+
+#### Fixes
+- Backup-Pod now runs with "best-effort" resource definition
+- The init container for the restore now uses the same resource definition as the database container if there is no specific definition in the cluster manifest (spec.backup.pgbackrest.resources)
+
+#### Software-Versions
+
+- PostgreSQL: 15.3, 14.8, 13.11, 12.15
+- Patroni: 3.0.4
+- pgBackRest: 2.47
+- OS: Rocky-Linux 9.1 (4.18)
+
+___
+
+### 0.6.0
+
+Release with some improvements and stabilisation measures
+
+#### Features
+- Added [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+- Added support for TDE based on the CYBERTEC PostgreSQL Enterprise Images (Licensed Container Suite)
+
+#### Software-Versions
+
+- PostgreSQL: 15.3, 14.8, 13.11, 12.15
+- Patroni: 3.0.4
+- pgBackRest: 2.47
+- OS: Rocky-Linux 9.1 (4.18)
+
+___
+
+### 0.5.0
+
+Release with new Software-Updates and some internal Improvements
+#### Features
+- Updated to Zalando Operator 1.9
+
+#### Fixes
+- internal Problems with Cronjobs
+- updates for some API-Definitions
+
+#### Software-Versions
+
+- PostgreSQL: 15.2, 14.7, 13.10, 12.14
+- Patroni: 3.0.2
+- pgBackRest: 2.45
+- OS: Rocky-Linux 9.1 (4.18)
+
+___
+
+### 0.3.0
+
+Release with some improvements and stabilisation measures
+
+#### Fixes
+- missing pgbackrest_restore configmap fixed
+
+#### Software-Versions
+
+- PostgreSQL: 15.1, 14.7, 13.9, 12.13, 11.18 and 10.23
+- Patroni: 3.0.1
+- pgBackRest: 2.44
+- OS: Rocky-Linux 9.1 (4.18)
+
+___
+
+### 0.1.0
+
+Initial Release as a Fork of the Zalando-Operator
+
+#### Features
+
+- Added Support for pgBackRest (PoC-State)
+ - Stanza-create and Initial-Backup are executed automatically
+ - Schedule automatic updates (Full/Incremental/Differential-Backup)
+ - Securely store backups on AWS S3 and S3-compatible storage
+
+#### Software-Versions
+
+- PostgreSQL: 14.6, 13.9, 12.13, 11.18 and 10.23
+- Patroni: 2.4.1
+- pgBackRest: 2.42
+- OS: Rocky-Linux 9.0 (4.18)
diff --git a/docs/hugo/content/en/resources/_index.md b/docs/hugo/content/en/resources/_index.md
new file mode 100644
index 000000000..5f29fc12a
--- /dev/null
+++ b/docs/hugo/content/en/resources/_index.md
@@ -0,0 +1,57 @@
+---
+title: "Apply Ressources"
+date: 2024-04-28T14:26:51+01:00
+draft: false
+weight: 700
+---
+
+Kubernetes workloads are often deployed without a direct resource definition. This means that, apart from the limits specified by the administrators, the workloads can use the required resources of the worker node very dynamically.
+
+The cluster manifest is used to define the Postgres pod resources via the typical resources objects.
+
+There are basically two different definitions:
+- `requests`: Basic requirement and guaranteed by the worker node
+- `limits`: maximum availability, allocation is increased dynamically if the worker node can provide the resources.
+
+For the planning of the cluster, a proper definition should be carried out in terms of the required hardware, which is then defined as `requests`. These resources are thus guaranteed to the cluster and are taken into account when deploying the pod. Accordingly, a pod can only be deployed on a worker if it can provide these resources. Any limits beyond this are not taken into account when deploying.
+
+The unit of measurement should be taken into account when planning the necessary CPUs:
+CPU specifications are based on millicores.
+- `1 cpu` corresponds to `1 core`
+- `1 core` corresponds to `1000 millicores (m)`
+- `1/2 core` corresponds to `500m`
+
+```
+  resources:
+    limits:
+      cpu: 1000m
+      memory: 1Gi
+    requests:
+      cpu: 500m
+      memory: 1Gi
+```
+
+This example corresponds to a guaranteed availability of half a core and 1 Gibibyte. However, if necessary and available, the container can use up to one core. The allocation takes place dynamically and for the required time.
+
+Pods can be categorised into three Quality of Service (QoS) classes based on the defined resource information.
+
+- `Best-Effort`: The containers of a pod have no resource definitions
+- `Burstable`: At least one container of the pod has a memory or CPU `requests` or `limits` definition
+- `Guaranteed`: Each container of a pod has both CPU and memory `requests` and `limits`, and the `limits` values match the `requests` values
+
+If you would like more information and explanations, you can take a look at the [Kubernetes documentation on QoS](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes).
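+
+You can check which QoS class Kubernetes has assigned to a pod, for example:
+
+```
+kubectl get pod cluster-1-0 -o jsonpath='{.status.qosClass}'
+```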
+
+We generally recommend using the `Guaranteed` QoS class for PostgreSQL workloads. However, many users very successfully let the CPU limit deviate from the request by a factor such as 2.
+For example:
+```
+  resources:
+    limits:
+      cpu: 2000m
+      memory: 1Gi
+    requests:
+      cpu: 1000m
+      memory: 1Gi
+```
+This is intended to create the possibility of additional CPU resources for sudden load peaks.
+
+{{< hint type=info >}}The use of burstable definitions does not release you from a correct resource calculation, as resources beyond the `requests` are not guaranteed and an undersupply can therefore occur if the `requests` are not properly defined.{{< /hint >}}
diff --git a/docs/hugo/content/en/restore/_index.md b/docs/hugo/content/en/restore/_index.md
new file mode 100644
index 000000000..da757e8f7
--- /dev/null
+++ b/docs/hugo/content/en/restore/_index.md
@@ -0,0 +1,83 @@
+---
+title: "Restore"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1400
+---
+
+Restore or recovery is the process of starting a PostgreSQL instance or cluster based on a defined, existing backup. This can be just a backup or a combination of a backup and additional WAL files. The difference is that a backup represents a fixed point in time, whereas the combination with WAL enables point-in-time recovery (PITR).
+
+You can find more information about backups [here](../backup/introduction/)
+
+### Rescue my cluster
+
+CPO enables restores based on an existing backup using pgBackRest. To do this, it needs the relevant information about the point in time or backup to which it should restore, and where the data for this comes from.
+As we have already provided the operator with all the information relating to the storage of backups in the previous chapter, it only needs the following information:
+- `id`: Control variable, must be increased for each restore process
+- `type`: What type of restore is required
+- `repo`: Which repo the data should come from
+- `set`: Specific Backup to restore - Check [backup](../backup/check_backups/) to see how to get the identifier
+
+{{< hint type=info >}}To ensure that the operator does not repeat an already completed restore, the `id` defined in the restore section is saved by the operator; the value of this `id` must therefore be changed for a new restore.{{< /hint >}}
+
+#### Details for a Backup restore
+With this information, we select a specific backup from `repo1` and specify that pgBackRest should stop at the end of that backup:
+```
+restore:
+ id: '1'
+ options:
+ type: 'immediate'
+ set: '20240515-164100F'
+ repo: 'repo1'
+```
+
+{{< hint type=info >}} Without the specification `--type=immediate`, pgBackRest would then consume the entire WAL that is available and thus restore the last available consistent data point. {{< /hint >}}
+
+#### Details for a point-in-time recovery (PITR)
+With this information we define a point-in-time recovery (PITR): the end point is given by a timestamp and the start point by a backup specification. The latter is optional; without it, pgBackRest automatically starts at the last full backup before the target time.
+```
+restore:
+  id: '1'
+  options:
+    type: 'time'
+    set: '20240515-164100F'
+    target: '2024-05-16 07:46:05.506817+00'
+    repo: '1'
+```
+{{< hint type=info >}}`--type=time` indicates that it is a point-in-time recovery (PITR). {{< /hint >}}
+
+## Example in a cluster manifest
+
+```
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: cluster-5
+ namespace: cpo
+spec:
+ backup:
+ pgbackrest:
+ configuration:
+ secret: cluster-1-pvc-credentials
+ global:
+ repo1-retention-full: '7'
+ repo1-retention-full-type: count
+ image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
+ repos:
+ - name: repo1
+ schedule:
+ full: 30 2 * * *
+ storage: pvc
+ volume:
+ size: 1Gi
+ restore:
+ id: '1'
+ options:
+ type: 'time'
+ set: '20240515-164100F'
+ target: '2024-05-16 07:46:05.506817+00'
+```
+An example of this can also be found in our tutorials. For a point-in-time recovery (PITR) you can find it [here](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/restore_pitr).
+
+{{< hint type=warning >}} Incorrect information for the Backup or the timestamp can result in pgBackRest not being able to complete the restore successfully. In the event of an error, the information must be corrected and another restore must be started. {{< /hint >}}
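+
+To start another restore, increase the `id` and adjust the options, for example via a merge patch (the values
+here are illustrative):
+
+```
+kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-5 -n cpo --type='merge' -p \
+'{"spec":{"restore":{"id":"2","options":{"type":"time","set":"20240515-164100F","target":"2024-05-16 08:00:00+00","repo":"repo1"}}}}'
+```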
diff --git a/docs/hugo/content/en/standby-cluster/_index.md b/docs/hugo/content/en/standby-cluster/_index.md
new file mode 100644
index 000000000..744793c57
--- /dev/null
+++ b/docs/hugo/content/en/standby-cluster/_index.md
@@ -0,0 +1,65 @@
+---
+title: "Standby Cluster"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2080
+---
+
+A standby cluster is an independent PostgreSQL cluster that consists of a standby leader and optionally further replicas (if `numberOfInstances` > 1). The standby leader runs in read-only mode and does not accept any write operations. A standby cluster can be promoted to a primary cluster if required, whereby the standby leader becomes a fully-fledged leader and allows write operations.
+
+### Preconditions:
+The primary cluster must fulfil one of the following conditions:
+- it must be accessible from the standby cluster via streaming replication, or
+- its backup storage (S3, GCS or Azure Blob) must be accessible from the standby cluster
+
+The passwords for the Postgres user, the replication user and the exporter user (if monitoring is active) must be created as secrets for the standby cluster; otherwise connection problems will occur.
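+
+A sketch of pre-creating the postgres user secret, following the secret naming scheme shown in the Quickstart
+(the password must match the one used by the primary cluster):
+
+```sh
+kubectl create secret generic -n cpo \
+  postgres.standby-cluster-1.credentials.postgresql.cpo.opensource.cybertec.at \
+  --from-literal=username=postgres \
+  --from-literal=password='<password-from-primary>'
+```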
+
+### Create standby cluster
+
+The `standby` object in the cluster manifest is required to create a standby cluster.
+
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: standby-cluster-1
+spec:
+ standby:
+ standby_host: "cluster-1.cpo"
+ standby_port: "5432"
+ dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-17.4-1'
+ numberOfInstances: 1
+ postgresql:
+ version: '17'
+ resources:
+ limits:
+ cpu: 500m
+ memory: 500Mi
+ requests:
+ cpu: 500m
+ memory: 500Mi
+ teamId: acid
+ volume:
+ size: 5Gi
+```
+
+The primary cluster must be accessible from the standby cluster. It can be located in the same Kubernetes cluster or in a different one.
+
+- `standby_host`: Corresponds to the endpoint via which the primary pod can be reached. It can be a kubernetes-internal DNS name or an IP or DNS name that can be reached in the network.
+- `standby_port`: Corresponds to the PostgreSQL port used (default 5432)
+
+
+### Promoting cluster
+
+To promote a cluster, it is only necessary to remove the standby object.
+The cluster is then promoted to a primary cluster.
+
+```sh
+kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
+'{"spec":{"standby":null}}'
+```
+
+
+### Limitations
+A primary cluster cannot be demoted to a standby cluster.
+If necessary, the recommendation is to create a new cluster as a standby cluster.
\ No newline at end of file
diff --git a/docs/hugo/content/en/storage/_index.md b/docs/hugo/content/en/storage/_index.md
new file mode 100644
index 000000000..f4e4b070b
--- /dev/null
+++ b/docs/hugo/content/en/storage/_index.md
@@ -0,0 +1,98 @@
+---
+title: "Storage"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 800
+---
+Storage is crucial for the performance of a database and is therefore a central element. As with systems based on bare metal or virtual machines, the same requirements apply to Kubernetes workloads, such as constant availability, good performance, consistency and durability.
+
+A basic distinction is made between local storage, which is directly connected to the worker node, and network storage, which is mounted on the worker node and thus made available to the pod.
+
+In the vast majority of Kubernetes systems, network storage is used, for example from hyperscalers or other cloud providers, or from self-managed systems such as Ceph.
+
+With network storage in particular, attention must be paid to performance in terms of throughput (speed and guaranteed IOPS) and, above all, latency. It is also important to ensure that the different volumes do not compete with each other in terms of load.
+
+> **_PAY ATTENTION:_** Before using a CPO cluster, make sure that the storage is suitable for the intended use and provides the necessary performance. In addition, check the storage with benchmarks before use. We recommend the use of [pgbench](https://www.postgresql.org/docs/current/pgbench.html) for this purpose.
+
+## Define Storage-Volume
+
+The storage is defined via the volume object and enables the size and storage class for the storage to be defined, among other things.
+```
+spec:
+ volume:
+ size: 5Gi
+ storageClass: default-provisioner
+ ...
+```
+
+The volume is currently used for both PG and WAL data. In the future, there will be an option to create a separate WAL volume.
+Please check our [roadmap](roadmap).
+
+{{< hint type=info >}}Please ensure that the storageClass exists and is usable. If a volume cannot be provisioned, it will remain in the Pending state, as will the database pod.{{< /hint >}}
+
+## Expanding Volume
+
+{{< hint type=info >}}Kubernetes is able to forward requests to expand the storage to the storage system and to perform the expansion without restarting the container. However, this requires the storage system and the driver used to support it. This information can be found in the storage class under the attribute `allowVolumeExpansion`. A distinction must also be made between online and offline expansion; the latter requires a restart of the pod, for which the pod must be deleted manually.{{< /hint >}}
+
+To expand the volume, increase the value of the `volume.size` object:
+```
+spec:
+ volume:
+ size: 10Gi
+ storageClass: default-provisioner
+ ...
+```
+This will trigger the expansion of your cluster volumes. It will take some time; you can check the current state in the PVC:
+```
+kubectl get pvc pgdata-cluster-1-0 -o yaml
+-------------------------------------------------------
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: crc-csi-hostpath-provisioner
+ volumeMode: Filesystem
+ volumeName: pvc-800d7ecc-2d5f-4ef4-af83-1cd94c766d37
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 5Gi
+ phase: Bound
+```
+
+## Creating additional volumes
+The operator allows you to extend your cluster with additional volumes.
+```
+spec:
+ ...
+ additionalVolumes:
+ - name: empty
+ mountPath: /opt/empty
+ targetContainers:
+ - all
+ volumeSource:
+ emptyDir: {}
+```
+This example will create an emptyDir and mount it to all containers inside the database pod.
+
+
+## Specific settings for AWS gp3 storage
+For AWS gp3 storage you can define additional settings:
+```
+ volume:
+ size: 1Gi
+ storageClass: gp3
+ iops: 1000 # for EBS gp3
+ throughput: 250 # in MB/s for EBS gp3
+
+```
+The defined IOPS and throughput are included in the PersistentVolumeClaim and sent to the storage provisioner.
+Please keep in mind that AWS imposes a cooldown period as a limitation: after a change, you need to wait six hours before making further changes.
+Please also check the default and allowed values for IOPS and throughput in the [AWS docs](https://aws.amazon.com/ebs/general-purpose/).
+
+To ensure that these settings are updated properly, change the operator configuration `storage_resize_mode` from its default to `mixed`.
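+A sketch of this setting in a CRD-based operator configuration; the resource name and field layout are assumptions and may differ depending on how the operator is deployed:
+```
+apiVersion: "cpo.opensource.cybertec.at/v1"
+kind: OperatorConfiguration
+metadata:
+  name: cpo-operator-configuration
+configuration:
+  kubernetes:
+    storage_resize_mode: mixed
+```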
diff --git a/docs/hugo/content/en/tde/_index.md b/docs/hugo/content/en/tde/_index.md
new file mode 100644
index 000000000..ce8a20e13
--- /dev/null
+++ b/docs/hugo/content/en/tde/_index.md
@@ -0,0 +1,88 @@
+---
+title: "TDE"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2200
+---
+## What is Transparent Data Encryption (TDE)?
+
+Transparent Data Encryption (TDE) is a technology for encrypting databases at file level. The data is automatically encrypted before it is stored on the storage medium and decrypted transparently for authorised applications and users if required. This ensures that the data is protected at rest without the need for changes to existing applications. TDE is used by various database vendors such as Microsoft, Oracle and IBM to increase the security of database files.
+
+### Difference between hard disk encryption and TDE
+
+Hard disk encryption, also known as Full Disk Encryption (FDE), encrypts the entire hard disk or individual partitions to prevent unauthorised access to sensitive data. This method protects all data on a system, including the operating system, but only when the system is switched off. As soon as an authorised user logs on, the encryption is removed and the data is accessible to anyone who can access the computer while the user is logged on.
+
+In contrast, TDE specifically encrypts the database files at file level. Encryption is transparent to the applications accessing the database and protects the data at rest, regardless of the status of the operating system or hardware. This provides an additional protection mechanism, especially in scenarios where hard disk encryption is not sufficient or not implemented.
+
+
+### Advantages of TDE
+
+- **Protection of data at rest**: Data on the storage medium is encrypted, reducing the risk of data leaks.
+- **Transparency for applications**: Encryption is done directly at database level, so no changes to existing applications are required.
+- **Integration with PGEE**: Full support in Kubernetes environments and other modern IT infrastructures.
+- **Fulfilment of regulatory requirements**: Support for compliance requirements such as GDPR, HIPAA and other data protection standards.
+- **Additional security features**: In combination with other PGEE features such as data masking and obfuscation, comprehensive protection of sensitive data is ensured.
+
+Further information on TDE and PGEE can be found here: [CYBERTEC TDE](https://www.cybertec-postgresql.com/en/products/postgresql-transparent-data-encryption/).
+
+## Securing clusters with TDE
+
+The CYBERTEC-pg-operator, together with Patroni, takes over the setup and administration of the TDE functionality in conjunction with the PGEE containers.
+
+### Preconditions
+- CYBERTEC-pgee-container
+- Valid licence agreement for PGEE
+
+### Deploy a TDE-Cluster
+
+Setting up a TDE cluster is basically the same as setting up a conventional cluster.
+The only differences are the Postgres container image used and the object `tde.enable: true`, which instructs the operator to initialise the database with the TDE functionality.
+
+```yaml
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+ name: tde-cluster-1
+ namespace: cpo
+spec:
+ dockerImage: 'containers.cybertec.at/cybertec-pgee-container/postgres:rocky9-17.4-1'
+ numberOfInstances: 1
+ postgresql:
+ version: '17'
+ resources:
+ limits:
+ cpu: 250m
+ memory: 500Mi
+ requests:
+ cpu: 250m
+ memory: 500Mi
+ tde:
+ enable: true
+ teamId: acid
+ volume:
+ size: 5Gi
+```
+- `dockerImage` - Must contain a PostgreSQL image of the PGEE container suite
+- `tde.enable` - initialises the DB with TDE
+
+{{< hint type=important >}} Please note that the activation of TDE is only possible when creating new clusters. Subsequent activation is not possible. {{< /hint >}}
+
+### Check TDE-Status
+
+```sh
+[postgres@tde-cluster-1-0 ~]$ psql
+psql (17.4 EE 1.4.1)
+ ____ ____ _____ _____
+| _ \ / ___| ____| ____|
+| |_) | | _| _| | _|
+| __/| |_| | |___| |___
+|_| \____|_____|_____|
+PostgreSQL EE by CYBERTEC
+Type "help" for help.
+
+postgres=# show data_encryption;
+ data_encryption
+-----------------
+ on
+(1 row)
+```
\ No newline at end of file
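+
+The same check can be run non-interactively from outside the pod. A sketch, assuming the example cluster above (pod `tde-cluster-1-0` in namespace `cpo`, container `postgres`):
+```sh
+kubectl exec -n cpo tde-cluster-1-0 -c postgres -- psql -c "show data_encryption;"
+```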
diff --git a/docs/hugo/content/en/tls/_index.md b/docs/hugo/content/en/tls/_index.md
new file mode 100644
index 000000000..488ca9d03
--- /dev/null
+++ b/docs/hugo/content/en/tls/_index.md
@@ -0,0 +1,120 @@
+---
+title: "TLS/SSL connections"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 1600
+---
+Each cluster created is automatically equipped with a self-generated TLS certificate and is preconfigured for the use of TLS/SSL. However, this certificate is not based on a Certificate Authority (CA) that is known to the clients. This means that although communication between the client and server is encrypted, the certificate cannot be verified by the client.
+
+The following chapter deals with the creation of custom certificates and the steps required to integrate these certificates into the PostgreSQL cluster. In the example, a custom CA is created, on the basis of which the certificates are then generated and signed by this CA. This step can be skipped if certificates have already been obtained from another trusted organisation.
+
+### Create a custom CA and Certificates
+{{< hint type=important >}} Precondition: This chapter requires openssl {{< /hint >}}
+#### Create the CA
+The first step is to create a custom CA. An organisation name is required for this. You can also add further details about the country, district and location.
+The CA serves as the central authority that signs the certificates and thus guarantees the correctness of the certificate. In order to successfully complete the verification of a certificate, the CA's certificate must be stored on the client system.
+```
+ORGANIZATION=MyCustomOrganization
+CA=$ORGANIZATION-RootCA
+
+mkdir $CA
+cd $CA
+
+# Creating the CA-Key
+openssl genpkey -algorithm EC -out $CA.key -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -aes256
+
+# Creating the CA-Certificate
+openssl req -x509 -new -nodes -key $CA.key -sha512 -days 1826 -out $CA.crt -subj "/CN=${ORGANIZATION} Root-CA/C=AT/ST=Lower Austria/L=Woellersdorf/O=${ORGANIZATION}"
+
+```
+
+#### Create a custom Certificate
+The server needs a certificate signed by a CA and a private key so that it can prove its trustworthiness to clients.
+
+{{< hint type=important >}} It is important that the CA certificate is stored as trustworthy with the client. Otherwise, no certificate check is possible. {{< /hint >}}
+
+
+```
+CN=cluster-1
+DNS2="${CN}-repl"
+DNS3="${CN}-pooler"
+DNS4="${CN}-pooler-repl"
+
+# Creating the private Key
+openssl genpkey -algorithm EC -out $CN.key -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve
+
+# Creating the Certificate Signing Request (CSR)
+openssl req -new -key $CN.key -out $CN.csr \
+ -subj "/C=AT/ST=Lower Austria/L=Woellersdorf/O=${ORGANIZATION}/OU=OrgUnit/CN=${CN}" \
+ -addext "subjectAltName=DNS:${CN},DNS:${DNS2},DNS:${DNS3},DNS:${DNS4}"
+
+
+# Sign CSR with the CA
+openssl x509 -req -in $CN.csr -CA $CA.crt -CAkey $CA.key -CAcreateserial -out $CN.crt -days 365 \
+ -extfile <(echo -e "[ v3_req ]\nsubjectAltName=DNS:${CN},DNS:${DNS2},DNS:${DNS3},DNS:${DNS4}") -extensions v3_req
+
+```
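+
+Before deploying the certificate, it can be checked against the CA as a quick sanity check:
+```
+openssl verify -CAfile $CA.crt $CN.crt
+openssl x509 -in $CN.crt -noout -text | grep -A1 "Subject Alternative Name"
+```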
+
+#### Add Certificate to the Cluster
+
+To add the certificate to your cluster, a Kubernetes secret is needed.
+There are two different options here.
+For the first option, a single secret is created that contains all the necessary information, i.e.:
+- Server certificate
+- Private server key
+- CA certificate
+
+For the second option, the CA certificate is separated out and written to its own secret. The advantage of this is that the CA only needs to be stored once and changed in a single place in the event of an update.
+
+##### First Option: Using one secret for all three objects
+
+```
+kubectl create secret generic cluster-1-tls \
+ --from-file=tls.crt=$CN.crt \
+ --from-file=tls.key=$CN.key \
+ --from-file=ca.crt=$CA.crt
+```
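+
+You can confirm that the secret contains all three keys before referencing it in the manifest:
+```
+kubectl describe secret cluster-1-tls
+```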
+
+Finally, the definition is made in the cluster manifest so that the operator adapts the cluster.
+
+```yaml
+apiVersion: "cpo.opensource.cybertec.at/v1"
+kind: postgresql
+...
+metadata:
+ name: cluster-1
+spec:
+ tls:
+ secretName: "cluster-1-tls"
+ caFile: "ca.crt"
+```
+
+##### Second Option: Using a separate Secret for the CA
+
+```
+kubectl create secret generic cpo-root-ca --from-file=ca.crt=$CA.crt
+```
+
+```
+kubectl create secret generic cluster-1-tls \
+ --from-file=tls.crt=$CN.crt \
+ --from-file=tls.key=$CN.key
+```
+
+Finally, the definition is made in the cluster manifest so that the operator adapts the cluster.
+
+```yaml
+apiVersion: "cpo.opensource.cybertec.at/v1"
+kind: postgresql
+
+metadata:
+ name: cluster-1
+spec:
+ tls:
+ secretName: "cluster-1-tls"
+ caSecretName: "cpo-root-ca"
+ caFile: "ca.crt"
+```
+
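+To verify from a client that certificate validation works end to end, connect with `sslmode=verify-full`. A sketch; the user and database are illustrative, the shell variables from the earlier steps are assumed to be set, and the CA certificate must be available on the client:
+```
+psql "host=cluster-1 port=5432 user=postgres dbname=postgres sslmode=verify-full sslrootcert=$CA.crt"
+```
+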
+The mounted certificates are checked automatically within the container every 5 minutes. If the certificates have been updated, they are reloaded automatically.
+
+{{< hint type=important >}} In addition to generating the certificates independently, [cert-manager](https://cert-manager.io/docs/) can also be used for this purpose. {{< /hint >}}
diff --git a/docs/hugo/content/en/tutorials/_index.md b/docs/hugo/content/en/tutorials/_index.md
new file mode 100644
index 000000000..1eb96870d
--- /dev/null
+++ b/docs/hugo/content/en/tutorials/_index.md
@@ -0,0 +1,65 @@
+---
+title: "Tutorials"
+date: 2023-12-28T14:26:51+01:00
+draft: false
+weight: 2300
+---
+# Overview: CYBERTEC Operator Tutorials
+
+In this repository we provide various tutorials that demonstrate the use of the CYBERTEC operator. The tutorials contain cluster snippets that can be used directly with `kubectl`.
+
+## Using the tutorials
+
+The snippets provided can be deployed in two ways:
+
+- **With `kubectl apply -f`**: Use this method to apply a YAML file directly.
+- **With `kubectl apply -k`**: Use this method to execute kustomize-supported deployments.
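+
+For example, a tutorial directory can be applied straight from a checkout of the repository (a sketch; the tutorial paths are listed in the overview below):
+```
+git clone https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials.git
+cd CYBERTEC-operator-tutorials
+kubectl apply -k cluster-tutorials/single-cluster/
+```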
+
+## Repository
+
+The repository with all tutorials can be found here:
+
+[CYBERTEC Operator Tutorials](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials)
+
+### Cluster tutorials
+
+The specific cluster tutorials are available under the following paths:
+
+🔗 [Operator Helm-Chart](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/setup/helm/operator)
+
+🔗 [Cluster Tutorials](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials)
+
+#### Overview
+
+🔗 [Single Cluster](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/single-cluster)
+
+🔗 [Cluster-configured users and databases](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/configure_users_and_databases)
+🔗 [Cluster with prepared databases](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/prepared_databases)
+
+🔗 [HA-Cluster](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/high-availability-cluster)
+
+🔗 [Cluster with Backup via PVC](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_pvc)
+🔗 [Cluster with Backup via S3](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3)
+🔗 [Cluster with Backup via GCS](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_gcs)
+
+🔗 [Restore Cluster](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/restore)
+
+🔗 [Cluster with Pooler](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/cluster-with-pooler)
+
+🔗 [Cluster with Monitoring](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/monitored_cluster)
+
+🔗 [Cluster-Clone via PVC](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/clone_with_pvc)
+🔗 [Cluster-Clone via S3](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/clone_with_s3)
+
+🔗 [Standby Cluster](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/standby-cluster)
+
+🔗 [Multisite-Cluster](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/multisite)
+
+🔗 [TDE-Cluster](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/tde-cluster)
+
+Good luck trying it out! 🚀
\ No newline at end of file
diff --git a/docs/hugo/go.mod b/docs/hugo/go.mod
new file mode 100644
index 000000000..ad58e25dc
--- /dev/null
+++ b/docs/hugo/go.mod
@@ -0,0 +1,5 @@
+module github.com/cybertec-postgresql/cybertec-pg-operator
+
+go 1.22.5
+
+require github.com/cybertec-postgresql/hugo-geekdoc v0.0.0-20250130133505-d46e0dcc47c7 // indirect
diff --git a/docs/hugo/go.sum b/docs/hugo/go.sum
new file mode 100644
index 000000000..c36f9a4c3
--- /dev/null
+++ b/docs/hugo/go.sum
@@ -0,0 +1,2 @@
+github.com/cybertec-postgresql/hugo-geekdoc v0.0.0-20250130133505-d46e0dcc47c7 h1:l2xJB771iUIMN0zU1NKfFVqgnS0bQRFdCkVJhxVX7r4=
+github.com/cybertec-postgresql/hugo-geekdoc v0.0.0-20250130133505-d46e0dcc47c7/go.mod h1:y+YYT9rdvbhqlFG8MvkmuP8A8Z2+WkraYEdLghEZLbs=
diff --git a/docs/hugo/hugo.toml b/docs/hugo/hugo.toml
new file mode 100644
index 000000000..600fa5515
--- /dev/null
+++ b/docs/hugo/hugo.toml
@@ -0,0 +1,13 @@
+baseURL = "https://cybertec-postgresql.github.io/CYBERTEC-pg-operator"
+title = "CYBERTEC-PG-Operator"
+
+defaultContentLanguage = "en"
+
+[languages.en]
+languageName = "English"
+contentDir = "content/en"
+weight = 10
+
+[module]
+[[module.imports]]
+path = 'github.com/cybertec-postgresql/hugo-geekdoc'
diff --git a/docs/hugo/layouts/shortcodes/back.html b/docs/hugo/layouts/shortcodes/back.html
new file mode 100644
index 000000000..f9f10936a
--- /dev/null
+++ b/docs/hugo/layouts/shortcodes/back.html
@@ -0,0 +1 @@
+⬅ Back to Parent
\ No newline at end of file
diff --git a/docs/hugo/public/404.html b/docs/hugo/public/404.html
new file mode 100644
index 000000000..6e83a4a06
--- /dev/null
+++ b/docs/hugo/public/404.html
@@ -0,0 +1,566 @@
+<!-- generated Hugo build output: 404 page ("Lost? Don't worry") and rendered architecture pages; the text mirrors the markdown sources in docs/hugo/content -->
diff --git a/docs/hugo/public/architecture/index.xml b/docs/hugo/public/architecture/index.xml
new file mode 100644
index 000000000..55c1bb52d
--- /dev/null
+++ b/docs/hugo/public/architecture/index.xml
@@ -0,0 +1,26 @@
+<!-- generated RSS feed for the architecture section; entries mirror the markdown sources in docs/hugo/content -->
diff --git a/docs/hugo/public/architecture/rolling_update/index.html b/docs/hugo/public/architecture/rolling_update/index.html
new file mode 100644
index 000000000..1ed798adf
--- /dev/null
+++ b/docs/hugo/public/architecture/rolling_update/index.html
@@ -0,0 +1,5114 @@
+<!-- generated Hugo build output: rendered Rolling-Updates page and backup pages (S3, Azure Blob, GCS); the text mirrors the markdown sources in docs/hugo/content -->
diff --git a/docs/hugo/public/backup/index.xml b/docs/hugo/public/backup/index.xml
new file mode 100644
index 000000000..67ec94dc6
--- /dev/null
+++ b/docs/hugo/public/backup/index.xml
@@ -0,0 +1,61 @@
+<!-- generated RSS feed for the backup section; entries mirror the markdown sources in docs/hugo/content -->
diff --git a/docs/hugo/public/backup/introduction/index.html b/docs/hugo/public/backup/introduction/index.html
new file mode 100644
index 000000000..f8cd17956
--- /dev/null
+++ b/docs/hugo/public/backup/introduction/index.html
@@ -0,0 +1,5223 @@
+
+
+
+
+
+
+
+
+
+
+
+
+ Introduction | CYBERTEC-PG-Operator
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ image/svg+xml
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Backups are essential for databases. From broken storage to deployments gone wrong, backups often save the day. Starting with pg_dump, which was released in the late 1990s, to the archiving of WAL files (PostgreSQL 8.0 / 2005) and pg_basebackup (PostgreSQL 9.0 / 2010), PostgreSQL already offers built-in options for backups and restores based on logical and physical backups.
CPO relies on pgBackRest as its backup solution, a tried-and-tested tool with extensive backup and restore options.
+The backup is based on two elements:
+
+
Snapshots in the form of physical backups
+
WAL archive: Continuous archiving of the WAL files
Backups represent a snapshot of the database in the form of pyhsical files. This contains all relevant information that PostgreSQL holds in its data folder.
+With pgBackRest it is possible to create different types of Backups:
+
+
full Snapshot: This captures and saves all files at the time of the backup
+
Differential backup: Only captures all files that have been changed since the last full Backup
+
Incremental backup: Only records the files that have been changed since the last backup (of any kind).
+
+
When restoring using differential or incremental Backup, it is necessary to also use the previous Backup that provide the basis for the selected Backup.
+
+
+
+
+
+
+
+
The choice of Backup types depends on factors such as the size of the database, the time available for backups and the restore.
The WAL (Write-Ahead-Log) refers to log files which record all changes to the database data before they are written to the actual files. The basic idea here is to guarantee the consistency and recoverability of the comitted data even in the event of failures.
+
PostgreSQL normally cleans up or recycles the WAL files that are no longer required. By using WAL archiving, the WAL files are saved to a different location before this process so that they can be used for various activities in the future.
+These activities include
+
+
Providing the WAL files for replicas to keep them up to date
+
Restoring instances that have lost parts of the WAL files in the event of a failure and cannot return to a consistent state without them without losing data
+
Point-In-Time-Recovery (PITR): In contrast to Backups, which map a fixed point in time, WAL files make it possible to jump dynamically to a desired point in time and restore the database to the closest available consistent data point
+
+
+
+
+
+
+
+
+
WAL archiving is an indispensable tool for data availability, recoverability and the continuous availability of PostgreSQL.
The operator creates a cronjob object on Kubernetes based on the defined times for automatic backups. This means that the Kubernetes core (CronJob Controller) will take care of processing the automatic backups and create a job and thus a pod at the appropriate time.
+The pod will send the backup command to the primary or, if block storage is used, to the repo host and monitor it. As soon as the backup is successfully completed, the pod stops with Completed and thus completes the job.
If there are problems such as a timeout, the pod will stop with exit code 1 and thus indicate an error. In this case, a new pod will be created which will attempt to complete the backup. The maximum number of attempts is 6, so if the backup fails six times, the job is deemed to have failed and will not be attempted again until the next cronjob execution. The job pod log provides information about the problems.
When using block storage, the operator creates an additional pod that acts as a repo host. Based on a TLS connection, the repo host obtains the data for the Backup from the current primary of the cluster, which is compressed before being sent.
+WAL archives are pushed from the primary pod to the repo host.
This example creates backups based on a repo host with a daily full Backup at 2:30 am. In addition, pgBackRest is instructed to keep a maximum of 7 full Backups. The oldest one is always removed when a new Backup is created. You can increase the pvc-size all time if needed. Therefore you just need to update the size value to a higher amount of Gi. Please be aware that shrinking the volume is not possible.
+
+
+
+
+
+
+
+
In addition, further configurations for pgBackRest can be defined in the global object. Information on possible configurations can be found in the pgBackRest documentation
The function of a cluster clone was implemented to create the possibility of duplicating the current status of a cluster in order to carry out tests such as a major upgrade.
+It creates an autonomous and independent cluster based on an existing local cluster or from a cloud storage via pgBackRest (S3, gcs or Azure Blob)
be accessible from the standby cluster via streaming replication
+
the backup storage used by the standby cluster (S3, GCS or Azure Blob) must be accessible for the standby cluster
+
+
The passwords for the Postgres user, the replication user and the exporter user (if monitoring is active) must be created as a secret for the standby cluster. Otherwise connection problems will occur
CPO enables the use of the in-place upgrade, which makes it possible to upgrade a cluster to a new PG major. For this purpose, pg_upgrade is used in the background.
+
+
PAY ATTENTION: Note that an in-place upgrade generates both a pod restore in the form of a rolling update and an operational interruption of the cluster during the actual execution of the restore.
Pod restart - Use the rolling update strategy to replace all pods based on the new ENV PGVERSION with the version you want to update to.
+
Check - Check that the new PGVERSION is larger than the previously used one.
+
Check whether the new PGVERSION is larger than the previously used one and the maintenance mode of the cluster must be deactivated. In addition, the replicas should not have a high lag.
To trigger an In-Place-Upgrade you have just to increase the parameter spec.postgresql.version. If you choose a valid number the Operator will start with the prozedure, described above.
+If you choosse a not allowed value, you will give an error and if you decrease the value, the operator will just ignore it with the following log-Entry.
Users who are already used to working with PostgreSQL from Baremetal or VMs are already familiar with the need for various files to configure PostgreSQL. These include
+
+
postgresql.conf
+
pg_hba.conf
+
…
+
+
Although these files are available in the container, direct modification is not planned. As part of the declarative mode of operation of the operator, these files are defined via the operator. The modifying intervention within the container also represents a contradiction to the immutability of the container.
+
For these reasons, the operator provides a way to make adjustments to the various files, from PostgreSQL to Patroni.
+
We differentiate between two main objects in the cluster manifest:
+
+
postgresql with the child objects version and parameters
+
patroni with objects for the pg_hab, slots and much more
The patroni object contains numerous options for customising the patroni-setu, and the pg_hba.conf is also configured here. A complete list of all available elements can be found here.
+
The most important elements include
+
+
pg_hba - pg_hba.conf
+
slots
+
synchronous_mode - enables synchronous mode in the cluster. The default is set to false
+
maximum_lag_on_failover - Specifies the maximum lag so that the pod is still considered healthy in the event of a failover.
+
failsafe_mode Allows you to cancel the downgrading of the leader if all cluster members can be reached via the Patroni Rest Api.
+You can find more information on this in the Patroni documentation
The pg_hba.conf contains all defined authentication rules for PostgreSQL.
+
When customising this configuration, it is important that the entire version of pg_hba is written to the manifest.
+The current configuration can be read out in the database using table pg_hba_file_rules ;.
When using user-defined slots, for example for the use of CDC using Debezium, there are problems when interacting with Patroni, as the slot and its current status are not automatically synchronised to the replicas.
+
In the event of a failover, the client cannot start replication as both the entire slot and the information about the data that has already been synchronised are missing.
+
To resolve this problem, slots must be defined in the cluster manifest rather than in PostgreSQL.
This example creates a logical replication slot with the name cdc-example within the app_db database and uses the pgoutput plugin for the slot.
+
+
+
+
+
+
+
+
Slots are only synchronised from the leader/standby leader to the replicas. This means that using the slots read-only on the replicas will cause a problem in the event of a failover.
A connection pooler is a tool that acts as a proxy between the application and the database and enables the performance of the application to be improved and the load on the database to be reduced. The reason for this lies in the connection handling of PostgreSQL.
PostgreSQL use a new Process for every database-connection created by the postmaster. This process is handling the connection. On the positive side, this enables a stable connection and isolation, but it is not particularly efficient for short-lived connections due to the effort required to create them.
With connection pooling, the application connects to the pooler, which in turn maintains a number of connections to the PostgreSQL database.
+This makes it possible to use the connections from the pooler to the database for a long time instead of short-lived connections and to recycle them accordingly.
+
In addition to utilising long-term connections, a ConnectionPooler also makes it possible to reduce the number of connections required to the database. For example, if you have 3 application nodes, each of which maintains 100 connections to the database at the same time, that would be 300 connections in total. The application usually does not even begin to utilise this number of connections. With the pgBouncer, this can be optimised so that the applications open the 300 connections to the pgBouncer, but the pgBouncer only generates 100 connections to PostgreSQL, for example, thus reducing the load by 2/3.
+
+
+
+
+
+
+
+
It is important to correctly configure the bouncer and thus the connections to be created between pgBouncer and PostgreSQL so that enough connections are available for the workload.
CPO relies on pgBouncer, a popular and above all lightweight open source tool. pgBouncer manages individual user-database connections for each user used, which can be used immediately for incoming client connections.
connection_pooler.number_of_instances - How many instances of connection pooler to create. Default is 2 which is also the required minimum.
+
+
+
connection_pooler.schema - Database schema to create for credentials lookup function to be used by the connection pooler. Is is created in every database of the Postgres cluster. You can also choose an existing schema. Default schema is pooler.
+
+
+
connection_pooler.user - User to create for connection pooler to be able to connect to a database. You can also choose an existing role, but make sure it has the LOGIN privilege. Default role is pooler.
+
+
+
connection_pooler.image - Docker image to use for connection pooler deployment. Default: “registry.opensource.zalan.do/acid/pgbouncer”
+
+
+
connection_poole.max_db_connections - How many connections the pooler can max hold. This value is divided among the pooler pods. Default is 60 which will make up 30 connections per pod for the default setup with two instances.
+
+
+
connection_pooler.mode - Defines pooler mode. Available Value: session, transaction or statement. Default is transaction.
+
+
+
connection_pooler.resources - Hardware definition for the pooler pods
+
+
+
enableConnectionPooler - Defines whether poolers for read/write access should be created based on the spec.connectionPooler definition.
+
+
+
enableReplicaConnectionPooler- Defines whether poolers for read-only access should be created based on the spec.connectionPooler definition.
Defines the configuration and settings for every type of a connectionPoolers (Primary and Replica).
+
+
+
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| databases | map | false | Defines the databases that are created by the operator. See the tutorial. |
| dockerImage | string | true | Defines the PostgreSQL container image used for this cluster. |
| enableLogicalBackup | boolean | false | Enables logical backups for this cluster (stored on S3). The operator's S3 configuration is required (not needed for pgBackRest). |
| enableConnectionPooler | boolean | false | Creates a connection pooler for the primary pod. |
| enableReplicaConnectionPooler | boolean | false | Creates a connection pooler for the replica pods. |
| enableMasterLoadBalancer | boolean | false | Defines whether to enable the load balancer pointing to the Postgres primary. |
| enableReplicaLoadBalancer | boolean | false | Defines whether to enable the load balancer pointing to the Postgres replicas. |
| enableMasterPoolerLoadBalancer | boolean | false | Defines whether to enable the load balancer pointing to the primary connection pooler. |
| enableReplicaPoolerLoadBalancer | boolean | false | Defines whether to enable the load balancer pointing to the replica connection pooler. |
| enableShmVolume | boolean | false | Start a database pod without limitations on shm memory. By default, Docker limits /dev/shm to 64M (see e.g. the docker issue), which may not be enough if PostgreSQL uses parallel workers heavily. If this option is set to true, a tmpfs volume is mounted into the target database pod to remove this limitation. |
| podPriorityClassName | string | false | Name of the priority class that should be assigned to the cluster pods. If not set, the default priority class is used. The priority class itself must be defined in advance. |
| podAnnotations | map | false | A map of key-value pairs attached as annotations to each pod created for the database. |
| spiloFSGroup | int | false | The persistent volumes for the Spilo pods in the StatefulSet will be owned and writable by the specified group ID. This overrides the spilo_fsgroup operator parameter. |
| spiloRunAsGroup | int | false | Sets the group ID used in the container to run the process. |
| spiloRunAsUser | int | false | Sets the user ID used in the container to run the process. This must be set to run the container without root. |
| slots | map | false | Permanent replication slots that Patroni preserves after failover by re-creating them on the new primary immediately after a promote. Use the preferred slot name as the map item. |
| synchronous_mode | boolean | false | Patroni synchronous_mode parameter value, optional. The default is false. |
| synchronous_mode_strict | boolean | false | Patroni synchronous_mode_strict parameter value, optional. The default is false. |
| synchronous_node_count | int | false | Patroni synchronous_node_count parameter value, optional. The default is 1. Only used if synchronous_mode_strict is true. |
| ttl | int | false | Patroni ttl parameter value, optional. The default is set by the PostgreSQL image. |
The resources definition lets us modify the reserved hardware (requests) and the limits, which allow the cluster to consume more than the reserved amount if the Kubernetes worker has that hardware available. There are some restrictions when modifying the limits section: because of how databases behave, you should never define a difference between requests.memory and limits.memory. A database will, over time, use all available memory for caches and other purposes; limits are optional, and the worker node can reclaim the memory above the request, which causes serious problems inside a database, such as corruption or triggering the OutOfMemory killer.
CPU, on the other hand, is a resource where the limits definition is useful, allowing the database to use more CPU when needed and available.
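A minimal sketch following the resources block of the manifest shown earlier; the memory request and limit are kept identical while the CPU limit is raised above the request, and the values are illustrative:

spec:
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: "1"        # CPU may burst above the request
      memory: 500Mi   # never define a gap between memory request and limit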
Sidecars are additional containers that run in the same pod as the database. We can use them for several different jobs.
The operator allows us to define them directly inside the cluster manifest, as shown in the sketch below.
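A minimal sketch, assuming the list form for sidecars in the cluster manifest; the container name and image are hypothetical:

spec:
  sidecars:
    - name: "log-shipper"                        # hypothetical sidecar
      image: "docker.io/example/log-shipper:1.0" # hypothetical image
      env:
        - name: LOG_LEVEL
          value: "info"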
We can do exactly the same for init containers as for sidecars.
The difference is that a sidecar runs alongside the other containers for the normal lifetime of the pod, while an init container runs first, when the pod is created, and ends once its job is done.
The “normal” containers have to wait until all init containers have finished their jobs and exited; see the sketch below.
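A minimal sketch for an init container, again with a hypothetical name and image; it must exit successfully before the database container starts:

spec:
  initContainers:
    - name: "prepare-files"    # hypothetical init container
      image: "docker.io/library/busybox:1.36"
      command: ["sh", "-c", "echo preparing; exit 0"]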
On startup, the containers create a custom TLS certificate, which allows TLS-secured connections to the database.
However, these certificates cannot be verified, because the application has no information about the CA; they are therefore no protection against MITM attacks.
You can configure your own certificates and CA to ensure secured and verified connections between your application and your database.
+
spec:
+ tls:
+ secretName: "" # should correspond to a Kubernetes Secret resource to load
+ certificateFile: "tls.crt"
+ privateKeyFile: "tls.key"
+ caFile: "" # optionally configure Postgres with a CA certificate
+ caSecretName: "" # optionally the ca.crt can come from this secret instead.
+
You need to store the values for tls.crt, tls.key and ca.crt in a secret and reference that secret name inside the tls object.
If you want, you can create a separate secret just for the CA and use this secret for every cluster inside the namespace.
For information on creating the certificates and the secrets, check the tutorial in the additional section or click here.
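A hedged sketch of such a secret; the secret name is hypothetical and must match spec.tls.secretName, and the base64 payloads are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-1-tls          # hypothetical; referenced by spec.tls.secretName
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate>
  tls.key: <base64-encoded private key>
  ca.crt: <base64-encoded CA certificate>  # or keep the CA in a separate secret (caSecretName)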
This allows you, for example, to use specific database nodes in a mixed cluster.
In the example below, the cluster pods are only deployed on nodes with the key cpo and the value enabled,
so you are able to separate your workload.
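A hedged sketch of such a constraint, assuming the cluster spec accepts a standard Kubernetes nodeAffinity structure:

spec:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cpo
              operator: In
              values:
                - enabled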
Every cluster starts with the default PostgreSQL configuration. Every parameter can be overridden through definitions inside the cluster manifest.
For this, we just need to add the parameters section to the postgresql object.
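A minimal sketch, assuming standard PostgreSQL parameter names; the values are illustrative:

spec:
  postgresql:
    version: "16"
    parameters:
      shared_buffers: "256MB"
      max_connections: "200"
      log_min_duration_statement: "1000"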
These definitions change the PostgreSQL configuration. Depending on the parameters changed, the pods may need a restart, which creates a downtime if it is not an HA cluster.
You can check the parameters and allowed values in these sources to ensure a correct value.
CPO not only supports you in deploying your cluster, it also supports you in setting it up in terms of databases and users.
CPO offers you three different options for this:
For each user created, CPO automatically creates a secret with username and password in the namespace of the cluster, which follows this naming convention:
[USERNAME].[CLUSTERNAME].credentials.postgresql.cpo.opensource.cybertec.at
+
If the secrets for an application are to be stored in a different namespace, the setting enable_cross_namespace_secret must be set to true in the operator configuration. You can find more information about the operator configuration here.

The namespace must then be written in front of the user name.
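A hedged sketch of the users object; the user names are hypothetical, the flag list follows the usual map form, and the namespace prefix requires enable_cross_namespace_secret:

spec:
  users:
    appl_user:          # secret created in the cluster's namespace
      - createdb
    app1.appl_user: []  # secret created in namespace app1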
The preparedDatabases object is available for a much more extensive setup of databases and users.
In addition to creating databases and users, it also enables the creation of schemas and extensions, and more detailed rights management is available.
Creating an empty preparedDatabases object already creates a database whose name is derived from the cluster name: preparedDatabases: {}
For the database name, - is replaced with _ in the cluster name
To use your own database names and to create elements such as schemas and extensions within a database, an object must be created within preparedDatabases for each database.
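A hedged sketch of such an object, matching the description that follows:

spec:
  preparedDatabases:
    appl_db:
      schemas:
        data: {}
      extensions:
        dblink: public   # extension name mapped to the target schema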
This example creates a database named appl_db, creates a schema named data inside it, and installs the dblink extension into the schema public.
For rights management, we distinguish between NOLOGIN roles and LOGIN roles. Users have login rights and inherit the other rights from the NOLOGIN roles.
The roles described in the previous paragraph can be assigned to LOGIN roles via the users section in the manifest. Optionally, the Postgres operator can also create standard LOGIN roles for the database and each individual schema. These roles are given the suffix _user and inherit all rights from their NOLOGIN counterparts. Therefore, you cannot set defaultRoles to false and activate defaultUsers at the same time.
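As a hedged sketch, a setup that yields roles like those in the table below could look as follows; the exact interaction of defaultUsers with the schema level is an assumption here:

spec:
  preparedDatabases:
    appl_db:
      defaultUsers: true   # also create the *_user LOGIN counterparts
      schemas:
        data: {}
        history: {}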
This example creates the following users and inheritances:
| Role name | Attributes | Inherits from |
| --- | --- | --- |
| appl_db_owner | Cannot login | appl_db_reader, appl_db_writer, appl_db_data_owner, … |
| appl_db_owner_user | | appl_db_owner |
| appl_db_reader | Cannot login | |
| appl_db_reader_user | | appl_db_reader |
| appl_db_writer | Cannot login | appl_db_reader |
| appl_db_writer_user | | appl_db_writer |
| appl_db_data_owner | Cannot login | appl_db_data_reader, appl_db_data_writer |
| appl_db_data_reader | Cannot login | |
| appl_db_data_writer | Cannot login | appl_db_data_reader |
| appl_db_history_owner | Cannot login | appl_db_history_reader, appl_db_history_writer |
| appl_db_history_reader | Cannot login | |
| appl_db_history_writer | Cannot login | appl_db_history_reader |
+
Default access permissions are also defined for LOGIN roles when databases and schemas are created. This means that they are not currently set if defaultUsers (or defaultRoles for schemas) are activated at a later time.
For each user with LOGIN permissions created by CPO, the operator also creates a secret with username and password, just as with roles created via the users object.
For the gp3 storage on AWS you can define more information:

volume:
  size: 1Gi
  storageClass: gp3
  iops: 1000 # for EBS gp3
  throughput: 250 # in MB/s for EBS gp3

The defined IOPS and throughput are included in the PersistentVolumeClaim and sent to the storage provisioner.
Please keep in mind that AWS defines a cooldown time as a limitation: for further changes you need to wait 6 hours.
Please also check the default and allowed values for IOPS and throughput in the AWS docs.

To ensure that the settings are updated properly, please change the operator configuration 'storage_resize_mode' from its default to 'mixed'.
To set up a cluster, the implementation is based on a description, as with other Kubernetes deployments. For this, the operator uses a document of type postgresql.

You can also find the basic minimum specifications for a single-node cluster in our tutorial project on GitHub.
HINT: Here you will find a complete overview of the available options within the cluster manifest.
No more effort is required to create a high-availability cluster than for a single-node cluster; only the cluster manifest needs to be modified slightly.
The difference lies in the object numberOfInstances, which must be set > 1.

You can also find the basic minimum specifications for a high-availability cluster in our tutorial project on GitHub.
diff --git a/docs/hugo/public/fonts/GeekdocIcons.woff b/docs/hugo/public/fonts/GeekdocIcons.woff
new file mode 100644
index 000000000..eb6eacacf
Binary files /dev/null and b/docs/hugo/public/fonts/GeekdocIcons.woff differ
diff --git a/docs/hugo/public/fonts/GeekdocIcons.woff2 b/docs/hugo/public/fonts/GeekdocIcons.woff2
new file mode 100644
index 000000000..9a9d6050e
Binary files /dev/null and b/docs/hugo/public/fonts/GeekdocIcons.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_AMS-Regular.woff b/docs/hugo/public/fonts/KaTeX_AMS-Regular.woff
new file mode 100644
index 000000000..b804d7b33
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_AMS-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_AMS-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_AMS-Regular.woff2
new file mode 100644
index 000000000..0acaaff03
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_AMS-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Caligraphic-Bold.woff b/docs/hugo/public/fonts/KaTeX_Caligraphic-Bold.woff
new file mode 100644
index 000000000..9759710d1
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Caligraphic-Bold.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Caligraphic-Bold.woff2 b/docs/hugo/public/fonts/KaTeX_Caligraphic-Bold.woff2
new file mode 100644
index 000000000..f390922ec
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Caligraphic-Bold.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Caligraphic-Regular.woff b/docs/hugo/public/fonts/KaTeX_Caligraphic-Regular.woff
new file mode 100644
index 000000000..9bdd534fd
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Caligraphic-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Caligraphic-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Caligraphic-Regular.woff2
new file mode 100644
index 000000000..75344a1f9
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Caligraphic-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Fraktur-Bold.woff b/docs/hugo/public/fonts/KaTeX_Fraktur-Bold.woff
new file mode 100644
index 000000000..e7730f662
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Fraktur-Bold.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Fraktur-Bold.woff2 b/docs/hugo/public/fonts/KaTeX_Fraktur-Bold.woff2
new file mode 100644
index 000000000..395f28bea
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Fraktur-Bold.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Fraktur-Regular.woff b/docs/hugo/public/fonts/KaTeX_Fraktur-Regular.woff
new file mode 100644
index 000000000..acab069f9
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Fraktur-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Fraktur-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Fraktur-Regular.woff2
new file mode 100644
index 000000000..735f6948d
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Fraktur-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-Bold.woff b/docs/hugo/public/fonts/KaTeX_Main-Bold.woff
new file mode 100644
index 000000000..f38136ac1
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-Bold.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-Bold.woff2 b/docs/hugo/public/fonts/KaTeX_Main-Bold.woff2
new file mode 100644
index 000000000..ab2ad21da
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-Bold.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-BoldItalic.woff b/docs/hugo/public/fonts/KaTeX_Main-BoldItalic.woff
new file mode 100644
index 000000000..67807b0bd
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-BoldItalic.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-BoldItalic.woff2 b/docs/hugo/public/fonts/KaTeX_Main-BoldItalic.woff2
new file mode 100644
index 000000000..5931794de
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-BoldItalic.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-Italic.woff b/docs/hugo/public/fonts/KaTeX_Main-Italic.woff
new file mode 100644
index 000000000..6f43b594b
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-Italic.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-Italic.woff2 b/docs/hugo/public/fonts/KaTeX_Main-Italic.woff2
new file mode 100644
index 000000000..b50920e13
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-Italic.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-Regular.woff b/docs/hugo/public/fonts/KaTeX_Main-Regular.woff
new file mode 100644
index 000000000..21f581296
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Main-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Main-Regular.woff2
new file mode 100644
index 000000000..eb24a7ba2
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Main-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Math-BoldItalic.woff b/docs/hugo/public/fonts/KaTeX_Math-BoldItalic.woff
new file mode 100644
index 000000000..0ae390d74
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Math-BoldItalic.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Math-BoldItalic.woff2 b/docs/hugo/public/fonts/KaTeX_Math-BoldItalic.woff2
new file mode 100644
index 000000000..29657023a
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Math-BoldItalic.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Math-Italic.woff b/docs/hugo/public/fonts/KaTeX_Math-Italic.woff
new file mode 100644
index 000000000..eb5159d4c
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Math-Italic.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Math-Italic.woff2 b/docs/hugo/public/fonts/KaTeX_Math-Italic.woff2
new file mode 100644
index 000000000..215c143fd
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Math-Italic.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_SansSerif-Bold.woff b/docs/hugo/public/fonts/KaTeX_SansSerif-Bold.woff
new file mode 100644
index 000000000..8d47c02d9
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_SansSerif-Bold.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_SansSerif-Bold.woff2 b/docs/hugo/public/fonts/KaTeX_SansSerif-Bold.woff2
new file mode 100644
index 000000000..cfaa3bda5
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_SansSerif-Bold.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_SansSerif-Italic.woff b/docs/hugo/public/fonts/KaTeX_SansSerif-Italic.woff
new file mode 100644
index 000000000..7e02df963
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_SansSerif-Italic.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_SansSerif-Italic.woff2 b/docs/hugo/public/fonts/KaTeX_SansSerif-Italic.woff2
new file mode 100644
index 000000000..349c06dc6
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_SansSerif-Italic.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_SansSerif-Regular.woff b/docs/hugo/public/fonts/KaTeX_SansSerif-Regular.woff
new file mode 100644
index 000000000..31b84829b
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_SansSerif-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_SansSerif-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_SansSerif-Regular.woff2
new file mode 100644
index 000000000..a90eea85f
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_SansSerif-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Script-Regular.woff b/docs/hugo/public/fonts/KaTeX_Script-Regular.woff
new file mode 100644
index 000000000..0e7da821e
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Script-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Script-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Script-Regular.woff2
new file mode 100644
index 000000000..b3048fc11
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Script-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size1-Regular.woff b/docs/hugo/public/fonts/KaTeX_Size1-Regular.woff
new file mode 100644
index 000000000..7f292d911
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size1-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size1-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Size1-Regular.woff2
new file mode 100644
index 000000000..c5a8462fb
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size1-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size2-Regular.woff b/docs/hugo/public/fonts/KaTeX_Size2-Regular.woff
new file mode 100644
index 000000000..d241d9be2
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size2-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size2-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Size2-Regular.woff2
new file mode 100644
index 000000000..e1bccfe24
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size2-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size3-Regular.woff b/docs/hugo/public/fonts/KaTeX_Size3-Regular.woff
new file mode 100644
index 000000000..e6e9b658d
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size3-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size3-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Size3-Regular.woff2
new file mode 100644
index 000000000..249a28662
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size3-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size4-Regular.woff b/docs/hugo/public/fonts/KaTeX_Size4-Regular.woff
new file mode 100644
index 000000000..e1ec54576
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size4-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Size4-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Size4-Regular.woff2
new file mode 100644
index 000000000..680c13085
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Size4-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/KaTeX_Typewriter-Regular.woff b/docs/hugo/public/fonts/KaTeX_Typewriter-Regular.woff
new file mode 100644
index 000000000..2432419f2
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Typewriter-Regular.woff differ
diff --git a/docs/hugo/public/fonts/KaTeX_Typewriter-Regular.woff2 b/docs/hugo/public/fonts/KaTeX_Typewriter-Regular.woff2
new file mode 100644
index 000000000..771f1af70
Binary files /dev/null and b/docs/hugo/public/fonts/KaTeX_Typewriter-Regular.woff2 differ
diff --git a/docs/hugo/public/fonts/LiberationMono.woff b/docs/hugo/public/fonts/LiberationMono.woff
new file mode 100644
index 000000000..05f5bd236
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationMono.woff differ
diff --git a/docs/hugo/public/fonts/LiberationMono.woff2 b/docs/hugo/public/fonts/LiberationMono.woff2
new file mode 100644
index 000000000..3f4bb0637
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationMono.woff2 differ
diff --git a/docs/hugo/public/fonts/LiberationSans-Bold.woff b/docs/hugo/public/fonts/LiberationSans-Bold.woff
new file mode 100644
index 000000000..145ed9f7b
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans-Bold.woff differ
diff --git a/docs/hugo/public/fonts/LiberationSans-Bold.woff2 b/docs/hugo/public/fonts/LiberationSans-Bold.woff2
new file mode 100644
index 000000000..b16596740
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans-Bold.woff2 differ
diff --git a/docs/hugo/public/fonts/LiberationSans-BoldItalic.woff b/docs/hugo/public/fonts/LiberationSans-BoldItalic.woff
new file mode 100644
index 000000000..aa4c0c1f5
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans-BoldItalic.woff differ
diff --git a/docs/hugo/public/fonts/LiberationSans-BoldItalic.woff2 b/docs/hugo/public/fonts/LiberationSans-BoldItalic.woff2
new file mode 100644
index 000000000..081c4d61d
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans-BoldItalic.woff2 differ
diff --git a/docs/hugo/public/fonts/LiberationSans-Italic.woff b/docs/hugo/public/fonts/LiberationSans-Italic.woff
new file mode 100644
index 000000000..ebe952e46
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans-Italic.woff differ
diff --git a/docs/hugo/public/fonts/LiberationSans-Italic.woff2 b/docs/hugo/public/fonts/LiberationSans-Italic.woff2
new file mode 100644
index 000000000..86f6521c0
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans-Italic.woff2 differ
diff --git a/docs/hugo/public/fonts/LiberationSans.woff b/docs/hugo/public/fonts/LiberationSans.woff
new file mode 100644
index 000000000..bb582d51f
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans.woff differ
diff --git a/docs/hugo/public/fonts/LiberationSans.woff2 b/docs/hugo/public/fonts/LiberationSans.woff2
new file mode 100644
index 000000000..796cb17b5
Binary files /dev/null and b/docs/hugo/public/fonts/LiberationSans.woff2 differ
diff --git a/docs/hugo/public/fonts/Metropolis.woff b/docs/hugo/public/fonts/Metropolis.woff
new file mode 100644
index 000000000..6b1342c2f
Binary files /dev/null and b/docs/hugo/public/fonts/Metropolis.woff differ
diff --git a/docs/hugo/public/fonts/Metropolis.woff2 b/docs/hugo/public/fonts/Metropolis.woff2
new file mode 100644
index 000000000..d79d50a77
Binary files /dev/null and b/docs/hugo/public/fonts/Metropolis.woff2 differ
diff --git a/docs/hugo/public/fonts/Roboto-Bold.ttf b/docs/hugo/public/fonts/Roboto-Bold.ttf
new file mode 100644
index 000000000..aaf374d2c
Binary files /dev/null and b/docs/hugo/public/fonts/Roboto-Bold.ttf differ
diff --git a/docs/hugo/public/fonts/Roboto-Italic.ttf b/docs/hugo/public/fonts/Roboto-Italic.ttf
new file mode 100644
index 000000000..f382c6874
Binary files /dev/null and b/docs/hugo/public/fonts/Roboto-Italic.ttf differ
diff --git a/docs/hugo/public/fonts/Roboto.ttf b/docs/hugo/public/fonts/Roboto.ttf
new file mode 100644
index 000000000..2d116d920
Binary files /dev/null and b/docs/hugo/public/fonts/Roboto.ttf differ
diff --git a/docs/hugo/public/fonts/SourceCodePro.ttf b/docs/hugo/public/fonts/SourceCodePro.ttf
new file mode 100644
index 000000000..b1fa336cd
Binary files /dev/null and b/docs/hugo/public/fonts/SourceCodePro.ttf differ
diff --git a/docs/hugo/public/ha_cluster/index.html b/docs/hugo/public/ha_cluster/index.html
new file mode 100644
index 000000000..48fbc9cbd
--- /dev/null
+++ b/docs/hugo/public/ha_cluster/index.html
@@ -0,0 +1,5152 @@
High Availability | CYBERTEC-PG-Operator
High availability (HA) is a critical aspect of running database systems, especially in mission-critical applications where downtime is unacceptable. This section explains why high availability is important for PostgreSQL and how Patroni acts as a solution to ensure HA.

Why high availability (HA) for PostgreSQL?

- To minimise downtime: In modern, data-driven applications, downtime can cause significant financial and reputational losses. High availability ensures that the database remains available even in the event of hardware failures or network problems.
- Data integrity and security: A database failure can lead to data loss or data inconsistencies. High-availability solutions protect against such scenarios through continuous data replication and automatic failover.
- Scalability and load balancing: HA setups make it possible to distribute the load across multiple nodes, resulting in better performance and faster response times. This is particularly important in environments with high data traffic.
- Ease of maintenance: By setting up high availability, database maintenance can be performed without interrupting services. Nodes can be maintained incrementally while the database remains available.
In our PostgreSQL environment, we use Patroni in the PG containers by default. This has the advantage that even single-node instances basically function as Patroni clusters. This configuration offers several important advantages:

- Easy scalability: By using Patroni in all PG containers, scaling pods up and down is possible at any time. You can easily add additional pods as needed to improve performance or increase capacity, or remove pods to free up resources. This flexibility is particularly useful in dynamic environments where requirements can change quickly.
- Automated cluster management: Patroni automatically takes over the management of the cluster. When a new pod is added to an existing cluster, Patroni takes care of setting up the new node itself, including initialising and starting replication. This means you don’t have to perform any manual steps to configure or manage new nodes - Patroni does it all automatically.
- Seamless integration: As Patroni is active in every PG container by default, you don’t have to worry about compatibility or manual configuration. This makes deployment and maintenance much easier, as all the necessary components are already preconfigured.
- Optimisation of resources: Even with a minimal setup (single-node instance), you benefit from the advantages of a Patroni cluster, including easy expansion and automatic failover in the event of a failure. This ensures optimal resource utilisation and minimises downtime.
You can either create a new cluster with the document or update an existing cluster with it.
+This makes it possible to scale the cluster up and down during operation.
+
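A minimal sketch, reusing the fields of the single-node manifest shown earlier; only numberOfInstances changes:

spec:
  numberOfInstances: 2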
The example above will create an HA cluster with two nodes.
+
kubectl get pods
+-----------------------------------------------------------------------------
+NAME | READY | STATUS | RESTARTS | AGE
+cluster-1-0 | 1/1 | Running | 0 | 3d
+cluster-1-1 | 1/1 | Running | 0 | 31s
+
diff --git a/docs/hugo/public/ha_cluster/index.xml b/docs/hugo/public/ha_cluster/index.xml
new file mode 100644
index 000000000..ceaf1b1fa
--- /dev/null
+++ b/docs/hugo/public/ha_cluster/index.xml
@@ -0,0 +1,12 @@
High Availability on CYBERTEC-PG-Operator
http://localhost:1313/CYBERTEC-pg-operator/ha_cluster/
Recent content in High Availability on CYBERTEC-PG-Operator
diff --git a/docs/hugo/public/images/architecture_cluster_backup_cloud_storage.png b/docs/hugo/public/images/architecture_cluster_backup_cloud_storage.png
new file mode 100644
index 000000000..082032dc9
Binary files /dev/null and b/docs/hugo/public/images/architecture_cluster_backup_cloud_storage.png differ
diff --git a/docs/hugo/public/images/architecture_cluster_backup_pvc.png b/docs/hugo/public/images/architecture_cluster_backup_pvc.png
new file mode 100644
index 000000000..1cdbf5bad
Binary files /dev/null and b/docs/hugo/public/images/architecture_cluster_backup_pvc.png differ
diff --git a/docs/hugo/public/images/architecture_overview.png b/docs/hugo/public/images/architecture_overview.png
new file mode 100644
index 000000000..d51e3c12d
Binary files /dev/null and b/docs/hugo/public/images/architecture_overview.png differ
diff --git a/docs/hugo/public/images/css/custom.css b/docs/hugo/public/images/css/custom.css
new file mode 100644
index 000000000..d1f5fce85
--- /dev/null
+++ b/docs/hugo/public/images/css/custom.css
@@ -0,0 +1,3 @@
+thead {
+ background: var(--accent-color-lite);
+ }
\ No newline at end of file
diff --git a/docs/hugo/public/images/css/styles.css b/docs/hugo/public/images/css/styles.css
new file mode 100644
index 000000000..d1f5fce85
--- /dev/null
+++ b/docs/hugo/public/images/css/styles.css
@@ -0,0 +1,3 @@
+thead {
+ background: var(--accent-color-lite);
+ }
\ No newline at end of file
diff --git a/docs/hugo/public/images/k8s-entities.png b/docs/hugo/public/images/k8s-entities.png
new file mode 100644
index 000000000..7f33924b4
Binary files /dev/null and b/docs/hugo/public/images/k8s-entities.png differ
diff --git a/docs/hugo/public/images/multisite-interaction.png b/docs/hugo/public/images/multisite-interaction.png
new file mode 100644
index 000000000..10b4c6f1e
Binary files /dev/null and b/docs/hugo/public/images/multisite-interaction.png differ
diff --git a/docs/hugo/public/img/big-data-circle.svg b/docs/hugo/public/img/big-data-circle.svg
new file mode 100644
index 000000000..0a0d875da
--- /dev/null
+++ b/docs/hugo/public/img/big-data-circle.svg
@@ -0,0 +1,2 @@
+
+
\ No newline at end of file
diff --git a/docs/hugo/public/img/geekdoc-stack.svg b/docs/hugo/public/img/geekdoc-stack.svg
new file mode 100644
index 000000000..64aebb70c
--- /dev/null
+++ b/docs/hugo/public/img/geekdoc-stack.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/hugo/public/index.html b/docs/hugo/public/index.html
new file mode 100644
index 000000000..e1c51483e
--- /dev/null
+++ b/docs/hugo/public/index.html
@@ -0,0 +1,5040 @@
CYBERTEC-PG-Operator
Users who are used to working with PostgreSQL on bare metal or on VMs are already familiar with the various files needed to configure PostgreSQL. These include:

- postgresql.conf
- pg_hba.conf
- …

Although these files are available in the container, direct modification is not intended. As part of the declarative mode of operation of the operator, these files are defined via the operator. Modifying them inside the container would also contradict the immutability of the container.

For these reasons, the operator provides a way to make adjustments to the various files, from PostgreSQL to Patroni.

We differentiate between two main objects in the cluster manifest:

- postgresql, with the child objects version and parameters
- patroni, with objects for the pg_hba, slots and much more
The patroni object contains numerous options for customising the Patroni setup, and the pg_hba.conf is also configured here. A complete list of all available elements can be found here.

The most important elements include:

- pg_hba - the pg_hba.conf
- slots
- synchronous_mode - enables synchronous mode in the cluster. The default is false.
- maximum_lag_on_failover - specifies the maximum lag at which a pod is still considered healthy in the event of a failover.
- failsafe_mode - allows the demotion of the leader to be cancelled if all cluster members can be reached via the Patroni REST API. You can find more information on this in the Patroni documentation.
The pg_hba.conf contains all the authentication rules defined for PostgreSQL.

When customising this configuration, it is important that the entire pg_hba content is written to the manifest.
The current configuration can be read out in the database using table pg_hba_file_rules;.
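A hedged sketch of a complete pg_hba definition, assuming the list form under the patroni object; adapt the rules to your environment:

spec:
  patroni:
    pg_hba:
      - local   all all              trust
      - host    all all 127.0.0.1/32 md5
      - hostssl all all 0.0.0.0/0    md5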
When using user-defined slots, for example for CDC with Debezium, there are problems when interacting with Patroni, as the slot and its current status are not automatically synchronised to the replicas.

In the event of a failover, the client cannot resume replication, as both the slot itself and the information about the data that has already been synchronised are missing.

To resolve this problem, slots must be defined in the cluster manifest rather than in PostgreSQL.
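A hedged sketch matching the description below, assuming the map form under patroni.slots:

spec:
  patroni:
    slots:
      cdc-example:
        type: logical
        database: app_db
        plugin: pgoutput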
This example creates a logical replication slot with the name cdc-example within the app_db database and uses the pgoutput plugin for the slot.
Slots are only synchronised from the leader/standby leader to the replicas. This means that using the slots read-only on the replicas will cause a problem in the event of a failover.
Minikube is a tool that makes it possible to run Kubernetes locally on a single computer. It sets up a minimal but functional Kubernetes environment suitable for development and testing purposes. Minikube supports most Kubernetes features and provides an easy way to launch and manage Kubernetes clusters on local machines without the need for a complex cloud infrastructure.
You can then start minikube and all the necessary data is written directly to the config. Defining a user-defined path ensures that other configs are not inadvertently overwritten.
The path must be defined again via ENV in each new user session; alternatively, it can be defined permanently via .bashrc.
If the default path is not used for any other purpose, the ENV does not need to be set.
+
# Start minikube
+minikube start
+
+# get pods from default namespace
+kubectl get pods
+
+# change default namespace to cpo
+kubectl config set-context --namespace=cpo
+
CRC (CodeReady Containers) is a tool from Red Hat that provides a local OpenShift environment. It is specifically designed to run a compact version of OpenShift on a local machine to provide developers and testers with an easy way to develop and test applications optimised for use in OpenShift. CRC includes all the necessary OpenShift components and makes it possible to use Red Hat’s container platform locally without building a full cloud infrastructure.
You can then install and start crc and all the necessary data is written directly to the config. Defining a user-defined path ensures that other configs are not inadvertently overwritten.
The path must be defined again via ENV in each new user session; alternatively, it can be defined permanently via .bashrc.
If the default path is not used for any other purpose, the ENV does not need to be set.
+
# Install crc
+crc setup
+
+# Start crc
+crc start
+
+# get pods from default namespace
+oc get pods
+
+# change default namespace to cpo
+oc project cpo
+
diff --git a/docs/hugo/public/installation/index.xml b/docs/hugo/public/installation/index.xml
new file mode 100644
index 000000000..ac9570723
--- /dev/null
+++ b/docs/hugo/public/installation/index.xml
@@ -0,0 +1,33 @@
Installation on CYBERTEC-PG-Operator

Setup local Kubernetes
Install CPO
Operator-Configuration
diff --git a/docs/hugo/public/installation/install_operator/index.html b/docs/hugo/public/installation/install_operator/index.html
new file mode 100644
index 000000000..37ef81b1f
--- /dev/null
+++ b/docs/hugo/public/installation/install_operator/index.html
@@ -0,0 +1,5217 @@
Install CPO | CYBERTEC-PG-Operator
For the installation, you either need our CPO tutorial repository, or you install CPO directly from our registry.
Exception: installation via OperatorHub (OpenShift only).
You can check and change the values.yaml of the Helm chart under the path helm/operator/values.yaml.
By default, the operator is configured via CRD configuration; if you wish, you can change this to configmap. There are also some other default settings.
+
helm install -n cpo cpo helm/operator/.
+
The installation uses a standard configuration. On the following page you will find more information on how to configure CPO and adapt it to your requirements.