Conversation

@flowbie-bot bot commented Apr 19, 2025

This PR contains the following updates:

Package: rook-ceph-cluster
Update: minor
Change: v1.16.2 -> v1.17.0

Release Notes

rook/rook (rook-ceph-cluster)

v1.17.0

Compare Source

Upgrade Guide

To upgrade from previous versions of Rook, see the Rook upgrade guide.

Breaking Changes

  • Kubernetes v1.28 is now the minimum version supported by Rook, with support through the upcoming K8s release v1.33.
  • Several ObjectBucketClaim options were added in Rook v1.16 that allowed more control over buckets, letting users self-serve their own S3 policies. Depending on the environment, administrators may consider this flexibility a risk, so Rook now disables these options by default to ensure the safest off-the-shelf configuration. To enable the full range of OBC configuration, the new setting ROOK_OBC_ALLOW_ADDITIONAL_CONFIG_FIELDS must be set to allow users to configure these options (a minimal sketch follows this list). For more details, see the OBC additionalConfig documentation.
  • First-class credential management has been added to CephObjectStoreUser resources, allowing multiple credentials and declarative credential rotation. For more details, see Managing User S3 Credentials. As a result, S3 users provisioned via CephObjectStoreUser resources may no longer hold multiple credentials unless those credentials are explicitly managed by Rook: Rook will purge all but one of the undeclared credentials. This may be a user-observable regression for administrators who manually edited or rotated S3 user credentials for CephObjectStoreUsers; affected users can adopt the new credential management feature instead.
  • Kafka notifications configured via CephBucketTopic resources now default the Kafka authentication mechanism to PLAIN. Previously, no auth mechanism was specified by default; it was possible to set one by appending &mechanism=<auth type> to CephBucketTopic.spec.endpoint.kafka.opaqueData, but that is no longer supported. If any auth mechanism other than PLAIN is in use, the CephBucketTopic resources must be updated (see the Kafka sketch after this list).
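
For the OBC change above, here is a minimal sketch of re-enabling the additional config fields. It assumes the setting is supplied through the rook-ceph-operator-config ConfigMap that the operator reads; the field names in the example value are illustrative, so consult the OBC additionalConfig documentation for the exact names your environment should allow:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-ceph-operator-config
      namespace: rook-ceph
    data:
      # Comma-separated list of additionalConfig fields users may set on OBCs.
      # The field names below are illustrative, not a definitive list.
      ROOK_OBC_ALLOW_ADDITIONAL_CONFIG_FIELDS: "bucketPolicy,bucketLifecycle"

For the Kafka change above, here is a hedged sketch of pinning the auth mechanism directly on a CephBucketTopic. The placement of the mechanism field under spec.endpoint.kafka is an assumption based on this release note, and all names and the endpoint URI are placeholders:

    apiVersion: ceph.rook.io/v1
    kind: CephBucketTopic
    metadata:
      name: my-topic              # placeholder name
      namespace: rook-ceph
    spec:
      objectStoreName: my-store   # placeholder object store
      objectStoreNamespace: rook-ceph
      endpoint:
        kafka:
          uri: kafka://kafka.example.com:9092
          useSSL: true
          # Assumed field: set explicitly if anything other than PLAIN is used
          mechanism: SCRAM-SHA-512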

Features

  • The name of a pre-existing Ceph RGW user account can now be set as the bucket owner on an ObjectBucketClaim (OBC), rather than a unique RGW user being created for every bucket. A CephObjectStoreUser resource may be used to create the Ceph RGW user account that will be specified on the OBC. If the bucket owner is set on a bucket that already exists and is owned by a different user, the bucket will be re-linked to the specified user (see the OBC sketch after this list).
  • The Ceph CSI 3.14 release brings features and improvements for RBD and CephFS volumes, volume snapshots, and more. See the Ceph CSI 3.14 release notes for details.
  • External mons: In some two-datacenter clusters, there is no option to run an arbiter mon on an independent K8s node to configure a proper stretch cluster. External mons now allow a mon to be configured outside the Kubernetes cluster, while Rook manages everything else inside the cluster. For more details, see the External Mon documentation. This feature is currently experimental.
  • DNS resolution for mons: Allows clients outside the K8s cluster to resolve mon endpoints via DNS, without requiring manual updates to the list of mon endpoints. This helps in scenarios such as virtual machine live migration. A Ceph client can connect to rook-ceph-active-mons.<namespace>.svc.cluster.local to dynamically resolve mon endpoints and receive automatic updates when mon IPs change. To configure this DNS resolution, see Tracking Mon Endpoints (a client-side ceph.conf sketch follows this list).
  • Node-specific ceph.conf overrides: The ceph.conf overrides can now be customized per node, which helps when certain settings must be unique to a node's hardware. This is configured by creating a node-specific configmap that is loaded for all OSDs and OSD prepare jobs on that node, in place of the default settings loaded from the rook-config-override configmap.
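
For the bucket-owner feature above, here is a minimal OBC sketch that assigns a pre-existing RGW user as the owner. Treat the additionalConfig.bucketOwner field and all names as assumptions to verify against the OBC documentation for your Rook version:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: my-bucket                 # placeholder name
      namespace: default
    spec:
      generateBucketName: my-bucket
      storageClassName: ceph-bucket   # placeholder bucket StorageClass
      additionalConfig:
        # Assumed field: names a pre-existing RGW user (for example one
        # created via a CephObjectStoreUser) as the bucket owner
        bucketOwner: existing-rgw-user

For the DNS-based mon resolution above, here is a hedged sketch of what an external client's ceph.conf might contain. It assumes the cluster runs in the rook-ceph namespace and that the client can resolve the cluster's service DNS names:

    [global]
    # Resolves to the current active mons and tracks mon IP changes automatically
    mon_host = rook-ceph-active-mons.rook-ceph.svc.cluster.local

    [client.admin]
    keyring = /etc/ceph/keyring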

v1.16.7

Compare Source

Improvements

Rook v1.16.7 is a patch release limited in scope, focusing on feature additions and bug fixes to the Ceph operator.

v1.16.6

Compare Source

Improvements

Rook v1.16.6 is a patch release limited in scope, focusing on feature additions and bug fixes to the Ceph operator.

v1.16.5

Compare Source

Improvements

Rook v1.16.5 is a patch release limited in scope, focusing on feature additions and bug fixes to the Ceph operator.

v1.16.4

Compare Source

Improvements

Rook v1.16.4 is a patch release limited in scope, focusing on feature additions and bug fixes to the Ceph operator.

v1.16.3

Compare Source

Improvements

Rook v1.16.3 is a patch release limited in scope, focusing on feature additions and bug fixes to the Ceph operator.


Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@github-actions bot commented Apr 19, 2025

--- HelmRelease: rook-ceph/rook-ceph-cluster ConfigMap: rook-ceph/rook-config-override

+++ HelmRelease: rook-ceph/rook-ceph-cluster ConfigMap: rook-ceph/rook-config-override

@@ -2,13 +2,12 @@

 kind: ConfigMap
 apiVersion: v1
 metadata:
   name: rook-config-override
   namespace: rook-ceph
 data:
-  config: |2
-
+  config: |
     [global]
     bdev_enable_discard = true
     bdev_async_discard = true
     osd_class_update_on_start = false
 
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-block

+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-block

@@ -1,9 +1,9 @@

 ---
+kind: StorageClass
 apiVersion: storage.k8s.io/v1
-kind: StorageClass
 metadata:
   name: ceph-block
   annotations:
     storageclass.kubernetes.io/is-default-class: 'true'
 provisioner: rook-ceph.rbd.csi.ceph.com
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-filesystem

+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-filesystem

@@ -1,9 +1,9 @@

 ---
+kind: StorageClass
 apiVersion: storage.k8s.io/v1
-kind: StorageClass
 metadata:
   name: ceph-filesystem
   annotations:
     storageclass.kubernetes.io/is-default-class: 'false'
 provisioner: rook-ceph.cephfs.csi.ceph.com
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools

+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools

@@ -1,9 +1,9 @@

 ---
+kind: Deployment
 apiVersion: apps/v1
-kind: Deployment
 metadata:
   name: rook-ceph-tools
   namespace: rook-ceph
   labels:
     app: rook-ceph-tools
 spec:
@@ -17,22 +17,23 @@

         app: rook-ceph-tools
     spec:
       dnsPolicy: ClusterFirstWithHostNet
       hostNetwork: true
       containers:
       - name: rook-ceph-tools
-        image: quay.io/ceph/ceph:v19.2.0
+        image: quay.io/ceph/ceph:v19.2.3
         command:
         - /bin/bash
         - -c
         - |
           # Replicate the script from toolbox.sh inline so the ceph image
           # can be run directly, instead of requiring the rook toolbox
           CEPH_CONFIG="/etc/ceph/ceph.conf"
           MON_CONFIG="/etc/rook/mon-endpoints"
           KEYRING_FILE="/etc/ceph/keyring"
+          CONFIG_OVERRIDE="/etc/rook-config-override/config"
 
           # create a ceph config file in its default location so ceph/rados tools can be used
           # without specifying any arguments
           write_endpoints() {
             endpoints=$(cat ${MON_CONFIG})
 
@@ -47,12 +48,19 @@

           [global]
           mon_host = ${mon_endpoints}
 
           [client.admin]
           keyring = ${KEYRING_FILE}
           EOF
+
+            # Merge the config override if it exists and is not empty
+            if [ -f "${CONFIG_OVERRIDE}" ] && [ -s "${CONFIG_OVERRIDE}" ]; then
+              echo "$DATE merging config override from ${CONFIG_OVERRIDE}"
+              echo "" >> ${CEPH_CONFIG}
+              cat ${CONFIG_OVERRIDE} >> ${CEPH_CONFIG}
+            fi
           }
 
           # watch the endpoints config file and update if the mon endpoints ever change
           watch_endpoints() {
             # get the timestamp for the target of the soft link
             real_path=$(realpath ${MON_CONFIG})
@@ -112,12 +120,15 @@

         - mountPath: /etc/ceph
           name: ceph-config
         - name: mon-endpoint-volume
           mountPath: /etc/rook
         - name: ceph-admin-secret
           mountPath: /var/lib/rook-ceph-mon
+        - name: rook-config-override
+          mountPath: /etc/rook-config-override
+          readOnly: true
       serviceAccountName: rook-ceph-default
       volumes:
       - name: ceph-admin-secret
         secret:
           secretName: rook-ceph-mon
           optional: false
@@ -127,12 +138,16 @@

       - name: mon-endpoint-volume
         configMap:
           name: rook-ceph-mon-endpoints
           items:
           - key: data
             path: mon-endpoints
+      - name: rook-config-override
+        configMap:
+          name: rook-config-override
+          optional: true
       - name: ceph-config
         emptyDir: {}
       tolerations:
       - key: node.kubernetes.io/unreachable
         operator: Exists
         effect: NoExecute
--- HelmRelease: rook-ceph/rook-ceph-cluster Ingress: rook-ceph/rook-ceph-dashboard

+++ HelmRelease: rook-ceph/rook-ceph-cluster Ingress: rook-ceph/rook-ceph-dashboard

@@ -1,9 +1,9 @@

 ---
+kind: Ingress
 apiVersion: networking.k8s.io/v1
-kind: Ingress
 metadata:
   name: rook-ceph-dashboard
   namespace: rook-ceph
 spec:
   rules:
   - host: rook...PLACEHOLDER_SECRET_DOMAIN..
--- HelmRelease: rook-ceph/rook-ceph-cluster CephBlockPool: rook-ceph/ceph-blockpool

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephBlockPool: rook-ceph/ceph-blockpool

@@ -1,9 +1,9 @@

 ---
+kind: CephBlockPool
 apiVersion: ceph.rook.io/v1
-kind: CephBlockPool
 metadata:
   name: ceph-blockpool
   namespace: rook-ceph
 spec:
   enableRBDStats: true
   failureDomain: host
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph

@@ -5,14 +5,15 @@

   name: rook-ceph
   namespace: rook-ceph
 spec:
   monitoring:
     enabled: true
   cephVersion:
+    image: quay.io/ceph/ceph:v19.2.3
     allowUnsupported: false
-    image: quay.io/ceph/ceph:v19.2.0
+    imagePullPolicy: null
   cleanupPolicy:
     allowUninstallWithVolumes: false
     confirmation: ''
     sanitizeDisks:
       dataSource: zero
       iteration: 1
@@ -28,13 +29,12 @@

     ssl: false
     urlPrefix: /
   dataDirHostPath: /var/lib/rook
   disruptionManagement:
     managePodBudgets: true
     osdMaintenanceTimeout: 30
-    pgHealthCheckTimeout: 0
   healthCheck:
     daemonHealth:
       mon:
         disabled: false
         interval: 45s
       osd:
--- HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystem: rook-ceph/ceph-filesystem

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystem: rook-ceph/ceph-filesystem

@@ -1,9 +1,9 @@

 ---
+kind: CephFilesystem
 apiVersion: ceph.rook.io/v1
-kind: CephFilesystem
 metadata:
   name: ceph-filesystem
   namespace: rook-ceph
 spec:
   dataPools:
   - failureDomain: host
--- HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystemSubVolumeGroup: rook-ceph/ceph-filesystem-csi

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystemSubVolumeGroup: rook-ceph/ceph-filesystem-csi

@@ -1,9 +1,9 @@

 ---
+kind: CephFilesystemSubVolumeGroup
 apiVersion: ceph.rook.io/v1
-kind: CephFilesystemSubVolumeGroup
 metadata:
   name: ceph-filesystem-csi
   namespace: rook-ceph
 spec:
   name: csi
   filesystemName: ceph-filesystem
--- HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-filesystem

+++ HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-filesystem

@@ -1,9 +1,9 @@

 ---
+kind: VolumeSnapshotClass
 apiVersion: snapshot.storage.k8s.io/v1
-kind: VolumeSnapshotClass
 metadata:
   name: csi-ceph-filesystem
   annotations:
     snapshot.storage.kubernetes.io/is-default-class: 'false'
 driver: rook-ceph.cephfs.csi.ceph.com
 parameters:
--- HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-blockpool

+++ HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-blockpool

@@ -1,9 +1,9 @@

 ---
+kind: VolumeSnapshotClass
 apiVersion: snapshot.storage.k8s.io/v1
-kind: VolumeSnapshotClass
 metadata:
   name: csi-ceph-blockpool
   annotations:
     snapshot.storage.kubernetes.io/is-default-class: 'false'
 driver: rook-ceph.rbd.csi.ceph.com
 parameters:

@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 4964cdc to 4cc8cf6 on April 19, 2025 00:28
@github-actions bot commented Apr 19, 2025

--- kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster

+++ kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster

@@ -13,13 +13,13 @@

     spec:
       chart: rook-ceph-cluster
       sourceRef:
         kind: HelmRepository
         name: rook-ceph
         namespace: flux-system
-      version: v1.16.2
+      version: v1.19.0
   dependsOn:
   - name: rook-ceph-operator
     namespace: rook-ceph
   - name: snapshot-controller
     namespace: storage
   install:

@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 4cc8cf6 to fc208b6 on April 26, 2025 00:30
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from fc208b6 to fbb20b8 on May 10, 2025 00:29
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from fbb20b8 to 15cc17d on May 31, 2025 00:31
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 15cc17d to b176eec on June 7, 2025 00:31
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from b176eec to 9e216aa on June 21, 2025 00:33
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 9e216aa to 38a9b7c on July 12, 2025 00:33
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 38a9b7c to 968a4f9 on August 2, 2025 00:33
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch 2 times, most recently from 50322c8 to d1c842d on August 30, 2025 00:28
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from d1c842d to 0ba9ca6 on September 13, 2025 15:59
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch 2 times, most recently from 5969e25 to 50241c7 on October 11, 2025 00:29
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 50241c7 to 2cbce06 on October 25, 2025 00:31
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 2cbce06 to c615094 on November 1, 2025 01:13
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from c615094 to 2c6cd6f on November 15, 2025 00:33
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 2c6cd6f to 401c133 on December 6, 2025 02:54
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 401c133 to 39cb460 on January 17, 2026 00:34
@renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 39cb460 to 7d45e3e on January 24, 2026 00:35