
Conversation

@soerenschneider soerenschneider (Owner) commented Jul 7, 2025

This PR contains the following updates:

Package          | Update | Change
openebs (source) | minor  | 4.2.0 -> 4.3.3

Release Notes

openebs/openebs (openebs)

v4.3.3

Compare Source

This patch brings in a few fixes, as well as a required update of the Bitnami repo. For more details see https://github.com/bitnami/charts/issues/35164.

What's Changed

Full Changelog: openebs/openebs@v4.3.2...v4.3.3

v4.3.2

Compare Source

What's Changed

Full Changelog: openebs/openebs@v4.3.1...v4.3.2

v4.3.1

Compare Source

Fixes

Full Changelog: openebs/openebs@v4.3.0...v4.3.1

v4.3.0

Compare Source

OpenEBS 4.3.0 Release Notes

Release Summary

OpenEBS version 4.3 introduces several functional fixes and new features focused on improving data security, user experience, high availability (HA), replica rebuilds, and overall stability. The key highlights are Mayastor's support for at-rest data encryption and a new OpenEBS plugin that allows users to interact with all engines supplied by the OpenEBS project. In addition, the release includes various usability and functional fixes for the Mayastor, ZFS, LocalPV LVM and LocalPV Hostpath provisioners, along with documentation enhancements to help users and new contributors get started quickly.

Umbrella Features

  • Unified Plugin
    • With this umbrella plugin, OpenEBS users who have installed their cluster using the OpenEBS umbrella chart will be able to interact with all engines, i.e. Mayastor, localpv-lvm, localpv-zfs and hostpath, using a single plugin: kubectl openebs.
  • One-Step Upgrade
    • All OpenEBS storage engines can now be upgraded using a unified umbrella upgrade process.
  • Supportability
    • Support bundle collection for all stable OpenEBS engines (LocalPV ZFS, LocalPV LVM, LocalPV HostPath, and Mayastor) is now supported via the kubectl openebs dump system command (see the sketch after this list).
    • This unified approach enables comprehensive system state capture for efficient debugging and troubleshooting. Previously, support was limited to Mayastor through the kubectl-mayastor plugin.
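
As a quick illustration of the unified plugin, support bundle collection reduces to a single command. This is a minimal sketch, assuming the kubectl-openebs plugin binary is installed and on the PATH; only the dump system subcommand is quoted from the notes above, and other subcommands are best discovered via the plugin's built-in help.

```sh
# Minimal sketch, assuming the kubectl-openebs plugin is installed and on $PATH.

# Collect a support bundle covering all stable engines
# (LocalPV ZFS, LocalPV LVM, LocalPV HostPath and Mayastor), as described above:
kubectl openebs dump system

# The same plugin is the single entry point for all engines; list its subcommands:
kubectl openebs --help
```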

Replicated Storage (Mayastor)

New Feature

  • Support for at-rest data encryption
    OpenEBS offers support for data-at-rest encryption to help ensure the confidentiality of persistent data stored on disk.
    With this capability, any disk pool configured with a user-defined encryption key can host encrypted volume replicas.
    This feature is particularly beneficial in environments requiring compliance with regulatory or security standards.
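
The release notes do not ship a manifest for this, so the following is only a conceptual sketch: a DiskPool that references a user-defined key stored in a Kubernetes Secret. The encryption block and the Secret/field names are hypothetical placeholders, not the actual Mayastor CRD schema; take the real field names from the Mayastor documentation.

```sh
# Conceptual sketch only: the 'encryption' block and the key/Secret names below
# are hypothetical placeholders, not the real DiskPool schema.
kubectl -n openebs create secret generic pool-encryption-key \
  --from-literal=key="$(openssl rand -hex 32)"   # user-defined encryption key

cat <<'EOF' | kubectl apply -f -
apiVersion: openebs.io/v1beta2        # DiskPool API version may differ per release
kind: DiskPool
metadata:
  name: encrypted-pool-1
  namespace: openebs
spec:
  node: worker-1
  disks: ["/dev/disk/by-id/example-disk"]
  encryption:                          # hypothetical field
    keySecret: pool-encryption-key     # hypothetical field
EOF
```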

Enhancements

  • Added support for IPv6.
  • Added support for formatOptions via storage class (see the sketch after this list).
  • Prefers cordoned nodes when removing volume replicas, e.g. during volume scale down.
  • Pool creation using non-persistent devlinks (/dev/sdX) is now restricted.
  • Users no longer have to recreate the StorageClass when restoring a volume from a thick snapshot. This fix is important for CSI-based backup operations.
  • Added new volume health information to better reflect the current state of the volume.
  • Added a plugin command to delete a volume. This is mainly applicable to a PVC with a Retain policy, where a user can end up in a situation where Mayastor has a volume without a PV object.
  • Avoids a full rebuild if the partial rebuild call fails due to the max rebuild limit.
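
A minimal sketch of how the new formatOptions parameter could be wired into a Mayastor StorageClass. The parameter name is taken from the enhancement above; the provisioner name and the other parameters follow the usual Mayastor StorageClass layout, and the example mkfs flag is purely illustrative.

```sh
# Sketch of a Mayastor StorageClass passing formatOptions to the format step.
# formatOptions is the parameter named above; the example value "-K"
# (mkfs.xfs: do not discard blocks) is illustrative only.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-2-replicas-xfs
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "2"
  protocol: "nvmf"
  fsType: "xfs"
  formatOptions: "-K"
EOF
```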

Upgrading

  • Volume Health information now reflects the true status of the volume
    This means that a volume status may now be reported as Degraded whereas it would previously have been reported as Online. This has a particular impact on unpublished volumes (in other words, volumes which are not mounted or used by a pod), since volume rebuilds are currently not available for unpublished volumes.
    This behaviour can be reverted by setting a helm chart variable: agents.core.volumeHealth=false.
  • This version of the OpenEBS chart adds three new components out of the box: Loki, Minio and Alloy. This change is necessary for collecting debugging information and capturing cluster state. It includes the newer Loki stack, which can be deployed in an HA fashion given there exists an object storage backing it, with Minio as the default option in this case. Users can choose to avoid Minio or an object storage backend and deploy Loki with filesystem storage, as defined here. The new Loki stack is enabled by default with 3 replicas of Loki and 3 replicas of Minio. This behaviour can be disabled by setting the helm chart variables loki.enabled=false and alloy.enabled=false.
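
Both opt-outs above are plain Helm values. A minimal sketch, assuming the umbrella chart is installed as release openebs in the openebs namespace from the openebs/openebs chart repository (adjust names to your installation):

```sh
# Minimal sketch: release name, namespace and repo alias are assumptions.
# The value names are quoted from the notes above.

# Keep the pre-4.3 volume health reporting behaviour:
helm upgrade openebs openebs/openebs -n openebs --reuse-values \
  --set agents.core.volumeHealth=false

# Opt out of the bundled Loki/Alloy (and hence Minio) observability stack:
helm upgrade openebs openebs/openebs -n openebs --reuse-values \
  --set loki.enabled=false \
  --set alloy.enabled=false
```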

Release Notes

Limitations

  • The Mayastor IO engine fully utilizes allocated CPU cores regardless of I/O load, running a poller at full speed.
  • A Mayastor DiskPool is limited to a single block device and cannot span multiple block devices.
  • The new at-rest encryption feature does not support rotating Data Encryption Keys (DEKs).
  • Volume rebuilds are only performed on published volumes.

Known Issues

  • DiskPool Capacity Expansion
    • Mayastor does not support the capacity expansion of DiskPools as of v2.9.0.
  • IO-Engine Pod Restarts
    • Under heavy I/O and during constant scaling up/down of volume replicas, the io-engine pod may restart occasionally.
  • fsfreeze Operation Failure
    • If a pod-based workload is scheduled on a node that reboots and the pod lacks a controller (such as a Deployment or StatefulSet), the volume unpublish operation might not be triggered.
    • This leads the control plane to assume the volume is still published, causing the fsfreeze operation to fail during snapshot creation.
      • Workaround: Recreate or reinstate the pod to ensure proper volume mounting.
  • Diskpool's backing device failure
    • If the backend device that hosts a diskpool runs into a fault or gets removed, e.g. through cloud disk removal, the status of the diskpool and its hosted replicas isn't clearly updated to reflect the problem.
    • As a result, the failures aren't gracefully handled and the volume might remain Degraded for an extended period of time.
  • Extremely large pool undergoing dirty shutdown
    • In case of a dirty shutdown of an io-engine node hosting an extremely large pool, e.g. 10 TiB or 20 TiB, the recovery of the pool hangs after the node comes back online.
  • Extremely large filesystem volumes fail to provision
    • Filesystem volumes with sizes in the terabyte range, e.g. more than 15 TiB, fail to provision successfully because filesystem formatting hangs.

Local Storage (LocalPV ZFS, LocalPV LVM, LocalPV Hostpath)

Fixes and Enhancements

  • LocalPV ZFS Enhancements

    • Introduced a backup garbage collector in the controller to automatically clean up stale or orphaned backup resources.
    • Updated CSI spec and associated sidecar containers to CSI v1.11.
    • Added improved and consistent labeling, including logging-related labels, to enhance Helm chart maintainability and observability.
  • LocalPV ZFS Fixes

    • Fixed an issue where the quota property was not correctly retained during upgrades.
    • Ensured backward compatibility of quotatype values during volume restores.
    • Fixed a crash where unhandled errors in the CSI NodeGetInfo call could cause the controller to exit unexpectedly.
    • The gRPC server now gracefully handles SIGTERM and SIGINT signals for clean exit.
    • The agent now leverages the OpenEBS lib-csi Kubernetes client to reliably load kubeconfig from multiple locations.
    • The CLI flag --plugin now only accepts controller and agent, disallowing invalid values like node.
  • LocalPV LVM Enhancements

    • Added support for formatOptions via storage class. These options are used when formatting the device with the mkfs tool (see the sketch after this list).
    • Excludes Kubernetes cordoned nodes while provisioning volumes.
    • Updated CSI spec to v1.9 and associated sidecar images.
  • LocalPV Hostpath Enhancements

    • Fixed a scenario where a pod crashed when creating an init pod; new pods always failed because the init pod already existed.
    • Added support to specify file permissions for PVC hostpaths.
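
As referenced in the LocalPV LVM item above, a short sketch of a StorageClass carrying the new formatOptions parameter. The provisioner and the storage/volgroup/fsType parameters follow the usual lvm-localpv StorageClass layout; the volume group name and the example mkfs flag are placeholders.

```sh
# Sketch of a LocalPV LVM StorageClass using the new formatOptions parameter.
# "lvmvg" is a placeholder volume group; "-b 4096" (mkfs.ext4 block size) is illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-ext4
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
  fsType: "ext4"
  formatOptions: "-b 4096"
EOF
```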

Release Notes

Limitations

  • LocalPV-LVM
    LVM-localpv has support for volume snapshots, but it doesn't support restoring from a snapshot yet. This is on our roadmap.

Known Issues

  • Controller Pod Restart on Single Node Setup
    After upgrading, single node setups may face issues where the ZFS-localpv/LVM-localpv controller pod does not enter the Running state due to changes in the controller manifest (now a Deployment) and missing affinity rules.

    Workaround: Delete the old controller pod to allow the new pod to be scheduled correctly (see the sketch after this list). This does not happen if upgrading from the previous release of ZFS-localpv/LVM-localpv.

  • Thin pool issue with LocalPV-LVM
    We do not unmap/reclaim thin pool capacity. It is also not tracked in the lvmnode CR, which can cause unexpected behaviour when scheduling volumes. Refer to "When using lvm thinpool type, csistoragecapacities calculation is incorrect" (openebs/lvm-localpv#382).
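
For the single-node controller workaround above, deleting the stuck pod is enough for the new Deployment to reschedule it. A minimal sketch; the openebs namespace is an assumption and <old-controller-pod> stands for whichever pre-upgrade controller pod is not Running:

```sh
# Sketch for the single-node upgrade workaround above; the namespace is an assumption.
kubectl -n openebs get pods | grep -i controller    # find the stuck controller pod
kubectl -n openebs delete pod <old-controller-pod>  # let the new Deployment reschedule it
```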

Upgrade and Backward Incompatibilities

  • Kubernetes Requirement: Kubernetes 1.23 or higher is recommended.
  • Engine Compatibility: Upgrades to OpenEBS 4.3.0 are supported only for the following engines:
    • Local PV Hostpath
    • Local PV LVM
    • Local PV ZFS
    • Mayastor (from earlier editions, 3.10.x or below)

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@soerenschneider soerenschneider self-assigned this Jul 7, 2025
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 8 times, most recently from e54ccfb to 68b2eef on July 14, 2025 04:19
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 10 times, most recently from 1ef9b04 to fef7404 on July 21, 2025 04:19
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 10 times, most recently from f05c571 to 3498f16 on July 28, 2025 04:49
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 4 times, most recently from d01057f to aa2b1af on September 8, 2025 08:28
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 10 times, most recently from 444aba1 to 12af18a on September 19, 2025 04:29
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 2 times, most recently from 059aa85 to 0f02b1a on September 22, 2025 04:25
@soerenschneider soerenschneider changed the title "chore(deps): update helm release openebs to v4.3.3" to "Update Helm release openebs to v4.3.3" on Sep 22, 2025
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 7 times, most recently from 40475b6 to 99f2bb1 on September 29, 2025 04:26
@soerenschneider soerenschneider changed the title "Update Helm release openebs to v4.3.3" to "chore(deps): update helm release openebs to v4.3.3" on Sep 29, 2025
@soerenschneider soerenschneider force-pushed the renovate/openebs-4.x branch 3 times, most recently from f52939d to 5813abb on October 6, 2025 04:25