Releases: simplyblock/sbcli

26.1.2

31 Mar 15:13

What's Changed

Full Changelog: 26.1.1...26.1.2

26.1.1

22 Mar 19:50

R25.10-Hotfix: pin 26.1.1 and fix clone path regression.

25.10.5

28 Feb 11:11

What's Changed

Full Changelog: 25.10.3...25.10.5

25.10.4.2

19 Jan 13:12

25.10.4.1

16 Jan 17:21

What's Changed

Full Changelog: 25.10.3...25.10.4.1

25.10.4

25 Nov 12:37

What's Changed

Full Changelog: 25.10.3...25.10.4

25.10.3

13 Nov 18:27

What's Changed

  • 🐛 Optimise storage node monitor
  • 🐛 Fix FDB value exceeding the size limit
  • 🐛 Other minor bug fixes; see more in the Full Changelog

Full Changelog: 25.10.2...25.10.3

25.10.2

05 Nov 11:21

New Features

  • Control Plane: Can alternatively be deployed into existing Kubernetes clusters and co-located on workers with storage nodes.
  • Kubernetes Support Matrix: Added OpenShift starting from version XX.XX.
  • OpenStack Driver: Now available. Supports most optional features and tested from OpenStack 25.1 (Epoxy). (Older OpenStack versions may be supported on request.)
  • Lower Memory Footprint: Required memory on storage nodes reduced from 0.2% of storage capacity to 0.05%.
  • QoS (Pool-level): Added pool-level QoS controls.
  • QoS Service Classes: Assign a service class to a volume; service classes provide full performance isolation within the cluster.
  • Flexible Erasure Coding: Support for flexible erasure-coding schemas within a cluster.
  • Fabrics: Support for RDMA fabric and mixed fabrics (RDMA, TCP).
  • Write Performance: Improvements during first write to volume and during node outage.
  • Namespace Volumes: A single NVMe-oF subsystem can now expose up to 32 namespace volumes.

Fixes

  • Control Plane: Fixed an issue that could lead to stuck deletes.

Upgrade Considerations

  • Upgrades are supported from 25.7.6 and 25.7.7.
  • It’s possible to add RDMA support to the fabric during an online upgrade.

Known Issues

  • Using different erasure-coding schemas per cluster is available but experimental (not GA) and, in some tests, can cause I/O interrupt issues.

25.7.7

17 Sep 11:34

What's Changed

  • 🐛 Bug fix: QoS settings between lvol and pool must be consistent and must not accept negative values
  • 🐛 Bug fix: On bare metal, node auto-restart was not triggered after a container crash, yet the node was marked online
  • 🐛 Bug fix: Crypto LVOL delete: delete the crypto layer first, then the LVOL
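One plausible reading of the lvol/pool QoS consistency rule can be sketched as a validation step. The function and parameter names below are hypothetical and do not reflect sbcli's actual API; this only illustrates the constraint described in the fix.

```python
# Hypothetical sketch of the QoS consistency rule from the fix:
# limits must be non-negative, and a volume (lvol) limit must not
# exceed the limit of the pool it belongs to. Names are illustrative.
def validate_qos(lvol_iops_limit: int, pool_iops_limit: int) -> None:
    """Reject negative limits and lvol limits exceeding the pool limit."""
    if lvol_iops_limit < 0 or pool_iops_limit < 0:
        raise ValueError("QoS limits must not be negative")
    if pool_iops_limit and lvol_iops_limit > pool_iops_limit:
        raise ValueError("lvol QoS limit must not exceed the pool limit")
```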

Full Changelog: 25.7.6...25.7.7

25.7.6

09 Sep 12:22

What's Changed

  • SFAM-2295: Fix _connect_device invocation from port allow service.
  • SFAM-2292: check storage network interface ping on node auto restart
  • Increase node restart task retry count to 80
  • SFAM-2308: FIO interrupt with IO error on spdk container crash failover
  • Revert node restart task retry count to 8 and start the count from successful checks so node restart can start
  • SFAM-2179: Add snodeapi logs to graylog
  • fix firewall back port
  • Stop health check auto fix
  • SFAM-2309: Change health check service auto fix for problems in distrib cluster map
  • SFAM-2310: fix base lvol ref to be found before deleting it
  • SFAM-2311: Create snapshot monitor service on cluster update if not found in service ls
  • Use expansion migration instead of temp migrations when a node restarts, recovers from a network outage, or goes from down to online
  • Use "physical_label" in cluster map only if we have multi node per host
  • Skip nsenter command for Talos
  • SFAM-2179: Apply container logging config to be gelf on node add
  • change sn suspend function to print a msg to use shutdown
  • Skip generating automated deployment if the SPDK pod already exists

Full Changelog: 25.7.5...25.7.6