---
layout: blog
title: "Kubernetes v1.36: Moving Volume Group Snapshots to GA"
date: 2026-04-22T10:30:00-08:00
draft: true
slug: kubernetes-v1-36-volume-group-snapshot-ga
author: >
  Xing Yang (VMware by Broadcom)
---

Volume group snapshots were [introduced](/blog/2023/05/08/kubernetes-1-27-volume-group-snapshot-alpha/) as an Alpha feature with the Kubernetes v1.27 release, moved to [Beta](/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/) in v1.32, and to a [second Beta](/blog/2025/09/16/kubernetes-v1-34-volume-group-snapshot-beta-2/) in v1.34. We are excited to announce that in the Kubernetes v1.36 release, support for volume group snapshots has reached **General Availability (GA)**.

The support for volume group snapshots relies on a set of [extension APIs for group snapshots](https://kubernetes-csi.github.io/docs/group-snapshot-restore-feature.html#volume-group-snapshot-apis). These APIs allow users to take crash-consistent snapshots of a set of volumes. Behind the scenes, Kubernetes uses a label selector to group multiple `PersistentVolumeClaim` objects for snapshotting. A key aim is to allow you to restore that set of snapshots to new volumes and recover your workload from a crash-consistent recovery point.

This feature is only supported for [CSI](https://kubernetes-csi.github.io/docs/) volume drivers.

## An overview of volume group snapshots

Some storage systems provide the ability to create a crash-consistent snapshot of multiple volumes. A group snapshot represents _copies_ made from multiple volumes that are taken at the same point-in-time. A group snapshot can be used either to rehydrate new volumes (pre-populated with the snapshot data) or to restore existing volumes to a previous state (represented by the snapshots).

### Why add volume group snapshots to Kubernetes?

The Kubernetes volume plugin system already provides a powerful abstraction that automates the provisioning, attaching, mounting, resizing, and snapshotting of block and file storage. Underpinning all these features is the Kubernetes goal of workload portability.

There was already a [VolumeSnapshot](/docs/concepts/storage/volume-snapshots/) API that provides the ability to take a snapshot of a persistent volume to protect against data loss or data corruption. However, some storage systems support consistent group snapshots that allow a snapshot to be taken of multiple volumes at the same point-in-time to achieve write-order consistency. This is extremely useful for applications that contain multiple volumes. For example, an application may have data stored in one volume and logs stored in another. If snapshots for these volumes are taken at different times, the application will not be consistent and will not function properly if restored from those snapshots.

While you can quiesce the application first and take individual snapshots sequentially, this process can be time-consuming or sometimes impossible. Consistent group snapshot support provides crash consistency across all volumes in the group without requiring application quiescence.

### Kubernetes APIs for volume group snapshots

Kubernetes' support for volume group snapshots relies on three API kinds that are used for managing snapshots:

VolumeGroupSnapshot
: Created by a Kubernetes user (or automation) to request creation of a volume group snapshot for multiple persistent volume claims.

VolumeGroupSnapshotContent
: Created by the snapshot controller for a dynamically created VolumeGroupSnapshot. It contains information about the provisioned cluster resource (a group snapshot). The object binds to the VolumeGroupSnapshot for which it was created with a one-to-one mapping.

VolumeGroupSnapshotClass
: Created by cluster administrators to describe how volume group snapshots should be created, including the driver information, the deletion policy, etc.

These three API kinds are defined as CustomResourceDefinitions (CRDs). For the GA release, the API version has been promoted to `v1`.
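
If the CRDs from the external-snapshotter project are installed in your cluster, you can check which API versions are served. The CRD name below is the standard one; the exact output depends on your installation:

```console
% kubectl get crd volumegroupsnapshots.groupsnapshot.storage.k8s.io \
    -o jsonpath='{.spec.versions[*].name}'
```

With the GA release of the CRDs, the list of served versions should include `v1`.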

## What's new in GA?

* The API version for `VolumeGroupSnapshot`, `VolumeGroupSnapshotContent`, and `VolumeGroupSnapshotClass` is promoted to `groupsnapshot.storage.k8s.io/v1`.
* Enhanced stability and bug fixes based on feedback from the beta releases, including the improvements introduced in v1beta2 for accurate `restoreSize` reporting.
## How do I use Kubernetes volume group snapshots?

### Creating a new group snapshot with Kubernetes

Once a `VolumeGroupSnapshotClass` object is defined and you have volumes you want to snapshot together, you may request a new group snapshot by creating a `VolumeGroupSnapshot` object.

Label the PVCs you wish to group:

```console
% kubectl label pvc pvc-0 group=myGroup
persistentvolumeclaim/pvc-0 labeled

% kubectl label pvc pvc-1 group=myGroup
persistentvolumeclaim/pvc-1 labeled
```
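
Before creating the group snapshot, you can verify that the label selector matches the intended claims (the PVC names follow the labeling example above):

```console
% kubectl get pvc -l group=myGroup
```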

For dynamic provisioning, a selector must be set so that the snapshot controller can find PVCs with the matching labels to be snapshotted together.

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshot
metadata:
  name: snapshot-daily-20260422
  namespace: demo-namespace
spec:
  volumeGroupSnapshotClassName: csi-groupSnapclass
  source:
    selector:
      matchLabels:
        group: myGroup
```

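After the `VolumeGroupSnapshot` object is created, the snapshot controller takes the group snapshot and reports readiness in the object's status. As a sketch, you can wait on the `status.readyToUse` field, which follows the same convention as individual `VolumeSnapshot` objects:

```console
% kubectl wait --for=jsonpath='{.status.readyToUse}'=true \
    volumegroupsnapshot/snapshot-daily-20260422 -n demo-namespace --timeout=5m
```
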
The `VolumeGroupSnapshotClass` is required for dynamic provisioning:

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshotClass
metadata:
  name: csi-groupSnapclass
driver: example.csi.k8s.io
deletionPolicy: Delete
```
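
The example above uses dynamic provisioning. A group snapshot that already exists on the storage system can instead be registered by pre-creating a `VolumeGroupSnapshotContent` and pointing a `VolumeGroupSnapshot` at it through `source.volumeGroupSnapshotContentName`. The manifest below is only a sketch: the handles are placeholders for values reported by your storage system, and the driver name would be that of your CSI driver.

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1
kind: VolumeGroupSnapshotContent
metadata:
  name: pre-provisioned-group-snapshot-content
spec:
  deletionPolicy: Retain
  driver: example.csi.k8s.io
  source:
    groupSnapshotHandles:
      volumeGroupSnapshotHandle: group-snapshot-handle-on-storage-system
      volumeSnapshotHandles:
        - snapshot-handle-0
        - snapshot-handle-1
  volumeGroupSnapshotRef:
    name: pre-provisioned-group-snapshot
    namespace: demo-namespace
```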

### How to use group snapshot for restore

At restore time, request a new `PersistentVolumeClaim` to be created from a `VolumeSnapshot` object that is part of a `VolumeGroupSnapshot`. Repeat this for all volumes that are part of the group snapshot.

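The individual `VolumeSnapshot` objects for each PVC in the group are created automatically by the snapshot controller, with generated names similar to the one in the manifest below. A simple way to find them is to list the snapshots in the namespace:

```console
% kubectl get volumesnapshot -n demo-namespace
```
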
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: examplepvc-restored-2026-04-22
  namespace: demo-namespace
spec:
  storageClassName: example-sc
  dataSource:
    name: snapshot-0962a745b2bf930bb385b7b50c9b08af471f1a16780726de19429dd9c94eaca0
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 100Mi
```

## As a storage vendor, how do I add support for group snapshots?

To implement the volume group snapshot feature, a CSI driver **must**:

* Implement a new group controller service.
* Implement group controller RPCs: `CreateVolumeGroupSnapshot`, `DeleteVolumeGroupSnapshot`, and `GetVolumeGroupSnapshot`.
* Add group controller capability `CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT`.

See the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) and the [Kubernetes-CSI Driver Developer Guide](https://kubernetes-csi.github.io/docs/) for more details.

## How can I learn more?

- The [design spec](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3476-volume-group-snapshot) for the volume group snapshot feature.
- The [code repository](https://github.com/kubernetes-csi/external-snapshotter) for volume group snapshot APIs and controller.
- CSI [documentation](https://kubernetes-csi.github.io/docs/) on the group snapshot feature.

## How do I get involved?

This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. On behalf of SIG Storage, I would like to offer a huge thank you to all the contributors who stepped up over the years to help the project reach GA:

* Ben Swartzlander ([bswartz](https://github.com/bswartz))
* Cici Huang ([cici37](https://github.com/cici37))
* Darshan Murthy ([darshansreenivas](https://github.com/darshansreenivas))
* Hemant Kumar ([gnufied](https://github.com/gnufied))
* James Defelice ([jdef](https://github.com/jdef))
* Jan Šafránek ([jsafrane](https://github.com/jsafrane))
* Madhu Rajanna ([Madhu-1](https://github.com/Madhu-1))
* Manish M Yathnalli ([manishym](https://github.com/manishym))
* Michelle Au ([msau42](https://github.com/msau42))
* Niels de Vos ([nixpanic](https://github.com/nixpanic))
* Leonardo Cecchi ([leonardoce](https://github.com/leonardoce))
* Rakshith R ([Rakshith-R](https://github.com/Rakshith-R))
* Raunak Shah ([RaunakShah](https://github.com/RaunakShah))
* Saad Ali ([saad-ali](https://github.com/saad-ali))
* Wei Duan ([duanwei33](https://github.com/duanwei33))
* Xing Yang ([xing-yang](https://github.com/xing-yang))
* Yati Padia ([yati1998](https://github.com/yati1998))

For those interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). We always welcome new contributors.

We also hold regular [Data Protection Working Group meetings](https://github.com/kubernetes/community/tree/master/wg-data-protection). New attendees are welcome to join our discussions.