What's New in |release|
=======================

Apache CloudStack |release| is the initial 4.19 LTS release, with over 300 fixes
and features since the 4.18.1.0 release.

The full list of fixes and improvements can be found in the project release notes at
https://docs.cloudstack.apache.org/en/4.19.0.0/releasenotes/changes.html

What's New in 4.19.0.0
=======================

Apache CloudStack 4.19.0.0 is the initial 4.19 LTS release with 300+ new
features, improvements and bug fixes since 4.18, including 26 major
new features. Some of the highlights include:

• CloudStack Object Storage Feature
• VMware to KVM Migration
• KVM Import
• CloudStack DRS
• OAuth2 Authentication
• VNF Appliances Support
• CloudStack Snapshot Copy
• Scheduled Instance Lifecycle Operations
• Guest OS Management
• Pure Flash Array and HPE-Primera Support
• User-specified source NAT
• Storage Browser
• Safe CloudStack Shutdown
• New CloudStack Dashboard
• Domain migration
• Flexible tags for hosts and storage pools
• Support for Userdata in Autoscale Groups
• KVM Host HA for StorPool storage
• Dynamic secondary storage selection
• Domain VPCs
• Global ACL for VPCs

The full list of new features can be found in the project release notes at
https://docs.cloudstack.apache.org/en/4.19.0.0/releasenotes/changes.html

.. _guestosids:

Possible Issue with volume snapshot revert with KVM
===================================================

In versions 4.17.x, 4.18.0 and 4.18.1, KVM volume snapshot backups were not
full snapshots; they relied on the primary storage as a backing store. To
prevent any loss of data, care must be taken during the revert operation: it
must be ensured that the source primary storage snapshot file is present if
the snapshot was created with any of these CloudStack versions.

Users will have a backing store in their volume snapshots in the following case:

- the snapshots are from a ROOT volume created from a template.

Users will not have a backing store in their volume snapshots in the following cases:

- the snapshots are from ROOT volumes created from an ISO;
- the snapshots are from DATADISK volumes.

Below are two queries to help users identify snapshots with a backing store.

The first identifies snapshots that have not yet been removed and were
created from a volume that was itself created from a template:

.. parsed-literal::

   SELECT s.uuid AS "Snapshot ID",
          s.name AS "Snapshot Name",
          s.created AS "Snapshot creation datetime",
          img_s.uuid AS "Sec Storage ID",
          img_s.name AS "Sec Storage Name",
          ssr.install_path AS "Snapshot path on Sec Storage",
          v.uuid AS "Volume ID",
          v.name AS "Volume Name"
   FROM cloud.snapshots s
   INNER JOIN cloud.volumes v ON (v.id = s.volume_id)
   INNER JOIN cloud.snapshot_store_ref ssr ON (ssr.snapshot_id = s.id
       AND ssr.store_role = 'Image')
   INNER JOIN cloud.image_store img_s ON (img_s.id = ssr.store_id)
   WHERE s.removed IS NULL
     AND v.template_id IS NOT NULL;

With that, one can use ``qemu-img info`` on each snapshot file to check
whether it has a backing store.

For snapshots that do have a backing store, the following query shows which
template it is and in which storage pool it resides:

.. parsed-literal::

   SELECT vt.uuid AS "Template ID",
          vt.name AS "Template Name",
          tsr.install_path AS "Template file on Pri Storage",
          sp.uuid AS "Pri Storage ID",
          sp.name AS "Pri Storage Name",
          sp.`path` AS "Pri Storage Path",
          sp.pool_type AS "Pri Storage type"
   FROM cloud.template_spool_ref tsr
   INNER JOIN cloud.storage_pool sp ON (sp.id = tsr.pool_id AND sp.removed IS NULL)
   INNER JOIN cloud.vm_template vt ON (vt.id = tsr.template_id)
   WHERE tsr.install_path = "<template file in the snapshot backing store>";

After identifying the snapshots with a backing store and the related
templates, one can mount the secondary storage on a host that has access to
the template and use ``qemu-img convert`` on the snapshot to consolidate it:

.. parsed-literal::

   qemu-img convert -O qcow2 -U --image-opts driver=qcow2,file.filename=<path to snapshot on secondary storage> <path to snapshot on secondary storage>-converted