[id="virt-release-notes"]
= {RN_BookName}
include::modules/virt-document-attributes.adoc[]
:context: virt-release-notes
toc::[]

== About {VirtProductName} {VirtVersion}

include::modules/virt-what-you-can-do-with-virt.adoc[leveloffset=+2]

=== {VirtProductName} support

:FeatureName: {VirtProductName}
include::modules/technology-preview.adoc[leveloffset=+2]

== New and changed features

* Managing virtual machines is simpler and more efficient due to improvements
in design and workflow. You can now:
** Run the virtual machine wizard with less navigation. The wizard now uses
a comprehensive in-page style and includes a review page for confirming
configuration details before submission.
** Import a single VMware virtual machine with less navigation.
** Edit virtual machine templates as well as virtual machine configurations.
** Monitor the health of virtual machine-backed services as you would for
Pod-based services.
** Enable persistent local storage for virtual machine images.
** Add, edit, and view virtual CD-ROM devices attached to a virtual machine.
** Add and view network attachment definitions with a graphical editor.

== Resolved issues

* Previously, when you added a disk to a virtual machine by using the *Disks* tab in
the web console, the added disk had a `volumeMode` of `Filesystem` regardless of the
`volumeMode` value set in the `kubevirt-storage-class-default` ConfigMap. This issue has been fixed.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1753688[*BZ#1753688*])

* Previously, when you navigated to the *Virtual Machines Console* tab,
the console content was sometimes not displayed. This issue has been fixed.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1753606[*BZ#1753606*])

* Previously, attempting to list all instances of the {VirtProductName} operator
from a browser resulted in a 404 (page not found) error.
This issue has been fixed. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1757526[*BZ#1757526*])

* Previously, if a virtual machine used guaranteed CPUs, it was not scheduled
because the label `cpumanager=true` was not automatically set on nodes.
This issue has been fixed. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1718944[*BZ#1718944*])

== Known issues

* If you have {VirtProductName} 2.1.0 deployed, you must first upgrade {VirtProductName}
to 2.2.0 before upgrading {product-title}. Upgrading {product-title} before upgrading
{VirtProductName} might trigger virtual machine deletion.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1785661[*BZ#1785661*])

// Don't remove: this BZ is probably true for all 2.x releases
* The `masquerade` binding method for virtual machines cannot be used in clusters with RHEL 7 compute nodes.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1741626[*BZ#1741626*])

* After migration, a virtual machine is assigned a new IP address. However, the
commands `oc get vmi` and `oc describe vmi` still generate output containing the
obsolete IP address. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1686208[*BZ#1686208*])
+
** As a workaround, view the correct IP address by running the following command:
+
----
$ oc get pod -o wide
----

* Some resources are improperly retained when removing {VirtProductName}. You
must manually remove these resources in order to reinstall {VirtProductName}.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1712429[*BZ#1712429*])

* Users without administrator privileges cannot add a network interface
to a project in an L2 network by using the virtual machine wizard.
This issue is caused by missing permissions that allow users to load
network attachment definitions.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1743985[*BZ#1743985*])
+
** As a workaround, provide the user with permissions to load the network attachment
definitions.
+
. Define `ClusterRole` and `ClusterRoleBinding` objects in a YAML configuration
file, using the following examples:
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cni-resources
rules:
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["*"]
  verbs: ["*"]
----
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <role-binding-name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cni-resources
subjects:
- kind: User
  name: <user to grant the role to>
  namespace: <namespace of the user>
----
+
. As a `cluster-admin` user, run the following command to create the `ClusterRole`
and `ClusterRoleBinding` objects you defined:
+
----
$ oc create -f <filename>.yaml
----
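+
. Optional: Verify that the user can now read network attachment definitions. This check is shown for illustration only; substitute a user name and namespace from your environment:
+
----
$ oc auth can-i get network-attachment-definitions.k8s.cni.cncf.io --as=<user> -n <namespace>
----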

* Live migration fails when nodes have different CPU models. Even in cases where
nodes have the same physical CPU model, differences introduced by microcode
updates have the same effect. This is because the default settings trigger
host CPU passthrough behavior, which is incompatible with live migration.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1760028[*BZ#1760028*])
+
** As a workaround, set the default CPU model in the `kubevirt-config` ConfigMap,
as shown in the following example:
+
[NOTE]
====
You must make this change before starting the virtual machines that support
live migration.
====
+
. Open the `kubevirt-config` ConfigMap for editing by running the following command:
+
----
$ oc edit configmap kubevirt-config -n openshift-cnv
----
+
. Edit the ConfigMap:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
data:
  default-cpu-model: "<cpu-model>" <1>
----
<1> Replace `<cpu-model>` with the actual CPU model value. You can determine this
value by running `oc describe node <node>` for all nodes and looking at the
`cpu-model-<name>` labels. Select the CPU model that is present on all of your
nodes.
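+
For example, the following command lists the CPU model labels reported on a node. It is shown for illustration only; the label names vary with the node hardware:
+
----
$ oc describe node <node> | grep cpu-model-
----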

* When running `virtctl image-upload` to upload large VM disk images in `qcow2`
format, an end-of-file (EOF) error might be reported after the data is
transmitted, even though the upload is either progressing normally or has completed.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1789093[*BZ#1789093*])
+
Run the following command to check the status of an upload on a given PVC:
+
----
$ oc describe pvc <pvc-name> | grep cdi.kubevirt.io/storage.pod.phase
----
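+
For reference, an upload command has the following general form. The flags shown here are an assumption and can vary between `virtctl` versions, so confirm them with `virtctl image-upload --help`:
+
----
$ virtctl image-upload --pvc-name=<pvc-name> --pvc-size=<pvc-size> --image-path=</path/to/image.qcow2> --insecure
----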

* When attempting to create and launch a virtual machine using a Haswell CPU,
the launch of the virtual machine can fail due to incorrectly labeled nodes.
This is a change in behavior from previous versions of {VirtProductName},
where virtual machines could be successfully launched on Haswell hosts.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1781497[*BZ#1781497*])
+
As a workaround, select a different CPU model, if possible.
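+
For example, the following virtual machine configuration fragment explicitly requests a CPU model instead of relying on the default. This is a sketch only; replace `<supported-cpu-model>` with a model that is labeled on your nodes:
+
[source,yaml]
----
spec:
  template:
    spec:
      domain:
        cpu:
          model: <supported-cpu-model>  # must match a CPU model available on your nodes
----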

* If you select a directory for the hostpath provisioner that shares space with your
operating system, you can potentially exhaust the space on the partition and cause
the node to become non-functional. Instead, create a separate partition
and point the hostpath provisioner to that partition so that it does not
interfere with your operating system.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1793132[*BZ#1793132*])
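+
For example, if a dedicated partition is mounted at `/var/hpvolumes`, you can point the hostpath provisioner at that path in its custom resource. This is a sketch; the exact API version and field names depend on the installed hostpath provisioner version:
+
[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "/var/hpvolumes"    # backing directory on the dedicated partition
    useNamingPrefix: "false"
----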

* The {VirtProductName} upgrade process occasionally fails due to an interruption
from the Operator Lifecycle Manager (OLM). This issue is caused by the limitations
associated with using a declarative API to track the state of {VirtProductName}
Operators. Enabling automatic updates during
xref:install/installing-virt.adoc#virt-subscribing-to-the-catalog_installing-virt[installation]
decreases the risk of encountering this issue.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1759612[*BZ#1759612*])
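+
For illustration, automatic updates are enabled by setting `installPlanApproval: Automatic` in the Operator subscription. The object and channel names below are assumptions based on a typical installation; verify them against your cluster:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: "2.2"
  installPlanApproval: Automatic  # enables automatic updates
----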

* {VirtProductName} cannot reliably identify node drains that are triggered by
running either `oc adm drain` or `kubectl drain`. Do not run these commands on
the nodes of any clusters where {VirtProductName} is deployed. The nodes might not
drain if there are virtual machines running on them.
The current workaround is to place nodes into maintenance mode.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1707427[*BZ#1707427*])
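+
For example, you can place a node into maintenance mode by creating a `NodeMaintenance` custom resource similar to the following sketch. The API version shown is an assumption and might differ in your installed release:
+
[source,yaml]
----
apiVersion: nodemaintenance.kubevirt.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-example
spec:
  nodeName: <node-name>    # node to cordon and drain
  reason: "Maintenance for upgrade"
----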

* If you navigate to the *Subscription* tab on the *Operators* -> *Installed Operators*
page and click the current upgrade channel to edit it, there might be no visible result.
No error is displayed if this occurs.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1796410[*BZ#1796410*])
+
** As a workaround, trigger the upgrade process to {VirtProductName} {VirtVersion}
from the CLI by running the following `oc patch` command:
+
----
$ export TARGET_NAMESPACE=openshift-cnv HCO_CHANNEL="2.2"
$ oc patch -n "${TARGET_NAMESPACE}" $(oc get subscription -n ${TARGET_NAMESPACE} --no-headers -o name) --type='json' -p='[{"op": "replace", "path": "/spec/channel", "value": "'"${HCO_CHANNEL}"'"}, {"op": "replace", "path": "/spec/installPlanApproval", "value": "Automatic"}]'
----
+
This command points your subscription to upgrade channel `2.2` and enables automatic updates.