Commit c5ccc0c

Add upgrade/downgrade tests
1 parent 94bd667 commit c5ccc0c

File tree

1 file changed: +54 -1 lines changed

  • keps/sig-network/1860-kube-proxy-IP-node-binding

keps/sig-network/1860-kube-proxy-IP-node-binding/README.md

Lines changed: 54 additions & 1 deletion
@@ -208,7 +208,60 @@ of a required rollback.

###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?

Because this is a feature that depends on the CCM/LoadBalancer controller, and none of them
implements it yet, the scenario is simulated with the upgrade/downgrade/upgrade path being
the enabling and disabling of the feature flag, plus the corresponding changes on the services' status subresources.

There is a LoadBalancer controller running in the environment (MetalLB) that is responsible for the proper
LB IP allocation and announcement, but the rest of the test is about whether or not kube-proxy programs
the iptables rules based on this enablement/disablement path.
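As a concrete reference, this is roughly what "enabling the feature flag" means in this simulated environment. The sketch below assumes a kubeadm/kind-style cluster and the `LoadBalancerIPMode` feature gate defined by this KEP; exact file locations and config layout may differ per environment:

```yaml
# kube-apiserver (static Pod manifest, e.g. /etc/kubernetes/manifests/kube-apiserver.yaml):
# add the gate to the server command line
#   --feature-gates=LoadBalancerIPMode=true
#
# kube-proxy (kube-system/kube-proxy ConfigMap, key config.conf):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  LoadBalancerIPMode: true
```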

* Initial scenario
  * Started with a v1.29 cluster with the feature flag enabled
  * Created 3 Deployments:
    * web1 - will be using the new feature
    * web2 - will NOT be using the new feature
    * client - "the client"
  * Created the load balancers for the two web services. By default, both LBs get the default `VIP` value:
    ```yaml
    status:
      loadBalancer:
        ingress:
        - ip: 172.18.255.200
          ipMode: VIP
    ```
  * With the feature flag enabled but no change on the service resources, tested and both
    web deployments were accessible
  * Verified that the iptables rules for both LBs exist on all nodes
* Testing the feature ("upgrade")
  * Changed the `ipMode` of the first LoadBalancer to `Proxy` (see the status sketch after this list)
  * Verified that the iptables rule for the second LB still exists, while the first one no longer does
  * Because the LoadBalancer implementation of the first service (MetalLB) is not aware of this new feature, the
    service is not accessible anymore from the client Pod
  * The second service, whose `ipMode` is `VIP`, is still accessible from the Pods
* Disabling the feature flag ("downgrade")
  * Edited the kube-apiserver manifest and disabled the feature flag
  * Edited the kube-proxy ConfigMap, disabled the feature, and restarted the kube-proxy Pods
  * Confirmed that both iptables rules are present, even though the `ipMode` field was still
    set to `Proxy`, confirming the feature is disabled. Both accesses are working
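As referenced in the "upgrade" step above, this is a sketch of the status subresource after the change on the first service. The field names come from this KEP and the IP is the example address shown earlier; how the status is written (manually for the simulation, or by a controller) is environment-specific:

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 172.18.255.200
      ipMode: Proxy
```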

Additionally, an apiserver and kube-proxy upgrade test was executed as follows:
* Created a KinD cluster with v1.28
* Created the same deployments and services as above
* Both load balancers are accessible
* Upgraded the apiserver and kube-proxy to v1.29, and enabled the feature flag
* Set `ipMode` to `Proxy` on one of the services and executed the same tests as above
  * Observed the expected behavior of the iptables rule for the changed service
    not being created
  * Observed that the changed service was not accessible anymore, as
    expected
* Disabled the feature flag
* Rolled back kube-apiserver and kube-proxy to v1.28
* Verified that both services are working correctly on v1.28
* Upgraded again to v1.29, keeping the feature flag disabled
  * Both load balancers worked as expected, and the field is still present on
    the changed service (see the sketch below).
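A sketch of the end state described in the last step above: even with the cluster back on v1.29 and the feature gate kept disabled, the `ipMode` value written during the test remains stored on the changed Service, but kube-proxy does not act on it while the gate is off (IP reused from the example above):

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 172.18.255.200
      ipMode: Proxy  # still persisted, but ignored while the feature gate is disabled
```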

###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
