✨ Enable setting QoSPolicy on ports #2800
Conversation
✅ Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.

Hi @nikParasyr. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 4ca468e to c2eba8d (Compare)
/ok-to-test
Force-pushed from c2eba8d to 1e0eee1 (Compare)
```go
	builder = portSecurityOpts
}

if portSpec.QoSPolicyID != nil {
```
This only works in the Create scenario; there is no handling for Update cases.
There is already an existing ensurePortTagsAndTrunk function that handles the update scenario, but it takes care of just the port tags and trunk. I think we should modify this to handle the update case for QoS as well: rename it to something like ensurePortProperties and add QoS there.
Updated to also handle the Update scenario, and added a unit test for that use case.
Question: do I also need to update the webhooks to allow mutations for qosPolicy? And if so, are there some pointers for that?
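For illustration, a minimal sketch of the update path discussed above, under assumed names (the `portsClient` interface, `PortSpec`, and `Port` types here are hypothetical stand-ins, not CAPO's actual code):

```go
package ports

import "fmt"

// Hypothetical stand-ins for the real spec and observed-port types.
type Port struct {
	ID          string
	QoSPolicyID string
}

type PortSpec struct {
	QoSPolicyID *string
}

// portsClient abstracts whichever client is used to update ports.
type portsClient interface {
	UpdatePortQoSPolicy(portID, qosPolicyID string) error
}

// ensurePortProperties reconciles mutable port attributes; the tag and
// trunk handling from ensurePortTagsAndTrunk would live here as well.
// It issues an update only when the observed QoS policy differs from
// the spec, so repeated reconciles are no-ops once the port converges.
func ensurePortProperties(client portsClient, spec PortSpec, port Port) error {
	if spec.QoSPolicyID != nil && *spec.QoSPolicyID != port.QoSPolicyID {
		if err := client.UpdatePortQoSPolicy(port.ID, *spec.QoSPolicyID); err != nil {
			return fmt.Errorf("updating QoS policy on port %s: %w", port.ID, err)
		}
	}
	return nil
}
```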
api/v1beta1/types.go (Outdated)
```go
// QoSPolicyID is the ID of the qos policy the port will use.
// +optional
QoSPolicyID *string `json:"qosPolicyID"`
```
omitempty missing?
Indeed, fixed.
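For context, the fixed field would look like the snippet below (the surrounding `PortOpts` struct name is assumed for illustration). With a pointer type and `+optional`, `omitempty` keeps a nil value out of the serialized object:

```go
type PortOpts struct {
	// QoSPolicyID is the ID of the qos policy the port will use.
	// +optional
	QoSPolicyID *string `json:"qosPolicyID,omitempty"`
}
```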
nikParasyr left a comment
@bnallapeta thanks for the review.
Your suggestions were implemented; I have a minor question on the update comment.
Moreover, I updated CI and e2e to cover this case (I decided to add it to the existing multi-network scenario, since that allows testing various ways of setting a QoS policy and I didn't want to make another scenario just for this).
Force-pushed from 6681eff to c89012d (Compare)
Awesome! Thanks for this. I have replied on that comment. Looks good overall.

/lgtm

/test pull-cluster-api-provider-openstack-e2e-test
Force-pushed from c89012d to 05a76a1 (Compare)
New changes are detected. LGTM label has been removed.

The e2e tests fail on the multi-network one. Looking into it, but it might take a while.
Force-pushed from 05a76a1 to b2f2c39 (Compare)
@bnallapeta / @lentzi90 can I get some help on where to look/troubleshoot? The first control-plane node doesn't come up. What I have checked this far:

It sort of feels to me that this might be k-orc for some reason not creating the server, but I'm not sure whether I'm correct in this, or why it wouldn't. (Do we need downstream support for QoS in k-orc, maybe? While working on this it didn't feel like that, but maybe I'm wrong.) Any insights/help would be much appreciated.
@nikParasyr curious: why did you comment out the qos here? I think the first time the e2e ran on this PR, these were enabled, yes? And still the same failure? Also, instead of using shared, how about we give the same policy to all ports and see if that works. I'm thinking that if there are multiple policies and, for some reason, policy resolution becomes an issue, the port won't come up, and that causes the whole thing to fail. Perhaps you can test this locally. I will try to find some time tomorrow to look into this deeply.
Enable users to set QoSPolicyID on ports. This can be done by either defining the ID or a filter to query the QoS policy.
Ensure that the qos extension is enabled on the OpenStack deployment before creating ports with a QoS policy set.
Allow the QoSPolicyID of a port to be updated when it differs from the spec.
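As a rough illustration of the ID-or-filter behavior, a resolution helper could look like the sketch below. This is not the PR's actual code: `resolveQoSPolicy` and the name-only filter are simplifications, and the gophercloud calls are used as I understand the v2 `qos/policies` package:

```go
package qos

import (
	"context"
	"fmt"

	"github.com/gophercloud/gophercloud/v2"
	"github.com/gophercloud/gophercloud/v2/openstack/networking/v2/extensions/qos/policies"
)

// resolveQoSPolicy returns the explicit ID when set; otherwise it looks
// the policy up by name and requires exactly one match, so ambiguous
// filters fail loudly instead of picking an arbitrary policy.
func resolveQoSPolicy(ctx context.Context, client *gophercloud.ServiceClient, id *string, name string) (string, error) {
	if id != nil {
		return *id, nil
	}
	allPages, err := policies.List(client, policies.ListOpts{Name: name}).AllPages(ctx)
	if err != nil {
		return "", fmt.Errorf("listing QoS policies: %w", err)
	}
	found, err := policies.ExtractPolicies(allPages)
	if err != nil {
		return "", err
	}
	if len(found) != 1 {
		return "", fmt.Errorf("expected exactly one QoS policy named %q, found %d", name, len(found))
	}
	return found[0].ID, nil
}
```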
Force-pushed from b2f2c39 to b792dff (Compare)
The missing logs from ORC (and also very minimal ones from CAPO) are strange and disturbing. Maybe it is because the test gets interrupted and fails to clean up properly? Regarding ORC, we are actually not using it for the OpenStackServers yet 🙁 For progressing here though, I think we need to figure out why we are not getting proper logs from CAPO and fix that. Then it will be clear why the server is not created. Maybe try focusing on only the multi-network test? It will be easier (and faster) to see what is going on.
```go
"github.com/go-logr/logr/testr"
"github.com/gophercloud/gophercloud/v2/openstack/networking/v2/extensions/qos/policies"
. "github.com/onsi/gomega" //nolint:revive
```
Is the nolint really needed? We should be covered by the config here:

cluster-api-provider-openstack/.golangci.yml (lines 163 to 166 in ee4be02):

```yaml
- linters:
    - revive
    - staticcheck
  path: (test)/.*.go
```

Or, if we are not covered by that, we should update the config to cover this case.
done
This is on one of my later runs, to see if I still get the same problems with one port with a qosPolicy instead of all three. I have made CI runs either defining the same qosPolicy.id on all three ports or with the uncommented lines; they all fail the same way. The history of runs is here: https://prow.k8s.io/pr-history/?org=kubernetes-sigs&repo=cluster-api-provider-openstack&pr=2800

Initially e2e tests were not added; I added them after tackling your input (validating that the qos extension is enabled, etc.). Since then they consistently fail.

I am passing the same policy on all three ports, just in different ways (to test that the filter works as expected). In openstackServer.Status.Resolved I can see that all three resolve properly to the same qosPolicyID.

I have noticed in general that ORC doesn't create many logs (we had servers fail due to quotas multiple times and the ORC logs are silent, probably because, as you mention below, k-orc is still not used by CAPO for servers...).

Ok, this is good to know. If this is going to happen, it's probably better to wait for it and implement it once in k-orc (and the APIs etc. here), instead of doing it now in CAPO, then in k-orc, and patching CAPO again. From my side this doesn't have the highest priority at the moment and can wait a bit for the migration to occur. I'll check the related ticket and k-orc development a bit more in depth (maybe I can start implementing QoS there and we meet in the middle :P).

So this PR's last commit has multiple runs (because I was amending): https://prow.k8s.io/pr-history/?org=kubernetes-sigs&repo=cluster-api-provider-openstack&pr=2800. All of them fail on the multi-network scenario (which is the only e2e that I "touched"). The CAPO logs there are consistently empty (other than the first 50 lines of starting controllers etc.), unless I'm checking in the wrong place.
Force-pushed from b792dff to 4a17c00 (Compare)
Update the e2e tests to add coverage for a port with a QoS policy set. For this:
- enable the q-qos neutron plugin on CI devstack
- update the multi-network scenario to add qos policies using ID and filter
Force-pushed from 4a17c00 to 976b0a3 (Compare)
@nikParasyr: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What this PR does / why we need it:
Enable users to set QoSPolicyID on ports. This can be done by either defining the ID or a filter to query the QoS policy.
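A hypothetical shape for the user-facing field (the names are illustrative, not necessarily the PR's actual v1beta1 types): an explicit ID takes precedence, with a filter as the alternative lookup:

```go
// QoSPolicyParam identifies a Neutron QoS policy either directly by ID
// or via a filter (hypothetical names, for illustration only).
type QoSPolicyParam struct {
	// ID of the QoS policy; takes precedence over Filter when set.
	// +optional
	ID *string `json:"id,omitempty"`

	// Filter used to look up the QoS policy when ID is not set.
	// +optional
	Filter *QoSPolicyFilter `json:"filter,omitempty"`
}

// QoSPolicyFilter matches a QoS policy by its attributes.
type QoSPolicyFilter struct {
	// +optional
	Name string `json:"name,omitempty"`
}
```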
Which issue(s) this PR fixes:
Fixes #2672
Special notes for your reviewer:
I have not added e2e tests; I did a quick look and the qos extension doesn't seem enabled. Let me know if this is required. TODOs:
/hold