Conversation

Contributor

@elvgarrui elvgarrui commented Nov 5, 2025

OVNLogLevel affects the ovn-controller logs, and OVSLogLevel affects
ovs-vswitchd and ovsdb-server. Both OVNLogLevel and OVSLogLevel accept
values from the vlog range: off, emer, err, warn, info, or dbg. See
ovs-appctl(8) for a definition of each log level. The log level is
changed from the config pod rather than on the pod itself, to avoid
restarting the ovn-controller and ovn-controller-ovs pods.

Resolves: OSPRH-6429
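As a rough sketch of the input handling the commit message describes (the helper name is an assumption for illustration, not code from this PR), the accepted vlog values can be validated in shell before being passed on to ovn-appctl or ovs-appctl:

```shell
#!/bin/sh
# Hypothetical helper: validate a requested log level against the vlog
# range documented in ovs-appctl(8) before applying it.
is_valid_vlog_level() {
    case "$1" in
        off|emer|err|warn|info|dbg) return 0 ;;
        *) return 1 ;;
    esac
}

# Example: is_valid_vlog_level dbg && echo "valid"
```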

Contributor

@karelyatin karelyatin left a comment


Since this config change triggers pod restarts, and therefore some downtime for centralized traffic, I am wondering whether we really need this change at all: when temporary debugging is needed, the log level can simply be changed with vlog/set in the required pods.
If we still need these flags, then I think we should avoid any restarts on a log level change.
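The temporary, restart-free approach described above can be sketched as follows. The pod and container names are assumptions for illustration, and the command is printed as a dry run rather than executed, so it can be inspected first:

```shell
#!/bin/sh
# Sketch (pod/container names are assumptions): exec into the running
# ovn-controller pod and raise the vlog level in place, with no restart.
# Prints the kubectl command instead of running it (dry run).
ovn_vlog_set() {
    pod=$1      # the ovn-controller pod on the target node
    level=$2    # off|emer|err|warn|info|dbg, per ovs-appctl(8)
    echo kubectl exec "$pod" -c ovn-controller -- ovn-appctl vlog/set "$level"
}

# Example:
# ovn_vlog_set ovn-controller-xyz dbg
```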

Contributor

averdagu commented Nov 6, 2025

Good point @karelyatin. A way to work around that could be to create a job that spawns a new pod, connects to the desired pod (ovn-controller or ovsdb-server), and modifies the debug level using vlog/set.
I don't think it's worth potentially creating downtime just to change the log level, but I think it would be good to have this automated, so the human operator does not need to change the log level manually.

Contributor

karelyatin commented Nov 6, 2025

Good point @karelyatin. A way to work around that could be to create a job that spawns a new pod, connects to the desired pod (ovn-controller or ovsdb-server), and modifies the debug level using vlog/set.

Yes, we can consider utilizing the existing config job for this too.

@elvgarrui elvgarrui force-pushed the OSPRH-6429 branch 4 times, most recently from 183d5f3 to ba26b60 on November 7, 2025 16:58
@elvgarrui (Contributor, Author)

I changed it so it now works without restarting any pods, using the config job. Let me know if this is better. Thanks for all the reviews!

@elvgarrui elvgarrui force-pushed the OSPRH-6429 branch 2 times, most recently from fe52f7b to 79a131e on November 10, 2025 16:29
@elvgarrui (Contributor, Author)

I improved the ovn-appctl commands so that the name of the unixctl socket is not hardcoded.
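One common way to avoid hardcoding the unixctl socket name is to derive it from the daemon's pidfile, since the default socket path embeds the PID. A minimal sketch of that pattern (the helper name and run-directory layout are assumptions, not the exact code from this PR):

```shell
#!/bin/sh
# Hypothetical helper: build an OVN/OVS daemon's unixctl socket path from
# its pidfile, so the PID in the socket name is never hardcoded.
ctl_socket() {
    rundir=$1     # e.g. /run/ovn
    daemon=$2     # e.g. ovn-controller
    pid=$(cat "${rundir}/${daemon}.pid")
    echo "${rundir}/${daemon}.${pid}.ctl"
}

# Example (assumed paths):
# ovn-appctl -t "$(ctl_socket /run/ovn ovn-controller)" vlog/set info
```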

@elvgarrui elvgarrui requested a review from karelyatin November 11, 2025 08:45
Contributor

@slawqo slawqo left a comment


LGTM; just please remove that empty line from the start-vswitchd.sh file, and that way the diff will be one file smaller :)

# under the License.

source $(dirname $0)/functions

Contributor

nit: I think that this is not really needed

Contributor Author

100% agree, I did not realize I left that :)

OVNLogLevel affects the ovn-controller logs, and OVSLogLevel affects
ovs-vswitchd and ovsdb-server. Both OVNLogLevel and OVSLogLevel accept
values from the vlog range: off, emer, err, warn, info, or dbg. See
ovs-appctl(8) for a definition of each log level. The log level is
changed from the config pod rather than on the pod itself, to avoid
restarting the ovn-controller and ovn-controller-ovs pods.

Resolves: OSPRH-6429
Signed-off-by: Elvira Garcia <[email protected]>
Contributor

openshift-ci bot commented Nov 12, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: elvgarrui, slawqo

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@elvgarrui (Contributor, Author)

/retest failed unrelated because of fips

could not run steps: step ovn-operator-build-deploy-kuttl failed: "ovn-operator-build-deploy-kuttl" test steps failed: "ovn-operator-build-deploy-kuttl" pod "ovn-operator-build-deploy-kuttl-openstack-k8s-operators-fips-check" failed: could not watch pod: the pod ci-op-j0cmgmjk/ovn-operator-build-deploy-kuttl-openstack-k8s-operators-fips-check failed after 1m7s (failed containers: test): ContainerFailed one or more containers exited

Contributor

openshift-ci bot commented Nov 12, 2025

@elvgarrui: The /retest command does not accept any targets.
The following commands are available to trigger required jobs:

/test functional
/test images
/test ovn-operator-build-deploy-kuttl
/test precommit-check

The following commands are available to trigger optional jobs:

/test ovn-operator-build-deploy

Use /test all to run the following jobs that were automatically triggered:

pull-ci-openstack-k8s-operators-ovn-operator-main-functional
pull-ci-openstack-k8s-operators-ovn-operator-main-images
pull-ci-openstack-k8s-operators-ovn-operator-main-ovn-operator-build-deploy-kuttl
pull-ci-openstack-k8s-operators-ovn-operator-main-precommit-check

In response to this:

/retest failed unrelated because of fips

could not run steps: step ovn-operator-build-deploy-kuttl failed: "ovn-operator-build-deploy-kuttl" test steps failed: "ovn-operator-build-deploy-kuttl" pod "ovn-operator-build-deploy-kuttl-openstack-k8s-operators-fips-check" failed: could not watch pod: the pod ci-op-j0cmgmjk/ovn-operator-build-deploy-kuttl-openstack-k8s-operators-fips-check failed after 1m7s (failed containers: test): ContainerFailed one or more containers exited

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@elvgarrui (Contributor, Author)

/retest

Contributor

openshift-ci bot commented Nov 12, 2025

@elvgarrui: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/ovn-operator-build-deploy-kuttl | 0dbc84c | link | true | /test ovn-operator-build-deploy-kuttl |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@elvgarrui (Contributor, Author)

Waiting for PR #513 to merge and unblock the test
