
Conversation

@sunjayBhatia
Member

@sunjayBhatia sunjayBhatia commented Oct 20, 2025

Sort HTTPProxies by creation timestamp when checking for FQDN conflicts so that the oldest is still set as Valid and multiple tenants cannot affect each other; e.g. resources in other namespaces should not be able to invalidate the original "owner" of the FQDN.

Similar to the Gateway API conflict resolution logic here: https://gateway-api.sigs.k8s.io/guides/api-design/#conflicts

Useful for cases where Gatekeeper policies are not feasible to apply.
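For context on the ordering described above, a minimal sketch of what an oldest-wins comparator could look like is below. The function name `sortByFQDNPrecedence` and its placement in a `dag` package are illustrative assumptions, not Contour's actual implementation:

```go
// Illustrative sketch only: Contour's real conflict check may differ.
package dag

import (
	"sort"

	contour_v1 "github.com/projectcontour/contour/apis/projectcontour/v1"
)

// sortByFQDNPrecedence orders HTTPProxies that claim the same FQDN so the
// oldest resource comes first (and therefore stays Valid). Ties on creation
// timestamp fall back to namespace/name in alphabetical order, mirroring the
// Gateway API conflict-resolution rules.
func sortByFQDNPrecedence(proxies []*contour_v1.HTTPProxy) {
	sort.SliceStable(proxies, func(i, j int) bool {
		ti, tj := proxies[i].CreationTimestamp, proxies[j].CreationTimestamp
		if !ti.Equal(&tj) {
			return ti.Before(&tj)
		}
		if proxies[i].Namespace != proxies[j].Namespace {
			return proxies[i].Namespace < proxies[j].Namespace
		}
		return proxies[i].Name < proxies[j].Name
	})
}
```

After sorting, the first proxy in the slice would be treated as the FQDN "owner" and the remaining ones marked invalid with a conflict condition.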

@sunjayBhatia sunjayBhatia added the release-note/minor A minor change that needs about a paragraph of explanation in the release notes. label Oct 20, 2025
@sunjayBhatia
Member Author

WIP, still needs tests

@tsaarni
Member

tsaarni commented Oct 29, 2025

Looks good to me! This way we keep traffic up as long as possible and avoid failing existing HTTPProxies. I'm not sure whether this could introduce any backward-compatibility issues. I suppose anything is possible, but it seems quite unlikely that someone would depend on creating conflicting proxies and having them all fail. It might be surprising if Contour starts accepting one of them, but that seems unlikely to cause problems.

I’m also not sure I understand the reasoning behind the last criterion in https://gateway-api.sigs.k8s.io/guides/api-design/#conflicts, where the first resource in alphabetical order wins. That feels arbitrary and doesn't serve the goal of keeping traffic up as long as possible. But since the Gateway API defines it that way, having consistent behaviour for HTTPProxy can make sense.

It looks like the existing code already includes conflicting resource names in the error message, so I assume the name of the winning proxy will be included in the status of the failed ones. Though I wonder if exposing that information could let someone with access to one namespace enumerate HTTPProxy names in other namespaces for any given FQDN. It is good for troubleshooting, and it is not clear what risk that information poses, but it is still "leaking" information across the namespace boundary.

@sunjayBhatia
Member Author

I’m also not sure I understand the reasoning behind the last criterion in https://gateway-api.sigs.k8s.io/guides/api-design/#conflicts, where the first resource in alphabetical order wins.

Yeah, this is just a bit of a hack so that there is a defined precedence order in the unlikely event that multiple resources have the exact same creation timestamp.

@github-actions

The Contour project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 30d of inactivity, lifecycle/stale is applied
  • After 60d of inactivity since lifecycle/stale was applied, the PR is closed

You can:

  • Ensure your PR is passing all CI checks. PRs that are fully green are more likely to be reviewed. If you are having trouble with CI checks, reach out to the #contour channel in the Kubernetes Slack workspace.
  • Mark this PR as fresh by commenting or pushing a commit
  • Close this PR
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2025
