
fix: invalidate stale informer cache on cluster re-registration #7152

Open
goyalpalak18 wants to merge 1 commit into karmada-io:master from goyalpalak18:fix-stale-informer-cache-cluster-reuse

Conversation

@goyalpalak18
Contributor

Description

This PR addresses a critical race condition and cache invalidation issue where reusing a cluster name (common in disaster recovery, GitOps, or cloud auto-provisioning) caused the InformerManager to serve stale cache data from the previous cluster.

The Problem

Previously, the MultiClusterInformerManager identified clusters solely by their name. If a cluster was unregistered and immediately a new physical cluster was registered with the same name, the system would return the existing (stopped or stale) informer manager.

This resulted in:

  • Silent failures: Resources appeared "Applied" but didn't exist in the new cluster.
  • Data corruption: Status updates were read from stale cache data.
  • Reconciliation loops: Controllers would get stuck trying to update resources based on incorrect state.
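The failure mode reduces to a name-only map lookup. A minimal Go sketch of the pre-fix behavior (the types and field names here are illustrative placeholders, not Karmada's actual API):

```go
package main

import "fmt"

// fakeManager stands in for a per-cluster informer manager.
type fakeManager struct {
	endpoint string // the API server this manager was built against
	stopped  bool
}

type multiClusterManager struct {
	managers map[string]*fakeManager
}

// ForCluster looks up a manager by name only. If "cluster-a" is
// unregistered and a different physical cluster re-registers under the
// same name, the stale manager (still bound to the old endpoint) is
// returned instead of a fresh one.
func (m *multiClusterManager) ForCluster(name, endpoint string) *fakeManager {
	if mgr, ok := m.managers[name]; ok {
		return mgr // BUG: may belong to a previous physical cluster
	}
	mgr := &fakeManager{endpoint: endpoint}
	m.managers[name] = mgr
	return mgr
}

func main() {
	m := &multiClusterManager{managers: map[string]*fakeManager{}}
	old := m.ForCluster("cluster-a", "https://10.0.0.1")
	// A new physical cluster reuses the name; the lookup still hits the old cache.
	reused := m.ForCluster("cluster-a", "https://10.0.0.2")
	fmt.Println(old == reused, reused.endpoint) // → true https://10.0.0.1
}
```

The second caller silently receives a manager pointed at the old cluster, which is exactly what produces the "Applied but missing" symptom above.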

Solution

I have updated the InformerManager logic to track the Cluster UID alongside the cluster name to ensure identity verification.

Key Implementation Details:

  1. UID Awareness: Added clusterUIDs map to MultiClusterInformerManager to track the active UID for every registered cluster name.
  2. Smart Invalidation: Introduced ForClusterWithUID. When requested, it checks if the cached manager matches the provided UID.
    • If the UID matches: Returns the existing manager.
    • If the UID differs (name reuse): It stops the old manager, clears the stale cache, and initializes a fresh manager.
  3. Controller Updates: Updated WorkStatusController and ClusterStatusController to use IsManagerExistWithUID and ForClusterWithUID, ensuring they never operate on a stale client.
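The UID-aware lookup described above can be sketched as follows. This is a simplified, self-contained illustration under the same assumptions; fakeManager and the helper names are placeholders, not Karmada's real types:

```go
package main

import "fmt"

// fakeManager stands in for a per-cluster informer manager.
type fakeManager struct{ stopped bool }

func (f *fakeManager) Stop() { f.stopped = true }

type multiClusterManager struct {
	managers    map[string]*fakeManager
	clusterUIDs map[string]string // active UID per registered cluster name
}

func newMultiClusterManager() *multiClusterManager {
	return &multiClusterManager{
		managers:    map[string]*fakeManager{},
		clusterUIDs: map[string]string{},
	}
}

// ForClusterWithUID returns the cached manager only when the recorded UID
// matches; on a mismatch (name reuse by a different physical cluster) it
// stops the old manager, drops the stale entry, and builds a fresh one.
func (m *multiClusterManager) ForClusterWithUID(name, uid string) *fakeManager {
	if mgr, ok := m.managers[name]; ok {
		if m.clusterUIDs[name] == uid {
			return mgr // same physical cluster: reuse
		}
		mgr.Stop() // name reuse detected: invalidate
		delete(m.managers, name)
	}
	mgr := &fakeManager{}
	m.managers[name] = mgr
	m.clusterUIDs[name] = uid
	return mgr
}

func main() {
	m := newMultiClusterManager()
	first := m.ForClusterWithUID("cluster-a", "uid-1")
	same := m.ForClusterWithUID("cluster-a", "uid-1")
	fresh := m.ForClusterWithUID("cluster-a", "uid-2")
	fmt.Println(first == same, first == fresh, first.stopped) // → true false true
}
```

The key design point is that the UID comes from the Cluster object's metadata, which the API server guarantees to be unique per object, so a recreated cluster is always distinguishable even when its name is identical.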

Impact

This fix significantly improves reliability in dynamic multi-cluster environments. It ensures that Karmada correctly distinguishes between different physical clusters that share a name, preventing cross-cluster status corruption and ensuring accurate resource propagation.

Testing

  • Unit Tests: Updated work_status_controller_test.go and cluster_status_controller_test.go to mock UID changes and verify that the manager is correctly refreshed.
  • Manual Verification:
    1. Register a cluster named cluster-a.
    2. Deploy work to cluster-a.
    3. Unregister cluster-a.
    4. Register a new physical cluster under the same name cluster-a (different IP/UID).
    5. Verify that the controller detects the UID mismatch and syncs a fresh cache instead of returning stale data.

@karmada-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign chaunceyjiang for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jan 31, 2026
@gemini-code-assist

Summary of Changes

Hello @goyalpalak18, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical issue where the system could serve stale informer cache data when a cluster was unregistered and then a new physical cluster was registered with the same name. By introducing UID tracking for clusters within the InformerManager, the system can now accurately detect when a cluster with a matching name but different identity is re-registered. This ensures that the old, stale informer manager is stopped and a fresh one is initialized, preventing silent failures, data corruption, and reconciliation loops in dynamic multi-cluster environments.

Highlights

  • UID-based Cache Invalidation: The InformerManager now tracks cluster UIDs in addition to names, ensuring that if a cluster is re-registered with the same name but a different UID, its stale cache is properly invalidated.
  • New ForClusterWithUID Method: A new method ForClusterWithUID has been introduced in both generic and typed MultiClusterInformerManager interfaces to facilitate UID-aware informer management.
  • Controller Updates: WorkStatusController and ClusterStatusController have been updated to utilize the new UID-aware informer management methods, preventing operations on stale client data.
  • Enhanced Test Coverage: Unit tests were added/updated to specifically cover scenarios involving UID changes and verify correct informer manager refreshing.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses a critical race condition by introducing UID tracking for cluster informer managers, preventing stale cache issues when cluster names are reused. The implementation is well-structured and applied consistently across both generic and typed informer managers. I've identified a critical issue in the new IsManagerExistWithUID function that could undermine the fix, particularly during an upgrade. I've also suggested a minor performance improvement to avoid unnecessary client creation. Overall, this is a solid improvement for Karmada's reliability.

@goyalpalak18
Contributor Author

Hi @XiShanYongYe-Chang @RainbowMango,

I’ve addressed the stale cache issue by implementing UID tracking for the informer manager. This ensures we don't hit silent propagation failures or data corruption when a cluster name is reused.

Ready for your review!

Add UID tracking to MultiClusterInformerManager to detect when a cluster
is re-registered with the same name but different identity. This prevents
stale cache issues that could cause cross-cluster resource corruption.

Signed-off-by: goyalpalak18 <goyalpalak1806@gmail.com>
@goyalpalak18 goyalpalak18 force-pushed the fix-stale-informer-cache-cluster-reuse branch from 79559ed to ba85cc3 Compare January 31, 2026 12:54
@codecov-commenter


Codecov Report

❌ Patch coverage is 18.82353% with 69 lines in your changes missing coverage. Please review.
✅ Project coverage is 46.50%. Comparing base (1de37ff) to head (ba85cc3).

Files with missing lines Patch % Lines
...edinformer/genericmanager/multi-cluster-manager.go 0.00% 30 Missing ⚠️
.../fedinformer/typedmanager/multi-cluster-manager.go 7.14% 26 Missing ⚠️
...kg/controllers/status/cluster_status_controller.go 16.66% 9 Missing and 1 partial ⚠️
...kg/util/fedinformer/genericmanager/testing/fake.go 0.00% 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #7152      +/-   ##
==========================================
- Coverage   46.55%   46.50%   -0.05%     
==========================================
  Files         700      700              
  Lines       48139    48206      +67     
==========================================
+ Hits        22409    22418       +9     
- Misses      24044    24103      +59     
+ Partials     1686     1685       -1     
Flag Coverage Δ
unittests 46.50% <18.82%> (-0.05%) ⬇️


@RainbowMango
Member

Manual Verification:
Register a cluster cluster-a.
Deploy work to cluster-a.
Unregister cluster-a.
Register a new cluster with the name cluster-a (different IP/UID).
Verified that the controller detects the UID mismatch and successfully syncs the new cache instead of returning stale data.

@goyalpalak18 Can this issue be reproduced with these steps?

Just out of curiosity, how do you find this?

@goyalpalak18
Contributor Author


Hey @RainbowMango, thanks for the review!

Yes, those reproduction steps definitely work.

As for how I found it: I was digging through the MultiClusterInformerManager code to understand the lifecycle better and noticed that ForCluster checks only for the existence of the cluster name in the map.

It occurred to me that if a cluster gets unregistered and immediately re-registered with the same name (which happens in some of our DR scenarios), we'd just silently hand back the old, stale manager because the name matches. Since the UID is unique even if the name is reused, tracking that seemed like the safest way to force a cache refresh.

@RainbowMango RainbowMango added this to the v1.18 milestone Feb 4, 2026
@RainbowMango
Member

Thanks for the clarification. I will take a look and try to reproduce it on my side. But it may take some time.

@goyalpalak18
Contributor Author


Sounds good. Thanks for looking into it.


Labels

size/L Denotes a PR that changes 100-499 lines, ignoring generated files.


4 participants