fix(cache): close rename-cascade race and bot-task delete eviction gap #28100
harshach wants to merge 3 commits into
Conversation
Three IT failures on the new postgres+ES+redis CI profile traced back to two cache-invalidation gaps introduced alongside #28012:

1) Classification / Tag / Glossary / GlossaryTerm / Domain renames called `invalidateCacheForRenameCascade` BEFORE the bulk DAO `updateFqn`. With `invalidateCacheForTaggedEntitiesAndDescendants` (a search-index walk) in between, the window was ~4 s in CI traces. Any concurrent reader landing in that window loaded the still-visible pre-rename row from the DB and repopulated the L1+L2 cache with the old FQN, which then stuck for the entity TTL. Awaitility timeouts on `ClassificationResourceIT.test_classificationRename_tagActivityFeedsPreserved` and `test_classificationRename_multipleTagsUpdated`.

Refactored `invalidateCacheForRenameCascade` to return the captured (id, oldFqn) pairs and added `finishInvalidateCacheForRenameCascade` — a post-commit pass that re-evicts the same entries by id and by old FQN, closing the race window. Updated the 6 call sites (Classification, Tag, Glossary, GlossaryTerm ×2, Domain ×2 for DOMAIN+DATA_PRODUCT) to capture the list pre-write and call the finish pass after all DB writes complete.

2) `UserRepository.deleteSuggestionTasksForUser` issued a direct `DELETE FROM task_entity ...` that bypassed `EntityRepository.delete` and its cache hook. Any task previously read by id was still pinned in the L1 Guava `CACHE_WITH_ID`, so the next GET returned the "deleted" task — failing `TaskResourceIT.testDeletingBotCreatorCleansUpOpenSuggestionTasks`. Added `TaskDAO.listIdsByCreatorAndCategory`, captured the ids before the bulk DELETE, then fanned out `EntityRepository.invalidateCacheForEntity(Entity.TASK, id, null)` afterwards. List + delete are intentionally not in one transaction — over-invalidating a few extra ids on retry is cheap; missing one is the bug.

mvn clean compile + spotless:check pass on openmetadata-service.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
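The two-phase eviction described above can be modeled with plain maps. This is a minimal sketch, not the OpenMetadata implementation: the real passes operate on the repository's L1 Guava and L2 Redis caches, and `Captured` is a hypothetical holder for the (id, oldFqn) pairs the first pass returns.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Simplified model of the two-phase rename-cascade eviction. Plain maps stand
// in for the by-id and by-FQN caches; "Captured" is an illustrative holder.
class RenameCascadeSketch {
  record Captured(UUID id, String oldFqn) {}

  final Map<UUID, String> cacheById = new HashMap<>();  // id -> entity JSON
  final Map<String, UUID> cacheByFqn = new HashMap<>(); // FQN -> id

  // Phase 1 (pre-write): evict the descendants AND return what was evicted,
  // so the caller can repeat the eviction after the DB writes complete.
  List<Captured> invalidateCacheForRenameCascade(Map<UUID, String> descendants) {
    List<Captured> captured = new ArrayList<>();
    descendants.forEach(
        (id, oldFqn) -> {
          cacheById.remove(id);
          cacheByFqn.remove(oldFqn);
          captured.add(new Captured(id, oldFqn));
        });
    return captured;
  }

  // Phase 2 (post-commit): re-evict the same entries by id and by old FQN,
  // clearing anything a concurrent reader repopulated during the write window.
  void finishInvalidateCacheForRenameCascade(List<Captured> captured) {
    for (Captured c : captured) {
      cacheById.remove(c.id());
      cacheByFqn.remove(c.oldFqn());
    }
  }
}
```

The point of returning the captured list is that after `updateFqn` commits, the old FQNs are no longer discoverable from the DB, so only the list recorded in phase 1 can drive the phase 2 eviction.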
Pull request overview
This PR addresses cache invalidation gaps that caused flaky backend integration tests under the Redis cache profile: (1) rename-cascade flows could be “re-poisoned” by concurrent readers repopulating caches with pre-rename rows, and (2) bulk deletion of bot-created suggestion tasks bypassed repository delete hooks, leaving stale task entries pinned in the L1 Guava cache.
Changes:
- Refactors `EntityRepository.invalidateCacheForRenameCascade` to return the enumerated descendant `(id, oldFqn)` pairs and adds a symmetric `finishInvalidateCacheForRenameCascade(...)` pass for post-rename re-eviction.
- Updates rename flows (Classification/Tag/Glossary/GlossaryTerm/Domain + DataProduct) to capture descendants pre-update and re-invalidate after the rename-related writes.
- Fixes bot-task deletion cache eviction by listing task IDs before bulk delete and explicitly invalidating the per-task cache entries afterward.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/UserRepository.java | Captures task IDs before bulk delete and explicitly invalidates Task cache entries by ID afterward. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/CollectionDAO.java | Adds TaskDAO.listIdsByCreatorAndCategory to support pre-delete ID capture. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/EntityRepository.java | Changes rename-cascade invalidation to return affected descendants and adds finishInvalidateCacheForRenameCascade plus shared eviction helper. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/TagRepository.java | Adopts two-phase rename-cascade invalidation for tag rename flows. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/ClassificationRepository.java | Adopts two-phase rename-cascade invalidation for classification rename → tag descendants. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/GlossaryRepository.java | Adopts two-phase rename-cascade invalidation for glossary rename → glossary term descendants. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/GlossaryTermRepository.java | Adopts two-phase rename-cascade invalidation for glossary term rename/move cascades. |
| openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/DomainRepository.java | Adopts two-phase rename-cascade invalidation for domain rename affecting domains + data products. |
Comments suppressed due to low confidence (1)
openmetadata-service/src/main/java/org/openmetadata/service/jdbi3/TagRepository.java:989
`finishInvalidateCacheForRenameCascade` is called before the subsequent `classificationChanged`/`parentChanged` handling (and the final response-field invalidations). Since `entitySpecificUpdate` runs under `@Transaction`, there is still time before commit when a concurrent reader can repopulate the cache with the pre-rename row after this "finish" pass. Consider moving the finish call to the very end of `updateNameAndParent` (after the classification/parent updates and final invalidations), or invoking it again right before returning, so the post-pass is as close to commit as possible.
```java
finishInvalidateCacheForRenameCascade(Entity.TAG, renamedTags);
}
if (classificationChanged) {
  updateClassificationRelationship(original, updated);
```
```java
 * #finishInvalidateCacheForRenameCascade} after the DB writes commit — necessary because a
 * reader landing in the window between this call and the DB commit will repopulate the by-id
 * cache with the still-visible pre-rename row, and only a second invalidate pass after the
 * commit can evict the poisoned entry.
 *
```
The remaining failure on the postgres+ES+redis CI gate — TestCaseResourceIT.testBulkFluentAPI ("Description should be bulk updated" times out after 60s) — was a cache-poisoning race between the bulk PATCH loop and a concurrent test running test_bulkAddAllTestCasesWithExcludeIds.

CI trace for testCase c5fa887e:

T0: bulk-add fetches all candidate test cases via Entity.getEntities(refs, "*", ALL) — gets a snapshot of c5fa887e with the OLD description.
T1: testBulkFluentAPI PATCHes c5fa887e — DB committed, cache write-through stores the NEW description (1649 bytes).
T2: bulk-add calls postUpdateMany(updatedTestCases) → writeThroughCacheMany serializes the pre-read snapshot and overwrites Redis with the OLD description (2158 bytes).
T3+: 60s of polling sees the poisoned cache value and never reaches "Bulk updated".
The pre-read snapshot was load-bearing for nothing — testSuites is in the
storage-stripped field list (getFieldsStrippedFromStorageJson), so the
testCase entity JSON does not actually change here. The only DB write is
the entity_relationship CONTAINS row.
Fix in TestCaseRepository.addTestCasesToLogicalTestSuite and
addAllTestCasesToLogicalTestSuiteTxn: replace postUpdateMany with a new
postLogicalSuiteRelationshipUpdate hook that:
1. Invalidates the read-bundle cache (where testSuites is fanned out
during reads) for each affected test case — so the next GET picks
up the new relationship.
2. Fires the lifecycle "entities updated" event (event subscribers
still see the testSuites field change).
3. Updates the RDF graph.
Crucially, no writeThroughCacheMany. The base-entity JSON in Redis is
left alone, so a concurrent PATCH's write-through is not clobbered.
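The relationship-only hook above can be sketched as follows, under simplified assumptions: `ReadBundleCache`, `LifecycleEvents`, and `RdfGraph` are hypothetical stand-ins for the real collaborators, not OpenMetadata types. The key property is what the hook does NOT do: it never writes the pre-read entity snapshot back to the cache.

```java
import java.util.List;
import java.util.UUID;

// Illustrative sketch of a relationship-only post-update hook. The interfaces
// are hypothetical stand-ins for the real cache/event/RDF collaborators.
class LogicalSuiteHookSketch {
  interface ReadBundleCache { void invalidate(UUID testCaseId); }
  interface LifecycleEvents { void entitiesUpdated(List<UUID> ids); }
  interface RdfGraph { void update(List<UUID> ids); }

  final ReadBundleCache readBundle;
  final LifecycleEvents events;
  final RdfGraph rdf;

  LogicalSuiteHookSketch(ReadBundleCache readBundle, LifecycleEvents events, RdfGraph rdf) {
    this.readBundle = readBundle;
    this.events = events;
    this.rdf = rdf;
  }

  void postLogicalSuiteRelationshipUpdate(List<UUID> testCaseIds) {
    // 1. Evict the read bundle so the next GET re-fans-out testSuites.
    testCaseIds.forEach(readBundle::invalidate);
    // 2. Event subscribers still see the testSuites field change.
    events.entitiesUpdated(testCaseIds);
    // 3. Keep the RDF graph in sync.
    rdf.update(testCaseIds);
    // Crucially: no writeThroughCacheMany, so the base-entity JSON in Redis
    // is untouched and a concurrent PATCH's write-through is not clobbered.
  }
}
```

Because the hook only evicts and notifies, the worst case for a racing reader is one extra cache miss, instead of a poisoned value that survives for the TTL.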
mvn clean compile + spotless:check pass on openmetadata-service.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
🔴 Playwright Results — 1 failure(s), 13 flaky
✅ 3353 passed · ❌ 1 failed · 🟡 13 flaky · ⏭️ 56 skipped
Genuine Failures (failed on all attempts) ❌
…with imports

gitar-bot review on #28100: per CLAUDE.md "no fully qualified names in code — import the class instead". Add imports for CacheBundle, EntityLifecycleEventDispatcher, and RdfUpdater; drop the inline FQNs in the method body.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Code Review ✅ Approved · 1 resolved / 1 findings

Closes rename-cascade and bot-task deletion race conditions by implementing pre- and post-commit cache invalidation patterns, resolving intermittent integration test failures. No issues found.

✅ 1 resolved · Quality: fully qualified class names used instead of imports
```java
LOG.warn("Skipping cache invalidation for non-UUID task id: {}", taskIdStr);
continue;
}
EntityRepository.invalidateCacheForEntity(Entity.TASK, taskId, null);
```
|



Describe your changes:
Three IT failures on the new `Integration Tests - PostgreSQL + Elasticsearch + Redis` workflow (gate added by #28012) traced back to two cache-invalidation gaps in the cache improvements merge:

Rename cascade race (ClassificationResourceIT + GlossaryRepository + TagRepository + DomainRepository): `invalidateCacheForRenameCascade` ran BEFORE the bulk DAO `updateFqn`. With `invalidateCacheForTaggedEntitiesAndDescendants` (a search-index walk) in between, CI traces showed a ~4 s window where any concurrent reader loaded the still-visible pre-rename row from the DB and repopulated the L1+L2 cache with the old FQN — pinned for the entity TTL. Awaitility timeouts on `ClassificationResourceIT.test_classificationRename_tagActivityFeedsPreserved` and `test_classificationRename_multipleTagsUpdated`.

Bot task delete eviction gap (TaskResourceIT): `UserRepository.deleteSuggestionTasksForUser` issued a direct `DELETE FROM task_entity WHERE createdById=…` that bypassed `EntityRepository.delete` and its cache hook. Any task previously read by id was still pinned in the L1 Guava `CACHE_WITH_ID`, so the next GET returned the "deleted" task — failing `TaskResourceIT.testDeletingBotCreatorCleansUpOpenSuggestionTasks`.

Refactored `invalidateCacheForRenameCascade` to return the captured `(id, oldFqn)` list and added a `finishInvalidateCacheForRenameCascade` post-commit pass that re-evicts the same entries; updated the 6 call sites (Classification, Tag, Glossary, GlossaryTerm ×2, Domain ×2 for DOMAIN+DATA_PRODUCT) to capture the list pre-write and finish post-write. For the bot path, added `TaskDAO.listIdsByCreatorAndCategory`, captured ids before the bulk DELETE, then fanned out `EntityRepository.invalidateCacheForEntity(Entity.TASK, id, null)` after.

Type of change:
High-level design:
The pattern was that pre-commit invalidations clear stale cache entries, but anything mutating in long-running rename flows (search-index walks, ES asset updates, policy condition updates) leaves a wide race window where a concurrent reader can repopulate the L1/L2 cache from the still-visible pre-rename DB row. The new `finishInvalidateCacheForRenameCascade` is the symmetric post-commit pass — it re-evicts the descendants captured at pre-invalidate time, by id and by old FQN, closing the window. The bot-delete fix follows the same principle: any direct SQL write that bypasses `EntityRepository.delete` must explicitly fan out cache invalidation, since the L1 Guava cache otherwise keeps stale rows alive past the DB drop.

List-then-delete in the bot path is intentionally not transactional — over-invalidating a few extra ids on retry is cheap; missing one is the original bug.
Tests:
Use cases covered

Backend integration tests: `ClassificationResourceIT.test_classificationRename_*` and `TaskResourceIT.testDeletingBotCreatorCleansUpOpenSuggestionTasks`. CI on the postgres+ES+redis profile will verify.

Manual testing performed
- `mvn clean compile -pl openmetadata-service` — passes
- `mvn spotless:check -pl openmetadata-service` — passes
Not applicable.
Checklist:
🤖 Generated with Claude Code
Summary by Gitar

- Replaced `postUpdateMany` with `postLogicalSuiteRelationshipUpdate` in `TestCaseRepository` to prevent concurrent `PATCH` operations from being overwritten by stale cache data during logical test suite bulk additions.

This will update automatically on new commits.