- [Having ownerReference conflicts with deletion order](#having-ownerreference-conflicts-with-deletion-order)
- [Risks and Mitigations](#risks-and-mitigations)
- [Dependency cycle](#dependency-cycle)
- [Instance from same resources want different deletion order](#instance-from-same-resources-want-different-deletion-order)
### Goals

1. Introduce an Opinionated Deletion Order: Implement a mechanism during namespace deletion to prioritize the deletion of certain resource types before others based on logical dependencies and security considerations (e.g., Pods deleted before NetworkPolicies).
2. Maintain Predictability and Consistency: Provide a more deterministic deletion process to improve user confidence and debugging during namespace cleanup.
3. Integrate with Existing Kubernetes Concepts: Build on the namespace deletion’s current architecture without introducing breaking changes to existing APIs or workflows.
4. Be safe - don’t introduce unresolvable deadlocks.
5. Make the most common dependency - workloads and the policies that govern them - safe by default for all types of policies, including CRDs, unless specifically opted out.
### Non-Goals
#### Story 2 - having finalizer conflicts with deletion order

E.g., the NetworkPolicy deletion order would be set lower than the Pod's, so we could expect the Pod to be deleted
before the NetworkPolicy. However, if even one Pod has a finalizer that waits on NetworkPolicies (which is opaque to Kubernetes),
it will cause a dependency loop and block the deletion process.
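
To make the scenario concrete, here is a hypothetical Pod carrying a custom finalizer (the finalizer name and its cleanup dependency are invented for illustration). Kubernetes has no way to know that the controller responsible for removing this finalizer still needs a NetworkPolicy in the namespace:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A Pod whose finalizer is removed by an external controller only after that
// controller performs network-dependent cleanup. If the NetworkPolicy the
// cleanup relies on is deleted first, the finalizer is never removed and
// namespace deletion stalls on this Pod.
var stuckPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "worker",
		Namespace: "demo",
		// Invented finalizer name; its semantics are opaque to Kubernetes.
		Finalizers: []string{"example.com/wait-for-network-cleanup"},
	},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
	},
}
```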
Refer to the section `Handling Cyclic Dependencies`.

### Notes/Constraints/Caveats (Optional)

#### Having ownerReference conflicts with deletion order

When deciding the deletion order for resources, ownerReferences should be taken into consideration,
e.g., a Deployment vs. its Pods. However, this should not matter much for namespace deletion:
namespace deletion specifically uses `metav1.DeletePropagationBackground`, so all resources are deleted and the ownerReference
dependencies are handled by garbage collection.
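
As a rough sketch of what that looks like with client-go (the helper name, client, and namespace are assumed here, and Pods stand in for any one resource type):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteAllPods issues a background-propagation delete for every Pod in the
// namespace: objects are marked for deletion immediately and the garbage
// collector cleans up ownerReference dependents asynchronously.
func deleteAllPods(ctx context.Context, client kubernetes.Interface, namespace string) error {
	propagation := metav1.DeletePropagationBackground
	return client.CoreV1().Pods(namespace).DeleteCollection(
		ctx,
		metav1.DeleteOptions{PropagationPolicy: &propagation},
		metav1.ListOptions{},
	)
}
```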

In Kubernetes, `ownerReferences` define a parent-child relationship where child resources are automatically deleted when the parent is removed.
This is mostly handled by garbage collection. During namespace deletion, `ownerReferences` are not taken into consideration, and
the `NamespaceDeletionOrder` group will be honored while deleting resources, as it is today. The garbage collector controller will make sure
no child resources still exist after the parent resource is deleted.
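
For reference, a child declares its parent through `metadata.ownerReferences`; a minimal sketch (names and UID invented) of a Pod owned by a ReplicaSet:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// When the owning ReplicaSet is deleted, the garbage collector deletes this
// Pod automatically, independent of the NamespaceDeletionOrder grouping.
var ownedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "web-abc12",
		Namespace: "demo",
		OwnerReferences: []metav1.OwnerReference{{
			APIVersion: "apps/v1",
			Kind:       "ReplicaSet",
			Name:       "web",
			UID:        types.UID("11111111-2222-3333-4444-555555555555"), // placeholder UID
		}},
	},
}
```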
### Risks and Mitigations

When a lack of progress is detected (maybe caused by the dependency cycle described above):

- Return error after retry.
  - Pros: Makes sure the security concern is addressed by always honoring the deletion order.
  - Cons: Blocks namespace deletion if a dependency cycle exists.

Mitigation: A proper fallback mechanism would be introduced to make sure the namespace deletion process does not
hang forever because of a potential dependency cycle.

In this case, the finalizers set would conflict with the `NamespaceDeletionOrder`.
To address this, the system will:

- Attempt to honor the `NamespaceDeletionOrder` for resource deletion.

- Monitor the deletion process for each `NamespaceDeletionOrder` group. If the process hangs beyond a predefined timeout (e.g., 5 minutes),
  it will detect the stall and trigger the deletion attempt for the next `NamespaceDeletionOrder` group.

- After moving on to the next `NamespaceDeletionOrder` group, the system will attempt to delete all resources under this group. At this stage, deletion is considered successful only when all resources from the current and previous groups have been fully removed.

- If the deletion of all resources from previous groups is not completed within the timeout period, the system will proceed to the next `NamespaceDeletionOrder` group, deleting those resources while waiting for any remaining resources from previous groups to be cleaned up.

- After looping through all `NamespaceDeletionOrder` groups, if resources are still blocked from being deleted, the system will behave the same as the current (unordered) deletion mechanism.

By introducing a controlled timeout mechanism, the system ensures that cyclic dependencies do not block namespace deletion indefinitely while still striving for an ordered deletion whenever possible.
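
To make the flow above concrete, here is a minimal, non-normative sketch of that loop in Go. The `DeletionGroup` type and all helpers (`issueDeletes`, `allGone`, `deleteRemainingUnordered`) are invented stand-ins for the namespace controller's real machinery; the KEP does not prescribe this exact structure:

```go
package namespacedeletion

import (
	"context"
	"time"
)

// DeletionGroup is a hypothetical stand-in for one NamespaceDeletionOrder group.
type DeletionGroup struct {
	Name      string
	Resources []string // resource types deleted in this group
}

// Stubs standing in for the apiserver's real deletion and discovery logic.
func issueDeletes(ctx context.Context, g DeletionGroup)    {}
func allGone(ctx context.Context, gs []DeletionGroup) bool { return true }
func deleteRemainingUnordered(ctx context.Context) error   { return nil }

// deleteNamespaceResources walks the groups in order. A group is considered
// done only when it and all previous groups are fully removed; if that does
// not happen before groupTimeout, the loop moves on to the next group so a
// dependency cycle cannot block namespace deletion forever.
func deleteNamespaceResources(ctx context.Context, groups []DeletionGroup,
	groupTimeout, pollInterval time.Duration) error {
	for i := range groups {
		issueDeletes(ctx, groups[i])

		deadline := time.Now().Add(groupTimeout)
		for !allGone(ctx, groups[:i+1]) {
			if time.Now().After(deadline) {
				break // stall detected: proceed to the next group anyway
			}
			time.Sleep(pollInterval)
		}
	}
	// After looping through all groups, anything still stuck is handled
	// exactly like the current (unordered) deletion mechanism.
	return deleteRemainingUnordered(ctx)
}
```

The key design point is that progress is checked cumulatively (`groups[:i+1]`), so a later group never counts as done while an earlier one is still stuck, yet a stuck group can only delay deletion by one timeout rather than block it indefinitely.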