
K8SPSMDB-1572: Add revisionHistoryLimit option to PerconaServerMongoDB CRD #2219

Open
myJamong wants to merge 3 commits into percona:main from myJamong:add-revisionhistorylimit

Conversation


@myJamong myJamong commented Jan 29, 2026


This allows users to control the number of ControllerRevision objects retained for StatefulSets managed by the operator.

CHANGE DESCRIPTION

Problem:
When StatefulSets are updated, Kubernetes creates ControllerRevision objects to track revision history. Without a limit, these objects accumulate indefinitely, cluttering the cluster and making it harder to manage resources.

Cause:
The operator does not expose the revisionHistoryLimit field from StatefulSet spec, so users cannot control how many ControllerRevision objects are retained.

Solution:
Add a revisionHistoryLimit option to the PerconaServerMongoDB CRD spec. This value is propagated to all StatefulSets (replsets and mongos) managed by the operator, allowing users to limit the number of retained ControllerRevision objects.
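For illustration, the new field could be set in the custom resource roughly like this (a minimal sketch; the cluster name and the value 3 are made up, and the exact placement within spec follows the change description rather than the PR's final manifest):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster          # illustrative name
spec:
  crVersion: 1.23.0
  # Keep at most 3 ControllerRevision objects per StatefulSet managed by
  # the operator (replsets and mongos). Field name from this PR; the
  # surrounding values are illustrative.
  revisionHistoryLimit: 3
```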

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support oldest and newest supported MongoDB version?
  • Does the change support oldest and newest supported Kubernetes version?

This allows users to control the number of ControllerRevision
objects retained for StatefulSets managed by the operator.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

CLAassistant commented Jan 29, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ hors
❌ lake-yoo

egegunes changed the title from "Add revisionHistoryLimit option to PerconaServerMongoDB CRD" to "K8SPSMDB-1572: Add revisionHistoryLimit option to PerconaServerMongoDB CRD" on Feb 3, 2026
@@ -1,3 +1,4 @@
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
Contributor

not sure if we should have this warning in CRD

Author
@myJamong myJamong Feb 3, 2026

I removed all the comments.

commit: b910275

UpdateStrategy: updateStrategy,
Template: template,
UpdateStrategy: updateStrategy,
RevisionHistoryLimit: cr.Spec.RevisionHistoryLimit,
Contributor

we need to add this field with version check to prevent unexpected restarts after operator upgrade. you can check cr.CompareVersion for examples. this will go into v1.23, so we need to check for "1.23.0"
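The version-gating pattern the reviewer describes can be sketched as follows. Note the hedges: compareVersion below is a hypothetical, simplified stand-in for the operator's cr.CompareVersion helper, and the field wiring is illustrative rather than the PR's actual code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareVersion is a simplified stand-in for the operator's
// cr.CompareVersion helper (assumption: the real helper compares the
// CR's version against the given target and returns -1, 0, or 1).
func compareVersion(crVersion, target string) int {
	a := strings.Split(crVersion, ".")
	b := strings.Split(target, ".")
	for i := 0; i < len(a) && i < len(b); i++ {
		x, _ := strconv.Atoi(a[i])
		y, _ := strconv.Atoi(b[i])
		if x < y {
			return -1
		}
		if x > y {
			return 1
		}
	}
	return 0
}

func main() {
	limit := int32(3)
	for _, crVersion := range []string{"1.22.0", "1.23.0"} {
		// Only set RevisionHistoryLimit for CRs at 1.23.0 or later, so
		// StatefulSet specs of clusters still on older CR versions stay
		// unchanged and pods are not restarted after an operator upgrade.
		var revisionHistoryLimit *int32
		if compareVersion(crVersion, "1.23.0") >= 0 {
			revisionHistoryLimit = &limit
		}
		fmt.Printf("crVersion=%s set=%v\n", crVersion, revisionHistoryLimit != nil)
	}
}
```

The same guard would apply at each place the field is copied into a StatefulSet spec, which is why the reviewer asks for it on both hunks below.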

Author
@myJamong myJamong Feb 3, 2026

I made the changes and added cr.CompareVersion logic to check for 1.23.0.
Thanks for the guidance.

commit: b910275

},
},
UpdateStrategy: updateStrategy,
RevisionHistoryLimit: cr.Spec.RevisionHistoryLimit,
Contributor

we need cr version check here as well

Author
@myJamong myJamong Feb 3, 2026

I also changed this part.

commit: b910275

@egegunes egegunes added this to the v1.23.0 milestone Feb 3, 2026
@pull-request-size pull-request-size bot added size/M 30-99 lines and removed size/S 10-29 lines labels Feb 3, 2026
@myJamong myJamong requested a review from egegunes February 3, 2026 23:29
@myJamong
Author

myJamong commented Feb 9, 2026

Hi @egegunes
I've been looking into the CI failures, but they don't seem to be caused by my changes.
I could be wrong, though. Is there anything I might have missed or should fix on my end?

@egegunes
Contributor

egegunes commented Feb 9, 2026

@myJamong no, they are not related to your changes.

@JNKPercona
Collaborator

Test Name Result Time
arbiter passed 00:11:37
balancer passed 00:19:06
cross-site-sharded passed 00:18:50
custom-replset-name passed 00:10:15
custom-tls passed 00:14:38
custom-users-roles passed 00:10:35
custom-users-roles-sharded passed 00:11:45
data-at-rest-encryption passed 00:12:44
data-sharded passed 00:22:27
demand-backup passed 00:15:53
demand-backup-eks-credentials-irsa passed 00:00:08
demand-backup-fs passed 00:24:41
demand-backup-if-unhealthy passed 00:12:11
demand-backup-incremental-aws passed 00:12:23
demand-backup-incremental-azure passed 00:12:07
demand-backup-incremental-gcp-native passed 00:12:13
demand-backup-incremental-gcp-s3 passed 00:11:39
demand-backup-incremental-minio passed 00:25:14
demand-backup-incremental-sharded-aws passed 00:19:20
demand-backup-incremental-sharded-azure passed 00:18:27
demand-backup-incremental-sharded-gcp-native passed 00:17:48
demand-backup-incremental-sharded-gcp-s3 passed 00:18:15
demand-backup-incremental-sharded-minio passed 00:27:24
demand-backup-physical-parallel passed 00:08:30
demand-backup-physical-aws passed 00:12:30
demand-backup-physical-azure passed 00:12:37
demand-backup-physical-gcp-s3 passed 00:12:02
demand-backup-physical-gcp-native passed 00:11:31
demand-backup-physical-minio passed 00:20:40
demand-backup-physical-minio-native passed 00:27:07
demand-backup-physical-minio-native-tls passed 00:19:43
demand-backup-physical-sharded-parallel passed 00:12:18
demand-backup-physical-sharded-aws passed 00:19:24
demand-backup-physical-sharded-azure passed 00:19:28
demand-backup-physical-sharded-gcp-native passed 00:18:16
demand-backup-physical-sharded-minio passed 00:17:59
demand-backup-physical-sharded-minio-native passed 00:17:48
demand-backup-sharded passed 00:26:44
disabled-auth passed 00:16:27
expose-sharded failure 00:15:41
finalizer passed 00:10:22
ignore-labels-annotations passed 00:07:49
init-deploy passed 00:13:14
ldap passed 00:09:13
ldap-tls passed 00:13:03
limits passed 00:06:32
liveness passed 00:09:08
mongod-major-upgrade passed 00:12:00
mongod-major-upgrade-sharded passed 00:21:20
monitoring-2-0 passed 00:25:46
monitoring-pmm3 passed 00:29:31
multi-cluster-service passed 00:15:00
multi-storage passed 00:19:11
non-voting-and-hidden passed 00:17:18
one-pod passed 00:08:38
operator-self-healing-chaos passed 00:13:06
pitr passed 00:31:48
pitr-physical passed 01:01:17
pitr-sharded passed 00:23:13
pitr-to-new-cluster passed 00:25:56
pitr-physical-backup-source passed 00:55:04
preinit-updates passed 00:05:11
pvc-auto-resize passed 00:14:58
pvc-resize passed 00:17:10
recover-no-primary passed 00:25:47
replset-overrides passed 00:18:54
replset-remapping passed 00:17:12
replset-remapping-sharded passed 00:17:05
rs-shard-migration passed 00:15:08
scaling passed 00:11:18
scheduled-backup passed 00:18:13
security-context passed 00:07:11
self-healing-chaos passed 00:15:27
service-per-pod passed 00:19:35
serviceless-external-nodes passed 00:07:19
smart-update passed 00:08:21
split-horizon passed 00:14:10
stable-resource-version passed 00:04:48
storage passed 00:07:32
tls-issue-cert-manager passed 00:31:12
unsafe-psa passed 00:07:18
upgrade passed 00:10:05
upgrade-consistency passed 00:07:55
upgrade-consistency-sharded-tls passed 00:57:40
upgrade-sharded passed 00:20:21
upgrade-partial-backup passed 00:15:35
users passed 00:17:23
users-vault passed 00:13:29
version-service passed 00:25:54
Summary Value
Tests Run 89/89
Job Duration 02:46:13
Total Test Time 25:29:55

commit: 2808255
image: perconalab/percona-server-mongodb-operator:PR-2219-2808255d

6 participants