### Controller Version

v5.13.22

### Describe the Bug

A commit made 5 days ago causes my controller to fail to start.

### Expected Behavior

The controller can be stopped and started without issue.

### Steps to Reproduce

### How You're Launching the Container

### Container Logs

### MongoDB Logs

### Additional Context

No response
---
It looks like you are experiencing MongoDB corruption. It is very unlikely that this is related to any recent commits; typically, database corruption comes from the way Kubernetes updated your container, causing the database to not be shut down cleanly. I'm not sure what exactly is defined in your Kubernetes manifest since it wasn't included here, but I would add this to your deployment to ensure that the controller has enough time to shut down gracefully:
```yaml
terminationGracePeriodSeconds: 60
```
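For context, here's roughly where that field sits in a Deployment. This is only a sketch; the image tag, labels, and volume paths are placeholders to adapt to your actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: omada-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: omada-controller
  strategy:
    type: Recreate  # don't let old and new pods overlap on the same data volume
  template:
    metadata:
      labels:
        app: omada-controller
    spec:
      # Time between SIGTERM and SIGKILL; the default of 30s can be too
      # short for the controller and its embedded MongoDB to stop cleanly.
      terminationGracePeriodSeconds: 60
      containers:
        - name: omada-controller
          image: mbentley/omada-controller:5.13  # placeholder tag; use your actual image
          volumeMounts:
            - name: omada-data
              mountPath: /opt/tplink/EAPController/data  # assumed data path
      volumes:
        - name: omada-data
          persistentVolumeClaim:
            claimName: omada-data  # placeholder claim name
```

The `Recreate` strategy is worth calling out too: with the default rolling update, the new pod can try to start against the data volume while the old one is still shutting down, which is exactly the kind of lifecycle issue described below.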
Unexpected shutdowns of MongoDB can also cause corruption, but the most common cause I see is general lifecycle issues from stopping and re-creating the container.

Your best bet would be to restore from a backup, if you have one. If you don't, you really should, because the software has a built-in feature to back up the configuration on a regular basis. Check your database's persistent data folder for any auto backup files.

Otherwise, you're likely going to need to attempt to repair your data, which isn't easy. A Google search for the error turns up a few sites that mention ways to try to fix it. Unfortunately, the controller is only validated by TP-Link against a much older MongoDB than I would like; newer versions are more robust about repairing errors automatically. Any repair is going to involve running a pod that bypasses the default entrypoint and command to run utilities against the database, which is a bit beyond the scope of what I can realistically help with.
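For reference, a repair attempt along those lines might look something like the sketch below: a one-off pod that skips the controller image entirely and runs `mongod --repair` against the data volume. The MongoDB image tag, `dbpath`, and PVC name are all assumptions here, so match them to your deployment and to the MongoDB major version your controller actually runs, and work on a copy of the data if at all possible:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongo-repair
spec:
  restartPolicy: Never
  containers:
    - name: repair
      image: mongo:3.6  # must match the major version the controller's db was written with
      # Override the image's default entrypoint/command to run a repair
      # instead of starting a normal mongod server.
      command: ["mongod"]
      args: ["--repair", "--dbpath", "/data/db"]
      volumeMounts:
        - name: omada-data
          mountPath: /data/db
          subPath: db  # assumed layout: MongoDB files under a "db" folder in the data volume
  volumes:
    - name: omada-data
      persistentVolumeClaim:
        claimName: omada-data  # assumed PVC name from your deployment
```

Scale the controller down first (e.g. `kubectl scale deployment omada-controller --replicas=0`) so nothing else has the database files open, run the pod, check its logs, then delete it and scale the controller back up.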