Curator does not seem to close logs-syslog-so indices. #11894
-
Version: 2.4.10
Installation Method: Security Onion ISO image
Description: configuration
Installation Type: Distributed
Location: on-prem with Internet access
Hardware Specs: Exceeds minimum requirements
CPU: 8
RAM: 16
Storage for /: 300 GB
Storage for /nsm: 1.7 TB
Network Traffic Collection: span port
Network Traffic Speeds: Less than 1 Gbps
Status: Yes, all services on all nodes are running OK
Salt Status: No, there are no failures
Logs: No, there are no additional clues

Detail:
Hi, I have tried to search for this, but I mostly found topics about old versions and nothing recent. We have a distributed setup, and after a couple of months we noticed that retention was still at 30 days/365 days with Curator disabled. So I enabled Curator from the grid config and changed several 365-day values to 90 days, but after that threshold was reached the used disk space did not flatten out; it kept increasing. In Elastic index management I noticed that most of the logging was in logs-syslog-so, so I went looking for it in the Curator logs but found nothing, only a lot of entries stating that actions were skipped because there was nothing to close. There are actions for this index in /opt/so/conf/curator/action/, but I cannot find what is supposed to run this action or why it is not running. Does anyone know if this is a known issue, or if there is a standard quick config/fix for Curator? Or does anyone know what I can try or troubleshoot? Regards,
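For reference, a minimal troubleshooting sketch for the situation described above: confirm which indices are actually growing and whether Curator is attempting to close them at all. The action path is the one from this post; the Curator log location and the so-elasticsearch-query helper are assumptions that may differ on your grid, so adjust accordingly.

```bash
# Sketch only; paths and helpers are assumptions, adjust for your deployment.

# Largest indices first. so-elasticsearch-query (assumed available on a
# Security Onion manager) wraps curl against the local Elasticsearch;
# substitute plain curl with your credentials if it is not present.
sudo so-elasticsearch-query '_cat/indices?v&s=store.size:desc' | head -n 20

# The Curator action files mentioned in the post:
ls -l /opt/so/conf/curator/action/

# Search the Curator log (assumed location) for any mention of the index:
sudo grep -i 'logs-syslog' /opt/so/log/curator/curator.log
```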
-
Let's try this:
-
Hi,
-
Thank you very much for the help. The first index has moved to the cold phase, so I think the deletion will also work.
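If it is useful for anyone checking the same thing, the standard Elasticsearch ILM explain API reports which lifecycle phase an index is currently in. A minimal sketch, assuming direct curl access; the URL and credential variables are placeholders for your deployment:

```bash
# Show the current ILM phase (hot/warm/cold/delete) for matching indices.
# URL and auth are placeholders; adjust for your deployment.
curl -sk -u "$ES_USER:$ES_PASS" \
  "https://localhost:9200/logs-syslog-so*/_ilm/explain?pretty"
```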
I think you would need to reindex the logs-syslog-so index into a differently named index, or delete it, so that the new data stream can be created. Otherwise, Elastic will keep using the template for the old index and Logstash will continue writing to it, since it matches the logs-syslog-so name.
This script may work for you, but please test it before using it in production.
https://github.com/weslambert/securityonion-elastic-misc/blob/2.x/so-elasticsearch-reindex
For an index of that size it may take a very long time.
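For context, the general reindex-then-delete approach described above can be done with the standard Elasticsearch _reindex API. Below is a minimal sketch of that underlying call (not the linked script); the URL, credentials, and destination index name are placeholders, and wait_for_completion=false makes Elasticsearch return a task ID instead of blocking, which matters for an index of this size. Test outside production first. The destination name here deliberately does not start with logs-syslog-so, to avoid matching the same index template pattern.

```bash
# Sketch of the reindex step only; URL, auth, and the destination index
# name are placeholders. Returns a task ID because wait_for_completion=false.
curl -sk -u "$ES_USER:$ES_PASS" -XPOST \
  "https://localhost:9200/_reindex?wait_for_completion=false" \
  -H 'Content-Type: application/json' -d '
{
  "source": { "index": "logs-syslog-so" },
  "dest":   { "index": "reindexed-logs-syslog-so" }
}'

# Check progress with the tasks API using the returned task ID:
# curl -sk -u "$ES_USER:$ES_PASS" "https://localhost:9200/_tasks/<task_id>?pretty"
```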