/nsm on manager node fills up about monthly #7929
-
Hi all, we're running an SO grid of 5 forward nodes, 1 search node, and 1 manager node, all on 2.3.110. Periodically, /nsm on our manager node fills up (the manager hardware is virtual: 8 cores, 16 GB RAM, 2 TB disk, with /nsm at 1.8 TB, and I'm told expanding the disk again is tricky). It self-maintains at about 80% (Grafana shows it doing a couple of cleanups per day), but usage slowly grows over a month or so until /nsm fills and the web GUI fails, reporting grid "fail" for all nodes. When that happens we generally clean up Wazuh by running a command on the manager node.

What's our best approach to this? Should we just bite the bullet and expand /nsm again? To what size? How can we prevent this problem from recurring on the larger manager /nsm partition, as it did after the last time we expanded? Should we be doing something so it doesn't fill in the first place? I know /nsm grooming for pcaps and Zeek on forward nodes is automatic; could I have disabled a cleanup job somewhere? Any other advice?

Thanks, Larry
-
Wazuh logs usually don't take up that much space on a manager. If you have Wazuh agents sending their logs to the manager, then you might consider having those agents send their logs to another server in your deployment. Another option might be to simply create a cron job to purge old Wazuh logs.
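The cron-job option above could be sketched as follows. This is a minimal sketch, not a Security Onion-supplied script: the log root `/var/ossec/logs` (a common Wazuh default) and the 30-day retention are assumptions you should verify against your own deployment before scheduling anything destructive.

```shell
#!/bin/sh
# Hypothetical Wazuh log purge -- verify paths and retention on your
# own manager before using. Wazuh rotates its logs into dated
# subdirectories and gzips the old files, so deleting aged *.gz files
# reclaims most of the space while leaving current logs untouched.

cleanup_wazuh_logs() {
    log_dir="$1"        # assumed log root, e.g. /var/ossec/logs
    retention_days="$2" # keep this many days of compressed logs

    # -mtime +N matches files modified more than N days ago;
    # -name '*.gz' restricts deletion to already-rotated logs.
    find "$log_dir" -type f -name '*.gz' -mtime "+$retention_days" -delete
}

# Example invocation (left commented so the script is safe to source):
# cleanup_wazuh_logs /var/ossec/logs 30
```

A crontab entry such as `0 2 * * * /usr/local/sbin/purge-wazuh-logs.sh` would run it nightly; the real tuning knob is picking a retention window that keeps /nsm comfortably below the point where the built-in cleanups stop keeping up.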