Broke my Security-Onion setup #12381
I have a Security Onion (standalone) setup with two forwarder nodes hosted on separate machines. A couple of hours after adding them, disk usage was at 90% and had exceeded the flood-stage watermark. The Kibana web UI would no longer let me delete indices to free up space, so I used curl to delete a large Zeek index, after which Kibana was usable again. I then reduced the log retention period and disabled (not unenrolled) the Elastic Agents on the newly added nodes to reduce the incoming log volume. However, the fleet server agent showed as "Inactive". I found the agent in a running state when I ran
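For anyone who hits the same lockout: once disk usage crosses the flood-stage watermark, Elasticsearch marks indices read-only, which is why Kibana refuses to delete anything. Below is a sketch of the two curl calls involved; the host, index name, and credentials are hypothetical placeholders (not values from this thread), and the commands are printed for review rather than executed, since the delete is destructive.

```shell
# All values below are placeholders; substitute your own deployment's.
ES="https://localhost:9200"     # Elasticsearch endpoint
INDEX="logs-zeek-default"       # hypothetical oversized Zeek index
AUTH="elastic:changeme"         # placeholder credentials

# 1) Delete the oversized index to free disk space.
DELETE_CMD="curl -k -u $AUTH -X DELETE $ES/$INDEX"

# 2) After disk usage drops, clear the read-only block that the
#    flood-stage watermark places on indices.
UNBLOCK_CMD="curl -k -u $AUTH -X PUT $ES/_all/_settings \
  -H 'Content-Type: application/json' \
  -d '{\"index.blocks.read_only_allow_delete\": null}'"

# Print for review; run them manually once the values are correct.
echo "$DELETE_CMD"
echo "$UNBLOCK_CMD"
```

Recent Elasticsearch versions lift the read-only block automatically once disk usage falls below the high watermark, but clearing it explicitly is harmless and unblocks Kibana immediately.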
I tried restarting the container and then the machine. Finally, I tried to re-enroll the fleet server (on the standalone Security Onion installation) using these commands:
That also didn't work (I no longer have the logs). It said the port was already in use, which made sense, as the Docker container was using it. I tried to use the

Thanks for reading up to this point. I am very new to the whole Security Onion setup and am seeking help on how I should go about recovering the system.
Have you tried to reset fleet with
I was able to fix it by looking at the logs from `salt-master` and `salt-minion`. Two YAML files were corrupted, which I am pretty sure I caused when I ran `so-elastic-fleet-setup`: `/opt/so/saltstack/local/pillar/minions/sec1-s-cc_standalone.sls` and `/opt/so/saltstack/local/pillar/global/soc_global.sls`. They both had duplicate keys. I manually removed the duplicate keys, and the cluster came back.

However, the fleet-server agent was still missing (in `so-status`), and the `so-elastic-fleet` Docker container was missing as well. I had to run `so-docker-refresh`, and things came back to normal.
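Duplicate top-level keys like the ones described above can be spotted mechanically before Salt chokes on them. A minimal sketch follows; the sample file and its keys are invented for illustration, not the actual contents of those `.sls` files.

```shell
# Create a sample pillar file containing a duplicated top-level key.
# (Contents are made up for illustration.)
cat > /tmp/sample_pillar.sls <<'EOF'
elasticsearch:
  retention_days: 30
logstash:
  enabled: true
elasticsearch:
  retention_days: 7
EOF

# Top-level YAML keys start in column 0 and end with ':'.
# Any key printed by this pipeline appears more than once.
grep -oE '^[A-Za-z0-9_-]+:' /tmp/sample_pillar.sls | sort | uniq -d
```

Running the same `grep` pipeline against the real pillar files should print nothing once the duplicates are gone; any output names a key that still occurs more than once.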