-
Version: 2.4.60
Installation Method: Security Onion ISO image
Description: upgrading
Installation Type: Distributed
Location: on-prem with Internet access
Hardware Specs: Exceeds minimum requirements
CPU: 8
RAM: 32
Storage for /: 500GB
Storage for /nsm: 500GB
Network Traffic Collection: span port
Network Traffic Speeds: Less than 1Gbps
Status: No, one or more services are failed (please provide detail below)
Salt Status: Yes, there are salt failures (please provide detail below)
Logs: Yes, there are additional clues in /opt/so/log/ (please provide detail below)

Detail: Hi, today I decided to upgrade my SO installation (a distributed installation including manager, search, sensor and several IDH nodes) from 2.4.50 -> 2.4.60.
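(For reference, the upgrade itself was kicked off on the manager in the usual way - a minimal sketch, assuming the standard soup procedure:)

```
# Run on the manager node; the other grid nodes pick up the new version
# from the manager afterwards via salt.
sudo soup
```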
Checking soup.log I found this:
First I thought it was a timeout and rebooted the manager node, but alas, it didn't come up again :-( Grid status shows "Fault" for the manager node: Likewise, "so-status" shows "so-elastalert" as "missing". Running
counting all the way up to "(300/300)". The grid status shows
Any ideas on how to get the installation up & running again? Thanks much in advance for your help. PS: As for the other nodes in the grid, they're at 2.4.60 and show status green (OK) - I haven't rebooted any of them yet, however:
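(In case it helps with troubleshooting, this is roughly how I'm checking the failed service from the manager's CLI - a sketch:)

```
# Overall service status on the manager (this is where so-elastalert shows as "missing")
sudo so-status

# Look at the elastalert container directly, including exited containers
sudo docker ps -a | grep elastalert
```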
-
Sorry, forgot the logs mentioned in my previous post... (NB: Please note that I've removed all normal log entries from elastalert.log prior to the upgrade - the full elastalert.log including the normal status messages would be almost 6MB)
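(For reference, I trimmed the attached log roughly like this - a sketch; the exact elastalert log path under /opt/so/log/ is assumed and may differ on your grid:)

```
# Keep only warnings/errors/tracebacks from the ElastAlert log (path assumed)
sudo grep -iE "error|warn|traceback" /opt/so/log/elastalert/elastalert.log > elastalert_trimmed.log
```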
-
Addition to my post from yesterday: when I click on Kibana, ElasticFleet or OSQuery Manager in the main menu, instead of the various tools I only get
-
It looks like things started to fall apart where the following log entry appears in soup.log:
You may need to manually update the manager's salt mine with the command

```
sudo salt-call mine.update
```

then run

```
sudo so-checkin
```

to get everything back up to highstate.
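Once those have run, a quick way to confirm the grid has recovered (a sketch, run from the manager):

```
# All minions should answer, and so-elastalert should no longer show as missing
sudo salt '*' test.ping
sudo so-status
```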