Replies: 1 comment 1 reply
Update: removing the minion file and the salt-key for the superfluous entry helped. The manager is up and running. Still waiting on results from the new sensor.
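A minimal sketch of that cleanup on the manager, assuming the Security Onion 2.4 pillar layout; the minion ID `oldsensor_sensor` is a placeholder, and paths should be verified before deleting anything:

```shell
# List accepted minion keys to spot the superfluous/duplicate entry
sudo salt-key -L

# Delete the stale key (substitute the actual offending minion ID)
sudo salt-key -d oldsensor_sensor

# Remove the matching minion pillar file (path assumed for SO 2.4;
# confirm it belongs to the stale entry before removing)
sudo rm /opt/so/saltstack/local/pillar/minions/oldsensor_sensor.sls

# Restart the salt-master so pillar rendering picks up the change
sudo systemctl restart salt-master
```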
Version: 2.4.140
Installation Method: Security Onion ISO image
Description: configuration
Installation Type: Distributed
Location: cloud
Hardware Specs: Exceeds minimum requirements
CPU: 24
RAM: 92
Storage for /: 500 GB
Storage for /nsm: 3-7 TB
Network Traffic Collection: tap
Network Traffic Speeds: 1Gbps to 10Gbps
Status: No, one or more services are failed (please provide detail below)
Salt Status: Yes, there are salt failures (please provide detail below)
Logs: Yes, there are additional clues in /opt/so/log/ (please provide detail below)
Detail
We're almost done upgrading our nodes to 2.4 in our hybrid distributed environment, which had been running great. (The manager is in AWS using the AWS image; the sensor nodes are a combination of AWS images and on-prem ISO installs.) We had some trouble on the most recent sensor node, resulting in repeated reinstalls.
Now our manager won't start, so the web config is not available. The salt master logs indicate a conflict with that node:
and slightly later in the log:
salt.pillar ... [CRITICAL]... Pillar render error: Rendering SLS 'node_data.ips' failed. Please see master log for details.
Is there a way, at the command line on the master, to remove the offending conflict manually when the web interface is not available?
Thanks,
Larry