-
Sensor doesn't send events after reboot #12475
Replies: 2 comments · 13 replies
-
Two hours after the reboot, the sensor started sending events. What could be the reason for such a delay? I did not observe any problems in the logs under /opt/so/log.
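For anyone checking the same thing later, a recursive grep over that directory surfaces warnings around the reboot window quickly; /opt/so/log is the directory mentioned above, and the pattern below is only an example:
sudo grep -riE "ERROR|WARN" /opt/so/log/logstash/ | tail -n 50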
-
[2024-03-04T10:00:11,808][WARN ][logstash.outputs.redis ] Failed to flush outgoing items {:outgoing_count=>125, :exception=>"Redis::CommandError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:162:in
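A Redis::CommandError on flush from the Logstash Redis output is often Redis refusing writes, for example because it has hit its configured memory limit. One way to check usage against that limit (the so-redis container name is an assumption about the default Security Onion naming):
sudo docker exec so-redis redis-cli info memory | grep -E "used_memory_human|maxmemory_human"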
-
Also check it on your search node; it looks like Redis is getting backed up.
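The Logstash log on the search node can be tailed directly; the path below assumes the standard Security Onion layout:
sudo tail -n 100 /opt/so/log/logstash/logstash.log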
-
Hello,
On the search node:
[2024-03-12T10:37:54,213][INFO ][logstash.javapipeline ] Pipeline
After several such messages:
[2024-03-12T10:37:54,473][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://so_elastic:xxxxxx@manager:9200/]}}
[2024-03-12T10:37:58,887][INFO ][logstash.inputs.redis ] Registering Redis {:identity=>"redis://@manager:9696/0 list:logstash:unparsed"}
Later there are only timeouts or connection refused errors on the search node or the manager, like this:
[2024-03-12T10:41:06,285][WARN ][logstash.inputs.redis ] Redis connection error {:message=>"Error connecting to Redis on manager:9696 (Redis::TimeoutError)", :exception=>Redis::CannotConnectError}
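To separate a network problem from Redis itself being down, the port and the container can be checked from the search node and on the manager (port 9696 is taken from the log above; the container name pattern is an assumption):
nc -zv manager 9696
sudo docker ps | grep redis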
-
Looks like there are some issues connecting to Redis on the manager and connecting to Elasticsearch on the manager as well.
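To confirm Elasticsearch is answering at all, a quick query from the manager helps; this assumes the so-elasticsearch-query helper is available on your version (it wraps the authentication), otherwise a curl against https://manager:9200 with the Elastic credentials does the same job:
sudo so-elasticsearch-query /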
-
[2024-03-12T10:52:11,518][INFO ][org.apache.lucene.util.VectorUtilPanamaProvider] Java vector incubator API enabled; uses preferredBitSize=256
[2024-03-12T10:52:29,696][INFO ][org.elasticsearch.node.Node] initialized
(but this did not happen for more than 2 minutes)
[elastic_agent][error] Unit state changed log-default-logfile-logs-9cacb7c8-c04e-4ca4-be26-a9784360b29f (STARTING->FAILED): Failed: pid '695069' missed 3 check-ins and will be killed
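When a unit goes STARTING->FAILED because of missed check-ins, the agent's own logs usually explain why; assuming the agent is installed as a systemd service (the default when it is enrolled), recent entries can be pulled with:
sudo journalctl -u elastic-agent -n 200 --no-pager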
-
Would you let me know what the following returns? Run this on your manager and search node:
-
[adminso@manager ~]$ sudo salt * elastic-agent status
-
Okay, I forgot to add one thing in my command:
-
30 minutes after reboot:
[root@manager adminso]# sudo salt * cmd.run "elastic-agent status"
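One small aside: depending on the shell, the target glob is usually quoted so that salt rather than bash expands it, i.e.:
sudo salt '*' cmd.run "elastic-agent status"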
-
Have you tried stopping/starting/restarting the Elastic Agent on the sensors? If so, do they still show as degraded?
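For reference, a restart can be done either through the agent's own CLI or through systemd (the unit name below assumes the default created when the agent was enrolled):
sudo elastic-agent restart
sudo systemctl restart elastic-agent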
-
I tried deploying the same configuration of 3 servers (manager, search, and sensor) on 2.4.60, and at first glance the problem remained. Then I tried restarting the Elastic Agent, and it looks like that helped. Thank you!
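For a grid-wide restart from the manager, the same thing can be pushed out through salt, reusing the cmd.run pattern from earlier in the thread:
sudo salt '*' cmd.run "elastic-agent restart"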
-
Version: 2.4.50
Installation Method: Security Onion ISO image
Description: other (please provide detail below)
Installation Type: Distributed
Location: other (please provide detail below)
Hardware Specs: Exceeds minimum requirements
CPU: 16
RAM: 16
Storage for /: 200
Storage for /nsm: 800
Network Traffic Collection: span port
Network Traffic Speeds: Less than 1Gbps
Status: Yes, all services on all nodes are running OK
Salt Status: No, there are no failures
Logs: No, there are no additional clues
Detail:
Hello,
I have installed the new version, SO 2.4.50. The deployment consists of 3 servers: manager, search, and sensor. After a fresh installation, a message appears in the grid menu telling me to restart the nodes. Before restarting, the sensor sends Zeek and Suricata data. After the reboot, all statuses return to OK, but no data is received from the modules, only system.syslog.
I tried reinstalling the sensor, but the situation does not change after the reboot.
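One check that helps separate a capture problem from a shipping problem in this state is whether Zeek and Suricata are still writing locally on the sensor after the reboot; the paths below are the usual Security Onion locations and may differ on a given install:
ls -lh /nsm/zeek/logs/current/
ls -lh /nsm/suricata/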