-
Are there any errors in the Logstash log on your search node(s)? Increasing the Redis maxmemory will allow for a larger queue, but if there is an error somewhere in the pipeline, all you will get is a larger Redis count.
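A quick way to check both halves of that advice might look like the sketch below. The log path is the Security Onion default; the so-redis container name and the logstash:unparsed key are assumptions inferred from the logs quoted later in this thread:

```bash
# Recent errors/warnings from Logstash on the search node
# (default Security Onion log path).
grep -E 'ERROR|WARN' /opt/so/log/logstash/logstash.log | tail -n 20

# Queue depth on the manager of the list Logstash writes into.
# "so-redis" is the usual container name on Security Onion 2.4;
# confirm with `docker ps` first.
docker exec so-redis redis-cli llen logstash:unparsed
```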
-
Yes, there is an error in the log.
-
Thank you for your response. Here is what I got from tail -f /opt/so/log/elasticsearch/securityonion.log:

Ncat: Version 7.92 ( https://nmap.org/ncat )
[2024-10-30T13:47:46,862][WARN ][org.elasticsearch.cluster.coordination.ClusterFormationFailureHelper] master not discovered yet: have discovered [{securityonionsearch}{kuXUetoaS5SzI16tgFRFtg}{TCw6dyRXTtaWYofuHi-aMQ}{securityonionsearch}{securityonionsearch}{localipofsearchnode:9300}{dhit}{8.14.3}{7000099-8505000}]; discovery will continue using [publicip:9300] from hosts providers and [] from last-known cluster state; node term 14, last-accepted version 3136 in term 14; for troubleshooting guidance, see https://www.elastic.co/guide/en/elasticsearch/reference/8.14/discovery-troubleshooting.html

The manager node has two NICs, one for the public IP and the other for the private IP, and I am accessing the GUI over the public IP. (At the time of deployment the manager had only one NIC, on the local IP; I added the second NIC after connecting the search and sensor nodes.)
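Since the warning shows discovery targeting publicip:9300 while the node itself binds the local IP, a basic reachability test can localize the mismatch. This is only a sketch; the MANAGER_PRIVATE_IP and MANAGER_PUBLIC_IP placeholders are hypothetical and must be replaced with your actual addresses:

```bash
# From the search node: probe the Elasticsearch transport port on both of
# the manager's addresses. Ncat 7.92 (the version in the banner above)
# supports -z for a connect-only test.
nc -zv MANAGER_PRIVATE_IP 9300
nc -zv MANAGER_PUBLIC_IP 9300

# On the manager: confirm which addresses are actually listening on 9300.
ss -tlnp | grep ':9300'
```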
-
Version: 2.4.100
Installation Method: Security Onion ISO image
Description: configuration
Installation Type: Distributed
Location: cloud
Hardware Specs: Exceeds minimum requirements
CPU: 16
RAM: 64 GB
Storage for /: 500 GB
Storage for /nsm: 250 GB
Network Traffic Collection: tap
Network Traffic Speeds: 1Gbps to 10Gbps
Status: Yes, all services on all nodes are running OK
Salt Status: No, there are no failures
Logs: Yes, there are additional clues in /opt/so/log/ (please provide detail below)
Detail:
I have been using Security Onion for more than a week now and everything was working fine. After the complete installation of SOC, every service was running OK and I had three days of data. On the 4th day I set the Redis maxmemory to 2147483648 (2 GiB) via Redis > Config > maxmemory in the GUI. Since then I have been unable to accumulate 7 days of data; the dashboard still only shows the first three days, and so-redis-count reports 2038252. Below is the log from Logstash.
/opt/so/log/logstash/logstash.log contains:

[WARN ][logstash.outputs.redis ] Failed to send backlog of events to Redis {:identity=>"redis://@securityonionmanager:6379/0 list:logstash:unparsed", :exception=>#<Redis::CommandError: OOM command not allowed when used memory > 'maxmemory'.>, :backtrace=>[
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:162:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis.rb:270:in `block in send_command'",
"org/jruby/ext/monitor/Monitor.java:82:in `synchronize'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis.rb:269:in `send_command'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/commands/lists.rb:86:in `rpush'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-redis-5.0.0/lib/logstash/outputs/redis.rb:152:in `flush'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:221:in `block in buffer_flush'",
"org/jruby/RubyHash.java:1610:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:216:in `buffer_flush'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:159:in `buffer_receive'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-redis-5.0.0/lib/logstash/outputs/redis.rb:209:in `send_to_redis'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-codec-json-3.1.1/lib/logstash/codecs/json.rb:69:in `encode'",
"/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:48:in `block in encode'",
"org/logstash/instrument/metrics/AbstractSimpleMetricExt.java:74:in `time'",
"org/logstash/instrument/metrics/AbstractNamespacedMetricExt.java:68:in `time'",
"/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:47:in `encode'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-redis-5.0.0/lib/logstash/outputs/redis.rb:123:in `receive'",
"/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:104:in `block in multi_receive'",
"org/jruby/RubyArray.java:1981:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:104:in `multi_receive'",
"org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:304:in `block in start_workers'"]}

How can I fix this?
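For context, "OOM command not allowed when used memory > 'maxmemory'" means Redis has hit its configured memory ceiling and, under a non-evicting policy, refuses further writes, so Logstash on the manager can no longer enqueue events; raising maxmemory only buys a deeper queue unless something downstream (Logstash/Elasticsearch on the search node) is actually draining it. A minimal way to inspect the state, assuming the usual so-redis container name on Security Onion 2.4 (verify with docker ps, and add authentication flags if your Redis requires them):

```bash
# Configured ceiling vs. live usage, in bytes.
docker exec so-redis redis-cli config get maxmemory
docker exec so-redis redis-cli info memory | grep -E '^(used_memory|maxmemory):'

# Depth of the queue named in the error above.
docker exec so-redis redis-cli llen logstash:unparsed

# Eviction policy: "noeviction" is what turns a full Redis into OOM errors on writes.
docker exec so-redis redis-cli config get maxmemory-policy
```

If the queue depth keeps climbing while Elasticsearch on the search node cannot join the cluster (see the discovery warning earlier in this thread), the OOM error is a symptom rather than the cause.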