Logstash Sending Bulk Request Error - Redis Backing Up #12358
Replies: 1 comment
-
Let's bring your redis_maxmemory down to 1000m and your lsheap down to 8G, then we can test some other changes. Try increasing your ls_pipeline_batch_size from 125 up to 750. Once the changes are applied, keep an eye on your Redis queue using InfluxDB. We want to see the Redis queue with peaks and valleys rather than growing indefinitely like you seem to be experiencing. On the right-hand side of the same InfluxDB screen you'll see a table for 'most recent container events' that will show a message if Logstash experiences an 'out of memory' error. If you do see that during this test, you can try increasing lsheap a bit more, to 10G.
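For reference, here's a minimal sketch of those suggested values, assuming they map onto the same setting names listed in the original post below (with 8G written as 8000m to match the existing lsheap format):

redis_maxmemory: 1000m
lsheap: 8000m
ls_pipeline_batch_size: 750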
-
Security Onion Version - 2.4.40
Standalone
System Cores: 48
System RAM: 755G
Logstash settings:
ls_pipeline_batch_size: 125
lsheap: 20000m
ls_pipeline_workers: 48
Redis Settings:
redis_maxmemory: 12000m
Error:
[ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request but Elasticsearch appears to be unreachable or down {:message=>"Elasticsearch Unreachable: [https://mssoc01:9200/_bulk][Manticore::SocketTimeout] Read timed out", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :will_retry_in_seconds=>2}
Issue: Redis starts to back up when this error occurs, to the point where I have to delete the logs in the /nsm/suricata directory and then restart the Suricata service. This error happens when there's an event in the Suricata logs that keeps looping continuously. I end up having to tune out the alert that's looping to stop the problem from happening again. I would like to find a permanent solution to this issue. I know it's related to the bulk request that Logstash tries to send to Elasticsearch. Elasticsearch is not down, so what's probably happening is that the events are taking too long to process.
Any help is much appreciated