Optimizing Security Onion: Solving Logstash, Elasticsearch & Redis Memory Issues #14322
Replies: 1 comment
You can also navigate to SOC > Administration > Configuration > logstash > config > pipeline_x_batch_x_size, and reduce the value there for better optimization.
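For context on why a smaller batch size helps: Logstash keeps roughly pipeline.workers × pipeline.batch.size events in flight per pipeline, so heap pressure scales with both values. Below is a minimal back-of-the-envelope sketch; the worker count, batch size, and average event size are illustrative assumptions, not values from this deployment.

```python
# Rough estimate of Logstash in-flight event memory.
# Heap pressure scales with workers * batch_size * avg_event_size.
# All numbers below are illustrative placeholders, not values from this setup.

def inflight_memory_mb(workers: int, batch_size: int, avg_event_kb: float) -> float:
    """Approximate memory (MB) held by in-flight events in one pipeline."""
    return workers * batch_size * avg_event_kb / 1024

# Example: 8 worker threads, batch size 125 (Logstash default), ~5 KB per event.
print(f"{inflight_memory_mb(8, 125, 5):.1f} MB in flight")
# Halving the batch size roughly halves this figure:
print(f"{inflight_memory_mb(8, 64, 5):.1f} MB in flight")
```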
🛠️ Resource Optimization on a Distributed Security Onion Architecture
📌 Context
I have set up a distributed Security Onion architecture with the following configuration:
The overall status of the services is OK, but several resource management issues have been encountered.
1️⃣ Logstash Error:
🔹 Solution Applied: increased the Logstash heap (lsheap) from 6 GB to 8 GB.
2️⃣ Elasticsearch Error:
🔹 Solution Applied: sized the shard count against the Heap*20 guideline (roughly 20 shards per GB of JVM heap).
3️⃣ Redis Error:
🔹 Solution Applied: raised the Redis maxmemory limit (initially to 2048m).
🚨 Issue: Despite this optimization, the problem comes back after a few days (see the Redis memory check sketched below).
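Since the Redis problem keeps returning, it may help to watch how close used memory gets to maxmemory over time rather than only raising the limit. Here is a hedged sketch using redis-py; the host, port, and 90% threshold are assumptions, and on a Security Onion manager the connection may need the node's actual bind address and auth settings rather than a plain local connection.

```python
# Sketch: compare Redis used memory against maxmemory and report the eviction policy.
# Assumptions: redis-py is installed and Redis is reachable on localhost:6379
# without auth/TLS; adjust connection settings for your deployment.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
mem = r.info("memory")

used = mem["used_memory"]
limit = mem["maxmemory"]          # 0 means "no limit configured"
policy = mem.get("maxmemory_policy", "unknown")

print(f"used_memory : {used / 1024**2:.0f} MB")
print(f"maxmemory   : {limit / 1024**2:.0f} MB" if limit else "maxmemory   : unlimited")
print(f"policy      : {policy}")

if limit and used / limit > 0.9:
    print("WARNING: Redis is above 90% of maxmemory; the downstream "
          "Logstash/Elasticsearch pipeline is probably not draining the queue fast enough.")
```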
⚙️ Current Configuration After Optimization
🔹 Elasticsearch heap (esheap): 8,381 MB (unchanged)
🔹 Logstash heap (lsheap): increased from 6 GB to 8 GB
🔹 Redis maxmemory: 8,192 MB
🔹 Number of Elasticsearch shards: 120 (checked against the Heap*20 guideline in the sketch below)
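As a sanity check on the shard count, the Heap*20 guideline referenced above (roughly 20 shards per GB of JVM heap) can be worked through with these values. This is just a restatement of the general rule of thumb, not a hard Elasticsearch limit for every workload.

```python
# Worked check of the "Heap * 20" shard guideline with the values reported above.
# 8,381 MB of Elasticsearch heap and 120 shards come from this post; the
# 20-shards-per-GB figure is the general guideline, not a hard limit.

es_heap_mb = 8_381
shard_count = 120

heap_gb = es_heap_mb / 1024
recommended_max_shards = heap_gb * 20

print(f"Heap: {heap_gb:.1f} GB -> guideline max ~{recommended_max_shards:.0f} shards")
print(f"Current shards: {shard_count} "
      f"({'within' if shard_count <= recommended_max_shards else 'above'} the guideline)")
```

With ~8.2 GB of heap the guideline allows roughly 160 shards, so 120 shards is within that bound on its own; whether it is appropriate still depends on shard sizes and node count.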
🚀 Request for Optimization and Best Practices
Despite these optimizations, the Redis issue persists. I am looking for recommendations on:
🔹 Redis tuning (maxmemory-policy, appendonly, save, ...; current values can be inspected as sketched below)
🔹 Elasticsearch shard sizing (is 120 shards too high for this setup?)
Any help or insights would be greatly appreciated. Thank you in advance! 🙏