-
Version: 2.4.30
Installation Method: Security Onion ISO image
Description: configuration
Installation Type: Distributed
Location: on-prem with Internet access
Hardware Specs: Exceeds minimum requirements
CPU: 20
RAM: 64GB
Storage for /: 320GB
Storage for /nsm: 8TB
Network Traffic Collection: tap
Network Traffic Speeds: 1Gbps to 10Gbps
Status: Yes, all services on all nodes are running OK
Salt Status: No, there are no failures
Logs: No, there are no additional clues

Detail:
This is to document my experience with using Receiver nodes. I wanted this HA feature, which allows for downtime on the Manager/ManagerSearch without losing data. It occurred to me that if I offloaded the Logstash processing from the ManagerSearch to the receiver nodes, there would be a performance increase on the ManagerSearch node, and indeed it worked. Logstash load on my system over time was about 1/3 of total system load. After deploying a receiver node, I had to allow elastic_agent_data from anywhere (roaming agents) and Redis traffic from the ManagerSearch to the receiver, since Redis is a pull technology. It seems like this could be done at integration time. Everything works great.

So here is a small frustration. Under the SOC menu / Elastic Fleet / Settings / Outputs / grid-logstash, receiver nodes are published to Elastic Agents. And this happens surprisingly fast: change receivers, and the fielded agents obey in mere moments. What I want to do is publish only the receiver nodes here, but by default the salt engine replaces the list of receivers with the receivers plus the ManagerSearch node (assuming here that this is also true for Manager nodes). I want to leave all Logstash processing to the receiver nodes and exclude the ManagerSearch node. So where in the configuration menu / salt files can I control the published list of Logstash receiver nodes that overwrites the Elastic Fleet settings?
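For illustration, here is a minimal sketch of what I mean; the hostnames and port below are placeholders I made up, not my actual grid.

```yaml
# Hypothetical sketch only -- hostnames and port are placeholders.
# What salt currently publishes under Elastic Fleet / Settings / Outputs / grid-logstash:
hosts:
  - "receiver1.example.local:5055"
  - "receiver2.example.local:5055"
  - "managersearch.example.local:5055"   # the entry I want excluded
---
# What I would like the agents to receive instead (receivers only):
hosts:
  - "receiver1.example.local:5055"
  - "receiver2.example.local:5055"
```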
-
Would removing the Logstash role from the ManagerSearch effectively do the same thing? If salt is configured to publish all Logstash nodes, simply remove the role from the Manager. Is this possible?
-
Another salt issue with the configuration of Fleet, similar to the round-robin receivers under Outputs, is the list of Fleet server hosts on the same page. Salt insists on populating it with: The issue is that the second entry, with just the bare HOSTNAME, creates a LOT of client-side errors, because the client cannot resolve the host: the client's DNS configuration has no search directive matching the search domain of the Security Onion system. The clients do resolve after trying the FQDN. This does not cause a functional issue; it is just bad form to fill client logs with unnecessary errors.
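Purely as an illustration (placeholder names; 8220 is just the default Fleet Server port), the list ends up with both an FQDN entry and a bare-hostname entry, and it is the bare one that remote clients cannot resolve:

```yaml
# Illustrative only -- placeholder names, not my actual grid.
fleet_server_hosts:
  - "https://managersearch.example.local:8220"  # FQDN: resolves everywhere
  - "https://managersearch:8220"                # bare hostname: fails for clients without a
                                                # matching DNS search domain, logging errors
```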
-
We have an open issue to disable incoming connections to the manager but keep Logstash running when using receiver nodes: #12033
-
Also, you can disable auto-configuration of the Logstash Outputs and Fleet Host URLs by enabling Advanced options under SOC Configuration and navigating to Elastic Fleet. With auto-configuration disabled, those Elastic Fleet config options are no longer automatically updated, so any changes you make to those options in the Elastic Fleet settings will not get overridden.
-
This has been answered, thanks. I just want to reiterate how remarkable the change in system load and CPU use was when the Logstash workload was moved to the receiver. The load on the ManagerSearch (both Elastic and Logstash) dropped from the mid 30s down to 8 this morning, simply by redirecting the Elastic Agent outputs to the receiver. It could be the characteristics of my hardware, and I had a concern that the receiver's Logstash is not applying the same processing to the streams, but the data looks good so far. The receiver is running on a Proxmox VM with 4 cores and 16GB RAM; I bumped the Java heap from 500MB to 4192MB. I failed to mention earlier that when I integrated Elastic Defend on about 400 workstations, Logstash on the ManagerSearch crashed twice, so I changed its Java heap from 1000MB to 4192MB. No issues since.
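For reference, the change amounts to raising the Logstash JVM heap. A rough sketch of the kind of setting involved (the key names here are my guess, not verified against the actual salt pillar schema; only the heap values reflect what I set):

```yaml
# Hypothetical sketch -- key names are a guess, only the heap value reflects my change.
logstash:
  settings:
    lsheap: 4192m   # raised from 500m on the receiver (and from 1000m on the ManagerSearch)
```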