How to correctly use Fortigate elastic agent integration with Security Onion? #11866
Replies: 1 comment 3 replies
-
Finally got it to work. In summary, here's what I did (these were two separate issues: one was the documents not being generated, the other was the fields not appearing). To fix the documents not being generated, I removed the integration and reinstalled it. To fix the fields not appearing, I copied over the index settings from my test environment, which is where the fields are defined. I noticed that the fortigate index in my production setup didn't have the same settings as my test setup, and copying them over fixed it.
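Roughly, the settings comparison looked like this. Index name, hosts, and credentials below are placeholders, not my exact values, and the setting in the final step is only an example of a dynamic setting; adjust to whatever the diff actually shows:

```shell
# 1. Dump the settings from the test environment's fortigate index:
curl -sk -u "$ES_USER:$ES_PASS" \
  "https://test-node:9200/logs-fortinet_fortigate.log-default/_settings?pretty" \
  > test-settings.json

# 2. Dump the same from production and diff to spot the mismatch:
curl -sk -u "$ES_USER:$ES_PASS" \
  "https://prod-node:9200/logs-fortinet_fortigate.log-default/_settings?pretty" \
  > prod-settings.json
diff test-settings.json prod-settings.json

# 3. Apply only the dynamic settings that differ. Static settings can't be
#    changed on a live index; those need a rollover or reindex.
#    "index.final_pipeline" here is just an illustrative dynamic setting:
curl -sk -u "$ES_USER:$ES_PASS" -X PUT \
  -H 'Content-Type: application/json' \
  "https://prod-node:9200/logs-fortinet_fortigate.log-default/_settings" \
  -d '{"index": {"final_pipeline": "my-custom-pipeline"}}'
```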
-
Hello,
I am running a standalone on-prem deployment, version 2.4.30 installed from the ISO image, and I am trying to get syslogs sent from my internal/external fortigate firewalls to appear in Kibana. I have added the fortigate integration to the "so-grid-nodes_general" agent policy, which from my understanding is the policy used by my standalone deployment. I also tried adding it to the "FleetServer_<serverName>" policy. I have created a custom host group and a custom port group with my fortigate IPs and port 9004, respectively (I've also assigned the custom port group to the custom host group), and I've verified that the rule exists with iptables.
I am able to see the packets flowing in on the correct port through tcpdump. Here is some example output, taken from the standalone node.
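For anyone retracing these steps, these are the kinds of checks I mean (port 9004/udp matches my custom port group; adjust if yours differs):

```shell
# Watch for incoming fortigate syslog packets on the standalone node:
sudo tcpdump -nni any udp port 9004

# Confirm the INPUT-chain rule exists and its packet counters are increasing:
sudo iptables -L INPUT -n -v --line-numbers | grep 9004

# If the listener lives inside a container, check the DOCKER-USER chain too:
sudo iptables -L DOCKER-USER -n -v | grep 9004
```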
Here is the firewall rule which is in the "INPUT" chain. As you can see, it's receiving traffic.
I added the same rule to the "DOCKER-USER" chain just in case; that one isn't receiving traffic though.
Here is an overview of the elastic agent on my standalone node:
Here is the fortigate integration configuration; I also tried using "localhost" and "<server ip address>" as values for the listen address.
I've also restarted the elastic-agent on the server and it's currently running fine:
I've restarted the entire server, twice, as well.
I suspect that I've added the integration incorrectly (maybe to the wrong policy?) because in Kibana all the forti fields are empty.
Here is the output for `sudo salt-call state.highstate` and `sudo so-status`:
I've also tried sending the logs via TCP instead of UDP and adding the corresponding firewall rule for that, but no luck. The way I understand it, the logs are received by SO on port 9004; SO then needs to forward that traffic to port 5055, which is the port for elastic agent data. Once forwarded, they should appear in Kibana, right? Here is the relevant section of the policy applying to my standalone node, which states where the output should go.
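To sanity-check that understanding, these listener checks can tell the two failure modes apart (ports are from my setup above):

```shell
# Verify something is actually listening on the integration's port.
# If nothing listens on 9004/udp, the integration input never started:
sudo ss -lunp | grep 9004

# Verify the agent's output target (port 5055, per the policy above)
# is listening as well:
sudo ss -ltnp | grep 5055
```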
The policy should be applying correctly:

Am I supposed to create a new agent policy with the fortigate integration, and then follow the on-screen instructions to enroll and install my SO node with fleet? I'm a little worried because I am under the impression that Security Onion 2.4.30 already has the elastic agent installed, and I don't want to install it again, overwrite something important, and end up breaking my installation. I thought that each agent could only be enrolled in one agent policy at a time.
I've been looking through every setting I can think of and I found this setting in the administration section of SO. I changed it to true, but still not getting any results.
Here is the status for the fortigate index, indicating that there are no documents in it:
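The document count can be checked directly like this (host and credentials are placeholders; your Security Onion version may also ship a query helper for this):

```shell
# Confirm whether any documents are landing in the fortigate data stream:
curl -sk -u "$ES_USER:$ES_PASS" \
  "https://localhost:9200/_cat/indices/logs-fortinet_fortigate.log-*?v&h=index,docs.count,store.size"

# A docs.count of 0 here means ingestion is failing upstream (agent, logstash,
# or the ingest pipeline), not that Kibana is hiding the data.
```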
EDIT 23/11/2023:
I've found what the error is. Logstash can't index the event to Elasticsearch because the index mapping expects the value of the "event.module" field to be "fortinet", but it comes in as "fortinet_fortigate".
```
[WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["create", {:_id=>nil, :_index=>"logs-fortinet_fortigate.log-default", :routing=>nil}, {"@version"=>"1", "ecs"=>{"version"=>"8.0.0"}, "message"=> REDACTED error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:2087] failed to parse field [event.module] of type [constant_keyword] in document with id 'C4Fr_osB_bgS-nfCZW0F'. Preview of field's value: 'fortinet_fortigate'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"[constant_keyword] field [event.module] only accepts values that are equal to the value defined in the mappings [fortinet], but got [fortinet_fortigate]"}}}}}
```
I've tried creating a new index template (cloned from the original "logs-fortinet_fortigate.log") template and applying the following mapping to it:
I've changed the index template's priority to be higher than the default one, and I've checked that it's applying to the corresponding "logs-fortinet_fortigate.log-*" index.
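For context, a template override of this general shape is what I mean. This is a sketch, not my exact template, and the template name and priority are placeholders. Note the second step: index templates only apply to indices created after them, so existing backing indices keep the old constant_keyword mapping until a rollover:

```shell
# Override the pinned constant_keyword value for event.module:
curl -sk -u "$ES_USER:$ES_PASS" -X PUT \
  -H 'Content-Type: application/json' \
  "https://localhost:9200/_index_template/logs-fortinet_fortigate.log-custom" \
  -d '{
    "index_patterns": ["logs-fortinet_fortigate.log-*"],
    "priority": 500,
    "data_stream": {},
    "template": {
      "mappings": {
        "properties": {
          "event": {
            "properties": {
              "module": { "type": "constant_keyword", "value": "fortinet_fortigate" }
            }
          }
        }
      }
    }
  }'

# Roll the data stream over so a new backing index picks up the new mapping:
curl -sk -u "$ES_USER:$ES_PASS" -X POST \
  "https://localhost:9200/logs-fortinet_fortigate.log-default/_rollover"
```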
I noticed that in the actual index, the mapping hasn't changed, even after restarting the logstash container, so I keep getting the same error described above. I'm getting closer but damn, I've been at this for 2 days and my thoughts are starting to get tangled up. I'll update this post once I find the solution.
But I still can't see the logs anywhere in Kibana. The default fortigate dashboard comes up empty and the discover section doesn't contain any logs originating from my fortigate appliances. Am I missing something? Or could anyone point me in the right direction please?
EDIT
Ok so I reinstalled the integration, manually deleted its indices and index templates, and I can now see documents being created from the data being ingested.
The issue now is that all the forti fields are missing from the dashboard. I tested the ingest pipeline, and it seems to be parsing correctly. I've also tried refreshing the index.
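By "tested the ingest pipeline" I mean simulating it against a sample event. The pipeline name and log line below are placeholders (list pipelines with `GET _ingest/pipeline` to find the exact versioned name for your installed integration):

```shell
# Feed one sample syslog line through the integration's ingest pipeline:
curl -sk -u "$ES_USER:$ES_PASS" -X POST \
  -H 'Content-Type: application/json' \
  "https://localhost:9200/_ingest/pipeline/YOUR-FORTIGATE-PIPELINE-NAME/_simulate?pretty" \
  -d '{
    "docs": [
      { "_source": { "message": "<190>date=2023-11-23 time=12:00:00 devname=\"FGT60F\" logid=\"0000000013\" type=\"traffic\"" } }
    ]
  }'

# If the simulated doc comes back with the fortinet.* / event.* fields
# populated, parsing is fine and the missing dashboard fields point at the
# index mapping or field definitions instead.
```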
Thanks.