Suricata and Zeek data logged but not displayed in Security Onion Alerts, Dashboard, or Hunt #12025
-
[root@securityonion ~]# curl http://192.168.1.100:5601/api/fleet/settings --user "user:pw" -v
-
Do you get the same issue if you install using our Security Onion ISO image instead of a manual Oracle installation? What kind of storage on your VirtualBox host (NVMe, SSD, rotational)? If rotational, please see: You specify 32GB RAM. How much on the host and how much in the guest?
-
Please see my answers below:
On Sat, Dec 16, 2023 at 1:25 AM Doug Burks ***@***.***> wrote:
Do you get the same issue if you install using our Security Onion ISO
image instead of a manual Oracle installation?
https://docs.securityonion.net/en/2.4/os.html#supported
Using the ISO install (note: I had to use VirtualBox 'Guided mode', otherwise the install would say the hard drive was already created). But even so, the ISO install would eventually drop into the anaconda shell and request network/user/password settings before erroring as above.
What kind of storage on your Virtualbox host (NVME, SSD, rotational)? If
rotational, please see:
https://docs.securityonion.net/en/2.4/hardware.html#elastic-stack
SSD storage
You specify 32GB RAM. How much on the host and how much in the guest?
32GB in the guest, 64GB total on the PC.
-
Does your network already use the 172.17.0.0/16 range? If so, have you tried changing the Docker network as shown at https://docs.securityonion.net/en/2.4/docker.html#networking-and-bridging? Have you tried a different hypervisor to see if that makes any difference? Have you tried a bare metal installation to see if that makes any difference?
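One quick way to check for such a conflict from the console. This is only a sketch: the helper name is mine, and on a live node you would feed it `ip route` output.

```shell
# Docker's default bridge uses 172.17.0.0/16; if the local network already
# uses that range, traffic between containers (e.g. Logstash -> Redis) can
# fail silently. Hypothetical helper: reads routing-table text on stdin and
# succeeds if any 172.17.x.x route is present.
overlaps_docker_default() {
  grep -q '172\.17\.'
}

# On a live node:  ip route | overlaps_docker_default && echo "conflict"
# Demonstration with sample route text:
if printf '172.17.0.0/16 dev br0 proto kernel scope link\n' | overlaps_docker_default; then
  echo "conflict: see the Security Onion Docker networking docs"
fi
```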
-
On Tue, Dec 19, 2023 at 1:09 AM Doug Burks ***@***.***> wrote:
Does your network already use the 172.17.0.0/16 range? If so, have you
tried changing the Docker network as shown at
https://docs.securityonion.net/en/2.4/docker.html#networking-and-bridging?
No.
Have you tried a different hypervisor to see if that makes any difference?
Not at this time; currently using VirtualBox 7.0.12.
Have you tried a bare metal installation to see if that makes any
difference?
Yes. The last time I installed the 2.4.30 ISO I let the software run through without trying to get the desktop installed straight away. This improved the situation: the Dashboards and Hunt displays now show data, as does Elastic/Kibana, and Fleet has now accepted elastic-agent enrollments. But the Alerts tab remains empty and is not filled even when pcap and other suggested tests are tried. All Playbooks are active.
Grid shows no errors.
Suricata and Zeek are producing logs in the NSM directory.
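A quick way to confirm those Suricata logs actually contain alert records (the Alerts tab stays empty when none are generated). A sketch: the helper name is mine, and the eve.json path shown in the comment is an assumption about the usual Security Onion location.

```shell
# Hypothetical helper: reads Suricata EVE JSON on stdin and succeeds if any
# alert records are present.
has_alerts() {
  grep -q '"event_type":"alert"'
}

# On a live sensor this might be (path is an assumption):
#   tail -n 10000 /nsm/suricata/eve.json | has_alerts && echo "alerts present"

# Demonstration with a sample EVE line:
printf '{"timestamp":"2023-12-19","event_type":"alert","alert":{"signature":"GPL ATTACK_RESPONSE"}}\n' \
  | has_alerts && echo "alerts present"
```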
-
Typical Logstash errors - logstash.log:
[2023-12-19T03:26:29,727][WARN ][logstash.outputs.redis ] Failed to flush
outgoing items {:outgoing_count=>86, :exception=>"Redis::TimeoutError",
:backtrace=>["/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:58:in
`block in _read_from_socket'", "org/jruby/RubyKernel.java:1586:in `loop'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:54:in
`_read_from_socket'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:47:in
`gets'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:382:in
`read'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:311:in
`block in read'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:299:in
`io'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:310:in
`read'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:161:in
`block in call'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:279:in
`block in process'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:411:in
`ensure_connected'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:269:in
`block in process'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:375:in
`logging'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:268:in
`process'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:161:in
`call'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:139:in
`block in connect'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:344:in
`with_reconnect'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:114:in
`connect'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:409:in
`ensure_connected'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:269:in
`block in process'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:375:in
`logging'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:268:in
`process'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/client.rb:161:in
`call'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis.rb:270:in
`block in send_command'", "org/jruby/ext/monitor/Monitor.java:82:in
`synchronize'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis.rb:269:in
`send_command'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/redis-4.8.1/lib/redis/commands/lists.rb:11:in
`llen'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-redis-5.0.0/lib/logstash/outputs/redis.rb:138:in
`congestion_check'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-redis-5.0.0/lib/logstash/outputs/redis.rb:151:in
`flush'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:221:in
`block in buffer_flush'", "org/jruby/RubyHash.java:1587:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:216:in
`buffer_flush'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:112:in
`block in buffer_initialize'", "org/jruby/RubyKernel.java:1586:in `loop'",
"/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/stud-0.0.23/lib/stud/buffer.rb:110:in
`block in buffer_initialize'"]}
[2023-12-19T03:26:29,788][WARN ][logstash.outputs.redis ] Failed to send
backlog of events to Redis
***@***.***:6379/0
list:logstash:unparsed", :exception=>#<Redis::TimeoutError: Connection
timed out>,
:backtrace=>[... identical to the backtrace in the preceding warning ...]}
[2023-12-19T03:26:29,727][WARN ][logstash.inputs.redis ] Redis
connection error {:message=>"Error connecting to Redis on
securityonion:9696 (Redis::TimeoutError)",
:exception=>Redis::CannotConnectError}
elasticsearch - securityonion.log:
[2023-12-19T08:57:03,621][INFO
][org.elasticsearch.monitor.jvm.JvmGcMonitorService]
[gc][young][48636][1154] duration [782ms], collections [1]/[1.7s], total
[782ms]/[9m], memory [2.2gb]->[1.8gb]/[10.5gb], all_pools {[young]
[464mb]->[0b]/[0b]}{[old] [1.7gb]->[1.7gb]/[10.5gb]}{[survivor]
[37.4mb]->[34.3mb]/[0b]}
[2023-12-19T08:57:03,628][INFO
][org.elasticsearch.monitor.jvm.JvmGcMonitorService] [gc][48636] overhead,
spent [782ms] collecting in the last [1.7s]
[2023-12-19T08:57:35,054][INFO
][org.elasticsearch.monitor.jvm.JvmGcMonitorService] [gc][48667] overhead,
spent [538ms] collecting in the last [1.2s]
On Wed, Dec 20, 2023 at 12:40 AM Doug Burks ***@***.***> wrote:
@The1Waterman <https://github.com/The1Waterman> Please follow the
Troubleshooting Alerts section of the documentation:
https://docs.securityonion.net/en/2.4/suricata.html#troubleshooting-alerts
@fschallock <https://github.com/fschallock> From #1720:
Start a new discussion instead of replying to somebody else's discussion. Please search to see if you can find similar discussions that may help you. However, in order to avoid confusion, please do NOT reply to somebody else's discussion with your own issue. Instead, please start a new discussion, and in that new discussion you can provide a hyperlink to the related discussion.
-
Version: 2.4.30
Installation Method: Other (please provide detail below)
Description: configuration
Installation Type: Standalone
Location: on-prem with Internet access
Hardware Specs: Exceeds minimum requirements
CPU: 12 (8 selected)
RAM: 32GB
Storage for /: 515.2GB
Storage for /nsm: 175GB
Network Traffic Collection: tap
Network Traffic Speeds: Less than 1Gbps
Status: Yes, all services on all nodes are running OK
Salt Status: No, there are no failures
Logs: Yes, there are additional clues in /opt/so/log/ (please provide detail below)
Detail
Installed in VirtualBox with Guest Additions on Oracle Linux 9, then Standalone installed from the cloned GitHub image.
Grid shows no errors; all containers running.
sudo salt-call state.highstate shows no errors.
The logstash log shows the following:
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.94.Final.jar:4.1.94.Final]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
Tried the following suggestion:
Stop the Filebeat and Logstash containers. Stopped Elasticsearch too, just in case.
Locate the Filebeat certificate files in /etc/pki and move them aside.
Locate the Filebeat certificate files in /opt/so/conf/filebeat/etc/pki and move them aside.
Run salt-call state.apply ssl.
logstash.log still shows the fatal alert.
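The move-certs-aside sequence above can be sketched as a small script. This is only a sketch of the steps described in the thread: the backup location, the filebeat.* filename pattern, and the function name are my own assumptions, and the trailing salt command is the one quoted above.

```shell
# Hypothetical helper: moves any filebeat.* certificate files out of the
# given directories into a backup directory instead of deleting them.
reset_filebeat_certs() {
  backup="$1"; shift
  mkdir -p "$backup"
  for dir in "$@"; do
    for f in "$dir"/filebeat.*; do
      # Skip the unexpanded glob when a directory has no matching files.
      [ -e "$f" ] && mv "$f" "$backup"/
    done
  done
  return 0
}

# On a real node (run as root, with the containers stopped first):
#   reset_filebeat_certs /root/cert-backup /etc/pki /opt/so/conf/filebeat/etc/pki
#   salt-call state.apply ssl
```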
curator.log indicates NoIndices:
2023-12-14 21:02:35,664 INFO Preparing Action ID: 1, "close"
2023-12-14 21:02:35,664 INFO Creating client object and testing connection
2023-12-14 21:02:35,664 INFO Creating client object and testing connection
2023-12-14 21:02:35,977 INFO GET https://192.168.1.100:9200/ [status:200 duration:0.112s]
2023-12-14 21:02:35,987 INFO GET https://192.168.1.100:9200/_nodes/_local [status:200 duration:0.010s]
2023-12-14 21:02:35,991 INFO GET https://192.168.1.100:9200/_cluster/state/master_node [status:200 duration:0.003s]
2023-12-14 21:02:35,991 INFO Trying Action ID: 1, "close": Close import indices older than 73000 days.
2023-12-14 21:02:36,011 INFO GET https://192.168.1.100:9200/*/_settings?expand_wildcards=open,closed [status:200 duration:0.020s]
2023-12-14 21:02:36,030 INFO Skipping action "close" due to empty list: <class 'curator.exceptions.NoIndices'>
2023-12-14 21:02:36,030 INFO Action ID: 1, "close" completed.
2023-12-14 21:02:36,030 INFO All actions completed.
Elastalert.log
2023-12-14 21:50:31,794 WARNING elasticsearch POST https://securityonion:9200/.ds-logs-*/_eql/search?ignore_unavailable=true [status:400 request:0.021s]
2023-12-14 21:50:31,800 ERROR elastalert Error running query: RequestError(400, 'verification_exception', 'Found 2 problems\nline 1:12: Unknown column [registry.path]\nline 1:65: Unknown column [registry.value]')
2023-12-14 21:50:31,958 INFO elastalert Ran Potential Persistence Via Netsh Helper DLL - Registry - 022236ac3 from 2023-12-14 21:40 UTC to 2023-12-14 21:50 UTC: 0 query hits (0 already seen), 0 matches, 0 alerts sent
2023-12-14 21:50:31,958 INFO elastalert Potential Persistence Via Netsh Helper DLL - Registry - 022236ac3 range 600
2023-12-14 21:50:31,958 INFO apscheduler.executors.default Job "Rule: Potential Persistence Via Netsh Helper DLL - Registry - 022236ac3 (trigger: interval[0:03:00], next run at: 2023-12-14 21:53:32 UTC)" executed successfully
2023-12-14 21:51:00,683 INFO apscheduler.executors.default Running job "Internal: Handle Pending Alerts (trigger: interval[0:03:00], next run at: 2023-12-14 21:54:00 UTC)" (scheduled at 2023-12-14 21:51:00.677202+00:00)
2023-12-14 21:51:00,685 INFO apscheduler.executors.default Running job "Internal: Handle Config Change (trigger: interval[0:03:00], next run at: 2023-12-14 21:54:00 UTC)" (scheduled at 2023-12-14 21:51:00.677262+00:00)
2023-12-14 21:51:00,759 INFO elastalert Background configuration change check run at 2023-12-14 21:51 UTC
2023-12-14 21:51:00,762 INFO apscheduler.executors.default Job "Internal: Handle Config Change (trigger: interval[0:03:00], next run at: 2023-12-14 21:54:00 UTC)" executed successfully
2023-12-14 21:51:00,765 INFO elastalert Background alerts thread 0 pending alerts sent at 2023-12-14 21:51 UTC
2023-12-14 21:51:00,765 INFO apscheduler.executors.default Job "Internal: Handle Pending Alerts (trigger: interval[0:03:00], next run at: 2023-12-14 21:54:00 UTC)" executed successfully
2023-12-14 21:51:01,046 INFO elastalert Disabled rules are: []
2023-12-14 21:51:01,047 INFO elastalert Sleeping for 179.999698 seconds
influxdb.log
ts=2023-12-14T21:54:32.761619Z lvl=info msg="http: TLS handshake error from 127.0.0.1:57082: EOF" log_id=0m6R5hvG000 service=http
ts=2023-12-14T21:55:33.797068Z lvl=info msg="http: TLS handshake error from 127.0.0.1:40292: EOF" log_id=0m6R5hvG000 service=http
Stenographer.log
2023/12/14 22:03:45 Thread 0 error tracking "1702474195654995": could not open blockfile "/tmp/stenographer297448431/PKT0/1702474195654995": could not open index for "/tmp/stenographer297448431/PKT0/1702474195654995": invalid index file "/tmp/stenographer297448431/IDX0/1702474195654995" missing versions record: leveldb/table: invalid table (block has no restart points)
2023/12/14 22:04:00 Thread 0 error tracking "1702279154104293": could not open blockfile "/tmp/stenographer297448431/PKT0/1702279154104293": could not open index for "/tmp/stenographer297448431/PKT0/1702279154104293": invalid index file "/tmp/stenographer297448431/IDX0/1702279154104293" missing versions record: leveldb/table: invalid table (block has no restart points)
2023/12/14 22:04:00 Thread 0 error tracking "1702474195654995": could not open blockfile "/tmp/stenographer297448431/PKT0/1702474195654995": could not open index for "/tmp/stenographer297448431/PKT0/1702474195654995": invalid index file "/tmp/stenographer297448431/IDX0/1702474195654995" missing versions record: leveldb/table: invalid table (block has no restart points)
Finally, /root/soup.log:
Waiting for value 'fleet' at 'http://localhost:5601/api/fleet/settings' (299/300)
Server is not ready
Waiting for value 'fleet' at 'http://localhost:5601/api/fleet/settings' (300/300)
Server is not ready
Server still not ready after 300 attempts; giving up.
Kibana API not accessible, exiting script...
Starting crond service at 08:23:51.421256
Successfully started crond.
Starting salt-master service at 08:23:51.520623
Successfully started salt-master.
Starting salt-minion service at 08:23:51.610874
Successfully started salt-minion.
Enabling highstate.
local:
----------
msg:
Info: highstate state already enabled.
res:
True
Soup failed with error 1: Unhandled error
Unhandled error occured, please check /root/soup.log for details.
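The readiness poll soup performs against Kibana, per the log above, can be sketched as follows. The endpoint URL and the 300-attempt limit come from the log; the function name, the grep-based readiness check, and the configurable delay are my own assumptions.

```shell
# Sketch of a Kibana/Fleet readiness poll. Retries the Fleet settings API
# until the response body mentions 'fleet', or gives up after N attempts.
wait_for_fleet() {
  url=$1; attempts=$2; delay=${3:-10}
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -sk "$url" 2>/dev/null | grep -q fleet; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    echo "Server is not ready ($i/$attempts)"
    sleep "$delay"
    i=$((i + 1))
  done
  echo "Server still not ready after $attempts attempts; giving up."
  return 1
}

# Against a live node:
#   wait_for_fleet http://localhost:5601/api/fleet/settings 300
```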
Several reinstalls have been tried, all with the same issues.