Replies: 2 comments
- I haven't tested this, but I think that your Standalone installation should be configured to import any Zeek logs that it finds at …
- jscrub, I have often wondered the same thing. I've not lost a node, but have wondered whether the Zeek logs can be easily imported if needed. Did you try this? What were your results?
Version: 2.4.60
Installation Method: Security Onion ISO image
Description: other (please provide detail below)
Installation Type: Standalone
Location: airgap
Hardware Specs: Exceeds minimum requirements
CPU: 8 cores
RAM: 64 GB
Storage for /: 2 TB
Storage for /nsm: 50 TB
Network Traffic Collection: span port
Network Traffic Speeds: 1Gbps to 10Gbps
Status: Yes, all services on all nodes are running OK
Salt Status: No, there are no failures
Logs: No, there are no additional clues
Detail:
Hello all, I'm hoping for some suggestions on a problem with Zeek logs, specifically importing them from a failed SO cluster.
Long story short, we had an SO cluster (distributed setup) that lost power and ended up corrupted. We were able to retrieve most of the collected logs, but only in the standard Zeek log format (*.log). We have several GB of these logs, and they need to be ingested and analyzed. We could analyze them by hand, but we wanted to see if we can feed them back into a new SO installation (a Standalone attempt).
However, when we drop the logs into their respective folder structure under /nsm/zeek/logs/, we don't seem to have a way to force SO and Kibana to pick them up.
For more info, we were able to retrieve the dated Zeek log folders (year-month-day) and the compressed files (*.log.gz). We have tried pushing the raw logs into /nsm/zeek/logs/current to no avail, we have tried adding custom Zeek log file paths pointing to the correct folders, and we have tried restarting services (particularly Elasticsearch and Zeek), but SO and Kibana still do not pick up the logs. The logs are roughly 3-4 months old, and we are not sure whether their age runs afoul of log rotation or retention.
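As a first sanity check, we were thinking of something like the sketch below, just to confirm whether the recovered files are classic TSV Zeek logs or the JSON-per-line variety, since that presumably changes how they would need to be ingested. The folder path is only an example, not our real layout, and nothing here is SO-specific.

    # Quick format check on recovered Zeek logs (TSV vs. JSON per line).
    # The path below is an example only.
    import gzip
    import json
    from pathlib import Path

    def describe_zeek_log(path: Path) -> str:
        """Report whether a recovered Zeek log looks like TSV or JSON."""
        opener = gzip.open if path.suffix == ".gz" else open
        with opener(path, "rt", errors="replace") as fh:
            first = fh.readline().strip()
        if first.startswith("#separator"):
            return f"{path.name}: classic TSV Zeek log"
        try:
            json.loads(first)
            return f"{path.name}: JSON, one event per line"
        except json.JSONDecodeError:
            return f"{path.name}: unrecognized format"

    # Example: inspect one recovered day folder.
    for log_file in sorted(Path("/nsm/zeek/logs/2024-01-15").glob("*.log*")):
        print(describe_zeek_log(log_file))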
In 2.3, we had similar cases of this happening and were able to use Filebeat to import the logs. Now, with 2.4, the same tricks don't seem to work.
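If there is no supported way to do this in 2.4, we could fall back to pushing the JSON-formatted logs straight into Elasticsearch ourselves via the _bulk API, which is roughly what Filebeat was doing for us before. A rough sketch of what we mean is below; the endpoint, credentials, and index name are placeholders rather than values from a real 2.4 install, and we realize data pushed this way would bypass SO's normal ingest pipelines.

    # Rough sketch: bulk-index one JSON-lines Zeek log into Elasticsearch.
    # ES_URL, ES_AUTH, and INDEX are placeholders; adjust for the actual install.
    import gzip
    import json
    import requests

    ES_URL = "https://localhost:9200"      # placeholder endpoint
    ES_AUTH = ("elastic", "changeme")      # placeholder credentials
    INDEX = "zeek-recovered"               # placeholder index name

    def bulk_index(log_path: str, batch_size: int = 500) -> None:
        """Send a JSON-per-line Zeek log to Elasticsearch via the _bulk API."""
        action = json.dumps({"index": {"_index": INDEX}})
        batch = []
        with gzip.open(log_path, "rt") as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                batch.extend([action, line])
                if len(batch) >= 2 * batch_size:
                    _flush(batch)
                    batch = []
        if batch:
            _flush(batch)

    def _flush(batch) -> None:
        body = "\n".join(batch) + "\n"
        resp = requests.post(
            f"{ES_URL}/_bulk",
            data=body.encode("utf-8"),
            auth=ES_AUTH,
            verify=False,  # self-signed cert assumed; tighten this for real use
            headers={"Content-Type": "application/x-ndjson"},
        )
        resp.raise_for_status()

    bulk_index("/nsm/zeek/logs/2024-01-15/conn.00:00:00-01:00:00.log.gz")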
So how can we ingest and parse old Zeek logs from a broken Distributed SO installation? For clarity, the logs are not corrupt; they are readable.
We have also considered replaying the logs with Tshark, but we want to avoid timestamp problems and similar issues in case reports need to be written about what is in the logs.
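On the timestamp worry specifically: whatever route we take, the plan would be to derive @timestamp from Zeek's own ts field rather than from import time, so the events land in their original 3-4 month old window. Something along these lines, assuming ts is written as epoch seconds (Zeek's JSON default); the sample record is made up.

    # Copy Zeek's epoch "ts" into an ISO-8601 "@timestamp" before indexing,
    # so events keep their original time instead of the import time.
    import json
    from datetime import datetime, timezone

    def with_original_timestamp(zeek_json_line: str) -> dict:
        event = json.loads(zeek_json_line)
        ts = float(event["ts"])  # assumes ts is epoch seconds, Zeek's JSON default
        event["@timestamp"] = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
        return event

    # Made-up sample record for illustration only.
    sample = '{"ts": 1704067200.5, "uid": "C0abc1", "id.orig_h": "10.0.0.5"}'
    print(with_original_timestamp(sample)["@timestamp"])
    # -> 2024-01-01T00:00:00.500000+00:00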