Replies: 3 comments 1 reply
From https://docs.securityonion.net/en/2.4/architecture.html#standalone: for Elastic data, the longer your retention goal, the more storage you need. If you have two search nodes, one with SSD and one with regular HDD, you can run them as data tiers: hot data (most recent and most searched) on the SSD search node, and warm/cold data on the HDD search node. I have seen managersearch + search node + sensor node grids, but if you want to separate all the node roles you would go fully distributed.
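For context, this is roughly what the Elasticsearch side of tiering looks like, assuming two search nodes already tagged with the `data_hot` / `data_warm` roles. Security Onion generates and manages its own ILM policies, so this is only an illustrative sketch, not something to apply by hand on a production grid; the policy name `so-logs-example` and the ages are made up.

```shell
# Hypothetical ILM policy: keep indices on the hot tier, let ILM migrate
# them to a warm-tier node after 7 days, delete after 30 days.
curl -s -X PUT "localhost:9200/_ilm/policy/so-logs-example" \
  -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "hot":    { "actions": {} },
      "warm":   { "min_age": "7d",
                  "actions": { "allocate": { "number_of_replicas": 0 } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}'
```

With data tiers enabled, the warm phase moves shards to warm-tier nodes automatically via the implicit migrate action; no manual allocation filtering is needed.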
Thank you for your answer. If I understand correctly, there is no way to use HDD and SSD in the same node, as the only mount point known by S.O is /nsm.
3.5G ./repo
In my case suripcap takes quite some space, so it could make sense to have a sensor node. But then there is no way to choose between HDD and SSD.
I feel a bit dumb, as I wasn't aware of lvm-cache.
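For anyone landing here later, a minimal sketch of putting lvm-cache in front of /nsm, so the HDD holds the bulk of the data and the SSD caches the hot blocks transparently. Device names, sizes, and the cache mode are assumptions for illustration only; these commands destroy existing data, so run them only on empty disks.

```shell
# sdb = HDD (bulk storage), sdc = SSD (cache) -- hypothetical device names
pvcreate /dev/sdb /dev/sdc
vgcreate vg_nsm /dev/sdb /dev/sdc

# Data LV pinned to the HDD, cache LV on the SSD
lvcreate -n nsm -l 100%PVS vg_nsm /dev/sdb
lvcreate -n nsm_cache -L 400G vg_nsm /dev/sdc

# Attach the SSD LV as a cache in front of the data LV
lvconvert --type cache --cachevol nsm_cache --cachemode writethrough vg_nsm/nsm

# Filesystem and mount as usual
mkfs.xfs /dev/vg_nsm/nsm
mount /dev/vg_nsm/nsm /nsm
```

`writethrough` keeps the HDD copy authoritative (safer if the SSD dies); `writeback` is faster but risks data loss on cache failure.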
Version
2.4.170
Installation Method
Security Onion ISO image
Description
installation
Installation Type
Standalone
Location
on-prem with Internet access
Hardware Specs
Exceeds minimum requirements
CPU
6
RAM
32G
Storage for /
278G
Storage for /nsm
568G
Network Traffic Collection
span port
Network Traffic Speeds
1Gbps to 10Gbps
Status
Yes, all services on all nodes are running OK
Salt Status
Yes, there are salt failures (please provide detail below)
Logs
No, there are no additional clues
Detail
Hello,
I'm trying to better understand S.O, and I have some difficulties with storage management.
As I understand it, /nsm is the directory where almost all collected and processed data live:
/nsm/suripcap for packet captures
/nsm/elasticsearch for all data related to elastic search
/nsm/zeek for zeek logs
What I find difficult is that the /nsm storage space is used concurrently by pcap, Elasticsearch, and Zeek.
Keep in mind that for now I have only tried this in standalone mode.
I read about ILM, but it only concerns Elasticsearch data.
AFAIK the hot, warm, and cold indices are managed per node: e.g. you assign a node to the hot, warm, or cold data tier.
For S.O I suppose these nodes are search nodes.
In my case (homelab with only 1 physical host, and 1 more host soon), I would need to add a search node as the cold tier.
I think you may have guessed that I'm looking to make better use of my storage; I can't have all the data on SSD.
Is there a way to have the data in /nsm spread across HDD or SSD according to how frequently they are accessed?
In order to have a better idea of the traffic on my network, I recently set up Zabbix with an agent on my firewall (OPNsense), which will give me insight into inter-VLAN traffic.
What I monitor so far is LAN inbound & outbound and VLANs inbound & outbound. This traffic flows through 3 2.5GbE interfaces; the monitor is on a 10Gbps interface.
My previous S.O standalone instance barely reached 6 days of pcap retention with a 1Tb vdisk and ended up with problems in the Elasticsearch indices.
For pcap I restricted capture to 1Mb.
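As a rough sanity check on retention before sizing disks, pcap retention is approximately the space reserved for capture divided by the daily write rate. A small sketch with made-up numbers (the rate and reserved space below are assumptions, not your measurements):

```shell
# Assumed sustained average capture rate (Mbit/s), not the link speed
RATE_MBPS=20
# Assumed space reserved for /nsm/suripcap (GB)
PCAP_GB=500

# GB written per day = Mbit/s / 8 -> MB/s, * 86400 s/day, / 1024 -> GB
awk -v r="$RATE_MBPS" -v g="$PCAP_GB" \
  'BEGIN { gb_per_day = r / 8 * 86400 / 1024; printf "%.1f days\n", g / gb_per_day }'
# -> 2.4 days
```

Working backwards, 1 TB lasting ~6 days implies an average of roughly 170 GB/day, i.e. only about 15 Mbit/s sustained, which shows how quickly even modest traffic eats pcap space.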
Wouldn't it be better to be able to configure /nsm more granularly for all the kinds of data S.O produces and accesses?
My next install will have more resources (192Gb RAM, 8Tb SSD, 24 cores on a Proxmox host), but spinning disks could still be put to better use if they could live on the same host as the SSDs.
How do people plan storage for their S.O setups? I would prefer a standalone installation, but if distributed is mandatory I can do that.
Should I have 2 search nodes per physical host: one using the SSD and the second using the HDD?
Guidelines