* NETOBSERV-1692: Add FLP-based deduper options
FLP-based dedup can decrease Loki CPU / memory / storage usage significantly (~50%) at the cost of a minimal loss in data accuracy (e.g. losing the interfaces involved in egress traffic). A configuration sketch follows this list.
* Add filters API
* FLP-based filters
- Switch to the new "keep" API on FLP filters
- Support sampling
- Support regexes
- Add tests
* update sample config
* bump flp, mention dev preview
* fix rebase issue
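Below is a minimal sketch of how the new processor options might be populated in Go, based on the types added in this change. The enclosing processor type name (`FlowCollectorFLP`) and the mode field name inside `FLPDeduper` are assumptions, as they are not visible in the excerpt that follows; only `Deduper`, `Filters` and the mode constants appear in the diff.

```go
// Sketch (under the assumptions above) of enabling the FLP-based deduper and
// declaring a place for custom filters on the processor spec.
func exampleProcessorSpec() FlowCollectorFLP {
	return FlowCollectorFLP{
		Deduper: &FLPDeduper{
			Mode: FLPDeduperSample, // assumed field name; sample duplicates rather than dropping them all
		},
		Filters: []FLPFilterSet{}, // custom filter definitions go here (see the filter fields further down)
	}
}
```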
	// `deduper` allows sampling or dropping flows identified as duplicates, in order to save on resource usage.
	Deduper *FLPDeduper `json:"deduper,omitempty"`

	// `filters` let you define custom filters to limit the amount of generated flows.
	// +optional
	Filters []FLPFilterSet `json:"filters"`

	// `debug` allows setting some aspects of the internal configuration of the flow processor.
	// This section is aimed exclusively for debugging and fine-grained performance optimizations,
	// such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
	// +optional
	Debug DebugConfig `json:"debug,omitempty"`
}
type FLPDeduperMode string

const (
	FLPDeduperDisabled FLPDeduperMode = "Disabled"
	FLPDeduperDrop     FLPDeduperMode = "Drop"
	FLPDeduperSample   FLPDeduperMode = "Sample"
)

// `FLPDeduper` defines the desired configuration for FLP-based deduper
type FLPDeduper struct {
	// Set the Processor deduper mode (de-duplication). It comes in addition to the Agent deduper because the Agent cannot de-duplicate the same flows reported from different nodes.<br>
	// - Use `Drop` to drop every flow considered a duplicate, saving more on resource usage but potentially losing some information such as the network interfaces used from the peer.<br>
	// - Use `Sample` to randomly keep only 1 flow out of 50 (by default) among the ones considered as duplicates. This is a compromise between dropping all duplicates and keeping all of them. This sampling action comes in addition to the Agent-based sampling. If both Agent and Processor sampling values are 50, the combined sampling is 1:2500.<br>
	// - Use `Disabled` to turn off Processor-based de-duplication.<br>
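For clarity on how the two sampling stages combine, here is a minimal sketch; the function is purely illustrative and not part of the API:

```go
// effectiveSampling shows how Agent and Processor sampling combine: each stage
// independently keeps roughly 1 out of N flows, so the ratios multiply.
// With both set to 50, one flow out of 2500 is kept (1:2500).
func effectiveSampling(agentSampling, processorSampling int) int {
	return agentSampling * processorSampling // e.g. 50 * 50 = 2500
}
```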
[…]

	// Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html.
	// +required
	Field string `json:"field"`

	// Value to filter on. When `matchType` is `Equal` or `NotEqual`, you can use field injection with `$(SomeField)` to refer to any other field of the flow.
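To illustrate field injection, here is a minimal sketch using a hypothetical struct that mirrors the `field`, `matchType`, and `value` settings described above; the real API type and its nesting are not fully shown in this excerpt:

```go
// flowFilter is a hypothetical mirror of the filter fields shown above,
// used here only to illustrate field injection.
type flowFilter struct {
	Field     string
	MatchType string
	Value     string
}

// `$(DstK8S_Namespace)` is injected at evaluation time with the value of the
// destination namespace of the same flow, so `NotEqual` matches flows whose
// source and destination namespaces differ.
var crossNamespaceFilter = flowFilter{
	Field:     "SrcK8S_Namespace",
	MatchType: "NotEqual",
	Value:     "$(DstK8S_Namespace)",
}
```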