While following https://docs.openshift.com/container-platform/4.12/network_observability/configuring-operator.html
and running the flow logs pipeline with Kubernetes enrichment on a large cluster (~20k pods), memory consumption was huge. Since the pipeline runs as a DaemonSet, every agent kept its own enrichment cache and watches, which effectively DDoS'd the API server.
Would it scale better to allow some sort of shared Kubernetes enrichment cache for all FLP instances?

Perhaps the cache could also be smarter, along the lines of Redis client-side caching: https://redis.io/docs/manual/client-side-caching/
In the end we had to build a custom gRPC server backed by a shared cache to get network tracing working on larger clusters.