Suseobs store #16
Conversation
New SUSE Observability store backend. Allows sending Kubewarden audit results to SUSE Observability. This first commit just has the code to convert the policy report into a SUSE Observability health check. Signed-off-by: José Guilherme Vanz <[email protected]>
Adds the code to build and send the request to SUSE Observability, as well as the tests to cover this code. Signed-off-by: José Guilherme Vanz <[email protected]>
store. Signed-off-by: José Guilherme Vanz <[email protected]>
```go
var policyReportStore report.ReportStore
if len(suseObsURL) > 0 && len(suseObsApiKey) > 0 && len(suseObsUrn) > 0 && len(suseObsCluster) > 0 {
	log.Debug().Msg("Using SUSE Observability as report store")
	policyReportStore = report.NewSuseObsStore(suseObsApiKey, suseObsURL, suseObsUrn, suseObsCluster, suseObsRepeatInterval, suseObsExpireInterval)
} else {
	log.Debug().Msg("Using Kubernetes as report store")
	policyReportStore = report.NewPolicyReportStore(client)
}
```
Yes, this is very ugly. And it needs to be refactored.
What do you think about having multiple stores? For example, the audit scanner could create policy reports and send the data to SUSE Obs.
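One way to support that without the `if/else` above would be a store that fans out every call to a list of backends. A minimal sketch, assuming a hypothetical `MultiStore` type living next to the existing `ReportStore` interface (none of these names come from this PR, and it reuses the report package's existing wgpolicy import):

```go
// Sketch only: fan out every ReportStore call to all configured backends, so the
// audit scanner can create PolicyReports and send data to SUSE Obs in the same run.
type MultiStore struct {
	stores []ReportStore
}

func NewMultiStore(stores ...ReportStore) *MultiStore {
	return &MultiStore{stores: stores}
}

func (m *MultiStore) BeforeScanning(ctx context.Context) error {
	for _, s := range m.stores {
		if err := s.BeforeScanning(ctx); err != nil {
			return err
		}
	}
	return nil
}

func (m *MultiStore) CreateOrPatchPolicyReport(ctx context.Context, policyReport *wgpolicy.PolicyReport) error {
	for _, s := range m.stores {
		if err := s.CreateOrPatchPolicyReport(ctx, policyReport); err != nil {
			return err
		}
	}
	return nil
}

// The remaining ReportStore methods would follow the same fan-out pattern.
```

The wiring in `main` would then become something like `report.NewMultiStore(report.NewPolicyReportStore(client), report.NewSuseObsStore(...))` when both backends are configured.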
```go
rootCmd.Flags().StringVar(&suseObsURL, "suseobs-url", "", "URL to the SUSE OBS API")
rootCmd.Flags().StringVar(&suseObsApiKey, "suseobs-apikey", "", "API key to authenticate with the SUSE OBS API")
rootCmd.Flags().StringVar(&suseObsUrn, "suseobs-urn", "", "SUSE Observability health check stream urn")
rootCmd.Flags().StringVar(&suseObsCluster, "suseobs-cluster", "", "SUSE Observability cluster name where audit scanner is running")
rootCmd.Flags().DurationVar(&suseObsRepeatInterval, "suseobs-repeat-interval", defaultInterval, "The frequency with which audit scanner will send health data to SUSE Observability. Max allowed value is 1800 (30 minutes)")
rootCmd.Flags().DurationVar(&suseObsExpireInterval, "suseobs-expire-interval", defaultInterval, "The time to wait after the last update before an audit scanner check is deleted by SUSE Observability if the check isn't observed again")
```
If we decide to allow multiple store types, I think we should move the store configuration to a file.
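For illustration only, such a configuration file could be loaded along these lines; the file layout, field names, and `loadStoresConfig` helper are assumptions, not anything in this PR (the sketch uses `gopkg.in/yaml.v3`):

```go
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// StoresConfig is a hypothetical shape for a store configuration file;
// none of these names exist in the PR.
type StoresConfig struct {
	Kubernetes struct {
		Enabled bool `yaml:"enabled"`
	} `yaml:"kubernetes"`
	SuseObs struct {
		Enabled               bool   `yaml:"enabled"`
		URL                   string `yaml:"url"`
		APIKey                string `yaml:"apiKey"`
		URN                   string `yaml:"urn"`
		Cluster               string `yaml:"cluster"`
		RepeatIntervalSeconds int    `yaml:"repeatIntervalSeconds"`
		ExpireIntervalSeconds int    `yaml:"expireIntervalSeconds"`
	} `yaml:"suseObs"`
}

// loadStoresConfig reads and parses the YAML file at the given path.
func loadStoresConfig(path string) (*StoresConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg StoresConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```

A file would also keep the API key off the command line, where it is visible in the pod spec and in process listings.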
```go
scanner.BeforeScan(ctx)
defer scanner.AfterScan(ctx)
```
In the consistency mode used to send the health checks to SUSE Observability (REPEAT_SNAPSHOTS), it is necessary to send one payload to start the snapshot and another to close it. All the requests between these two are the health checks sent by the audit scanner.
I've added these functions here just to avoid the work of handling the concurrent nature of the audit scanner and the extra control needed to avoid multiple start/stop snapshot payloads. If we send more than one of these payloads, the health checks will not work as expected.
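If that control ever does move into the scanner, a `sync.Once` per snapshot boundary would be enough to keep concurrent goroutines from emitting duplicate start/stop payloads. A rough sketch, with hypothetical names that are not part of this PR:

```go
package scanner

import (
	"context"
	"sync"
)

// snapshotLifecycle is the subset of the report store the guard needs (hypothetical).
type snapshotLifecycle interface {
	BeforeScanning(ctx context.Context) error
	AfterScanning(ctx context.Context) error
}

// snapshotGuard ensures the start and stop snapshot payloads are sent at most once,
// no matter how many goroutines reach the snapshot boundaries.
type snapshotGuard struct {
	store     snapshotLifecycle
	startOnce sync.Once
	stopOnce  sync.Once
}

func (g *snapshotGuard) Start(ctx context.Context) error {
	var err error
	g.startOnce.Do(func() { err = g.store.BeforeScanning(ctx) })
	return err
}

func (g *snapshotGuard) Stop(ctx context.Context) error {
	var err error
	g.stopOnce.Do(func() { err = g.store.AfterScanning(ctx) })
	return err
}
```

Only the first caller observes the error; later callers get nil, which matches the "send exactly once" intent here.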
```go
// FIXME - This is a workaround to allow starting the SUSE Obs snapshot. It should be called
// before any health check state is sent.
// This should be properly addressed when we change the code to be production ready.
// This is not done inside the scan itself, to avoid dealing with the multiple goroutines sending health checks.
func (s *Scanner) BeforeScan(ctx context.Context) error {
	return s.policyReportStore.BeforeScanning(ctx)
}

// FIXME - This is a workaround to allow closing the SUSE Obs snapshot. It should be called
// after all health check states are sent.
// This should be properly addressed when we change the code to be production ready.
// This is not done inside the scan itself, to avoid dealing with the multiple goroutines sending health checks.
func (s *Scanner) AfterScan(ctx context.Context) {
	// Log and continue: failing to close the snapshot should not abort the scan.
	if err := s.policyReportStore.AfterScanning(ctx); err != nil {
		log.Error().Err(err).Msg("failed to close the SUSE Observability snapshot")
	}
}
```
This is related to this comment
```go
type ReportStore interface {
	BeforeScanning(ctx context.Context) error
	AfterScanning(ctx context.Context) error
	CreateOrPatchPolicyReport(ctx context.Context, policyReport *wgpolicy.PolicyReport) error
	DeleteOldPolicyReports(ctx context.Context, scanRunID, namespace string) error
	CreateOrPatchClusterPolicyReport(ctx context.Context, clusterPolicyReport *wgpolicy.ClusterPolicyReport) error
	DeleteOldClusterPolicyReports(ctx context.Context, scanRunID string) error
}
```
Refactor this into something more meaningful, since SUSE Obs is not actually storing the reports...
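Purely as an illustration of the renaming (these names are invented, not a proposal from the PR), the interface could describe publishing audit results rather than storing reports:

```go
// Hypothetical renaming sketch: the interface talks about publishing audit results,
// since the SUSE Obs backend forwards health checks instead of storing PolicyReports.
type PolicyReportPublisher interface {
	StartScan(ctx context.Context) error
	FinishScan(ctx context.Context) error
	PublishPolicyReport(ctx context.Context, policyReport *wgpolicy.PolicyReport) error
	PruneOldPolicyReports(ctx context.Context, scanRunID, namespace string) error
	PublishClusterPolicyReport(ctx context.Context, clusterPolicyReport *wgpolicy.ClusterPolicyReport) error
	PruneOldClusterPolicyReports(ctx context.Context, scanRunID string) error
}
```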
POC adding a new store to send the audit scanner results to SUSE Observability.
Warning
It still has room for many improvements. See this PR as knowledge sharing with the team, not the final code to be merged. There are many comments pointing out possible improvements.
The logic of the requests to SUSE Observability is:
- The first request has the `start_snapshot` field to start the snapshot creation. It should not contain the `stop_snapshot` field.
- The last request has the `stop_snapshot` field. It should not contain the `start_snapshot` field.

See more information about the payload here.
Example of the payloads sent to create a health snapshot:
{ "apiKey": "V1dLjMKA008I3h7HisT3BXh9rWlHaWk2", "collection_timestamp": 1741902901, "internalHostname": "stackstate.hpc9u27", "metrics": [], "service_checks": [], "health": [ { "consistency_model": "REPEAT_SNAPSHOTS", "start_snapshot": { "repeat_interval_s": 300, "expiry_interval_s": 360 }, "stream": { "urn": "urn:health:kubewarden:audit_scanner", "sub_stream_id": "kubewarden1" }, "check_states": [] } ], "topologies": [] }{ "apiKey": "V1dLjMKA008I3h7HisT3BXh9rWlHaWk2", "collection_timestamp": 1741902902, "internalHostname": "stackstate.hpc9u27", "metrics": [], "service_checks": [], "health": [ { "consistency_model": "REPEAT_SNAPSHOTS", "stream": { "urn": "urn:health:kubewarden:audit_scanner", "sub_stream_id": "kubewarden1" }, "check_states": [ { "checkStateId": "clusterwide-no-host-namespace-sharing-default-pod-privileged-pause-pod-clusterwide-no-host-namespace-sharing", "message": "", "health": "Clear", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-no-host-namespace-sharing" }, { "checkStateId": "clusterwide-drop-capabilities-default-pod-privileged-pause-pod-clusterwide-drop-capabilities", "message": "", "health": "Clear", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-drop-capabilities" }, { "checkStateId": "clusterwide-safe-labels-default-pod-privileged-pause-pod-clusterwide-safe-labels", "message": "", "health": "Clear", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-safe-labels" }, { "checkStateId": "clusterwide-no-privileged-pod-default-pod-privileged-pause-pod-clusterwide-no-privileged-pod", "message": "Privileged container is not allowed", "health": "Deviating", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-no-privileged-pod" }, { "checkStateId": "clusterwide-do-not-run-as-root-default-pod-privileged-pause-pod-clusterwide-do-not-run-as-root", "message": "", "health": "Clear", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-do-not-run-as-root" }, { "checkStateId": "clusterwide-no-privilege-escalation-default-pod-privileged-pause-pod-clusterwide-no-privilege-escalation", "message": "", "health": "Clear", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-no-privilege-escalation" }, { "checkStateId": "clusterwide-do-not-share-host-paths-default-pod-privileged-pause-pod-clusterwide-do-not-share-host-paths", "message": "", "health": "Clear", "topologyElementIdentifier": "urn:kubernetes:/kubewarden1:default:pod/privileged-pause-pod", "name": "clusterwide-do-not-share-host-paths" } ] } ], "topologies": [] }{ "apiKey": "V1dLjMKA008I3h7HisT3BXh9rWlHaWk2", "collection_timestamp": 1741902902, "internalHostname": "stackstate.hpc9u27", "metrics": [], "service_checks": [], "health": [ { "consistency_model": "REPEAT_SNAPSHOTS", "stop_snapshot": {}, "stream": { "urn": "urn:health:kubewarden:audit_scanner", "sub_stream_id": "kubewarden1" }, "check_states": [] } ], "topologies": [] }