Add health probe, SA, and APM mappingProcessors #2460
What does this PR do?
Adds new mapping processors for the yaml-mapper:
- mapHealthPortWithProbes: Maps custom *.healthPort (if configured properly, otherwise skips mapping)
- mapTraceAgentLivenessProbe: Maps trace-agent liveness probe with tcpSocket.port (if configured properly)
- mapServiceAccountName: Maps serviceAccountName only when rbac.create: false
- mapApmPortToContainerPort: Maps traceAgent hostPort to containerPort

Also:
Motivation
Previously, the yaml-mapper mapped each Helm field directly without additional processing. This resulted in DDA CRs that, when applied, caused pod restarts and failing health checks. The new mapping processors handle each source Helm field in the same manner as the datadog chart by mapping or adding the relevant specs to the DDA CR.
- mapHealthPortWithProbes: fixes failing health checks in the agent, DCA, and CCR
- mapTraceAgentLivenessProbe: fixes failing health check in the trace-agent container
- mapServiceAccountName: fixes missing DCA serviceAccount and failure in the DCA to detect cluster name
- mapApmPortToContainerPort: fixes edge case where a user specifies a custom APM hostPort and containerPort is not updated to match

Additional Notes
Anything else we should know when reviewing?
Minimum Agent Versions
Are there minimum versions of the Datadog Agent and/or Cluster Agent required?
Describe your test plan
- make kubectl-datadog
- Smoke test
Testing the mappingProcessors
1) Configured healthPort maps to probes properly:

1a) All probes that match healthPort → should map
Example values.yaml:
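A minimal sketch, assuming the chart exposes healthPort and probe overrides under agents.containers.agent.* (field paths and the port value 5556 are illustrative assumptions, not the exact values used in testing). Every probe port matches healthPort, so the mapper should map it:

```yaml
# Illustrative values.yaml for 1a (assumed field paths; port value is arbitrary).
# Both probes target the custom healthPort, so the mapper should map it.
agents:
  containers:
    agent:
      healthPort: 5556
      livenessProbe:
        httpGet:
          path: /live
          port: 5556
      readinessProbe:
        httpGet:
          path: /ready
          port: 5556
```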
Checklist:
- DD_HEALTH_PORT env var and their probe httpGet.ports have the correct value

1b) Partial probes or mismatch → should NOT map
- healthPort in the values.yaml (mismatch sketch below)

Checklist:
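For the 1b case, a hedged sketch of a mismatched configuration (same assumed field paths and arbitrary port values as above), where one probe does not match the custom healthPort, so the mapper should skip the mapping:

```yaml
# Illustrative mismatch for 1b (assumed field paths; port values are arbitrary).
# readinessProbe points at a different port, so healthPort should NOT be mapped.
agents:
  containers:
    agent:
      healthPort: 5556
      livenessProbe:
        httpGet:
          path: /live
          port: 5556
      readinessProbe:
        httpGet:
          path: /ready
          port: 5555
```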
2) Trace Agent Liveness Probe
2a) No livenessProbe set → operator sets defaults
- Defaults are set (tcpSocket.port: 8126)

2b) Operator updates livenessProbe to use the custom APM port
Checklist:
2c) Mapper sets a livenessProbe override using the custom APM port (sketch below)
Checklist:
- tcpSocket.port: 9000 and initialDelaySeconds: 20
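To make 2b/2c concrete, a hedged sketch of a values.yaml with a custom APM port and trace-agent livenessProbe, followed by the DDA override the mapper is expected to produce. The chart and CRD field paths here are assumptions based on this test plan, not verbatim test fixtures:

```yaml
# Illustrative values.yaml for 2b/2c (assumed chart field paths).
datadog:
  apm:
    portEnabled: true
    port: 9000
agents:
  containers:
    traceAgent:
      livenessProbe:
        initialDelaySeconds: 20
        tcpSocket:
          port: 9000
```

```yaml
# Expected DDA override (sketch; assumed CRD paths).
spec:
  override:
    nodeAgent:
      containers:
        trace-agent:
          livenessProbe:
            initialDelaySeconds: 20
            tcpSocket:
              port: 9000
```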
3) Service Account Name

3a) rbac.create not set → should NOT map serviceAccountName

3b) rbac.create: false → should map (sketch below)
Checklist:
- kubectl create sa my-cluster-agent-sa
- spec.override.clusterAgent.serviceAccountName: my-cluster-agent-sa
- my-cluster-agent-sa
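A hedged sketch of the 3b configuration and the override it should produce. The serviceAccountName path is taken from the checklist above; the chart-side rbac.create and serviceAccountName paths are assumptions:

```yaml
# Illustrative values.yaml for 3b (assumed chart field paths).
clusterAgent:
  rbac:
    create: false
  serviceAccountName: my-cluster-agent-sa
```

```yaml
# Expected DDA override (sketch), per the checklist above.
spec:
  override:
    clusterAgent:
      serviceAccountName: my-cluster-agent-sa
```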
4) Cluster Token Secret Key Mapping

- Create a secret with a token key: kubectl create secret generic my-custom-dca-token --from-literal token=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

4a) Existing secret set → should map and set the appropriate secret key (sketch after 4c)
- clusterAgentTokenSecret.secretName: my-custom-token

4b) Empty/not set → should NOT map
4c) Token is set → should map
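To illustrate the 4a mapping, a hedged sketch of the expected DDA field for the secret created above. The spec.global.clusterAgentTokenSecret path and the keyName field are assumptions; the secret name and key come from the kubectl command in the setup:

```yaml
# Expected DDA mapping for 4a (sketch; assumed CRD path).
spec:
  global:
    clusterAgentTokenSecret:
      secretName: my-custom-dca-token
      keyName: token
```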
Checklist
- bug, enhancement, refactoring, documentation, tooling, and/or dependencies label
- qa/skip-qa label