[CONTP-1254] chore(autodiscovery): Add new EndpointSlices AD listener and providers #45949
Conversation
Static quality checks
✅ Please find below the results from static quality gates.
Successful checks
26 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector Results
Baseline: 0134072
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | -3.56 | [-6.55, -0.57] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_logs | % cpu utilization | +1.64 | [+0.11, +3.17] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | +1.48 | [+1.39, +1.58] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.71 | [+0.47, +0.95] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.70 | [+0.60, +0.79] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | +0.54 | [+0.39, +0.69] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.24 | [+0.18, +0.29] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.15 | [-0.05, +0.35] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.06 | [-0.01, +0.13] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.06 | [-0.42, +0.54] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.02 | [-0.11, +0.15] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | -0.00 | [-0.07, +0.07] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.11, +0.10] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.01 | [-0.14, +0.13] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.02 | [-0.44, +0.41] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.04 | [-0.41, +0.33] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.07 | [-0.12, -0.03] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.07 | [-0.12, -0.03] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.13 | [-0.17, -0.10] | 1 | Logs bounds checks dashboard |
| ➖ | quality_gate_metrics_logs | memory utilization | -0.23 | [-0.44, -0.03] | 1 | Logs bounds checks dashboard |
| ➖ | file_tree | memory utilization | -0.31 | [-0.36, -0.26] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.46 | [-0.62, -0.30] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.57 | [-0.80, -0.34] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | -3.56 | [-6.55, -0.57] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
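The decision rule above is simple enough to state in code. Here is a minimal illustrative sketch in Go; the struct and function names are hypothetical, not part of the Regression Detector:

```go
package main

import "fmt"

// experimentResult is a hypothetical container for one experiment's outcome.
type experimentResult struct {
	deltaMeanPct float64 // estimated Δ mean %
	ciLo, ciHi   float64 // bounds of the 90.00% confidence interval on Δ mean %
	erratic      bool    // whether the experiment's configuration marks it "erratic"
}

// isRegression applies the three criteria listed above: |Δ mean %| ≥ 5.00%,
// a confidence interval that does not contain zero, and a configuration that
// is not marked "erratic".
func isRegression(r experimentResult) bool {
	bigEnough := r.deltaMeanPct >= 5.0 || r.deltaMeanPct <= -5.0
	ciExcludesZero := r.ciLo > 0 || r.ciHi < 0
	return bigEnough && ciExcludesZero && !r.erratic
}

func main() {
	// docker_containers_cpu from the tables above: Δ mean % = -3.56, CI [-6.55, -0.57].
	// The CI excludes zero, but |Δ mean %| < 5.00%, so it is not flagged.
	fmt.Println(isRegression(experimentResult{deltaMeanPct: -3.56, ciLo: -6.55, ciHi: -0.57}))
}
```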
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
All contributors have signed the CLA ✍️ ✅
```go
// NewKubeEndpointSlicesListener returns the kube endpointslices implementation of the ServiceListener interface
func NewKubeEndpointSlicesListener(options ServiceListernerDeps) (ServiceListener, error) {
```
This listener is functionally equivalent to listeners/kube_endpoints.go, so referencing that listener while reviewing could be helpful.
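For reviewers who have not looked at the endpoints listener recently, the informer wiring such a listener typically needs looks roughly like the sketch below. This is illustrative client-go code assuming the structure mirrors listeners/kube_endpoints.go, not the PR's actual code; the function is hypothetical and the service-creation logic is elided:

```go
package listeners

import (
	"time"

	discv1 "k8s.io/api/discovery/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchEndpointSlices sketches the shared-informer wiring: watch
// discovery.k8s.io/v1 EndpointSlices and react to add/delete events, the
// same shape as the Endpoints watch in listeners/kube_endpoints.go.
func watchEndpointSlices(kubeClient kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(kubeClient, 5*time.Minute)
	sliceInformer := factory.Discovery().V1().EndpointSlices().Informer()

	sliceInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			slice := obj.(*discv1.EndpointSlice)
			_ = slice // emit a KubeEndpointService per endpoint address (elided)
		},
		DeleteFunc: func(obj interface{}) {
			slice := obj.(*discv1.EndpointSlice)
			_ = slice // unschedule the services created for this slice (elided)
		},
	})

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}
```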
```go
// NewKubeEndpointSlicesConfigProvider returns a new ConfigProvider connected to apiserver using EndpointSlices.
// Connectivity is not checked at this stage to allow for retries, Collect will do it.
// Using GetAPIClient (no wait) as Client should already be initialized by Cluster Agent main entrypoint before
func NewKubeEndpointSlicesConfigProvider(_ *pkgconfigsetup.ConfigurationProviders, telemetryStore *telemetry.Store) (types.ConfigProvider, error) {
```
Similarly, this provider closely follows providers/kube_endpoints.go and is meant as a drop-in replacement for it.
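The "GetAPIClient (no wait)" pattern in the doc comment looks roughly like the following; a minimal sketch assuming the provider mirrors providers/kube_endpoints.go. The helper name and error wrapping are hypothetical:

```go
package providers

import (
	"fmt"

	"github.com/DataDog/datadog-agent/pkg/util/kubernetes/apiserver"
)

// newAPIClientNoWait is a hypothetical helper: fetch the API client without
// waiting, since the Cluster Agent entrypoint should already have initialized
// it. Connectivity is deliberately not verified here; Collect is the retry
// point, per the doc comment above.
func newAPIClientNoWait() (*apiserver.APIClient, error) {
	cl, err := apiserver.GetAPIClient()
	if err != nil {
		return nil, fmt.Errorf("cannot connect to apiserver: %w", err)
	}
	return cl, nil
}
```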
triviajon left a comment:
lgtm, left 2 nits and 1 question
```go
func (ac *AutoConfig) start() {
	listeners.RegisterListeners(ac.serviceListenerFactories)
	providers.RegisterProviders(ac.providerCatalog)
	useEndpointSlices := pkgconfigsetup.Datadog().GetBool("kubernetes_use_endpoint_slices")
```
Should this also be gated behind a K8s server version check, like v1.21+?
```go
func KubeServerVersion(discoveryCl discovery.DiscoveryInterface, retryTimeout time.Duration) (*version.Info, error) {
```
Yes. We have a follow-up ticket to address this case.
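For context on the question, a gate built on top of KubeServerVersion could look roughly like this. An illustrative sketch only: the helper name is hypothetical, and the v1.21 cutoff reflects discovery.k8s.io/v1 EndpointSlices going GA in Kubernetes 1.21:

```go
package apiserver

import (
	"strconv"
	"strings"
	"time"

	"k8s.io/client-go/discovery"
)

// supportsEndpointSlicesV1 is a hypothetical gate: discovery.k8s.io/v1
// EndpointSlices are only served by Kubernetes v1.21+, so callers could fall
// back to the Endpoints-based listener on older servers (or on errors).
func supportsEndpointSlicesV1(discoveryCl discovery.DiscoveryInterface) bool {
	info, err := KubeServerVersion(discoveryCl, 10*time.Second)
	if err != nil {
		return false
	}
	major, err1 := strconv.Atoi(info.Major)
	// Some distributions report a minor version like "21+", so trim the suffix.
	minor, err2 := strconv.Atoi(strings.TrimSuffix(info.Minor, "+"))
	if err1 != nil || err2 != nil {
		return false
	}
	return major > 1 || (major == 1 && minor >= 21)
}
```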
| "github.com/DataDog/datadog-agent/pkg/util/kubernetes/apiserver" | ||
| "github.com/DataDog/datadog-agent/pkg/util/log" | ||
| discv1 "k8s.io/api/discovery/v1" | ||
|
|
||
| v1 "k8s.io/api/core/v1" | ||
| ) |
nit: the linter doesn't pick up that this import is in the wrong block
| "github.com/DataDog/datadog-agent/pkg/util/kubernetes/apiserver" | |
| "github.com/DataDog/datadog-agent/pkg/util/log" | |
| discv1 "k8s.io/api/discovery/v1" | |
| v1 "k8s.io/api/core/v1" | |
| ) | |
| "github.com/DataDog/datadog-agent/pkg/util/kubernetes/apiserver" | |
| "github.com/DataDog/datadog-agent/pkg/util/log" | |
| v1 "k8s.io/api/core/v1" | |
| discv1 "k8s.io/api/discovery/v1" | |
| ) |
```go
	servicesInformer       apiserver.InformerName = "v1/services"
	crdInformer            apiserver.InformerName = "v1/crd"
	endpointsInformer      apiserver.InformerName = "v1/endpoints"
	endpointSlicesInformer apiserver.InformerName = "discovery.v1/endpointslices"
```
nit:

```diff
-	endpointSlicesInformer apiserver.InformerName = "discovery.v1/endpointslices"
+	endpointSlicesInformer apiserver.InformerName = "discovery.k8s.io/v1/endpointslices"
```
What does this PR do?
Adds a new Autodiscovery `EndpointSlices` listener and provider, gated behind the `kubernetes_use_endpoint_slices` config flag, defaulting to false.

The internal AD service representation for an Endpoint, `KubeEndpointService`, remains the same. What has changed is how a service and its check configurations are collected. The EndpointSlices provider queries discovery/v1 EndpointSlices, matching AD annotation checks with each endpoint address (`slice.Endpoints[*].Addresses[*]`) to create a check config for each endpoint. Similarly, the EndpointSlices listener will create a `KubeEndpointService` for each endpoint associated with an AD-annotated service or with `l.targetAllEndpoints = true`.
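To make that collection path concrete, here is a minimal sketch of walking `slice.Endpoints[*].Addresses[*]` to produce one check config per endpoint address. The EndpointSlice type is the real k8s.io/api one; `buildCheckConfig` stands in for the provider's actual template resolution and is hypothetical:

```go
package main

import (
	"fmt"

	discv1 "k8s.io/api/discovery/v1"
)

// configsForSlice is an illustrative sketch: walk
// slice.Endpoints[*].Addresses[*] and emit one check config per address, as
// described above. buildCheckConfig is a hypothetical placeholder.
func configsForSlice(slice *discv1.EndpointSlice) []string {
	var configs []string
	for _, ep := range slice.Endpoints {
		for _, addr := range ep.Addresses {
			// In the real provider this resolves the AD annotations on the
			// owning service against this address.
			configs = append(configs, buildCheckConfig(addr))
		}
	}
	return configs
}

func buildCheckConfig(addr string) string { // hypothetical
	return fmt.Sprintf("instance for %s", addr)
}

func main() {
	slice := &discv1.EndpointSlice{
		Endpoints: []discv1.Endpoint{{Addresses: []string{"10.0.0.1", "10.0.0.2"}}},
	}
	fmt.Println(configsForSlice(slice))
}
```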
Config generation optimization (deferred)

I explored an optimization to generate one template config per service (instead of one config per endpoint IP) to reduce memory overhead for services with many endpoints. However, that change would require modifying the global `configresolver.Resolve()` function, or some other shared codepath, to populate the scheduled check's ADIdentifiers with `pod_uid` identifiers so that the node agent receiving the cluster check would schedule and tag the check appropriately. Making such a change carried some unknown risks, because `Resolve()` is shared across all config <-> service resolution and isn't specific to endpoint checks.

To limit scope creep and minimize risk, I'm deferring this optimization to a follow-up PR after the core EndpointSlices migration is done and validated.
Motivation
As of Kubernetes version 1.33, the Endpoints API was deprecated in favour of EndpointSlices, which partition k8s endpoint addresses into N EndpointSlices. The Agent should use the latest API groups to ensure long-term stability.

Describe how you validated your changes
Deploy a service with endpoints (an example annotated service is sketched after these steps).

(old) Autodiscovery using deprecated Endpoints
- Deploy the agent with the default `kubernetes_use_endpoint_slices` (false).
- Confirm the DCA schedules the cluster check.
- Confirm the node agent runs the scheduled check with pod tags.

(new) Autodiscovery listener/provider using EndpointSlices
- Deploy the Agent with `kubernetes_use_endpoint_slices` enabled.
- Confirm the DCA schedules a cluster check for each endpoint.
- Confirm the node agent runs the scheduled checks with tags.
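For reference, a service that both code paths can discover carries Datadog's `ad.datadoghq.com/endpoints.*` AD annotations. A minimal sketch using client-go types; the check choice (`http_check`) and instance values are illustrative only:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleService sketches a service annotated for endpoint checks. The
// annotation keys are Datadog's documented endpoints.* AD annotations; the
// check name and instance values are illustrative.
func exampleService() *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "my-nginx",
			Namespace: "default",
			Annotations: map[string]string{
				"ad.datadoghq.com/endpoints.check_names":  `["http_check"]`,
				"ad.datadoghq.com/endpoints.init_configs": `[{}]`,
				"ad.datadoghq.com/endpoints.instances":    `[{"name": "nginx", "url": "http://%%host%%"}]`,
			},
		},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"app": "nginx"},
			Ports:    []v1.ServicePort{{Port: 80}},
		},
	}
}

func main() {
	fmt.Println(exampleService().Name)
}
```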
Additional Notes