
[CONTP-1198] Extract GPU device id from k8s container runtime #45152

Draft
zhuminyi wants to merge 1 commit into main from gpu_k8s_runtime

Conversation

zhuminyi (Contributor) commented Jan 15, 2026

What does this PR do?

Enables GPU device extraction from container runtime configuration (NVIDIA_VISIBLE_DEVICES environment variable) for Kubernetes workloads, with UUID validation to detect and handle user overrides.

Changes

  1. Add GPU utility functions (comp/core/workloadmeta/collectors/util/gpu_util.go); a sketch of these helpers follows this list

    • ExtractGPUDeviceIDsFromEnvMap() - Extract GPU IDs from an env var map (containerd)
    • ExtractGPUDeviceIDsFromEnvVars() - Extract GPU IDs from an env var slice (Docker)
    • IsGPUUUID() - Validate the NVIDIA GPU/MIG UUID format
    • ShouldExtractGPUDeviceIDsFromConfig() - Environment detection (ECS/K8s only)
    • UUID validation in Kubernetes to detect user overrides with a non-UUID format and trigger the fallback
  2. Update containerd collector (comp/core/workloadmeta/collectors/internal/containerd/container_builder.go)

    • Extract GPUDeviceIDs from the container spec env vars
  3. Refactor docker collector (comp/core/workloadmeta/collectors/internal/docker/docker.go) to use the shared util extraction function
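
A minimal sketch of the two extraction helpers, assuming simplified signatures (the real code in comp/core/workloadmeta/collectors/util/gpu_util.go may differ, for example in how it plugs in UUID validation and environment detection):

```go
// Illustrative sketch only; not the actual gpu_util.go implementation.
package util

import "strings"

const nvidiaVisibleDevicesEnvVar = "NVIDIA_VISIBLE_DEVICES"

// ExtractGPUDeviceIDsFromEnvMap reads GPU device IDs from an env-var map
// (the shape the containerd collector sees). The value is a comma-separated
// list, e.g. "GPU-<uuid>" or "GPU-<uuid1>,GPU-<uuid2>".
func ExtractGPUDeviceIDsFromEnvMap(envs map[string]string) []string {
	value, ok := envs[nvidiaVisibleDevicesEnvVar]
	if !ok || value == "" {
		return nil
	}
	ids := strings.Split(value, ",")
	for i := range ids {
		ids[i] = strings.TrimSpace(ids[i])
	}
	return ids
}

// ExtractGPUDeviceIDsFromEnvVars does the same for a "KEY=VALUE" slice
// (the shape the Docker collector sees).
func ExtractGPUDeviceIDsFromEnvVars(envs []string) []string {
	for _, kv := range envs {
		if key, value, found := strings.Cut(kv, "="); found && key == nvidiaVisibleDevicesEnvVar {
			return ExtractGPUDeviceIDsFromEnvMap(map[string]string{key: value})
		}
	}
	return nil
}
```

In the Kubernetes path, the extracted IDs are only trusted after they pass IsGPUUUID (sketched under "GPU UUID Validation" below); otherwise the collector leaves GPUDeviceIDs nil so the PodResources fallback kicks in.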

Motivation

Background: How GPU device mapping works

In Kubernetes, the NVIDIA device plugin handles GPU allocation:

  1. A pod requests the nvidia.com/gpu resource
  2. The device plugin's Allocate() API selects the GPU(s) and returns their UUID(s)
  3. The NVIDIA container toolkit injects NVIDIA_VISIBLE_DEVICES=GPU-<uuid> at container runtime (not in the pod spec)
  4. The agent currently uses the PodResources API as the primary source for GPU-to-container mapping

Why this change

  1. NVIDIA_VISIBLE_DEVICES is what the NVIDIA container runtime actually uses to determine GPU visibility.
  2. The env var is already available at container discovery time, whereas the PodResources API requires an additional gRPC call to the kubelet.
  3. Users can manually set NVIDIA_VISIBLE_DEVICES in their pod spec with values like all, 0, or none. In those non-canonical cases, the agent validates the value and falls back to PodResources API.

GPU UUID Validation

In Kubernetes, the NVIDIA device plugin sets NVIDIA_VISIBLE_DEVICES to GPU UUIDs. However, users can override this in their pod specs. The UUID validation detects these overrides:

| Value | Valid UUID? | Behavior |
| --- | --- | --- |
| GPU-aec058b1-c18e-236e-c14d-49d2990fda0f | Yes | Use env var |
| MIG-aec058b1-c18e-236e-c14d-49d2990fda0f | Yes | Use env var |
| MIG-GPU-aec058b1-.../0/0 | Yes | Use env var (legacy MIG) |
| `all` | No | Fall back to PodResources API |
| `none`, `void` | No | Fall back to PodResources API |
| `0`, `1`, `0,1` | No | Fall back to PodResources API |

Note: ECS does not validate UUIDs because users cannot override env vars set by the ECS agent.
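
For illustration, IsGPUUUID could be implemented roughly as below; this is a hedged sketch matching the table above, and the actual pattern in gpu_util.go may be stricter or broader:

```go
// Illustrative sketch only; not the actual gpu_util.go implementation.
package util

import "regexp"

var (
	// Canonical forms injected by the NVIDIA device plugin:
	//   GPU-<uuid>               full GPU
	//   MIG-<uuid>               MIG device
	//   MIG-GPU-<uuid>/<gi>/<ci> legacy MIG notation
	canonicalGPUUUID = regexp.MustCompile(`^(GPU|MIG)-[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$`)
	legacyMIGDevice  = regexp.MustCompile(`^MIG-GPU-[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}/\d+/\d+$`)
)

// IsGPUUUID reports whether value is a canonical NVIDIA GPU/MIG identifier.
// Values such as "all", "none", "void", or bare indexes ("0", "1", "0,1")
// fail the check, signalling a user override and triggering the fallback
// to the PodResources API.
func IsGPUUUID(value string) bool {
	return canonicalGPUUUID.MatchString(value) || legacyMIGDevice.MatchString(value)
}
```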

GPU Discovery Priority

| Priority | Source | When Used |
| --- | --- | --- |
| 1 | GPUDeviceIDs (runtime) | ECS, K8s with valid UUID from NVIDIA device plugin |
| 2 | PodResources API | K8s fallback (GKE, user override, or env var not available) |
| 3 | procfs (/proc/PID/environ) | Docker standalone |
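
A hypothetical sketch of how this priority order plays out per container (field and helper names here are invented for illustration; the real selection lives in pkg/gpu/containers, see MatchContainerDevices in the agent logs below):

```go
// Standalone illustration of the discovery priority; not agent code.
package main

import "fmt"

// container is a pared-down stand-in for the workloadmeta container entity.
type container struct {
	ID           string
	GPUDeviceIDs []string // populated from NVIDIA_VISIBLE_DEVICES by the collector
}

// Stubs for the two fallbacks; in the agent these query the kubelet
// PodResources gRPC API and /proc/<pid>/environ respectively.
func queryPodResourcesAPI(c container) []string  { return nil }
func readEnvironFromProcfs(c container) []string { return nil }

// resolveGPUDeviceIDs mirrors the priority table above: runtime config first,
// then the PodResources API, then procfs.
func resolveGPUDeviceIDs(c container) ([]string, string) {
	if len(c.GPUDeviceIDs) > 0 {
		return c.GPUDeviceIDs, "runtime (NVIDIA_VISIBLE_DEVICES from config)"
	}
	if ids := queryPodResourcesAPI(c); len(ids) > 0 {
		return ids, "pod_resources_api"
	}
	return readEnvironFromProcfs(c), "procfs"
}

func main() {
	c := container{ID: "c9b8db4a", GPUDeviceIDs: []string{"GPU-00cc6634-6a30-b312-2dc9-731a61cb17a9"}}
	ids, source := resolveGPUDeviceIDs(c)
	fmt.Println(ids, source) // prints the runtime source for this container
}
```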

Testing

Test Environment: EKS (Kubernetes + containerd)

Setup:

  • EKS cluster with a GPU node group and the NVIDIA device plugin installed
inv aws.create-eks --stack-name my-gpu-test --gpu-node-group

Test Case 1: Normal GPU pod

apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-normal
spec:
  containers:
  - name: cuda-test
    image: nvidia/cuda:12.0.0-base-ubuntu22.04
    command: ["sleep", "infinity"]
    resources:
      limits:
        nvidia.com/gpu: "1"

Verification - Agent workload-list:

=== Entity container sources(merged):[node_orchestrator runtime] id: c9b8db4a575876a20b6091c7cbe37fb1b88d7458cbec7e6928b5ee68cf2a3bf2 ===
----------- Entity Meta -----------
Name: cuda-test
----------- Resources -----------
GPUVendor: [nvidia]
----------- Allocated Resources -----------
Name: nvidia.com/gpu, ID: GPU-00cc6634-6a30-b312-2dc9-731a61cb17a9
----------- GPU Info -----------
GPU Device IDs: [GPU-00cc6634-6a30-b312-2dc9-731a61cb17a9]

Verification - Agent logs:

 2026-01-16 06:26:24 UTC | CORE | DEBUG | (pkg/gpu/containers/containers.go:76 in MatchContainerDevices) | GPU device source for container c9b8db4a575876a20b6091c7cbe37fb1b88d7458cbec7e6928b5ee68cf2a3bf2: runtime (NVIDIA_VISIBLE_DEVICES from config)

Result: The GPU device was extracted from the container runtime config (NVIDIA_VISIBLE_DEVICES in the containerd spec). The NVIDIA device plugin sets this env var via the Allocate() API.



Test Case 2: User override with NVIDIA_VISIBLE_DEVICES=all

apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-override
spec:
  containers:
  - name: cuda-test
    image: nvidia/cuda:12.0.0-base-ubuntu22.04
    command: ["sleep", "infinity"]
    env:
      - name: NVIDIA_VISIBLE_DEVICES
        value: "all"  # User override
    resources:
      limits:
        nvidia.com/gpu: "1"

Verification - Agent workload-list:

=== Entity container sources(merged):[node_orchestrator runtime] id: 2fb4e10564c1742ddd2501ac800b6a2f01061d7d11208edbefeec0f6d085d35e ===
----------- Entity Meta -----------
Name: cuda-test
----------- Resources -----------
GPUVendor: [nvidia]
----------- Allocated Resources -----------
Name: nvidia.com/gpu, ID: GPU-00cc6634-6a30-b312-2dc9-731a61cb17a9

Verification - Agent logs:

GPU device source for container 2fb4e10564c1742ddd2501ac800b6a2f01061d7d11208edbefeec0f6d085d35e: pod_resources_api

Result: The agent detected that all is not a valid UUID → returned nil for GPUDeviceIDs → fell back to the PodResources API for the correct GPU assignment. Note: there is no GPU Device IDs section in the workload-list output (the field is nil).

github-actions bot added the medium review (PR review might take time) and team/container-platform (The Container Platform Team) labels on Jan 15, 2026
agent-platform-auto-pr bot commented Jan 15, 2026

Static quality checks

✅ Please find below the results from static quality gates
Comparison made with ancestor ff0abbe
📊 Static Quality Gates Dashboard

Successful checks

Info

| Quality gate | Change | Size (prev → curr → max) |
| --- | --- | --- |
| agent_deb_amd64_fips | +4.0 KiB (0.00% increase) | 700.550 → 700.554 → 704.000 |
| agent_rpm_amd64_fips | +4.0 KiB (0.00% increase) | 700.537 → 700.541 → 703.990 |
| agent_rpm_arm64_fips | +4.0 KiB (0.00% increase) | 682.938 → 682.942 → 688.480 |
| agent_suse_amd64_fips | +4.0 KiB (0.00% increase) | 700.537 → 700.541 → 703.990 |
| agent_suse_arm64_fips | +4.0 KiB (0.00% increase) | 682.938 → 682.942 → 688.480 |
| docker_agent_amd64 | -17.42 KiB (0.00% reduction) | 767.468 → 767.451 → 770.720 |
| docker_agent_arm64 | -17.45 KiB (0.00% reduction) | 773.619 → 773.602 → 780.200 |
| docker_agent_jmx_amd64 | -17.45 KiB (0.00% reduction) | 958.347 → 958.329 → 961.600 |
| docker_agent_jmx_arm64 | -17.45 KiB (0.00% reduction) | 953.217 → 953.200 → 959.800 |
| docker_cluster_agent_amd64 | -17.42 KiB (0.01% reduction) | 180.715 → 180.698 → 181.080 |
| docker_cluster_agent_arm64 | -17.42 KiB (0.01% reduction) | 196.557 → 196.540 → 198.490 |
| dogstatsd_deb_amd64 | +8.0 KiB (0.03% increase) | 30.027 → 30.035 → 30.610 |
| dogstatsd_deb_arm64 | +4.0 KiB (0.01% increase) | 28.176 → 28.180 → 29.110 |
| dogstatsd_rpm_amd64 | +8.0 KiB (0.03% increase) | 30.027 → 30.035 → 30.610 |
| dogstatsd_suse_amd64 | +8.0 KiB (0.03% increase) | 30.027 → 30.035 → 30.610 |
16 successful checks with minimal change (< 2 KiB)
| Quality gate | Current Size |
| --- | --- |
| agent_deb_amd64 | 705.273 MiB |
| agent_heroku_amd64 | 326.916 MiB |
| agent_msi | 571.536 MiB |
| agent_rpm_amd64 | 705.259 MiB |
| agent_rpm_arm64 | 686.832 MiB |
| agent_suse_amd64 | 705.259 MiB |
| agent_suse_arm64 | 686.832 MiB |
| docker_cws_instrumentation_amd64 | 7.135 MiB |
| docker_cws_instrumentation_arm64 | 6.689 MiB |
| docker_dogstatsd_amd64 | 38.808 MiB |
| docker_dogstatsd_arm64 | 37.128 MiB |
| iot_agent_deb_amd64 | 42.987 MiB |
| iot_agent_deb_arm64 | 40.112 MiB |
| iot_agent_deb_armhf | 40.689 MiB |
| iot_agent_rpm_amd64 | 42.987 MiB |
| iot_agent_suse_amd64 | 42.987 MiB |
On-wire sizes (compressed)
| Quality gate | Change | Size (prev → curr → max) |
| --- | --- | --- |
| agent_deb_amd64 | -22.0 KiB (0.01% reduction) | 173.353 → 173.332 → 174.490 |
| agent_deb_amd64_fips | +16.77 KiB (0.01% increase) | 172.239 → 172.255 → 173.750 |
| agent_heroku_amd64 | -3.92 KiB (0.00% reduction) | 87.119 → 87.115 → 88.450 |
| agent_msi | +24.0 KiB (0.02% increase) | 142.902 → 142.926 → 143.020 |
| agent_rpm_amd64 | -43.31 KiB (0.02% reduction) | 176.217 → 176.175 → 177.660 |
| agent_rpm_amd64_fips | +30.68 KiB (0.02% increase) | 174.916 → 174.946 → 176.600 |
| agent_rpm_arm64 | -4.88 KiB (0.00% reduction) | 159.343 → 159.338 → 161.260 |
| agent_rpm_arm64_fips | +4.33 KiB (0.00% increase) | 158.754 → 158.758 → 160.550 |
| agent_suse_amd64 | -43.31 KiB (0.02% reduction) | 176.217 → 176.175 → 177.660 |
| agent_suse_amd64_fips | +30.68 KiB (0.02% increase) | 174.916 → 174.946 → 176.600 |
| agent_suse_arm64 | -4.88 KiB (0.00% reduction) | 159.343 → 159.338 → 161.260 |
| agent_suse_arm64_fips | +4.33 KiB (0.00% increase) | 158.754 → 158.758 → 160.550 |
| docker_agent_amd64 | +28.5 KiB (0.01% increase) | 261.025 → 261.053 → 262.450 |
| docker_agent_arm64 | +12.83 KiB (0.01% increase) | 250.055 → 250.068 → 252.630 |
| docker_agent_jmx_amd64 | +21.17 KiB (0.01% increase) | 329.672 → 329.692 → 331.080 |
| docker_agent_jmx_arm64 | +17.46 KiB (0.01% increase) | 314.676 → 314.693 → 317.270 |
| docker_cluster_agent_amd64 | +11.21 KiB (0.02% increase) | 63.835 → 63.846 → 64.490 |
| docker_cluster_agent_arm64 | +10.57 KiB (0.02% increase) | 60.121 → 60.132 → 61.170 |
| docker_cws_instrumentation_amd64 | neutral | 2.994 MiB |
| docker_cws_instrumentation_arm64 | neutral | 2.726 MiB |
| docker_dogstatsd_amd64 | neutral | 15.026 MiB |
| docker_dogstatsd_arm64 | -3.99 KiB (0.03% reduction) | 14.354 → 14.351 → 14.830 |
| dogstatsd_deb_amd64 | neutral | 7.944 MiB |
| dogstatsd_deb_arm64 | neutral | 6.824 MiB |
| dogstatsd_rpm_amd64 | neutral | 7.956 MiB |
| dogstatsd_suse_amd64 | neutral | 7.956 MiB |
| iot_agent_deb_amd64 | neutral | 11.257 MiB |
| iot_agent_deb_arm64 | neutral | 9.633 MiB |
| iot_agent_deb_armhf | neutral | 9.823 MiB |
| iot_agent_rpm_amd64 | neutral | 11.278 MiB |
| iot_agent_suse_amd64 | neutral | 11.278 MiB |

cit-pr-commenter bot commented Jan 15, 2026

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 41775049-f520-49d6-8eb5-0763576e7ca0

Baseline: ff0abbe
Comparison: 184cfcf
Diff

Optimization Goals: ✅ No significant changes detected

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| | docker_containers_cpu | % cpu utilization | -0.87 | [-3.84, +2.09] | 1 | Logs |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| | otlp_ingest_logs | memory utilization | +0.85 | [+0.74, +0.96] | 1 | Logs |
| | ddot_logs | memory utilization | +0.46 | [+0.40, +0.52] | 1 | Logs |
| | quality_gate_logs | % cpu utilization | +0.37 | [-1.10, +1.84] | 1 | Logs, bounds checks dashboard |
| | tcp_syslog_to_blackhole | ingress throughput | +0.18 | [+0.11, +0.26] | 1 | Logs |
| | ddot_metrics_sum_delta | memory utilization | +0.18 | [-0.04, +0.40] | 1 | Logs |
| | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.10 | [-0.13, +0.33] | 1 | Logs |
| | file_to_blackhole_500ms_latency | egress throughput | +0.05 | [-0.34, +0.44] | 1 | Logs |
| | file_to_blackhole_1000ms_latency | egress throughput | +0.04 | [-0.37, +0.45] | 1 | Logs |
| | docker_containers_memory | memory utilization | +0.03 | [-0.04, +0.10] | 1 | Logs |
| | file_to_blackhole_100ms_latency | egress throughput | +0.03 | [-0.02, +0.07] | 1 | Logs |
| | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.08, +0.09] | 1 | Logs |
| | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.14, +0.12] | 1 | Logs |
| | uds_dogstatsd_to_api_v3 | ingress throughput | -0.02 | [-0.15, +0.11] | 1 | Logs |
| | ddot_metrics_sum_cumulative | memory utilization | -0.03 | [-0.19, +0.13] | 1 | Logs |
| | file_to_blackhole_0ms_latency | egress throughput | -0.04 | [-0.53, +0.45] | 1 | Logs |
| | quality_gate_idle | memory utilization | -0.11 | [-0.16, -0.07] | 1 | Logs, bounds checks dashboard |
| | ddot_metrics | memory utilization | -0.17 | [-0.40, +0.05] | 1 | Logs |
| | quality_gate_idle_all_features | memory utilization | -0.19 | [-0.23, -0.15] | 1 | Logs, bounds checks dashboard |
| | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.22 | [-0.28, -0.17] | 1 | Logs |
| | file_tree | memory utilization | -0.58 | [-0.64, -0.52] | 1 | Logs |
| | otlp_ingest_metrics | memory utilization | -0.70 | [-0.85, -0.55] | 1 | Logs |
| | docker_containers_cpu | % cpu utilization | -0.87 | [-3.84, +2.09] | 1 | Logs |
| | quality_gate_metrics_logs | memory utilization | -0.96 | [-1.17, -0.74] | 1 | Logs, bounds checks dashboard |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- | --- |
| | docker_containers_cpu | simple_check_run | 10/10 | |
| | docker_containers_memory | memory_usage | 10/10 | |
| | docker_containers_memory | simple_check_run | 10/10 | |
| | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
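
Stated compactly (a paraphrase of the three criteria above, not the detector's exact implementation):

```latex
% An experiment is flagged as a regression only when all three conditions hold.
\text{regression} \iff
\left|\widehat{\Delta\,\mathrm{mean}\,\%}\right| \ge 5.00\%
\;\land\;
0 \notin \mathrm{CI}_{90\%}\left(\Delta\,\mathrm{mean}\,\%\right)
\;\land\;
\neg\,\mathrm{erratic}
```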

Replicate Execution Details

We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.

Note: In the tables below we bucket failures by experiment, variant, and failure type. For each bucket we list the replicate indexes that failed, annotated with how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed by OOM kills: replicate 0 failed 8 executions and replicate 1 failed 6 executions, all with the same failure mode.

| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
| --- | --- | --- | --- | --- | --- |
| experiment_with_failures | baseline | 0 (x8), 1 (x6) | Oom killed | | Debug Dashboard |

The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.

❌ Retried Profiling Replicate Execution Failures (target internal profiling)

Note: Profiling replicas may still be executing. See the debug dashboard for up to date status.

| Experiment | Variant | Replicates | Failure | Debug Dashboard |
| --- | --- | --- | --- | --- |
| quality_gate_idle_all_features | baseline | 11 (x3) | Oom killed | Debug Dashboard |
| quality_gate_idle_all_features | comparison | 11 (x3) | Oom killed | Debug Dashboard |

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.

zhuminyi force-pushed the gpu_k8s_runtime branch 2 times, most recently from 73fc17c to 71eafe3, on January 16, 2026 00:16
zhuminyi changed the title from "Extract GPU device id from container runtime" to "[CONTP-1198] Extract GPU device id from k8s container runtime" on Jan 16, 2026
zhuminyi marked this pull request as ready for review on January 16, 2026 06:37
zhuminyi requested review from a team as code owners on January 16, 2026 06:37
zhuminyi added this to the 7.76.0 milestone on Jan 16, 2026
zhuminyi added the qa/done (QA done before merge and regressions are covered by tests) label on Jan 16, 2026
zhuminyi marked this pull request as draft on January 20, 2026 03:51

Labels

changelog/no-changelog · medium review (PR review might take time) · qa/done (QA done before merge and regressions are covered by tests) · team/container-platform (The Container Platform Team)
