The data-science-prometheus-restricted deployment provides secure, namespace-scoped access to Prometheus metrics through a two-proxy architecture that enforces authentication, authorization, and query filtering.
```
User Request
    ↓
kube-rbac-proxy (port 8443)
 ├─ Authenticates via bearer token (TokenReview)
 ├─ Extracts 'namespace' query parameter
 ├─ Performs SubjectAccessReview for metrics.k8s.io/pods in that namespace
 └─ Denies if unauthorized (403 Forbidden)
    ↓
prom-label-proxy (port 9091)
 ├─ Validates namespace parameter is present (400 if missing)
 ├─ Rewrites PromQL queries to inject namespace label: {namespace="value"}
 └─ Forwards to upstream Prometheus
    ↓
Prometheus (prometheus-operated service on port 9090)
```
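The decision chain above can be sketched as follows. This is an illustrative Python model, not the actual Go implementations; `token_is_valid` and `user_can_get_metrics` stand in for the real TokenReview and SubjectAccessReview API calls:

```python
from typing import Callable, Optional, Tuple

def handle_request(
    token: Optional[str],
    namespace: Optional[str],
    query: str,
    token_is_valid: Callable[[str], bool],
    user_can_get_metrics: Callable[[str, str], bool],
) -> Tuple[int, str]:
    """Model of the two-proxy decision chain: token_is_valid stands in
    for a TokenReview call, user_can_get_metrics for a SubjectAccessReview
    on metrics.k8s.io/pods in the requested namespace."""
    # kube-rbac-proxy: authenticate the bearer token
    if token is None or not token_is_valid(token):
        return 401, "Unauthorized"
    # kube-rbac-proxy: authorize against the requested namespace
    if namespace is not None and not user_can_get_metrics(token, namespace):
        return 403, "Forbidden"
    # prom-label-proxy: the namespace parameter is mandatory
    if namespace is None:
        return 400, "Bad Request. The request or configuration is malformed."
    # prom-label-proxy: inject the namespace label and forward upstream
    return 200, f'{query}{{namespace="{namespace}"}}'
```

For example, `handle_request("tok", "my-namespace", "up", lambda t: True, lambda t, ns: True)` returns `(200, 'up{namespace="my-namespace"}')`, mirroring the happy path in the diagram.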
**kube-rbac-proxy**

Purpose: Authenticates users and performs authorization checks

How it works:
- Validates the bearer token via a TokenReview API call to the Kubernetes API server
- Extracts the `namespace` query parameter from the incoming request
- Performs a SubjectAccessReview to verify the user has `get` permission on `pods` in the `metrics.k8s.io` API group for the specified namespace
- Allows the request (proxies to prom-label-proxy) or denies it (403 Forbidden) based on the result

Configuration: `data-science-prometheus-restricted-config` ConfigMap
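The ConfigMap's exact contents are not shown here; as an illustrative sketch, kube-rbac-proxy's config format supports deriving the authorization attributes from a query parameter roughly like this (the specific values are assumptions, not the verified contents of the ConfigMap):

```yaml
# Illustrative sketch - the actual data-science-prometheus-restricted-config
# contents may differ. kube-rbac-proxy substitutes {{ .Value }} with the
# value of the query parameter named under rewrites.
authorization:
  rewrites:
    byQueryParameter:
      name: "namespace"
  resourceAttributes:
    apiGroup: "metrics.k8s.io"
    resource: "pods"
    namespace: "{{ .Value }}"
```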
**prom-label-proxy**

Purpose: Enforces namespace isolation at the query level

How it works:
- Receives proxied requests from kube-rbac-proxy (already authenticated/authorized)
- Validates that the `namespace` parameter is present (returns 400 Bad Request if missing)
- Rewrites the PromQL query to inject a namespace label filter
- Forwards the rewritten query to upstream Prometheus
Example query transformation:

- Original query: `up`
- Request: `GET /api/v1/query?query=up&namespace=my-namespace`
- Rewritten query: `up{namespace="my-namespace"}`
This ensures users can ONLY see metrics from namespaces they're authorized for, even if they try to craft queries with different namespace labels.
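A rough model of that rewriting is sketched below (illustrative Python, not the actual prom-label-proxy implementation, and handling only simple vector selectors): any `namespace` matcher the caller supplies is discarded and replaced with the server-side value.

```python
import re

def inject_namespace(query: str, namespace: str) -> str:
    """Illustrative model of prom-label-proxy's label injection:
    force a namespace="..." matcher onto a simple vector selector,
    overriding any namespace matcher the caller supplied."""
    m = re.fullmatch(r"(\w+)(?:\{([^}]*)\})?", query.strip())
    if m is None:
        raise ValueError("selector too complex for this sketch")
    metric, labels = m.group(1), m.group(2) or ""
    # Drop any caller-supplied namespace matcher, then inject ours.
    kept = [l.strip() for l in labels.split(",")
            if l.strip() and not l.strip().startswith("namespace")]
    kept.append(f'namespace="{namespace}"')
    return f"{metric}{{{','.join(kept)}}}"

print(inject_namespace('up{namespace="other"}', "my-namespace"))
# up{namespace="my-namespace"}
```

Even a query that already carries `namespace="other"` comes out pinned to the caller's authorized namespace.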
All queries MUST include the `namespace` parameter:

```
GET /api/v1/query?query=<promql>&namespace=<namespace-name>
```

Examples:

```shell
# ✅ Correct
curl "https://prometheus-route/api/v1/query?query=up&namespace=my-namespace" \
  -H "Authorization: Bearer $(oc whoami -t)"

# ❌ Incorrect - Missing namespace parameter
curl "https://prometheus-route/api/v1/query?query=up" \
  -H "Authorization: Bearer $(oc whoami -t)"
```

Error Message:

```
Bad Request. The request or configuration is malformed.
```
Cause: Query is missing the required `namespace` parameter

Example:

```shell
curl "https://.../api/v1/query?query=up"
```

Fix: Add the namespace parameter:

```shell
curl "https://.../api/v1/query?query=up&namespace=my-namespace"
```

Technical Detail: The prom-label-proxy requires the namespace parameter to perform query rewriting. Without it, it cannot inject the namespace label filter.
Error Message:

```
Forbidden (user=<username>, verb=get, resource=pods, subresource=)
```
Cause: User lacks permissions for metrics.k8s.io/pods resource in the requested namespace
Diagnostic:

```shell
# Check who has metrics permissions
oc adm policy who-can get pods.metrics.k8s.io -n my-namespace
```

Fix: Grant an appropriate role:

```shell
# Option 1: Grant cluster-monitoring-view (metrics-only access)
oc adm policy add-role-to-user cluster-monitoring-view alice -n my-namespace

# Option 2: Grant view (read-only access including metrics)
oc adm policy add-role-to-user view alice -n my-namespace

# Option 3: Grant edit (read-write access including metrics)
oc adm policy add-role-to-user edit alice -n my-namespace
```

Note: OpenShift's built-in roles (`view`, `edit`, `admin`) already include `metrics.k8s.io` permissions.
Error Message:

```
Unauthorized
```

Cause: Missing or invalid bearer token

Fix: Include a valid authentication token:

```shell
TOKEN=$(oc whoami -t)
curl "https://.../api/v1/query?query=up&namespace=my-namespace" \
  -H "Authorization: Bearer $TOKEN"
```

Response (query succeeds but returns no data):

```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
```

Possible Causes:
- No metrics are being scraped in that namespace
- Wrong namespace name (typo)
- No pods running or exposing metrics
- Metrics haven't been collected yet
Diagnostic:
```shell
# Verify namespace exists and has pods
oc get pods -n my-namespace

# Check if ServiceMonitors are configured
oc get servicemonitors -n my-namespace

# Verify namespace spelling
oc get namespaces | grep my-namespace
```

The kube-rbac-proxy performs authorization checks with these attributes:
```yaml
resourceAttributes:
  apiGroup: metrics.k8s.io
  resource: pods
  namespace: "<value from query parameter>"
  verb: get
```

This is different from checking pod visibility (core API):

```yaml
resourceAttributes:
  apiGroup: "" # Core API
  resource: pods
  namespace: "<value from query parameter>"
  verb: get
```

OpenShift built-in roles that include `metrics.k8s.io/pods` permission:
- ✅ `view` - Read-only access to namespace resources, including metrics
- ✅ `edit` - Modify namespace resources, including metrics access
- ✅ `admin` - Full namespace control, including metrics access
- ✅ `cluster-monitoring-view` - Metrics-specific read access
Custom roles need explicit grants:
```yaml
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get"]
```

The deployment implements multiple security layers:
- Network Level:
  - NetworkPolicy restricts ingress to the OpenShift router and Alertmanager only
  - TLS termination with certificate validation
- Authentication Level (kube-rbac-proxy):
  - Bearer token validation via Kubernetes TokenReview
  - Delegated authentication via the `system:auth-delegator` ClusterRole
- Authorization Level (kube-rbac-proxy):
  - SubjectAccessReview for every request
  - Namespace-scoped permission checks
- Query Level (prom-label-proxy):
  - Mandatory namespace parameter
  - Query rewriting to inject namespace filters
  - Prevents cross-namespace metric visibility
- Container Level:
  - readOnlyRootFilesystem
  - No privilege escalation
  - All capabilities dropped
  - Runs as non-root
- RBAC Level:
  - ServiceAccount with minimal permissions (only cluster-monitoring-view)
  - No cluster-admin or elevated privileges
Conservative resource limits to handle typical query loads:
| Container | CPU Limit | Memory Limit | CPU Request | Memory Request |
|---|---|---|---|---|
| kube-rbac-proxy | 100m | 128Mi | 50m | 64Mi |
| prom-label-proxy | 100m | 128Mi | 50m | 64Mi |
Rationale: Provides ~40% headroom for:
- Concurrent SubjectAccessReview API calls
- Complex PromQL query parsing and rewriting
- TLS termination and certificate validation
- Connection handling under load
Recommendation: Monitor actual usage and tune based on your query patterns and concurrency levels.
Check the kube-rbac-proxy container logs:

```shell
oc logs -n <monitoring-namespace> deployment/data-science-prometheus-restricted -c kube-rbac-proxy --tail=50
```

Look for:
- `Unable to authenticate the request` - TokenReview failures
- `Failed to make webhook authenticator request` - API server connectivity issues
- Authorization denials with user/namespace details

Check the prom-label-proxy container logs:

```shell
oc logs -n <monitoring-namespace> deployment/data-science-prometheus-restricted -c prom-label-proxy --tail=50
```

Look for:
- Query rewriting errors
- Upstream connection failures
- Namespace parameter validation errors
```shell
cat <<EOF | oc create -f - -o json
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  resourceAttributes:
    namespace: my-namespace
    verb: get
    group: metrics.k8s.io
    resource: pods
  user: alice
EOF
```

Check the `status` field in the output - `"allowed": true` means the user has permission.
```shell
# Check ingress rules
oc get networkpolicy data-science-prometheus-proxy-ingress -n <monitoring-namespace> -o yaml

# Verify pod selector matches
oc get pods -n <monitoring-namespace> -l app=data-science-prometheus-restricted
```

- Monitoring RBAC Documentation - General monitoring access control
- Kubernetes NetworkPolicy
- kube-rbac-proxy Documentation
- prom-label-proxy Documentation