rhobs/obs-mcp


obs-mcp server


obs-mcp is an MCP server that lets LLMs interact with Prometheus or Thanos Querier instances via their HTTP API.

Note

This project was moved from jhadvig/genie-plugin, preserving the commit history.

Quickstart

Run make help to see all available commands.

1. Using Kubeconfig (OpenShift)

The easiest way to connect obs-mcp to your cluster is via a kubeconfig:

  1. Log in to your OpenShift cluster
  2. Run the server with
make run

Or directly:

go run ./cmd/obs-mcp/ --listen 127.0.0.1:9100 --auth-mode kubeconfig --insecure

This will auto-discover the metrics backend in OpenShift. By default, it tries the thanos-querier route first, then falls back to the prometheus-k8s route. Use --metrics-backend to control which route is preferred.
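The preference order can be sketched as follows. This is a simulation only: `METRICS_BACKEND` here is a stand-in for the `--metrics-backend` flag, and the real discovery is performed by the Go server, not this script.

```shell
#!/bin/sh
# Simulated route-preference logic: the server tries the preferred route
# first, then falls back to the other one. METRICS_BACKEND stands in for
# the --metrics-backend flag.
preferred="${METRICS_BACKEND:-thanos}"
if [ "$preferred" = "prometheus" ]; then
  order="prometheus-k8s thanos-querier"
else
  order="thanos-querier prometheus-k8s"
fi
echo "will try routes in order: $order"
```

Running with `METRICS_BACKEND=prometheus` reverses the order, matching the `--metrics-backend prometheus` example below.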

Warning

kubeconfig auth mode requires a bearer token. Run oc whoami -t to verify you have one.

If it fails, either:

  • Re-login with: oc login --token=<token> or oc login -u user -p password
  • Use port-forwarding with --auth-mode header instead
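The token check in the warning above can be wrapped in a small pre-flight script (illustrative only; the hint text is not produced by obs-mcp itself):

```shell
#!/bin/sh
# Pre-flight check: verify the kubeconfig has a bearer token before
# starting the server. Prints a hint if oc is missing or you are logged out.
if oc whoami -t >/dev/null 2>&1; then
  token_status="token present"
else
  token_status="no token: re-login, or use --auth-mode header with port-forwarding"
fi
echo "$token_status"
```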

Example using Prometheus as the preferred backend:

go run ./cmd/obs-mcp/ --listen 127.0.0.1:9100 --auth-mode kubeconfig --metrics-backend prometheus --insecure

Example using Thanos as the preferred backend:

Note

Thanos versions before v0.40.0 do not expose the /api/v1/status/tsdb endpoint, so guardrails that rely on TSDB stats (max-metric-cardinality, max-label-cardinality) will fail. Use --guardrails=none when using older Thanos versions. Thanos v0.40.0+ (#8484) added TSDB status support to the Query component, so guardrails should work if your cluster runs that version or later.

make run-no-guardrails

Or directly:

go run ./cmd/obs-mcp/ --listen 127.0.0.1:9100 --auth-mode kubeconfig --metrics-backend thanos --insecure --guardrails=none
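To decide whether you need `--guardrails=none`, you can probe the TSDB status endpoint directly. This sketch assumes the querier is reachable on localhost:9090 (e.g. via a port-forward):

```shell
#!/bin/sh
# Probe /api/v1/status/tsdb: Thanos < v0.40.0 does not expose it, so the
# cardinality guardrails cannot work there.
if curl -fsS http://localhost:9090/api/v1/status/tsdb >/dev/null 2>&1; then
  guardrails_hint="TSDB stats available: default guardrails should work"
else
  guardrails_hint="no TSDB stats: start with --guardrails=none"
fi
echo "$guardrails_hint"
```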

Important

How the Metrics Backend URL is Determined:

  1. PROMETHEUS_URL environment variable (if set, always used)
  2. --metrics-backend flag route discovery (only in kubeconfig mode)
  3. Default: http://localhost:9090
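The precedence above can be sketched as a small shell function. This is illustrative only: `DISCOVERED_ROUTE` is a hypothetical stand-in for the route found via `--metrics-backend`, and the real resolution happens inside the Go server.

```shell
#!/bin/sh
# Resolve the metrics backend URL using the precedence described above.
resolve_metrics_url() {
  if [ -n "$PROMETHEUS_URL" ]; then
    echo "$PROMETHEUS_URL"            # 1. explicit env var always wins
  elif [ -n "$DISCOVERED_ROUTE" ]; then
    echo "$DISCOVERED_ROUTE"          # 2. route discovered in kubeconfig mode
  else
    echo "http://localhost:9090"      # 3. built-in default
  fi
}

unset PROMETHEUS_URL DISCOVERED_ROUTE
resolve_metrics_url                   # prints http://localhost:9090
```

Setting `PROMETHEUS_URL` before calling the function short-circuits both discovery and the default, which is what the `make run` example below relies on.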

Example using explicit PROMETHEUS_URL:

PROMETHEUS_URL=https://thanos-querier.openshift-monitoring.svc.cluster.local:9091/ make run

2. Port-forwarding alternative

Port-forwards prometheus-k8s-0:9090 to localhost and starts obs-mcp with header auth. Requires oc login:

make run-openshift-pf-prometheus

3. Local Development with Kind (using E2E test infrastructure)

Use the E2E test infrastructure for a fully working local environment with Prometheus:

Setup Kind cluster with Prometheus

make test-e2e-setup

This creates a Kind cluster with:

  • Prometheus Operator
  • Prometheus (accessible at prometheus-k8s.monitoring.svc.cluster.local:9090)
  • Alertmanager

Build and deploy obs-mcp

make test-e2e-deploy

Port forward obs-mcp

kubectl port-forward -n obs-mcp svc/obs-mcp 9100:9100

To connect an MCP client, use http://localhost:9100/mcp.

When done:

make test-e2e-teardown

See TESTING.md for more details.

4. Using prometheus helm chart in local Kubernetes cluster

# sets up Prometheus (and exporters) on your local single-node k8s cluster
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install <release-name> prometheus-community/prometheus

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=<release-name>" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090

go run ./cmd/obs-mcp/ --auth-mode header --insecure --listen :9100 

Testing with curl

You can test the MCP server using curl. The server uses JSON-RPC 2.0 over HTTP.

Tip

For formatted JSON output, pipe the response to jq:

curl ... | jq

List available tools:

curl -X POST http://localhost:9100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | jq

Call the list_metrics tool:

curl -X POST http://localhost:9100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"list_metrics","arguments":{}}}' | jq

Execute a range query (e.g., fetch the up metric for the last hour):

curl -X POST http://localhost:9100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"execute_range_query","arguments":{"query":"up{job=\"prometheus\"}","step":"1m","end":"NOW","duration":"1h"}}}' | jq

Testing with MCP Inspector

Use the MCP Inspector to visually test and debug obs-mcp tools.

Using container compose

Kind
  1. Set up a Kind cluster with Prometheus and Alertmanager (if not already running):

    make test-e2e-setup
  2. Port-forward Prometheus and Alertmanager from your Kind cluster:

    kubectl port-forward -n monitoring pod/prometheus-k8s-0 9090:9090 &
    kubectl port-forward -n monitoring pod/alertmanager-main-0 9093:9093 &

OpenShift
  1. Port-forward Prometheus and Alertmanager from your OpenShift cluster:

    oc port-forward -n openshift-monitoring pod/prometheus-k8s-0 9090:9090 &
    oc port-forward -n openshift-monitoring pod/alertmanager-main-0 9093:9093 &
  2. Start obs-mcp and the Inspector (builds the obs-mcp container and starts both services via compose):

    make inspect

    This uses Docker by default. For podman, use:

    CONTAINER_CLI=podman make inspect
  3. Open the Inspector URL from the logs (includes the auth token):

    http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<token>
    
  4. Connect using Streamable HTTP transport to http://obs-mcp:8080/mcp

Documentation

| Document | Description |
| --- | --- |
| DEPLOYMENT.md | Authentication modes, in-cluster deployment, configuration |
| TOOLS.md | Available MCP tools |
| TESTING.md | Testing guide |

License

Apache 2.0
