Set up Grafana on an OpenShift cluster and integrate it with the OpenShift monitoring stack configured for IBM Storage Scale container native project workload monitoring
You can visualize and monitor Prometheus metrics with Grafana dashboards within an OpenShift cluster when monitoring your own services, such as IBM Storage Scale container native (GPFS) performance metrics, with user-defined projects. See the OpenShift Container Platform monitoring documentation to learn more.
Make sure you have configured the OpenShift monitoring stack for monitoring the IBM Storage Scale container native project. Otherwise, follow these instructions before you proceed with the next step.
Follow these instructions to deploy a Grafana instance using the Red Hat community-powered Grafana Operator.
To allow Grafana to access monitoring data in the OpenShift cluster, bind the cluster-monitoring-view cluster role to the grafana-for-cnsa-sa service account (the -z flag resolves the service account in the current project, so switch to the grafana-for-cnsa namespace first):
oc adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-for-cnsa-sa
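If the binding did not take effect, Grafana will later receive 401/403 errors from Thanos Querier. As a quick sanity check (a sketch; the service account name comes from the command above), you can look the binding up:

```shell
# Sketch: find the cluster role binding that now references the service
# account (grafana-for-cnsa-sa, bound in the step above).
oc get clusterrolebindings -o wide | grep grafana-for-cnsa-sa
```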
Next, create a secret holding the grafana-for-cnsa-sa service account token. (Thanos Querier, which runs in the openshift-monitoring namespace, requires a bearer token for authentication.)
oc apply -f https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-bridge-for-grafana/refs/heads/master/examples/openshift_deployment_scripts/cnsa_workload_monitoring/grafana-sa-token-secret.yml
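As a hedged sketch, you can verify that the token is usable by querying Thanos Querier directly. The secret name grafana-sa-token-secret is an assumption derived from the manifest's file name; the thanos-querier route in openshift-monitoring is part of the standard OpenShift monitoring stack:

```shell
# Assumption: the secret applied above is named "grafana-sa-token-secret"
# and lives in the grafana-for-cnsa namespace; adjust if yours differs.
TOKEN=$(oc get secret grafana-sa-token-secret -n grafana-for-cnsa \
          -o jsonpath='{.data.token}' | base64 -d)

# Thanos Querier requires this bearer token; an authenticated query for
# the "up" metric should return JSON, not a 401/403 error.
HOST=$(oc get route thanos-querier -n openshift-monitoring \
         -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://$HOST/api/v1/query?query=up" | head -c 300
```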
Create the Prometheus GrafanaDatasource
oc apply -f https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-bridge-for-grafana/refs/heads/master/examples/openshift_deployment_scripts/cnsa_workload_monitoring/grafana-prometheus-datasource.yml
Verify that the prometheus-grafanadatasource instance of type GrafanaDatasource has been deployed and discovered by the Grafana server:
oc get GrafanaDataSource prometheus-grafanadatasource -n grafana-for-cnsa -o json | jq '.status'
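The jq '.status' filter simply projects the status subtree out of the resource dump. A purely local illustration (no cluster needed; the field names in the sample payload are hypothetical and may differ between operator versions):

```shell
# Local illustration of the jq filter used above; the sample status
# fields are hypothetical stand-ins.
cat <<'EOF' | jq '.status'
{
  "metadata": { "name": "prometheus-grafanadatasource" },
  "status": { "message": "success" }
}
EOF
```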
Create the GrafanaDashboard resources
oc apply -f https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-bridge-for-grafana/refs/heads/master/examples/openshift_deployment_scripts/cnsa_workload_monitoring/cnsa-openshift-cluster-dashboards.yaml
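Before switching to the web interface, you can confirm the dashboards were created by listing the GrafanaDashboard resources (namespace as in the previous steps) and printing their names with the same jq tooling used earlier:

```shell
# List the GrafanaDashboard custom resources the operator will sync into
# Grafana, printing just their names.
oc get grafanadashboards -n grafana-for-cnsa -o json \
  | jq -r '.items[].metadata.name'
```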
Navigate to the Grafana web interface by retrieving the route with 'oc get routes -n grafana-for-cnsa' and log in. In the sidebar of the Grafana dashboard, hover over the Dashboards icon (the squares) and click Manage. The deployed dashboards are listed in the my-folder folder.
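Instead of copying the host out of the 'oc get routes' table by hand, a small sketch that prints the full Grafana URL (it assumes the operator created a single route in the namespace):

```shell
# Assumption: exactly one route exists in the grafana-for-cnsa namespace.
HOST=$(oc get routes -n grafana-for-cnsa -o jsonpath='{.items[0].spec.host}')
echo "https://$HOST"
```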
Click on the "Openshift cluster and IBM Storage Scale cloud native project overview" dashboard name to start cluster monitoring with a high-level health view.
Visit the IBM Storage Scale Knowledge Center for more information about the latest product updates.