Optimize the performance of PrometheusExporter by using scrape_job params settings
Prometheus provides the ability to send custom parameters as part of each scrape job. These parameters must be entered in the 'params' section of the scrape job definition in the Prometheus YAML configuration file.
```yaml
# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

...

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]
```
With this feature you can selectively reduce the number of time series that are queried from pmcollector, transferred over the network, and stored in Prometheus.
Example:
```yaml
- job_name: 'GPFSDiskCap_cluster1'
  scrape_interval: 86400s
  honor_timestamps: true
  metrics_path: '/metrics_gpfs_diskcap'
  params:
    gpfs_diskpool_name: ['data']
  scheme: http
  static_configs:
    - targets: ['9.152.187.254:9250']
```
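Since `params` entries are passed as HTTP URL parameters (see above), the scrape request Prometheus issues for this job is equivalent to the following minimal Python sketch, which reuses the target address and metrics path from the example:

```python
import urllib.request

# Prometheus appends the 'params' entries to the scrape URL as query
# parameters, so the job above results in a request equivalent to:
url = "http://9.152.187.254:9250/metrics_gpfs_diskcap?gpfs_diskpool_name=data"
with urllib.request.urlopen(url, timeout=60) as response:
    # Prints the exposition-format metrics restricted to the 'data' pool.
    print(response.read().decode())
```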
This feature can also be used to split a scrape job that collects metrics for a specific target across all time series into several smaller jobs. Each of these jobs then covers only the group of time series selected by its params settings, so a single long-running query over a huge amount of data does not overwhelm the system.
Example:
```yaml
- job_name: 'GPFSDiskCap_cluster1_pool_data'
  scrape_interval: 86400s
  honor_timestamps: true
  metrics_path: '/metrics_gpfs_diskcap'
  params:
    gpfs_diskpool_name: ['data']
  scheme: http
  static_configs:
    - targets: ['9.152.187.254:9250']

- job_name: 'GPFSDiskCap_cluster1_pool_system'
  scrape_interval: 86400s
  honor_timestamps: true
  metrics_path: '/metrics_gpfs_diskcap'
  params:
    gpfs_diskpool_name: ['system']
  scheme: http
  static_configs:
    - targets: ['9.152.187.254:9250']
```
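If a target exposes many storage pools, maintaining these near-identical job definitions by hand becomes tedious and error-prone. The following Python sketch generates one scrape job per pool; the `diskcap_jobs` helper and the PyYAML dependency are illustrative assumptions, not part of the bridge or of Prometheus:

```python
from typing import Dict, List

import yaml  # PyYAML, e.g. 'pip install PyYAML'


def diskcap_jobs(target: str, pools: List[str]) -> List[Dict]:
    """Build one GPFSDiskCap scrape job per storage pool (hypothetical helper).

    Job naming, interval, and metrics path follow the example above.
    """
    return [
        {
            "job_name": f"GPFSDiskCap_cluster1_pool_{pool}",
            "scrape_interval": "86400s",
            "honor_timestamps": True,
            "metrics_path": "/metrics_gpfs_diskcap",
            "params": {"gpfs_diskpool_name": [pool]},
            "scheme": "http",
            "static_configs": [{"targets": [target]}],
        }
        for pool in pools
    ]


# Emit the scrape_configs entries for pasting into the Prometheus YAML file.
print(yaml.dump(diskcap_jobs("9.152.187.254:9250", ["data", "system"]), sort_keys=False))
```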
The applicable list of labels and values for each scrape target can be easily determined using the '/filters' endpoint of the REST API. Please refer to the IBM Storage Scale bridge for Grafana wiki for more information on helpful REST API endpoints.
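For example, a quick way to inspect the endpoint is shown below. This is a minimal sketch: it assumes the bridge answers a plain GET on `/filters` at the same host and port used in the examples above and returns a readable body; consult the wiki for the exact request and response format.

```python
import urllib.request

# Minimal sketch: list the filter labels and values the bridge knows about.
# Assumes a plain GET on '/filters' at the address used in the examples above;
# see the project wiki for the exact request and response format.
with urllib.request.urlopen("http://9.152.187.254:9250/filters", timeout=30) as response:
    print(response.read().decode())
```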
Visit the IBM Storage Scale Knowledge Center for more information about the latest product updates.