
Optimize the performance of the PrometheusExporter by using the scrape job 'params' settings


Prometheus can send custom HTTP URL parameters with each scrape request. These parameters are entered in the 'params' section of the scrape job definition in the Prometheus YAML configuration file.

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

...

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

With this feature you can selectively reduce the number of time series queried from pmcollector, transferred over the network, and stored in Prometheus.

Example:

  - job_name: 'GPFSDiskCap_cluster1'
    scrape_interval: 86400s
    honor_timestamps: true
    metrics_path: '/metrics_gpfs_diskcap'
    params:
        gpfs_diskpool_name: ['data']
    scheme: http
    static_configs:
    - targets: ['9.152.187.254:9250']
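
Prometheus appends the entries from 'params' to the scrape URL as HTTP query parameters. As a quick check of what the target returns for a given parameter, the request issued for the job above can be reproduced manually, for example with the following Python sketch. The target address, metrics path, and pool name are taken from the example configuration and must be adjusted to your environment.

# Minimal sketch: reproduce the scrape request Prometheus issues for the
# 'GPFSDiskCap_cluster1' job above. The target address, metrics path and
# gpfs_diskpool_name value come from the example configuration; adjust them
# to your environment.
import urllib.parse
import urllib.request

target = "9.152.187.254:9250"
metrics_path = "/metrics_gpfs_diskcap"
params = {"gpfs_diskpool_name": ["data"]}

# Prometheus sends the params as URL query parameters, e.g.
# http://9.152.187.254:9250/metrics_gpfs_diskcap?gpfs_diskpool_name=data
url = "http://" + target + metrics_path + "?" + urllib.parse.urlencode(params, doseq=True)

with urllib.request.urlopen(url, timeout=60) as response:
    print(response.read().decode("utf-8"))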

This feature can also be used to split a scrape job that collects metrics for a specific target across all time series into several jobs, each responsible for a subset of the time series selected by its params settings. This avoids overwhelming the system with long-running queries caused by the large amount of data returned in a single scrape.

Example:

  - job_name: 'GPFSDiskCap_cluster1_pool_data'
    scrape_interval: 86400s
    honor_timestamps: true
    metrics_path: '/metrics_gpfs_diskcap'
    params:
        gpfs_diskpool_name: ['data']
    scheme: http
    static_configs:
    - targets: ['9.152.187.254:9250']

  - job_name: 'GPFSDiskCap_cluster1_pool_system'
    scrape_interval: 86400s
    honor_timestamps: true
    metrics_path: '/metrics_gpfs_diskcap'
    params:
        gpfs_diskpool_name: ['system']
    scheme: http
    static_configs:
    - targets: ['9.152.187.254:9250']

The applicable list of labels and values for each scrape target can easily be determined using the '/filters' endpoint of the REST API. Refer to the IBM Storage Scale bridge for Grafana wiki for more information on helpful REST API endpoints.
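
For example, the available filter labels and values could be fetched with the short Python sketch below. The host, port, and JSON response format are assumptions based on the examples above; the authoritative description of the '/filters' endpoint is in the bridge wiki.

# Hedged sketch: query the '/filters' endpoint to list applicable labels and
# values per scrape target. Host, port and the JSON response format are
# assumptions; consult the IBM Storage Scale bridge for Grafana wiki for the
# authoritative endpoint description.
import json
import urllib.request

url = "http://9.152.187.254:9250/filters"

with urllib.request.urlopen(url, timeout=60) as response:
    body = response.read().decode("utf-8")

# Pretty-print if the endpoint returns JSON, otherwise show the raw body.
try:
    print(json.dumps(json.loads(body), indent=2))
except ValueError:
    print(body)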
