# Create an empty tokens file for use with volumes if required. You can mount a volume at `/etc/pure-fa-om-exporter/` to pass the `tokens.yaml` file to the exporter. The file must be named `tokens.yaml`.
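For illustration, a `tokens.yaml` sketch is shown below. The schema used here (an array name mapping to `address` and `api_token`) is an assumption based on common exporter conventions; check the exporter's own documentation for the exact format your version expects. Both values are placeholders.

```yaml
# Hypothetical tokens.yaml layout -- verify against your exporter version.
arrayname01:
  address: arrayname01.fqdn.com                    # FQDN or IP of the FlashArray (placeholder)
  api_token: 11111111-1111-1111-1111-111111111111  # FlashArray API token (placeholder)
```

The file would then be mounted so it appears at `/etc/pure-fa-om-exporter/tokens.yaml` inside the container.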
3. Configure `/etc/prometheus/prometheus.yml` so that Prometheus uses the OpenMetrics exporter to query the device endpoint.
This is an example [prometheus.yml](../prometheus/prometheus.yml) file.

Let's walk through an example of scraping the `/metrics/array` endpoint.

```yaml
# Scrape job for one Pure Storage FlashArray scraping /metrics/array
# Each Prometheus scrape job requires a name. In this example we have structured the name as `exporter_endpoint_arrayname`.
- job_name: 'purefa_array_arrayname01'
  # Specify the array endpoint from /metrics/array
  metrics_path: /metrics/array
  # Provide the FlashArray authorization API token
  authorization:
    credentials: 11111111-1111-1111-1111-111111111111
  # Provide parameters that tell the exporter which device to connect to. Use an FQDN or IP address.
  params:
    endpoint: ['arrayname01.fqdn.com']

  static_configs:
    # Tell Prometheus which exporter should make the request
    - targets:
        - 10.0.2.10:9490
      # Finally, provide labels for the device.
      labels:
        # instance should be the device name and is used to correlate metrics between different endpoints in Prometheus and Grafana. Ensure it is the same for each endpoint of the same device.
        instance: arrayname01
        # location, site and env are specific to your environment. Feel free to add more labels, but keep these three to minimize changes to Grafana, which expects to use location, site and env as filter variables.
        location: uk
        site: London
        env: production

  # Repeat the above for the endpoints:
  # /metrics/volumes
  # /metrics/hosts
  # /metrics/pods
  # /metrics/directories

  # It is recommended to collect expensive queries, such as /metrics/directories, less frequently.
  scrape_interval: 15m # Set the scrape interval to every 15 minutes. Default is 1 minute. This overrides the global setting.
  scrape_timeout: 15m # Set the scrape timeout shorter than or equal to scrape_interval. Default is 10 seconds.
```
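Under a configuration like the job above, Prometheus combines `metrics_path`, `params`, and the target address into a single scrape URL. As an illustrative sketch (reusing the example values from the job, not reading any real `prometheus.yml`), the request Prometheus issues can be reconstructed like this:

```python
# Sketch: how Prometheus builds the scrape URL from the job configuration.
# target, metrics_path and params mirror the example values above; nothing
# here is read from a real prometheus.yml.
from urllib.parse import urlencode

target = "10.0.2.10:9490"
metrics_path = "/metrics/array"
params = {"endpoint": ["arrayname01.fqdn.com"]}  # passed through as query parameters

scrape_url = f"http://{target}{metrics_path}?{urlencode(params, doseq=True)}"
print(scrape_url)
# http://10.0.2.10:9490/metrics/array?endpoint=arrayname01.fqdn.com
```

Prometheus also sends the `authorization` credentials as a Bearer token header on that request, which is how the exporter receives the FlashArray API token.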