# prometheus-bigquery-exporter

[](https://github.com/m-lab/prometheus-bigquery-exporter/releases) [](https://travis-ci.org/m-lab/prometheus-bigquery-exporter) [](https://coveralls.io/github/m-lab/prometheus-bigquery-exporter?branch=master) [](https://godoc.org/github.com/m-lab/prometheus-bigquery-exporter) [](https://goreportcard.com/report/github.com/m-lab/prometheus-bigquery-exporter)

An exporter for converting BigQuery results into Prometheus metrics.

## Limitations: No historical values

Prometheus collects the *current* status of a system as reported by an exporter.
Prometheus then associates the values collected with a timestamp of the time of
collection.

*NOTE:* there is no way to associate historical values with timestamps in the
past with this exporter!

So, the results of queries run by prometheus-bigquery-exporter should represent
a meaningful value at a fixed point in time relative to the time the query is
made, e.g. the total number of tests in a 5 minute window 1 hour ago.
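
For instance, a query like the sketch below reports a count for a fixed 5 minute
window ending one hour before the query runs (the table and column names are
hypothetical):

```sql
-- Hypothetical table and timestamp column, shown only to illustrate a
-- fixed window relative to the time the query runs.
SELECT
  COUNT(*) AS value
FROM
  `my-project.my_dataset.ndt_tests`
WHERE
  start_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 65 MINUTE)
                 AND TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 MINUTE)
```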

## Query Formatting

The prometheus-bigquery-exporter accepts arbitrary BQ queries. However, the
query results must be structured in a predictable way for the exporter to
successfully interpret and convert them into Prometheus metrics.

### Metric names and values

Metric names are derived from the query file name and the query value columns.
The bigquery-exporter identifies value columns by looking for column names
that match the pattern: `value(.*)`. All characters in the matching group
`(.*)` are appended to the metric prefix taken from the query file name.
For example:

* Filename: `bq_ndt_test.sql`
* Metric prefix: `bq_ndt_test`
* Column name: `value_count`
* Final metric: `bq_ndt_test_count`

Value columns are required (at least one):

* `value(.*)` - every query must define a result "value". Values must
  be integers or floats. For a query to return multiple values, prefix each
  column name with "value" and define unique suffixes, as in the sketch below.
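
A minimal sketch of a query that returns two value columns (the inline data and
the `_count`/`_total` suffixes are only examples). Saved as `bq_ndt_test.sql`,
it would produce the metrics `bq_ndt_test_count` and `bq_ndt_test_total`:

```sql
-- Inline example data in place of an actual table.
WITH example_data as (
  SELECT 5 as widgets
  UNION ALL
  SELECT 2 as widgets
)

SELECT
  COUNT(*) as value_count,     -- appended suffix "_count"
  SUM(widgets) as value_total  -- appended suffix "_total"
FROM
  example_data
```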

Label columns are optional:

* If there is more than one result row, then the query must also define labels
  to distinguish each value. Every column name that is not a value column will
  create a label on the resulting metric. For example, results with two
  columns, "machine" and "value", would create metrics with a label named
  "machine" whose value is taken from that row.

Labels must be strings (for non-string columns, see the casting sketch below):

* There is no limit on the number of labels, but you should respect Prometheus
  best practices by limiting label value cardinality.
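
A minimal sketch of casting a numeric column to a string so that it can be used
as a label (the `priority_num` column and the inline data are hypothetical):

```sql
-- CAST the integer column to STRING so it can serve as a metric label.
WITH example_data as (
  SELECT 1 as priority_num, 5 as widgets
  UNION ALL
  SELECT 2 as priority_num, 3 as widgets
)

SELECT
  CAST(priority_num AS STRING) as priority,
  SUM(widgets) as value
FROM
  example_data
GROUP BY
  priority
```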

Duplicate metrics are an error:

* The query must not return multiple rows that are not distinguished by the set
  of labels on each row; rows sharing an identical label set produce duplicate
  metrics, which the exporter reports as an error (see the sketch below).
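
For instance, this sketch (using the same inline data as the example query in
the next section) returns two un-aggregated rows with the label value "b", so
it would produce duplicate metrics:

```sql
-- Two rows share label "b", so their metrics are not distinguishable.
WITH example_data as (
  SELECT "a" as label, 5 as widgets
  UNION ALL
  SELECT "b" as label, 2 as widgets
  UNION ALL
  SELECT "b" as label, 3 as widgets
)

SELECT
  label, widgets as value
FROM
  example_data
```

The example query below avoids this by aggregating with GROUP BY.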

## Example Query

The following query creates a "label" column and groups results by each label
value.

```sql
-- Example data in place of an actual table of values.
WITH example_data as (
  SELECT "a" as label, 5 as widgets
  UNION ALL
  SELECT "b" as label, 2 as widgets
  UNION ALL
  SELECT "b" as label, 3 as widgets
)

SELECT
  label, SUM(widgets) as value
FROM
  example_data
GROUP BY
  label
```

* Save the sample query to a file named "bq_example.sql".
* Start the exporter:

  ```sh
  prometheus-bigquery-exporter -gauge-query bq_example.sql
  ```

* Visit http://localhost:9348/metrics and you will find metrics like:

  ```txt
  bq_example{label="a"} 5
  bq_example{label="b"} 5
  ...
  ```

## Example Configuration

Typical deployments will be in a Kubernetes environment, like GKE.

```sh
# Change to the example directory.
cd example
# Deploy the example query as a configmap and example k8s deployment.
./deploy.sh
```

## Testing

To run the bigquery exporter locally (e.g. to try out a new query), build and
run the docker image locally.

Use the following steps:

1. Build the docker image.

   ```sh
   docker build -t bqx-local -f Dockerfile .
   ```

2. Authenticate using your Google account. Both steps are necessary: the first
   to run gcloud commands (which use user credentials), the second to run the
   bigquery exporter (which uses application default credentials).

   ```sh
   gcloud auth login
   gcloud auth application-default login
   ```

3. Run the image, with forwarded ports and access to gcloud credentials.

   ```sh
   docker run -p 9348:9348 --rm \
     -v $HOME/.config/gcloud:/root/.config/gcloud \
     -v $PWD:/queries -it bqx-local \
     -project $GCLOUD_PROJECT \
     -gauge-query /queries/example/config/bq_example.sql
   ```