@@ -133,6 +133,164 @@ service:
<SubSectionSeparator />
+ ## Grafana Labs
+
+ ### Grafana Cloud Agent
+
+ The Grafana Cloud Agent is an all-in-one agent for collecting metrics, logs,
+ and traces, built on top of production-ready open source software. Its
+ batteries-included design removes the need to install more than one piece of
+ software by shipping common monitoring integrations out of the box. Through
+ native integrations with OpenTelemetry, Prometheus, and OpenMetrics, it
+ ensures full compatibility with the CNCF ecosystem while adding extra
+ functionality for scaling collection.
+
+ <SubSectionSeparator />
+
+ ### Requirements
+
+ Dependencies:
+
+ * OpenTelemetry Collector
+ * Prometheus
+ * Cortex
+ * Loki
+
+ Runtime requirements:
+
+ * etcd or Consul for the scaling (scraping service) functionality; a rough
+   sketch follows below
+
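+ The scraping service mode shards scrape jobs across multiple agents and
+ stores the assignments in the key-value store. The block below is a rough
+ sketch only, not a definitive configuration: the `scraping_service` schema
+ shown and the etcd endpoint are assumptions to verify against the
+ configuration reference linked at the end of this section.
+
+ ``` yaml lineNumbers=true
+ prometheus:
+   # Sketch: enable clustered scraping. Both the job store and the ring that
+   # distributes work use a key-value store; etcd and the endpoint below are
+   # placeholders, and Consul may be used instead.
+   scraping_service:
+     enabled: true
+     kvstore:
+       store: etcd
+       etcd:
+         endpoints:
+           - etcd:2379
+     lifecycler:
+       ring:
+         kvstore:
+           store: etcd
+           etcd:
+             endpoints:
+               - etcd:2379
+ ```
+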
+ <SubSectionSeparator />
+
+ ### Configuration
+
+ Create a YAML configuration file with the following contents to configure the
+ Agent to collect metrics from a Prometheus- or OpenMetrics-compatible
+ endpoint:
+
+ ``` yaml lineNumbers=true
+ prometheus:
+   # Configures where scraped metric data will be stored. Data read from this
+   # directory will be forwarded using the configured remote_write settings.
+   wal_directory: /tmp/agent
+   configs:
+     # An individual config here identifies a single Prometheus instance. Each
+     # instance holds its own set of scrape jobs and remote_write settings. The
+     # name specified here must be unique.
+     - name: primary
+       scrape_configs:
+         # A job is responsible for discovering a set of targets to scrape.
+         # static_configs specifies a static list of host:port pairs to use
+         # as targets. Replace "host:port" below with the hostname and port
+         # number of the OpenMetrics-compatible endpoint to collect metrics
+         # from.
+         - job_name: collect
+           static_configs:
+             - targets: ['host:port']
+       # remote_write configures where to send metrics using the Prometheus
+       # Remote Write format. If authentication is used, replace USERNAME and
+       # PASSWORD accordingly. Otherwise, omit the basic_auth block.
+       remote_write:
+         - url: REMOTE_WRITE_URL
+           basic_auth:
+             username: USERNAME
+             password: PASSWORD
+ ```
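+
+ Additional endpoints can be collected by adding more entries to the
+ scrape_configs list; each entry follows the standard Prometheus scrape_config
+ schema. The following sketch extends the list above with a hypothetical
+ second job (the job name, scheme, path, and target are placeholders):
+
+ ``` yaml lineNumbers=true
+ # The scrape_configs list from the block above, extended with a second job.
+ # scheme and metrics_path are standard Prometheus scrape_config fields and
+ # default to "http" and "/metrics" when omitted.
+ scrape_configs:
+   - job_name: collect
+     static_configs:
+       - targets: ['host:port']
+   - job_name: app
+     scheme: https
+     metrics_path: /metrics
+     static_configs:
+       - targets: ['app.example.com:443']
+ ```
+
+ With the file in place, the Agent is typically started by pointing it at the
+ configuration, for example with `agent -config.file=agent.yaml`.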
+
+ Integrations may be enabled to also collect metrics from common systems. Add
+ the following block to your configuration file:
+
+ ``` yaml lineNumbers=true
+ integrations:
+   # Enable the "node_exporter" integration, which runs
+   # https://github.com/prometheus/node_exporter in-process and scrapes metrics
+   # from it.
+   node_exporter:
+     enabled: true
+   # Configured identically to remote_write from the previous section. This
+   # section must exist if integrations are used.
+   prometheus_remote_write:
+     - url: REMOTE_WRITE_URL
+       basic_auth:
+         username: USERNAME
+         password: PASSWORD
+ ```
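+
+ Other integrations are enabled the same way. As one more illustration, the
+ sketch below assumes the `agent` integration from the configuration
+ reference, which collects the Agent's own internal metrics:
+
+ ``` yaml lineNumbers=true
+ integrations:
+   # Sketch: the "agent" integration scrapes the Agent's own metrics
+   # in-process, following the same pattern as node_exporter above.
+   agent:
+     enabled: true
+   prometheus_remote_write:
+     - url: REMOTE_WRITE_URL
+ ```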
+
+ Log support may be added with a `loki` block. Use the following code block to
+ collect all log files from `/var/log`:
+
+ ``` yaml lineNumbers=true
+ loki:
+   positions:
+     # Configures where to store byte offsets of recently read files.
+     filename: /tmp/positions.yaml
+   clients:
+     # Configures the location to send logs using the Loki write API.
+     # If authentication is not needed, omit the basic_auth block.
+     - url: LOKI_URL
+       basic_auth:
+         username: USERNAME
+         password: PASSWORD
+   scrape_configs:
+     # Configures a scrape job to find log files to collect from. Targets
+     # must be set to localhost.
+     #
+     # __path__ may be set to any glob-patterned filepath where log files are
+     # stored.
+     - job_name: varlogs
+       static_configs:
+         - targets:
+             - localhost
+           labels:
+             __path__: /var/log/*log
+ ```
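+
+ Additional log jobs follow the same Promtail-style scrape_configs schema, and
+ any label other than `__path__` set under `labels` is attached to every
+ collected line. A sketch of a hypothetical second job (the job name, label,
+ and path are placeholders):
+
+ ``` yaml lineNumbers=true
+ # A second job appended to the loki scrape_configs list above. The "app"
+ # label is attached to each log entry as a Loki label.
+ - job_name: app-logs
+   static_configs:
+     - targets:
+         - localhost
+       labels:
+         app: my-app
+         __path__: /var/log/my-app/*.log
+ ```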
+
+ Support for collecting traces may be added with a `tempo` block. Use the
+ following code block to collect spans and forward them to an OTLP-compatible
+ endpoint:
+
+ ``` yaml lineNumbers=true
+ tempo:
+   receivers:
+     # Configure jaeger support. grpc supports spans over port
+     # 14250, thrift_binary over 6832, thrift_compact over 6831,
+     # and thrift_http over 14268. Specific port numbers may be
+     # customized within the config for the protocol.
+     jaeger:
+       protocols:
+         grpc:
+         thrift_binary:
+         thrift_compact:
+         thrift_http:
+     # Configure opencensus support. Spans can be sent over port 55678
+     # by default.
+     opencensus:
+     # Configure otlp support. Spans can be sent to port 55680 by
+     # default.
+     otlp:
+       protocols:
+         grpc:
+         http:
+     # Configure zipkin support. Spans can be sent to port 9411 by
+     # default.
+     zipkin:
+
+   # Configures where to send collected spans and traces. Outgoing spans are
+   # sent in the OTLP format. Replace OTLP_ENDPOINT with the host:port of the
+   # target OTLP-compatible host. If the OTLP endpoint uses authentication,
+   # configure USERNAME and PASSWORD accordingly. Otherwise, omit the
+   # basic_auth section.
+   push_config:
+     endpoint: OTLP_ENDPOINT
+     basic_auth:
+       username: USERNAME
+       password: PASSWORD
+ ```
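+
+ The prometheus, integrations, loki, and tempo blocks above are all top-level
+ keys of the same file, so a single agent can collect all three signals at
+ once. The skeleton below shows the overall shape only; the `server` block and
+ its port are assumptions to verify against the configuration reference:
+
+ ``` yaml lineNumbers=true
+ # Structural sketch: each block is filled in as shown in the sections above.
+ # The server block configures the agent's own HTTP endpoint; 12345 is a
+ # placeholder port.
+ server:
+   http_listen_port: 12345
+ prometheus:
+   # ... metrics configuration from above ...
+ integrations:
+   # ... integrations configuration from above ...
+ loki:
+   # ... log configuration from above ...
+ tempo:
+   # ... trace configuration from above ...
+ ```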
+
+ A full configuration reference is located in the [Grafana Cloud Agent code
+ repository](https://github.com/grafana/agent/tree/main/docs/configuration).
+
+ <SubSectionSeparator />
+
## Honeycomb
Honeycomb supports OpenTelemetry by ingesting OTLP directly, so users of the AWS Distro for OpenTelemetry (ADOT) can send tracing data directly to