@@ -130,164 +130,6 @@ service:
<SubSectionSeparator />
- ## Grafana Labs
-
- ### Grafana Cloud Agent
-
- The Grafana Cloud Agent is an all-in-one agent for collecting metrics, logs,
- and traces, built on top of production-ready open source software. Its
- batteries-included design removes the need to install more than one piece of
- software by shipping common monitoring integrations out of the box. Native
- integrations with OpenTelemetry, Prometheus, and OpenMetrics ensure full
- compatibility with the CNCF ecosystem while adding extra functionality for
- scaling collection.
-
- <SubSectionSeparator />
-
- ### Requirements
-
- Dependencies:
-
- * OpenTelemetry Collector
- * Prometheus
- * Cortex
- * Loki
-
- Runtime requirements:
-
- * etcd or Consul for scaling functionality (see the sketch below)
-
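- The scaling functionality relies on the Agent's clustered scraping mode, in
- which scrape configs are stored in a key-value store and sharded across many
- Agents. The sketch below is illustrative only and assumes an etcd cluster
- reachable at `etcd:2379`; consult the configuration reference linked at the
- end of this section for the authoritative schema.
-
- ```yaml lineNumbers=true
- prometheus:
-   # Illustrative sketch: distribute scrape jobs across a cluster of Agents
-   # instead of defining them inline. Assumes etcd is reachable at etcd:2379.
-   scraping_service:
-     enabled: true
-     kvstore:
-       # Consul may be used here instead of etcd.
-       store: etcd
-       etcd:
-         endpoints:
-           - etcd:2379
- ```
-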
- <SubSectionSeparator />
-
- ### Configuration
-
- Create a YAML configuration file with the following contents to configure the
- Agent to collect metrics from a Prometheus- or OpenMetrics-compatible endpoint:
-
- ```yaml lineNumbers=true
- prometheus:
-   # Configures where scraped metric data will be stored. Data read from this
-   # directory will be forwarded using the configured remote_write settings.
-   wal_directory: /tmp/agent
-   configs:
-     # An individual config here identifies a single Prometheus instance. Each
-     # instance holds its own set of scrape jobs and remote_write settings. The
-     # name specified here must be unique.
-     - name: primary
-       scrape_configs:
-         # A job is responsible for discovering a set of targets to scrape.
-         # static_configs specifies a static list of host:port pairs to use
-         # as targets. Replace "host:port" below with the hostname and port
-         # number of the OpenMetrics-compatible endpoint to collect metrics
-         # from.
-         - job_name: collect
-           static_configs:
-             - targets: ['host:port']
-       # remote_write configures where to send metrics using the Prometheus
-       # Remote Write format. If authentication is used, replace USERNAME and
-       # PASSWORD accordingly. Otherwise, omit the basic_auth block.
-       remote_write:
-         - url: REMOTE_WRITE_URL
-           basic_auth:
-             username: USERNAME
-             password: PASSWORD
- ```
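-
- One way to run the Agent against this file is Docker, using the published
- `grafana/agent` image. A minimal sketch, assuming the configuration above is
- saved as `agent.yaml` (the file names and the in-container config path here
- are illustrative; check the image documentation for your release):
-
- ```yaml lineNumbers=true
- # docker-compose.yml -- hypothetical example.
- version: "3"
- services:
-   agent:
-     image: grafana/agent:latest
-     volumes:
-       # Mount the configuration file at the image's default config path.
-       - ./agent.yaml:/etc/agent/agent.yaml
-       # Persist the WAL configured under wal_directory above.
-       - agent-wal:/tmp/agent
- volumes:
-   agent-wal: {}
- ```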
-
- Integrations may be enabled to also collect metrics from common systems. Add the
- following block to your configuration file:
-
- ```yaml lineNumbers=true
- integrations:
-   # Enable the "node_exporter" integration, which runs
-   # https://github.com/prometheus/node_exporter in-process and scrapes metrics
-   # from it.
-   node_exporter:
-     enabled: true
-   # Configured identically to remote_write from the previous section. This
-   # section must exist if integrations are used.
-   prometheus_remote_write:
-     - url: REMOTE_WRITE_URL
-       basic_auth:
-         username: USERNAME
-         password: PASSWORD
- ```
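-
- Multiple integrations can be enabled side by side within the same
- `integrations` block. As an illustrative sketch (the full list of supported
- integrations is in the configuration reference linked at the end of this
- section), the `agent` integration reports the Agent's own operational
- metrics:
-
- ```yaml lineNumbers=true
- integrations:
-   # Scrape the Agent's own operational metrics in-process.
-   agent:
-     enabled: true
-   node_exporter:
-     enabled: true
-   prometheus_remote_write:
-     - url: REMOTE_WRITE_URL
- ```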
-
- Log support may be added with a `loki` block. Use the following code block
- to collect all log files from `/var/log`:
-
- ```yaml lineNumbers=true
- loki:
-   positions:
-     # Configures where to store byte offsets of recently read files.
-     filename: /tmp/positions.yaml
-   clients:
-     # Configures the location to send logs using the Loki write API.
-     # If authentication is not needed, omit the basic_auth block.
-     - url: LOKI_URL
-       basic_auth:
-         username: USERNAME
-         password: PASSWORD
-   scrape_configs:
-     # Configures a scrape job to find log files to collect from. Targets
-     # must be set to localhost.
-     #
-     # __path__ may be set to any glob-patterned filepath where log files are
-     # stored.
-     - job_name: varlogs
-       static_configs:
-         - targets:
-             - localhost
-           labels:
-             __path__: /var/log/*log
- ```
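-
- Static labels may be attached next to `__path__` so logs can later be queried
- per application. A hypothetical job for an application logging to
- `/var/log/myapp/` (the `myapp` name and path are placeholders):
-
- ```yaml lineNumbers=true
- loki:
-   positions:
-     filename: /tmp/positions.yaml
-   clients:
-     - url: LOKI_URL
-   scrape_configs:
-     - job_name: myapp
-       static_configs:
-         - targets:
-             - localhost
-           labels:
-             # "job" becomes a queryable Loki label on every collected line.
-             job: myapp
-             __path__: /var/log/myapp/*.log
- ```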
-
- Support for collecting traces may be added with a `tempo` block. Use the
- following code block to collect spans and forward them to an OTLP-compatible
- endpoint:
-
- ```yaml lineNumbers=true
- tempo:
-   receivers:
-     # Configure Jaeger support. grpc accepts spans on port 14250,
-     # thrift_binary on 6832, thrift_compact on 6831, and thrift_http on
-     # 14268. Specific port numbers may be customized within the config for
-     # each protocol.
-     jaeger:
-       protocols:
-         grpc:
-         thrift_binary:
-         thrift_compact:
-         thrift_http:
-     # Configure OpenCensus support. Spans can be sent to port 55678 by
-     # default.
-     opencensus:
-     # Configure OTLP support. Spans can be sent to port 4317 by default.
-     otlp:
-       protocols:
-         grpc:
-         http:
-     # Configure Zipkin support. Spans can be sent to port 9411 by default.
-     zipkin:
-
-   # Configures where to send collected spans and traces. Outgoing spans are
-   # sent in the OTLP format. Replace OTLP_ENDPOINT with the host:port of the
-   # target OTLP-compatible host. If the OTLP endpoint uses authentication,
-   # configure USERNAME and PASSWORD accordingly. Otherwise, omit the
-   # basic_auth section.
-   push_config:
-     endpoint: OTLP_ENDPOINT
-     basic_auth:
-       username: USERNAME
-       password: PASSWORD
- ```
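-
- With the `otlp` receiver enabled, an OTLP-instrumented application only needs
- its exporter pointed at the Agent. An illustrative fragment using the standard
- OpenTelemetry environment variable, where `agent-host` is a placeholder for
- wherever the Agent is reachable:
-
- ```yaml lineNumbers=true
- # Hypothetical container-spec fragment; adapt to your deployment.
- env:
-   - name: OTEL_EXPORTER_OTLP_ENDPOINT
-     # The Agent's OTLP gRPC receiver listens on port 4317 by default.
-     value: "http://agent-host:4317"
- ```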
-
- A full configuration reference is located in the [Grafana Cloud Agent code
- repository](https://github.com/grafana/agent/tree/main/docs/configuration).
-
- <SubSectionSeparator />
-
## Honeycomb
Honeycomb supports OpenTelemetry by ingesting OTLP directly, so users of the AWS Distro for OpenTelemetry (ADOT) can send tracing data directly to