@@ -285,19 +285,19 @@ Let's start by bringing up a Prometheus instance ourselves, to scrape our application.
 
 .. code-block:: yaml
 
-    # prometheus.yml
+    # /tmp/prometheus.yml
     scrape_configs:
       - job_name: 'my-app'
-      scrape_interval: 5s
-      static_configs:
-      - targets: ['localhost:8000']
+        scrape_interval: 5s
+        static_configs:
+          - targets: ['localhost:8000']
 
 And start a docker container for it:
 
 .. code-block:: sh
 
     # --net=host will not work properly outside of Linux.
-    docker run --net=host -v ./prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus\
+    docker run --net=host -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus \
         --log.level=debug --config.file=/etc/prometheus/prometheus.yml
 
 For our Python application, we will need to install an exporter specific to Prometheus:
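
Before wiring the application up to that exporter, it can be worth confirming that Prometheus picked up the ``my-app`` scrape job defined above. A minimal sketch using only the Python standard library, assuming Prometheus is listening on its default port 9090 (reachable on localhost because of ``--net=host``):

.. code-block:: python

    import json
    import urllib.request

    # Ask the Prometheus HTTP API for its currently active scrape targets.
    # Port 9090 is the Prometheus default; --net=host exposes it on localhost.
    with urllib.request.urlopen("http://localhost:9090/api/v1/targets") as response:
        active_targets = json.load(response)["data"]["activeTargets"]

    for target in active_targets:
        # Expect a "my-app" job scraping http://localhost:8000/metrics; its health
        # will read "down" until the instrumented application is actually running.
        print(target["labels"].get("job"), target["scrapeUrl"], target["health"])
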
@@ -371,15 +371,13 @@ To see how this works in practice, let's start the Collector locally. Write the
 
 .. code-block:: yaml
 
-    # otel-collector-config.yaml
+    # /tmp/otel-collector-config.yaml
     receivers:
         opencensus:
             endpoint: 0.0.0.0:55678
     exporters:
         logging:
             loglevel: debug
-            sampling_initial: 10
-            sampling_thereafter: 50
     processors:
         batch:
         queued_retry:
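
YAML is indentation-sensitive, so before starting the container it can be worth sanity-checking the file you just wrote. A small sketch, assuming PyYAML is available and the config was saved to ``/tmp/otel-collector-config.yaml`` as the comment above suggests:

.. code-block:: python

    import yaml  # PyYAML; not part of the standard library

    # Parse the Collector config and show the receiver endpoint it will listen on.
    with open("/tmp/otel-collector-config.yaml") as config_file:
        config = yaml.safe_load(config_file)

    print(config["receivers"]["opencensus"]["endpoint"])  # expected: 0.0.0.0:55678
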
@@ -397,8 +395,8 @@ Start the docker container:
 
 .. code-block:: sh
 
-    docker run -p 55678:55678\
-        -v ./otel-collector-config.yaml:/etc/otel-collector-config.yaml\
+    docker run -p 55678:55678 \
+        -v /tmp/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
         omnition/opentelemetry-collector-contrib:latest \
         --config=/etc/otel-collector-config.yaml
 
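
Before running the example script, it can help to confirm that the Collector is actually accepting connections on the published port. A minimal check using the standard library, assuming the ``-p 55678:55678`` mapping above:

.. code-block:: python

    import socket

    # Open (and immediately close) a TCP connection to the Collector's
    # OpenCensus receiver port published via `docker run -p 55678:55678`.
    with socket.create_connection(("localhost", 55678), timeout=2):
        print("Collector is listening on localhost:55678")
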
@@ -433,6 +431,7 @@ And execute the following script:
     )
     tracer_provider = TracerProvider()
     trace.set_tracer_provider(tracer_provider)
+    span_processor = BatchExportSpanProcessor(span_exporter)
     tracer_provider.add_span_processor(span_processor)
 
     # create a CollectorMetricsExporter
@@ -448,21 +447,27 @@ And execute the following script:
     meter = metrics.get_meter(__name__)
     # controller collects metrics created from meter and exports it via the
     # exporter every interval
-    controller = PushController(meter, collector_exporter, 5)
+    controller = PushController(meter, metric_exporter, 5)
 
     # Configure the tracer to use the collector exporter
     tracer = trace.get_tracer_provider().get_tracer(__name__)
 
     with tracer.start_as_current_span("foo"):
         print("Hello world!")
 
-    counter = meter.create_metric(
-        "requests", "number of requests", "requests", int, Counter, ("environment",),
+    requests_counter = meter.create_metric(
+        name="requests",
+        description="number of requests",
+        unit="1",
+        value_type=int,
+        metric_type=Counter,
+        label_keys=("environment",),
     )
+
     # Labelsets are used to identify key-values that are associated with a specific
     # metric that you want to record. These are useful for pre-aggregation and can
     # be used to store custom dimensions pertaining to a metric
     label_set = meter.get_label_set({"environment": "staging"})
 
-    counter.add(25, label_set)
+    requests_counter.add(25, label_set)
     time.sleep(10)  # give push_controller time to push metrics
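
One detail worth noting in the new ``create_metric`` call: the counter is declared with the ``environment`` label key, so the same instrument can be recorded against several label sets. A short sketch continuing the script above (the ``production`` label set is illustrative and not part of the original example):

.. code-block:: python

    # Continues the script above: `meter` and `requests_counter` already exist.
    staging_labels = meter.get_label_set({"environment": "staging"})
    production_labels = meter.get_label_set({"environment": "production"})

    requests_counter.add(25, staging_labels)      # recorded under environment=staging
    requests_counter.add(100, production_labels)  # recorded under environment=production

    time.sleep(10)  # as above, give the PushController time to export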