5 changes: 3 additions & 2 deletions README.md
@@ -3,8 +3,9 @@
# tracing-honeycomb

This repo contains the source code for:
- [`tracing-distributed`](tracing-distributed/README.md), which contains generic machinery for publishing distributed trace telemetry to arbitrary backends
- [`tracing-honeycomb`](tracing-honeycomb/README.md), which contains a concrete implementation that uses [honeycomb.io](https://honeycomb.io) as a backend
- [`tracing-distributed`](tracing-distributed/README.md), which contains generic machinery for publishing distributed trace telemetry to arbitrary backends.
- [`tracing-honeycomb`](tracing-honeycomb/README.md), which contains a concrete implementation that uses [honeycomb.io](https://honeycomb.io) as a backend.
- [`tracing-jaeger`](tracing-jaeger/README.md), which contains a concrete implementation that uses [jaegertracing.io](https://www.jaegertracing.io/) as a backend.

## Usage

49 changes: 29 additions & 20 deletions tracing-honeycomb/README.md
@@ -2,21 +2,21 @@

# tracing-honeycomb

This crate provides:
- A tracing layer, `TelemetryLayer`, that can be used to publish trace data to honeycomb.io
- Utilities for implementing distributed tracing against the honeycomb.io backend

As a tracing layer, `TelemetryLayer` can be composed with other layers to provide stdout logging, filtering, etc.

## Usage

Add the following to your Cargo.toml to get started.
Add the following to your `Cargo.toml` to get started.

```toml
tracing-honeycomb = "0.1.0"
tracing-honeycomb = "0.2.0"
```

### Propagating distributed tracing metadata
This crate provides:
- A tracing layer, `TelemetryLayer`, that can be used to publish trace data to [honeycomb.io][].
- Utilities for implementing distributed tracing against the honeycomb.io backend.

As a tracing layer, `TelemetryLayer` can be composed with other layers to provide stdout logging, filtering, etc.

#### Propagating distributed tracing metadata

This crate provides two functions for out-of-band interaction with the `TelemetryLayer`:
- `register_dist_tracing_root` registers the current span as the local root of a distributed trace.
@@ -27,48 +27,57 @@ Here's an example of how they might be used together:
2. A child of that span uses `current_dist_trace_ctx` to fetch the current `TraceId` and `SpanId`. It passes these values along with an RPC request, as metadata.
3. The RPC service handler uses the `TraceId` and remote parent `SpanId` provided in the request's metadata to register the handler function's span as a local root of the distributed trace initiated in step 1.

### Registering a global Subscriber
#### Registering a global Subscriber

The following example shows how to create and register a subscriber created by composing `TelemetryLayer` with other layers and the `Registry` subscriber provided by the `tracing_subscriber` crate.

```rust
use tracing_honeycomb::new_honeycomb_telemetry_layer;
use tracing_subscriber::prelude::*;
use tracing_subscriber::{filter::LevelFilter, fmt, registry::Registry};

let honeycomb_config = libhoney::Config {
options: libhoney::client::Options {
api_key: honeycomb_key,
api_key: std::env::var("HONEYCOMB_WRITEKEY").unwrap(),
dataset: "my-dataset-name".to_string(),
..libhoney::client::Options::default()
},
transmission_options: libhoney::transmission::Options::default(),
};

let telemetry_layer = mk_honeycomb_tracing_layer("my-service-name", honeycomb_config);
let telemetry_layer = new_honeycomb_telemetry_layer("my-service-name", honeycomb_config);

// NOTE: the underlying subscriber MUST be the Registry subscriber
let subscriber = registry::Registry::default() // provide underlying span data store
let subscriber = Registry::default() // provide underlying span data store
.with(LevelFilter::INFO) // filter out low-level debug tracing (eg tokio executor)
.with(tracing_subscriber::fmt::Layer::default()) // log to stdout
.with(fmt::Layer::default()) // log to stdout
.with(telemetry_layer); // publish to honeycomb backend


tracing::subscriber::set_global_default(subscriber).expect("setting global default failed");
```

### Testing
#### Testing

Since `current_dist_trace_ctx` and `register_dist_tracing_root` can be expected to return `Ok` as long as some `TelemetryLayer` has been registered as part of the layer/subscriber stack and the current span is active, it's valid to `.expect` them to always succeed and to panic if they do not. As a result, you may find yourself writing code that fails if no distributed tracing context is present. Unit and integration tests covering such code must therefore provide a `TelemetryLayer`. However, you probably don't want to publish telemetry while running unit or integration tests. You can fix this by registering a `TelemetryLayer` constructed using `BlackholeTelemetry`, which discards spans and events without publishing them to any backend.

```rust
let telemetry_layer = mk_honeycomb_blackhole_tracing_layer();
use tracing_honeycomb::new_blackhole_telemetry_layer;
use tracing_subscriber::prelude::*;
use tracing_subscriber::{filter::LevelFilter, fmt, registry::Registry};

let telemetry_layer = new_blackhole_telemetry_layer();

// NOTE: the underlying subscriber MUST be the Registry subscriber
let subscriber = registry::Registry::default() // provide underlying span data store
let subscriber = Registry::default() // provide underlying span data store
.with(LevelFilter::INFO) // filter out low-level debug tracing (eg tokio executor)
.with(tracing_subscriber::fmt::Layer::default()) // log to stdout
.with(fmt::Layer::default()) // log to stdout
.with(telemetry_layer); // publish to blackhole backend

tracing::subscriber::set_global_default(subscriber).expect("setting global default failed");
tracing::subscriber::set_global_default(subscriber).ok();
```

[honeycomb.io]: https://www.honeycomb.io/

## License

MIT
57 changes: 2 additions & 55 deletions tracing-honeycomb/README.tpl
@@ -2,68 +2,15 @@

# {{crate}}

{{readme}}

## Usage

Add the following to your Cargo.toml to get started.
Add the following to your `Cargo.toml` to get started.

```toml
tracing-honeycomb = "{{version}}"
```

### Propagating distributed tracing metadata

This crate provides two functions for out of band interaction with the `TelemetryLayer`
- `register_dist_tracing_root` registers the current span as the local root of a distributed trace.
- `current_dist_trace_ctx` fetches the `TraceId` and `SpanId` associated with the current span.

Here's an example of how they might be used together:
1. Some span is registered as the global tracing root using a newly-generated `TraceId`.
2. A child of that span uses `current_dist_trace_ctx` to fetch the current `TraceId` and `SpanId`. It passes these values along with an RPC request, as metadata.
3. The RPC service handler uses the `TraceId` and remote parent `SpanId` provided in the request's metadata to register the handler function's span as a local root of the distributed trace initiated in step 1.

### Registering a global Subscriber

The following example shows how to create and register a subscriber created by composing `TelemetryLayer` with other layers and the `Registry` subscriber provided by the `tracing_subscriber` crate.

```rust
let honeycomb_config = libhoney::Config {
options: libhoney::client::Options {
api_key: honeycomb_key,
dataset: "my-dataset-name".to_string(),
..libhoney::client::Options::default()
},
transmission_options: libhoney::transmission::Options::default(),
};

let telemetry_layer = mk_honeycomb_tracing_layer("my-service-name", honeycomb_config);

// NOTE: the underlying subscriber MUST be the Registry subscriber
let subscriber = registry::Registry::default() // provide underlying span data store
.with(LevelFilter::INFO) // filter out low-level debug tracing (eg tokio executor)
.with(tracing_subscriber::fmt::Layer::default()) // log to stdout
.with(telemetry_layer); // publish to honeycomb backend


tracing::subscriber::set_global_default(subscriber).expect("setting global default failed");
```

### Testing

Since `TraceCtx::current_trace_ctx` and `TraceCtx::record_on_current_span` can be expected to return `Ok` as long as some `TelemetryLayer` has been registered as part of the layer/subscriber stack and the current span is active, it's valid to `.expect` them to always succeed & to panic if they do not. As a result, you may find yourself writing code that fails if no distributed tracing context is present. This means that unit and integration tests covering such code must provide a `TelemetryLayer`. However, you probably don't want to publish telemetry while running unit or integration tests. You can fix this problem by registering a `TelemetryLayer` constructed using `BlackholeTelemetry`. `BlackholeTelemetry` discards spans and events without publishing them to any backend.

```rust
let telemetry_layer = mk_honeycomb_blackhole_tracing_layer();

// NOTE: the underlying subscriber MUST be the Registry subscriber
let subscriber = registry::Registry::default() // provide underlying span data store
.with(LevelFilter::INFO) // filter out low-level debug tracing (eg tokio executor)
.with(tracing_subscriber::fmt::Layer::default()) // log to stdout
.with(telemetry_layer); // publish to blackhole backend

tracing::subscriber::set_global_default(subscriber).expect("setting global default failed");
```
{{readme}}

## License

78 changes: 70 additions & 8 deletions tracing-honeycomb/src/lib.rs
@@ -6,10 +6,72 @@
)]

//! This crate provides:
//! - A tracing layer, `TelemetryLayer`, that can be used to publish trace data to honeycomb.io
//! - Utilities for implementing distributed tracing against the honeycomb.io backend
//! - A tracing layer, `TelemetryLayer`, that can be used to publish trace data to [honeycomb.io][].
//! - Utilities for implementing distributed tracing against the honeycomb.io backend.
//!
//! As a tracing layer, `TelemetryLayer` can be composed with other layers to provide stdout logging, filtering, etc.
//!
//! ### Propagating distributed tracing metadata
//!
//! This crate provides two functions for out-of-band interaction with the `TelemetryLayer`:
//! - `register_dist_tracing_root` registers the current span as the local root of a distributed trace.
//! - `current_dist_trace_ctx` fetches the `TraceId` and `SpanId` associated with the current span.
//!
//! Here's an example of how they might be used together:
//! 1. Some span is registered as the global tracing root using a newly-generated `TraceId`.
//! 2. A child of that span uses `current_dist_trace_ctx` to fetch the current `TraceId` and `SpanId`. It passes these values along with an RPC request, as metadata.
//! 3. The RPC service handler uses the `TraceId` and remote parent `SpanId` provided in the request's metadata to register the handler function's span as a local root of the distributed trace initiated in step 1.
//!
//! ### Registering a global Subscriber
//!
//! The following example shows how to create and register a subscriber created by composing `TelemetryLayer` with other layers and the `Registry` subscriber provided by the `tracing_subscriber` crate.
//!
//! ```no_run
//! use tracing_honeycomb::new_honeycomb_telemetry_layer;
//! use tracing_subscriber::prelude::*;
//! use tracing_subscriber::{filter::LevelFilter, fmt, registry::Registry};
//!
//! let honeycomb_config = libhoney::Config {
//! options: libhoney::client::Options {
//! api_key: std::env::var("HONEYCOMB_WRITEKEY").unwrap(),
//! dataset: "my-dataset-name".to_string(),
//! ..libhoney::client::Options::default()
//! },
//! transmission_options: libhoney::transmission::Options::default(),
//! };
//!
//! let telemetry_layer = new_honeycomb_telemetry_layer("my-service-name", honeycomb_config);
//!
//! // NOTE: the underlying subscriber MUST be the Registry subscriber
//! let subscriber = Registry::default() // provide underlying span data store
//! .with(LevelFilter::INFO) // filter out low-level debug tracing (eg tokio executor)
//! .with(fmt::Layer::default()) // log to stdout
//! .with(telemetry_layer); // publish to honeycomb backend
//!
//! tracing::subscriber::set_global_default(subscriber).expect("setting global default failed");
//! ```
//!
//! ### Testing
//!
//! Since `current_dist_trace_ctx` and `register_dist_tracing_root` can be expected to return `Ok` as long as some `TelemetryLayer` has been registered as part of the layer/subscriber stack and the current span is active, it's valid to `.expect` them to always succeed and to panic if they do not. As a result, you may find yourself writing code that fails if no distributed tracing context is present. Unit and integration tests covering such code must therefore provide a `TelemetryLayer`. However, you probably don't want to publish telemetry while running unit or integration tests. You can fix this by registering a `TelemetryLayer` constructed using `BlackholeTelemetry`, which discards spans and events without publishing them to any backend.
//!
//! ```
//! use tracing_honeycomb::new_blackhole_telemetry_layer;
//! use tracing_subscriber::prelude::*;
//! use tracing_subscriber::{filter::LevelFilter, fmt, registry::Registry};
//!
//! let telemetry_layer = new_blackhole_telemetry_layer();
//!
//! // NOTE: the underlying subscriber MUST be the Registry subscriber
//! let subscriber = Registry::default() // provide underlying span data store
//! .with(LevelFilter::INFO) // filter out low-level debug tracing (eg tokio executor)
//! .with(fmt::Layer::default()) // log to stdout
//! .with(telemetry_layer); // publish to blackhole backend
//!
//! tracing::subscriber::set_global_default(subscriber).ok();
//! ```
//!
//! [honeycomb.io]: https://www.honeycomb.io/

mod honeycomb;
mod visitor;
Expand All @@ -22,7 +84,7 @@ pub use tracing_distributed::{TelemetryLayer, TraceCtxError};

/// Register the current span as the local root of a distributed trace.
///
/// Specialized to the honeycomb.io-specific SpanId and TraceId provided by this crate.
/// Specialized to the honeycomb.io-specific `SpanId` and `TraceId` provided by this crate.
pub fn register_dist_tracing_root(
trace_id: TraceId,
remote_parent_span: Option<SpanId>,
Expand All @@ -35,14 +97,14 @@ pub fn register_dist_tracing_root(
/// Returns the `TraceId`, if any, that the current span is associated with along with
/// the `SpanId` belonging to the current span.
///
/// Specialized to the honeycomb.io-specific SpanId and TraceId provided by this crate.
/// Specialized to the honeycomb.io-specific `SpanId` and `TraceId` provided by this crate.
pub fn current_dist_trace_ctx() -> Result<(TraceId, SpanId), TraceCtxError> {
tracing_distributed::current_dist_trace_ctx()
}

/// Construct a TelemetryLayer that does not publish telemetry to any backend.
/// Construct a `TelemetryLayer` that does not publish telemetry to any backend.
///
/// Specialized to the honeycomb.io-specific SpanId and TraceId provided by this crate.
/// Specialized to the honeycomb.io-specific `SpanId` and `TraceId` provided by this crate.
pub fn new_blackhole_telemetry_layer(
) -> TelemetryLayer<tracing_distributed::BlackholeTelemetry<SpanId, TraceId>, SpanId, TraceId> {
let instance_id: u64 = 0;
Expand All @@ -56,9 +118,9 @@ pub fn new_blackhole_telemetry_layer(
)
}

/// Construct a TelemetryLayer that publishes telemetry to honeycomb.io using the provided honeycomb config.
/// Construct a `TelemetryLayer` that publishes telemetry to honeycomb.io using the provided honeycomb config.
///
/// Specialized to the honeycomb.io-specific SpanId and TraceId provided by this crate.
/// Specialized to the honeycomb.io-specific `SpanId` and `TraceId` provided by this crate.
pub fn new_honeycomb_telemetry_layer(
service_name: &'static str,
honeycomb_config: libhoney::Config,