A Rust crate to interface with the Grafana Cloud API for sending metrics and logs.
- Metrics: Implements the `metrics` crate's `Recorder` trait to push metrics to Grafana Cloud using the Prometheus remote write protocol.
- Logs: Push logs to Grafana Loki with support for labels and structured logging.
- Aggregating Logger: Logs are buffered and sent in batches via a background thread for high performance.
- Global Logger: Like the `metrics` crate, provides a global logger with low overhead after initialization.
- Minimal dependencies: Uses only essential crates for HTTP, serialization, and compression.
Add this to your `Cargo.toml`:

```toml
[dependencies]
grafana = { path = "." }
metrics = "0.24"
ureq = "3"
```

```rust
use grafana::{GrafanaClient, GrafanaConfig};
use metrics::{counter, gauge, histogram};
use std::time::Duration;

fn main() {
    // Configure the client
    let config = GrafanaConfig::new(
        "https://prometheus-prod-01-prod-us-east-0.grafana.net/api/prom/push",
        "https://logs-prod-us-central1.grafana.net/loki/api/v1/push",
        123456, // Your Grafana Cloud user ID
        "your-api-key",
    )
    .with_flush_interval(Duration::from_secs(15)) // Flush metrics and logs every 15 seconds
    .with_label("service", "my-app")
    .with_label("env", "production");

    // Create and install the client (sets up both the global metrics recorder AND the global logger)
    let agent = ureq::Agent::new_with_defaults();
    let client = GrafanaClient::new(config, agent).expect("Failed to create client");
    let installed = client.install().expect("Failed to install recorder");

    // Start a single background thread that handles both metrics and logs
    let push_handle = installed.start_push_thread();

    // Now you can use the metrics macros anywhere in your application
    counter!("requests_total", "method" => "GET").increment(1);
    gauge!("temperature").set(23.5);
    histogram!("request_duration_seconds").record(0.025);

    // Send logs using the global logger macros - works from anywhere!
    // These are very fast - they just queue the log to a buffer
    grafana::log_info!("Application started", "version" => "1.0.0");
    grafana::log_warn!("High memory usage", "percent" => "85");
    grafana::log_error!("Database connection failed", "host" => "localhost");

    // When shutting down, stop the push thread (performs a final push)
    push_handle.stop();
}
```

Similar to how the `metrics` crate provides global macros for recording metrics, this crate provides global logging macros with minimal overhead once the logger is set.
The global logger uses `OnceLock` for thread-safe, lock-free access after initialization, together with an `AggregatingLokiClient` that buffers logs:
- Before logger is set: Logging is a no-op (returns immediately)
- After logger is set: Single atomic read to access the logger, then a quick mutex lock to append to buffer
- Background thread: Periodically flushes the buffer to Grafana Loki
This design means logging calls are extremely fast - they never block on HTTP requests.
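The pattern described above can be sketched with only the standard library. This is a hypothetical simplification, not the crate's actual implementation: a `OnceLock` holds the logger, each log call does one atomic read plus a short mutex-guarded append, and a background thread would periodically drain the buffer.

```rust
use std::sync::{Mutex, OnceLock};

// Hypothetical stand-in for the crate's AggregatingLokiClient.
struct BufferedLogger {
    buffer: Mutex<Vec<String>>,
}

static LOGGER: OnceLock<BufferedLogger> = OnceLock::new();

fn set_logger() -> bool {
    // Succeeds only once; later calls leave the existing logger in place.
    LOGGER
        .set(BufferedLogger { buffer: Mutex::new(Vec::new()) })
        .is_ok()
}

fn log(msg: &str) {
    // Before the logger is set this is a no-op; afterwards it is a single
    // atomic read (OnceLock::get) plus a brief mutex lock to append.
    if let Some(logger) = LOGGER.get() {
        logger.buffer.lock().unwrap().push(msg.to_string());
    }
}

fn drain() -> Vec<String> {
    // Stand-in for what the background flush thread would do periodically.
    match LOGGER.get() {
        Some(logger) => std::mem::take(&mut *logger.buffer.lock().unwrap()),
        None => Vec::new(),
    }
}
```

No HTTP work ever happens on the logging path; the caller only touches the in-memory buffer.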
```rust
// Set up once at application startup
let config = GrafanaConfig::new(/* ... */);
let agent = ureq::Agent::new_with_defaults();
let client = GrafanaClient::new(config, agent)?;
let installed = client.install()?; // This sets both the global metrics recorder AND the global logger

// Start the background push thread (handles both metrics and logs)
let push_handle = installed.start_push_thread();

// Now use the global logging macros from anywhere in your application
grafana::log_trace!("Detailed trace info", "func" => "process");
grafana::log_debug!("Debug information", "request_id" => "abc123");
grafana::log_info!("Application event", "event" => "startup");
grafana::log_warn!("Warning message", "threshold" => "80%");
grafana::log_error!("Error occurred", "code" => "500");
grafana::log_fatal!("Fatal error", "reason" => "out_of_memory");

// On shutdown, stop the thread (flushes remaining data)
push_handle.stop();
```

If you want to set up the logger separately from metrics:
```rust
use grafana::{GrafanaConfig, AggregatingLokiClient, set_global_logger};
use std::sync::Arc;

let config = GrafanaConfig::new(/* ... */);
let loki = Arc::new(AggregatingLokiClient::new(config));
set_global_logger(Arc::clone(&loki)).expect("Logger already set");

// Start the background push thread
let handle = loki.start_push_thread();

// Now global logging works
grafana::log_info!("Ready to log!");

// On shutdown
handle.stop();
```

To check whether the global logger has been configured:

```rust
if grafana::logger_is_set() {
    println!("Global logger is configured");
}
```

Export your Grafana Cloud credentials and endpoints as environment variables:

```sh
export GRAFANA_USER_ID=123456
export GRAFANA_API_KEY=your-api-key
export GRAFANA_PROMETHEUS_URL=https://prometheus-prod-01-prod-us-east-0.grafana.net/api/prom/push
export GRAFANA_LOKI_URL=https://logs-prod-us-central1.grafana.net/loki/api/v1/push
```

| Option | Description | Default |
|---|---|---|
| `with_flush_interval(Duration)` | How often to push metrics and logs to Grafana | 15 seconds |
| `with_label(key, value)` | Add a default label to all metrics/logs | None |
| `with_timeout(Duration)` | HTTP request timeout | 30 seconds |
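A hedged sketch of how the environment variables above could be resolved into the values `GrafanaConfig::new` takes in the earlier examples. The `resolve_config` helper is hypothetical (not part of this crate); it accepts a lookup closure so the logic can be exercised without touching the process environment, but in real use you would pass `|k| std::env::var(k).ok()`.

```rust
// Hypothetical helper: gather (prometheus_url, loki_url, user_id, api_key)
// from the four GRAFANA_* environment variables listed above.
fn resolve_config(
    lookup: impl Fn(&str) -> Option<String>,
) -> Option<(String, String, u64, String)> {
    let prom = lookup("GRAFANA_PROMETHEUS_URL")?;
    let loki = lookup("GRAFANA_LOKI_URL")?;
    // The user ID is numeric in the examples, so parse it as u64.
    let user_id: u64 = lookup("GRAFANA_USER_ID")?.parse().ok()?;
    let api_key = lookup("GRAFANA_API_KEY")?;
    Some((prom, loki, user_id, api_key))
}
```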
This crate implements the `metrics` crate's `Recorder` trait, which means you can use the standard metrics macros:

```rust
use metrics::{counter, gauge, histogram};

// Counters - monotonically increasing values
counter!("http_requests_total", "method" => "GET", "status" => "200").increment(1);

// Gauges - values that can go up and down
gauge!("active_connections").set(42.0);
gauge!("queue_size").increment(1.0);
gauge!("queue_size").decrement(1.0);

// Histograms - distributions of values
histogram!("request_duration_seconds", "endpoint" => "/api").record(0.025);
```

Metrics are sent to Grafana Cloud using the Prometheus remote write protocol:
- Protobuf encoding
- Snappy compression
- Proper histogram bucket support (`_sum`, `_count`, `_bucket`)
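The full remote write encoding needs protobuf and Snappy, but the histogram naming convention from the list above is easy to illustrate on its own. This is a standalone sketch of the standard Prometheus convention, not code from this crate: one `_bucket` series per upper bound (carried in the `le` label, plus a `+Inf` bucket), and companion `_sum` and `_count` series.

```rust
// Sketch: expand one histogram into the Prometheus series names it produces.
fn histogram_series(name: &str, bounds: &[f64]) -> Vec<String> {
    let mut series: Vec<String> = bounds
        .iter()
        .map(|le| format!("{}_bucket{{le=\"{}\"}}", name, le))
        .collect();
    // The catch-all bucket counts every observation.
    series.push(format!("{}_bucket{{le=\"+Inf\"}}", name));
    series.push(format!("{}_sum", name));
    series.push(format!("{}_count", name));
    series
}
```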
Send logs to Grafana Loki using either the global logger or the `AggregatingLokiClient` directly.

After calling `client.install()`, use the global macros from anywhere:

```rust
// Simple messages - these just queue to a buffer, very fast!
grafana::log_info!("User logged in");
grafana::log_error!("Connection failed");

// With labels
grafana::log_info!("Request processed", "method" => "GET", "path" => "/api/users");
grafana::log_error!("Database error", "table" => "users", "operation" => "insert");
```

If you need more control, use the `AggregatingLokiClient` directly:
```rust
use grafana::{LogLevel, LogEntry, LogBuilder};
use std::collections::HashMap;

// Log entries are buffered
installed.loki().log(LogLevel::Info, "Application started");

// Log with labels
let mut labels = HashMap::new();
labels.insert("component".to_string(), "auth".to_string());
installed.loki().log_with_labels(LogLevel::Error, "Auth failed", labels);

// Using the log! macro to create entries
let entry = grafana::log!("warn", "High memory usage",
    "component" => "memory",
    "threshold" => "80%"
);
installed.loki().push(entry);

// Push a batch of entries
let entries = vec![
    grafana::log!("info", "Batch log 1"),
    grafana::log!("info", "Batch log 2"),
];
installed.loki().push_batch(entries);

// Using LogBuilder
let entry = LogBuilder::new(LogLevel::Debug, "Processing request")
    .label("request_id", "abc123")
    .label("user_id", "456")
    .build();
installed.loki().push(entry);

// Manual flush if needed
installed.push_logs().unwrap();
```

The available log levels are `trace`, `debug`, `info`, `warn`, `error`, and `fatal`.
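When the buffer is flushed, the entries end up as a JSON body POSTed to Loki's `/loki/api/v1/push` endpoint. The sketch below shows that wire format's shape (one stream per label set, with `[nanosecond-timestamp, line]` pairs); it is an illustration using plain string building, not this crate's implementation, which would use `serde_json` with proper escaping.

```rust
// Sketch of the Loki push payload: {"streams":[{"stream":{...},"values":[...]}]}
fn loki_payload(labels: &[(&str, &str)], entries: &[(u128, &str)]) -> String {
    // Label set shared by all entries in this stream.
    let stream = labels
        .iter()
        .map(|(k, v)| format!("\"{}\":\"{}\"", k, v))
        .collect::<Vec<_>>()
        .join(",");
    // Each entry is a ["<ns timestamp>", "<log line>"] pair.
    let values = entries
        .iter()
        .map(|(ts, line)| format!("[\"{}\",\"{}\"]", ts, line))
        .collect::<Vec<_>>()
        .join(",");
    format!(
        "{{\"streams\":[{{\"stream\":{{{}}},\"values\":[{}]}}]}}",
        stream, values
    )
}
```

Batching many entries into one stream is what makes the aggregating client cheap: one HTTP request per flush interval instead of one per log call.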
For best performance, start a background thread that periodically pushes both metrics and logs:
```rust
let installed = client.install().expect("Failed to install");

// Start background push thread (handles both metrics and logs)
let push_handle = installed.start_push_thread();

// ... your application runs ...
// Both metrics and logs are pushed every `flush_interval` (default 15s)

// On shutdown, stop the thread (performs final push of both)
push_handle.stop();
```

You can also push manually:

```rust
installed.push_metrics().unwrap(); // Push metrics immediately
installed.push_logs().unwrap();    // Flush log buffer immediately
installed.push().unwrap();         // Push both metrics and logs
```

To find your Grafana Cloud credentials:

- Log in to Grafana Cloud
- Go to your stack's Details page
- Find:
  - User ID: Listed as "Instance ID" or in the Prometheus/Loki details
  - API Key: Create one under Security → API Keys
  - URLs: Listed in the Prometheus and Loki connection details
This crate aims to use minimal dependencies:
- `metrics` - The metrics facade
- `ureq` - Minimal HTTP client
- `serde` / `serde_json` - Serialization
- `prost` - Protobuf encoding
- `snap` - Snappy compression
- `base64` - Authentication encoding
- `parking_lot` - Efficient locks
MIT