A comprehensive demonstration of Apache Iggy message streaming with Axum.
This application showcases how to build a production-ready message streaming service using:
- Apache Iggy 0.6.0-edge: High-performance message streaming with io_uring shared-nothing architecture
- Iggy SDK 0.8.0-edge.6: Latest edge SDK for compatibility with edge server features
- Axum 0.8: Ergonomic and modular Rust web framework
- Tokio: Async runtime for Rust
- True batch message sending (single network call for multiple messages)
- Graceful shutdown with SIGTERM/SIGINT handling
- Input validation and sanitization for resource names
- Comprehensive error handling with `Result` types (no `unwrap()`/`expect()` in production code)
- Zero clippy warnings - strict lints enforced, no `#[allow(...)]` in production code
- Connection resilience with automatic reconnection and exponential backoff
- Circuit breaker pattern for fail-fast during outages
- Rate limiting with token bucket algorithm (configurable RPS and burst)
- API key authentication with constant-time comparison (timing attack resistant)
- Request ID propagation for distributed tracing
- Request timeout propagation via `X-Request-Timeout` header
- Configurable CORS with origin whitelist support
- Background stats caching to avoid expensive queries on each request
- Structured concurrency with proper task lifecycle management
- Background health checks for early connection issue detection
- Prometheus metrics export for observability
┌─────────────────────────────────────────────────────────────┐
│ Axum HTTP Server │
│ (src/main.rs) │
├─────────────────────────────────────────────────────────────┤
│ Middleware Stack (src/middleware/) │
│ - rate_limit.rs: Token bucket rate limiting │
│ - auth.rs: API key authentication │
│ - timeout.rs: Request timeout propagation │
│ - request_id.rs: Request ID propagation │
│ + tower_http: Tracing, CORS │
├─────────────────────────────────────────────────────────────┤
│ Handlers (src/handlers/) │
│ - health.rs: Health/readiness checks, stats │
│ - messages.rs: Send/poll messages │
│ - streams.rs: Stream CRUD operations │
│ - topics.rs: Topic CRUD operations │
├─────────────────────────────────────────────────────────────┤
│ Services (src/services/) │
│ - producer.rs: Message publishing logic │
│ - consumer.rs: Message consumption logic │
├─────────────────────────────────────────────────────────────┤
│ IggyClientWrapper (src/iggy_client.rs) │
│ High-level wrapper with automatic reconnection │
│ + Circuit breaker for fail-fast during outages │
│ + PollParams builder for cleaner polling API │
├─────────────────────────────────────────────────────────────┤
│ Background Tasks (managed by TaskTracker) │
│ - Stats refresh task (periodic cache update) │
│ - Health check task (connection monitoring) │
├─────────────────────────────────────────────────────────────┤
│ Apache Iggy Server (TCP/QUIC/HTTP) │
│ Persistent message streaming │
└─────────────────────────────────────────────────────────────┘
.github/
├── workflows/
│ ├── ci.yml # Main CI (fmt, clippy, tests, coverage, audit)
│ ├── pr.yml # PR checks (size, commits, docs, semver)
│ ├── release.yml # Release builds and publishing
│ └── extended-tests.yml # Weekly stress/memory/benchmark tests
├── dependabot.yml # Automated dependency updates
└── pull_request_template.md
.commitlintrc.json # Conventional commits configuration
observability/
├── prometheus/
│ └── prometheus.yml # Prometheus scrape configuration
└── grafana/
└── provisioning/
├── datasources/
│ └── datasources.yml # Prometheus datasource
└── dashboards/
├── dashboards.yml # Dashboard provisioning
└── iggy-overview.json # Pre-built Iggy dashboard
src/
├── main.rs # Application entry point
├── lib.rs # Library exports
├── config.rs # Configuration from environment
├── error.rs # Error types with HTTP status codes
├── metrics.rs # Prometheus metrics export
├── state.rs # Shared application state with stats caching
├── routes.rs # Route definitions and middleware stack
├── iggy_client/ # Iggy SDK wrapper module
│ ├── mod.rs # Client wrapper with auto-reconnection
│ ├── circuit_breaker.rs # Circuit breaker pattern implementation
│ ├── connection.rs # Connection state management
│ ├── helpers.rs # Utility functions
│ ├── params.rs # PollParams builder
│ └── scopeguard.rs # Scope guard utilities
├── validation.rs # Input validation utilities
├── middleware/
│ ├── mod.rs # Middleware exports
│ ├── ip.rs # Client IP extraction (shared by rate_limit and auth)
│ ├── rate_limit.rs # Token bucket rate limiting (Governor)
│ ├── auth.rs # API key authentication
│ ├── timeout.rs # Request timeout propagation
│ └── request_id.rs # Request ID propagation
├── models/
│ ├── mod.rs # Model exports
│ ├── event.rs # Domain event types (uses rust_decimal for money)
│ └── api.rs # API request/response types
├── services/
│ ├── mod.rs # Service exports
│ ├── producer.rs # Message producer service
│ └── consumer.rs # Message consumer service
└── handlers/
├── mod.rs # Handler exports
├── health.rs # Health endpoints
├── messages.rs # Message endpoints
├── streams.rs # Stream management
└── topics.rs # Topic management
tests/
├── integration_tests.rs # End-to-end API tests with testcontainers
│ ├── Standard fixture tests (basic CRUD, messages)
│ └── Security boundary tests (auth, rate limiting)
└── model_tests.rs # Unit tests for models
fuzz/
├── Cargo.toml # Fuzz testing configuration
└── fuzz_targets/
└── fuzz_validation.rs # Validation function fuzz tests
deny.toml # License and security policy for cargo-deny
docs/ # Documentation and guides
├── README.md # Guide index and navigation
├── guide.md # Event-driven architecture guide
├── partitioning-guide.md # Partitioning strategies
├── durable-storage-guide.md # Storage and durability configuration
└── structured-concurrency.md # Task lifecycle management
See the docs/ directory for comprehensive guides covering event-driven architecture, partitioning strategies, durable storage configuration, and more.
- `GET /health` - Health check with Iggy connection status
- `GET /ready` - Kubernetes readiness probe
- `GET /stats` - Service statistics
- `POST /messages` - Send a single message
- `GET /messages` - Poll messages
- `POST /messages/batch` - Send multiple messages
- `POST /streams/{stream}/topics/{topic}/messages` - Send to specific topic
- `GET /streams/{stream}/topics/{topic}/messages` - Poll from specific topic
- `GET /streams` - List all streams
- `POST /streams` - Create a new stream
- `GET /streams/{name}` - Get stream details
- `DELETE /streams/{name}` - Delete a stream
- `GET /streams/{stream}/topics` - List topics in stream
- `POST /streams/{stream}/topics` - Create a topic
- `GET /streams/{stream}/topics/{topic}` - Get topic details
- `DELETE /streams/{stream}/topics/{topic}` - Delete a topic
Environment variables (see .env.example):
| Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `3000` | Server port |
| `RUST_LOG` | `info` | Log level |
| Variable | Default | Description |
|---|---|---|
| `IGGY_CONNECTION_STRING` | `iggy://iggy:iggy@localhost:8090` | Iggy connection string |
| `IGGY_STREAM` | `sample-stream` | Default stream name |
| `IGGY_TOPIC` | `events` | Default topic name |
| `IGGY_PARTITIONS` | `3` | Partitions for default topic |
| Variable | Default | Description |
|---|---|---|
| `MAX_RECONNECT_ATTEMPTS` | `0` | Max reconnect attempts (0 = infinite) |
| `RECONNECT_BASE_DELAY_MS` | `1000` | Base delay for exponential backoff |
| `RECONNECT_MAX_DELAY_MS` | `30000` | Max delay between reconnects |
| `HEALTH_CHECK_INTERVAL_SECS` | `30` | Connection health check interval |
| `OPERATION_TIMEOUT_SECS` | `30` | Timeout for Iggy operations |
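The backoff settings combine in the usual way: the delay doubles per attempt starting from `RECONNECT_BASE_DELAY_MS`, capped at `RECONNECT_MAX_DELAY_MS`. A minimal sketch of that calculation (the function name is illustrative, not taken from the source):

```rust
/// Exponential backoff: base * 2^attempt, capped at max_delay_ms.
/// Saturating arithmetic avoids overflow for large attempt counts.
fn reconnect_delay_ms(attempt: u32, base_delay_ms: u64, max_delay_ms: u64) -> u64 {
    base_delay_ms
        .saturating_mul(1u64 << attempt.min(63))
        .min(max_delay_ms)
}
```

With the defaults (`1000`/`30000`), attempts yield 1s, 2s, 4s, ... until the 30s cap is reached.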
| Variable | Default | Description |
|---|---|---|
| `CIRCUIT_BREAKER_FAILURE_THRESHOLD` | `5` | Failures before opening circuit |
| `CIRCUIT_BREAKER_SUCCESS_THRESHOLD` | `2` | Successes in half-open to close |
| `CIRCUIT_BREAKER_OPEN_DURATION_SECS` | `30` | How long circuit stays open |
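These three knobs drive a conventional closed → open → half-open state machine. A simplified, time-free sketch of those transitions (the real implementation lives in src/iggy_client/circuit_breaker.rs and also tracks the open-duration timer; names here are illustrative):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum CircuitState { Closed, Open, HalfOpen }

struct CircuitBreaker {
    state: CircuitState,
    failures: u32,
    successes: u32,
    failure_threshold: u32, // CIRCUIT_BREAKER_FAILURE_THRESHOLD
    success_threshold: u32, // CIRCUIT_BREAKER_SUCCESS_THRESHOLD
}

impl CircuitBreaker {
    fn new(failure_threshold: u32, success_threshold: u32) -> Self {
        Self { state: CircuitState::Closed, failures: 0, successes: 0,
               failure_threshold, success_threshold }
    }

    fn on_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.failure_threshold {
            self.state = CircuitState::Open; // fail fast from now on
        }
    }

    /// Called once CIRCUIT_BREAKER_OPEN_DURATION_SECS elapses: allow trial requests.
    fn on_open_timeout(&mut self) {
        self.state = CircuitState::HalfOpen;
        self.successes = 0;
    }

    fn on_success(&mut self) {
        if self.state == CircuitState::HalfOpen {
            self.successes += 1;
            if self.successes >= self.success_threshold {
                self.state = CircuitState::Closed;
                self.failures = 0;
            }
        } else {
            self.failures = 0; // healthy traffic resets the failure count
        }
    }
}
```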
| Variable | Default | Description |
|---|---|---|
| `RATE_LIMIT_RPS` | `100` | Requests per second (0 = disabled) |
| `RATE_LIMIT_BURST` | `50` | Burst capacity above RPS limit |
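`RATE_LIMIT_RPS` and `RATE_LIMIT_BURST` feed a token bucket: requests drain tokens, tokens refill at the RPS rate, and the burst setting adds headroom above it. A toy illustration of those semantics (the service itself uses the Governor crate's GCRA, not this code):

```rust
use std::time::Duration;

/// Minimal token bucket: capacity = rps + burst, refilled at `rps` tokens/second.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(rps: u32, burst: u32) -> Self {
        let capacity = (rps + burst) as f64;
        Self { capacity, tokens: capacity, refill_per_sec: rps as f64 }
    }

    /// Advance time by `elapsed` and try to take one token; false means 429.
    fn try_acquire(&mut self, elapsed: Duration) -> bool {
        self.tokens = (self.tokens + elapsed.as_secs_f64() * self.refill_per_sec)
            .min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```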
| Variable | Default | Description |
|---|---|---|
| `BATCH_MAX_SIZE` | `1000` | Max messages per batch send |
| `POLL_MAX_COUNT` | `100` | Max messages per poll |
| `MAX_REQUEST_BODY_SIZE` | `10485760` | Max request body size in bytes (10MB) |
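A client that wants to send more than `BATCH_MAX_SIZE` messages must split them into multiple batch calls. A small client-side sketch of that chunking (helper name hypothetical, not part of this service's API):

```rust
/// Split a message set into chunks of at most `batch_max_size`,
/// so each chunk fits one POST /messages/batch request.
fn into_batches<T>(messages: Vec<T>, batch_max_size: usize) -> Vec<Vec<T>> {
    let mut batches = Vec::new();
    let mut iter = messages.into_iter().peekable();
    while iter.peek().is_some() {
        batches.push(iter.by_ref().take(batch_max_size).collect());
    }
    batches
}
```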
| Variable | Default | Description |
|---|---|---|
| `API_KEY` | (none) | API key for authentication (disabled if not set) |
| `AUTH_BYPASS_PATHS` | `/health,/ready` | Comma-separated paths that bypass auth |
| `CORS_ALLOWED_ORIGINS` | `*` | Comma-separated allowed origins |
| `TRUSTED_PROXIES` | (none) | Comma-separated CIDR ranges for trusted reverse proxies |
The TRUSTED_PROXIES variable configures IP spoofing mitigation for the rate limiter.
When set, X-Forwarded-For headers are validated against trusted proxy networks.
Format: Comma-separated CIDR notation
Common values:
- Private networks: `10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`
- Kubernetes pod network: `10.0.0.0/8`
- Docker bridge: `172.17.0.0/16`
- Localhost: `127.0.0.0/8`
Example:

```bash
# Trust all RFC 1918 private networks
TRUSTED_PROXIES="10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"

# Trust only specific proxy IPs
TRUSTED_PROXIES="10.0.1.5,10.0.1.6"
```

When empty (default), all X-Forwarded-For headers are trusted. This is not recommended for production.
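The core of the trusted-proxy check is testing whether the peer address falls inside one of the configured CIDR ranges. A rough IPv4-only sketch (a real implementation would also handle IPv6 and reject malformed entries; function name is illustrative):

```rust
use std::net::Ipv4Addr;

/// Check whether `ip` falls inside a CIDR range like "10.0.0.0/8".
/// A bare address such as "10.0.1.5" is treated as a /32.
fn in_cidr(ip: Ipv4Addr, cidr: &str) -> bool {
    let (net, bits) = match cidr.split_once('/') {
        Some((n, b)) => (n, b.parse::<u32>().unwrap_or(32)),
        None => (cidr, 32),
    };
    let Ok(net) = net.parse::<Ipv4Addr>() else { return false };
    // Build the network mask; /0 matches everything.
    let mask = if bits == 0 { 0 } else { u32::MAX << (32 - bits.min(32)) };
    (u32::from(ip) & mask) == (u32::from(net) & mask)
}
```

Only when the direct peer matches a trusted range would the service honor its X-Forwarded-For header.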
| Variable | Default | Description |
|---|---|---|
| `STATS_CACHE_TTL_SECS` | `5` | Stats cache refresh interval |
| `METRICS_PORT` | `9090` | Prometheus metrics port (0 = disabled) |
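`STATS_CACHE_TTL_SECS` controls how long a cached stats snapshot is considered fresh. A minimal sketch of the freshness check (type and field names are hypothetical; in the actual design a background task refreshes the cache on this interval rather than checking staleness on read):

```rust
use std::time::{Duration, Instant};

/// A cached value plus the instant it was last refreshed.
struct CachedStats<T> {
    value: T,
    refreshed_at: Instant,
    ttl: Duration, // STATS_CACHE_TTL_SECS
}

impl<T> CachedStats<T> {
    /// Fresh while less than `ttl` has elapsed since the last refresh.
    fn is_fresh(&self, now: Instant) -> bool {
        now.duration_since(self.refreshed_at) < self.ttl
    }
}
```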
Background task logs use tiered log levels to reduce noise:
| Level | What's Logged |
|---|---|
| `info` | Startup, shutdown, significant events |
| `debug` | Task lifecycle (cancellation, shutdown) |
| `trace` | Routine success ("Stats cache refreshed", "Health check OK") |
Recommended settings:

```bash
# Production (quiet)
RUST_LOG=info

# Development (see task events)
RUST_LOG=debug

# Debugging background tasks
RUST_LOG=trace
```

The project includes a complete Grafana-based observability stack for monitoring Iggy and the sample application.
| Service | Port | Description |
|---|---|---|
| Iggy | 3000 | Message streaming server (also serves /metrics) |
| Iggy Web UI | 3050 | Dashboard for streams, topics, messages, and users |
| Prometheus | 9090 | Metrics collection and storage |
| Grafana | 3001 | Visualization and dashboards |
```bash
# Start the full stack (Iggy + Prometheus + Grafana)
docker-compose up -d

# Access the services:
# - Iggy HTTP API: http://localhost:3000
# - Iggy Web UI:   http://localhost:3050 (iggy/iggy)
# - Prometheus:    http://localhost:9090
# - Grafana:       http://localhost:3001 (admin/admin)
```

The Iggy Web UI provides a comprehensive dashboard for managing the Iggy server:
- Streams & Topics: Create, browse, and delete streams and topics
- Messages: Browse and inspect messages in topics
- Users: Manage users and permissions
- Server Health: Monitor server status and connections
Access at http://localhost:3050 with credentials iggy/iggy.
Pre-configured dashboards are automatically provisioned:
- Iggy Overview: Server status, request rates, message throughput, latency percentiles
Iggy exposes Prometheus-compatible metrics at /metrics:
```bash
# View raw metrics
curl http://localhost:3000/metrics
```

To customize dashboards:

- Log into Grafana at http://localhost:3001
- Navigate to Dashboards → Iggy folder
- Edit existing dashboards or create new ones
- Export JSON and save to `observability/grafana/provisioning/dashboards/`
Iggy supports OpenTelemetry for distributed tracing. Add to docker-compose:
```yaml
environment:
  - IGGY_TELEMETRY_ENABLED=true
  - IGGY_TELEMETRY_SERVICE_NAME=iggy
  - IGGY_TELEMETRY_LOGS_TRANSPORT=grpc
  - IGGY_TELEMETRY_LOGS_ENDPOINT=http://otel-collector:4317
  - IGGY_TELEMETRY_TRACES_TRANSPORT=grpc
  - IGGY_TELEMETRY_TRACES_ENDPOINT=http://otel-collector:4317
```

Prerequisites:

- Rust 1.90+ (edition 2024, MSRV: 1.90.0)
- Docker & Docker Compose
1. Start Iggy server (with observability):

   ```bash
   docker-compose up -d iggy prometheus grafana
   ```

2. Run the application:

   ```bash
   cargo run
   ```

3. Test the API:

   ```bash
   # Health check
   curl http://localhost:3000/health

   # Send a message
   curl -X POST http://localhost:3000/messages \
     -H "Content-Type: application/json" \
     -d '{
       "event": {
         "id": "550e8400-e29b-41d4-a716-446655440000",
         "event_type": "user.created",
         "timestamp": "2024-01-15T10:30:00Z",
         "payload": {
           "type": "User",
           "data": {
             "action": "Created",
             "user_id": "550e8400-e29b-41d4-a716-446655440001",
             "email": "test@example.com",
             "name": "Test User"
           }
         }
       }
     }'

   # Poll messages (partition_id is 0-indexed, 0 = first partition)
   curl "http://localhost:3000/messages?partition_id=0&count=10"
   ```
```bash
# Unit tests (130 tests)
cargo test --lib

# Integration tests (24 tests, requires Docker for testcontainers)
cargo test --test integration_tests

# Model tests (20 tests)
cargo test --test model_tests

# All tests
cargo test

# Security-specific tests
cargo test --test integration_tests -- test_auth
cargo test --test integration_tests -- test_rate_limit
```

Fuzz tests are available in the fuzz/ directory for validation functions:
```bash
# Install cargo-fuzz (requires nightly)
cargo +nightly install cargo-fuzz

# Run the validation fuzz target
cargo +nightly fuzz run fuzz_validation

# Run with a time limit (e.g., 60 seconds)
cargo +nightly fuzz run fuzz_validation -- -max_total_time=60

# View coverage
cargo +nightly fuzz coverage fuzz_validation
```

The fuzz tests verify that validation functions never panic on any input.
```bash
# Build release binary
cargo build --release

# Or use Docker
docker-compose up -d
```

Events follow a structured format:
```json
{
  "id": "uuid",
  "event_type": "domain.action",
  "timestamp": "ISO8601",
  "payload": {
    "type": "User|Order|Generic",
    "data": { ... }
  },
  "correlation_id": "optional-uuid",
  "source": "optional-service-name"
}
```

- User Events: Created, Updated, Deleted, LoggedIn
- Order Events: Created, Updated, Cancelled, Shipped
- Generic Events: Any JSON payload
All errors return structured JSON responses:
```json
{
  "error": "error_type",
  "message": "Human-readable message",
  "details": "Optional additional context"
}
```

Error types and HTTP status codes:

- `connection_failed` (503): Initial connection to Iggy server failed
- `disconnected` (503): Lost connection during operation
- `connection_reset` (503): Connection was reset by peer
- `circuit_open` (503): Circuit breaker is open, failing fast
- `stream_error` (500): Stream operation failed
- `topic_error` (500): Topic operation failed
- `send_error` (500): Message send failed
- `poll_error` (500): Message poll failed
- `not_found` (404): Resource not found
- `bad_request` (400): Invalid request data
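That mapping boils down to a single match over the error variants. A sketch of the idea (variant and function names here are assumed to mirror src/error.rs, not copied from it):

```rust
/// Illustrative error kinds matching the documented error types.
#[derive(Debug)]
enum ErrorKind {
    ConnectionFailed, Disconnected, ConnectionReset, CircuitOpen,
    StreamError, TopicError, SendError, PollError,
    NotFound, BadRequest,
}

/// Map each error kind to its documented HTTP status code.
fn status_code(kind: &ErrorKind) -> u16 {
    use ErrorKind::*;
    match kind {
        ConnectionFailed | Disconnected | ConnectionReset | CircuitOpen => 503,
        StreamError | TopicError | SendError | PollError => 500,
        NotFound => 404,
        BadRequest => 400,
    }
}
```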
RateLimitError is returned during router construction if rate limiting configuration is invalid:

```rust
pub enum RateLimitError {
    ZeroRps, // RPS cannot be 0; use disabled() instead
}
```

The `build_router()` function returns `Result<Router, RateLimitError>` to propagate configuration errors at startup rather than panicking.
Connection-related errors use explicit enum variants for reliable reconnection logic:
```rust
// Pattern matching for connection errors (no string matching)
fn is_connection_error(error: &AppError) -> bool {
    matches!(
        error,
        AppError::ConnectionFailed(_)
            | AppError::Disconnected(_)
            | AppError::ConnectionReset(_)
    )
}
```

Performance tips:

- Use batch endpoints for high-throughput scenarios
- Configure appropriate partition count for parallelism
- Use partition keys for ordered processing within a partition
- Enable auto-commit for at-least-once delivery
Iggy uses 0-indexed partitions:
- A topic with `partitions: 3` has partitions `0`, `1`, and `2`
- When polling, `partition_id=0` refers to the first partition
- The poll query defaults to `partition_id=0` when not specified
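When sending with a partition key, ordering is preserved within whichever partition the key hashes to. An illustrative key-to-partition mapping (Iggy's server-side partitioner is not necessarily this function; it is shown only to make the hash-modulo idea concrete):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map a partition key to a 0-indexed partition id via hash % partition count.
/// Messages with the same key always land in the same partition.
fn partition_for_key(key: &str, partitions: u32) -> u32 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() % u64::from(partitions)) as u32
}
```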
Key dependencies (see Cargo.toml):
- `iggy 0.8.0`: Iggy Rust SDK
- `axum 0.8`: Web framework
- `tokio 1.48`: Async runtime
- `tokio-util 0.7`: Task tracking and cancellation tokens
- `serde 1.0`: Serialization
- `tracing 0.1`: Structured logging
- `thiserror 2.0`: Error handling
- `governor 0.8`: Rate limiting with token bucket algorithm
- `subtle 2.6`: Constant-time comparison for security
- `tower-http 0.6`: HTTP middleware (CORS, tracing, request ID)
- `rust_decimal 1.39`: Exact decimal arithmetic for monetary values
- `metrics 0.24`: Application metrics
- `metrics-exporter-prometheus 0.16`: Prometheus metrics export
- `testcontainers 0.26`: Integration testing with containerized Iggy
The application uses structured concurrency patterns for proper task lifecycle management.
Background tasks (stats refresh, health checks) are managed using:
- `tokio_util::task::TaskTracker`: Tracks all spawned tasks
- `tokio_util::sync::CancellationToken`: Signals tasks to stop
```rust
// In src/state.rs
pub struct AppState {
    // ...
    task_tracker: TaskTracker,
    cancellation_token: CancellationToken,
}

impl AppState {
    pub async fn shutdown(&self) {
        self.cancellation_token.cancel(); // Signal all tasks
        self.task_tracker.close();        // Prevent new spawns
        self.task_tracker.wait().await;   // Wait for completion
    }
}
```

Graceful shutdown sequence:

1. SIGTERM/SIGINT received
2. HTTP server stops accepting connections
3. In-flight requests complete
4. state.shutdown() called
5. CancellationToken signals all background tasks
6. TaskTracker waits for tasks to complete
7. Process exits cleanly
Uses tokio::sync::Notify instead of busy-wait for efficient reconnection:
```rust
// In src/iggy_client.rs - ConnectionState
struct ConnectionState {
    reconnect_complete: Notify, // Efficient wake-up
    // ...
}

// Waiting tasks sleep efficiently:
async fn wait_for_reconnection(&self) {
    if self.is_reconnecting() {
        self.reconnect_complete.notified().await;
    }
}
```

The `PollParams` struct provides a cleaner API for message polling:
```rust
use iggy_sample::PollParams;

// Builder pattern for poll parameters
let params = PollParams::new(1, 1) // partition_id, consumer_id
    .with_count(50)
    .with_offset(100)
    .with_auto_commit(true);

// Use with the cleaner API
let messages = client.poll_with_params("stream", "topic", params).await?;
```

Request flow (applied in order):
Request → Rate Limit → Auth → Timeout → Request ID → Tracing → CORS → Handler
- Shared IP extraction logic used by both rate limiting and authentication
- Header priority: `X-Forwarded-For` (first IP) → `X-Real-IP` → "unknown" fallback
- Uses `Cow<'static, str>` for zero-allocation on the "unknown" fallback path
- `#[inline]` hints on hot paths for potential inlining
- See module-level docs for security warnings about IP spoofing
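The priority order above can be sketched as follows (simplified signature for illustration; the real `extract_client_ip` works on HTTP header maps rather than plain options):

```rust
use std::borrow::Cow;

/// Pick the client IP: first entry of X-Forwarded-For, then X-Real-IP,
/// then a borrowed "unknown" so the fallback path allocates nothing.
fn client_ip<'a>(forwarded_for: Option<&'a str>, real_ip: Option<&'a str>) -> Cow<'a, str> {
    if let Some(xff) = forwarded_for {
        if let Some(first) = xff.split(',').next() {
            let first = first.trim();
            if !first.is_empty() {
                return Cow::Borrowed(first);
            }
        }
    }
    match real_ip {
        Some(ip) if !ip.trim().is_empty() => Cow::Borrowed(ip.trim()),
        _ => Cow::Borrowed("unknown"),
    }
}
```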
- Token bucket algorithm via Governor crate (GCRA)
- Configurable RPS and burst capacity
- Per-IP rate limiting via shared `extract_client_ip_with_validation()` function
- Returns `429 Too Many Requests` with a `Retry-After` header
- Fallible construction: `RateLimitLayer::new()` returns `Result<Self, RateLimitError>`
- Constant-time comparison to prevent timing attacks
- Per-IP brute force protection via shared `extract_client_ip()` function
- Accepts key via `X-API-Key` header or `api_key` query parameter
- Bypasses `/health` and `/ready` for health checks (exact path matching)
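The core idea of a constant-time comparison is to inspect every byte regardless of where the first mismatch occurs, so response timing reveals nothing about the key prefix. A plain-Rust sketch of the technique (the project itself uses the `subtle` crate, which additionally guards against compiler optimizations reintroducing early exits):

```rust
/// Compare two byte slices without short-circuiting on the first mismatch.
/// Differences accumulate into `diff`; only at the end is the verdict read.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is not secret here
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```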
- Clients can specify an `X-Request-Timeout: <milliseconds>` header
- Bounded: 100ms minimum, 5 minutes maximum
- Stored in request extensions for handler use
- `RequestTimeoutExt` trait for easy extraction in handlers
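The bounding rule can be sketched as a parse-and-clamp (helper name hypothetical; the real middleware reads the header from the request):

```rust
use std::time::Duration;

/// Parse an X-Request-Timeout header value (milliseconds) and clamp it
/// to the documented bounds: 100ms minimum, 5 minutes maximum.
fn parse_request_timeout(header: &str) -> Option<Duration> {
    let ms: u64 = header.trim().parse().ok()?;
    Some(Duration::from_millis(ms.clamp(100, 5 * 60 * 1000)))
}
```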
Both rate limiting and authentication brute force protection use client IP addresses
extracted from X-Forwarded-For or X-Real-IP headers. These headers can be
spoofed by clients if the service is directly accessible.
- Deploy behind a trusted reverse proxy (nginx, HAProxy, cloud LB, Kubernetes ingress)
- Block direct access to this service from the internet
- Configure proxy to overwrite (not append) client IP headers:
```nginx
# nginx example - overwrites any client-provided header
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
```

For multi-hop scenarios:
```nginx
# Trust only your proxy network
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive off;
```

Without proper proxy configuration, attackers can:
- Bypass rate limiting by rotating spoofed IP addresses
- Bypass brute force protection by rotating IPs during attacks
- Frame innocent IPs for abuse or lockout
- Exhaust quotas for legitimate users (DoS attack)
When running in containers:
- Use an Ingress controller (nginx-ingress, Traefik, etc.)
- Configure the ingress to set `X-Real-IP` from the client connection
- Ensure the service is only accessible via the ingress (ClusterIP, not NodePort/LoadBalancer)
- Generates UUIDv4 for requests without `X-Request-Id`
- Propagates existing IDs for distributed tracing
- Adds ID to response headers
GitHub Actions workflows provide automated quality assurance:
- Formatting: `cargo fmt --check`
- Linting: `cargo clippy -- -D warnings`
- Tests: Matrix across Ubuntu/macOS/Windows × stable/beta/MSRV
- Integration tests: With Docker for testcontainers
- Coverage: Uploaded to Codecov
- Documentation: Build with `-D warnings`
- Security audit: `cargo-audit` for vulnerabilities
- License check: `cargo-deny` for license compliance
- Scheduled runs: Weekly Monday 2:00 AM UTC to catch dependency issues
- PR size limits (warning >500 lines, error >1000)
- Conventional commit format enforcement
- Semver breaking change detection
- Code complexity analysis
- Coverage diff reporting
Triggered on version tags (v*.*.*):
- Multi-platform builds (Linux x86/ARM, macOS x86/ARM, Windows)
- GitHub Release with changelog
- Documentation deployment to GitHub Pages
- Optional crates.io publishing
Weekly scheduled runs:
- Performance benchmarks
- Stress tests with real Iggy server
- Valgrind memory leak detection
- Miri undefined behavior detection
- Feature matrix testing
- Documentation coverage
- Weekly Cargo dependency updates
- Weekly GitHub Actions updates
- Grouped minor/patch updates
MIT License