A high-performance AWS Lambda Extension written in Rust that automatically captures and forwards Lambda function logs to OpenObserve in real-time.
This Lambda layer runs as a separate process alongside your Lambda function, capturing all logs and telemetry without impacting your function's performance. It uses AWS Lambda's Extensions API to collect logs and forwards them to OpenObserve as structured JSON.
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Your Lambda   │     │   O2 Extension   │     │   OpenObserve   │
│    Function     │────▶│   (This Layer)   │────▶│    Platform     │
│                 │     │                  │     │                 │
└─────────────────┘     └──────────────────┘     └─────────────────┘
- Extension Registration: Layer registers with Lambda Extensions API
- Log Subscription: Subscribes to Lambda Logs API to receive all function logs
- Smart Buffering: Collects logs in memory with overflow protection
- Adaptive Flushing: Uses intelligent strategies based on invocation patterns
- Reliable Delivery: Implements retry logic with exponential backoff
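For orientation, this lifecycle maps onto AWS's public Extensions API and Logs API endpoints. The sketch below is illustrative only: the extension performs the equivalent calls internally in Rust, and the exact headers, ports, and buffering values it uses may differ.

# Illustrative sketch of the registration/subscription flow (not part of this repo).
# The Lambda sandbox exposes these endpoints via the AWS_LAMBDA_RUNTIME_API host.

# 1. Register the extension; the response carries a Lambda-Extension-Identifier header
EXT_ID=$(curl -si -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension/register" \
  -H "Lambda-Extension-Name: o2-lambda-extension" \
  -d '{"events": ["INVOKE", "SHUTDOWN"]}' \
  | grep -i 'lambda-extension-identifier' | awk '{print $2}' | tr -d '\r')

# 2. Subscribe to the Logs API, pointing it at a local HTTP listener (port 8080 here)
curl -s -X PUT "http://${AWS_LAMBDA_RUNTIME_API}/2020-08-15/logs" \
  -H "Lambda-Extension-Identifier: ${EXT_ID}" \
  -d '{"schemaVersion": "2020-08-15", "types": ["function", "platform", "extension"], "destination": {"protocol": "HTTP", "URI": "http://sandbox:8080"}, "buffering": {"timeoutMs": 1000, "maxBytes": 262144, "maxItems": 1000}}'

# 3. Block on the next lifecycle event (INVOKE or SHUTDOWN) and flush logs around it
curl -s "http://${AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension/event/next" \
  -H "Lambda-Extension-Identifier: ${EXT_ID}"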
TL;DR: Run ./build.sh && ./deploy.sh to build and deploy both architectures!
# Clone the repository
git clone <your-repo>
cd o2_lambda_layer
# Build and package the layer (builds BOTH architectures by default)
./build.sh
# This creates BOTH packages:
# - target/o2-lambda-extension-x86_64.zip (for Intel/AMD Lambda functions)
# - target/o2-lambda-extension-arm64.zip (for Graviton Lambda functions)
# Build for specific architecture only (optional):
BUILD_TARGETS=x86_64-unknown-linux-musl ./build.sh # x86_64 only
BUILD_TARGETS=aarch64-unknown-linux-musl ./build.sh # arm64 only
# Additional build commands:
./build.sh clean # Clean build artifacts
./build.sh test # Run tests
./build.sh check # Run cargo check for targets
./build.sh help # Show detailed help
# Deploy both architectures to AWS Lambda
./deploy.sh
# Deploy specific architecture only
DEPLOY_ARCH=x86_64 ./deploy.sh # Deploy x86_64 only
DEPLOY_ARCH=arm64 ./deploy.sh # Deploy arm64 only
# Deploy to specific AWS region
AWS_REGION=eu-west-1 ./deploy.sh
# List existing layer versions
./deploy.sh list
# Delete all layer versions (with confirmation)
./deploy.sh delete
# Show deployment help
./deploy.sh help
# For x86_64 Lambda functions (Intel/AMD):
aws lambda publish-layer-version \
--layer-name openobserve-extension-x86_64 \
--zip-file fileb://target/o2-lambda-extension-x86_64.zip \
--compatible-runtimes python3.9 python3.10 python3.11 python3.12 python3.13 nodejs18.x nodejs20.x nodejs22.x java11 java17 java21 dotnet6 dotnet8 go1.x ruby3.2 ruby3.3 provided.al2 provided.al2023 \
--compatible-architectures x86_64 \
--description "OpenObserve lambda layer extension for forwarding logs (x86_64)"
# For arm64 Lambda functions (Graviton):
aws lambda publish-layer-version \
--layer-name openobserve-extension-arm64 \
--zip-file fileb://target/o2-lambda-extension-arm64.zip \
--compatible-runtimes python3.9 python3.10 python3.11 python3.12 python3.13 nodejs18.x nodejs20.x nodejs22.x java11 java17 java21 dotnet6 dotnet8 go1.x ruby3.2 ruby3.3 provided.al2 provided.al2023 \
--compatible-architectures arm64 \
--description "OpenObserve lambda layer extension for forwarding logs (arm64)"
Add the layer to your Lambda function and set these environment variables:
O2_ORGANIZATION_ID=your_organization_id
O2_AUTHORIZATION_HEADER="Basic your_base64_encoded_credentials"
O2_ENDPOINT=https://api.openobserve.ai # OpenObserve API endpoint
O2_STREAM=default # Log stream name
That's it! Your Lambda function will now automatically forward all logs to OpenObserve.
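For example, the AWS CLI can attach the published layer and set these variables in one call. The function name, account ID, and layer ARN below are placeholders; use the ARN reported by ./deploy.sh list or aws lambda list-layer-versions. Note that --layers and --environment replace the function's existing lists, so include anything the function already uses.

# Placeholder function name and layer ARN; --layers and --environment overwrite existing values
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:openobserve-extension-x86_64:1 \
  --environment '{"Variables":{"O2_ORGANIZATION_ID":"your_organization_id","O2_AUTHORIZATION_HEADER":"Basic your_base64_encoded_credentials","O2_ENDPOINT":"https://api.openobserve.ai","O2_STREAM":"default"}}'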
| Variable | Required | Default | Description |
|---|---|---|---|
| O2_ENDPOINT | No | https://api.openobserve.ai | OpenObserve API endpoint URL |
| O2_ORGANIZATION_ID | Yes | - | Your OpenObserve organization ID |
| O2_STREAM | No | default | Target log stream name |
| O2_AUTHORIZATION_HEADER | Yes | - | Authorization header (e.g., "Basic <base64>") |
For high-volume or specialized use cases:
| Variable | Default | Description |
|---|---|---|
| O2_BATCH_SIZE | 100 | Max logs per batch sent to OpenObserve |
| O2_FLUSH_INTERVAL_MS | 5000 | Flush interval for periodic flushing (ms) |
| O2_MAX_BUFFER_SIZE_MB | 10 | Max memory buffer size before dropping logs |
| O2_REQUEST_TIMEOUT_MS | 30000 | HTTP request timeout (ms) |
| O2_MAX_RETRIES | 3 | Max retry attempts for failed requests |
| O2_INITIAL_RETRY_DELAY_MS | 1000 | Initial retry delay (ms) |
| O2_MAX_RETRY_DELAY_MS | 30000 | Maximum retry delay (ms) |
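As an illustration, a high-volume function might batch more aggressively and allow a larger buffer. The values below are examples, not recommendations; tune them against your own traffic and memory limits.

# Example tuning for a high-volume function (illustrative values)
O2_BATCH_SIZE=500
O2_MAX_BUFFER_SIZE_MB=25
O2_FLUSH_INTERVAL_MS=2000
O2_MAX_RETRIES=5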
The extension automatically chooses the optimal flushing strategy based on your function's invocation pattern:
- Synchronous flush (low-frequency functions, <10 invocations/minute): flushes all buffered logs before calling /next, ensuring maximum data delivery for sporadic functions.
- Asynchronous flush (high-frequency functions, ≥10 invocations/minute): flushes asynchronously during function execution for optimal performance on busy functions.
- Periodic flush (long-running functions, >30s since the last invocation): timer-based flushing at regular intervals, handling long-duration invocations efficiently.
The extension forwards all Lambda logs without parsing or modification:
- Function logs: Your application's stdout/stderr
- Runtime logs: Lambda runtime messages
- Extension logs: Start/stop/error messages from this extension
- Platform logs: Lambda platform events and metrics
Logs are sent to OpenObserve as JSON arrays:
[
  {
    "time": "2023-08-30T12:34:56.789Z",
    "type": "function",
    "record": "2023-08-30T12:34:56.789Z\tINFO\tYour log message here",
    "requestId": "abc123-def456-ghi789"
  }
]
- Cold Start: Adds ~70ms to cold start time (vs 450ms+ for comparable solutions)
- Runtime: Zero impact on function execution (runs asynchronously)
- Memory: Uses ~10MB RAM (configurable buffer size)
- Logging Volume: High-volume functions will increase OpenObserve ingestion costs
- Lambda Duration: Extension flushing happens after response is sent (no duration impact)
- Network: Additional outbound HTTPS requests to OpenObserve
- Network Failures: Extension retries with exponential backoff
- OpenObserve Outages: Logs are buffered temporarily, oldest dropped if buffer fills
- Extension Failures: Lambda function continues to work normally
- Data Loss: Some logs may be lost if Lambda times out during flush
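With the defaults above, for example, a failed request would be retried after roughly 1s, 2s, and 4s: the delay starts at O2_INITIAL_RETRY_DELAY_MS, is repeated up to O2_MAX_RETRIES times, and never exceeds O2_MAX_RETRY_DELAY_MS, assuming each delay doubles under standard exponential backoff.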
The extension logs to CloudWatch with these key messages:
# Successful startup
INFO Extension registered successfully: my-function (ID: abc123)
INFO Successfully subscribed to Logs API on port 8080
# Flushing activity
INFO Successfully sent 45 logs to OpenObserve
WARN Retry attempt 2/3 failed: Request timeout
# Shutdown
INFO Starting shutdown sequence with 2.5s deadline
INFO Final flush completed successfully
Extension Not Starting
ERROR Configuration error: O2_ORGANIZATION_ID environment variable is required
→ Check required environment variables are set
Network Connectivity
ERROR OpenObserve request failed with status 401: Unauthorized
→ Verify O2_AUTHORIZATION_HEADER is correct
High Memory Usage
WARN Log buffer full, dropping oldest batch
→ Reduce O2_MAX_BUFFER_SIZE_MB, or decrease O2_FLUSH_INTERVAL_MS so the buffer flushes more often
Test the extension configuration and connectivity:
# Set required environment variables
export O2_ORGANIZATION_ID=your_org
export O2_AUTHORIZATION_HEADER="Basic your_token"
# Run health check
./target/debug/o2-lambda-extension --health-check
# Or use the release binary
./target/release/o2-lambda-extension --health-check
The health check will:
- ✅ Validate all configuration parameters
- ✅ Test connectivity to OpenObserve API
- ✅ Verify authentication credentials
- ✅ Send a test log entry
# Show help
./target/debug/o2-lambda-extension --help
# Show version
./target/debug/o2-lambda-extension --version
# Run health check
./target/debug/o2-lambda-extension --health-check
# Alternative short form
./target/debug/o2-lambda-extension -h
- Environment Variables: Store credentials securely in Lambda environment
- AWS Secrets Manager: Consider using AWS Secrets Manager for enhanced security (see the example below)
- No Logging: Extension never logs credential values
- HTTPS Only: All communication uses TLS encryption
- Outbound Only: Extension only makes outbound connections
- No Inbound Ports: Does not open any listening ports (except internal log receiver)
- No Parsing: Logs are forwarded as-is without inspection
- No Storage: No local storage of logs or credentials
- Minimal Metadata: Only adds timestamps and request IDs
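For example (illustrative only; the secret name, function name, and organization ID are placeholders), the authorization header could be kept in Secrets Manager and injected into the function configuration at deploy time:

# Illustrative: fetch the auth header from Secrets Manager and set it on the function.
# Note that --environment replaces the function's existing variables.
AUTH_HEADER=$(aws secretsmanager get-secret-value \
  --secret-id openobserve/auth-header \
  --query SecretString --output text)

aws lambda update-function-configuration \
  --function-name my-function \
  --environment "{\"Variables\":{\"O2_ORGANIZATION_ID\":\"your_organization_id\",\"O2_AUTHORIZATION_HEADER\":\"${AUTH_HEADER}\"}}"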
# Prerequisites
rustup target add x86_64-unknown-linux-musl aarch64-unknown-linux-musl
# Build
./build.sh
# Run tests
cargo test
# Check code quality
cargo clippy
├── Cargo.toml # Dependencies and build configuration
├── Cargo.lock # Dependency lockfile
├── src/
│   ├── main.rs         # Extension entry point and lifecycle
│   ├── config.rs       # Environment variable handling
│   ├── extension.rs    # Extensions API client
│   ├── telemetry.rs    # Telemetry API subscriber (formerly logs.rs)
│   └── openobserve.rs  # OpenObserve HTTP client
├── tests/ # Test suite
├── build.sh # Cross-compilation script
├── deploy.sh # Deployment helper script
├── design.md # Technical design document
├── start_specs.md # Startup specifications
└── .gitignore # Git ignore rules
| Metric | Value | Notes |
|---|---|---|
| Cold Start Impact | ~70ms | One-time per container |
| Memory Usage | 5-15MB | Depends on buffer size |
| CPU Impact | <1% | During log processing |
| Network Overhead | ~1KB/log | Compressed JSON |
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests: cargo test
- Submit a pull request
MIT License - See Cargo.toml for details
- Issues: GitHub Issues
- Documentation: OpenObserve Docs
- Community: OpenObserve Slack
Built with ❤️ in Rust for maximum performance and reliability.