File: `tests/e2e-openshift/README.md`
# OpenTelemetry Operator OpenShift End-to-End Test Suite

This directory contains a comprehensive set of OpenShift-specific end-to-end tests for the OpenTelemetry Operator. These tests serve as **configuration blueprints** for users to understand and deploy various OpenTelemetry observability patterns on OpenShift.

## 🎯 Purpose

These test scenarios provide OpenTelemetry configuration blueprints that demonstrate:
- Integration with OpenShift-specific features (Routes, Monitoring, Security)
- Real-world observability patterns and configurations
- Step-by-step deployment instructions for various use cases

## πŸ“‹ Test Scenarios Overview

| Scenario | Purpose | Key Features |
|----------|---------|-------------|
| [route](./route/) | External Access via OpenShift Routes | Route ingress, OTLP HTTP/gRPC endpoints |
| [scrape-in-cluster-monitoring](./scrape-in-cluster-monitoring/) | Prometheus Metrics Federation | In-cluster monitoring integration, metrics scraping |
| [otlp-metrics-traces](./otlp-metrics-traces/) | OTLP Endpoint with Tempo | Metrics & traces collection, Tempo integration |
| [multi-cluster](./multi-cluster/) | Secure Multi-Cluster Communication | TLS certificates, cross-cluster telemetry |
| [must-gather](./must-gather/) | Diagnostic Information Collection | Must-gather functionality, target allocator |
| [monitoring](./monitoring/) | Platform Monitoring Integration | OpenShift monitoring stack integration |
| [kafka](./kafka/) | Messaging Layer for Telemetry | Kafka-based telemetry distribution |
| [export-to-cluster-logging-lokistack](./export-to-cluster-logging-lokistack/) | Log Export to LokiStack | Log shipping to OpenShift logging |

## πŸ”— OpenTelemetry Collector Components Tests

For detailed component-specific configurations and testing patterns, see the **OpenTelemetry Component E2E Test Suite** in the [distributed-tracing-qe](https://github.com/openshift/distributed-tracing-qe.git) repository:

**πŸ“‘ Receivers:**
- [filelog](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/filelog) - File-based log collection from Kubernetes pods
- [hostmetricsreceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/hostmetricsreceiver) - Host system metrics (CPU, memory, disk, network)
- [journaldreceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/journaldreceiver) - Systemd journal log collection
- [k8sclusterreceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/k8sclusterreceiver) - Kubernetes cluster-wide metrics
- [k8seventsreceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/k8seventsreceiver) - Kubernetes events collection
- [k8sobjectsreceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/k8sobjectsreceiver) - Kubernetes objects monitoring
- [kubeletstatsreceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/kubeletstatsreceiver) - Kubelet and container metrics
- [otlpjsonfilereceiver](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/otlpjsonfilereceiver) - OTLP JSON file log ingestion

**πŸ“€ Exporters:**
- [awscloudwatchlogsexporter](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/awscloudwatchlogsexporter) - AWS CloudWatch Logs integration
- [awsxrayexporter](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/awsxrayexporter) - AWS X-Ray tracing export
- [googlemanagedprometheusexporter](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/googlemanagedprometheusexporter) - Google Cloud Managed Prometheus
- [loadbalancingexporter](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/loadbalancingexporter) - High availability load balancing
- [prometheusremotewriteexporter](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/prometheusremotewriteexporter) - Prometheus remote write integration

**βš™οΈ Processors:**
- [batchprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/batchprocessor) - Batching for performance optimization
- [filterprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/filterprocessor) - Selective data filtering
- [groupbyattrsprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/groupbyattrsprocessor) - Attribute-based data grouping
- [memorylimiterprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/memorylimiterprocessor) - Memory usage protection
- [resourceprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/resourceprocessor) - Resource attribute manipulation
- [tailsamplingprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/tailsamplingprocessor) - Intelligent trace sampling
- [transformprocessor](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/transformprocessor) - Advanced data transformation

**πŸ”— Connectors:**
- [forwardconnector](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/forwardconnector) - Data forwarding between pipelines
- [routingconnector](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/routingconnector) - Conditional data routing
- [countconnector](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/countconnector) - Metrics generation from telemetry data

**πŸ”§ Extensions:**
- [oidcauthextension](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/oidcauthextension) - OIDC authentication
- [filestorageextension](https://github.com/openshift/distributed-tracing-qe/tree/main/tests/e2e-otel/filestorageextension) - Persistent file storage

These component test blueprints provide configurations for individual OpenTelemetry components that can be combined with the OpenShift integration patterns documented here.

## πŸš€ Quick Start

### Prerequisites

- OpenShift cluster (4.12+)
- OpenTelemetry Operator installed
- `oc` CLI tool configured
- Appropriate cluster permissions

### Running Tests

These tests use [Chainsaw](https://kyverno.github.io/chainsaw/) for end-to-end testing:

```bash
# Run all OpenShift tests
chainsaw test --test-dir tests/e2e-openshift/

# Run specific test scenario
chainsaw test --test-dir tests/e2e-openshift/route/
```

### Using as Configuration Templates

Each test directory contains:
- **Configuration Files**: YAML configuration blueprints
- **README.md**: Step-by-step deployment instructions
- **Scripts**: Verification and setup automation

## πŸ“ Directory Structure

```
tests/e2e-openshift/
β”œβ”€β”€ README.md # This overview
β”œβ”€β”€ route/ # External access patterns
β”œβ”€β”€ scrape-in-cluster-monitoring/ # Prometheus integration
β”œβ”€β”€ otlp-metrics-traces/ # OTLP with Tempo
β”œβ”€β”€ multi-cluster/ # Cross-cluster telemetry
β”œβ”€β”€ must-gather/ # Diagnostic collection
β”œβ”€β”€ monitoring/ # Platform monitoring
β”œβ”€β”€ kafka/ # Messaging patterns
└── export-to-cluster-logging-lokistack/ # Log export patterns
```

## πŸ”§ Configuration Patterns

### Common OpenShift Integrations

1. **Security Context Constraints (SCCs)**
- Automated SCC annotations for namespaces
- Service account configurations

2. **OpenShift Routes**
- TLS termination options
- External endpoint exposure

3. **Monitoring Stack Integration**
- Prometheus federation
- Platform monitoring labels

4. **RBAC Configurations**
- Cluster roles and bindings
- Service account permissions
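
As an illustration of the Routes pattern above, a minimal `OpenTelemetryCollector` resource exposing its OTLP HTTP receiver through an OpenShift Route might look like the following. This is a sketch, not a resource taken from any specific test: the name and pipeline are hypothetical, and the termination mode is one of several options.

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel            # hypothetical name
spec:
  ingress:
    type: route         # expose receiver endpoints via an OpenShift Route
    route:
      termination: edge # TLS terminated at the router
  config:
    receivers:
      otlp:
        protocols:
          http: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```

The operator creates the Route automatically; see the [route](./route/) scenario for the tested configuration.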

## πŸ“– Documentation

Each test scenario includes:
- **Configuration blueprints** for reference and adaptation
- **Step-by-step instructions** for manual deployment
- **Verification steps** to ensure proper operation
- **Troubleshooting guidance** for common issues

## 🏷️ Labels and Annotations

OpenShift-specific labels and annotations used across scenarios:
- `openshift.io/cluster-monitoring=true` - Enable platform monitoring
- `openshift.io/sa.scc.uid-range` - UID range for security contexts
- `openshift.io/sa.scc.supplemental-groups` - Supplemental groups for SCCs
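
For reference, these labels and annotations attach to a namespace as shown in the sketch below. The namespace name and the SCC range values are purely illustrative: OpenShift normally assigns the `sa.scc.*` annotations automatically, and tests only override them in specific scenarios.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: otel-demo                                  # hypothetical namespace
  labels:
    openshift.io/cluster-monitoring: "true"        # opt in to platform monitoring
  annotations:
    openshift.io/sa.scc.uid-range: "1000/1000"     # illustrative UID range
    openshift.io/sa.scc.supplemental-groups: "3000/1000"
```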

## 🀝 Contributing

When adding new test scenarios:
1. Include comprehensive README with step-by-step instructions
2. Provide configuration blueprint examples
3. Add verification scripts for testing
4. Document OpenShift-specific considerations

## πŸ“ Documentation Note

The comprehensive READMEs in this test suite were generated using Claude AI to provide detailed, step-by-step configuration blueprints for OpenTelemetry deployments on OpenShift. These AI-generated guides aim to accelerate user adoption by providing clear, actionable documentation for complex observability scenarios.

To regenerate or update the docs, the following prompt can be used.

> To enhance our OpenShift End-to-End (E2E) tests, we need to create comprehensive README files within the tests/e2e-openshift directory. These READMEs should provide a maintained set of OpenTelemetry configuration blueprints to assist users in easily deploying and configuring their observability stack, enabling them to quickly access and learn from out-of-the-box observability collection patterns. Each README must include step-by-step instructions referenced directly from the test cases, citing test resources and scripts. Since the test cases are written using the Chainsaw E2E testing tool, the READMEs should be designed from a user perspective for clear and easy follow-through.

## πŸ“š Additional Resources

- [OpenTelemetry Operator Documentation](https://github.com/open-telemetry/opentelemetry-operator)
- [OpenShift Documentation](https://docs.openshift.com/)
- [Chainsaw Testing Framework](https://kyverno.github.io/chainsaw/)
---

File: `tests/e2e-openshift/export-to-cluster-logging-lokistack/README.md`
# Export to Cluster Logging LokiStack Test

This test demonstrates how to export OpenTelemetry logs to OpenShift's cluster logging infrastructure using LokiStack for centralized log management and analysis.

## Test Overview

This test creates:
1. A MinIO instance for LokiStack object storage
2. A LokiStack instance for log storage and querying
3. An OpenTelemetry Collector that processes and exports logs to LokiStack
4. Log generation to test the end-to-end flow
5. Integration with OpenShift logging UI plugin

## Prerequisites

- OpenShift cluster (4.12+)
- OpenTelemetry Operator installed
- Red Hat OpenShift Logging Operator installed
- `oc` CLI tool configured
- Appropriate cluster permissions

## Configuration Resources

### MinIO Object Storage

Deploy MinIO for LokiStack storage backend:

**Configuration:** [`install-minio.yaml`](./install-minio.yaml)

Creates MinIO infrastructure:
- 2Gi PersistentVolumeClaim for storage
- MinIO deployment with demo credentials (tempo/supersecret)
- Service for internal cluster access
- Secret with S3-compatible access configuration

### LokiStack Instance

Deploy LokiStack for log storage:

**Configuration:** [`install-loki.yaml`](./install-loki.yaml)

Creates a LokiStack with:
- 1x.demo size for testing environments
- S3-compatible storage via MinIO
- v13 schema with openshift-logging tenant mode
- Integration with cluster logging infrastructure
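
Putting those settings together, the `LokiStack` resource has roughly the following shape. This is a sketch assembled from the values described in this test (name, secret name, schema version, and tenant mode); the `storageClassName` is an assumption and must match a StorageClass available in your cluster.

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.demo                 # testing-only sizing
  storage:
    schemas:
      - version: v13
        effectiveDate: "2023-10-15"
    secret:
      name: logging-loki-s3     # S3-compatible MinIO credentials
      type: s3
  storageClassName: gp3-csi     # assumption: replace with your cluster's StorageClass
  tenants:
    mode: openshift-logging
```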

### OpenTelemetry Collector Configuration

Deploy collector with LokiStack integration:

**Configuration:** [`otel-collector.yaml`](./otel-collector.yaml)

Configures a complete collector setup with:
- Service account with LokiStack write permissions
- ClusterRole for accessing pods, namespaces, and nodes
- Bearer token authentication for LokiStack gateway
- k8sattributes processor for Kubernetes metadata
- Transform processor for log level normalization
- Dual pipeline: one for LokiStack export, one for debug output
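
The collector configuration in the test follows this general shape. The excerpt below is a sketch, not the test file verbatim: the gateway endpoint, token path, and CA path are assumptions based on this test's service names and standard OpenShift mounts, and the transform processor is omitted for brevity.

```yaml
extensions:
  bearertokenauth:
    filename: /var/run/secrets/kubernetes.io/serviceaccount/token
receivers:
  otlp:
    protocols:
      http: {}
processors:
  k8sattributes: {}           # enrich logs with pod/namespace/node metadata
exporters:
  otlphttp:
    # assumed gateway URL; "application" selects the LokiStack tenant
    endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp
    auth:
      authenticator: bearertokenauth
    tls:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  debug: {}
service:
  extensions: [bearertokenauth]
  pipelines:
    logs:                     # export pipeline to LokiStack
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp]
    logs/debug:               # second pipeline for debug output
      receivers: [otlp]
      exporters: [debug]
```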

### Log Generation

Generate test logs to validate the pipeline:

**Configuration:** [`generate-logs.yaml`](./generate-logs.yaml)

Creates a job that:
- Generates 20 structured log entries in OTLP format
- Sends logs via HTTP POST to the collector
- Includes service metadata and custom attributes
- Uses proper OTLP JSON structure for compatibility
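
An OTLP/HTTP JSON body for a single log record has the general shape below (the service name and values here are illustrative, not copied from the test). The job POSTs payloads like this to the collector's OTLP HTTP port (`4318` by default) at `/v1/logs`:

```json
{
  "resourceLogs": [{
    "resource": {
      "attributes": [
        { "key": "service.name", "value": { "stringValue": "test-logger" } }
      ]
    },
    "scopeLogs": [{
      "logRecords": [{
        "timeUnixNano": "1700000000000000000",
        "severityText": "INFO",
        "body": { "stringValue": "sample log entry" }
      }]
    }]
  }]
}
```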

### Logging UI Plugin

Enable the logging UI plugin for log visualization:

**Configuration:** [`logging-uiplugin.yaml`](./logging-uiplugin.yaml)

Configures the OpenShift console plugin for:
- Log visualization in the OpenShift web console
- Integration with LokiStack for log querying
- Enhanced logging user interface experience

## Deployment Steps

1. **Install MinIO for object storage:**
```bash
oc apply -f install-minio.yaml
```

2. **Deploy LokiStack instance:**
```bash
oc apply -f install-loki.yaml
```

3. **Deploy OpenTelemetry Collector with RBAC:**
```bash
oc apply -f otel-collector.yaml
```

4. **Enable logging UI plugin:**
```bash
oc apply -f logging-uiplugin.yaml
```

5. **Generate test logs:**
```bash
oc apply -f generate-logs.yaml
```

## Expected Resources

The test creates and verifies these resources:

### Storage Infrastructure
- **MinIO**: Object storage backend with `tempo` bucket
- **PVC**: 2Gi persistent volume for MinIO storage
- **Secret**: `logging-loki-s3` with MinIO access credentials

### Logging Stack
- **LokiStack**: `logging-loki` instance in demo mode
- **Gateway**: HTTP gateway for log ingestion
- **Storage Schema**: v13 schema with 2023-10-15 effective date

### OpenTelemetry Integration
- **Service Account**: `otel-collector-deployment` with logging permissions
- **Collector**: `otel-collector` with LokiStack exporter
- **RBAC**: Cluster role for writing to LokiStack

### Log Generation
- **Job**: `generate-logs` creating test log entries
- **UI Plugin**: Logging view plugin for OpenShift console

## Testing the Configuration

The test includes verification logic in the Chainsaw test configuration.

**Verification Script:** [`check_logs.sh`](./check_logs.sh)

The script verifies:
- Log generation job completes successfully
- Collector exports logs to LokiStack gateway
- LokiStack gateway receives and processes log entries
- End-to-end log flow from generation to storage

## Additional Verification Commands

Verify the logging infrastructure:

```bash
# Check LokiStack status
oc get lokistack logging-loki -o yaml

# Check MinIO deployment
oc get deployment minio -o yaml

# View LokiStack gateway service
oc get svc logging-loki-gateway-http

# Check collector service account permissions (the "application" resource
# belongs to the loki.grafana.com API group)
oc auth can-i create application.loki.grafana.com --as=system:serviceaccount:openshift-logging:otel-collector-deployment

# Port forward to MinIO for bucket verification
oc port-forward svc/minio 9000:9000 &
curl -u tempo:supersecret http://localhost:9000/minio/health/live # Using demo test credentials

# Check collector metrics
oc port-forward svc/otel-collector 8888:8888 &
curl http://localhost:8888/metrics | grep otelcol_exporter
```

## Verification

The test verifies:
- βœ… MinIO is deployed and accessible as object storage
- βœ… LokiStack instance is ready and configured
- βœ… OpenTelemetry Collector has proper RBAC permissions
- βœ… Collector is configured with LokiStack OTLP exporter
- βœ… Bearer token authentication is working
- βœ… Log generation job completes successfully
- βœ… Logs are processed through k8sattributes and transform processors
- βœ… Logs are successfully exported to LokiStack
- βœ… Logging UI plugin is enabled for log visualization

## Key Features

- **LokiStack Integration**: Native integration with OpenShift cluster logging
- **Object Storage**: MinIO backend for log persistence
- **Authentication**: Bearer token authentication with service accounts
- **Log Processing**: Kubernetes attributes and log transformation
- **OTLP Protocol**: Uses OTLP HTTP for log export to LokiStack
- **Multi-Pipeline**: Separate pipelines for LokiStack export and debug output
- **UI Integration**: Logging console plugin for log visualization

## Configuration Notes

- LokiStack runs in `1x.demo` size for testing environments
- MinIO uses ephemeral storage with demo credentials (tempo/supersecret) - **FOR TESTING ONLY**
- Collector uses bearer token authentication for LokiStack access
- Service CA certificate is used for TLS communication with LokiStack
- Log processors add Kubernetes metadata and normalize severity levels
- The application log type is set for proper tenant routing in LokiStack