1 change: 1 addition & 0 deletions .gitignore
@@ -31,3 +31,4 @@ Dockerfile.cross
examples/sample_secret.yaml
examples/dynatrace_secret.yaml
examples/secret.yaml
/examples/datasink/dynatrace-prod-setup.yaml
18 changes: 14 additions & 4 deletions Makefile
@@ -79,7 +79,7 @@ build: manifests generate fmt vet ## Build manager binary.

.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
	go run ./cmd/main.go start
	OPERATOR_CONFIG_NAMESPACE=metrics-operator-system go run ./cmd/main.go start

.PHONY: build-docker-binary
build-docker-binary: manifests generate fmt vet ## Build manager binary.
@@ -238,13 +238,11 @@ dev-local-all:
	$(MAKE) crossplane-provider-sample
	$(MAKE) dev-namespace
	$(MAKE) dev-secret
	$(MAKE) dev-operator-namespace
	$(MAKE) dev-basic-metric
	$(MAKE) dev-managed-metric





.PHONY: dev-secret
dev-secret:
	kubectl apply -f examples/secret.yaml
@@ -253,6 +251,10 @@ dev-secret:
dev-namespace:
	kubectl apply -f examples/namespace.yaml

.PHONY: dev-operator-namespace
dev-operator-namespace:
	kubectl create namespace metrics-operator-system --dry-run=client -o yaml | kubectl apply -f -

.PHONY: dev-basic-metric
dev-basic-metric:
	kubectl apply -f examples/basic_metric.yaml
@@ -261,6 +263,14 @@ dev-basic-metric:
dev-managed-metric:
	kubectl apply -f examples/managed_metric.yaml

.PHONY: dev-apply-dynatrace-prod-setup
dev-apply-dynatrace-prod-setup:
	kubectl apply -f examples/datasink/dynatrace-prod-setup.yaml

.PHONY: dev-apply-metric-dynatrace-prod
dev-apply-metric-dynatrace-prod:
	kubectl apply -f examples/datasink/metric-using-dynatrace-prod.yaml

.PHONY: dev-v1beta1-compmetric
dev-v1beta1-compmetric:
	kubectl apply -f examples/v1beta1/compmetric.yaml
109 changes: 106 additions & 3 deletions README.md
@@ -12,6 +12,7 @@ The Metrics Operator is a powerful tool designed to monitor and provide insights
- [Usage](#usage)
- [RBAC Configuration](#rbac-configuration)
- [Remote Cluster Access](#remote-cluster-access)
- [DataSink Configuration](#datasink-configuration)
- [Data Sink Integration](#data-sink-integration)

## Key Features
@@ -124,7 +125,7 @@ graph LR
### Prerequisites

1. Create a namespace for the Metrics Operator.
2. Create a secret containing the data sink credentials in the operator's namespace.
2. Create a DataSink resource and an associated authentication secret for your metrics destination (a minimal sketch follows this list).
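
For reference, these prerequisites can be expressed as plain manifests. The sketch below uses the namespace and secret names assumed by the examples later in this README (`metrics-operator-system`, `dynatrace-credentials`); adjust them to your environment.

```yaml
# Sketch of the prerequisite namespace and credentials Secret.
# Names are the ones assumed by the examples in this README; adjust as needed.
apiVersion: v1
kind: Namespace
metadata:
  name: metrics-operator-system
---
apiVersion: v1
kind: Secret
metadata:
  name: dynatrace-credentials
  namespace: metrics-operator-system
type: Opaque
stringData:
  api-token: <your-data-sink-api-token>   # replace with a real token
```

Apply the file with `kubectl apply -f <file>.yaml` before installing the operator.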

### Deployment

@@ -139,6 +140,8 @@ helm upgrade --install metrics-operator ghcr.io/sap/github.com/sap/metrics-opera

Replace `<operator-namespace>` and `<version>` with appropriate values.

After deployment, create your DataSink configuration as described in the [DataSink Configuration](#datasink-configuration) section.

## Usage

### Metric
@@ -320,13 +323,113 @@ kubectl apply -f rbac-config.yaml
Remember to update this RBAC configuration whenever you add new resource types to monitor.


## DataSink Configuration

The Metrics Operator uses DataSink custom resources to define where and how metrics data should be sent. This provides a flexible and secure way to configure data destinations.

### Creating a DataSink

Define a DataSink resource to specify the connection details and authentication for your metrics destination:

```yaml
apiVersion: metrics.cloud.sap/v1alpha1
kind: DataSink
metadata:
  name: default
  namespace: metrics-operator-system
spec:
  connection:
    endpoint: "https://your-tenant.live.dynatrace.com/api/v2/metrics/ingest"
    protocol: "http"
    insecureSkipVerify: false
  authentication:
    apiKey:
      secretKeyRef:
        name: dynatrace-credentials
        key: api-token
```

### DataSink Specification

The `DataSinkSpec` contains the following fields:

#### Connection
- **endpoint**: The target endpoint URL where metrics will be sent
- **protocol**: Communication protocol (`http` or `grpc`)
- **insecureSkipVerify**: (Optional) Skip TLS certificate verification

#### Authentication
- **apiKey**: API key authentication configuration
- **secretKeyRef**: Reference to a Kubernetes Secret containing the API key
- **name**: Name of the Secret
- **key**: Key within the Secret containing the API token

### Using DataSink in Metrics

All metric types support the `dataSinkRef` field to specify which DataSink to use:

```yaml
apiVersion: metrics.cloud.sap/v1alpha1
kind: Metric
metadata:
  name: pod-count
spec:
  name: "pods.count"
  target:
    kind: Pod
    group: ""
    version: v1
  dataSinkRef:
    name: default # References the DataSink named "default"
```

### Default Behavior

If no `dataSinkRef` is specified in a metric resource, the operator will automatically use a DataSink named "default" in the operator's namespace. This provides backward compatibility and simplifies configuration for single data sink deployments.
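
For example, the following sketch omits `dataSinkRef` entirely, so the operator falls back to the DataSink named `default` in its own namespace (the metric mirrors the example above; the metadata name is illustrative):

```yaml
# No dataSinkRef is set, so the DataSink named "default" in the
# operator's namespace is used implicitly.
apiVersion: metrics.cloud.sap/v1alpha1
kind: Metric
metadata:
  name: pod-count-default-sink
spec:
  name: "pods.count"
  target:
    kind: Pod
    group: ""
    version: v1
```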

### Supported Metric Types

The `dataSinkRef` field is available in all metric resource types (a minimal fragment is sketched after this list):

- [`Metric`](#metric): Basic metrics for Kubernetes resources
- [`ManagedMetric`](#managed-metric): Metrics for Crossplane managed resources
- [`FederatedMetric`](#federated-metric): Metrics across multiple clusters
- [`FederatedManagedMetric`](#federated-managed-metric): Managed resource metrics across multiple clusters
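
A minimal fragment, applicable under the `spec` of any of the kinds above:

```yaml
# Shared fragment: reference a DataSink by name from any supported metric kind.
spec:
  dataSinkRef:
    name: default
```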

### Examples and Detailed Documentation

For complete examples and more detailed configuration options:

- See the [`examples/datasink/`](examples/datasink/) directory for practical examples
- Read the comprehensive [DataSink Configuration Guide](docs/datasink-configuration.md) for detailed documentation

The examples directory contains:
- Basic DataSink configuration examples
- Examples showing DataSink usage with different metric types
- Migration guidance from legacy configurations

The detailed guide covers:
- Complete specification reference
- Multiple DataSink scenarios
- Advanced configuration options
- Troubleshooting and best practices

### Migration from Legacy Configuration

**Important**: The old method of using hardcoded secret names (such as `co-dynatrace-credentials`) has been deprecated and removed. You must now use DataSink resources to configure your metrics destinations.

To migrate (a minimal sketch follows these steps):
1. Create a DataSink resource pointing to your existing authentication secret
2. Update your metric resources to reference the DataSink using `dataSinkRef`
3. Remove any hardcoded secret references from your configuration
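
A minimal sketch of steps 1 and 2, assuming your existing secret is the legacy `co-dynatrace-credentials` and stores the token under an `api-token` key (adjust the secret key and endpoint to your actual setup):

```yaml
# Step 1 (sketch): a DataSink that reuses the existing legacy secret.
apiVersion: metrics.cloud.sap/v1alpha1
kind: DataSink
metadata:
  name: default
  namespace: metrics-operator-system
spec:
  connection:
    endpoint: "https://your-tenant.live.dynatrace.com/api/v2/metrics/ingest"
  authentication:
    apiKey:
      secretKeyRef:
        name: co-dynatrace-credentials
        key: api-token          # use the key your existing secret actually contains
---
# Step 2 (sketch): point a metric at the DataSink via dataSinkRef.
apiVersion: metrics.cloud.sap/v1alpha1
kind: Metric
metadata:
  name: pod-count
spec:
  name: "pods.count"
  target:
    kind: Pod
    group: ""
    version: v1
  dataSinkRef:
    name: default
```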

## Data Sink Integration

The Metrics Operator sends collected data to a configured data sink for storage and analysis. The data sink (e.g., Dynatrace) provides tools for data aggregation, filtering, and visualization.
The Metrics Operator sends collected data to configured data sinks for storage and analysis. Data sinks (e.g., Dynatrace) provide tools for data aggregation, filtering, and visualization.

To make the most of your metrics:

1. Configure your data sink according to its documentation.
1. Configure your DataSink resources according to your data sink's documentation.
2. Use the data sink's query language or UI to create custom views of your metrics.
3. Set up alerts based on metric thresholds or patterns.
4. Leverage the data sink's analysis tools to gain insights into your system's behavior and performance.
10 changes: 10 additions & 0 deletions api/v1alpha1/conditions.go
@@ -33,4 +33,14 @@ const (

	// TypeError is a generic condition type that indicates an error has occurred
	TypeError = "Error"

	// TypeReady is a condition type that indicates the resource is ready
	TypeReady = "Ready"

	// StatusStringTrue represents the True status string.
	StatusStringTrue string = "True"
	// StatusStringFalse represents the False status string.
	StatusStringFalse string = "False"
	// StatusStringUnknown represents the Unknown status string.
	StatusStringUnknown string = "Unknown"
)
82 changes: 82 additions & 0 deletions api/v1alpha1/datasink_types.go
@@ -0,0 +1,82 @@
/*
Copyright 2024.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Connection defines the connection details for the DataSink
type Connection struct {
	// Endpoint specifies the target endpoint URL
	Endpoint string `json:"endpoint"`
}

// APIKeyAuthentication defines API key authentication configuration
type APIKeyAuthentication struct {
	// SecretKeyRef references a key in a Kubernetes Secret containing the API key
	SecretKeyRef corev1.SecretKeySelector `json:"secretKeyRef"`
}

// Authentication defines authentication mechanisms for the DataSink
type Authentication struct {
	// APIKey specifies API key authentication configuration
	// +optional
	APIKey *APIKeyAuthentication `json:"apiKey,omitempty"`
}

// DataSinkSpec defines the desired state of DataSink
type DataSinkSpec struct {
	// Connection specifies the connection details for the data sink
	Connection Connection `json:"connection"`
	// Authentication specifies the authentication configuration
	// +optional
	Authentication *Authentication `json:"authentication,omitempty"`
}

// DataSinkStatus defines the observed state of DataSink
type DataSinkStatus struct {
	// Conditions represent the latest available observations of an object's state
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// DataSink is the Schema for the datasinks API
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="ENDPOINT",type="string",JSONPath=".spec.connection.endpoint"
// +kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp"
type DataSink struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   DataSinkSpec   `json:"spec,omitempty"`
	Status DataSinkStatus `json:"status,omitempty"`
}

// DataSinkList contains a list of DataSink
// +kubebuilder:object:root=true
type DataSinkList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []DataSink `json:"items"`
}

func init() {
	SchemeBuilder.Register(&DataSink{}, &DataSinkList{})
}
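
Together with the `TypeReady` and status-string constants added in `api/v1alpha1/conditions.go`, a reconciled DataSink's status could end up looking roughly like the following. This is a hypothetical illustration; the `reason` and `message` values are invented and not defined by this PR:

```yaml
# Hypothetical status of a reconciled DataSink using the new Ready condition.
status:
  conditions:
    - type: Ready                     # v1alpha1.TypeReady
      status: "True"                  # matches StatusStringTrue
      reason: CredentialsResolved     # illustrative only
      message: Endpoint configured and API token resolved from the referenced Secret
      lastTransitionTime: "2024-01-01T00:00:00Z"
```
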
6 changes: 6 additions & 0 deletions api/v1alpha1/federatedmanagedmetric_types.go
@@ -41,6 +41,12 @@ type FederatedManagedMetricSpec struct {
	// +kubebuilder:default:="10m"
	Interval metav1.Duration `json:"interval,omitempty"`

	// DataSinkRef specifies the DataSink to be used for this federated managed metric.
	// If not specified, the DataSink named "default" in the operator's
	// namespace will be used.
	// +optional
	DataSinkRef *DataSinkReference `json:"dataSinkRef,omitempty"`

	FederatedClusterAccessRef FederateClusterAccessRef `json:"federateClusterAccessRef,omitempty"`
}

6 changes: 6 additions & 0 deletions api/v1alpha1/federatedmetric_types.go
@@ -44,6 +44,12 @@ type FederatedMetricSpec struct {
	// +kubebuilder:default:="10m"
	Interval metav1.Duration `json:"interval,omitempty"`

	// DataSinkRef specifies the DataSink to be used for this federated metric.
	// If not specified, the DataSink named "default" in the operator's
	// namespace will be used.
	// +optional
	DataSinkRef *DataSinkReference `json:"dataSinkRef,omitempty"`

	FederatedClusterAccessRef FederateClusterAccessRef `json:"federateClusterAccessRef,omitempty"`
}

6 changes: 6 additions & 0 deletions api/v1alpha1/managedmetric_types.go
@@ -46,6 +46,12 @@ type ManagedMetricSpec struct {
	// +kubebuilder:default:="10m"
	Interval metav1.Duration `json:"interval,omitempty"`

	// DataSinkRef specifies the DataSink to be used for this managed metric.
	// If not specified, the DataSink named "default" in the operator's
	// namespace will be used.
	// +optional
	DataSinkRef *DataSinkReference `json:"dataSinkRef,omitempty"`

	// +optional
	*RemoteClusterAccessRef `json:"remoteClusterAccessRef,omitempty"`
}
13 changes: 13 additions & 0 deletions api/v1alpha1/metric_types.go
@@ -36,6 +36,13 @@ const (
	PhasePending PhaseType = "Pending"
)

// DataSinkReference holds a reference to a DataSink resource.
type DataSinkReference struct {
	// Name is the name of the DataSink resource.
	// +kubebuilder:validation:Required
	Name string `json:"name"`
}

// MetricSpec defines the desired state of Metric
type MetricSpec struct {
	// Sets the name that will be used to identify the metric in Dynatrace(or other providers)
@@ -55,6 +62,12 @@ type MetricSpec struct {
	// +kubebuilder:default:="10m"
	Interval metav1.Duration `json:"interval,omitempty"`

	// DataSinkRef specifies the DataSink to be used for this metric.
	// If not specified, the DataSink named "default" in the operator's
	// namespace will be used.
	// +optional
	DataSinkRef *DataSinkReference `json:"dataSinkRef,omitempty"`

	// +optional
	*RemoteClusterAccessRef `json:"remoteClusterAccessRef,omitempty"`
