# Configuring the OpenTelemetry Demo App to send data to an Oracle Cloud Infrastructure (OCI) backend

The [OpenTelemetry Astronomy Shop](https://github.com/open-telemetry/opentelemetry-demo) is a demo app that illustrates the implementation of OpenTelemetry in a near real-world environment. It is a microservice-based distributed system that can be deployed easily using Docker or Kubernetes; here we will focus on deploying it with Kubernetes and Helm.

Our aim is to guide you through the steps needed to deploy the OpenTelemetry Demo in a K8s cluster and send its data to OCI Observability and Management services.

## Prerequisites
The following prerequisites are needed:
1. An [OCI account](https://signup.cloud.oracle.com).
2. In the Application Performance Monitoring (APM) service, [create an APM domain](https://docs.oracle.com/iaas/application-performance-monitoring/doc/create-apm-domain.html).
3. The [Data Upload Endpoint URL and the private data key](https://docs.oracle.com/en-us/iaas/application-performance-monitoring/doc/obtain-data-upload-endpoint-and-data-keys.html#GUID-912EA36F-4E58-4954-B9C2-4E9A9BADDAE9) of that domain.
4. A Kubernetes cluster, either local, in OCI, or at another cloud vendor, with access to it via <samp>kubectl</samp>.
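Before deploying, it is worth confirming that <samp>kubectl</samp> points at the intended cluster. The following are standard kubectl commands; the context and node names will differ in your environment:

```shell
# Show which cluster the current context points to
kubectl config current-context
kubectl cluster-info

# Confirm the nodes are ready to schedule the demo's pods
kubectl get nodes
```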

## Deploy the Demo App
We will start by creating a dedicated namespace for the demo app:
```bash
kubectl create namespace otel-demo-app
```
Next, we will create a secret named <samp>oci-apm-secret</samp>:
```bash
kubectl create secret generic oci-apm-secret -n otel-demo-app --from-literal="OCI_APM_ENDPOINT=<Data Upload Endpoint>" --from-literal="OCI_APM_DATAKEY=<Private Data Key>"
```
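If you want to double-check the secret before wiring it into the chart, you can read it back. This is a standard kubectl pattern; the jsonpath key matches the literal name used above:

```shell
# Decode the endpoint stored in the secret (values are base64-encoded at rest)
kubectl get secret oci-apm-secret -n otel-demo-app \
  -o jsonpath='{.data.OCI_APM_ENDPOINT}' | base64 -d; echo
```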
Add the OpenTelemetry Helm charts repo to your Helm configuration to allow deploying the OpenTelemetry Demo:
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```
Create a file <samp>my-values.yaml</samp> with the following content:
```yaml
opentelemetry-collector:
  extraEnvsFrom:
    - secretRef:
        name: oci-apm-secret

  config:
    exporters:
      otlphttp/oci_spans:
        endpoint: "${OCI_APM_ENDPOINT}/20200101/opentelemetry/"
        headers:
          authorization: "dataKey ${OCI_APM_DATAKEY}"
        tls:
          insecure: false
      otlphttp/oci_metrics:
        endpoint: "${OCI_APM_ENDPOINT}/20200101/observations/metric?dataFormat=otlp-metric&dataFormatVersion=1"
        headers:
          authorization: "dataKey ${OCI_APM_DATAKEY}"
        tls:
          insecure: false

    service:
      pipelines:
        traces:
          exporters: [otlp, debug, spanmetrics, otlphttp/oci_spans]
        metrics:
          exporters: [otlphttp/prometheus, debug, otlphttp/oci_metrics]
```
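The <samp>extraEnvsFrom</samp> entry injects the secret's keys as environment variables into the collector pod, and the collector expands <samp>${OCI_APM_ENDPOINT}</samp> and <samp>${OCI_APM_DATAKEY}</samp> from them at startup. Before installing, you can optionally render the chart locally to check that the values file is picked up; this dry run only needs the Helm repo added above, not cluster access:

```shell
# Render the manifests locally and confirm the OCI exporters appear
# in the generated collector configuration
helm template otel-demo-app open-telemetry/opentelemetry-demo \
  --values my-values.yaml | grep -n "otlphttp/oci"
```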
Run the following <samp>helm</samp> command to deploy the OpenTelemetry Demo App:
```bash
helm install otel-demo-app open-telemetry/opentelemetry-demo --values my-values.yaml -n otel-demo-app
```
Here is the expected output:
```bash
...

 ██████╗ ████████╗███████╗██╗         ██████╗ ███████╗███╗   ███╗ ██████╗
██╔═══██╗╚══██╔══╝██╔════╝██║         ██╔══██╗██╔════╝████╗ ████║██╔═══██╗
██║   ██║   ██║   █████╗  ██║         ██║  ██║█████╗  ██╔████╔██║██║   ██║
██║   ██║   ██║   ██╔══╝  ██║         ██║  ██║██╔══╝  ██║╚██╔╝██║██║   ██║
╚██████╔╝   ██║   ███████╗███████╗    ██████╔╝███████╗██║ ╚═╝ ██║╚██████╔╝
 ╚═════╝    ╚═╝   ╚══════╝╚══════╝    ╚═════╝ ╚══════╝╚═╝     ╚═╝ ╚═════╝


- All services are available via the Frontend proxy: http://localhost:8080
  by running these commands:
     kubectl --namespace default port-forward svc/otel-demo-app-frontendproxy 8080:8080

  The following services are available at these paths after the frontendproxy service is exposed with port forwarding:
  Webstore             http://localhost:8080/
  Jaeger UI            http://localhost:8080/jaeger/ui/
  Grafana              http://localhost:8080/grafana/
  Load Generator UI    http://localhost:8080/loadgen/
  Feature Flags UI     http://localhost:8080/feature/
```
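Note that the chart's output assumes the <samp>default</samp> namespace. Since we installed the release into <samp>otel-demo-app</samp>, adjust the port-forward command accordingly (the service name is taken from the output above):

```shell
kubectl --namespace otel-demo-app port-forward svc/otel-demo-app-frontendproxy 8080:8080
```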

## Using OCI APM service to observe Traces and Spans
With the OpenTelemetry Demo App installed, you are now ready to explore the capabilities of OCI Application Performance Monitoring (APM) and gain deep insights into your application's performance. Let's start with the process of visualising spans and traces using OCI APM, showcasing its powerful features and benefits.

### Getting Started with OCI APM
Log in to your Oracle Cloud Infrastructure console and navigate to Application Performance Monitoring. Here, you will find a wealth of information about your application's performance.

### Visualising Traces and Spans
Click on the "Traces" tab in the APM console / Trace Explorer. You will see a list of traces generated by the OTel demo application. Each trace represents a user request or an operation within your application:

Select a specific trace to view its details, including the topology of the trace and its spans:

Click on any of the spans to see the span details, which include Kubernetes data, SpanId, TraceId, etc.:

The Services Topology view allows you to visualise associations between services; several parameters can be used for the arrow width:

## Using OCI Logging Analytics service to collect and analyze Logs from Kubernetes Infrastructure and Pods
OCI Logging Analytics provides a [complete solution](https://docs.oracle.com/en-us/iaas/logging-analytics/doc/kubernetes-solution.html) for monitoring Kubernetes (K8s) clusters deployed in OCI, third-party public clouds, private clouds, or on-premises, including managed Kubernetes deployments. We will start by discovering the K8s cluster running the OpenTelemetry Demo App.

### Getting Started with monitoring a K8s Cluster using OCI Logging Analytics
Log in to your Oracle Cloud Infrastructure console and navigate to Logging Analytics Administration, then:
1. Select Solutions -> Kubernetes -> Connect Clusters -> Monitor Kubernetes -> Oracle OKE (here we assume that the K8s cluster is running in OCI).
2. Select the cluster and press *Next*.
3. Select the right compartment to be used for telemetry data and related monitoring resources; usually it is the same as the one used for collecting the logs.
4. Click on *Configure log collection*. This creates all needed dynamic groups and policies to allow collecting logs, metrics, and object information from related Kubernetes components, compute nodes, subnets, and load balancers.

The deployed solution creates these statefulsets, daemonsets, and cronjobs in namespace *oci-onm*:
```bash
$ kubectl get pods -n oci-onm
NAME                               READY   STATUS      RESTARTS   AGE
oci-onm-discovery-28982290-zdkxx   0/1     Completed   0          14m
oci-onm-discovery-28982295-bztr2   0/1     Completed   0          9m12s
oci-onm-discovery-28982300-zrxxq   0/1     Completed   0          4m12s
oci-onm-logan-krcmb                1/1     Running     3          80d
oci-onm-logan-npltp                1/1     Running     2          80d
oci-onm-logan-tthvm                1/1     Running     2          80d
oci-onm-mgmt-agent-0               1/1     Running     2          80d
```
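To verify that log collection is healthy, you can tail one of the <samp>oci-onm-logan</samp> pods. The pod name below is taken from the listing above and will differ in your environment:

```shell
# Inspect recent output of one of the logan daemonset pods
kubectl logs -n oci-onm oci-onm-logan-krcmb --tail=20
```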
### Screenshots from the Solution Dashboard
Cluster overview:

Workload overview:

Node overview:

Pod overview:

### Screenshots from Log Explorer

List of TOP Log Sources:

Log Records by Pod name:

Compare Cluster results:


## Using OCI Monitoring service to inspect the collected metrics

Metrics from the OpenTelemetry Demo App are routed via the OpenTelemetry Collector to the OCI APM service, which in turn forwards them to the OCI Monitoring service. In parallel, infrastructure metrics from the Kubernetes cluster itself are collected by the OCI Management Agent, which runs as a statefulset in the cluster.

The collected metrics can be inspected using the Metrics Explorer in the OCI Monitoring service.

Metrics collected by the Management Agent can be found in the metric namespace *mgmtagent_kubernetes_metrics*:

Metrics sent by the OpenTelemetry Collector can be found in the metric namespace *oracle_apm_monitoring*:
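If you prefer scripting over the console, the same metrics can also be queried with the OCI CLI. The compartment OCID below is a placeholder you need to replace, and the metric name in the MQL query is only an illustrative example of a collector metric, not one confirmed by this walkthrough:

```shell
# Summarize datapoints for a collector metric using an MQL query
oci monitoring metric-data summarize-metrics-data \
  --compartment-id "<compartment-ocid>" \
  --namespace "oracle_apm_monitoring" \
  --query-text "queue_size[1m].mean()"
```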

Finally, out-of-the-box or custom dashboards can be used to visualize the collected metrics:
