Commit 3ba038f

Merge pull request #40239 from JStickler/OSSMDOC-139
OSSMDOC-139: Document installing the OTEL Operator.
2 parents fd99d71 + 3703943 commit 3ba038f

9 files changed (+170, -12 lines)

_topic_maps/_topic_map.yml

Lines changed: 3 additions & 1 deletion
@@ -2806,8 +2806,10 @@ Topics:
   Topics:
   - Name: Installing distributed tracing
     File: distr-tracing-installing
-  - Name: Configuring distributed tracing
+  - Name: Configuring the distributed tracing platform
     File: distr-tracing-deploying
+  - Name: Configuring distributed tracing data collection
+    File: distr-tracing-deploying-otel
   - Name: Upgrading distributed tracing
     File: distr-tracing-updating
   - Name: Removing distributed tracing
distr_tracing/distr_tracing_install/distr-tracing-deploying-otel.adoc

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+[id="distr-tracing-deploying-otel"]
+= Configuring and deploying distributed tracing data collection
+include::modules/distr-tracing-document-attributes.adoc[]
+:context: deploying-data-collection
+
+toc::[]
+
+The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to use when creating and deploying the {OTELShortName} resources. You can either install the default configuration or modify the file to better suit your business requirements.
+
+[IMPORTANT]
+====
+The {OTELName} Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
+These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
+For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
+====
+
+// The following include statements pull in the module files that comprise the assembly.
+
+include::modules/distr-tracing-deploy-otel-collector.adoc[leveloffset=+1]
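The new assembly notes that the default configuration can be modified to suit your requirements. Purely as an illustrative sketch (not part of this commit; the added `otlp` receiver is an assumption about what a customization might look like, and receiver availability depends on the collector image you deploy), a modified `OpenTelemetryCollector` `config` block could accept OTLP traffic alongside Jaeger and route both receivers into the same pipeline:

[source,yaml]
----
config: |
  receivers:
    jaeger:
      protocols:
        grpc:
    otlp:                  # assumed customization, not in this commit
      protocols:
        grpc:
  exporters:
    logging:
  service:
    pipelines:
      traces:
        receivers: [jaeger, otlp]
        exporters: [logging]
----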

distr_tracing/distr_tracing_install/distr-tracing-installing.adoc

Lines changed: 2 additions & 0 deletions
@@ -36,6 +36,8 @@ include::modules/distr-tracing-install-elasticsearch.adoc[leveloffset=+1]
 
 include::modules/distr-tracing-install-jaeger-operator.adoc[leveloffset=+1]
 
+include::modules/distr-tracing-install-otel-operator.adoc[leveloffset=+1]
+
 ////
 == Next steps
 * xref:../../distr_tracing/distr_tracing_install/distr-tracing-deploying.adoc#deploying-distributed-tracing[Deploy {DTProductName}].

modules/distr-tracing-deploy-default.adoc

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@ Follow this procedure to create an instance of {JaegerShortName} from the comman
 +
 [source,terminal]
 ----
-$ oc login https://{HOSTNAME}:8443
+$ oc login https://<HOSTNAME>:8443
 ----
 
 . Create a new project named `tracing-system`.
modules/distr-tracing-deploy-otel-collector.adoc

Lines changed: 127 additions & 0 deletions

@@ -0,0 +1,127 @@
+////
+This module included in the following assemblies:
+- distr_tracing_install/distr-tracing-deploying.adoc
+////
+
+[id="distr-tracing-deploy-otel-collector_{context}"]
+= Deploying distributed tracing data collection
+
+The custom resource definition (CRD) defines the configuration used when you deploy an instance of {OTELName}.
+
+.Prerequisites
+
+* The {OTELName} Operator has been installed.
+//* You have reviewed the instructions for how to customize the deployment.
+* You have access to the cluster as a user with the `cluster-admin` role.
+
+.Procedure
+
+. Log in to the OpenShift web console as a user with the `cluster-admin` role.
+
+. Create a new project, for example `tracing-system`.
++
+[NOTE]
+====
+If you are installing distributed tracing as part of Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource, for example `istio-system`.
+====
++
+.. Navigate to *Home* -> *Projects*.
+
+.. Click *Create Project*.
+
+.. Enter `tracing-system` in the *Name* field.
+
+.. Click *Create*.
+
+. Navigate to *Operators* -> *Installed Operators*.
+
+. If necessary, select `tracing-system` from the *Project* menu. You might have to wait a few moments for the Operators to be copied to the new project.
+
+. Click the *{OTELName} Operator*. On the *Details* tab, under *Provided APIs*, the Operator provides a single link.
+
+. Under *OpenTelemetryCollector*, click *Create Instance*.
+
+. On the *Create OpenTelemetry Collector* page, to install using the defaults, click *Create* to create the {OTELShortName} instance.
+
+. On the *OpenTelemetryCollectors* page, click the name of the {OTELShortName} instance, for example, `opentelemetrycollector-sample`.
+
+. On the *Details* page, click the *Resources* tab. Wait until the pod has a status of "Running" before continuing.
+
+[id="distr-tracing-deploy-otel-collector-cli_{context}"]
+= Deploying {OTELShortName} from the CLI
+
+Follow this procedure to create an instance of {OTELShortName} from the command line.
+
+.Prerequisites
+
+* The {OTELName} Operator has been installed and verified.
++
+//* You have reviewed the instructions for how to customize the deployment.
++
+* You have access to the OpenShift CLI (`oc`) that matches your {product-title} version.
+* You have access to the cluster as a user with the `cluster-admin` role.
+
+.Procedure
+
+. Log in to the {product-title} CLI as a user with the `cluster-admin` role.
++
+[source,terminal]
+----
+$ oc login https://<HOSTNAME>:8443
+----
+
+. Create a new project named `tracing-system`.
++
+[source,terminal]
+----
+$ oc new-project tracing-system
+----
+
+. Create a custom resource file named `opentelemetrycollector.yaml` that contains the following text:
++
+.Example opentelemetrycollector.yaml
+[source,yaml]
+----
+apiVersion: opentelemetry.io/v1alpha1
+kind: OpenTelemetryCollector
+metadata:
+  name: opentelemetrycollector-sample
+  namespace: openshift-operators
+spec:
+  image: >-
+    registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:61934ea5793c55900d09893e8f8b1f2dbd2e712faba8e97684e744691b29f25e
+  config: |
+    receivers:
+      jaeger:
+        protocols:
+          grpc:
+    exporters:
+      logging:
+    service:
+      pipelines:
+        traces:
+          receivers: [jaeger]
+          exporters: [logging]
+----
+
+. Run the following command to deploy {OTELShortName}:
++
+[source,terminal]
----
+$ oc create -n tracing-system -f opentelemetrycollector.yaml
+----
+
+. Run the following command to watch the progress of the pods during the installation process:
++
+[source,terminal]
+----
+$ oc get pods -n tracing-system -w
+----
++
+After the installation process has completed, you should see output similar to the following example:
++
+[source,terminal]
+----
+NAME                                     READY   STATUS    RESTARTS   AGE
+opentelemetrycollector-cdff7897b-qhfdx   2/2     Running   0          24s
+----
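As an optional verification sketch that is not part of this commit: because the default pipeline ends in the `logging` exporter, you could confirm that spans received over the Jaeger gRPC receiver are flowing by tailing the collector pod logs. The pod name below is taken from the example output above; substitute whatever `oc get pods -n tracing-system` reports in your cluster.

[source,terminal]
----
$ oc logs -n tracing-system opentelemetrycollector-cdff7897b-qhfdx --tail=50
----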

modules/distr-tracing-deploy-production-es.adoc

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@ Follow this procedure to create an instance of {JaegerShortName} from the comman
 +
 [source,terminal]
 ----
-$ oc login https://{HOSTNAME}:8443
+$ oc login https://<HOSTNAME>:8443
 ----
 
 . Create a new project named `tracing-system`.

modules/distr-tracing-deploy-streaming.adoc

Lines changed: 1 addition & 1 deletion
@@ -112,7 +112,7 @@ Procedure
 +
 [source,terminal]
 ----
-$ oc login https://{HOSTNAME}:8443
+$ oc login https://<HOSTNAME>:8443
 ----
 
 . Create a new project named `tracing-system`.

modules/distr-tracing-install-jaeger-operator.adoc

Lines changed: 3 additions & 1 deletion
@@ -47,4 +47,6 @@ The *Manual* approval strategy requires a user with appropriate credentials to a
 
 . Click *Install*.
 
-. On the *Subscription Overview* page, select the `openshift-operators` project. Wait until you see that the {JaegerName} Operator shows a status of "InstallSucceeded" before continuing.
+. Navigate to *Operators* -> *Installed Operators*.
+
+. On the *Installed Operators* page, select the `openshift-operators` project. Wait until you see that the {JaegerName} Operator shows a status of "Succeeded" before continuing.
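If you prefer to confirm the Operator status from the command line instead of the *Installed Operators* page, one equivalent check (not part of this commit) is to list the ClusterServiceVersions in the `openshift-operators` project and look for the `Succeeded` phase:

[source,terminal]
----
$ oc get csv -n openshift-operators
----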

modules/distr-tracing-install-otel-operator.adoc

Lines changed: 13 additions & 7 deletions
@@ -6,7 +6,12 @@ This module included in the following assemblies:
 [id="distr-tracing-otel-operator-install_{context}"]
 = Installing the {OTELName} Operator
 
-#TECH PREVIEW BOILERPLATE HERE#
+[IMPORTANT]
+====
+The {OTELName} Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
+These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
+For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
+====
 
 To install {OTELName}, you use the link:https://operatorhub.io/[OperatorHub] to install the {OTELName} Operator.
 

@@ -15,7 +20,6 @@ By default, the Operator is installed in the `openshift-operators` project.
 .Prerequisites
 * You have access to the {product-title} web console.
 * You have access to the cluster as a user with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
-//* If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the {OTELName} Operator.
 
 [WARNING]
 ====

@@ -28,17 +32,17 @@ Do not install Community versions of the Operators. Community Operators are not
 
 . Navigate to *Operators* -> *OperatorHub*.
 
-. Type *distributing tracing datacollection* into the filter to locate the {OTELName} Operator.
+. Type *distributed tracing data collection* into the filter to locate the {OTELName} Operator.
 
 . Click the *{OTELName} Operator* provided by Red Hat to display information about the Operator.
 
 . Click *Install*.
 
-. On the *Install Operator* page, select the *stable* Update Channel. This automatically updates your Operator as new versions are released.
+. On the *Install Operator* page, accept the default *stable* Update channel. This automatically updates your Operator as new versions are released.
 
-. Select *All namespaces on the cluster (default)*. This installs the Operator in the default `openshift-operators` project and makes the Operator available to all projects in the cluster.
+. Accept the default *All namespaces on the cluster (default)*. This installs the Operator in the default `openshift-operators` project and makes the Operator available to all projects in the cluster.
 
-* Select an approval srategy. You can select *Automatic* or *Manual* updates. If you choose *Automatic* updates for an installed Operator, when a new version of that Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select *Manual* updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
+. Accept the default *Automatic* approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select *Manual* updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
 +
 [NOTE]
 ====

@@ -48,4 +52,6 @@ The *Manual* approval strategy requires a user with appropriate credentials to a
 
 . Click *Install*.
 
-. On the *Subscription Overview* page, select the `openshift-operators` project. Wait until you see that the {OTELName} Operator shows a status of "InstallSucceeded" before continuing.
+. Navigate to *Operators* -> *Installed Operators*.
+
+. On the *Installed Operators* page, select the `openshift-operators` project. Wait until you see that the {OTELName} Operator shows a status of "Succeeded" before continuing.
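Where the web console is unavailable, a rough CLI counterpart to this OperatorHub flow (not part of this commit) is to create an OLM `Subscription` resource. The package `name` and catalog `source` below are assumptions about how the Operator is published; verify them with `oc get packagemanifests -n openshift-marketplace` before applying the manifest.

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product
  namespace: openshift-operators
spec:
  channel: stable                         # default channel named in the procedure above
  installPlanApproval: Automatic          # matches the default approval strategy
  name: opentelemetry-product             # assumed package name; confirm with oc get packagemanifests
  source: redhat-operators                # assumed catalog source
  sourceNamespace: openshift-marketplace
----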
