////
This module is included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying.adoc
////

[id="distr-tracing-deploy-otel-collector_{context}"]
= Deploying distributed tracing data collection

The custom resource definition (CRD) defines the configuration used when you deploy an instance of {OTELName}.

.Prerequisites

* The {OTELName} Operator has been installed.
//* You have reviewed the instructions for how to customize the deployment.
* You have access to the cluster as a user with the `cluster-admin` role.

.Procedure

. Log in to the OpenShift web console as a user with the `cluster-admin` role.

. Create a new project, for example `tracing-system`.
+
[NOTE]
====
If you are installing distributed tracing as part of Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource, for example `istio-system`.
====
+
.. Navigate to *Home* -> *Projects*.

.. Click *Create Project*.

.. Enter `tracing-system` in the *Name* field.

.. Click *Create*.

. Navigate to *Operators* -> *Installed Operators*.

. If necessary, select `tracing-system` from the *Project* menu. You might have to wait a few moments for the Operators to be copied to the new project.

. Click the *{OTELName} Operator*. On the *Details* tab, under *Provided APIs*, the Operator provides a single link.

. Under *OpenTelemetryCollector*, click *Create Instance*.

. On the *Create OpenTelemetry Collector* page, to install using the defaults, click *Create* to create the {OTELShortName} instance.

. On the *OpenTelemetryCollectors* page, click the name of the {OTELShortName} instance, for example, `opentelemetrycollector-sample`.

. On the *Details* page, click the *Resources* tab. Wait until the pod has a status of "Running" before continuing.
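
You can also check the pod status from the command line. A quick sketch, assuming you created the instance in the `tracing-system` project:

[source,terminal]
----
$ oc get pods -n tracing-system
----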
[id="distr-tracing-deploy-otel-collector-cli_{context}"]
= Deploying {OTELShortName} from the CLI

Follow this procedure to create an instance of {OTELShortName} from the command line.

.Prerequisites

* The {OTELName} Operator has been installed and verified.
//* You have reviewed the instructions for how to customize the deployment.
* You have access to the OpenShift CLI (`oc`) that matches your {product-title} version.
* You have access to the cluster as a user with the `cluster-admin` role.

.Procedure

. Log in to the {product-title} CLI as a user with the `cluster-admin` role.
+
[source,terminal]
----
$ oc login https://<HOSTNAME>:8443
----

. Create a new project named `tracing-system`.
+
[source,terminal]
----
$ oc new-project tracing-system
----

. Create a custom resource file named `opentelemetrycollector.yaml` that contains the following text:
+
.Example opentelemetrycollector.yaml
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: opentelemetrycollector-sample
  namespace: tracing-system
spec:
  image: >-
    registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:61934ea5793c55900d09893e8f8b1f2dbd2e712faba8e97684e744691b29f25e
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          exporters: [logging]
----
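
The example pipeline receives spans over the Jaeger gRPC protocol and writes them to the collector log. If your applications export telemetry in the OTLP format instead, the receiver section can be swapped accordingly. This variant is a sketch and assumes the OTLP receiver is available in your collector image:

[source,yaml]
----
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
----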

. Run the following command to deploy {OTELShortName}:
+
[source,terminal]
----
$ oc create -n tracing-system -f opentelemetrycollector.yaml
----

. Run the following command to watch the progress of the pods during the installation process:
+
[source,terminal]
----
$ oc get pods -n tracing-system -w
----
+
After the installation process has completed, you should see output similar to the following example:
+
[source,terminal]
----
NAME                                     READY   STATUS    RESTARTS   AGE
opentelemetrycollector-cdff7897b-qhfdx   2/2     Running   0          24s
----
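
To confirm that the `logging` exporter is emitting telemetry, you can view the collector log. The deployment name below is an assumption based on the Operator's `<name>-collector` naming convention:

[source,terminal]
----
$ oc logs -n tracing-system deployment/opentelemetrycollector-sample-collector
----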