This repository was archived by the owner on Sep 2, 2025. It is now read-only.

Commit 0ea8b77
fix-k8s-entity-styling
1 parent 4a08836

8 files changed, 17 additions and 17 deletions
gdi/opentelemetry/automatic-discovery/k8s/k8s-backend.rst

Lines changed: 3 additions & 3 deletions

@@ -143,8 +143,8 @@ To properly ingest trace telemetry data, the attribute ``deployment.environment`
 * - Through the values.yaml file and ``instrumentation.env`` or ``instrumentation.{instrumentation_library}.env`` configuration
 - Allows you to set ``deployment.environment`` either for all auto-instrumented applications collectively or per auto-instrumentation language.
 - Add the ``OTEL_RESOURCE_ATTRIBUTES`` environment variable, setting its value to ``deployment.environment=prd``.
-* - Through your Kubernetes application deployment, daemonset, or pod specification
-- Allows you to set ``deployment.environment`` at the level of individual deployments, daemonsets, or pods.
+* - Through your Kubernetes application deployment, DaemonSet, or pod specification
+- Allows you to set ``deployment.environment`` at the level of individual deployments, DaemonSets, or pods.
 - Employ the ``OTEL_RESOURCE_ATTRIBUTES`` environment variable, assigning the value ``deployment.environment=prd``.

 The following examples show how to set the attribute using each method:

@@ -283,7 +283,7 @@ The instrumentation in the collector namespace must include the following:
 Set annotations to instrument applications
 ==============================================================

-If the related Kubernetes object (deployment, daemonset, or pod) is not deployed, add the ``instrumentation.opentelemetry.io/inject-java`` annotation to the application object YAML.
+If the related Kubernetes object (deployment, DaemonSet, or pod) is not deployed, add the ``instrumentation.opentelemetry.io/inject-java`` annotation to the application object YAML.

 The annotation you set depends on the language runtime you're using. You can set multiple annotations in the same Kubernetes object. See the following available annotations:
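Both changes in this file concern the same kind of object specification. As a hedged illustration (the ``my-app`` name and image are hypothetical, not from the source), a Deployment that sets ``deployment.environment`` through ``OTEL_RESOURCE_ATTRIBUTES`` and opts into Java auto-instrumentation via the annotation could look like:

```yaml
# Hypothetical application Deployment: name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Requests Java auto-instrumentation from the instrumentation object
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            # Sets deployment.environment for this workload's telemetry
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: deployment.environment=prd
```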

gdi/opentelemetry/collector-kubernetes/kubernetes-config-add.rst

Lines changed: 2 additions & 2 deletions

@@ -67,7 +67,7 @@ This example shows how to add the :ref:`mysql-receiver` to your configuration fi
 Add the MySQL receiver in the ``agent`` section
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-To use the Collector agent daemonset to collect ``mysql`` metrics from every node the agent is deployed to, add this to your configuration:
+To use the Collector agent DaemonSet to collect ``mysql`` metrics from every node the agent is deployed to, add this to your configuration:

 .. code:: yaml

@@ -100,7 +100,7 @@ This example shows how to add the :ref:`rabbitmq` integration to your configurat
 Add RabbitMQ in the ``agent`` section
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If you want to activate the RabbitMQ monitor in the Collector agent daemonset, add ``mysql`` to the ``receivers`` section of your agent section in the configuration file:
+If you want to activate the RabbitMQ monitor in the Collector agent DaemonSet, add ``mysql`` to the ``receivers`` section of your agent section in the configuration file:

 .. code:: yaml
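The YAML bodies of the ``.. code:: yaml`` blocks fall outside these hunks. As a sketch only (endpoint and credentials are placeholders; verify the exact keys against the Helm chart's values.yaml reference), adding the ``mysql`` receiver to the agent section might look like:

```yaml
# Sketch: mysql receiver in the Helm chart's agent section (placeholder values).
agent:
  config:
    receivers:
      mysql:
        endpoint: localhost:3306          # placeholder endpoint
        username: otel                    # placeholder credentials
        password: ${env:MYSQL_PASSWORD}
        collection_interval: 10s
    service:
      pipelines:
        metrics:
          receivers: [mysql]
```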

gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst

Lines changed: 1 addition & 1 deletion

@@ -432,7 +432,7 @@ By default, data is persisted in the ``/var/addon/splunk/exporter_queue`` direct

 Check the :new-page:`Data Persistence in the OpenTelemetry Collector <https://community.splunk.com/t5/Community-Blog/Data-Persistence-in-the-OpenTelemetry-Collector/ba-p/624583>` for a detailed explantion.

-.. note:: Data can only be persisted for agent daemonsets.
+.. note:: Data can only be persisted for agent DaemonSets.

 Config examples
 -----------------------------------------------------------------------------
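For context, queue persistence in the Collector is built on the ``file_storage`` extension. A minimal sketch follows; the ``signalfx`` exporter is an assumption for illustration, and the keys follow the upstream exporterhelper options:

```yaml
# Sketch: persist the exporter sending queue to disk (exporter choice assumed).
extensions:
  file_storage:
    directory: /var/addon/splunk/exporter_queue
exporters:
  signalfx:
    sending_queue:
      storage: file_storage   # queue survives Collector restarts
service:
  extensions: [file_storage]
```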

gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst

Lines changed: 2 additions & 2 deletions

@@ -86,7 +86,7 @@ Search for "Autopilot overview" on the :new-page:`Google Cloud documentation sit

 .. note:: GKE Autopilot doesn't support native OpenTelemetry logs collection.

-The Collector agent daemonset can have problems scheduling in Autopilot mode. If this happens, do the following to assign the daemonset a higher priority class to ensure that the daemonset pods are always present on each node:
+The Collector agent DaemonSet can have problems scheduling in Autopilot mode. If this happens, do the following to assign the DaemonSet a higher priority class to ensure that the DaemonSet pods are always present on each node:

 1. Create a new priority class for the Collector agent:

@@ -134,7 +134,7 @@ To run the Collector in the Amazon EKS with Fargate profiles, set the required `

 This distribution operates similarly to the ``eks`` distribution, but with the following distinctions:

-* The Collector agent daemonset is not applied since Fargate does not support daemonsets. Any desired Collector instances running as agents must be configured manually as sidecar containers in your custom deployments. This includes any application logging services like Fluentd. Set ``gateway.enabled`` to ``true`` and configure your instrumented applications to report metrics, traces, and logs to the gateway ``<installed-chart-name>-splunk-otel-collector`` service address. Any desired agent instances that would run as a daemonset should instead run as sidecar containers in your pods.
+* The Collector agent DaemonSet is not applied since Fargate does not support DaemonSets. Any desired Collector instances running as agents must be configured manually as sidecar containers in your custom deployments. This includes any application logging services like Fluentd. Set ``gateway.enabled`` to ``true`` and configure your instrumented applications to report metrics, traces, and logs to the gateway ``<installed-chart-name>-splunk-otel-collector`` service address. Any desired agent instances that would run as a DaemonSet should instead run as sidecar containers in your pods.
 * Since Fargate nodes use a VM boundary to prevent access to host-based resources used by other pods, pods are not able to reach their own kubelet. The cluster receiver for the Fargate distribution has two primary differences between regular ``eks`` to work around this limitation:
 * The configured cluster receiver is deployed as a two-replica StatefulSet instead of a Deployment, and uses a Kubernetes Observer extension that discovers the cluster's nodes and, on the second replica, its pods for user-configurable receiver creator additions.Using this observer dynamically creates the Kubelet Stats receiver instances that report kubelet metrics for all observed Fargate nodes. The first replica monitors the cluster with a ``k8s_cluster`` receiver, and the second cluster monitors all kubelets except its own (due to an EKS/Fargate networking restriction).
 * The first replica's Collector monitors the second's kubelet. This is made possible by a Fargate-specific ``splunk-otel-eks-fargate-kubeletstats-receiver-node`` node label. The Collector ClusterRole for ``eks/fargate`` allows the ``patch`` verb on ``nodes`` resources for the default API groups to allow the cluster receiver's init container to add this node label for designated self monitoring.
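Step 1 of the Autopilot instructions above creates a PriorityClass; a minimal sketch (the name, value, and description are illustrative, not from the source) is:

```yaml
# Illustrative PriorityClass for the Collector agent DaemonSet.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: splunk-otel-agent-priority   # illustrative name
value: 1000000                       # higher values preempt lower-priority pods
globalDefault: false
description: Keeps Collector agent pods scheduled on every Autopilot node.
```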

gdi/opentelemetry/collector-kubernetes/kubernetes-upgrade.rst

Lines changed: 3 additions & 3 deletions

@@ -80,13 +80,13 @@ To update the access token for your Collector for Kubernetes instance follow the

 helm get values <Release_Name>

-5. Restart the Collector's daemonset and deployments:
+5. Restart the Collector's DaemonSet and deployments:

-* If ``agent.enabled=true``, restart the Collector's agent daemonset:
+* If ``agent.enabled=true``, restart the Collector's agent DaemonSet:

 .. code-block:: bash

-kubectl rollout restart daemonset <Release_Name>-agent
+kubectl rollout restart DaemonSet <Release_Name>-agent

 * If ``clusterReceiver.enabled=true``, restart the Collector's cluster receiver deployment:

gdi/opentelemetry/collector-kubernetes/metrics-ootb-k8s.rst

Lines changed: 2 additions & 2 deletions

@@ -183,7 +183,7 @@ Pod level metrics and dimensions
 - Exported?

 * - ``k8s.cronjob.active_jobs``
-- Active cronjob jobs
+- Active CronJob jobs
 -
 - Yes

@@ -208,7 +208,7 @@ Pod level metrics and dimensions
 - Yes

 * - ``k8s.job.successful_pods``
-- Succesful pod jobs
+- Successful pod jobs
 -
 - Yes

gdi/opentelemetry/components/kubelet-stats-receiver.rst

Lines changed: 3 additions & 3 deletions

@@ -115,7 +115,7 @@ The following example shows how to configure the ``kubeletstats`` receiver with
 receivers: [kubeletstats]
 exporters: [file]

-.. caution:: A missing or empty ``endpoint`` value causes the host name on which the Collector is running to be used as the endpoint. If the ``hostNetwork`` flag is set, and the Collector is running in a Pod, the host name resolves to the node's network namespace.
+.. caution:: A missing or empty ``endpoint`` value causes the host name on which the Collector is running to be used as the endpoint. If the ``hostNetwork`` flag is set, and the Collector is running in a pod, the host name resolves to the node's network namespace.

 Advanced use cases
 ==================================================================

@@ -142,7 +142,7 @@ By default, all produced metrics get resource attributes based on what kubelet t
 The kubelet stats receiver supports the following metadata:

 - ``container.id``: Enriches metric metadata with the Container ID label obtained from container statuses exposed using ``/pods``.
-- ``k8s.volume.type``: Collects the volume type from the Pod spec exposed using ``/pods`` and add it as an attribute to volume metrics. If more metadata than the volume type is available, the receiver syncs it depending on the available fields and the type of volume. For example, ``aws.volume.id`` is synced from ``awsElasticBlockStore`` and ``gcp.pd.name`` is synced from ``gcePersistentDisk``.
+- ``k8s.volume.type``: Collects the volume type from the pod spec exposed using ``/pods`` and add it as an attribute to volume metrics. If more metadata than the volume type is available, the receiver syncs it depending on the available fields and the type of volume. For example, ``aws.volume.id`` is synced from ``awsElasticBlockStore`` and ``gcp.pd.name`` is synced from ``gcePersistentDisk``.

 To add the ``container.id`` label to your metrics, set the ``extra_metadata_labels`` field. For example:

@@ -177,7 +177,7 @@ When dealing with persistent volume claims, you can sync metadata from the under
 k8s_api_config:
 auth_type: serviceAccount

-If ``k8s_api_config`` is set, the receiver attempts to collect metadata from underlying storage resources for persistent volume claims. For example, if a Pod is using a persistent volume claim backed by an Elastic Block Store (EBS) instance on AWS, the receiver sets the ``k8s.volume.type`` label to ``awsElasticBlockStore`` rather than ``persistentVolumeClaim``.
+If ``k8s_api_config`` is set, the receiver attempts to collect metadata from underlying storage resources for persistent volume claims. For example, if a pod is using a persistent volume claim backed by an Elastic Block Store (EBS) instance on AWS, the receiver sets the ``k8s.volume.type`` label to ``awsElasticBlockStore`` rather than ``persistentVolumeClaim``.

 Configure metric groups
 --------------------------------------------------------------
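Taken together, the metadata options discussed in this file's changes fit into a single receiver configuration. The following sketch assumes a node-IP endpoint (an assumption, per the caution above about omitted endpoints); ``extra_metadata_labels`` and ``k8s_api_config`` are the fields named in the text:

```yaml
# Sketch: kubeletstats with both metadata options described above.
receivers:
  kubeletstats:
    collection_interval: 10s
    auth_type: serviceAccount
    endpoint: ${env:K8S_NODE_IP}:10250   # assumed; if omitted, the host name is used
    extra_metadata_labels:
      - container.id      # enrich metrics with container IDs
      - k8s.volume.type   # attach volume-type metadata to volume metrics
    k8s_api_config:
      auth_type: serviceAccount   # also resolve PVC-backed volume types
```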

gdi/opentelemetry/components/kubernetes-attributes-processor.rst

Lines changed: 1 addition & 1 deletion

@@ -135,7 +135,7 @@ The following example shows how to give a ServiceAccount the necessary permissio
 Discovery filters
 -------------------------------------

-You can use the Kubernetes attributes processor in Collectors deployed either as agents or as gateways, using DaemonSets or Deployments respectively. See :ref:`otel-deployment-mode` for more information.
+You can use the Kubernetes attributes processor in Collectors deployed either as agents or as gateways, using DaemonSets or deployments respectively. See :ref:`otel-deployment-mode` for more information.

 Agent configuration
 ^^^^^^^^^^^^^^^^^^^^^^^^^
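To illustrate the agent-versus-gateway distinction in the changed sentence: in agent (DaemonSet) mode the processor is commonly filtered to the Collector's own node, for example:

```yaml
# Agent (DaemonSet) mode: watch only pods on the Collector's own node.
processors:
  k8sattributes:
    filter:
      node_from_env_var: K8S_NODE_NAME   # node name injected via the downward API
```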
