Commit 7aaeadb

Merge pull request #2484 from splunk/repo-sync
Pulling refs/heads/main into main
2 parents 154a9ac + f20434a commit 7aaeadb


50 files changed: +117350 −76265 lines

gdi/get-data-in/connect/gcp/gcp-connect.rst

Lines changed: 11 additions & 4 deletions
@@ -98,8 +98,17 @@ Your GCP integration is now complete.

 .. note:: Splunk is not responsible for data availability, and it can take up to several minutes (or longer, depending on your configuration) from the time you connect until you start seeing valid data from your account.

-Options
-++++++++
+Using a single principal for your resources
+++++++++++++++++++++++++++++++++++++++++++++++++
+
+In IAM you can grant access to your resources to one or more entities called principals, regardless of the authentication method (single Service Account or Workload Identity Federation).
+
+If you're using a single principal for multiple projects, GCP tracks all API usage quota in the project where the principal originates, which can result in throttling in your integration. To mitigate this, select :strong:`Use quota from the project where metrics are stored`. To use this option, the principal provided for the project needs either the ``serviceusage.services.use`` permission or the Service Usage Consumer role.
+
+For a more detailed description, see :new-page:`Principals <https://cloud.google.com/iam/docs/overview#concepts_related_identity>` in GCP's documentation.
+
+Other options
+++++++++++++++++

 Optionally you can:

@@ -111,8 +120,6 @@ Optionally you can:

 * If you select Compute Engine as one of the services to monitor, you can enter a comma-separated list of Compute Engine Instance metadata keys to send as properties. These metadata keys are sent as properties named ``gcp_metadata_<metadata-key>``.

-* Select :strong:`Use quota from the project where metrics are stored` to use a quota from the project where metrics are stored. The service account provided for the project needs either the ``serviceusage.services.use`` permission, or the `Service Usage Consumer` role.
-
 Alternatives to connect to GCP
 ============================================
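As a hedged illustration of the permission requirement described in the new text above: one way to give a service account principal the Service Usage Consumer role (which contains ``serviceusage.services.use``) is a gcloud IAM policy binding. ``PROJECT_ID`` and ``SA_NAME`` are placeholders, and principals coming from Workload Identity Federation use a different ``--member`` format.

.. code-block:: bash

   # Sketch only: grant the Service Usage Consumer role to a service account.
   # PROJECT_ID and SA_NAME are placeholders for your own values.
   gcloud projects add-iam-policy-binding PROJECT_ID \
       --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
       --role="roles/serviceusage.serviceUsageConsumer"

Grant the role in the project whose quota you intend the integration to draw from.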

gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst

Lines changed: 0 additions & 29 deletions
@@ -503,32 +503,3 @@ Cluster Receiver support
 The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because the Kubernetes control plane can select any available node to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts don't work for such environments.

 Data persistence is currently not applicable to the Kubernetes cluster metrics and Kubernetes events.
-
-Monitor OpenShift infrastructure nodes
-============================================
-
-By default, the Splunk Distribution of OpenTelemetry Collector for Kubernetes doesn't collect data from OpenShift infrastructure nodes.
-
-You can customize the Collector Helm Chart file to activate data collection from OpenShift infrastructure nodes. To do so, complete the following steps:
-
-#. Open your values.yaml file for the Helm Chart.
-#. Copy and paste the following YAML snippet into the values.yaml file:
-
-   .. code-block:: yaml
-
-      tolerations:
-      - key: node-role.kubernetes.io/master
-        effect: NoSchedule
-      - key: node-role.kubernetes.io/control-plane
-        effect: NoSchedule
-      - key: node-role.kubernetes.io/infra
-        effect: NoSchedule
-        operator: Exists
-
-#. Install the Collector using the Helm Chart:
-
-   .. code-block:: bash
-
-      helm install my-splunk-otel-collector --values values.yaml splunk-otel-collector-chart/splunk-otel-collector
-
-.. note:: Monitoring OpenShift infrastructure nodes might pose a security risk depending on which method you used to create the Kubernetes environment.
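The ``clusterReceiver.nodeSelector`` workaround mentioned in the retained context above can be sketched as a values.yaml fragment. This is a minimal, non-authoritative example; the node label value is hypothetical and must match a label on the node that holds the volume.

.. code-block:: yaml

   # Sketch: pin the 1-replica cluster receiver pod to a specific node so that a
   # hostPath or local volume mount on that node can be used for persistence.
   clusterReceiver:
     nodeSelector:
       kubernetes.io/hostname: my-storage-node   # placeholder node name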

gdi/opentelemetry/collector-linux/collector-configuration-tutorial/collector-config-tutorial-edit.rst

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ Adding an empty entry like in the previous example is sometimes enough to get st
       listen_address: "0.0.0.0:54526"
       protocol: rfc5424

-After you've added the Syslog receiver, make sure to add it to the receivers's list under ``service.pipelines``. In this case, the pipeline type is ``logs``, since the Syslog receiver collect logs:
+After you've added the Syslog receiver, make sure to add it to the receivers list under ``service.pipelines``. In this case, the pipeline type is ``logs``, since the Syslog receiver collects logs:

 .. code-block:: yaml
    :emphasize-lines: 8
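For readers following the tutorial, the corrected sentence refers to a wiring step like the following sketch. It is illustrative only: the processor and exporter names are placeholders for whatever the existing ``logs`` pipeline already lists.

.. code-block:: yaml

   service:
     pipelines:
       logs:
         receivers: [syslog]      # add syslog alongside any receivers already listed
         processors: [batch]      # placeholder: keep your existing processors
         exporters: [splunk_hec]  # placeholder: keep your existing exporters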

gdi/opentelemetry/components/host-metrics-receiver.rst

Lines changed: 5 additions & 0 deletions
@@ -11,6 +11,7 @@ The host metrics receiver generates metrics scraped from host systems when the C

 By default, the host metrics receiver is activated in the Splunk Distribution of OpenTelemetry Collector and collects the following metrics:

+- System metrics
 - CPU usage metrics
 - Disk I/O metrics
 - CPU load metrics

@@ -91,6 +92,10 @@ Scrapers extract data from endpoints and then send that data to a specified targ
   - Description
   -

+  - ``system``
+  - System metrics
+  -
+
   - ``cpu``
   - CPU utilization metrics
   -
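To show where the new ``system`` scraper row fits in practice, here is a minimal, assumed configuration sketch. The scraper selection and collection interval are illustrative, not the distribution's defaults.

.. code-block:: yaml

   receivers:
     hostmetrics:
       collection_interval: 10s   # illustrative value
       scrapers:
         system:   # system metrics (the new row in the table above)
         cpu:      # CPU utilization metrics
         disk:     # disk I/O metrics
         load:     # CPU load metrics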
Binary file changed (21.3 KB, not shown)
