diff --git a/_images/gdi/gdi-onboarding-diagram.png b/_images/gdi/gdi-onboarding-diagram.png new file mode 100644 index 000000000..627a2320f Binary files /dev/null and b/_images/gdi/gdi-onboarding-diagram.png differ diff --git a/_includes/requirements/collector-linux.rst b/_includes/requirements/collector-linux.rst index 034ec1302..636ad4b47 100644 --- a/_includes/requirements/collector-linux.rst +++ b/_includes/requirements/collector-linux.rst @@ -1,10 +1,10 @@ The Collector supports the following Linux distributions and versions: -* Amazon Linux: 2, 2023. Log collection with Fluentd is not currently supported for Amazon Linux 2023. +* Amazon Linux: 2, 2023. * CentOS: 7, 8, 9 * Red Hat: 7, 8, 9 * Oracle: 8, 9 * Debian: 11, 12 -* SUSE: 12, 15 for version 0.34.0 or higher. Log collection with Fluentd is not currently supported. +* SUSE: 12, 15 for version 0.34.0 or higher. * Ubuntu: 16.04, 18.04, 20.04, 22.04, and 24.04 * Rocky Linux: 8, 9 diff --git a/admin/authentication/allow-services.rst b/admin/authentication/allow-services.rst index 75aca7921..8db9187b1 100644 --- a/admin/authentication/allow-services.rst +++ b/admin/authentication/allow-services.rst @@ -216,9 +216,6 @@ If you're unable to allow all URLs as described in :ref:`allow-urls`, ensure tha # RUM ingest endpoint rum-ingest..signalfx.com/v1/rum - # For td-agent/Fluentd on Linux and Windows - packages.treasuredata.com - # For DEB/RPM collector packages splunk.jfrog.io jfrog-prod-use1-shared-virginia-main.s3.amazonaws.com diff --git a/apm/span-tags/metricsets.rst b/apm/span-tags/metricsets.rst index 76935697a..5b9d17c8f 100644 --- a/apm/span-tags/metricsets.rst +++ b/apm/span-tags/metricsets.rst @@ -9,172 +9,151 @@ Learn about MetricSets in APM MetricSets are key performance indicators, like request rate, error rate, and request duration, that are calculated from traces and spans in Splunk APM. 
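The request, error, and duration indicators named above can be made concrete with a small sketch. The following Python example is illustrative only: the span fields and the fixed 10-second window are assumptions for the sketch, not the Splunk APM span schema or implementation.

```python
# Hypothetical sketch: derive request rate, error rate, and a duration
# percentile (RED metrics) from a batch of spans. The span fields and the
# 10-second window are assumptions, not the Splunk APM span schema.
import math

def red_metrics(spans, window_seconds=10):
    count = len(spans)
    if count == 0:
        return {"request.rate": 0.0, "error.rate": 0.0, "duration.p90": None}
    errors = sum(1 for span in spans if span["error"])
    durations = sorted(span["duration_ns"] for span in spans)
    # Nearest-rank p90: the smallest duration covering 90% of observations.
    rank = max(math.ceil(0.9 * count) - 1, 0)
    return {
        "request.rate": count / window_seconds,  # requests per second
        "error.rate": errors / count,            # fraction of failed requests
        "duration.p90": durations[rank],         # nanoseconds
    }

spans = [
    {"duration_ns": ms * 1_000_000, "error": err}
    for ms, err in [(5, False), (7, False), (9, True), (12, False), (40, False)]
]
print(red_metrics(spans))
# → {'request.rate': 0.5, 'error.rate': 0.2, 'duration.p90': 40000000}
```

Splunk APM computes these indicators from ingested traces automatically; the sketch only shows what the three measurements represent.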
There are 2 categories of MetricSets: Troubleshooting MetricSets (TMS), used for high-cardinality troubleshooting, and Monitoring MetricSets (MMS), used for real-time monitoring. MetricSets are similar to the metric time series (MTS) used in Splunk Infrastructure Monitoring to populate charts and generate alerts. See :ref:`metric-time-series` to learn more. MetricSets are MTS that are specific to Splunk APM. -.. _troubleshooting-metricsets: - -Troubleshooting MetricSets -========================== - -Troubleshooting MetricSets (TMS) are metric time series (MTS) you can use for troubleshooting high-cardinality identities in APM. You can also use TMS to make historical comparisons across spans and workflows. - -Splunk APM indexes and creates Troubleshooting MetricSets for several span tags by default. For more details about each of these tags, see :ref:`apm-default-span-tags`. You can't modify or stop APM from indexing these span tags. - -You can also create custom TMS by indexing additional span tags and processes. To learn how to index span tags and processes to create new Troubleshooting MetricSets, see :ref:`apm-index-span-tags`. - -Available TMS metrics ------------------------ -Every TMS creates the following metrics, known as request, error, and duration (RED) metrics. RED metrics appear when you select a service in the service map. See :ref:`service-map` to learn more about using RED metrics in the service map. - -- Request rate -- Error rate -- Root cause error rate -- p50, p90, and p99 latency - -The measurement precision of Troubleshooting MetricSets is 10 seconds. Splunk APM reports quantiles from a distribution of metrics for each 10-second reporting window. - -Use TMS within Splunk APM ----------------------------------------- - -TMS appear on the service map and in Tag Spotlight. Use TMS to filter the service map and create breakdowns across the values of a given indexed span tag or process. - -See :ref:`apm-service-map` and :ref:`apm-tag-spotlight`. 
- -TMS retention period ------------------------------------ - -Splunk Observability Cloud retains TMS for the same amount of time as raw traces. By default, the retention period is 8 days. - -For more details about Troubleshooting MetricSets, see :ref:`apm-index-tag-tips`. - .. _monitoring-metricsets: Monitoring MetricSets ===================== -Monitoring MetricSets (MMS) are metric time series (MTS) that power the real-time monitoring capabilities in Splunk APM, including charts and dashboards. MMS power the APM landing page and the dashboard view. MMS are also the metrics that detectors monitor to generate alerts. +Monitoring MetricSets (MMS) are metric time series (MTS) that power the monitoring capabilities in Splunk APM, including charts and dashboards. MMS power the APM landing page and the dashboard view. MMS are also the metrics that detectors monitor to generate alerts. MMS are available for a specific endpoint or for the aggregate of all endpoints in a service. -Endpoint-level MMS reflect the activity of a single endpoint in a service, while service-level MMS aggregate the activity of all of the endpoints in the service. MMS are limited to spans where the ``span.kind`` has a value of ``SERVER`` or ``CONSUMER``. +Endpoint-level MMS reflect the activity of a single endpoint in a service, while service-level MMS aggregate the activity of all of the endpoints in the service. MMS are created for spans where the ``span.kind`` has a value of ``SERVER`` or ``CONSUMER``. Spans might lack a ``kind`` value, or have a different ``kind`` value, in the following situations: * The span originates in self-initiating operations or inferred services * An error in instrumentation occurs. -In addition to the following default MMS, you can create custom MMS. See :ref:`cmms`. +MMS retention period +----------------------------------- + +Splunk Observability Cloud stores MMS for 13 months by default. .. 
_default-mms: + Available default MMS metrics and dimensions ----------------------------------------------- -MMS are available for the following APM components: - -- service.request -- spans -- inferred.services -- traces -- workflows (Workflow metrics are created by default when you create a Business Workflow. Custom MMS are not available for Business Workflows.) - -Each MMS includes 6 metrics for each component. For histogram MMS, there is a single metric for each component. Use the histogram functions to access the specific histogram bucket you want to use. - -For each metric, there is 1 metric time series (MTS) with responses ``sf_error: true`` or ``sf_error: false``. - -.. list-table:: - :widths: 33 33 33 - :width: 100 - :header-rows: 1 - - * - Description - - Histogram MMS - - MMS (deprecated) - * - Request count - - ```` with a ``count`` function - - ``.count`` - * - Minimum request duration - - ```` with a ``min`` function - - ``.duration.ns.min`` - * - Maximum request duration - - ```` with a ``max`` function - - ``.duration.ns.max`` - * - Median request duration - - ```` with a ``median`` function - - ``.duration.ns.median`` - * - Percentile request duration - - ```` with a ``percentile`` function and a percentile ``value`` - - ``.duration.ns.p90`` - * - Percentile request duration - - ```` with a ``percentile`` function and a percentile ``value`` - - ``.duration.ns.p99`` - - -Each MMS has a set of dimensions you can use to monitor and alert on service performance. - -Deprecated non-histogram metrics ---------------------------------- -Histograms provide more flexibility and accuracy for your application performance data. If you are using any non-histogram metrics, use the equivalent histogram MMS. In the future, only histogram MMS will be used for monitoring in Splunk APM, including in charts and dashboards. For more information about histograms, see :ref:`histograms`. +MMS are available for the APM components listed in the following table. 
Each MMS also has a set of dimensions you can use to monitor and alert on service performance. In addition to the following default MMS, you can create custom MMS to deep dive on your MMS. See :ref:`cmms`. .. _service-mms: - -Service dimensions ---------------------------------- - -* ``sf_environment`` -* ``deployment.environment`` - This dimension is only available for histogram MMS. -* ``sf_service`` -* ``service.name`` - This dimension is only available for histogram MMS. -* ``sf_error`` - .. _inferred-service-mms-dimensions: - -Inferred service dimensions ------------------------------- - -* ``sf_service`` -* ``service.name`` - This dimension is only available for histogram MMS. -* ``sf_environment`` -* ``deployment.environment`` - This dimension is only available for histogram MMS. -* ``sf_error`` -* ``sf.kind`` - .. _endpoint-mms: -Span dimensions ----------------------------------------------- - -* ``sf_environment`` -* ``deployment.environment`` - This dimension is only available for histogram MMS. -* ``sf_service`` -* ``service.name`` - This dimension is only available for histogram MMS. -* ``sf_operation`` -* ``sf_kind`` -* ``sf_error`` -* ``sf_httpMethod``, where relevant - -Trace dimensions ---------------------------------- - -.. note:: Trace dimensions are not supported for custom MMS. +.. list-table:: + :widths: 33 33 33 + :width: 100 + :header-rows: 1 -* ``sf_environment`` -* ``deployment.environment`` - This dimension is only available for histogram MMS. -* ``sf_service`` -* ``service.name`` - This dimension is only available for histogram MMS. -* ``sf_operation`` -* ``sf_httpMethod`` -* ``sf_error`` + * - Metric name + - Dimensions + - Custom dimension available? (Yes/No) + * - ``service.request`` - the requests to endpoints in a service + - * ``sf_environment`` + * ``deployment.environment`` - This dimension is only available for histogram MMS. + * ``sf_service`` + * ``service.name`` - This dimension is only available for histogram MMS. 
+ * ``sf_error`` + - Yes + * - ``inferred.services`` - + - * ``sf_service`` + * ``service.name`` - This dimension is only available for histogram MMS. + * ``sf_environment`` + * ``deployment.environment`` - This dimension is only available for histogram MMS. + * ``sf_error`` + * ``sf.kind`` + * ``sf_operation`` + * ``sf_httpMethod`` + - No + * - ``spans`` - the count of spans (a single operation) + - * ``sf_environment`` + * ``deployment.environment`` - This dimension is only available for histogram MMS. + * ``sf_service`` + * ``service.name`` - This dimension is only available for histogram MMS. + * ``sf_operation`` + * ``sf_kind`` + * ``sf_error`` + * ``sf_httpMethod``, where relevant + - Yes + * - ``traces`` - the count of traces (collection of spans that represents a transaction) + - * ``sf_environment`` + * ``deployment.environment`` - This dimension is only available for histogram MMS. + * ``sf_service`` + * ``service.name`` - This dimension is only available for histogram MMS. + * ``sf_operation`` + * ``sf_httpMethod`` + * ``sf_error`` + - No + * - ``workflows`` - created by default when you create a business workflow + - * ``sf_environment`` + * ``deployment.environment`` - This dimension is only available for histogram MMS. + * ``sf_workflow`` + * ``sf_error`` + - No + +Monitoring MetricSets in APM are generated as histogram metrics. Histogram metrics represent a distribution of measurements or metrics, with complete percentile data available. Data is distributed into equally sized intervals, allowing you to compute percentiles across multiple services and aggregate datapoints from multiple metric time series. Histogram metrics provide an advantage over other metric types when calculating percentiles, such as the p90 percentile for a single MTS. See more in :ref:`metric-types`. For histogram MMS, there is a single metric for each component. + +Previously, MMS were classified as either a counter or gauge metric type. 
The previous MMS included 6 metrics for each component.

-Workflow dimensions
----------------------------------
+.. list-table::
+   :widths: 33 33 33
+   :width: 100
+   :header-rows: 1

-Workflow metrics and dimensions are created by default when you create a Business Workflow.
+   * - Description
+     - Histogram MMS
+     - MMS (deprecated)
+   * - Request count
+     - ``<metric name>`` with a ``count`` function
+     - ``<metric name>.count``
+   * - Minimum request duration
+     - ``<metric name>`` with a ``min`` function
+     - ``<metric name>.duration.ns.min``
+   * - Maximum request duration
+     - ``<metric name>`` with a ``max`` function
+     - ``<metric name>.duration.ns.max``
+   * - Median request duration
+     - ``<metric name>`` with a ``median`` function
+     - ``<metric name>.duration.ns.median``
+   * - Percentile request duration
+     - ``<metric name>`` with a ``percentile`` function and a percentile ``value``
+     - ``<metric name>.duration.ns.p90``
+   * - Percentile request duration
+     - ``<metric name>`` with a ``percentile`` function and a percentile ``value``
+     - ``<metric name>.duration.ns.p99``
+
+Example metrics in APM
+---------------------------------------------
+
+In SignalFlow, a histogram MTS uses the following syntax:
+
+.. code-block:: none
+
+   histogram(metric=<metric name>[,filter=<filter>][,resolution=<resolution>])
+
+The following table displays example SignalFlow functions:

-.. note:: Workflow dimensions are not supported for custom MMS.
+.. list-table::
+   :widths: 33 33 33
+   :width: 100
+   :header-rows: 1

-* ``sf_environment``
-* ``deployment.environment`` - This dimension is only available for histogram MMS.
-* ``sf_workflow`` -* ``sf_error`` + * - Description + - Histogram MMS + - Previous MMS (deprecated) + * - Aggregate count of all MTS + - ``A = histogram('spans').count().publish(label='A')`` + - ``A = data('spans.count').sum().publish(label='A')`` + * - P90 percentile for single MTS + - ``filter_ = filter('sf_environment', 'environment1') and filter('sf_service', 'service 1') and filter('sf_operation', 'operation1') and filter('sf_httpMethod', 'POST') and filter('sf_error', 'false') A = data('spans.duration.ns.p90', filter=filter_, rollup='sum').publish(label='A')`` + - ``filter_ = filter('sf_environment', 'us1') and filter('sf_service', 'service1') and filter('sf_operation', 'POST /api/autosuggest/tagvalues') and filter('sf_httpMethod', 'POST') and filter('sf_error', 'false') A = data('spans.duration.ns.p90', filter=filter_, rollup='sum').publish(label='A')`` + * - Combined p90 for multiple services + - ``A = histogram('service.request', filter=filter('sf_service', 'service 2', 'service 1')).percentile(pct=90).publish(label='A')`` + - ``A = data('service.request.duration.ns.p90', filter=filter('sf_service', 'service 2', 'service 1'), rollup='average').mean().publish(label='A')`` + +.. note:: Because an aggregation is applied on histogram(), to display all of the metric sets separately, each dimension needs to be applied as a groupby. Use MMS within Splunk APM ---------------------------------------- @@ -196,10 +175,41 @@ Use MMS for alerting and real-time monitoring in Splunk APM. You can create char * - Monitor services in APM dashboards - :ref:`Track service performance using dashboards in Splunk APM` -MMS retention period +.. _troubleshooting-metricsets: + +Troubleshooting MetricSets +========================== + +Troubleshooting MetricSets (TMS) are metric time series (MTS) you can use for troubleshooting high-cardinality identities in APM. You can also use TMS to make historical comparisons across spans and workflows. 
+ +Splunk APM indexes and creates Troubleshooting MetricSets for several span tags by default. For more details about each of these tags, see :ref:`apm-default-span-tags`. You can't modify or stop APM from indexing these span tags. + +You can also create custom TMS by indexing additional span tags and processes. To learn how to index span tags and processes to create new Troubleshooting MetricSets, see :ref:`apm-index-span-tags`. + +Available TMS metrics +----------------------- +Every TMS creates the following metrics, known as request, error, and duration (RED) metrics. RED metrics appear when you select a service in the service map. See :ref:`service-map` to learn more about using RED metrics in the service map. + +- Request rate +- Error rate +- Root cause error rate +- p50, p90, and p99 latency + +The measurement precision of Troubleshooting MetricSets is 10 seconds. Splunk APM reports quantiles from a distribution of metrics for each 10-second reporting window. + +Use TMS within Splunk APM +---------------------------------------- + +TMS appear on the service map and in Tag Spotlight. Use TMS to filter the service map and create breakdowns across the values of a given indexed span tag or process. + +See :ref:`apm-service-map` and :ref:`apm-tag-spotlight`. + +TMS retention period ----------------------------------- -Splunk Observability Cloud stores MMS for 13 months by default. +Splunk Observability Cloud retains TMS for the same amount of time as raw traces. By default, the retention period is 8 days. + +For more details about Troubleshooting MetricSets, see :ref:`apm-index-tag-tips`. 
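Before comparing the two MetricSet types, the histogram advantage described in the MMS section can be illustrated with a sketch. The equal-width bucket layout below is hypothetical (it is not Splunk Observability Cloud's internal histogram format); it shows why bucket counts from several services merge into a correct combined percentile, while precomputed per-service p90 values cannot be meaningfully averaged.

```python
# Hypothetical sketch: equal-width latency buckets per service. Not the
# internal Splunk Observability Cloud histogram format.
def merge(h1, h2):
    # Histograms over the same bucket edges merge by summing counts.
    return [a + b for a, b in zip(h1, h2)]

def percentile(buckets, edges, pct):
    # Return the upper edge of the bucket holding the pct-th percentile
    # (nearest-rank on bucket boundaries).
    target = pct / 100 * sum(buckets)
    running = 0
    for count, edge in zip(buckets, edges):
        running += count
        if running >= target:
            return edge
    return edges[-1]

edges = [10, 20, 30, 40]       # bucket upper edges in milliseconds
service_a = [90, 8, 1, 1]      # mostly fast requests: p90 is 10 ms
service_b = [10, 10, 30, 50]   # mostly slow requests: p90 is 40 ms

combined = merge(service_a, service_b)
print(percentile(combined, edges, 90))   # correct combined p90: 40
# Averaging the two precomputed p90 values gives (10 + 40) / 2 = 25 ms,
# which understates the true combined p90.
```

This is the distinction behind the "Combined p90 for multiple services" example above: `histogram()` aggregates the underlying distributions, whereas the deprecated per-MTS percentile metrics can only be averaged.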
Comparing Monitoring MetricSets and Troubleshooting MetricSets ================================================================= @@ -208,4 +218,4 @@ Because endpoint-level and service-level MMS include a subset of the TMS metrics For example, values for ``checkout`` service metrics displayed in the host dashboard might be different from the metrics displayed in the service map because there are multiple span ``kind`` values associated with this service that the MMS that power the dashboard don't monitor. -To compare MMS and TMS directly, restrict your TMS to endpoint-only data by filtering to a specific endpoint. You can also break down the service map by endpoint. \ No newline at end of file +To compare MMS and TMS directly, restrict your TMS to endpoint-only data by filtering to a specific endpoint. You can also break down the service map by endpoint. diff --git a/gdi/get-data-in/get-data-in.rst b/gdi/get-data-in/get-data-in.rst index 55de84bad..8f7725cd3 100644 --- a/gdi/get-data-in/get-data-in.rst +++ b/gdi/get-data-in/get-data-in.rst @@ -24,7 +24,12 @@ Use Splunk Observability Cloud to achieve full-stack observability of all your d - :ref:`Splunk Log Observer Connect ` - :ref:`Splunk Synthetic Monitoring ` - Splunk Synthetic Monitoring does not have a data import component -This guide provides 4 chapters that guide you through the process of setting up each component of Splunk Observability Cloud. +This guide provides 4 chapters that guide you through the process of setting up each component of Splunk Observability Cloud. The following diagram shows the step-by-step process of setting up each Splunk Observability Cloud component: + +.. image:: /_images/gdi/gdi-onboarding-diagram.png + :width: 80% + :alt: The step-by-step process for setting up each Splunk Observability Cloud component. + .. 
raw:: html diff --git a/gdi/opentelemetry/automatic-discovery/linux/linux-backend.rst b/gdi/opentelemetry/automatic-discovery/linux/linux-backend.rst index 6afc5f967..811187cf6 100644 --- a/gdi/opentelemetry/automatic-discovery/linux/linux-backend.rst +++ b/gdi/opentelemetry/automatic-discovery/linux/linux-backend.rst @@ -89,8 +89,6 @@ Using the installer script, you can install and activate zero-code instrumentati curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --realm -- - .. note:: If you wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option. - The system-wide zero-code instrumentation method automatically adds environment variables to ``/etc/splunk/zeroconfig/java.conf``. To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example: @@ -125,8 +123,6 @@ Using the installer script, you can install and activate zero-code instrumentati The ``systemd`` instrumentation automatically adds environment variables to ``/usr/lib/systemd/system.conf.d/00-splunk-otel-auto-instrumentation.conf``. - .. note:: If you wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option. - To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example: .. 
code-block:: bash @@ -230,7 +226,6 @@ Using the installer script, you can install and activate zero-code instrumentati curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \ sh /tmp/splunk-otel-collector.sh --with-instrumentation --realm -- - .. note:: If you wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option. The system-wide zero-code instrumentation method automatically adds environment variables to ``/etc/splunk/zeroconfig/node.conf``. @@ -257,8 +252,6 @@ Using the installer script, you can install and activate zero-code instrumentati The ``systemd`` zero-code instrumentation method automatically adds environment variables to ``/usr/lib/systemd/system.conf.d/00-splunk-otel-auto-instrumentation.conf``. - .. note:: If you wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option. - You can activate AlwaysOn Profiling for CPU and memory, as well as metrics, using additional options, as in the following example: .. code-block:: bash @@ -309,8 +302,6 @@ Using the installer script, you can install and activate zero-code instrumentati curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \ sudo sh /tmp/splunk-otel-collector.sh --with-instrumentation --realm -- - .. note:: If you wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option. - The system-wide zero-code instrumentation method automatically adds environment variables to ``/etc/splunk/zeroconfig/dotnet.conf``. To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. 
Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example: @@ -345,8 +336,6 @@ Using the installer script, you can install and activate zero-code instrumentati The ``systemd`` instrumentation automatically adds environment variables to ``/usr/lib/systemd/system.conf.d/00-splunk-otel-auto-instrumentation.conf``. - .. note:: If you wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance by specifying the ``--with-fluentd`` option. - To automatically define the optional ``deployment.environment`` resource attribute at installation time, run the installer script with the ``--deployment-environment `` option. Replace ```` with the desired attribute value, for example, ``prod``, as shown in the following example: .. code-block:: bash diff --git a/gdi/opentelemetry/collector-linux/collector-configuration-tutorial/collector-config-tutorial-start.rst b/gdi/opentelemetry/collector-linux/collector-configuration-tutorial/collector-config-tutorial-start.rst index b7cdea50e..388a24f7d 100644 --- a/gdi/opentelemetry/collector-linux/collector-configuration-tutorial/collector-config-tutorial-start.rst +++ b/gdi/opentelemetry/collector-linux/collector-configuration-tutorial/collector-config-tutorial-start.rst @@ -66,7 +66,6 @@ After you've installed the Collector, navigate to /etc/otel/collector to find th . 
|-- agent_config.yaml |-- config.d - |-- fluentd |-- gateway_config.yaml |-- splunk-otel-collector.conf |-- splunk-otel-collector.conf.example diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst b/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst index 0e426b809..3088ac44c 100644 --- a/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst +++ b/gdi/opentelemetry/collector-linux/deployments-linux-ansible.rst @@ -96,14 +96,6 @@ The following table describes the variables that can be configured for this role - The amount of allocated memory in MiB. The default value is ``512``, or 500 x 2^20 bytes, of memory . * - ``splunk_ballast_size_mib`` - ``splunk_ballast_size_mib`` is deprecated starting on Collector version 0.97.0. If you're using it, see :ref:`how to update your configuration `. - * - ``install_fluentd`` - - The option to install or manage Fluentd and dependencies for log collection. The dependencies include ``capng_c`` for activating Linux capabilities, ``fluent-plugin-systemd`` for systemd journal log collection, and the required libraries or development tools. The default value is ``false``. - * - ``td_agent_version`` - - The version of td-agent (Fluentd package) that is installed. The default value is ``3.3.0`` for Debian jessie, ``3.7.1`` for Debian stretch, and ``4.3.0`` for other distros. - * - ``splunk_fluentd_config`` - - The path to the Fluentd configuration file on the remote host. The default location is ``/etc/otel/collector/fluentd/fluent.conf``. - * - ``splunk_fluentd_config_source`` - - The source path to a Fluentd configuration file on your control host that is uploaded and set in place of the value set in ``splunk_fluentd_config`` on remote hosts. Use this variable to submit a custom Fluentd configuration, for example, ``./custom_fluentd_config.conf``. The default value is ``""``, which means that nothing is copied and the configuration file set with ``splunk_otel_collector_config`` is used. .. 
_ansible-zero-config: diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst b/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst index 3b2b65ee3..fc7a55844 100644 --- a/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst +++ b/gdi/opentelemetry/collector-linux/deployments-linux-chef.rst @@ -112,18 +112,6 @@ For Linux, the cookbook accepts the attributes described in the following table: * - ``package_stage`` - The Collector package repository stage to use. Can be ``release``, ``beta``, or ``test``. - ``release`` - * - ``with_fluentd`` - - Whether to install or manage Fluentd and dependencies for log collection. On Linux, the dependencies include ``capng_c`` for activating Linux capabilities, ``fluent-plugin-systemd`` for systemd journal log collection, and the required libraries and development tools. - - ``false`` - * - ``fluentd_version`` - - Version of the td-agent (Fluentd) package to install - - ``3.7.1`` for Debian stretch and ``4.3.1`` for all other Linux distros - * - ``fluentd_config_source`` - - Source path to the Fluentd configuration file. This file is copied to the ``$fluentd_config_dest`` path on the node. See the :new-page:`source attribute ` of the file resource for the supported value types. The default source file is provided by the Collector package. Only applicable if ``$with_fluentd`` is set to ``true``. - - ``/etc/otel/collector/fluentd/fluent.conf`` - * - ``fluentd_config_dest`` - - Destination path to the Fluentd configuration file on the node. Only applicable if ``$with_fluentd`` is set to ``true``. - - ``/etc/otel/collector/fluentd/fluent.conf`` .. 
_chef-zero-config: diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst b/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst index 5a761e55e..320cc2b2a 100644 --- a/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst +++ b/gdi/opentelemetry/collector-linux/deployments-linux-puppet.rst @@ -101,18 +101,6 @@ The class accepts the parameters described in the following table: * - ``service_user and $service_group`` - Sets the user or group ownership for the Collector service. The user or group is created if they do not exist. - ``splunk-otel-collector`` - * - ``with_fluentd`` - - Whether to install or manage Fluentd and dependencies for log collection. On Linux, the dependencies include ``capng_c`` for activating Linux capabilities, ``fluent-plugin-systemd`` for systemd journal log collection, and the required libraries and development tools. - - ``false`` - * - ``fluentd_config_source`` - - Source path to the Fluentd configuration file. This file is copied to the ``$fluentd_config_dest`` path on the node. See the :new-page:`source attribute ` of the file resource for the supported value types. The default source file is provided by the Collector package. Only applicable if ``$with_fluentd`` is set to ``true``. - - ``/etc/otel/collector/fluentd/fluent.conf`` - * - ``fluentd_config_dest`` - - Destination path to the Fluentd configuration file on the node. Only applicable if ``$with_fluentd`` is set to ``true``. - - ``/etc/otel/collector/fluentd/fluent.conf`` - * - ``manage_repo`` - - In cases where the Collector and Fluentd apt/yum repositories are managed externally, set this to ``false`` to deactivate management of the repositories by this module. If set to ``false``, the externally managed repositories should provide the ``splunk-otel-collector`` and ``td-agent`` packages. 
Also, the apt (``/etc/apt/sources.list.d/splunk-otel-collector.list`` and ``/etc/apt/sources.list.d/splunk-td-agent.list``) and yum (``/etc/yum.repos.d/splunk-otel-collector.repo`` and ``/etc/yum.repos.d/splunk-td-agent.repo``) repository definition files are deleted if they exist in order to avoid any conflicts. - - ``true`` .. _puppet-zero-config: diff --git a/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst b/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst index f07bb2b52..45379c0ae 100644 --- a/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst +++ b/gdi/opentelemetry/collector-linux/deployments-linux-salt.rst @@ -110,21 +110,6 @@ For Linux, the formula accepts the attributes described in the following table: * - ``service_user`` and ``$service_group`` - Sets the user or group ownership for the Collector service. The user or group is created if they do not exist. - ``splunk-otel-collector`` - * - ``install_fluentd`` - - Whether to install or manage Fluentd and dependencies for log collection. On Linux, the dependencies include ``capng_c`` for activating Linux capabilities, ``fluent-plugin-systemd`` for systemd journal log collection, and the required libraries and development tools. - - ``false`` - * - ``td_agent_version`` - - Version of the td-agent (Fluentd) package to install - - ``4.3.0`` - * - ``splunk_fluentd_config`` - - The path to the Fluentd configuration file on the remote host. - - ``/etc/otel/collector/fluentd/fluent.conf`` - * - ``splunk_fluentd_config_source`` - - The source path to a Fluentd configuration file on your control host that is uploaded and set in place of the ``splunk_fluentd_config`` file on remote hosts. To use a custom Fluentd configuration file, add the configuration file into the Salt dir. For example, ``salt://templates/td_agent.conf``. - - ``""`` meaning that nothing is copied and the existing ``splunk_fluentd_config`` file is used. 
- * - ``fluentd_config_dest`` - - Destination path to the Fluentd configuration file on the node. Only applicable if ``$with_fluentd`` is set to ``true``. - - ``/etc/otel/collector/fluentd/fluent.conf`` .. _salt-zero-config: diff --git a/gdi/opentelemetry/collector-linux/install-linux-manual.rst b/gdi/opentelemetry/collector-linux/install-linux-manual.rst index 6ecf3c72c..f7a7f1218 100644 --- a/gdi/opentelemetry/collector-linux/install-linux-manual.rst +++ b/gdi/opentelemetry/collector-linux/install-linux-manual.rst @@ -89,7 +89,6 @@ See also: * :ref:`linux-packages-post` * :ref:`linux-packages-auto` -* :ref:`linux-packages-fluentd` .. _linux-packages-rpm: @@ -164,7 +163,6 @@ See also: * :ref:`linux-packages-post` * :ref:`linux-packages-auto` -* :ref:`linux-packages-fluentd` .. _linux-packages: @@ -213,7 +211,6 @@ See also: * :ref:`linux-packages-post` * :ref:`linux-packages-auto` -* :ref:`linux-packages-fluentd` .. _linux-packages-post: @@ -305,35 +302,6 @@ The ``splunk-otel-auto-instrumentation`` deb/rpm package installs and supports c To learn more, see :ref:`linux-backend-auto-discovery`. -.. _linux-packages-fluentd: - -Install and configure Fluentd for log collection --------------------------------------------------------------- - -If you require log collection, perform the following steps to install Fluentd and forward collected log events to the Collector. This requires root privileges. - -#. Install, configure, and start the Collector as described in :ref:`linux-packages-repo`. The Collector's default configuration file listens for log events on ``127.0.0.1:8006`` and sends them to Splunk Observability Cloud. - -#. Install the ``td-agent`` package appropriate for the Linux distribution/version of the target system. Find the package in :new-page:`Fluentd installation `. - - * If necessary, install the ``capng_c`` plugin and dependencies to enable Linux capabilities, for example ``cap_dac_read_search`` and/or ``cap_dac_override``. 
This requires ``td-agent`` version 4.1 or higher. See :new-page:`Linux capabilities `. - - * If necessary, install the ``fluent-plugin-systemd`` plugin to collect log events from the systemd journal. See :new-page:`Fluent plugin systemd `. - -#. Configure Fluentd to collect log events and forward them to the Collector: - - * Option 1: Update the default config file at /etc/td-agent/td-agent.conf provided by the Fluentd package to collect the desired log events and forward them to ``127.0.0.1:8006``. - - * Option 2: The installed Collector package provides a custom Fluentd config file /etc/otel/collector/fluentd/fluent.conf to collect log events from many popular services and forwards them to ``127.0.0.1:8006``. To use these files, you need to override the default config file path for the Fluentd service. To do this, copy the systemd environment file from /etc/otel/collector/fluentd/splunk-otel-collector.conf to /etc/systemd/system/td-agent.service.d/splunk-otel-collector.conf. - -#. Ensure that the ``td-agent`` service user/group has permissions to access to the config file(s) from the previous step. - -#. Restart the Fluentd service to apply the changes by running ``systemctl restart td-agent``. - -#. View Fluentd service logs and errors in /var/log/td-agent/td-agent.log. - -See :new-page:`Fluentd configuration ` for general Fluentd configuration details. - .. _linux-docker: Docker diff --git a/gdi/opentelemetry/collector-linux/install-linux.rst b/gdi/opentelemetry/collector-linux/install-linux.rst index 96eeebf03..8cb093fd8 100644 --- a/gdi/opentelemetry/collector-linux/install-linux.rst +++ b/gdi/opentelemetry/collector-linux/install-linux.rst @@ -41,7 +41,6 @@ Included packages The installer script deploys and configures these elements: * The Splunk Distribution of the OpenTelemetry Collector for Linux -* Fluentd, using the td-agent. Turned off by default. 
See :ref:`fluentd-manual-config-linux` and :ref:`fluentd-receiver` for more information
* JMX metric gatherer

.. _linux-scripts:

@@ -87,15 +86,15 @@ To configure proxy settings to install and run the OpenTelemetry Collector, see

Use configured repos
--------------------------------

-By default, apt/yum/zypper repo definition files are created to download the package and Fluentd deb/rpm packages from
-:new-page:`https://splunk.jfrog.io/splunk ` and :new-page:`https://packages.treasuredata.com `, respectively.
+By default, an apt/yum/zypper repo definition file is created to download the package from
+:new-page:`https://splunk.jfrog.io/splunk `.

-To skip these steps and use configured repos on the target system that provide the ``splunk-otel-collector`` and ``td-agent`` deb/rpm packages, specify the ``--skip-collector-repo`` or ``--skip-fluentd-repo`` options. For example:
+To skip this step and use a configured repo on the target system that provides the ``splunk-otel-collector`` package, use the ``--skip-collector-repo`` option. For example:

.. code-block:: bash

   curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \
-  sudo sh /tmp/splunk-otel-collector.sh --realm $SPLUNK_REALM --skip-collector-repo --skip-fluentd-repo \
+  sudo sh /tmp/splunk-otel-collector.sh --realm $SPLUNK_REALM --skip-collector-repo \
   -- $SPLUNK_ACCESS_TOKEN

.. _configure-auto-instrumentation-linux:

@@ -156,7 +155,7 @@ To use host bindings, run this command:

Options of the installer script of the Collector for Linux
==================================================================

-The Linux installer script supports the following options for the Collector, automatic discovery with zero-code instrumentation for back-end services, and Fluentd.
+The Linux installer script supports the following options for the Collector and automatic discovery with zero-code instrumentation for back-end services.

To display all the configuration options supported by the script, use the ``-h`` flag. 
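The ``--skip-collector-repo`` option above assumes the target system already has a repo definition that provides the ``splunk-otel-collector`` package. The following is a minimal sketch of pre-creating such a definition on a yum-based system; the ``baseurl`` and ``gpgkey`` values are derived from the splunk.jfrog.io host referenced on this page and should be treated as assumptions to verify against your own mirror:

```shell
# Sketch: pre-create a yum repo definition so the installer can run with
# --skip-collector-repo. REPO_FILE defaults to /tmp for illustration; on a
# real host it would be /etc/yum.repos.d/splunk-otel-collector.repo.
REPO_FILE="${REPO_FILE:-/tmp/splunk-otel-collector.repo}"
cat > "$REPO_FILE" <<'EOF'
[splunk-otel-collector]
name=Splunk OpenTelemetry Collector Repository
baseurl=https://splunk.jfrog.io/splunk/otel-collector-rpm/release/$basearch
gpgcheck=1
gpgkey=https://splunk.jfrog.io/splunk/otel-collector-rpm/splunk-B3CD4420.pub
enabled=1
EOF
echo "wrote $REPO_FILE"
```

With the repo file in place, ``yum install splunk-otel-collector`` resolves from your configured repo and the installer script no longer needs network access to the public package host.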
@@ -282,29 +281,9 @@ Automatic discovery with zero-code instrumentation for back-end services - The ``splunk-otel-auto-instrumentation`` package version to install. Note: The minimum supported version for Java and Node.js zero-code instrumentation is 0.87.0, and the minimum supported version for .NET zero-code instrumentation is 0.99.0. - ``latest`` -Fluentd --------------------------------------------------------------------- - -.. list-table:: - :header-rows: 1 - :width: 100% - :widths: 30 40 30 - - * - Option - - Description - - Default value - * - ``--with[out]-fluentd`` - - Whether to install and configure fluentd to forward log events to the Collector. See :ref:`fluentd-manual-config-linux` for more information. - - ``--without-fluentd`` - * - ``--skip-fluentd-repo`` - - By default, a apt/yum repo definition file will be created to download the fluentd deb/rpm package from ``https://packages.treasuredata.com``. Use this option to skip the previous step and use a pre-configured repo on the target system that provides the ``td-agent`` deb/rpm package. - - - Next steps ================================== - - .. raw:: html
diff --git a/gdi/opentelemetry/collector-linux/linux-config-logs.rst b/gdi/opentelemetry/collector-linux/linux-config-logs.rst index 1c5602962..5ec7592ca 100644 --- a/gdi/opentelemetry/collector-linux/linux-config-logs.rst +++ b/gdi/opentelemetry/collector-linux/linux-config-logs.rst @@ -11,81 +11,5 @@ Collect logs with the Collector for Linux Use the Universal Forwarder to send logs to the Splunk platform. See more at :ref:`collector-with-the-uf`. -Fluentd is turned off by default. If you already installed Fluentd on a host, re-install the Collector without Fluentd using the ``--without-fluentd`` option. - -.. _fluentd-manual-config-linux: - -Collect Linux logs with Fluentd -=========================================================================== - -If you want to collect logs for the target host with Fluentd, use the ``--with-fluentd`` option to also install Fluentd when installing the Collector. For example: - -.. code-block:: bash - - curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh && \ - sudo sh /tmp/splunk-otel-collector.sh --with-fluentd --realm $SPLUNK_REALM -- $SPLUNK_ACCESS_TOKEN - -When turned on, the Fluentd service is configured by default to collect and forward log events with the ``@SPLUNK`` label to the Collector, which then sends these events to the HEC ingest endpoint determined by the ``--realm `` option. For example, ``https://ingest..signalfx.com/v1/log``. - -The following Fluentd plugins are also installed: - -* ``capng_c`` for activating Linux capabilities. -* ``fluent-plugin-systemd`` for systemd journal log collection. - -Additionally, the following dependencies are installed as prerequisites for the Fluentd plugins: - -.. tabs:: - - .. tab:: Debian-based systems - - * build-essential - * libcap-ng0 - * libcap-ng-dev - * pkg-config - - .. 
tab:: RPM-based systems - - * Development Tools - * libcap-ng - * libcap-ng-devel - * pkgconfig - -You can specify the following parameters to configure the package to send log events to a custom Splunk HTTP Event Collector (HEC) endpoint URL: - -* ``--hec-url `` -* ``--hec-token `` - -HEC lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. See :new-page:`Set up and use HTTP Event Collector in Splunk Web `. - -The main Fluentd configuration is installed to ``/etc/otel/collector/fluentd/fluent.conf``. Custom Fluentd source configuration files can be added to the ``/etc/otel/collector/fluentd/conf.d`` directory after installation. - -Note the following: - -* In this directory, all files with the .conf extension are automatically included by Fluentd. -* The td-agent user must have permissions to access the configuration files and the paths defined within. -* By default, Fluentd is configured to collect systemd journal log events from ``/var/log/journal``. - -After any configuration modification, run ``sudo systemctl restart td-agent`` to restart the td-agent service. - -If the td-agent package is upgraded after initial installation, you might need to set the Linux capabilities for the new version by performing the following steps for td-agent versions 4.1 or higher: - -#. Check for the activated capabilities: - - .. code-block:: bash - - sudo /opt/td-agent/bin/fluent-cap-ctl --get -f /opt/td-agent/bin/ruby - Capabilities in `` /opt/td-agent/bin/ruby`` , - Effective: dac_override, dac_read_search - Inheritable: dac_override, dac_read_search - Permitted: dac_override, dac_read_search - -#. If the output from the previous command does not include ``dac_override`` and ``dac_read_search`` as shown above, run the following commands: - - .. 
code-block:: bash - - sudo td-agent-gem install capng_c - sudo /opt/td-agent/bin/fluent-cap-ctl --add "dac_override,dac_read_search" -f /opt/td-agent/bin/ruby - sudo systemctl daemon-reload - sudo systemctl restart td-agent - +Do not use Fluentd to collect logs. If you already installed Fluentd on a host, re-install the Collector without Fluentd using the ``--without-fluentd`` option. diff --git a/gdi/opentelemetry/collector-linux/linux-uninstall.rst b/gdi/opentelemetry/collector-linux/linux-uninstall.rst index 4070268d1..c1e2457ec 100644 --- a/gdi/opentelemetry/collector-linux/linux-uninstall.rst +++ b/gdi/opentelemetry/collector-linux/linux-uninstall.rst @@ -9,9 +9,9 @@ Uninstall the Collector for Linux Follow these instructions to uninstall the Splunk Distribution of the OpenTelemetry Collector for Linux. -You can use commands to uninstall the Collector and Fluentd packages if you used the :ref:`installer script` or :ref:`Debian or RPM package ` to perform the installation. +You can use commands to uninstall the Collector packages if you used the :ref:`installer script` or :ref:`Debian or RPM package ` to perform the installation. -If you installed the Collector and Fluentd using other methods (such as Ansible, Puppet, and Heroku as described in :ref:`otel-install-linux`), follow uninstall instructions specific to the tool you used. +If you installed the Collector using other methods (such as Ansible, Puppet, and Heroku as described in :ref:`otel-install-linux`), follow uninstall instructions specific to the tool you used. .. _otel-linux-uninstall-details: @@ -29,25 +29,6 @@ While not an exhaustive list, here are key notes about some of the files that ar * On Debian-based systems, the following files are deleted. If you want to keep these files, be sure to back up the individual files or the entire ``/etc/otel/collector`` directory before you perform the uninstall. Files not in this list aren't deleted. 
* ``/etc/otel/collector/agent_config.yaml`` - * ``/etc/otel/collector/fluentd/README`` - * ``/etc/otel/collector/fluentd/conf.d/apache.conf`` - * ``/etc/otel/collector/fluentd/conf.d/cassandra.conf`` - * ``/etc/otel/collector/fluentd/conf.d/docker.conf`` - * ``/etc/otel/collector/fluentd/conf.d/etcd.conf`` - * ``/etc/otel/collector/fluentd/conf.d/jetty.conf`` - * ``/etc/otel/collector/fluentd/conf.d/journald.conf`` - * ``/etc/otel/collector/fluentd/conf.d/memcached.conf`` - * ``/etc/otel/collector/fluentd/conf.d/mongodb.conf`` - * ``/etc/otel/collector/fluentd/conf.d/mysql.conf`` - * ``/etc/otel/collector/fluentd/conf.d/nginx.conf`` - * ``/etc/otel/collector/fluentd/conf.d/postgresql.conf`` - * ``/etc/otel/collector/fluentd/conf.d/rabbitmq.conf`` - * ``/etc/otel/collector/fluentd/conf.d/redis.conf`` - * ``/etc/otel/collector/fluentd/conf.d/syslog.conf`` - * ``/etc/otel/collector/fluentd/conf.d/tomcat.conf`` - * ``/etc/otel/collector/fluentd/conf.d/zookeeper.conf`` - * ``/etc/otel/collector/fluentd/fluent.conf`` - * ``/etc/otel/collector/fluentd/splunk-otel-collector.conf`` * ``/etc/otel/collector/gateway_config.yaml`` * ``/etc/otel/collector/splunk-otel-collector.conf.example`` * ``/etc/otel/collector/splunk-support-bundle.sh`` @@ -55,25 +36,6 @@ While not an exhaustive list, here are key notes about some of the files that ar * On RPM-based systems, if you modified any of the following files, the modified files aren't deleted and are renamed with the .rpmsave extension. For example, the uninstall process renames a modified agent_config.yaml to agent_config.yaml.rpmsave. You can delete these .rpmsave files if you don't need them. Unmodified files in this list are deleted. Files not in this list aren't deleted. 
* ``/etc/otel/collector/agent_config.yaml`` - * ``/etc/otel/collector/fluentd/README`` - * ``/etc/otel/collector/fluentd/conf.d/apache.conf`` - * ``/etc/otel/collector/fluentd/conf.d/cassandra.conf`` - * ``/etc/otel/collector/fluentd/conf.d/docker.conf`` - * ``/etc/otel/collector/fluentd/conf.d/etcd.conf`` - * ``/etc/otel/collector/fluentd/conf.d/jetty.conf`` - * ``/etc/otel/collector/fluentd/conf.d/journald.conf`` - * ``/etc/otel/collector/fluentd/conf.d/memcached.conf`` - * ``/etc/otel/collector/fluentd/conf.d/mongodb.conf`` - * ``/etc/otel/collector/fluentd/conf.d/mysql.conf`` - * ``/etc/otel/collector/fluentd/conf.d/nginx.conf`` - * ``/etc/otel/collector/fluentd/conf.d/postgresql.conf`` - * ``/etc/otel/collector/fluentd/conf.d/rabbitmq.conf`` - * ``/etc/otel/collector/fluentd/conf.d/redis.conf`` - * ``/etc/otel/collector/fluentd/conf.d/syslog.conf`` - * ``/etc/otel/collector/fluentd/conf.d/tomcat.conf`` - * ``/etc/otel/collector/fluentd/conf.d/zookeeper.conf`` - * ``/etc/otel/collector/fluentd/fluent.conf`` - * ``/etc/otel/collector/fluentd/splunk-otel-collector.conf`` * ``/etc/otel/collector/gateway_config.yaml`` * ``/etc/otel/collector/splunk-otel-collector.conf.example`` * ``/etc/otel/collector/splunk-support-bundle.sh`` @@ -83,12 +45,12 @@ While not an exhaustive list, here are key notes about some of the files that ar .. _otel-linux-uninstall-otel-and-tdagent: .. _otel-linux-uninstall-both-otel-and-tdagent: -Uninstall the Collector and Fluentd on Linux +Uninstall the Collector on Linux ================================================================ .. note:: Before you perform the uninstall, be sure to understand its impact. See :ref:`otel-linux-uninstall-details`. 
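Since the uninstall deletes the configuration files listed above, you can snapshot the config directory first so anything deleted can be restored. This is a sketch that uses a demo directory as a stand-in for ``/etc/otel/collector`` so it is runnable anywhere:

```shell
# Sketch: back up the Collector config directory before uninstalling.
# The demo paths below are placeholders; on a real host you would copy
# /etc/otel/collector instead.
CONF_DIR="/tmp/demo-etc-otel/collector"      # stand-in for /etc/otel/collector
BACKUP_DIR="/tmp/otel-collector-backup"
mkdir -p "$CONF_DIR"
echo "receivers: {}" > "$CONF_DIR/agent_config.yaml"   # placeholder config
mkdir -p "$BACKUP_DIR"
cp -r "$CONF_DIR" "$BACKUP_DIR/"             # whole directory, as recommended
echo "backed up $CONF_DIR to $BACKUP_DIR/collector"
```

On an RPM-based system this complements the automatic ``.rpmsave`` behavior described above; on Debian-based systems it is the only way to keep the deleted files.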
-If you installed the Collector and Fluentd using the :ref:`installer script` or :ref:`Debian or RPM package `, you can uninstall both of these packages by running the following command:
+If you installed the Collector using the :ref:`installer script` or :ref:`Debian or RPM package `, you can uninstall the package by running the following command:

.. code-block:: bash

@@ -103,33 +65,27 @@ Note that this snippet includes a command that downloads the latest ``splunk-ote

To verify the uninstall, see :ref:`otel-linux-verify-uninstall`.

-If you don't want to uninstall :strong:`both` packages and just want to uninstall the Collector package :strong:`or` Fluentd package, see :ref:`otel-linux-uninstall-only-otel-or-tdagent`.
+If you want to uninstall the Collector package directly with your package manager, see :ref:`otel-linux-uninstall-only-otel-or-tdagent`.

.. _otel-linux-uninstall-only-otel-or-tdagent:

-Uninstall only the Collector or Fluentd on Linux
+Uninstall only the Collector on Linux
================================================================

-The uninstall command described in :ref:`otel-linux-uninstall-otel-and-tdagent` uninstalls :strong:`both` the Collector and Fluentd packages.
+The uninstall command described in :ref:`otel-linux-uninstall-otel-and-tdagent` uninstalls the Collector package.

-If you want to uninstall only the Collector package :strong:`or` the Fluentd package, use the following command for your platform.
+To uninstall the Collector package with a package manager command instead, use the following command for your platform.

For Debian
--------------------------------------------------------------------------------------------

.. note:: Before performing an uninstall, see :ref:`otel-linux-uninstall-details`.

-* To uninstall the Collector package only, run the following command:
+To uninstall the Collector package only, run the following command:
- .. 
code-block:: bash - - sudo apt-get purge splunk-otel-collector - -* To uninstall the Fluentd package only, run the following command: - - .. code-block:: bash +.. code-block:: bash - sudo apt-get purge td-agent + sudo apt-get purge splunk-otel-collector For RPM -------------------------------------------------------------------------------------------- @@ -154,48 +110,24 @@ For RPM sudo zypper remove splunk-otel-collector -* To uninstall the Fluentd package only, run the command for the package manager on your system: - - .. code-block:: bash - - sudo yum remove td-agent - - or - - .. code-block:: bash - - sudo dnf remove td-agent - - or - - .. code-block:: bash - - sudo zypper remove td-agent - To verify the uninstall, see :ref:`otel-linux-verify-uninstall`. .. _otel-linux-verify-uninstall: -Verify the uninstall of the Collector and Fluentd on Linux +Verify the uninstall of the Collector on Linux ================================================================ -While you can verify the uninstall of the Collector and Fluentd packages by watching for success messages in your command-line interface after running an uninstall command, you can also verify the uninstall by running a command that checks on the status of the Collector and Fluentd services. If the package has been successfully uninstalled, the status reflects this. - -* To verify the uninstall of the Collector package, run this command: - - .. code-block:: bash +While you can verify the uninstall of the Collector packages by watching for success messages in your command-line interface after running an uninstall command, you can also verify the uninstall by running a command that checks on the status of the Collector services. If the package has been successfully uninstalled, the status reflects this. - sudo systemctl status splunk-otel-collector +To verify the uninstall of the Collector package, run this command: +.. 
code-block:: bash - The expected result is ``Unit splunk-otel-collector.service could not be found.`` + sudo systemctl status splunk-otel-collector -* To verify the uninstall of the Fluentd (td-agent) package, run this command: +The expected result is ``Unit splunk-otel-collector.service could not be found.`` - .. code-block:: bash - sudo systemctl status td-agent - The expected result is ``Unit td-agent.service could not be found.`` diff --git a/gdi/opentelemetry/collector-windows/deployments-windows-ansible.rst b/gdi/opentelemetry/collector-windows/deployments-windows-ansible.rst index 8d93472cc..1198077fd 100644 --- a/gdi/opentelemetry/collector-windows/deployments-windows-ansible.rst +++ b/gdi/opentelemetry/collector-windows/deployments-windows-ansible.rst @@ -109,14 +109,6 @@ The following table describes the variables that can be configured for this role - The amount of allocated memory in MiB. The default value is ``512``, or 500 x 2^20 bytes, of memory . * - ``splunk_ballast_size_mib`` - ``splunk_ballast_size_mib`` is deprecated starting on Collector version 0.97.0. If you're using it, see :ref:`how to update your configuration `. - * - ``install_fluentd`` - - The option to install or manage Fluentd and dependencies for log collection. The default value is ``false``. - * - ``td_agent_version`` - - The version of td-agent (Fluentd package) that is installed. - * - ``splunk_fluentd_config`` - - The path to the Fluentd configuration file on the remote host. The default is ``%SYSTEMDRIVE%\opt\td-agent\etc\td-agent\td-agent.conf``. - * - ``splunk_fluentd_config_source`` - - The source path to a Fluentd configuration file on your control host that is uploaded and set in place of the value set in ``splunk_fluentd_config`` on remote hosts. Use this variable to submit a custom Fluentd configuration, for example, ``./custom_fluentd_config.conf``. 
The default value is ``""``, which means that nothing is copied and the configuration file set with ``splunk_otel_collector_config`` is used. Next steps ================================== diff --git a/gdi/opentelemetry/collector-windows/deployments-windows-puppet.rst b/gdi/opentelemetry/collector-windows/deployments-windows-puppet.rst index 1c745fe2c..dd5662071 100644 --- a/gdi/opentelemetry/collector-windows/deployments-windows-puppet.rst +++ b/gdi/opentelemetry/collector-windows/deployments-windows-puppet.rst @@ -85,18 +85,10 @@ The class accepts the parameters described in the following table: * - ``collector_config_dest`` - Destination path of the Collector configuration file on the node. The ``SPLUNK_CONFIG`` environment variable is set with this value for the Collector service. - ``%PROGRAMDATA%\Splunk\OpenTelemetry Collector\agent_config.yaml`` - * - ``with_fluentd`` - - Whether to install or manage Fluentd and dependencies for log collection. - - ``false`` - * - ``fluentd_config_source`` - - Source path to the Fluentd configuration file. This file is copied to the ``$fluentd_config_dest`` path on the node. See the :new-page:`source attribute ` of the file resource for the supported value types. The default source file is provided by the Collector package. Only applicable if ``$with_fluentd`` is set to ``true``. - - ``%PROGRAMFILES\Splunk\OpenTelemetry Collector\fluentd\td-agent.conf`` Next steps ================================== - - .. raw:: html
diff --git a/gdi/opentelemetry/collector-windows/install-windows-msi.rst b/gdi/opentelemetry/collector-windows/install-windows-msi.rst index 720e125d7..1c29a2185 100644 --- a/gdi/opentelemetry/collector-windows/install-windows-msi.rst +++ b/gdi/opentelemetry/collector-windows/install-windows-msi.rst @@ -171,56 +171,24 @@ Learn more about advanced configuration options (including Service Logging) usin * :ref:`otel-install-windows-manual` * :ref:`otel-windows-config` -.. _windows-manual-fluentd: - -Install Fluentd MSI for log collection -================================================== - -If you have a wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance. - -.. note:: You need to be an Admin to configure log collection with Fluentd. - -Perform the following steps to install Fluentd and forward ``collected`` log events to the Collector: - -1. Install :new-page:`Fluentd MSI ` version 4.0 or higher. - -2. Configure Fluentd to collect log events and forward them to the Collector: - - - Option 1: Update the default config file provided by the Fluentd MSI at ``\opt\td-agent\etc\td-agent\td-agent.conf`` to collect the desired log events and forward them to ``127.0.0.1:8006``. - - - Option 2: The installed Collector package provides a custom Fluentd config file ``\Program Files\Splunk\OpenTelemetry Collector\fluentd\td-agent.conf`` to collect log events from the Windows Event Log ``\Program Files\Splunk\OpenTelemetry Collector\fluentd\conf.d\eventlog.conf`` and forwards them to ``127.0.0.1:8006``. - - To use these files, backup the ``\opt\td-agent\etc\td-agent``` directory, and copy the contents from ``\Program Files\Splunk\OpenTelemetry Collector\fluentd``` to ``\opt\td-agent\etc\td-agent```. - -3. To apply any changes made to the Fluentd config files, restart the system, or restart ``fluentdwinsvc`` . - - .. code-block:: PowerShell - - - Stop-Service fluentdwinsvc - - Start-Service fluentdwinsvc - -4. 
View the Fluentd service logs and errors in ``\opt\td-agent\td-agent.log``.
-
-Learn more about general Fluentd configuration details in the :new-page:`official Fluentd documentation `.

Custom MSI URLs
==================================================

-By default, the Collector MSI is downloaded from :new-page:`https://dl.signalfx.com ` and
-the Fluentd MSI is downloaded from :new-page:`https://packages.treasuredata.com `.
+By default, the Collector MSI is downloaded from :new-page:`https://dl.signalfx.com `.

-To specify custom URLs for these downloads, replace ``COLLECTOR_MSI_URL`` and ``FLUENTD_MSI_URL`` with the URLs to the desired MSI packages to install:
+To specify a custom URL for this download, replace ``COLLECTOR_MSI_URL`` with the URL of the MSI package to install:

.. code-block:: PowerShell

-   & {Set-ExecutionPolicy Bypass -Scope Process -Force; $script = ((New-Object System.Net.WebClient).DownloadString('https://dl.signalfx.com/splunk-otel-collector.ps1')); $params = @{access_token = ""; realm = ""; collector_msi_url = ""; fluentd_msi_url = ""}; Invoke-Command -ScriptBlock ([scriptblock]::Create(". {$script} $(&{$args} @params)"))}
+   & {Set-ExecutionPolicy Bypass -Scope Process -Force; $script = ((New-Object System.Net.WebClient).DownloadString('https://dl.signalfx.com/splunk-otel-collector.ps1')); $params = @{access_token = ""; realm = ""; collector_msi_url = ""}; Invoke-Command -ScriptBlock ([scriptblock]::Create(". {$script} $(&{$args} @params)"))}

.. _windows-chocolatey:

Install the Collector using a Chocolatey package
======================================================

-A :new-page:`Chocolatey package ` is available to download, install, and configure the Collector and Fluentd with the following PowerShell command:
+A :new-page:`Chocolatey package ` is available to download, install, and configure the Collector with the following PowerShell command:

.. 
code-block:: PowerShell diff --git a/gdi/opentelemetry/collector-windows/install-windows.rst b/gdi/opentelemetry/collector-windows/install-windows.rst index fd76c30da..5cf5e8d67 100644 --- a/gdi/opentelemetry/collector-windows/install-windows.rst +++ b/gdi/opentelemetry/collector-windows/install-windows.rst @@ -32,8 +32,6 @@ Alternatively, you can also install the Collector for Windows: Prerequisites ========================== - - .. raw:: html
@@ -44,9 +42,6 @@ Prerequisites
- - - .. _windows-otel-packages: Included packages @@ -55,7 +50,6 @@ Included packages The Windows installer script installs the following packages: * Dotnet autoinstrumentation, if enabled. See :ref:`get-started-dotnet-otel`. -* Fluentd, if enabled. See :ref:`fluentd-manual-config-windows`. * JMX metric gatherer. * For Docker environments only, Java JDK and JRE. @@ -64,10 +58,7 @@ The Windows installer script installs the following packages: Install the Collector for Windows using the installer script ================================================================ -The installer script is available for Windows 64-bit environments, and deploys and configures: - -* The Splunk Distribution of the OpenTelemetry Collector for Windows -* Fluentd through the ``td-agent``, which is deactivated by default +The installer script is available for Windows 64-bit environments, and deploys and configures the Splunk Distribution of the OpenTelemetry Collector for Windows. To install the package using the installer script, follow these steps: @@ -109,82 +100,74 @@ Options of the installer script for Windows The Windows installer script supports the following options: .. list-table:: - :header-rows: 1 - :width: 100% - :widths: 30 40 30 - - * - Option - - Description - - Default value - * - ``access_token`` - - The token used to send metric data to Splunk. - - - * - ``realm`` - - The Splunk realm to use. The ingest, API, trace, and HEC endpoint URLs are automatically created using this value. To find your Splunk realm, see :ref:`Note about realms `. - - ``us0`` - * - ``memory`` - - Total memory in MIB to allocate to the Collector. Automatically calculates the ballast size. See :ref:`otel-sizing` for more information. - - ``512`` - * - ``mode`` - - Configure the Collectorservice to run in host monitoring (``agent``) or data forwarding (``gateway``). - - ``agent`` - * - ``network_interface`` - - The network interface the Collectorreceivers listen on. 
- - ``0.0.0.0`` - * - ``ingest_url`` - - Set the base ingest URL explicitly instead of the URL inferred from the specified realm. - - ``https://ingest.REALM.signalfx.com`` - * - ``api_url`` - - Set the base API URL explicitly instead of the URL inferred from the specified realm. - - ``https://api.REALM.signalfx.com`` - * - ``trace_url`` - - Set the trace endpoint URL explicitly instead of the endpoint inferred from the specified realm. - - ``https://ingest.REALM.signalfx.com/v2/trace`` - * - ``hec_url`` - - Set the HEC endpoint URL explicitly instead of the endpoint inferred from the specified realm. - - ``https://ingest.REALM.signalfx.com/v1/log`` - * - ``hec_token`` - - Set the HEC token if it's different than the specified Splunk access token. - - - * - ``with_fluentd`` - - Whether to install and configure fluentd to forward log events to the collector. See :ref:`fluentd-manual-config-windows` for more information. - - ``$false`` - * - ``with_dotnet_instrumentation`` - - Whether to install and configure .NET tracing to forward .NET application traces to the local collector. - - ``$false`` - * - ``deployment_env`` - - A system-wide environment tag used by .NET instrumentation. Sets the ``SIGNALFX_ENV`` environment variable. Ignored if ``-with_dotnet_instrumentation`` is set to ``false``. - - - * - ``bundle_dir`` - - The location of your Smart Agent bundle for monitor functionality. - - ``C:\Program Files\Splunk\OpenTelemetry Collector\agent-bundle`` - * - ``insecure`` - - If true then certificates aren't checked when downloading resources. - - ``$false`` - * - ``collector_version`` - - Specify a specific version of the Collector to install. - - Latest version available - * - ``stage`` - - The package stage to install from [``test``, ``beta``, ``release``]. - - ``release`` - * - ``collector_msi_url`` - - When installing the Collector, instead of downloading the package, use this local path to a Splunk OpenTelemetry Collector MSI package. 
If specified, the ``-collector_version`` and ``-stage`` parameters are ignored.
-     - ``https://dl.signalfx.com/splunk-otel-collector/`` |br| ``msi/release/splunk-otel-collector--amd64.msi``
-   * - ``fluentd_msi_url``
-     - Specify the URL to the Fluentd MSI package to install.
-     - ``https://packages.treasuredata.com/4/windows/td-agent-4.1.0-x64.msi``
-   * - ``msi_path``
-     - Specify a local path to a Splunk OpenTelemetry Collector MSI package to install instead of downloading the package. If specified, the ``-collector_version`` and ``-stage`` parameters will be ignored.
-     -
-   * - ``msi_public_properties``
-     - Specify public MSI properties to be used when installing the Splunk OpenTelemetry Collector MSI package.
-     -
+   :header-rows: 1
+   :width: 100%
+   :widths: 30 40 30
+
+   * - Option
+     - Description
+     - Default value
+   * - ``access_token``
+     - The token used to send metric data to Splunk.
+     -
+   * - ``realm``
+     - The Splunk realm to use. The ingest, API, trace, and HEC endpoint URLs are automatically created using this value. To find your Splunk realm, see :ref:`Note about realms `.
+     - ``us0``
+   * - ``memory``
+     - Total memory in MIB to allocate to the Collector. Automatically calculates the ballast size. See :ref:`otel-sizing` for more information.
+     - ``512``
+   * - ``mode``
+     - Configure the Collector service to run in host monitoring (``agent``) or data forwarding (``gateway``) mode.
+     - ``agent``
+   * - ``network_interface``
+     - The network interface the Collector receivers listen on.
+     - ``0.0.0.0``
+   * - ``ingest_url``
+     - Set the base ingest URL explicitly instead of the URL inferred from the specified realm.
+     - ``https://ingest.REALM.signalfx.com``
+   * - ``api_url``
+     - Set the base API URL explicitly instead of the URL inferred from the specified realm.
+     - ``https://api.REALM.signalfx.com``
+   * - ``trace_url``
+     - Set the trace endpoint URL explicitly instead of the endpoint inferred from the specified realm. 
+ - ``https://ingest.REALM.signalfx.com/v2/trace`` + * - ``hec_url`` + - Set the HEC endpoint URL explicitly instead of the endpoint inferred from the specified realm. + - ``https://ingest.REALM.signalfx.com/v1/log`` + * - ``hec_token`` + - Set the HEC token if it's different than the specified Splunk access token. + - + * - ``with_dotnet_instrumentation`` + - Whether to install and configure .NET tracing to forward .NET application traces to the local collector. + - ``$false`` + * - ``deployment_env`` + - A system-wide environment tag used by .NET instrumentation. Sets the ``SIGNALFX_ENV`` environment variable. Ignored if ``-with_dotnet_instrumentation`` is set to ``false``. + - + * - ``bundle_dir`` + - The location of your Smart Agent bundle for monitor functionality. + - ``C:\Program Files\Splunk\OpenTelemetry Collector\agent-bundle`` + * - ``insecure`` + - If true then certificates aren't checked when downloading resources. + - ``$false`` + * - ``collector_version`` + - Specify a specific version of the Collector to install. + - Latest version available + * - ``stage`` + - The package stage to install from [``test``, ``beta``, ``release``]. + - ``release`` + * - ``collector_msi_url`` + - When installing the Collector, instead of downloading the package, use this local path to a Splunk OpenTelemetry Collector MSI package. If specified, the ``-collector_version`` and ``-stage`` parameters are ignored. + - ``https://dl.signalfx.com/splunk-otel-collector/`` |br| ``msi/release/splunk-otel-collector--amd64.msi`` + * - ``msi_path`` + - Specify a local path to a Splunk OpenTelemetry Collector MSI package to install instead of downloading the package. If specified, the ``-collector_version`` and ``-stage`` parameters will be ignored. + - + * - ``msi_public_properties`` + - Specify public MSI properties to be used when installing the Splunk OpenTelemetry Collector MSI package. + - Next steps ================================== - - .. raw:: html
diff --git a/gdi/opentelemetry/collector-windows/windows-config-logs.rst b/gdi/opentelemetry/collector-windows/windows-config-logs.rst index 576aad892..eb3c9af57 100644 --- a/gdi/opentelemetry/collector-windows/windows-config-logs.rst +++ b/gdi/opentelemetry/collector-windows/windows-config-logs.rst @@ -10,50 +10,5 @@ Collect logs with the Collector for Windows Use the Universal Forwarder to send logs to the Splunk platform. See more at :ref:`collector-with-the-uf`. -.. _fluentd-manual-config-windows: - -Collect Windows logs with Fluentd -=========================================================================== - -Fluentd is turned off by default. - -If you wish to collect logs for the target host with Fluentd, use the ``with_fluentd = 1`` option to install and enable Fluentd when installing the Collector. - -For example: - -.. code-block:: PowerShell - - & {Set-ExecutionPolicy Bypass -Scope Process -Force; $script = ((New-Object System.Net.WebClient).DownloadString('https://dl.signalfx.com/splunk-otel-collector.ps1')); $params = @{access_token = ""; realm = ""; with_fluentd = 1}; Invoke-Command -ScriptBlock ([scriptblock]::Create(". {$script} $(&{$args} @params)"))} - -When activated, the Fluentd service is configured by default to collect and forward log events with the ``@SPLUNK`` label to the Collector, which then send these events to the HEC ingest endpoint determined by the ``realm = ""`` option. -For example, ``https://ingest..signalfx.com/v1/log``. - -To configure the package to send log events to a custom HTTP Event Collector (HEC) endpoint URL with a token different than ````, you can specify the following parameters for the installer script: - -* ``hec_url = ""`` -* ``hec_token = ""`` - -For example (replace the ```` values in the command for your configuration): - -.. 
code-block:: PowerShell - - & {Set-ExecutionPolicy Bypass -Scope Process -Force; $script = ((New-Object System.Net.WebClient).DownloadString('https://dl.signalfx.com/splunk-otel-collector.ps1')); $params = @{access_token = ""; realm = ""; hec_url = ""; hec_token = ""}; Invoke-Command -ScriptBlock ([scriptblock]::Create(". {$script} $(&{$args} @params)"))} - -The installation creates the main Fluentd configuration file ``\opt\td-agent\etc\td-agent\td-agent.conf``, where ```` is the drive letter for the fluentd installation directory. - -You can add custom Fluentd source configuration files to the ``\opt\td-agent\etc\td-agent\conf.d`` -directory after installation. - -Note the following: - -* In this directory, Fluentd includes all files with the .conf extension. -* By default, fluentd collects from the Windows Event Log. See ``\opt\td-agent\etc\td-agent\conf.d\eventlog.conf`` for the default configuration. - -After any configuration modification, apply the changes by restarting the system or running the following PowerShell commands: - -.. code-block:: PowerShell - - Stop-Service fluentdwinsvc - Start-Service fluentdwinsvc diff --git a/gdi/opentelemetry/collector-windows/windows-uninstall.rst b/gdi/opentelemetry/collector-windows/windows-uninstall.rst index 52b1e52d9..649434a42 100644 --- a/gdi/opentelemetry/collector-windows/windows-uninstall.rst +++ b/gdi/opentelemetry/collector-windows/windows-uninstall.rst @@ -14,7 +14,7 @@ Follow these instructions to uninstall the Splunk Distribution of the OpenTeleme Uninstall using the Windows Control Panel ==================================================== -If you installed the Collector with the installer script, the Collector and td-agent (Fluentd) can be uninstalled from **Programs and Features** in the Windows Control Panel. The configuration files might persist in ``\ProgramData\Splunk\OpenTelemetry Collector`` and ``\opt\td-agent`` after uninstall. 
+If you installed the Collector with the installer script, you can uninstall the Collector from **Programs and Features** in the Windows Control Panel. The configuration files might persist in ``\ProgramData\Splunk\OpenTelemetry Collector`` and ``\opt\td-agent`` after uninstall.
 
 .. _otel-windows-uninstall-powershell:
 
diff --git a/gdi/opentelemetry/components/fluentd-receiver.rst b/gdi/opentelemetry/components/fluentd-receiver.rst
index 578e8100d..5fc356442 100644
--- a/gdi/opentelemetry/components/fluentd-receiver.rst
+++ b/gdi/opentelemetry/components/fluentd-receiver.rst
@@ -7,21 +7,15 @@ Fluent Forward receiver
 .. meta::
   :description: The Fluent Forward receiver allows the Splunk Distribution of OpenTelemetry Collector to collect logs and events using the Fluent Forward protocol.
 
+.. caution:: Fluentd will be deprecated in October 2025. In Kubernetes environments, use native OpenTelemetry log collection instead. On Linux and Windows platforms, use the Universal Forwarder. See :ref:`otel-config-logs`.
+
 The Fluent Forward receiver allows the Splunk Distribution of the OpenTelemetry Collector to collect events using the bundled Fluentd application. The supported pipeline type is ``logs``. See :ref:`otel-data-processing` for more information.
 
 The receiver accepts data formatted as Fluent Forward events through a TCP connection. All three Fluent event types, message, forward, and packed forward, are supported, including compressed packed forward.
 
-.. caution:: Fluentd is deactivated by default for Linux and Windows. To activate it, use the ``--with-fluentd`` option when installing the Collector for Linux, or the ``with_fluentd = 1`` option when installing the Collector for Windows.
-
 Get started
 ======================
 
-.. note::
-
-   This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector when deploying in host monitoring (agent) mode. See :ref:`otel-deployment-mode` for more information.
-
-   For details about the default configuration, see :ref:`otel-kubernetes-config`, :ref:`linux-config-ootb`, or :ref:`windows-config-ootb`. You can customize your configuration any time as explained in this document.
-
 Follow these steps to configure and activate the component:
 
 1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform:
@@ -33,7 +27,7 @@ Follow these steps to configure and activate the component:
 2. Configure the receiver as described in the next document.
 3. Restart the Collector.
 
-By default, the Splunk Distribution of the OpenTelemetry Collector includes the Fluent Forward receiver in the ``logs`` pipeline:
+Next, add the Fluent Forward receiver to the ``logs`` pipeline:
 
 .. code-block:: yaml
 
@@ -46,12 +40,6 @@ By default, the Splunk Distribution of the OpenTelemetry Collector includes the
     logs:
       receivers: [fluentforward]
 
-For more information on how to install Fluentd when manually installing the Collector, see:
-
-* :ref:`fluentd-manual-config-linux`
-* :ref:`fluentd-manual-config-windows`
-* :ref:`windows-manual-fluentd`
-
 Settings
 ======================
 
@@ -64,9 +52,12 @@ The following table shows the configuration options for the Fluent Forward recei
 
 Troubleshooting
 ======================
 
-For troubleshooting Fluentd, see:
+.. raw:: html
+
+
-* :ref:`fluentd-collector-troubleshooting` -* :ref:`otel-linux-uninstall-both-otel-and-tdagent` +.. include:: /_includes/troubleshooting-components.rst + +.. raw:: html -.. caution:: If you wish to collect logs for the target host with Fluentd, make sure Fluentd is installed and turned on in your Collector instance. +
\ No newline at end of file diff --git a/gdi/opentelemetry/install-the-collector.rst b/gdi/opentelemetry/install-the-collector.rst index a0bd32b5f..727335bbe 100644 --- a/gdi/opentelemetry/install-the-collector.rst +++ b/gdi/opentelemetry/install-the-collector.rst @@ -185,24 +185,6 @@ To collect logs with the Splunk Distribution of the OpenTelemetry Collector: * In Kubernetes environments, native OpenTelemetry log collection is supported by default. See more at :ref:`kubernetes-config-logs`. * For Linux and Windows environments (physical hosts and virtual machines), use the Universal Forwarder to send logs to the Splunk platform. See more at :ref:`collector-with-the-uf`. -.. note:: If you wish to collect logs for the target host, install and enable Fluentd in your Collector instance. - -.. raw:: html - - -
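The receiver section above notes that data arrives as Fluent Forward events over TCP. As a hedged sketch of that wire format, the following hand-encodes one ``message``-type event, which is a msgpack array ``[tag, time, record]``. The endpoint ``127.0.0.1:8006`` in the comment is an assumption and must match your receiver's configured address; the ``forward_message`` helper is illustrative, not part of any Splunk tooling.

```python
import socket
import struct

def _fixstr(s: bytes) -> bytes:
    # msgpack fixstr encodes strings up to 31 bytes as (0xA0 | length) + bytes
    assert len(s) < 32
    return bytes([0xA0 | len(s)]) + s

def forward_message(tag: str, timestamp: int, log_line: str) -> bytes:
    """Encode one Fluent Forward 'message' event: a msgpack array
    [tag, time, record], where record is a one-entry map."""
    out = b"\x93"                                  # fixarray, 3 elements
    out += _fixstr(tag.encode())                   # event tag
    out += b"\xce" + struct.pack(">I", timestamp)  # uint32 event time
    out += b"\x81" + _fixstr(b"log") + _fixstr(log_line.encode())  # fixmap {log: line}
    return out

msg = forward_message("app.debug", 1700000000, "synthetic test line")

# Delivery is a plain TCP write to the receiver's endpoint, for example:
# with socket.create_connection(("127.0.0.1", 8006)) as sock:
#     sock.sendall(msg)
```

Sending a handcrafted event like this can help confirm the ``fluentforward`` receiver is listening before pointing a real forwarder at it.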

Collect logs using Fluentd

- - -The Collector can capture logs using Fluentd, but this option is deactivated by default. To learn more, see :ref:`fluentd-receiver`. - -To activate Fluentd refer to: - -* :ref:`Configure Fluentd for log collection in Kubernetes ` -* :ref:`Configure Fluentd for log collection in Linux ` -* :ref:`Configure Fluentd for log collection in Windows ` - -Common sources such as filelog, journald, and Windows Event Viewer are included in the installation. - .. raw:: html diff --git a/gdi/opentelemetry/opentelemetry.rst b/gdi/opentelemetry/opentelemetry.rst index aa56b70ee..79bcaa052 100644 --- a/gdi/opentelemetry/opentelemetry.rst +++ b/gdi/opentelemetry/opentelemetry.rst @@ -78,8 +78,7 @@ Also, the customizations in the Splunk distribution include these additional fea * Better defaults for Splunk products * Discovery mode for metric sources * Automatic discovery and configuration -* Fluentd for log capture, deactivated by default - + .. note:: Check out the :new-page:`Splunk Distribution of the OpenTelemetry Collector repo in GitHub ` for more details. .. raw:: html @@ -115,7 +114,7 @@ The Splunk Distribution of the OpenTelemetry Collector for Kubernetes ingests, m end Infrastructure -- "metrics, logs (native OTel)" --> receivers - B[Back-end services] -- "traces, metrics, logs (native OTel)" --> receivers + B[Back-end services] -- "traces, metrics, logs (native OTel only)" --> receivers C[Front-end experiences] -- "traces" --> S[Splunk Observability Cloud] receivers --> processors @@ -168,8 +167,6 @@ To collect logs with the Splunk Distribution of the OpenTelemetry Collector: * In Kubernetes environments, native OpenTelemetry log collection is supported by default. See more at :ref:`kubernetes-config-logs`. * For Linux and Windows environments (physical hosts and virtual machines), use the Universal Forwarder to send logs to the Splunk platform. See more at :ref:`collector-with-the-uf`. -.. 
note:: If you wish to collect logs for the target host, install and enable Fluentd in your Collector instance.
-
 .. _otel-intro-enterprise:
 
 .. raw:: html
 
diff --git a/gdi/opentelemetry/other-configuration-sources.rst b/gdi/opentelemetry/other-configuration-sources.rst
index d867815f8..bf0a87912 100644
--- a/gdi/opentelemetry/other-configuration-sources.rst
+++ b/gdi/opentelemetry/other-configuration-sources.rst
@@ -7,7 +7,7 @@ Other configuration sources (Alpha/Beta)
 .. meta::
   :description: Configure these optional components to retrieve data from specific configuration sources. After retrieving the data, you can then insert the data into your Splunk Distribution of OpenTelemetry Collector configuration.
 
-In addition to the Collector packages and Fluentd, the following components can be configured:
+In addition to the Collector packages, you can configure the following components:
 
 * :ref:`Environment variable (Alpha) `
 * :ref:`etcd2 (Alpha) `
diff --git a/gdi/opentelemetry/sizing.rst b/gdi/opentelemetry/sizing.rst
index f7a52b73d..c1659a51d 100644
--- a/gdi/opentelemetry/sizing.rst
+++ b/gdi/opentelemetry/sizing.rst
@@ -13,7 +13,7 @@ With a single CPU core, the Collector can ingest the following:
 
 * If handling traces, 15,000 spans per second.
 * If handling metrics, 20,000 data points per second.
-* If handling logs, 10,000 log records per second, including Fluentd ``td-agent``, which forwards logs to the ``fluentforward`` receiver in the Collector. See more at :ref:`fluentd-receiver`.
+* If handling logs, 10,000 log records per second.
 
 Sizing recommendations
 ==========================================
@@ -59,7 +59,7 @@ Scaling recommendations
 
 To define and scale your architecture, analyze the behavior of your workload to understand the loads and format of each signal type, as well as the load's distribution in time.
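The per-core ingest rates in the sizing hunk above can be combined into a quick back-of-envelope core estimate. This is a minimal sketch: the additive model and the ``cores_needed`` helper are illustrative assumptions, not official Splunk guidance.

```python
import math

# Per-core ingest rates quoted above (items per second per CPU core).
PER_CORE = {"spans": 15_000, "datapoints": 20_000, "log_records": 10_000}

def cores_needed(load: dict) -> int:
    """Sum each signal's fraction of a core, then round up."""
    return math.ceil(sum(rate / PER_CORE[signal] for signal, rate in load.items()))

# A mixed workload: 30k spans/s, 40k datapoints/s, 5k log records/s
print(cores_needed({"spans": 30_000, "datapoints": 40_000, "log_records": 5_000}))  # 5
```

Real deployments should also budget headroom for processors, exporters, and load spikes, as the sizing recommendations above describe.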
-For example, consider a scenario with hundreds of Prometheus endpoints to scrape, a terabyte of logs coming from fluentd instances every minute, and some application metrics and OTLP traces. +For example, consider a scenario with hundreds of Prometheus endpoints to scrape, a terabyte of logs ingested every minute, and some application metrics and OTLP traces. In this scenario: diff --git a/gdi/opentelemetry/splunk-collector-troubleshooting.rst b/gdi/opentelemetry/splunk-collector-troubleshooting.rst index 198a9fa85..5f65a9ebb 100644 --- a/gdi/opentelemetry/splunk-collector-troubleshooting.rst +++ b/gdi/opentelemetry/splunk-collector-troubleshooting.rst @@ -196,11 +196,6 @@ No response means the request was sent successfully. You can also pass ``-v`` to Error codes and messages ================================================================================== -You're getting a "pattern not matched" error message ------------------------------------------------------------- - -If you see an error message such as "pattern not matched", this message is from Fluentd, and means that the ```` was unable to match based on the log message. As a result, the log message is not collected. Check the Fluentd configuration and update as required. - You're receiving an HTTP error code ------------------------------------------------------------ diff --git a/gdi/opentelemetry/troubleshoot-logs.rst b/gdi/opentelemetry/troubleshoot-logs.rst index 7aebde865..e5e31b3c1 100644 --- a/gdi/opentelemetry/troubleshoot-logs.rst +++ b/gdi/opentelemetry/troubleshoot-logs.rst @@ -47,30 +47,9 @@ If using Windows, run the following command to check if the source is generating Get-Content myTestLog.log -.. _fluentd-collector-troubleshooting: - -Fluentd isn't configured correctly -========================================= - -Do the following to check the Fluentd configuration: - -#. Check that td-agent is running. On Linux, run ``systemctl status td-agent``. 
On Windows, run ``Get-Service td-agent``. -#. If you changed the configuration, restart Fluentd. On Linux, run ``systemctl restart td-agent``. On Windows, run ``Restart-Service -Name td-agent``. -#. Check fluentd.conf and conf.d/\*. ``@label @SPLUNK`` must be added to every source to activate log collection. -#. Manual configuration might be required to collect logs off the source. Add configuration files to in the conf.d directory as needed. -#. Activate debug logging in fluentd.conf (``log_level debug``), restart td-agent, and check that the source is generating logs. - -While every attempt is made to properly configure permissions, it is possible that td-agent does not have the permission required to collect logs. Debug logging should indicate this issue. - -It's possible that the ```` section configuration does not match the log events. - -If you see a message such as "2021-03-17 02:14:44 +0000 [debug]: #0 connect new socket", Fluentd is working as expected. You need to activate debug logging to see this message. - The Collector isn't configured properly ========================================= -.. note:: Fluentd is part of the Splunk Distribution of OpenTelemetry Collector, but deactivated by default for Linux and Windows. To activate it, use the ``--with-fluentd`` option when installing the Collector for Linux, or the ``with_fluentd = 1`` option when installing the Collector for Windows. - Do the following to check the Collector configuration: #. Go to ``http://localhost:55679/debug/tracez`` to check zPages for samples. You might need to configure the endpoint. @@ -81,15 +60,13 @@ Do the following to check the Collector configuration: Test the Collector by sending synthetic data ================================================================================== -You can manually generate logs. By default, Fluentd monitors journald and /var/log/syslog.log for events. +You can manually generate logs. .. 
code-block:: bash
 
   echo "2021-03-17 02:14:44 +0000 [debug]: test" >>/var/log/syslog.log
   echo "2021-03-17 02:14:44 +0000 [debug]: test" | systemd-cat
 
-.. caution:: Fluentd requires properly structured syslog to pick up the log line.
-
 .. _unwanted_profiling_logs:
 
 Unwanted profiling logs appearing in Splunk Observability Cloud
diff --git a/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst b/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst
index 2c56b2970..417114534 100644
--- a/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst
+++ b/infrastructure/metrics-pipeline/metrics-pipeline-intro.rst
@@ -167,20 +167,16 @@ MPM is not available for the following types of metrics:
 
 Aggregation rules limitations
 --------------------------------------------------------------------------------
 
-You can only create aggregation rules using your metrics' dimensions. Aggregation using custom properties or tags is not supported. For more information on each type of metadata, refer to :ref:`metrics-dimensions-mts`.
+* You can only create aggregation rules using your metrics' dimensions. Aggregation using custom properties or tags is not supported. For more information on each type of metadata, refer to :ref:`metrics-dimensions-mts`.
+* New aggregation rules are applied to new MTS only. Existing MTS are only used as a reference to create the rule and display the projected outcome.
 
 Histogram metrics limitations
 --------------------------------------------------------------------------------
 
-You cannot archive or aggregate histogram metrics. By default, they are routed to the real-time tier, and you can drop them with rules as well.
+You can't archive or aggregate histogram metrics. By default, they are routed to the real-time tier, and you can drop them with rules as well.
 
 ..
_metrics-pipeline-intro-more: -Aggregation rules limitations --------------------------------------------------------------------------------- - -You can only create aggregation rules using your metrics' dimensions. Aggregation using custom properties or tags is not supported. For more information on each type of metadata, refer to :ref:`metrics-dimensions-mts`. - Learn more ===============================================================================