From c9c1440f9f3c0a5fdbeeb2bc358091aea62dff0a Mon Sep 17 00:00:00 2001 From: Marci W <333176+marciw@users.noreply.github.com> Date: Thu, 13 Mar 2025 12:42:39 -0400 Subject: [PATCH 1/3] Delete ece-find.md --- .../cloud/cloud-enterprise/ece-find.md | 20 ------------------- 1 file changed, 20 deletions(-) delete mode 100644 raw-migrated-files/cloud/cloud-enterprise/ece-find.md diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-find.md b/raw-migrated-files/cloud/cloud-enterprise/ece-find.md deleted file mode 100644 index ecbeac2a4c..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-find.md +++ /dev/null @@ -1,20 +0,0 @@ -# Finding deployments, finding problems [ece-find] - -When you installed Elastic Cloud Enterprise and [logged into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md) for the first time, you were greeted by two deployments. We’ve also shown you how to [create your own first deployment](../../../deploy-manage/deploy/cloud-enterprise/create-deployment.md), but that still only makes a few deployments. What if you had hundreds of deployments to look after or maybe even a thousand? How would you find the ones that need your attention? - -The **Deployments** page in the Cloud UI provides several ways to find deployments that might need your attention, whether that’s deployments that have a problem or deployments that are at a specific version level or really almost anything you might want to find on a complex production system: - -* Check the visual health indicators of deployments -* Search for partial or whole deployment names or IDs in the search text box -* Add filters to the **Deployments** view to filter for specific conditions: - - :::{image} ../../../images/cloud-enterprise-deployment-filter.png - :alt: Add a filter - ::: - - Looking for all deployments of a specific version, because you want to upgrade them? Easy. 
Or what about that deployment you noticed before lunch that seemed to be spending an awfully long time changing its configuration—is it done? Just add a filter to find any ongoing configuration changes. - - - - - From d150dd40b95d633be6446bb447657929730be3a9 Mon Sep 17 00:00:00 2001 From: Marci W <333176+marciw@users.noreply.github.com> Date: Thu, 13 Mar 2025 16:14:43 -0400 Subject: [PATCH 2/3] delete delete delete --- .../cloud-enterprise/ece-troubleshooting.md | 13 - .../ech-metrics-memory-pressure.md | 49 --- raw-migrated-files/cloud/cloud/ec-get-help.md | 60 ---- .../cloud/cloud/ec-metrics-memory-pressure.md | 49 --- ...report-pipeline-flow-worker-utilization.md | 38 --- .../logstash/health-report-pipeline-status.md | 32 -- raw-migrated-files/logstash/logstash/index.md | 3 - .../logstash/logstash/troubleshooting.md | 28 -- .../logstash/logstash/ts-logstash.md | 297 ------------------ raw-migrated-files/toc.yml | 11 - .../logstash/health-report-pipelines.md | 77 ++++- 11 files changed, 72 insertions(+), 585 deletions(-) delete mode 100644 raw-migrated-files/cloud/cloud-enterprise/ece-troubleshooting.md delete mode 100644 raw-migrated-files/cloud/cloud-heroku/ech-metrics-memory-pressure.md delete mode 100644 raw-migrated-files/cloud/cloud/ec-get-help.md delete mode 100644 raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md delete mode 100644 raw-migrated-files/logstash/logstash/health-report-pipeline-flow-worker-utilization.md delete mode 100644 raw-migrated-files/logstash/logstash/health-report-pipeline-status.md delete mode 100644 raw-migrated-files/logstash/logstash/index.md delete mode 100644 raw-migrated-files/logstash/logstash/troubleshooting.md delete mode 100644 raw-migrated-files/logstash/logstash/ts-logstash.md diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-troubleshooting.md b/raw-migrated-files/cloud/cloud-enterprise/ece-troubleshooting.md deleted file mode 100644 index 417d402d20..0000000000 ---
a/raw-migrated-files/cloud/cloud-enterprise/ece-troubleshooting.md +++ /dev/null @@ -1,13 +0,0 @@ -# Troubleshooting [ece-troubleshooting] - -Use the information in this section to help troubleshoot common issues with Elastic Cloud Enterprise or to get help. - -* [*Common issues*](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md) -* [*Use the emergency roles token*](../../../troubleshoot/deployments/cloud-enterprise/use-emergency-roles-token.md) -* [*Remove Elastic Cloud Enterprise*](../../../troubleshoot/deployments/cloud-enterprise/remove-cloud-enterprise.md) -* [*Verify ZooKeeper Sync Status*](../../../troubleshoot/deployments/cloud-enterprise/verify-zookeeper-sync-status.md) -* [*Rebuilding a broken Zookeeper quorum*](../../../troubleshoot/deployments/cloud-enterprise/rebuilding-broken-zookeeper-quorum.md) -* [*Troubleshooting container engines*](../../../troubleshoot/deployments/cloud-enterprise/troubleshooting-container-engines.md) -* [*Run ECE diagnostics tool*](../../../troubleshoot/deployments/cloud-enterprise/run-ece-diagnostics-tool.md) -* [*Ask for help*](../../../troubleshoot/deployments/cloud-enterprise/ask-for-help.md) - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-metrics-memory-pressure.md b/raw-migrated-files/cloud/cloud-heroku/ech-metrics-memory-pressure.md deleted file mode 100644 index 641f7f3c29..0000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-metrics-memory-pressure.md +++ /dev/null @@ -1,49 +0,0 @@ -# How does high memory pressure affect performance? [ech-metrics-memory-pressure] - -When you load up an {{es}} cluster with an indexing and search workload that matches the size of the cluster well, you typically get the classic JVM heap sawtooth pattern as memory gets used and then gets freed up again by the garbage collector. 
Memory usage increases until it reaches 75% and then drops again as memory is freed up: - -:::{image} ../../../images/cloud-heroku-metrics-memory-pressure-sawtooth.png -:alt: The classic JVM sawtooth pattern that shows memory usage -::: - -Now let’s suppose you have a cluster with three nodes and much higher memory pressure overall. In this example, two of the three nodes are maxing out very regularly for extended periods and one node is consistently hovering around the 75% mark. - -:::{image} ../../../images/cloud-heroku-metrics-high-memory-pressure.png -:alt: High memory pressure -::: - -High memory pressure works against cluster performance in two ways: As memory pressure rises to 75% and above, less memory remains available, but your cluster now also needs to spend some CPU resources to reclaim memory through garbage collection. These CPU resources are not available to handle user requests while garbage collection is going on. As a result, response times for user requests increase as the system becomes more and more resource constrained. If memory pressure continues to rise and reaches near 100%, a much more aggressive form of garbage collection is used, which will in turn affect cluster response times dramatically. - -:::{image} ../../../images/cloud-heroku-metrics-high-response-times.png -:alt: High response times -::: - -In our example, the **Index Response Times** metric shows that high memory pressure leads to a significant performance impact. As two of the three nodes max out their memory several times and plateau at 100% memory pressure for 30 to 45 minutes at a time, there is a sharp increase in the index response times around 23:00, 00:00, and 01:00. Search response times, which are not shown, also increase but not as dramatically. Only the node in blue, which consistently shows a much healthier memory pressure that rarely exceeds 75%, can sustain a lower response time.
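The 75% and near-100% thresholds discussed above can be turned into a quick health check. The sketch below is illustrative only: it assumes you have already fetched each node's `heap_used_percent` value (available per node from the {{es}} `GET _nodes/stats/jvm` API), and the node names and numbers are made up.

```python
def classify_memory_pressure(heap_used_percent: int) -> str:
    """Map a node's JVM heap usage to a rough pressure level.

    Thresholds mirror the discussion above: the collector typically frees
    memory around 75%, and near 100% a much more aggressive (and much more
    expensive) form of garbage collection kicks in.
    """
    if heap_used_percent >= 95:
        return "critical"  # aggressive GC likely; expect large response-time spikes
    if heap_used_percent >= 75:
        return "high"      # GC overhead is already competing with user requests
    return "normal"        # classic sawtooth territory

# Hypothetical three-node cluster, as in the scenario described above:
nodes = {"node-1": 100, "node-2": 98, "node-3": 72}
print({name: classify_memory_pressure(p) for name, p in nodes.items()})
# → {'node-1': 'critical', 'node-2': 'critical', 'node-3': 'normal'}
```
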
- -If the performance impact from high memory pressure is not acceptable, you need to increase the cluster size or reduce the workload. - - -## Increase the deployment size [echincrease_the_deployment_size] - -Scaling with Elasticsearch Add-On for Heroku is easy: simply log in to the Elasticsearch Add-On for Heroku console, select your deployment, select edit, and either increase the number of zones or the size per zone. - - -## Reduce the workload [echreduce_the_workload] - -By understanding and adjusting the way your data is indexed, retained, and searched you can reduce the amount of memory used and increase performance. - - -### Sharding strategy [echsharding_strategy] - -{{es}} indices are divided into shards. Understanding shards is important when tuning {{es}}. Check [Size your shards](../../../deploy-manage/production-guidance/optimize-performance/size-shards.md) in the {{es}} documentation to learn more. - - -### Data retention [echdata_retention] - -The total amount of data being searched affects search performance. Check the tutorial [Automate rollover with index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md) (ILM) to automate data retention policies. - - -### Tune for search speed [echtune_for_search_speed] - -The documentation [Tune for search speed](../../../deploy-manage/production-guidance/optimize-performance/search-speed.md) provides details on how to analyze queries, optimize field types, minimize the fields searched, and more. - diff --git a/raw-migrated-files/cloud/cloud/ec-get-help.md b/raw-migrated-files/cloud/cloud/ec-get-help.md deleted file mode 100644 index a1065d46d7..0000000000 --- a/raw-migrated-files/cloud/cloud/ec-get-help.md +++ /dev/null @@ -1,60 +0,0 @@ -# Getting help [ec-get-help] - -With your {{ecloud}} subscription, you get access to support from the creators of Elasticsearch, Kibana, Beats, Logstash, and much more. We’re here to help! 
- - -## How do I open a support case? [ec_how_do_i_open_a_support_case] - -All roads lead to the Elastic Support Portal, where you can access all your cases, subscriptions, and licenses. - -As an {{ecloud}} customer, you will receive an email with instructions on how to log in to the Support Portal, where you can track both current and archived cases. If you are a new customer who just signed up for {{ecloud}}, it can take a few hours for your Support Portal access to be set up. If you have questions, reach out to us at `support@elastic.co`. - -::::{note} -With the release of the new Support Portal, even if you have an existing account, you might be prompted to update your password. -:::: - - -There are three ways you can get to the portal: - -* Go directly to the Support Portal: [http://support.elastic.co](http://support.elastic.co) -* From the {{ecloud}} Console: Go to the [Support page](https://cloud.elastic.co/support?page=docs&placement=docs-body) or select the support icon, which looks like a life preserver, on any page in the console. -* Contact us by email: `support@elastic.co` - - If you contact us by email, please use the email address that you registered with, so that we can help you more quickly. If you are using a distribution list as your registered email, you can also register a second email address with us. Just open a case to let us know the name and email address you would like to be added. - - -When opening a case, there are a few things you can do to get help faster: - -* Include the deployment ID that you want help with, especially if you have several deployments. The deployment ID can be found on the overview page for your cluster in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -* Describe the problem. Include any relevant details, including error messages you encountered, dates and times when the problem occurred, or anything else you think might be helpful. -* Upload any pertinent files.
- - -## What level of support can I expect? [ec_what_level_of_support_can_i_expect] - -Support is governed by the [{{ecloud}} Standard Terms of Service](https://www.elastic.co/legal/terms-of-service/cloud). The level of support you can expect to receive applies to your {{ecloud}} environment only and depends on your subscription level: - -{{ecloud}} Standard subscriptions -: Support is provided by email or through the Elastic Support Portal. The main focus of support is to ensure your {{ech}} deployment shows a green status and is available. There is no guaranteed initial or ongoing response time, but we do strive to engage on every issue within three business days. We do not offer weekend coverage, so we respond Monday through Friday only. To learn more, check [Working with Elastic Support {{ecloud}} Standard](https://www.elastic.co/support/welcome/cloud). - -{{ecloud}} Gold and Platinum subscriptions -: Support is handled by email or through the Elastic Support Portal. Provides guaranteed response times for support issues, better support coverage hours, and support contacts at Elastic. Also includes support for how-to and development questions. The exact support coverage depends on whether you are a Gold or Platinum customer. To learn more, check [{{ecloud}} Premium Support Services Policy](https://www.elastic.co/legal/support_policy/cloud_premium). - -::::{note} -If you are in a free trial, you are also eligible for {{ecloud}} Standard level support for as long as the trial is active. -:::: - - -If you are on an {{ecloud}} Standard subscription and you are interested in moving to Gold or Platinum support, please [contact us](https://www.elastic.co/cloud/contact). We also recommend that you read our best practices guide for getting the most out of your support experience: [https://www.elastic.co/support/welcome](https://www.elastic.co/support/welcome).
- - -## Join the community forums [ec_join_the_community_forums] - -Elasticsearch, Logstash, and Kibana enjoy the benefit of having vibrant and helpful communities. You have our assurance of high-quality support and a single source of truth as an {{ecloud}} customer, but the Elastic community can also be a useful resource for you whenever you need it. - -::::{tip} -As of May 1, 2017, support for {{ecloud}} **Standard** customers has moved from the Discuss forum to the [Elastic Support Portal](https://support.elastic.co). You should receive login instructions by email. We will also monitor the forum and help you get into the Support Portal, in case you’re unsure where to go. -:::: - - -If you have any technical questions that are not for our Support team, hop on our [Elastic community forums](https://discuss.elastic.co/) and get answers from the experts in the community, including people from Elastic. diff --git a/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md b/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md deleted file mode 100644 index 33a8d51827..0000000000 --- a/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md +++ /dev/null @@ -1,49 +0,0 @@ -# How does high memory pressure affect performance? [ec-metrics-memory-pressure] - -When you load up an {{es}} cluster with an indexing and search workload that matches the size of the cluster well, you typically get the classic JVM heap sawtooth pattern as memory gets used and then gets freed up again by the garbage collector. Memory usage increases until it reaches 75% and then drops again as memory is freed up: - -:::{image} ../../../images/cloud-metrics-memory-pressure-sawtooth.png -:alt: The classic JVM sawtooth pattern that shows memory usage -::: - -Now let’s suppose you have a cluster with three nodes and much higher memory pressure overall.
In this example, two of the three nodes are maxing out very regularly for extended periods and one node is consistently hovering around the 75% mark. - -:::{image} ../../../images/cloud-metrics-high-memory-pressure.png -:alt: High memory pressure -::: - -High memory pressure works against cluster performance in two ways: As memory pressure rises to 75% and above, less memory remains available, but your cluster now also needs to spend some CPU resources to reclaim memory through garbage collection. These CPU resources are not available to handle user requests while garbage collection is going on. As a result, response times for user requests increase as the system becomes more and more resource constrained. If memory pressure continues to rise and reaches near 100%, a much more aggressive form of garbage collection is used, which will in turn affect cluster response times dramatically. - -:::{image} ../../../images/cloud-metrics-high-response-times.png -:alt: High response times -::: - -In our example, the **Index Response Times** metric shows that high memory pressure leads to a significant performance impact. As two of the three nodes max out their memory several times and plateau at 100% memory pressure for 30 to 45 minutes at a time, there is a sharp increase in the index response times around 23:00, 00:00, and 01:00. Search response times, which are not shown, also increase but not as dramatically. Only the node in blue, which consistently shows a much healthier memory pressure that rarely exceeds 75%, can sustain a lower response time. - -If the performance impact from high memory pressure is not acceptable, you need to increase the cluster size or reduce the workload. - - -## Increase the deployment size [ec_increase_the_deployment_size] - -Scaling with {{ech}} is easy: simply log in to the {{ecloud}} Console, select your deployment, select edit, and either increase the number of zones or the size per zone.
- - -## Reduce the workload [ec_reduce_the_workload] - -By understanding and adjusting the way your data is indexed, retained, and searched you can reduce the amount of memory used and increase performance. - - -### Sharding strategy [ec_sharding_strategy] - -{{es}} indices are divided into shards. Understanding shards is important when tuning {{es}}. Check [Size your shards](/deploy-manage/production-guidance/optimize-performance/size-shards.md) in the {{es}} documentation to learn more. - - -### Data retention [ec_data_retention] - -The total amount of data being searched affects search performance. Check the tutorial [Automate rollover with index lifecycle management](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md) (ILM) to automate data retention policies. - - -### Tune for search speed [ec_tune_for_search_speed] - -The documentation [Tune for search speed](/deploy-manage/production-guidance/optimize-performance/search-speed.md) provides details on how to analyze queries, optimize field types, minimize the fields searched, and more. - diff --git a/raw-migrated-files/logstash/logstash/health-report-pipeline-flow-worker-utilization.md b/raw-migrated-files/logstash/logstash/health-report-pipeline-flow-worker-utilization.md deleted file mode 100644 index 3515dda81c..0000000000 --- a/raw-migrated-files/logstash/logstash/health-report-pipeline-flow-worker-utilization.md +++ /dev/null @@ -1,38 +0,0 @@ -# Health Report Pipeline Flow: Worker Utilization [health-report-pipeline-flow-worker-utilization] - -The Pipeline indicator has a `flow:worker_utilization` probe that is capable of producing one of several diagnoses about blockages in the pipeline. - -A pipeline is considered "blocked" when its workers are fully-utilized, because if they are consistently spending 100% of their time processing events, they are unable to pick up new events from the queue. 
This can cause back-pressure to cascade to upstream services, which can result in data loss or duplicate processing depending on upstream configuration. - -The issue typically stems from one or more causes: - -* a downstream resource being blocked, -* a plugin consuming more resources than expected, and/or -* insufficient resources being allocated to the pipeline. - -To address the issue, observe the [Plugin flow rates](https://www.elastic.co/guide/en/logstash/current/node-stats-api.html#plugin-flow-rates) from the [Node Stats API](https://www.elastic.co/guide/en/logstash/current/node-stats-api.html), and identify which plugins have the highest `worker_utilization`. This will tell you which plugins are consuming the most of the pipeline’s worker resources. - -* If the offending plugin connects to a downstream service or another pipeline that is exerting back-pressure, the issue needs to be addressed in the downstream service or pipeline. -* If the offending plugin connects to a downstream service with high network latency, throughput for the pipeline may be improved by [allocating more worker resources to the pipeline](logstash://reference/tuning-logstash.md#tuning-logstash-settings). -* If the offending plugin is a computation-heavy filter such as `grok` or `kv`, its configuration may need to be tuned to eliminate wasted computation. - -## $$$blocked-5m$$$Blocked Pipeline (5 minutes) [health-report-pipeline-flow-worker-utilization-diagnosis-blocked-5m] - -A pipeline that has been completely blocked for five minutes or more represents a critical blockage to the flow of events through your pipeline that needs to be addressed immediately to avoid or limit data loss. See above for troubleshooting steps.
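To make the triage step above concrete, here is a hedged sketch of ranking a pipeline's plugins by worker utilization. The field layout mirrors the per-plugin flow metrics returned by the Node Stats API, but the plugin ids and numbers below are invented for illustration.

```python
def busiest_plugins(plugins, window="last_1_minute"):
    """Return (plugin id, worker_utilization) pairs, busiest first.

    `plugins` is the list of plugin entries from a pipeline's node stats,
    each carrying a `flow.worker_utilization` metric with rolling windows.
    """
    ranked = []
    for plugin in plugins:
        flow = plugin.get("flow", {}).get("worker_utilization", {})
        if window in flow:
            ranked.append((plugin["id"], flow[window]))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Made-up sample resembling the API's shape; the grok filter dominates here,
# so it is the first place to look for wasted computation.
sample = [
    {"id": "grok-parse", "flow": {"worker_utilization": {"last_1_minute": 87.2}}},
    {"id": "elasticsearch-out", "flow": {"worker_utilization": {"last_1_minute": 11.4}}},
]
print(busiest_plugins(sample))
```
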
- - -## $$$nearly-blocked-5m$$$Nearly Blocked Pipeline (5 minutes) [health-report-pipeline-flow-worker-utilization-diagnosis-nearly-blocked-5m] - -A pipeline that has been nearly blocked for five minutes or more may be creating intermittent blockage to the flow of events through your pipeline, which can result in the risk of data loss. See above for troubleshooting steps. - - -## $$$blocked-1m$$$Blocked Pipeline (1 minute) [health-report-pipeline-flow-worker-utilization-diagnosis-blocked-1m] - -A pipeline that has been completely blocked for one minute or more represents a high-risk or upcoming blockage to the flow of events through your pipeline that likely needs to be addressed soon to avoid or limit data loss. See above for troubleshooting steps. - - -## $$$nearly-blocked-1m$$$Nearly Blocked Pipeline (1 minute) [health-report-pipeline-flow-worker-utilization-diagnosis-nearly-blocked-1m] - -A pipeline that has been nearly blocked for one minute or more may be creating intermittent blockage to the flow of events through your pipeline, which can result in the risk of data loss. See above for troubleshooting steps. - - diff --git a/raw-migrated-files/logstash/logstash/health-report-pipeline-status.md b/raw-migrated-files/logstash/logstash/health-report-pipeline-status.md deleted file mode 100644 index a5447d37f6..0000000000 --- a/raw-migrated-files/logstash/logstash/health-report-pipeline-status.md +++ /dev/null @@ -1,32 +0,0 @@ -# Health Report Pipeline Status [health-report-pipeline-status] - -The Pipeline indicator has a `status` probe that is capable of producing one of several diagnoses about the pipeline’s lifecycle, indicating whether the pipeline is currently running. - -## $$$loading$$$Loading Pipeline [health-report-pipeline-status-diagnosis-loading] - -A pipeline that is loading is not yet processing data, and is considered a temporarily-degraded pipeline state. 
Some plugins perform actions or pre-validation that can delay the starting of the pipeline, such as when a plugin pre-establishes a connection to an external service before allowing the pipeline to start. When these plugins take significant time to start up, the whole pipeline can remain in a loading state for an extended time. - -If your pipeline does not come up in a reasonable amount of time, consider checking the Logstash logs to see if the plugin shows evidence of being caught in a retry loop. - - -## $$$finished$$$Finished Pipeline [health-report-pipeline-status-diagnosis-finished] - -A Logstash pipeline whose input plugins have all completed will be shut down once events have finished processing. - -Many plugins can be configured to run indefinitely, either by listening for new inbound events or by polling for events on a schedule. A finished pipeline will not produce or process any more events until it is restarted, which will occur if the pipeline’s definition is changed and pipeline reloads are enabled. If you wish to keep your pipeline running, consider configuring its input to run on a schedule or otherwise listen for new events. - - -## $$$terminated$$$Terminated Pipeline [health-report-pipeline-status-diagnosis-terminated] - -When a Logstash pipeline’s filter or output plugins crash, the entire pipeline is terminated and intervention is required. - -A terminated pipeline will not produce or process any more events until it is restarted, which will occur if the pipeline’s definition is changed and pipeline reloads are enabled. Check the logs to determine the cause of the crash, and report the issue to the plugin maintainers. - - -## $$$unknown$$$Unknown Pipeline [health-report-pipeline-status-diagnosis-unknown] - -When a Logstash pipeline either cannot be created or has recently been deleted, the health report doesn’t know enough to produce a meaningful status.
- -Check the logs to determine if the pipeline crashed during creation, and report the issue to the plugin maintainers. - - diff --git a/raw-migrated-files/logstash/logstash/index.md b/raw-migrated-files/logstash/logstash/index.md deleted file mode 100644 index ce48beeca9..0000000000 --- a/raw-migrated-files/logstash/logstash/index.md +++ /dev/null @@ -1,3 +0,0 @@ -# Logstash - -Migrated files from the Logstash book. \ No newline at end of file diff --git a/raw-migrated-files/logstash/logstash/troubleshooting.md b/raw-migrated-files/logstash/logstash/troubleshooting.md deleted file mode 100644 index a0aafa17d1..0000000000 --- a/raw-migrated-files/logstash/logstash/troubleshooting.md +++ /dev/null @@ -1,28 +0,0 @@ -# Troubleshooting [troubleshooting] - -If you have issues installing or running {{ls}}, check out these sections: - -* [Troubleshooting {{ls}}](../../../troubleshoot/ingest/logstash.md) -* [Troubleshooting plugins](https://www.elastic.co/guide/en/logstash/master/ts-plugins-general.html) -* [Troubleshooting specific plugins](https://www.elastic.co/guide/en/logstash/master/ts-plugins.html) - -We are adding more troubleshooting tips, so please check back soon. - - -## Contribute tips [add-tips] - -If you have something to add, please: - -* create an issue at [https://github.com/elastic/logstash/issues](https://github.com/elastic/logstash/issues), or -* create a pull request with your proposed changes at [https://github.com/elastic/logstash](https://github.com/elastic/logstash). - - -## Discussion forums [discuss] - -Also check out the [Logstash discussion forum](https://discuss.elastic.co/c/logstash). 
- - - - - - diff --git a/raw-migrated-files/logstash/logstash/ts-logstash.md b/raw-migrated-files/logstash/logstash/ts-logstash.md deleted file mode 100644 index 1ebe99868e..0000000000 --- a/raw-migrated-files/logstash/logstash/ts-logstash.md +++ /dev/null @@ -1,297 +0,0 @@ -# Troubleshooting {{ls}} [ts-logstash] - -## Installation and setup [ts-install] - -### Inaccessible temp directory [ts-temp-dir] - -Certain versions of the JRuby runtime and libraries in certain plugins (the Netty network library in the TCP input, for example) copy executable files to the temp directory. This situation causes subsequent failures when `/tmp` is mounted `noexec`. - -**Sample error** - -```sh -[2018-03-25T12:23:01,149][ERROR][org.logstash.Logstash ] -java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: -(LoadError) Could not load FFI Provider: (NotImplementedError) FFI not -available: java.lang.UnsatisfiedLinkError: /tmp/jffi5534463206038012403.so: -/tmp/jffi5534463206038012403.so: failed to map segment from shared object: -Operation not permitted -``` - -**Possible solutions** - -* Change setting to mount `/tmp` with `exec`. -* Specify an alternate directory using the `-Djava.io.tmpdir` setting in the `jvm.options` file. 
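As a sketch of the second option: create a dedicated temp directory on a filesystem mounted with `exec` and point the JVM at it. The `LS_TMP` path below is an assumption; any directory writable by the Logstash user on an exec-mounted filesystem works.

```shell
# Assumed location; substitute any exec-mounted directory the logstash
# user can write to, and create it first: mkdir -p "$LS_TMP"
LS_TMP="${LS_TMP:-/usr/share/logstash/tmp}"

# Line to add to config/jvm.options (restart Logstash afterwards):
echo "-Djava.io.tmpdir=$LS_TMP"
```
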
- - - -## {{ls}} start up [ts-startup] - -### *Illegal reflective access* errors [ts-illegal-reflective-error] - -After an upgrade, Logstash may show warnings similar to these: - -```sh -WARNING: An illegal reflective access operation has occurred -WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/{...}/jruby{...}jopenssl.jar) to field java.security.MessageDigest.provider -WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper -WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations -WARNING: All illegal access operations will be denied in a future release -``` - -These errors appear related to [a known issue with JRuby](https://github.com/jruby/jruby/issues/4834). - -**Work around** - -Try adding these values to the `jvm.options` file. - -```sh ---add-opens=java.base/java.security=ALL-UNNAMED ---add-opens=java.base/java.io=ALL-UNNAMED ---add-opens=java.base/java.nio.channels=ALL-UNNAMED ---add-opens=java.base/sun.nio.ch=ALL-UNNAMED ---add-opens=java.management/sun.management=ALL-UNNAMED -``` - -**Notes:** - -* These settings allow Logstash to start without warnings. -* This workaround has been tested with simple pipelines. If you have experiences to share, please comment in the [issue](https://github.com/elastic/logstash/issues/10496). - - -### *Permission denied - NUL* errors on Windows [ts-windows-permission-denied-NUL] - -Logstash may not start with some user-supplied versions of the JDK on Windows. - -**Sample error** - -```sh -[FATAL] 2022-04-27 15:13:16.650 [main] Logstash - Logstash stopped processing because of an error: (EACCES) Permission denied - NUL -org.jruby.exceptions.SystemCallError: (EACCES) Permission denied - NUL -``` - -This error appears to be related to a [JDK issue](https://bugs.openjdk.java.net/browse/JDK-8285445) where a new property was added with an inappropriate default.
- -This issue affects some OpenJDK-derived JVM versions (Adoptium, OpenJDK, and Azul Zulu) on Windows: - -* `11.0.15+10` -* `17.0.3+7` - -**Work around** - -* Use the [bundled JDK](logstash://reference/getting-started-with-logstash.md#ls-jvm) included with Logstash -* Or, try adding this value to the `jvm.options` file, and restarting Logstash - - ```sh - -Djdk.io.File.enableADS=true - ``` - - - -### Container exits with *An unexpected error occurred!* message [ts-container-cgroup] - -{{ls}} running in a container may not start due to a [bug in the JDK](https://bugs.openjdk.org/browse/JDK-8343191). - -**Sample error** - -```sh -[FATAL] 2024-11-11 11:11:11.465 [LogStash::Runner] runner - An unexpected error occurred! {:error=>#, :backtrace=>[ - "java.util.Objects.requireNonNull(java/util/Objects.java:233)", - "sun.nio.fs.UnixFileSystem.getPath(sun/nio/fs/UnixFileSystem.java:296)", - "java.nio.file.Path.of(java/nio/file/Path.java:148)", - "java.nio.file.Paths.get(java/nio/file/Paths.java:69)", - "jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(jdk/internal/platform/CgroupUtil.java:67)", - "java.security.AccessController.doPrivileged(java/security/AccessController.java:571)", - "jdk.internal.platform.CgroupUtil.readStringValue(jdk/internal/platform/CgroupUtil.java:69)", - "jdk.internal.platform.CgroupSubsystemController.getStringValue(jdk/internal/platform/CgroupSubsystemController.java:65)", - "jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getCpuSetCpus(jdk/internal/platform/cgroupv1/CgroupV1Subsystem.java:275)", - "jdk.internal.platform.CgroupMetrics.getCpuSetCpus(jdk/internal/platform/CgroupMetrics.java:100)", - "com.sun.management.internal.OperatingSystemImpl.isCpuSetSameAsHostCpuSet(com/sun/management/internal/OperatingSystemImpl.java:277)", - "com.sun.management.internal.OperatingSystemImpl$ContainerCpuTicks.getContainerCpuLoad(com/sun/management/internal/OperatingSystemImpl.java:96)", - 
"com.sun.management.internal.OperatingSystemImpl.getProcessCpuLoad(com/sun/management/internal/OperatingSystemImpl.java:271)", - "org.logstash.instrument.monitors.ProcessMonitor$Report.(org/logstash/instrument/monitors/ProcessMonitor.java:63)", - "org.logstash.instrument.monitors.ProcessMonitor.detect(org/logstash/instrument/monitors/ProcessMonitor.java:136)", - "org.logstash.instrument.reports.ProcessReport.generate(org/logstash/instrument/reports/ProcessReport.java:35)", - "jdk.internal.reflect.DirectMethodHandleAccessor.invoke(jdk/internal/reflect/DirectMethodHandleAccessor.java:103)", - "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:580)", - "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:300)", - "org.jruby.javasupport.JavaMethod.invokeStaticDirect(org/jruby/javasupport/JavaMethod.java:222)", - "RUBY.collect_process_metrics(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb:102)", - "RUBY.collect(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb:73)", - "RUBY.start(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/base.rb:72)", - "org.jruby.RubySymbol$SymbolProcBody.yieldSpecific(org/jruby/RubySymbol.java:1541)", - "org.jruby.RubySymbol$SymbolProcBody.doYield(org/jruby/RubySymbol.java:1534)", - "org.jruby.RubyArray.collectArray(org/jruby/RubyArray.java:2770)", - "org.jruby.RubyArray.map(org/jruby/RubyArray.java:2803)", - "org.jruby.RubyArray$INVOKER$i$0$0$map.call(org/jruby/RubyArray$INVOKER$i$0$0$map.gen)", - "RUBY.start(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_pollers.rb:41)", - "RUBY.configure_metrics_collectors(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:477)", - "RUBY.initialize(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:88)", - "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", - 
  "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)",
  "RUBY.create_agent(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:552)",
  "RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:434)",
  "RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/clamp-1.0.1/lib/clamp/command.rb:68)",
  "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:293)",
  "RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/clamp-1.0.1/lib/clamp/command.rb:133)",
  "usr.share.logstash.lib.bootstrap.environment.(/usr/share/logstash/lib/bootstrap/environment.rb:89)",
  "usr.share.logstash.lib.bootstrap.environment.run(usr/share/logstash/lib/bootstrap//usr/share/logstash/lib/bootstrap/environment.rb)",
  "java.lang.invoke.MethodHandle.invokeWithArguments(java/lang/invoke/MethodHandle.java:733)",
  "org.jruby.Ruby.runScript(org/jruby/Ruby.java:1245)",
  "org.jruby.Ruby.runNormally(org/jruby/Ruby.java:1157)",
  "org.jruby.Ruby.runFromMain(org/jruby/Ruby.java:983)",
  "org.logstash.Logstash.run(org/logstash/Logstash.java:163)",
  "org.logstash.Logstash.main(org/logstash/Logstash.java:73)"
  ]
}
[FATAL] 2024-11-11 11:11:11.516 [LogStash::Runner] Logstash - Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
  at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:921) ~[jruby.jar:?]
  at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:880) ~[jruby.jar:?]
  at usr.share.logstash.lib.bootstrap.environment.(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
```

This error can happen when cgroups v2 is not enabled, such as when running on Red Hat Enterprise Linux 8.

**Workaround**

Follow your operating system's instructions for enabling cgroups v2.


## Troubleshooting persistent queues [ts-pqs]

Symptoms of persistent queue problems include {{ls}} or one or more pipelines failing to start, accompanied by an error message similar to this one:

```
message=>"java.io.IOException: Page file size is too small to hold elements"
```

See the [troubleshooting information](logstash://reference/persistent-queues.md#troubleshooting-pqs) in the persistent queue section for help remediating problems with persistent queues.


## Data ingestion [ts-ingest]

### Error response code 429 [ts-429]

A `429` response code indicates that an application is busy handling other requests. For example, Elasticsearch sends a `429` code to notify Logstash (or other indexers) that a bulk request failed because the ingest queue is full. Logstash will retry sending documents.

**Possible actions**

Check {{es}} to see if it needs attention.
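Logstash's Elasticsearch output handles these retries for you, but the general retry-with-backoff shape is worth understanding when you build your own indexers. A minimal sketch in Python — hypothetical client code under assumed names (`send_with_backoff`, a caller-supplied `send` callable), not Logstash's actual implementation:

```python
import random
import time

def send_with_backoff(send, max_retries=6, base_delay=0.5):
    """Retry a bulk send while the server answers 429, backing off exponentially."""
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status
        # Exponential backoff with jitter, so many retrying clients don't stampede.
        delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
        time.sleep(delay)
    return 429  # still rejected after all retries; the cluster needs attention

# Simulate a server whose ingest queue drains after two rejections.
responses = iter([429, 429, 200])
print(send_with_backoff(lambda: next(responses), base_delay=0.001))  # → 200
```

If the final status is still `429`, retrying harder won't help — that is the point at which to inspect the cluster itself, for example with the resources below.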
* [Cluster stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-stats)
* [Monitor a cluster](../../../deploy-manage/monitor.md)

**Sample error**

```
[2018-08-21T20:05:36,111][INFO ][logstash.outputs.elasticsearch] retrying
failed action with response code: 429
({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of
org.elasticsearch.transport.TransportService$7@85be457 on
EsThreadPoolExecutor[bulk, queue capacity = 200,
org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@538c9d8a[Running,
pool size = 16, active threads = 16, queued tasks = 200, completed tasks =
685]]"})
```


## Performance [ts-performance]

For general performance tuning tips and guidelines, see [*Performance tuning*](logstash://reference/performance-tuning.md).


## Troubleshooting a pipeline [ts-pipeline]

Pipelines, by definition, are unique. Here are some guidelines to help you get started:

* Identify the offending pipeline.
* Start small. Create a minimum pipeline that manifests the problem.

For basic pipelines, this configuration could be enough to make the problem show itself:

```ruby
input { stdin {} } output { stdout {} }
```

{{ls}} can separate logs by pipeline, which can help you identify the offending pipeline. Set `pipeline.separate_logs: true` in your `logstash.yml` to enable per-pipeline logging.

For more complex pipelines, the problem could be caused by a series of plugins in a specific order. Troubleshooting these pipelines usually requires trial and error. Start by systematically removing input and output plugins until you're left with the minimum set that manifests the issue.

We want to expand this section to make it more helpful.
If you have troubleshooting tips to share, please:

* create an issue at [https://github.com/elastic/logstash/issues](https://github.com/elastic/logstash/issues), or
* create a pull request with your proposed changes at [https://github.com/elastic/logstash](https://github.com/elastic/logstash).


## Logging level can affect performance [ts-pipeline-logging-level-performance]

**Symptoms**

Simple filters such as `mutate` or the `json` filter can take several milliseconds per event to execute. Inputs and outputs might be affected, too.

**Background**

The plugins running in Logstash can be quite verbose when the logging level is set to `debug` or `trace`. Because the logging library used in Logstash is synchronous, heavy logging can affect performance.

**Solution**

Reset the logging level to `info`.


## Logging in json format can write duplicate `message` fields [ts-pipeline-logging-json-duplicated-message-field]

**Symptoms**

When the log format is `json`, certain log events (for example, errors from the JSON codec plugin) can contain two instances of the `message` field.
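Duplicate keys are technically permitted by JSON, but they are hazardous in practice because most parsers silently keep only one of the values. A quick Python illustration (not {{ls}} code; the field values are made up):

```python
import json

# A log event with two "message" fields, as described above.
raw = '{"level":"WARN","message":"JSON parse error","message":"original data"}'

parsed = json.loads(raw)
# Python's json module keeps only the last occurrence of a duplicate key.
print(parsed["message"])  # → original data
```

Whichever `message` your log consumer keeps, the other one is lost without any warning, which is why duplicated fields in the json log format are a problem.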
Without setting this flag, the json log would contain objects like:

```json
{
  "level":"WARN",
  "loggerName":"logstash.codecs.jsonlines",
  "timeMillis":1712937761955,
  "thread":"[main]
```

Date: Thu, 13 Mar 2025 16:22:23 -0400
Subject: [PATCH 3/3] oof

---
 troubleshoot/ingest/logstash/health-report-pipelines.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/troubleshoot/ingest/logstash/health-report-pipelines.md b/troubleshoot/ingest/logstash/health-report-pipelines.md
index 0b44eeafb8..d2851129ab 100644
--- a/troubleshoot/ingest/logstash/health-report-pipelines.md
+++ b/troubleshoot/ingest/logstash/health-report-pipelines.md
@@ -1,4 +1,5 @@
 ---
+navigation_title: "Health report pipelines"
 mapped_urls:
   - https://www.elastic.co/guide/en/logstash/current/health-report-pipeline-status.html
   - https://www.elastic.co/guide/en/logstash/current/health-report-pipeline-flow-worker-utilization.html
@@ -9,7 +10,7 @@ mapped_urls:
 This page helps you troubleshoot Logstash health report pipelines.

 * [Check health report pipeline status](#health-report-pipeline-status)
-* [Check health report pipeline worker utilization ](health-report-pipeline-flow-worker-utilization)
+* [Check health report pipeline worker utilization](#health-report-pipeline-flow-worker-utilization)

 ## Check health report pipeline status [health-report-pipeline-status]