From 8366f4f165eaa18abae2ff7abb44f41ad9578329 Mon Sep 17 00:00:00 2001
From: Mashhur
Date: Wed, 22 Jan 2025 01:11:41 -0800
Subject: [PATCH 1/6] A troubleshooting section added, supported processors revised and unsupported processors list added.

---
 docs/index.asciidoc | 66 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 58 insertions(+), 8 deletions(-)

diff --git a/docs/index.asciidoc b/docs/index.asciidoc
index a54be234..5bcafe23 100644
--- a/docs/index.asciidoc
+++ b/docs/index.asciidoc
@@ -151,7 +151,7 @@ The plugin starts in an unsafe mode with a runtime error indicating that API per
 To avoid these issues, set up user authentication and ensure that security in {es} is enabled (default).
 --
- 
+
 [id="plugins-{type}s-{plugin}-supported_ingest_processors"]
 ==== Supported Ingest Processors

@@ -165,27 +165,27 @@ It has access to the Painless and Mustache scripting engines where applicable:
| `append` | _none_
| `bytes` | _none_
-| `communityid` | _none_
+| `community_id` | _none_
| `convert` | _none_
| `csv` | _none_
| `date` | _none_
-| `dateindexname` | _none_
+| `date_index_name` | _none_
| `dissect` | _none_
-| `dotexpander` | _none_
+| `dot_expander` | _none_
| `drop` | _none_
| `fail` | _none_
| `fingerprint` | _none_
| `foreach` | _none_
| `grok` | _none_
| `gsub` | _none_
-| `htmlstrip` | _none_
+| `html_strip` | _none_
| `join` | _none_
| `json` | _none_
-| `keyvalue` | _none_
+| `kv` | _none_
| `lowercase` | _none_
-| `networkdirection` | _none_
+| `network_direction` | _none_
| `pipeline` | resolved pipeline _must_ be wholly-composed of supported processors
-| `registereddomain` | _none_
+| `registered_domain` | _none_
| `remove` | _none_
| `rename` | _none_
| `reroute` | _none_

@@ -206,6 +206,15 @@ h| GeoIp
|=======================================================================

+[id="plugins-{type}s-{plugin}-unsupported_ingest_processors"]
+==== Unsupported Ingest Processors
+
+This plugin has a limited capability to execute all processors, as some of them require external access and auxiliary resources.
+For example, the `inference` processor relies on the Machine Learning models, which are not naturally supported by this plugin.
+Followings (not limited to) are known unsupported processors:
+- `set_security_user`
+- `inference`
+- `enrich`

 [id="plugins-{type}s-{plugin}-field_mappings"]
 ===== Field Mappings

@@ -279,6 +288,47 @@ To achieve this, mappings are cached for a maximum of {cached-entry-ttl}, and ca
 * when a reloaded mapping is newly _empty_, the previous non-empty mapping is _replaced_ with a new empty entry so that subsequent events will use the empty value
 * when the reload of a mapping _fails_, this plugin emits a log warning but the existing cache entry is unchanged and gets closer to its expiry.

+[id="plugins-{type}s-{plugin}-troubleshooting"]
+==== Troubleshooting
+
+Troubleshooting ingest pipelines associated with data streams requires a pragmatic approach, involving thorough analysis and debugging techniques.
+To identify the root cause of issues with pipeline execution, it is essential to enable debug-level logging. This allows you to monitor the plugin's behavior and detect any anomalies or errors that may be causing pipeline execution.
+The plugin operates through the following phases: pipeline _resolution_, ingest pipeline _creation_, and pipeline _execution_.
+
+* If you encounter `No pipeline resolved for event ...` messages in the debug logs, it indicates that the plugin is unable to resolve the ingest pipeline from the data stream.
+In such cases, explicitly defining the pipeline name using the <> is a one option to resolve the issue.
+To further troubleshoot, check if the data stream index setting contains `default_pipeline` or `final_pipeline`.
+You can do this by running a simple query in the {kib} Dev Tools: `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}`. Make sure to replace `{type}-{dataset}-{namespace}` with your actual data stream values.
+
+For further guidance, we recommend visiting {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] guidelines.
+
+* If you notice `pipeline not found: ...` debug messages in the logs or `Pipeline {pipeline-name} could not be loaded` warning messages, it indicates that the ingest pipeline is resolved from `default_pipeline` or `final_pipeline`, but the pipeline itself does not exist.
+To confirm this, run a simple request in the {kib} Dev Tools: `GET _ingest/pipeline/{ingest-pipeline-name}`.
+
+For further guidance, we recommend visiting {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] guidelines to ensure that you are using compatible integrations.
+
+* If you encounter `failed to create ingest pipeline {pipeline-name} from pipeline configuration` error messages, it indicates that the plugin is unable to create an ingest pipeline from the resolved pipeline configuration.
+For most cases, this is due to unsupported processor(s) in the pipeline configuration.
+In such situations, the log output will mostly include a stack trace with detailed information about the issue.
+For example, the following error message indicates that the pipeline configuration contains an `inference` processor, which is not a supported processor type.
+
+ [source]
+ ----
+ 2025-01-21 12:29:13 [2025-01-21T20:29:13,986][ERROR][co.elastic.logstash.filters.elasticintegration.IngestPipelineFactory][main] failed to create ingest pipeline logs-my.custom-1.0.0 from pipeline configuration
+ 2025-01-21 12:29:13 org.elasticsearch.ElasticsearchParseException: No processor type exists with name [inference]
+ 2025-01-21 12:29:13 at org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException(ConfigurationUtils.java:470) ~[logstash-filter-elastic_integration-0.1.16.jar:?]
+ 2025-01-21 12:29:13 at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635)
+ ----
+
+In such cases, review the pipeline configuration for <> and refer to relevant integration and ingestion pipeline documentation for guidance.
+
+- **Errors happened during pipeline execution**
+If errors are occurred during the pipeline execution, the event will not be processed and the `_ingest_pipeline_failure` tag will be attached.
+For this case, the errors mostly contain the stack trace or detail reasoning.
+The root cause may depend on the environment or integration you are running.
+Check out {ls} and {integrations} documents for further assistance.
+
+
 [id="plugins-{type}s-{plugin}-options"]
 ==== {elastic-integration-name} Filter Configuration Options

From 592560ae8f8df2fad394d7ebbe43505809027eb4 Mon Sep 17 00:00:00 2001
From: Mashhur
Date: Mon, 27 Jan 2025 10:23:04 -0800
Subject: [PATCH 2/6] Pipeline execution error cases added.
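
Note on the case this change documents: one way to isolate documents that
captured an ingest failure is an `exists` query on the error field in the
Kibana Dev Tools console. The index name below is only an illustrative
placeholder, not part of the documented behavior:

[source]
----
GET logs-generic-default/_search
{
  "query": {
    "exists": { "field": "error.message" }
  }
}
----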
---
 docs/index.asciidoc | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/docs/index.asciidoc b/docs/index.asciidoc
index 5bcafe23..6fd69b15 100644
--- a/docs/index.asciidoc
+++ b/docs/index.asciidoc
@@ -323,11 +323,24 @@ For example, the following error message indicating `inference` processor in the
 In such cases, review the pipeline configuration for <> and refer to relevant integration and ingestion pipeline documentation for guidance.

 - **Errors happened during pipeline execution**
+There are mainly two cases require investigation:
+1. **Logstash catches issues while running ingest pipelines**
 If errors are occurred during the pipeline execution, the event will not be processed and the `_ingest_pipeline_failure` tag will be attached.
-For this case, the errors mostly contain the stack trace or detail reasoning.
-The root cause may depend on the environment or integration you are running.
+The detailed logs will be available in the Logstash logs for your investigation.
+The root cause may depend on the configuration, environment, or integration you are running.
 Check out {ls} and {integrations} documents for further assistance.
+2. **Errors internally occurred in the ingest pipeline**
+To figure out such cases, go to the {kib} Dev Tools, search for the data (`GET index-ingest-pipeline-is-writing/_search`) and see if the index documents contain `error` fields.
+If the index documents contain `error` fields, you can analyze the error messages to understand the root cause.
+For example, if `rename` processor cannot find the field needs to be renamed, it errors with the following:
+[source]
+----
+"error": {
+  "message": "field [non_exist_source_field] doesn't exist"
+},
+----
+Check out {integrations} documents for further assistance.

 [id="plugins-{type}s-{plugin}-options"]
 ==== {elastic-integration-name} Filter Configuration Options

From ceb4c108a05a402b26c1a3b081e85e69948c77ea Mon Sep 17 00:00:00 2001
From: Mashhur
Date: Mon, 27 Jan 2025 11:58:49 -0800
Subject: [PATCH 3/6] Update the ingest pipelines internal error case.

---
 docs/index.asciidoc | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/docs/index.asciidoc b/docs/index.asciidoc
index 6fd69b15..cb37b82a 100644
--- a/docs/index.asciidoc
+++ b/docs/index.asciidoc
@@ -331,16 +331,13 @@ The root cause may depend on configuration, environment or integration you are r
 Check out {ls} and {integrations} documents for further assistance.

 2. **Errors internally occurred in the ingest pipeline**
-To figure out such cases, go to the {kib} Dev Tools, search for the data (`GET index-ingest-pipeline-is-writing/_search`) and see if the index documents contain `error` fields.
-If the index documents contain `error` fields, you can analyze the error messages to understand the root cause.
-For example, if `rename` processor cannot find the field needs to be renamed, it errors with the following:
-[source]
-----
-"error": {
-  "message": "field [non_exist_source_field] doesn't exist"
-},
-----
-Check out {integrations} documents for further assistance.
+If ingest pipeline are build with failure processors, failures will be internally handled by ingest pipeline and will not be visible to Logstash.
+To figure out such cases, go to the {kib} -> Stack Management -> Ingest pipelines and find the ingest pipeline you are using.
+Click on it and navigate to the Failure processors section. If failure processors are set, it usually tells which field includes the failures.
+Go to the {kib} Dev Tools and search for the data (`GET index-ingest-pipeline-is-writing/_search`) and see if the index documents contain the fields failure processors mention.
+The fields will have error details which will help you to analyze the root cause.
+
+Check out {integrations} and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] documents for further assistance.

 [id="plugins-{type}s-{plugin}-options"]
 ==== {elastic-integration-name} Filter Configuration Options

From f912d99360b914c9e0b5aa5dab7920a1a22f7f3a Mon Sep 17 00:00:00 2001
From: Mashhur
Date: Mon, 27 Jan 2025 13:40:33 -0800
Subject: [PATCH 4/6] Revise the troubleshooting statements, add the Logstash behavior to the unsupported processors.

---
 docs/index.asciidoc | 50 +++++++++++++++++++++++++--------------------
 1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/docs/index.asciidoc b/docs/index.asciidoc
index cb37b82a..61f6f990 100644
--- a/docs/index.asciidoc
+++ b/docs/index.asciidoc
@@ -209,12 +209,18 @@
 [id="plugins-{type}s-{plugin}-unsupported_ingest_processors"]
 ==== Unsupported Ingest Processors

-This plugin has a limited capability to execute all processors, as some of them require external access and auxiliary resources.
-For example, the `inference` processor relies on the Machine Learning models, which are not naturally supported by this plugin.
-Followings (not limited to) are known unsupported processors:
-- `set_security_user`
-- `inference`
-- `enrich`
+This plugin has limited capabilities to execute all processors, as certain processors require external access or depend on additional resources that are not available in the plugin's environment.
+Some processors may not function as expected, potentially leading to errors or silent failures in Logstash.
+The following processors are known unsupported processors. Note that this list is not fully comprehensive:
+
+[cols="<1,<5",options="header"]
+|=======================================================================
+|Processor | Behavior
+| `set_security_user` | Logstash will not warn or error
+| `inference` | Logstash raises `No processor type exists with name [inference]` error
+| `enrich` | Logstash raises `No processor type exists with name [enrich]` error
+|=======================================================================
+

 [id="plugins-{type}s-{plugin}-field_mappings"]
 ===== Field Mappings

@@ -292,23 +298,23 @@
 ==== Troubleshooting

 Troubleshooting ingest pipelines associated with data streams requires a pragmatic approach, involving thorough analysis and debugging techniques.
-To identify the root cause of issues with pipeline execution, it is essential to enable debug-level logging. This allows you to monitor the plugin's behavior and detect any anomalies or errors that may be causing pipeline execution.
+To identify the root cause of issues with pipeline execution, it is essential to enable debug-level logging. This allows monitoring the plugin's behavior and detect any anomalies or errors that may be causing pipeline execution.
 The plugin operates through the following phases: pipeline _resolution_, ingest pipeline _creation_, and pipeline _execution_.

 * If you encounter `No pipeline resolved for event ...` messages in the debug logs, it indicates that the plugin is unable to resolve the ingest pipeline from the data stream.
In such cases, explicitly defining the pipeline name using the <> is a one option to resolve the issue.
-To further troubleshoot, check if the data stream index setting contains `default_pipeline` or `final_pipeline`.
-You can do this by running a simple query in the {kib} Dev Tools: `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}`. Make sure to replace `{type}-{dataset}-{namespace}` with your actual data stream values.
+To further diagnose and resolve the issue, verify whether the data stream's index settings include a `default_pipeline` or `final_pipeline` configuration.
+You can inspect the index settings by running a simple query in the {kib} Dev Tools: `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}`. Make sure to replace `{type}-{dataset}-{namespace}` with values corresponding to your data stream.

-For further guidance, we recommend visiting {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] guidelines.
+For further guidance, we recommend exploring following {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] resources.

-* If you notice `pipeline not found: ...` debug messages in the logs or `Pipeline {pipeline-name} could not be loaded` warning messages, it indicates that the ingest pipeline is resolved from `default_pipeline` or `final_pipeline`, but the pipeline itself does not exist.
-To confirm this, run a simple request in the {kib} Dev Tools: `GET _ingest/pipeline/{ingest-pipeline-name}`.
+* If you notice `pipeline not found: ...` debug messages in the logs or `Pipeline {pipeline-name} could not be loaded` warning messages, it indicates that the plugin has successfully resolved the ingest pipeline from `default_pipeline` or `final_pipeline`, but the specified pipeline does not exist.
+To confirm whether pipeline exists, run a simple request in the {kib} Dev Tools console: `GET _ingest/pipeline/{ingest-pipeline-name}`.

-For further guidance, we recommend visiting {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] guidelines to ensure that you are using compatible integrations.
+For further guidance, it is recommended visiting {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] guidelines.

 * If you encounter `failed to create ingest pipeline {pipeline-name} from pipeline configuration` error messages, it indicates that the plugin is unable to create an ingest pipeline from the resolved pipeline configuration.
-For most cases, this is due to unsupported processor(s) in the pipeline configuration.
+This issue typically arises when the pipeline configuration contains unsupported or invalid processor(s) that the plugin cannot execute.
 In such situations, the log output will mostly include a stack trace with detailed information about the issue.
 For example, the following error message indicates that the pipeline configuration contains an `inference` processor, which is not a supported processor type.
@@ -320,21 +326,21 @@ For example, the following error message indicating `inference` processor in the 2025-01-21 12:29:13 at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635) ---- -In such cases, review the pipeline configuration for <> and refer to relevant integration and ingestion pipeline documentation for guidance. +Review the pipeline configuration for <> and refer to relevant integration and ingestion pipeline documentation for guidance. - **Errors happened during pipeline execution** -There are mainly two cases require investigation: +These errors typically fall into two main categories, each requiring specific investigation and resolution steps: 1. **Logstash catches issues while running ingest pipelines** -If errors are occurred during the pipeline execution, the event will not be processed and the `_ingest_pipeline_failure` tag will be attached. -The detailed logs will be available in the Logstash logs for your investigation. +When errors occur during the execution of ingest pipelines, Logstash will attach the `_ingest_pipeline_failure` tag to the event, making it easier to identify and investigate problematic events. +The detailed logs will be also available in the Logstash logs for your investigation. The root cause may depend on configuration, environment or integration you are running. Check out {ls} and {integrations} documents for further assistance. 2. **Errors internally occurred in the ingest pipeline** -If ingest pipeline are build with failure processors, failures will be internally handled by ingest pipeline and will not be visible to Logstash. -To figure out such cases, go to the {kib} -> Stack Management -> Ingest pipelines and find the ingest pipeline you are using. -Click on it and navigate to the Failure processors section. If failure processors are set, it usually tells which field includes the failures. -Go to the {kib} Dev Tools and search for the data (`GET index-ingest-pipeline-is-writing/_search`) and see if the index documents contain the fields failure processors mention. +If an ingest pipeline is configured with failure processors, failures during pipeline execution are internally handled by the ingest pipeline itself and will not be visible to Logstash. This means that errors are captured and processed within the pipeline, rather than being passed to Logstash for logging or tagging. +To identify and analyze such cases, go to the {kib} -> Stack Management -> Ingest pipelines and find the ingest pipeline you are using. +Click on it and navigate to the Failure processors section. If failure processors are configured, they will typically specify which field contains the failure details. For example, the pipeline might store error information in a field like `error.message` or a custom field defined in the failure processor configuration. +Go to the {kib} Dev Tools and search for the data (`GET index-ingest-pipeline-is-writing/_search`) and look for the fields mentioned in the failure processors . The fields will have error details which will help you to analyze the root cause. Check out {integrations} and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] documents for further assistance. From b79ea1f6ea690c52d28bcecbae40c2b4121f143b Mon Sep 17 00:00:00 2001 From: Mashhur Date: Tue, 28 Jan 2025 15:30:28 -0800 Subject: [PATCH 5/6] Update the content based on the doc guidelines recommendation, remove unsupported processors section to make a separate case. 
---
 docs/index.asciidoc | 74 ++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 37 deletions(-)

diff --git a/docs/index.asciidoc b/docs/index.asciidoc
index 61f6f990..a42bd166 100644
--- a/docs/index.asciidoc
+++ b/docs/index.asciidoc
@@ -206,22 +206,6 @@ h| GeoIp
 |=======================================================================

-[id="plugins-{type}s-{plugin}-unsupported_ingest_processors"]
-==== Unsupported Ingest Processors
-
-This plugin has limited capabilities to execute all processors, as certain processors require external access or depend on additional resources that are not available in the plugin's environment.
-Some processors may not function as expected, potentially leading to errors or silent failures in Logstash.
-The following processors are known unsupported processors. Note that this list is not fully comprehensive:
-
-[cols="<1,<5",options="header"]
-|=======================================================================
-|Processor | Behavior
-| `set_security_user` | Logstash will not warn or error
-| `inference` | Logstash raises `No processor type exists with name [inference]` error
-| `enrich` | Logstash raises `No processor type exists with name [enrich]` error
-|=======================================================================
-
-
 [id="plugins-{type}s-{plugin}-field_mappings"]
 ===== Field Mappings

@@ -298,24 +282,34 @@ To achieve this, mappings are cached for a maximum of {cached-entry-ttl}, and ca
 ==== Troubleshooting

 Troubleshooting ingest pipelines associated with data streams requires a pragmatic approach, involving thorough analysis and debugging techniques.
-To identify the root cause of issues with pipeline execution, it is essential to enable debug-level logging. This allows monitoring the plugin's behavior and detect any anomalies or errors that may be causing pipeline execution.
+To identify the root cause of issues with pipeline execution, you need to enable debug-level logging.
+The debug logs let you monitor the plugin's behavior and help detect issues.
 The plugin operates through the following phases: pipeline _resolution_, ingest pipeline _creation_, and pipeline _execution_.

-* If you encounter `No pipeline resolved for event ...` messages in the debug logs, it indicates that the plugin is unable to resolve the ingest pipeline from the data stream. In such cases, explicitly
-defining the pipeline name using the <> is a one option to resolve the issue.
+[ingest-pipeline-resolution-errors]
+===== Ingest Pipeline Resolution Errors
+
+*Plugin does not resolve the ingest pipeline associated with the data stream*
+
+If you encounter `No pipeline resolved for event ...` messages in the debug logs, this message indicates that the plugin is unable to resolve the ingest pipeline from the data stream.
 To further diagnose and resolve the issue, verify whether the data stream's index settings include a `default_pipeline` or `final_pipeline` configuration.
-You can inspect the index settings by running a simple query in the {kib} Dev Tools: `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}`. Make sure to replace `{type}-{dataset}-{namespace}` with values corresponding to your data stream.
+You can inspect the index settings by running a `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}` query in the {kib} Dev Tools.
+Make sure to replace `{type}-{dataset}-{namespace}` with values corresponding to your data stream.
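+For illustration only, a trimmed response for a hypothetical `logs-nginx.access-default` data stream could look like the following; the pipeline name shown is a placeholder, not an actual integration pipeline:
+
+[source]
+----
+{
+  "template": {
+    "settings": {
+      "index": {
+        "default_pipeline": "logs-nginx.access-1.0.0"
+      }
+    }
+  }
+}
+----
+If neither `default_pipeline` nor `final_pipeline` appears in the simulated settings, the plugin has nothing to resolve, which usually points to a missing or incompletely installed integration.
+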
-
-For further guidance, we recommend exploring following {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] resources.
+For further guidance, we recommend exploring {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and {integrations-docs}[Elastic {integrations}] resources.

-* If you notice `pipeline not found: ...` debug messages in the logs or `Pipeline {pipeline-name} could not be loaded` warning messages, it indicates that the plugin has successfully resolved the ingest pipeline from `default_pipeline` or `final_pipeline`, but the specified pipeline does not exist.
-To confirm whether pipeline exists, run a simple request in the {kib} Dev Tools console: `GET _ingest/pipeline/{ingest-pipeline-name}`.
+*Ingest pipeline does not exist*
+
+If you notice `pipeline not found: ...` messages in the debug logs or `Pipeline {pipeline-name} could not be loaded` warning messages, it indicates that the plugin has successfully resolved the ingest pipeline from `default_pipeline` or `final_pipeline`, but the specified pipeline does not exist.
+To confirm whether the pipeline exists, run a `GET _ingest/pipeline/{ingest-pipeline-name}` query in the {kib} Dev Tools console.
-For further guidance, it is recommended visiting {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and https://docs.elastic.co/integrations/all_integrations[Elastic {integrations}] guidelines.
+For further guidance, we recommend exploring {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and {integrations-docs}[Elastic {integrations}] resources.

-* If you encounter `failed to create ingest pipeline {pipeline-name} from pipeline configuration` error messages, it indicates that the plugin is unable to create an ingest pipeline from the resolved pipeline configuration.
+[ingest-pipeline-creation-errors]
+===== Ingest Pipeline Creation Errors
+
+If you encounter `failed to create ingest pipeline {pipeline-name} from pipeline configuration` error messages, it indicates that the plugin is unable to create an ingest pipeline from the resolved pipeline configuration.
 This issue typically arises when the pipeline configuration contains unsupported or invalid processor(s) that the plugin cannot execute.
-In such situations, the log output will mostly include a stack trace with detailed information about the issue.
+In such situations, the log output includes information about the issue.
 For example, the following error message indicates that the pipeline configuration contains an `inference` processor, which is not a supported processor type.

 [source]
@@ -326,21 +320,28 @@ For example, the following error message indicating `inference` processor in the
 2025-01-21 12:29:13 at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635)
 ----

-Review the pipeline configuration for <> and refer to relevant integration and ingestion pipeline documentation for guidance.
+For further guidance, we recommend exploring {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] documentation.
+
+[ingest-pipeline-execution-errors]
+===== Ingest Pipeline Execution Errors

-- **Errors happened during pipeline execution**
 These errors typically fall into two main categories, each requiring specific investigation and resolution steps:
-1.
 **Logstash catches issues while running ingest pipelines**
-When errors occur during the execution of ingest pipelines, Logstash will attach the `_ingest_pipeline_failure` tag to the event, making it easier to identify and investigate problematic events.
-The detailed logs will be also available in the Logstash logs for your investigation.
+
+*Logstash catches issues while running ingest pipelines*
+
+When errors occur during the execution of ingest pipelines, {ls} attaches the `_ingest_pipeline_failure` tag to the event, making it easier to identify and investigate problematic events.
+The detailed logs are available in the {ls} logs for your investigation.
 The root cause may depend on the configuration, environment, or integration you are running.
-Check out {ls} and {integrations} documents for further assistance.

-2. **Errors internally occurred in the ingest pipeline**
-If an ingest pipeline is configured with failure processors, failures during pipeline execution are internally handled by the ingest pipeline itself and will not be visible to Logstash. This means that errors are captured and processed within the pipeline, rather than being passed to Logstash for logging or tagging.
+*Errors occurring internally in the ingest pipeline*
+
+If an ingest pipeline is configured with `on_failure` conditions, failures during pipeline execution are internally handled by the ingest pipeline itself and are not visible to {ls}.
+This means that errors are captured and processed within the pipeline, rather than being passed to {ls} for logging or tagging.
 To identify and analyze such cases, go to {kib} -> Stack Management -> Ingest Pipelines and find the ingest pipeline you are using.
-Click on it and navigate to the Failure processors section. If failure processors are configured, they will typically specify which field contains the failure details. For example, the pipeline might store error information in a field like `error.message` or a custom field defined in the failure processor configuration.
-Go to the {kib} Dev Tools and search for the data (`GET index-ingest-pipeline-is-writing/_search`) and look for the fields mentioned in the failure processors .
-The fields will have error details which will help you to analyze the root cause.
+Click on it and navigate to the _Failure processors_ section. If processors are configured, they may specify which field contains the failure details.
+For example, the pipeline might store error information in an `error.message` field or a custom field defined in the _Failure processors_ configuration.
+Go to the {kib} Dev Tools and search for the data (`GET {index-ingest-pipeline-is-writing}/_search`) and look for the fields mentioned in the failure processors.
+The fields contain error details that help you analyze the root cause.

 Check out {integrations} and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] documents for further assistance.

From 96c0d8108fe3747b68711990744dd50d49810f0d Mon Sep 17 00:00:00 2001
From: Mashhur
Date: Tue, 28 Jan 2025 15:47:43 -0800
Subject: [PATCH 6/6] Resource links corrected.
---
 docs/index.asciidoc | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/index.asciidoc b/docs/index.asciidoc
index a42bd166..843bd5d8 100644
--- a/docs/index.asciidoc
+++ b/docs/index.asciidoc
@@ -295,14 +295,13 @@ If you encounter `No pipeline resolved for event ...` messages in the debug logs
 To further diagnose and resolve the issue, verify whether the data stream's index settings include a `default_pipeline` or `final_pipeline` configuration.
 You can inspect the index settings by running a `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}` query in the {kib} Dev Tools.
 Make sure to replace `{type}-{dataset}-{namespace}` with values corresponding to your data stream.
-
-For further guidance, we recommend exploring {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and {integrations-docs}[Elastic {integrations}] resources.
+For further guidance, we recommend exploring {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and {integrations-docs}[Elastic {integrations}] resources.

 *Ingest pipeline does not exist*

 If you notice `pipeline not found: ...` messages in the debug logs or `Pipeline {pipeline-name} could not be loaded` warning messages, it indicates that the plugin has successfully resolved the ingest pipeline from `default_pipeline` or `final_pipeline`, but the specified pipeline does not exist.
 To confirm whether the pipeline exists, run a `GET _ingest/pipeline/{ingest-pipeline-name}` query in the {kib} Dev Tools console.
-For further guidance, we recommend exploring {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and {integrations-docs}[Elastic {integrations}] resources.
+For further guidance, we recommend exploring {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for fleet] and {integrations-docs}[Elastic {integrations}] resources.

 [ingest-pipeline-creation-errors]
 ===== Ingest Pipeline Creation Errors

@@ -320,7 +319,7 @@ For example, the following error message indicating `inference` processor in the
 2025-01-21 12:29:13 at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635)
 ----

-For further guidance, we recommend exploring {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] documentation.
+For further guidance, we recommend exploring {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] resources.

 [ingest-pipeline-execution-errors]
 ===== Ingest Pipeline Execution Errors

@@ -332,6 +331,7 @@ These errors typically fall into two main categories, each requiring specific in
 When errors occur during the execution of ingest pipelines, {ls} attaches the `_ingest_pipeline_failure` tag to the event, making it easier to identify and investigate problematic events.
 The detailed logs are available in the {ls} logs for your investigation.
 The root cause may depend on the configuration, environment, or integration you are running.
+For further guidance, we recommend exploring {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] resources.
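+As an illustration only, events carrying this tag can be routed away from the healthy flow so they are easy to inspect; the hosts and index name below are placeholders, not a recommendation:
+
+[source]
+----
+output {
+  if "_ingest_pipeline_failure" in [tags] {
+    # Events that failed inside the ingest pipeline land in a dedicated index for inspection.
+    elasticsearch {
+      hosts => ["https://localhost:9200"]
+      index => "logstash-ingest-pipeline-failures"
+    }
+  } else {
+    elasticsearch {
+      hosts => ["https://localhost:9200"]
+      data_stream => "true"
+    }
+  }
+}
+----
+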
 *Errors occurring internally in the ingest pipeline*

 If an ingest pipeline is configured with `on_failure` conditions, failures during pipeline execution are internally handled by the ingest pipeline itself and are not visible to {ls}.
 This means that errors are captured and processed within the pipeline, rather than being passed to {ls} for logging or tagging.
 To identify and analyze such cases, go to {kib} -> Stack Management -> Ingest Pipelines and find the ingest pipeline you are using.
 Click on it and navigate to the _Failure processors_ section. If processors are configured, they may specify which field contains the failure details.
 For example, the pipeline might store error information in an `error.message` field or a custom field defined in the _Failure processors_ configuration.
 Go to the {kib} Dev Tools and search for the data (`GET {index-ingest-pipeline-is-writing}/_search`) and look for the fields mentioned in the failure processors.
 The fields contain error details that help you analyze the root cause.

-Check out {integrations} and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] documents for further assistance.
+For further guidance, we recommend exploring {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] resources.

 [id="plugins-{type}s-{plugin}-options"]
 ==== {elastic-integration-name} Filter Configuration Options