A troubleshooting section added and more doc improvements. (#243)
* A troubleshooting section added, supported processors revised and unsupported processors list added.
* Update the ingest pipelines internal error case.
* Revise the troubleshooting statements, add the Logstash behavior to the unsupported processors.
* Update the content based on the doc guidelines recommendation, remove unsupported processors section to make a separate case.
@@ -279,6 +278,73 @@ To achieve this, mappings are cached for a maximum of {cached-entry-ttl}, and ca
* when a reloaded mapping is newly _empty_, the previous non-empty mapping is _replaced_ with a new empty entry so that subsequent events will use the empty value
* when the reload of a mapping _fails_, this plugin emits a log warning but the existing cache entry is unchanged and gets closer to its expiry.

[id="plugins-{type}s-{plugin}-troubleshooting"]
==== Troubleshooting

Troubleshooting ingest pipelines associated with data streams requires systematic analysis and debugging.
To identify the root cause of issues with pipeline execution, enable debug-level logging.
Debug logs let you monitor the plugin's behavior and detect issues.
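As a minimal sketch of raising the log level for this plugin only, assuming the {ls} monitoring API is reachable at its default `localhost:9600` address, you can adjust the plugin's logger at runtime; the logger name matches the plugin's Java package, as seen in the log excerpt later in this section:

[source]
----
curl -XPUT 'localhost:9600/_node/logging?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"logger.co.elastic.logstash.filters.elasticintegration": "DEBUG"}'
----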
The plugin operates through the following phases: pipeline _resolution_, ingest pipeline _creation_, and pipeline _execution_.

[id="ingest-pipeline-resolution-errors"]
===== Ingest Pipeline Resolution Errors

*Plugin does not resolve the ingest pipeline associated with a data stream*

If you encounter `No pipeline resolved for event ...` messages in the debug logs, the plugin is unable to resolve the ingest pipeline from the data stream.
To further diagnose and resolve the issue, verify whether the data stream's index settings include a `default_pipeline` or `final_pipeline` configuration.
You can inspect the index settings by running a `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}` query in the {kib} Dev Tools.
Make sure to replace `{type}-{dataset}-{namespace}` with values corresponding to your data stream.
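For example, with a hypothetical data stream named `logs-my.custom-default`, the query and the relevant fragment of its response might look like this; the resolved pipeline name appears under the index settings:

[source]
----
POST _index_template/_simulate_index/logs-my.custom-default

# Relevant fragment of the response:
# {
#   "template": {
#     "settings": {
#       "index": {
#         "default_pipeline": "logs-my.custom-1.0.0"
#       }
#     }
#   }
# }
----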
For further guidance, we recommend exploring the {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for Fleet], and {integrations-docs}[Elastic {integrations}] resources.

*Ingest pipeline does not exist*

If you notice `pipeline not found: ...` messages in the debug logs, or `Pipeline {pipeline-name} could not be loaded` warning messages, the plugin has successfully resolved the ingest pipeline from `default_pipeline` or `final_pipeline`, but the specified pipeline does not exist.
To confirm whether the pipeline exists, run a `GET _ingest/pipeline/{ingest-pipeline-name}` query in the {kib} Dev Tools console.
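For example, using the hypothetical pipeline name that also appears in the log excerpt in the next section, a missing pipeline returns a `resource_not_found_exception`:

[source]
----
GET _ingest/pipeline/logs-my.custom-1.0.0

# A 404 response with "resource_not_found_exception" confirms
# that the pipeline does not exist on the cluster.
----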
For further guidance, we recommend exploring the {fleet-guide}/integrations.html[Manage Elastic Agent Integrations], {es} {ref}/ingest.html#pipelines-for-fleet-elastic-agent[Ingest pipelines for Fleet], and {integrations-docs}[Elastic {integrations}] resources.

[id="ingest-pipeline-creation-errors"]
===== Ingest Pipeline Creation Errors

If you encounter `failed to create ingest pipeline {pipeline-name} from pipeline configuration` error messages, the plugin is unable to create an ingest pipeline from the resolved pipeline configuration.
This issue typically arises when the pipeline configuration contains unsupported or invalid processor(s) that the plugin cannot execute.
In such situations, the log output includes information about the issue.
For example, the following error message indicates an `inference` processor in the pipeline configuration, which is not a supported processor type:

[source]
----
[2025-01-21T20:29:13,986][ERROR][co.elastic.logstash.filters.elasticintegration.IngestPipelineFactory][main] failed to create ingest pipeline logs-my.custom-1.0.0 from pipeline configuration
org.elasticsearch.ElasticsearchParseException: No processor type exists with name [inference]
    at org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException(ConfigurationUtils.java:470) ~[logstash-filter-elastic_integration-0.1.16.jar:?]
    at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635)
----

For further guidance, we recommend exploring the {fleet-guide}/integrations.html[Manage Elastic Agent Integrations] and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] resources.

[id="ingest-pipeline-execution-errors"]
===== Ingest Pipeline Execution Errors
These errors typically fall into two main categories, each requiring specific investigation and resolution steps:

*Logstash catches issues while running ingest pipelines*

When errors occur during the execution of ingest pipelines, {ls} attaches the `_ingest_pipeline_failure` tag to the event, making it easier to identify and investigate problematic events.
Detailed error information is available in the {ls} logs for your investigation.
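As a minimal sketch of isolating failed events for inspection, you can route events carrying this tag to a separate output; the `stdout` output here is a placeholder for whatever destination suits your setup:

[source]
----
output {
  if "_ingest_pipeline_failure" in [tags] {
    # Failed events: print with full field detail for debugging.
    stdout { codec => rubydebug }
  }
}
----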
The root cause may depend on your configuration, environment, or the integration you are running.
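One way to narrow down the root cause is to replay a failing event against the pipeline directly in {es}, outside of {ls}. A minimal sketch, using the hypothetical pipeline name from the earlier log excerpt and a stand-in document:

[source]
----
POST _ingest/pipeline/logs-my.custom-1.0.0/_simulate
{
  "docs": [
    { "_source": { "message": "copy a failing event's fields here" } }
  ]
}
----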
For further guidance, we recommend exploring the {fleet-guide}/integrations.html[Manage Elastic Agent Integrations] and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] resources.

*Errors handled internally by the ingest pipeline*

If an ingest pipeline is configured with `on_failure` conditions, failures during pipeline execution are handled internally by the ingest pipeline itself and are not visible to {ls}.
This means that errors are captured and processed within the pipeline, rather than being passed to {ls} for logging or tagging.
To identify and analyze such cases, go to {kib} -> *Stack Management* -> *Ingest Pipelines* and find the ingest pipeline you are using.
Click on it and navigate to the _Failure processors_ section. If processors are configured, they may specify which field contains the failure details.
For example, the pipeline might store error information in an `error.message` field or a custom field defined in the _Failure processors_ configuration.
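For illustration, a hypothetical `on_failure` handler that records the failure message in `error.message` might look like this in the pipeline definition:

[source]
----
"on_failure": [
  {
    "set": {
      "field": "error.message",
      "value": "{{ _ingest.on_failure_message }}"
    }
  }
]
----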
Go to the {kib} Dev Tools, search the data the pipeline is writing (`GET {index-ingest-pipeline-is-writing}/_search`), and look for the fields mentioned in the failure processors.
These fields contain error details that help you analyze the root cause.
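A minimal sketch, assuming a hypothetical backing index pattern `logs-my.custom-default` and failure details stored in `error.message`:

[source]
----
GET logs-my.custom-default/_search
{
  "query": {
    "exists": { "field": "error.message" }
  }
}
----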
For further guidance, we recommend exploring the {fleet-guide}/integrations.html[Manage Elastic Agent Integrations] and {es} {ref}/ingest.html#handling-pipeline-failures[Handling pipeline failures] resources.