---
navigation_title: Collector won't start
description: Learn what to do when the EDOT Collector doesn’t start.
applies_to:
  stack:
  serverless:
    observability:
  product:
    edot_collector: ga
products:
  - id: cloud-serverless
  - id: observability
  - id: edot-collector
---

# EDOT Collector doesn’t start


If your EDOT Collector fails to start, it's often due to configuration or environment-related issues. This guide walks you through the most common root causes and how to resolve them.

## Symptoms

EDOT Collector fails to start or crashes immediately after launch.

Possible causes include:

* Invalid YAML configuration, including syntax errors or unsupported fields
* Port binding conflicts, for example ports 4317 or 4318 already in use
* Missing or misconfigured required components, such as `receivers` or `exporters`
* Incorrect permissions or volume mounts in containerized environments

## Resolution

The solution depends on your EDOT Collector's setup:

* [Standalone](#standalone-edot-collector)
* [Kubernetes](#kubernetes-edot-collector)

### Standalone EDOT Collector

If you're deploying the EDOT Collector in a standalone configuration, try the following:

* Validate configuration syntax

Run the following to validate your configuration without starting the Collector:

```bash
edot-collector --config=/path/to/otel-collector-config.yaml --dry-run
```

This checks for syntax errors and missing components.

EDOT fully supports `--dry-run`, just like the upstream Collector.

* Check logs for stack traces or component errors

Review the Collector logs for error messages indicating configuration problems, for example:

```
error initializing exporter: no endpoint specified
```

Most critical issues, such as missing or invalid exporters or receivers, will be logged.

To increase verbosity, run the Collector with:

```bash
--log-level=debug
```

This is especially helpful for diagnosing configuration parsing issues or startup errors.


* Confirm required components are defined

Ensure `service.pipelines` references valid `receivers`, `processors`, and `exporters`. The minimal configuration depends on your use case:

* **For logs**: `filelog` receiver, `resourcedetection` processor, `elasticsearch` exporter

* **For traces**: `otlp` receiver, `elastictrace` and `elasticapm` processors, `elasticsearch` exporter

* **For the managed OTLP endpoint**: use the receivers relevant to your data and send it with the `otlp` exporter

Refer to [Default configuration of the EDOT Collector (Standalone)](opentelemetry://reference/edot-collector/config/default-config-standalone.md) for full examples for each use case.
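
For reference, here's a minimal sketch of a logs pipeline along these lines. The log path, endpoint, and API key are placeholders to adapt to your environment:

```yaml
receivers:
  filelog:
    include: [ /var/log/*.log ]   # placeholder; point this at your log files

processors:
  resourcedetection:
    detectors: [ system ]

exporters:
  elasticsearch:
    endpoint: https://my-deployment.es.example.com:443   # placeholder endpoint
    api_key: ${env:ELASTIC_API_KEY}                      # placeholder credential

service:
  pipelines:
    logs:
      receivers: [ filelog ]
      processors: [ resourcedetection ]
      exporters: [ elasticsearch ]
```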

* Check for port conflicts

By default, EDOT uses:

* 4317 for OTLP/gRPC
* 4318 for OTLP/HTTP

EDOT doesn't change these defaults from the upstream Collector.

Run this to check if a port is in use:

```bash
lsof -i :4317
```

If needed, adjust your configuration or free up the port.
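
If you'd rather move the Collector to different ports than free up the existing ones, you can override the OTLP receiver endpoints in your configuration. A sketch with arbitrary alternative ports:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14317   # example alternative to the default 4317
      http:
        endpoint: 0.0.0.0:14318   # example alternative to the default 4318
```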

### Kubernetes EDOT Collector

If you're deploying the EDOT Collector using the Elastic Helm charts, try the following:

* Double-check your custom `values.yaml` or `--set` overrides for typos or missing fields.
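
One way to catch malformed values before deploying is to render the chart locally with `helm template`; the release name and chart reference below are placeholders for your own setup:

```bash
# Render the chart without installing it; YAML syntax errors in values.yaml
# surface here instead of at pod startup. Angle-bracketed names are placeholders.
helm template <release-name> <chart> -f values.yaml > /dev/null
```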

* Ensure volume mounts are correctly defined if you're using custom configuration files.

* If you're managing the Collector through {{fleet}}, confirm that the policy includes a valid configuration and hasn't been corrupted or partially applied.

* Use `kubectl logs <collector-pod>` to get Collector logs and diagnose startup failures.
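
If the pod is crash-looping, the current container may not have produced logs yet; the previous container's output is often more useful. For example (pod name and namespace are placeholders):

```bash
# Logs from the previous (crashed) container instance of the Collector pod
kubectl logs <collector-pod> -n <namespace> --previous
```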

* Check the status of the pod using:

```bash
kubectl describe pod <collector-pod>
```

Common issues include volume mount errors, image pull failures, or misconfigured environment variables.

## Resources

* [Upstream Collector configuration documentation](https://opentelemetry.io/docs/collector/configuration/)
* [Elastic Stack Kubernetes Helm Charts](https://github.com/elastic/helm-charts)