4 changes: 2 additions & 2 deletions manage-data/ingest.md
@@ -16,9 +16,9 @@ products:
- id: elasticsearch
---

# Ingestion
# Bring your data to Elastic

Bring your data! Whether you call it *adding*, *indexing*, or *ingesting* data, you have to get the data into {{es}} before you can search it, visualize it, and use it for insights.
Whether you call it *adding*, *indexing*, or *ingesting* data, you have to get the data into {{es}} before you can search it, visualize it, and use it for insights.

Our ingest tools are flexible and support a wide range of scenarios. We can help you with everything from popular and straightforward use cases, all the way to advanced use cases that require additional processing to modify or reshape your data before it goes to {{es}}.

9 changes: 5 additions & 4 deletions manage-data/ingest/tools.md
@@ -40,7 +40,9 @@ $$$supported-outputs-beats-and-agent$$$

$$$additional-capabilities-beats-and-agent$$$

Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools. Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case.
Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools.

Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case.

<br>

@@ -49,14 +51,13 @@ Depending on the type of data you want to ingest, you have a number of methods a
| Integrations | Ingest data using a variety of Elastic integrations. | [Elastic Integrations](integration-docs://reference/index.md) |
| File upload | Upload data from a file and inspect it before importing it into {{es}}. | [Upload data files](/manage-data/ingest/upload-data-files.md) |
| APIs | Ingest data through code by using the APIs of one of the language clients or the {{es}} HTTP APIs. | [Document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) |
| OpenTelemetry | Collect and send your telemetry data to Elastic Observability | [Elastic Distributions of OpenTelemetry](opentelemetry://reference/index.md) |
| OpenTelemetry | Collect and send your telemetry data to Elastic Observability. | [Elastic Distributions of OpenTelemetry](opentelemetry://reference/index.md) |
| Fleet and Elastic Agent | Add monitoring for logs, metrics, and other types of data to a host using Elastic Agent, and centrally manage it using Fleet. | [Fleet and {{agent}} overview](/reference/fleet/index.md) <br> [{{fleet}} and {{agent}} restrictions (Serverless)](/reference/fleet/fleet-agent-serverless-restrictions.md) <br> [{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md) |
| {{elastic-defend}} | {{elastic-defend}} provides organizations with prevention, detection, and response capabilities with deep visibility for EPP, EDR, SIEM, and Security Analytics use cases across Windows, macOS, and Linux operating systems running on both traditional endpoints and public cloud environments. | [Configure endpoint protection with {{elastic-defend}}](/solutions/security/configure-elastic-defend.md) |
| {{ls}} | Dynamically unify data from a wide variety of data sources and normalize it into destinations of your choice with {{ls}}. | [Logstash](logstash://reference/index.md) |
| {{beats}} | Use {{beats}} data shippers to send operational data to Elasticsearch directly or through Logstash. | [{{beats}}](beats://reference/index.md) |
| APM | Collect detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | [Application performance monitoring (APM)](/solutions/observability/apm/index.md) |
| Application logs | Ingest application logs using Filebeat, {{agent}}, or the APM agent, or reformat application logs into Elastic Common Schema (ECS) logs and then ingest them using Filebeat or {{agent}}. | [Stream application logs](/solutions/observability/logs/stream-application-logs.md) <br> [ECS formatted application logs](/solutions/observability/logs/ecs-formatted-application-logs.md) |
| Elastic Serverless forwarder for AWS | Ship logs from your AWS environment to cloud-hosted, self-managed Elastic environments, or {{ls}}. | [Elastic Serverless Forwarder](elastic-serverless-forwarder://reference/index.md) |
| Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. | [Ingest content with Elastic connectors
](elasticsearch://reference/search-connectors/index.md) <br> [Connector clients](elasticsearch://reference/search-connectors/index.md) |
| Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. | [Ingest content with Elastic connectors](elasticsearch://reference/search-connectors/index.md) <br> [Connector clients](elasticsearch://reference/search-connectors/index.md) |
| Web crawler | Discover, extract, and index searchable content from websites and knowledge bases using the web crawler. | [Elastic Open Web Crawler](https://github.com/elastic/crawler#readme) |
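
As a minimal illustration of the APIs row above, a single document can be indexed with one {{es}} HTTP request. This is a sketch only — the index name and fields are placeholders, not part of any product default:

```console
POST /my-logs/_doc
{
  "@timestamp": "2025-01-01T12:00:00Z",
  "message": "user logged in",
  "log.level": "info"
}
```

In practice you would usually use one of the language clients, which wrap this same endpoint.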
4 changes: 2 additions & 2 deletions solutions/observability/get-started.md
@@ -105,7 +105,7 @@ Elastic provides a powerful LLM observability framework including key metrics, l

Refer to [LLM observability](/solutions/observability/applications/llm-observability.md) for more information.
:::

::::
:::::
::::::

@@ -178,5 +178,5 @@ Many [{{observability}} integrations](https://www.elastic.co/integrations/data-i
### Other resources

* [What's Elastic {{observability}}](/solutions/observability/get-started/what-is-elastic-observability.md)
* [Whats new in Elastic Stack](/release-notes/elastic-observability/index.md)
* [What's new in Elastic Stack](/release-notes/elastic-observability/index.md)
* [{{obs-serverless}} billing dimensions](/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md)
44 changes: 36 additions & 8 deletions solutions/observability/logs.md
@@ -20,18 +20,29 @@ Elastic Observability allows you to deploy and manage logs at a petabyte scale,
* [Run pattern analysis on log data](/solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data.
* [Troubleshoot logs](/troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs.

## Send log data to your project [observability-log-monitoring-send-logs-data-to-your-project]

## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project]
You can send log data to your project in different ways depending on your needs. When choosing between these options, consider the features and functionality each one provides.

You can send logs data to your project in different ways depending on your needs:
Refer to [Ingest tools overview](/manage-data/ingest/tools.md) for more information on which option best fits your situation.

* {{agent}}
* {{filebeat}}

When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md) for more information on which option best fits your situation.
::::{tab-set}

:::{tab-item} {{edot}}

### {{agent}} [observability-log-monitoring-agent]
The Elastic Distribution of OpenTelemetry (EDOT) Collector and SDKs provide native OpenTelemetry support for collecting logs, metrics, and traces. This approach is ideal for:

* Native OpenTelemetry: When you want to use OpenTelemetry standards and are already using OpenTelemetry in your environment.
* Full observability: When you need to collect logs, metrics, and traces from a single collector.
* Modern applications: When building new applications with OpenTelemetry instrumentation.
* Standards compliance: When you need to follow OpenTelemetry specifications.

For more information, refer to [Elastic Distribution of OpenTelemetry](opentelemetry://reference/index.md).
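
The exact configuration depends on your EDOT Collector distribution and version; as a rough sketch, an OpenTelemetry Collector pipeline that tails a log file and forwards it to {{es}} might look like the following. The file path, endpoint, and environment variable are placeholders:

```yaml
receivers:
  filelog:
    include: [/var/log/my-app/*.log]   # log files to tail (placeholder path)

exporters:
  elasticsearch:
    endpoints: ["https://my-deployment.es.example.com:443"]
    api_key: "${env:ELASTIC_API_KEY}"  # assumes the key is set in the environment

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [elasticsearch]
```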

:::

:::{tab-item} {{agent}}

{{agent}} uses [integrations](https://www.elastic.co/integrations/data-integrations) to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an {{agent}}:

@@ -45,7 +56,7 @@ See [install {{fleet}}-managed {{agent}}](/reference/fleet/install-fleet-managed

#### Standalone {{agent}} [observability-log-monitoring-standalone-agent]

Install an {{agent}} and manually configure it locally on the system where its installed. You are responsible for managing and upgrading the agents.
Install an {{agent}} and manually configure it locally on the system where it's installed. You are responsible for managing and upgrading the agents.

See [install standalone {{agent}}](/reference/fleet/install-standalone-elastic-agent.md).

@@ -56,15 +67,32 @@ Run an {{agent}} inside of a container — either with {{fleet-server}} or stand

See [install {{agent}} in containers](/reference/fleet/install-elastic-agents-in-containers.md).

:::

### {{filebeat}} [observability-log-monitoring-filebeat]
:::{tab-item} {{filebeat}}

{{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing.

* [{{filebeat}} overview](beats://reference/filebeat/index.md): General information on {{filebeat}} and how it works.
* [{{filebeat}} quick start](beats://reference/filebeat/filebeat-installation-configuration.md): Basic installation instructions to get you started.
* [Set up and run {{filebeat}}](beats://reference/filebeat/setting-up-running.md): Information on how to install, set up, and run {{filebeat}}.
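
As a sketch, a minimal `filebeat.yml` that tails a log file and ships it to {{es}} might look like this. The input ID, paths, host, and credentials are placeholders:

```yaml
filebeat.inputs:
  - type: filestream
    id: my-app-logs                # unique ID for this filestream input
    paths:
      - /var/log/my-app/*.log      # placeholder path

output.elasticsearch:
  hosts: ["https://my-deployment.es.example.com:443"]
  api_key: "id:api_key"            # placeholder; username/password also works
```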

:::

:::{tab-item} {{ls}}

{{ls}} is a powerful data processing pipeline that can collect, transform, and enrich log data before sending it to Elasticsearch. It's ideal for:

* Complex data processing: When you need to parse, filter, and transform logs before indexing.
* Multiple data sources: When you need to collect logs from various sources and normalize them.
* Advanced use cases: When you need data enrichment, aggregation, or routing to multiple destinations.
* Extending Elastic integrations: When you want to add custom processing to data collected by Elastic Agent or Beats.

For more information, refer to [Logstash](logstash://reference/index.md) and [Using Logstash with Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md).
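
For example, a minimal pipeline that parses a plain-text log line into structured fields before indexing might look like the following sketch. The file path, grok pattern, and output settings are illustrative, not a recommended production configuration:

```
# illustrative pipeline: file input -> grok parse -> Elasticsearch output
input {
  file { path => "/var/log/my-app/*.log" }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log.level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts   => ["https://my-deployment.es.example.com:443"]
    api_key => "id:api_key"   # placeholder credentials
  }
}
```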

:::

::::

## Configure logs [observability-log-monitoring-configure-logs]

2 changes: 1 addition & 1 deletion solutions/observability/logs/discover-logs.md
@@ -22,7 +22,7 @@ For a contextual logs experience, set the **Solution view** for your space to **

:::{image} ../../images/observability-log-explorer.png
:alt: Screen capture of Discover
:class: screenshot
:screenshot:
:::

## Required {{kib}} privileges [logs-explorer-privileges]
148 changes: 138 additions & 10 deletions solutions/observability/logs/get-started-with-system-logs.md
@@ -10,33 +10,161 @@ products:

# Get started with system logs [observability-get-started-with-logs]

::::{note}
In this guide, you'll learn how to onboard system log data from a machine or server, then explore the data in **Discover**.

**For Observability Serverless projects**, the **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).
## Prerequisites [logs-prereqs]

::::{tab-set}
:group: stack-serverless

:::{tab-item} Elastic Stack
:sync: stack

To follow the steps in this guide, you need an {{stack}} deployment that includes:

* {{es}} for storing and searching data
* {{kib}} for visualizing and managing data
* A {{kib}} user with `All` privileges on {{fleet}} and Integrations. Because many integration assets are shared across spaces, users need these {{kib}} privileges in all spaces.

To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).

:::

:::{tab-item} Serverless
:sync: serverless

The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).

:::

::::

## Onboard system log data [onboard-system-log-data]

Follow these steps to onboard system log data.

::::::{stepper}

:::::{step} Open your project

Open an [{{obs-serverless}} project](/solutions/observability/get-started.md) or Elastic Stack deployment.

:::::

:::::{step} Select data collection method

In this guide you’ll learn how to onboard system log data from a machine or server, then observe the data in **Discover**.
From the Observability UI, go to **Add data**. Under **What do you want to monitor?**, select **Host**, then select one of these options:

To onboard system log data:
::::{tab-set}
:::{tab-item} OpenTelemetry: Full Observability

1. Open an [{{obs-serverless}} project](/solutions/observability/get-started.md) or Elastic Stack deployment.
2. From the Observability UI, go to **Add data**.
3. Under **What do you want to monitor?**, select **Host** → **Elastic Agent: Logs & Metrics**.
4. Follow the in-product steps to auto-detect your logs and install and configure the {{agent}}.
Collect native OpenTelemetry metrics and logs using the Elastic Distribution of OpenTelemetry (EDOT) Collector.

**Recommended for**: Users who want to collect native OpenTelemetry data or are already using OpenTelemetry in their environment.

:::

:::{tab-item} Elastic Agent: Logs & Metrics

Bring data from Elastic integrations using the Elastic Agent.

**Recommended for**: Users who want to leverage Elastic's pre-built integrations and centralized management through Fleet.

:::

::::
:::::

:::::{step} Follow setup instructions

Follow the in-product steps to auto-detect your logs and install and configure your chosen data collector.

:::::

:::::{step} Verify data collection

After the agent is installed and successfully streaming log data, you can view the data in the UI:

1. From the navigation menu, go to **Discover**.
1. Select **All logs** from the **Data views** menu. The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data.
2. Select **All logs** from the **Data views** menu. The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data.

:::::

:::::{step} Explore and analyze your data

Now that you have logs flowing into Elasticsearch, you can start exploring and analyzing your data:

* **[Explore logs in Discover](/solutions/observability/logs/explore-logs.md)**: Search, filter, and tail all your logs from a central location
* **[Parse and route logs](/solutions/observability/logs/parse-route-logs.md)**: Extract structured fields from unstructured logs and route them to specific data streams
* **[Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md)**: Filter logs by specific criteria and aggregate data to find patterns and gain insights

:::::

::::::

## Other ways to collect log data [other-data-collection-methods]

While the Elastic Agent and OpenTelemetry Collector are the recommended approaches for most users, Elastic provides additional tools for specific use cases:

::::{tab-set}

:::{tab-item} Filebeat

Filebeat is a lightweight data shipper that sends log data to Elasticsearch. It's ideal for:

* Simple log collection: When you need to collect logs from specific files or directories.
* Custom parsing: When you need to parse logs using ingest pipelines before indexing.
* Legacy systems: When you can't install the Elastic Agent or OpenTelemetry Collector.

For more information, refer to [Collecting log data with Filebeat](/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md) and [Ingest logs from applications using Filebeat](/solutions/observability/logs/plaintext-application-logs.md).
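
The custom-parsing case above typically pairs {{filebeat}} with an ingest pipeline. As an illustrative sketch, a pipeline that splits a plain-text line into structured fields might be defined like this — the pipeline name and dissect pattern are placeholders for your own log format:

```console
PUT _ingest/pipeline/my-app-logs
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{msg}"
      }
    }
  ]
}
```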

:::

:::{tab-item} Winlogbeat

Winlogbeat is specifically designed for collecting Windows event logs. It's ideal for:

* Windows environments: When you need to collect Windows security, application, and system event logs.
* Security monitoring: When you need detailed Windows security event data.
* Compliance requirements: When you need to capture specific Windows event IDs.

For more information, refer to the [Winlogbeat documentation](beats://reference/winlogbeat/index.md).
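
As a sketch, a minimal `winlogbeat.yml` that collects the standard Windows channels might look like the following. The output host and credentials are placeholders:

```yaml
winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security
    ignore_older: 72h              # skip events older than 72 hours

output.elasticsearch:
  hosts: ["https://my-deployment.es.example.com:443"]
  api_key: "id:api_key"            # placeholder credentials
```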

:::

:::{tab-item} Logstash

Logstash is a powerful data processing pipeline that can collect, transform, and enrich log data before sending it to Elasticsearch. It's ideal for:

* Complex data processing: When you need to parse, filter, and transform logs before indexing.
* Multiple data sources: When you need to collect logs from various sources and normalize them.
* Advanced use cases: When you need data enrichment, aggregation, or routing to multiple destinations.
* Extending Elastic integrations: When you want to add custom processing to data collected by Elastic Agent or Beats.

For more information, refer to [Logstash](logstash://reference/index.md) and [Using Logstash with Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md).

:::

:::{tab-item} REST APIs

You can use Elasticsearch REST APIs to send log data directly to Elasticsearch. This approach is ideal for:

* Custom applications: When you want to send logs directly from your application code.
* Programmatic collection: When you need to collect logs using custom scripts or tools.
* Real-time streaming: When you need to send logs as they're generated.

For more information, refer to [Elasticsearch REST APIs](elasticsearch://reference/elasticsearch/rest-apis/index.md).
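
For instance, a batch of log documents can be sent in a single request with the bulk API. This is a sketch — the data stream name and document fields are placeholders:

```console
POST /logs-myapp-default/_bulk
{ "create": {} }
{ "@timestamp": "2025-01-01T12:00:00Z", "message": "user logged in", "log.level": "info" }
{ "create": {} }
{ "@timestamp": "2025-01-01T12:00:05Z", "message": "user logged out", "log.level": "info" }
```

The `create` action is used here because data streams only accept newly created documents.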

:::

::::

## Next steps [observability-get-started-with-logs-next-steps]

Now that youve added logs and explored your data, learn how to onboard other types of data:
Now that you've added logs and explored your data, learn how to onboard other types of data:

* [Stream any log file](stream-any-log-file.md)
* [Stream application logs](stream-application-logs.md)
* [Get started with traces and APM](/solutions/observability/apm/get-started.md)

To onboard other types of data, select **Add Data** from the main menu.