**manage-data/ingest.md** (2 additions, 2 deletions)
# Bring your data to Elastic
Whether you call it *adding*, *indexing*, or *ingesting* data, you have to get the data into {{es}} before you can search it, visualize it, and use it for insights.

Our ingest tools are flexible, and support a wide range of scenarios. We can help you with everything from popular and straightforward use cases, all the way to advanced use cases that require additional processing in order to modify or reshape your data before it goes to {{es}}.

Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools.

Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case.

<br>

| Integrations | Ingest data using a variety of Elastic integrations. |[Elastic Integrations](integration-docs://reference/index.md)|
| File upload | Upload data from a file and inspect it before importing it into {{es}}. |[Upload data files](/manage-data/ingest/upload-data-files.md)|
| APIs | Ingest data through code by using the APIs of one of the language clients or the {{es}} HTTP APIs. |[Document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)|
| OpenTelemetry | Collect and send your telemetry data to Elastic Observability. |[Elastic Distributions of OpenTelemetry](opentelemetry://reference/index.md)|
| Fleet and Elastic Agent | Add monitoring for logs, metrics, and other types of data to a host using Elastic Agent, and centrally manage it using Fleet. |[Fleet and {{agent}} overview](/reference/fleet/index.md) <br> [{{fleet}} and {{agent}} restrictions (Serverless)](/reference/fleet/fleet-agent-serverless-restrictions.md) <br> [{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md)|
| {{elastic-defend}} | {{elastic-defend}} provides organizations with prevention, detection, and response capabilities with deep visibility for EPP, EDR, SIEM, and Security Analytics use cases across Windows, macOS, and Linux operating systems running on both traditional endpoints and public cloud environments. |[Configure endpoint protection with {{elastic-defend}}](/solutions/security/configure-elastic-defend.md)|
| {{ls}} | Dynamically unify data from a wide variety of data sources and normalize it into destinations of your choice with {{ls}}. |[Logstash](logstash://reference/index.md)|
| {{beats}} | Use {{beats}} data shippers to send operational data to Elasticsearch directly or through Logstash. |[{{beats}}](beats://reference/index.md)|
| APM | Collect detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. |[Application performance monitoring (APM)](/solutions/observability/apm/index.md)|
| Application logs | Ingest application logs using Filebeat, {{agent}}, or the APM agent, or reformat application logs into Elastic Common Schema (ECS) logs and then ingest them using Filebeat or {{agent}}. |[Stream application logs](/solutions/observability/logs/stream-application-logs.md) <br> [ECS formatted application logs](/solutions/observability/logs/ecs-formatted-application-logs.md)|
| Elastic Serverless forwarder for AWS | Ship logs from your AWS environment to cloud-hosted, self-managed Elastic environments, or {{ls}}. |[Elastic Serverless Forwarder](elastic-serverless-forwarder://reference/index.md)|
| Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. |[Ingest content with Elastic connectors](elasticsearch://reference/search-connectors/index.md) <br> [Connector clients](elasticsearch://reference/search-connectors/index.md)|
| Web crawler | Discover, extract, and index searchable content from websites and knowledge bases using the web crawler. |[Elastic Open Web Crawler](https://github.com/elastic/crawler#readme)|
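
To make the APIs row concrete: with the {{es}} HTTP APIs, documents can be sent in batches to the `_bulk` endpoint as newline-delimited JSON (NDJSON). The following Python sketch builds such a payload; the index name and documents are hypothetical placeholders, not part of any real deployment:

```python
import json

# Hypothetical documents to ingest into a hypothetical index.
docs = [
    {"title": "First post", "views": 10},
    {"title": "Second post", "views": 25},
]

def build_bulk_body(index: str, documents: list[dict]) -> str:
    """Build an NDJSON _bulk request body: one action line per document."""
    lines = []
    for doc in documents:
        lines.append(json.dumps({"index": {"_index": index}}))  # action metadata
        lines.append(json.dumps(doc))                           # document source
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

body = build_bulk_body("my-index", docs)
print(body)
```

You would POST this body to your cluster's `_bulk` endpoint with `Content-Type: application/x-ndjson`; the official language clients wrap this same format in bulk helpers.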
**solutions/observability/logs.md** (36 additions, 8 deletions)
* [Run pattern analysis on log data](/solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data.
* [Troubleshoot logs](/troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs.

## Send log data to your project [observability-log-monitoring-send-logs-data-to-your-project]
You can send log data to your project in different ways depending on your needs. When choosing between these options, consider the different features and functionalities between them.

Refer to [Ingest tools overview](/manage-data/ingest/tools.md) for more information on which option best fits your situation.

The Elastic Distribution of OpenTelemetry (EDOT) Collector and SDKs provide native OpenTelemetry support for collecting logs, metrics, and traces. This approach is ideal for:

* Native OpenTelemetry: When you want to use OpenTelemetry standards and are already using OpenTelemetry in your environment.
* Full observability: When you need to collect logs, metrics, and traces from a single collector.
* Modern applications: When building new applications with OpenTelemetry instrumentation.
* Standards compliance: When you need to follow OpenTelemetry specifications.

For more information, refer to [Elastic Distribution of OpenTelemetry](opentelemetry://reference/index.md).
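
As an illustrative sketch only (the file paths, endpoint, and exact settings are hypothetical and vary by EDOT Collector version, so check the EDOT reference before using), a Collector pipeline that tails a log file and ships it to {{es}} might look like:

```yaml
receivers:
  filelog:
    include: [/var/log/myapp/*.log]   # hypothetical log path

exporters:
  elasticsearch:
    endpoints: ["https://my-deployment.es.example.com:443"]  # hypothetical endpoint
    api_key: "${env:ELASTIC_API_KEY}"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [elasticsearch]
```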
:::

:::{tab-item} {{agent}}

{{agent}} uses [integrations](https://www.elastic.co/integrations/data-integrations) to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an {{agent}}:
{{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing.

* [{{filebeat}} overview](beats://reference/filebeat/index.md): General information on {{filebeat}} and how it works.
* [{{filebeat}} quick start](beats://reference/filebeat/filebeat-installation-configuration.md): Basic installation instructions to get you started.
* [Set up and run {{filebeat}}](beats://reference/filebeat/setting-up-running.md): Information on how to install, set up, and run {{filebeat}}.

:::

:::{tab-item} {{ls}}

{{ls}} is a powerful data processing pipeline that can collect, transform, and enrich log data before sending it to Elasticsearch. It's ideal for:

* Complex data processing: When you need to parse, filter, and transform logs before indexing.
* Multiple data sources: When you need to collect logs from various sources and normalize them.
* Advanced use cases: When you need data enrichment, aggregation, or routing to multiple destinations.
* Extending Elastic integrations: When you want to add custom processing to data collected by Elastic Agent or Beats.

For more information, refer to [Logstash](logstash://reference/index.md) and [Using Logstash with Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md).

# Get started with system logs [observability-get-started-with-logs]

In this guide you can learn how to onboard system log data from a machine or server, then explore the data in **Discover**.

## Prerequisites [logs-prereqs]

::::{tab-set}
:group: stack-serverless

:::{tab-item} Elastic Stack
:sync: stack

To follow the steps in this guide, you need an {{stack}} deployment that includes:

* {{es}} for storing and searching data
* {{kib}} for visualizing and managing data
* A {{kib}} user with `All` privileges on {{fleet}} and Integrations. Because many Integrations assets are shared across spaces, users need these privileges in all spaces.

To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body).

:::

:::{tab-item} Serverless
:sync: serverless

The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).

:::

::::

## Onboard system log data [onboard-system-log-data]

Follow these steps to onboard system log data.

::::::{stepper}

:::::{step} Open your project

Open an [{{obs-serverless}} project](/solutions/observability/get-started.md) or Elastic Stack deployment.

:::::

:::::{step} Select data collection method

From the Observability UI, go to **Add data**. Under **What do you want to monitor?**, select **Host**, then select one of these options:

::::{tab-set}

:::{tab-item} OpenTelemetry: Full Observability

Collect native OpenTelemetry metrics and logs using the Elastic Distribution of OpenTelemetry (EDOT) Collector.

**Recommended for**: Users who want to collect native OpenTelemetry data or are already using OpenTelemetry in their environment.

:::

:::{tab-item} Elastic Agent: Logs & Metrics

Bring data from Elastic integrations using the Elastic Agent.

**Recommended for**: Users who want to leverage Elastic's pre-built integrations and centralized management through Fleet.

:::

::::

:::::

:::::{step} Follow setup instructions

Follow the in-product steps to auto-detect your logs and install and configure your chosen data collector.

:::::

:::::{step} Verify data collection

After the agent is installed and successfully streaming log data, you can view the data in the UI:

1. From the navigation menu, go to **Discover**.
2. Select **All logs** from the **Data views** menu. The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data.

:::::

:::::{step} Explore and analyze your data

Now that you have logs flowing into Elasticsearch, you can start exploring and analyzing your data:

* **[Explore logs in Discover](/solutions/observability/logs/explore-logs.md)**: Search, filter, and tail all your logs from a central location
* **[Parse and route logs](/solutions/observability/logs/parse-route-logs.md)**: Extract structured fields from unstructured logs and route them to specific data streams
* **[Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md)**: Filter logs by specific criteria and aggregate data to find patterns and gain insights

:::::

::::::

## Other ways to collect log data [other-data-collection-methods]
While the Elastic Agent and OpenTelemetry Collector are the recommended approaches for most users, Elastic provides additional tools for specific use cases:

::::{tab-set}

:::{tab-item} Filebeat

Filebeat is a lightweight data shipper that sends log data to Elasticsearch. It's ideal for:

* Simple log collection: When you need to collect logs from specific files or directories.
* Custom parsing: When you need to parse logs using ingest pipelines before indexing.
* Legacy systems: When you can't install the Elastic Agent or OpenTelemetry Collector.

For more information, refer to [Collecting log data with Filebeat](/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md) and [Ingest logs from applications using Filebeat](/solutions/observability/logs/plaintext-application-logs.md).
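
As a rough sketch (the input id, log path, and endpoint are hypothetical placeholders; consult the {{filebeat}} reference for your version before using), a minimal `filebeat.yml` that tails application log files and ships them to {{es}} might look like:

```yaml
filebeat.inputs:
  - type: filestream
    id: myapp-logs            # hypothetical input id
    paths:
      - /var/log/myapp/*.log  # hypothetical log path

output.elasticsearch:
  hosts: ["https://my-deployment.es.example.com:443"]  # hypothetical endpoint
  api_key: "id:api_key"       # replace with a real API key
```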

:::

:::{tab-item} Winlogbeat

Winlogbeat is specifically designed for collecting Windows event logs. It's ideal for:

* Windows environments: When you need to collect Windows security, application, and system event logs.
* Security monitoring: When you need detailed Windows security event data.
* Compliance requirements: When you need to capture specific Windows event IDs.

For more information, refer to the [Winlogbeat documentation](beats://reference/winlogbeat/index.md).

:::

:::{tab-item} Logstash

Logstash is a powerful data processing pipeline that can collect, transform, and enrich log data before sending it to Elasticsearch. It's ideal for:

* Complex data processing: When you need to parse, filter, and transform logs before indexing.
* Multiple data sources: When you need to collect logs from various sources and normalize them.
* Advanced use cases: When you need data enrichment, aggregation, or routing to multiple destinations.
* Extending Elastic integrations: When you want to add custom processing to data collected by Elastic Agent or Beats.

For more information, refer to [Logstash](logstash://reference/index.md) and [Using Logstash with Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md).
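
To illustrate the parse-and-transform idea, here is a hypothetical minimal pipeline sketch (the grok pattern, port, and endpoint are placeholder assumptions, not a recommended production configuration) that receives events from Beats, extracts a log level, and forwards the result to {{es}}:

```conf
input {
  beats { port => 5044 }  # receive events from Beats or Elastic Agent
}

filter {
  grok {
    # Hypothetical pattern for lines shaped like "INFO something happened"
    match => { "message" => "%{LOGLEVEL:log.level} %{GREEDYDATA:event.original}" }
  }
}

output {
  elasticsearch {
    hosts   => ["https://my-deployment.es.example.com:443"]  # hypothetical endpoint
    api_key => "id:api_key"                                  # replace with a real API key
  }
}
```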

:::

:::{tab-item} REST APIs

You can use Elasticsearch REST APIs to send log data directly to Elasticsearch. This approach is ideal for:

* Custom applications: When you want to send logs directly from your application code.
* Programmatic collection: When you need to collect logs using custom scripts or tools.
* Real-time streaming: When you need to send logs as they're generated.

For more information, refer to [Elasticsearch REST APIs](elasticsearch://reference/elasticsearch/rest-apis/index.md).
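
As a minimal Python sketch of this approach (the endpoint, data stream name, and field values are hypothetical placeholders), the snippet below builds the JSON body and URL you would POST to index one log event; the actual network call is commented out so the example runs without a live cluster:

```python
import json
from datetime import datetime, timezone

# Hypothetical cluster endpoint and data stream name (placeholders).
ES_URL = "https://my-deployment.es.example.com:443"
DATA_STREAM = "logs-myapp-default"

def build_log_event(message: str, level: str = "info") -> dict:
    """Build a minimal ECS-style log document."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "log.level": level,
        "message": message,
    }

doc = build_log_event("User login succeeded")
url = f"{ES_URL}/{DATA_STREAM}/_doc"  # POST target for a single document
body = json.dumps(doc)

# To actually send it (requires a reachable cluster and an API key):
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(), method="POST",
#                              headers={"Content-Type": "application/json",
#                                       "Authorization": "ApiKey <encoded-key>"})
# urllib.request.urlopen(req)

print(url)
```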

:::

::::

## Next steps [observability-get-started-with-logs-next-steps]
Now that you've added logs and explored your data, learn how to onboard other types of data: