manage-data/data-store/data-streams/logs-data-stream.md (6 changes: 3 additions & 3 deletions)

@@ -65,7 +65,7 @@ In `logsdb` index mode, indices are sorted by the fields `host.name` and `@timestamp`…

* To prioritize the latest data, `host.name` is sorted in ascending order and `@timestamp` is sorted in descending order.

-You can override the default sort settings by manually configuring `index.sort.field` and `index.sort.order`. For more details, see [*Index Sorting*](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-settings/index-sorting-settings.md).
+You can override the default sort settings by manually configuring `index.sort.field` and `index.sort.order`. For more details, see [Index Sorting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-settings/index-sorting-settings.md).

To modify the sort configuration of an existing data stream, update the data stream’s component templates, and then perform or wait for a [rollover](../data-streams.md#data-streams-rollover).
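For context (an editor's sketch, not part of the diff): overriding the defaults could look like the following, where the component template name is an illustrative assumption and both fields are sorted ascending (oldest first) as an example.

```json
PUT _component_template/logs@custom
{
  "template": {
    "settings": {
      "index.sort.field": ["host.name", "@timestamp"],
      "index.sort.order": ["asc", "asc"]
    }
  }
}
```

Because sort settings apply only when a backing index is created, the new configuration takes effect at the next rollover, as the paragraph above notes.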

@@ -176,6 +176,6 @@ The `logsdb` index mode uses the following settings:
* **`index.mapping.total_fields.ignore_dynamic_beyond_limit`**: `true`


-## Notes about upgrading to Logsdb [upgrade-to-logsdb-notes]
+% ## Notes about upgrading to Logsdb [upgrade-to-logsdb-notes]

-TODO: add notes.
+% TODO: add notes.
manage-data/data-store/data-streams/set-up-tsds.md (2 changes: 1 addition & 1 deletion)

@@ -220,7 +220,7 @@ The reason for this is that each component template needs to be valid on its own…

Now that you’ve set up your TSDS, you can manage and use it like a regular data stream. For more information, refer to:

-* [*Use a data stream*](use-data-stream.md)
+* [Use a data stream](use-data-stream.md)
* [Change mappings and settings for a data stream](modify-data-stream.md#data-streams-change-mappings-and-settings)
* [data stream APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream)
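For context (an editor's sketch, not part of the diff): a minimal TSDS index template of the kind this page sets up might look like the following; every name, pattern, and field here is a placeholder.

```json
PUT _index_template/metrics-weather
{
  "index_patterns": ["metrics-weather-*"],
  "data_stream": {},
  "priority": 500,
  "template": {
    "settings": {
      "index.mode": "time_series",
      "index.routing_path": ["sensor_id"]
    },
    "mappings": {
      "properties": {
        "sensor_id":   { "type": "keyword", "time_series_dimension": true },
        "temperature": { "type": "double", "time_series_metric": "gauge" }
      }
    }
  }
}
```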

manage-data/data-store/mapping.md (24 changes: 0 additions & 24 deletions)

@@ -6,30 +6,6 @@ mapped_urls:

# Mapping

-% What needs to be done: Refine
-
-% GitHub issue: docs-projects#370
-
-% Scope notes: Use the content in the linked source and add links to the relevent reference content.
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/mapping.md
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/index-modules-mapper.md
-% Notes: redirect only
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$mapping-limit-settings$$$
-
-$$$updating-field-mappings$$$
-
-$$$mapping-manage-update$$$
-
-$$$mapping-dynamic$$$
-
-$$$mapping-explicit$$$

Mapping is the process of defining how a document and the fields it contains are stored and indexed.

Each document is a collection of fields, which each have their own [data type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/field-data-types.md). When mapping your data, you create a mapping definition, which contains a list of fields that are pertinent to the document. A mapping definition also includes [metadata fields](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/document-metadata-fields.md), like the `_source` field, which customize how a document’s associated metadata is handled.
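For context (an editor's sketch, not part of the diff): an explicit mapping definition of the kind described here, with a hypothetical index name and fields, looks like this.

```json
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "age":     { "type": "integer" },
      "email":   { "type": "keyword" },
      "name":    { "type": "text" },
      "created": { "type": "date", "format": "yyyy-MM-dd" }
    }
  }
}
```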
@@ -6,17 +6,6 @@ mapped_urls:

# Ignore missing component templates

-% What needs to be done: Refine
-
-% GitHub issue: docs-projects#372
-
-% Scope notes: Combine with "Usage example" subpage.
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/ignore_missing_component_templates.md
-% - [x] ./raw-migrated-files/elasticsearch/elasticsearch-reference/_usage_example.md

When an index template references a component template that might not exist, the `ignore_missing_component_templates` configuration option can be used. With this option, every time a data stream is created based on the index template, the existence of the component template will be checked. If it exists, it will be used to form the index’s composite settings. If it is missing, it will be ignored.

The following example demonstrates how this configuration option works.
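For context (an editor's sketch, not part of the diff, which truncates the page's own example here): an index template that tolerates a missing component template might look like the following; the template and pattern names are placeholders.

```json
PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "data_stream": {},
  "composed_of": ["logs@custom"],
  "ignore_missing_component_templates": ["logs@custom"],
  "priority": 500
}
```

If `logs@custom` is created later, new backing indices would pick it up, for example at the next rollover.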
@@ -172,7 +172,7 @@ Loaded dashboards
2. Scroll down to the *Elasticsearch Output* section. Place a comment pound sign (*#*) in front of *output.elasticsearch* and {{es}} *hosts*.
3. Scroll down to the *Logstash Output* section. Remove the comment pound sign (*#*) from in front of *output.logstash* and *hosts*, as follows:

-```txt
+```json
# ---------------- Logstash Output -----------------
output.logstash:
# The Logstash hosts
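  # (Editor's sketch, not part of the diff: the uncommented lines that
  # typically follow in this section; host and port are placeholder values.)
  hosts: ["localhost:5044"]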
@@ -219,19 +219,21 @@ The system module is now enabled in Filebeat and it will be used the next time Filebeat…

**Load the Filebeat Kibana dashboards**

-Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view *filebeat-**, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet.
+Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view **filebeat-**, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet.

1. Open a command line instance and then go to *<localpath>/filebeat-<version>/*
2. Run the following command:

-```txt
+```json
sudo ./filebeat setup \
-E cloud.id=<cloudID> \ <1>
-E cloud.auth=<username>:<password> <2>
```

1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `<Deploymentname>:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details.
-2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between *<username>* and *<password>*.::::{important}
+2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between *<username>* and *<password>*.
+
+::::{important}
Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the filebeat.yml.
::::

@@ -265,7 +267,7 @@ The data views for *filebeat-** and *metricbeat-** are now available in {{es}}.
2. Scroll down to the *Outputs* section. Place a comment pound sign (*#*) in front of *output.elasticsearch* and {{es}} *hosts*.
3. Scroll down to the *Logstash Output* section. Remove the comment pound sign (*#*) from in front of *output.logstash* and *hosts* as follows:

-```txt
+```json
# ---------------- Logstash Output -----------------
output.logstash:
# The Logstash hosts
@@ -283,7 +285,7 @@ Now the Filebeat and Metricbeat are set up, let’s configure a {{ls}} pipeline
1. In *<localpath>/logstash-<version>/*, create a new text file named *beats.conf*.
2. Copy and paste the following code into the new text file. This code creates a {{ls}} pipeline that listens for connections from Beats on port 5044 and writes to standard out (typically to your terminal) with formatting provided by the {{ls}} rubydebug output plugin.

-```txt
+```json
input {
beats{port => 5044} <1>
}
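# (Editor's sketch, not part of the diff: the output section described in
# step 2 above, writing to stdout with the rubydebug codec.)
output {
  stdout { codec => rubydebug }
}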
@@ -423,7 +425,7 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data to…
1. In your *<localpath>/logstash-<version>/* folder, open *beats.conf* for editing.
2. Replace the *output {}* section of the JSON with the following code:

-```txt
+```json
output {
elasticsearch {
index => "%{[@metadata][beat]}-%{[@metadata][version]}"
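    # (Editor's sketch, not part of the diff: connection settings that would
    # plausibly complete this output; both values are placeholders.)
    cloud_id => "<cloudID>"
    cloud_auth => "<username>:<password>"
  }
}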
@@ -316,7 +316,7 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch.
1. Open the `jdbc.conf` file in the Logstash folder for editing.
2. Update the output section with the one that follows:

-```txt
+```json
output {
elasticsearch {
index => "rdbms_idx"
@@ -169,7 +169,7 @@ Filebeat offers a straightforward, easy to configure way to monitor your Python…

In *<localpath>/filebeat-<version>/* (where *<localpath>* is the directory where Filebeat is installed and *<version>* is the Filebeat version number), open the *filebeat.yml* configuration file for editing.

-```txt
+```json
# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
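# (Editor's sketch, not part of the diff: the two settings these comments
# introduce, shown uncommented and with placeholder values.)
cloud.id: "<deployment-name>:<base64-cloud-id>"
cloud.auth: "<username>:<password>"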
Expand Down
manage-data/ingest/tools.md (22 changes: 0 additions & 22 deletions)

@@ -10,28 +10,6 @@ mapped_urls:

# Ingest tools overview

-% What needs to be done: Finish draft
-
-% GitHub issue: docs-projects#327
-
-% Scope notes: Read more about the scope in the tracking issue
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md
-% Notes: These are resources to pull from, but this new "Ingest tools overiew" page will not be a replacement for any of these old AsciiDoc pages. File upload: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#upload-data-kibana https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-file-upload.html API: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#_add_data_with_programming_languages https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-api.html OpenTelemetry: https://github.com/elastic/opentelemetry Fleet and Agent: https://www.elastic.co/guide/en/fleet/current/fleet-overview.html https://www.elastic.co/guide/en/serverless/current/fleet-and-elastic-agent.html Logstash: https://www.elastic.co/guide/en/logstash/current/introduction.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html https://www.elastic.co/guide/en/serverless/current/logstash-pipelines.html Beats: https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-beats.html APM: /solutions/observability/apps/application-performance-monitoring-apm.md Application logging: https://www.elastic.co/guide/en/observability/current/application-logs.html ECS logging: https://www.elastic.co/guide/en/observability/current/logs-ecs-application.html Elastic serverless forwarder for AWS: https://www.elastic.co/guide/en/esf/current/aws-elastic-serverless-forwarder.html Integrations: https://www.elastic.co/guide/en/integrations/current/introduction.html Search connectors: https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-integrations-connector-client.html Web crawler: https://github.com/elastic/crawler/tree/main/docs
-% - [This comparison page is being moved to the reference section, so I'm linking to that from the current page - Wajiha] ./raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
-% - [x] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md
-% - [x] https://www.elastic.co/customer-success/data-ingestion
-% - [x] https://github.com/elastic/ingest-docs/pull/1373
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-% These IDs are from content that I'm not including on this current page. I've resolved them by changing the internal links to anchor links where needed. - Wajiha
-
-$$$supported-outputs-beats-and-agent$$$
-
-$$$additional-capabilities-beats-and-agent$$$

Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools. Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case.

<br>
manage-data/ingest/upload-data-files.md (7 changes: 0 additions & 7 deletions)

@@ -6,13 +6,6 @@ mapped_urls:

# Upload data files [upload-data-kibana]

-% What needs to be done: Align serverless/stateful
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md
-% - [x] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md

% Note from David: I've removed the ID $$$upload-data-kibana$$$ from manage-data/ingest.md as those links should instead point to this page. So, please ensure that the following ID is included on this page. I've added it beside the title.

You can upload files, view their fields and metrics, and optionally import them to {{es}} with the Data Visualizer.
troubleshoot/elasticsearch/troubleshooting-searches.md (2 changes: 1 addition & 1 deletion)

@@ -108,7 +108,7 @@ GET /my-index-000001/_analyze
}
```
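For context (an editor's sketch, not part of the diff, which elides the request body above): an `_analyze` request of this shape, with a hypothetical field name, returns the tokens the field's analyzer produces for a sample value.

```json
GET /my-index-000001/_analyze
{
  "field": "my-field",
  "text": "this is a test"
}
```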

-To change the mapping of an existing field, refer to [Changing the mapping of a field](../../manage-data/data-store/mapping.md#updating-field-mappings).
+To change the mapping of an existing field, refer to [Manage and update mappings](../../manage-data/data-store/mapping.md#mapping-manage-update).


## Check the field’s values [troubleshooting-check-field-values]