@@ -341,7 +341,7 @@ If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem is

If the volume driver does not support `ExpandInUsePersistentVolumes`, you must manually delete Pods after the resize so that they can be recreated automatically with the expanded filesystem.

-Any other changes in the volumeClaimTemplates—such as changing the storage class or decreasing the volume size—are not allowed. To make changes such as these, you must fully delete the {{ls}} resource, delete and recreate or resize the volume, and create a new {{ls}} resource.
+Any other changes in the volumeClaimTemplates— such as changing the storage class or decreasing the volume size— are not allowed. To make changes such as these, you must fully delete the {{ls}} resource, delete and recreate or resize the volume, and create a new {{ls}} resource.

Before you delete a persistent queue (PQ) volume, ensure that the queue is empty. We recommend setting `queue.drain: true` on the {{ls}} Pods to ensure that the queue is drained when Pods are shut down. Note that you should also increase the `terminationGracePeriodSeconds` to a large enough value to allow the queue to drain.
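
For reference (this sketch is not part of the diff), the two settings mentioned above might be combined on an ECK-managed {{ls}} resource as follows; the resource name, version, count, and grace period are illustrative assumptions:

```yaml
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: logstash-sample
spec:
  version: 8.16.0
  count: 1
  config:
    # Persist the queue and drain it before the process exits.
    queue.type: persisted
    queue.drain: true
  podTemplate:
    spec:
      # Give the queue enough time to drain on shutdown (value is illustrative).
      terminationGracePeriodSeconds: 3600
```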

deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md (1 addition & 1 deletion)
@@ -12,7 +12,7 @@ products:

The power of {{ls}} is in the plugins--[inputs](logstash-docs-md://lsr/input-plugins.md), [outputs](logstash-docs-md://lsr/output-plugins.md), [filters](logstash-docs-md://lsr/filter-plugins.md), and [codecs](logstash-docs-md://lsr/codec-plugins.md).

-In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.
+In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances— including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation.

In this section, we’ll cover:

@@ -185,7 +185,7 @@ node.roles: [ data ]

Content data nodes are part of the content tier. Data stored in the content tier is generally a collection of items such as a product catalog or article archive. Unlike time series data, the value of the content remains relatively constant over time, so it doesn’t make sense to move it to a tier with different performance characteristics as it ages. Content data typically has long data retention requirements, and you want to be able to retrieve items quickly regardless of how old they are.

-Content tier nodes are usually optimized for query performance—they prioritize processing power over IO throughput so they can process complex searches and aggregations and return results quickly. While they are also responsible for indexing, content data is generally not ingested at as high a rate as time series data such as logs and metrics. From a resiliency perspective the indices in this tier should be configured to use one or more replicas.
+Content tier nodes are usually optimized for query performance— they prioritize processing power over IO throughput so they can process complex searches and aggregations and return results quickly. While they are also responsible for indexing, content data is generally not ingested at as high a rate as time series data such as logs and metrics. From a resiliency perspective the indices in this tier should be configured to use one or more replicas.

The content tier is required and is often deployed within the same node grouping as the hot tier. System indices and other indices that aren’t part of a data stream are automatically allocated to the content tier.
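
For reference (not part of the diff), a minimal `elasticsearch.yml` sketch of a node serving both the hot and content tiers; combining the roles this way is an illustrative assumption:

```yaml
# elasticsearch.yml — node in both the hot and content tiers
node.roles: [ data_hot, data_content ]
```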

@@ -33,7 +33,7 @@ It is critical that all nodes share the same setup. Otherwise, monitoring data m
::::


-When the exporters route monitoring data into the monitoring cluster, they use `_bulk` indexing for optimal performance. All monitoring data is forwarded in bulk to all enabled exporters on the same node. From there, the exporters serialize the monitoring data and send a bulk request to the monitoring cluster. There is no queuing—in memory or persisted to disk—so any failure during the export results in the loss of that batch of monitoring data. This design limits the impact on {{es}} and the assumption is that the next pass will succeed.
+When the exporters route monitoring data into the monitoring cluster, they use `_bulk` indexing for optimal performance. All monitoring data is forwarded in bulk to all enabled exporters on the same node. From there, the exporters serialize the monitoring data and send a bulk request to the monitoring cluster. There is no queuing— in memory or persisted to disk— so any failure during the export results in the loss of that batch of monitoring data. This design limits the impact on {{es}} and the assumption is that the next pass will succeed.
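
For reference (not part of the diff), a sketch of an `http` exporter in `elasticsearch.yml`; the exporter name and host are placeholders:

```yaml
xpack.monitoring.exporters:
  my_remote:
    type: http
    host: [ "https://monitoring-cluster.example.com:9200" ]
```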

Routing monitoring data involves indexing it into the appropriate monitoring indices. Once the data is indexed, it exists in a monitoring index that, by default, is named with a daily index pattern. For {{es}} monitoring data, this is an index that matches `.monitoring-es-6-*`. From there, the data lives inside the monitoring cluster and must be curated or cleaned up as necessary. If you do not curate the monitoring data, it eventually fills up the nodes and the cluster might fail due to lack of disk space.

deploy-manage/remote-clusters/ec-enable-ccs.md (1 addition & 1 deletion)
@@ -13,7 +13,7 @@ products:

You can configure an {{ech}} deployment to remotely access (or be accessed by) a cluster from:

-* Another {{ech}} deployment of your {{ecloud}} organization, across any region or cloud provider (AWS, GCP, Azure…)
+* Another {{ech}} deployment of your {{ecloud}} organization, across any region or cloud provider (AWS, GCP, Azure… )
* An {{ech}} deployment of another {{ecloud}} organization
* A deployment in an {{ece}} installation
* A deployment in an {{eck}} installation
@@ -197,7 +197,7 @@ PUT _snapshot/my_backup

## Repository validation rules [repository-azure-validation]

-According to the [containers naming guide](https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers—Blobs—and-Metadata), a container name must be a valid DNS name, conforming to the following naming rules:
+According to the [containers naming guide](https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers— Blobs— and-Metadata), a container name must be a valid DNS name, conforming to the following naming rules:

* Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
* Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
@@ -85,7 +85,7 @@ During setup, you can map users according to their properties to {{ece}} roles.

## Step 3: Change the order of provider profiles [ece-provider-order]

-{{ece}} performs authentication checks against the configured providers, in order. When a match is found, the user search stops. The roles specified by that first profile match dictate which permissions the user is granted—regardless of what permissions might be available in another, lower-order profile.
+{{ece}} performs authentication checks against the configured providers, in order. When a match is found, the user search stops. The roles specified by that first profile match dictate which permissions the user is granted— regardless of what permissions might be available in another, lower-order profile.

To change the provider order:

@@ -31,7 +31,7 @@ xpack.security.authc:
```

1. The username/principal of the anonymous user. Defaults to `_es_anonymous_user` if not specified.
-2. The roles to associate with the anonymous user. If no roles are specified, anonymous access is disabled—anonymous requests will be rejected and return an authentication error.
+2. The roles to associate with the anonymous user. If no roles are specified, anonymous access is disabled— anonymous requests will be rejected and return an authentication error.
3. When `true`, a 403 HTTP status code is returned if the anonymous user does not have the permissions needed to perform the requested action and the user will NOT be prompted to provide credentials to access the requested resource. When `false`, a 401 HTTP status code is returned if the anonymous user does not have the necessary permissions and the user is prompted for credentials to access the requested resource. If you are using anonymous access in combination with HTTP, you might need to set `authz_exception` to `false` if your client does not support preemptive basic authentication. Defaults to `true`.
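
For reference (not part of the diff), the collapsed settings block that these callouts annotate likely resembles the following sketch; the username and role names are illustrative:

```yaml
xpack.security.authc:
  anonymous:
    username: anonymous_user  # <1>
    roles: role1, role2       # <2>
    authz_exception: true     # <3>
```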


@@ -25,7 +25,7 @@ By default, a maintenance window affects all rules in all {{kib}} apps within it

Alerts continue to be generated; however, notifications are suppressed as follows:

-* When an alert occurs during a maintenance window, there are no notifications. When the alert recovers, there are no notifications—even if the recovery occurs after the maintenance window ends.
+* When an alert occurs during a maintenance window, there are no notifications. When the alert recovers, there are no notifications— even if the recovery occurs after the maintenance window ends.
* When an alert occurs before a maintenance window and recovers during or after the maintenance window, notifications are sent as usual.

## Configure access to maintenance windows [setup-maintenance-windows]
explore-analyze/alerts-cases/watcher/actions-email.md (1 addition & 1 deletion)
@@ -130,7 +130,7 @@ See [Automating report generation](../../report-and-share/automating-report-gene
$$$email-address$$$

Email Address
-: An email address can contain two possible parts—the address itself and an optional personal name as described in [RFC 822](http://www.ietf.org/rfc/rfc822.txt). The address can be represented either as a string of the form `user@host.domain` or `Personal Name <user@host.domain>`. You can also specify an email address as an object that contains `name` and `address` fields.
+: An email address can contain two possible parts— the address itself and an optional personal name as described in [RFC 822](http://www.ietf.org/rfc/rfc822.txt). The address can be represented either as a string of the form `user@host.domain` or `Personal Name <user@host.domain>`. You can also specify an email address as an object that contains `name` and `address` fields.
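
For reference (not part of the diff), a sketch of the three accepted forms in an email action's `to` field; the names and addresses are placeholders:

```js
"to": [
  "user@example.com",
  "Personal Name <user@example.com>",
  { "name": "Personal Name", "address": "user@example.com" }
]
```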

$$$address-list$$$

explore-analyze/alerts-cases/watcher/input-search.md (1 addition & 1 deletion)
@@ -19,7 +19,7 @@ In the search input’s `request` object, you specify:
* The [search type](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search)
* The search request body

-The search request body supports the full Elasticsearch Query DSL—it’s the same as the body of an Elasticsearch `_search` request.
+The search request body supports the full Elasticsearch Query DSL— it’s the same as the body of an Elasticsearch `_search` request.

For example, the following input retrieves all `event` documents from the `logs` index:
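
The example itself is collapsed in this diff. For reference, a sketch of what such an input could look like; the `match_all` query body is an assumption:

```js
"input" : {
  "search" : {
    "request" : {
      "indices" : [ "logs" ],
      "body" : {
        "query" : { "match_all" : {} }
      }
    }
  }
}
```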

explore-analyze/alerts-cases/watcher/schedule-types.md (2 additions & 2 deletions)
@@ -34,7 +34,7 @@ If you don’t specify the `minute` attribute for an `hourly` schedule, it defau

To configure a once an hour schedule, you specify a single time with the `minute` attribute.

-For example, the following `hourly` schedule triggers at minute 30 every hour-- `12:30`, `13:30`, `14:30`, …:
+For example, the following `hourly` schedule triggers at minute 30 every hour-- `12:30`, `13:30`, `14:30`, … :

```js
{
  "trigger" : {
    "schedule" : {
      "hourly" : { "minute" : 30 }
    }
  }
}
```

@@ -49,7 +49,7 @@ For example, the following `hourly` schedule triggers at minute 30 every hour--

### Configuring a multiple times hourly schedule [_configuring_a_multiple_times_hourly_schedule]

-To configure an `hourly` schedule that triggers at multiple times during the hour, you specify an array of minutes. For example, the following schedule triggers every 15 minutes every hour--`12:00`, `12:15`, `12:30`, `12:45`, `1:00`, `1:15`, …:
+To configure an `hourly` schedule that triggers at multiple times during the hour, you specify an array of minutes. For example, the following schedule triggers every 15 minutes every hour--`12:00`, `12:15`, `12:30`, `12:45`, `1:00`, `1:15`, … :

```js
{
  "trigger" : {
    "schedule" : {
      "hourly" : { "minute" : [ 0, 15, 30, 45 ] }
    }
  }
}
```
@@ -148,7 +148,7 @@ GET .watcher-history*/_search?pretty

## Take action [health-take-action]

-Recording `watch_records` in the watch history is nice, but the real power of {{watcher}} is being able to do something in response to an alert. A watch’s [actions](actions.md) define what to do when the watch condition is true—you can send emails, call third-party webhooks, or write documents to an Elasticsearch index or log when the watch condition is met.
+Recording `watch_records` in the watch history is nice, but the real power of {{watcher}} is being able to do something in response to an alert. A watch’s [actions](actions.md) define what to do when the watch condition is true— you can send emails, call third-party webhooks, or write documents to an Elasticsearch index or log when the watch condition is met.

For example, you could add an action to index the cluster status information when the status is RED.
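
For reference (not part of the diff), a sketch of such an action paired with a condition; the action name, index name, and payload path are illustrative assumptions:

```js
"condition" : {
  "compare" : {
    "ctx.payload.hits.hits.0._source.status" : { "eq" : "red" }
  }
},
"actions" : {
  "index_red_status" : {
    "index" : {
      "index" : "red-status-alerts"
    }
  }
}
```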

@@ -28,7 +28,7 @@ You need to have a compatible visualization on **Dashboard** to create an {{anom
::::

1. Go to **Analytics > Dashboard** from the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md). Select a dashboard with a compatible visualization.
-2. Open the **Options (…) menu** for the panel, then select **More**.
+2. Open the **Options (… ) menu** for the panel, then select **More**.
3. Select **Create {{anomaly-job}}**. The option is only displayed if the visualization can be converted to an {{anomaly-job}} configuration.
4. (Optional) Select the layer from which the {{anomaly-job}} is created.

@@ -26,7 +26,7 @@ Elastic does not endorse, promote or provide support for this application; for n

## Data loading [_data_loading]

-First, you’ll need to choose ODBC as the source to load data from. Once launched, click on the *Get Data* button (under *Home* tab), then on the *More…* button at the bottom of the list:
+First, you’ll need to choose ODBC as the source to load data from. Once launched, click on the *Get Data* button (under *Home* tab), then on the *More… * button at the bottom of the list:

$$$apps_pbi_fromodbc1$$$
![apps pbi fromodbc1](/explore-analyze/images/elasticsearch-reference-apps_pbi_fromodbc1.png "")
@@ -37,14 +37,14 @@ To use the Elasticsearch SQL ODBC Driver to load data into Qlik Sense Desktop pe

2. Name app

-then give it a name,
+then give it a name,

$$$apps_qlik_create$$$
![apps qlik create](/explore-analyze/images/elasticsearch-reference-apps_qlik_create.png "")

3. Open app

-and then open it:
+and then open it:

$$$apps_qlik_open$$$
![apps qlik open](/explore-analyze/images/elasticsearch-reference-apps_qlik_open.png "")
@@ -36,7 +36,7 @@ Move the {{es}} Connector for Tableau to the Tableau Desktop connectors director
* Windows: `C:\Users\[Windows User]\Documents\My Tableau Repository\Connectors`
* Mac: `/Users/[user]/Documents/My Tableau Repository/Connectors`

-Launch Tableau Desktop. In the menu, click **More…** and select **Elasticsearch by Elastic** as the data source.
+Launch Tableau Desktop. In the menu, click **More… ** and select **Elasticsearch by Elastic** as the data source.

$$$apps_tableau_desktop_from_connector$$$
![Select Elasticsearch by Elastic as the data source](/explore-analyze/images/elasticsearch-reference-apps_tableau_desktop_from_connector.png "")
@@ -159,7 +159,7 @@ COALESCE(
2. 2nd expression



**N**th expression

@@ -201,7 +201,7 @@ GREATEST(
2. 2nd expression



**N**th expression

@@ -360,7 +360,7 @@ LEAST(
2. 2nd expression



**N**th expression
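
For reference (not part of the diff), minimal sketches of these variadic functions; the literal arguments are illustrative:

```sql
SELECT COALESCE(null, 'first', 'second') AS "coalesce"; -- returns 'first'
SELECT GREATEST(1, 2, 3) AS "greatest";                 -- returns 3
SELECT LEAST(1, 2, 3) AS "least";                       -- returns 1
```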

@@ -1158,7 +1158,7 @@ DAY_NAME(datetime_exp) <1>

**Output**: string

-**Description**: Extract the day of the week from a date/datetime in text format (`Monday`, `Tuesday`…).
+**Description**: Extract the day of the week from a date/datetime in text format (`Monday`, `Tuesday`… ).

```sql
SELECT DAY_NAME(CAST('2018-02-19T10:23:27Z' AS TIMESTAMP)) AS day;
```

@@ -1326,7 +1326,7 @@ MONTH_NAME(datetime_exp) <1>

**Output**: string

-**Description**: Extract the month from a date/datetime in text format (`January`, `February`…).
+**Description**: Extract the month from a date/datetime in text format (`January`, `February`… ).

```sql
SELECT MONTH_NAME(CAST('2018-02-19T10:23:27Z' AS TIMESTAMP)) AS month;
```
@@ -24,7 +24,7 @@ Take the following example:
```sql
SELECT * FROM table
```

-This query has four tokens: `SELECT`, `*`, `FROM` and `table`. The first three, namely `SELECT`, `*` and `FROM` are *key words* meaning words that have a fixed meaning in SQL. The token `table` is an *identifier* meaning it identifies (by name) an entity inside SQL such as a table (in this case), a column, etc…
+This query has four tokens: `SELECT`, `*`, `FROM` and `table`. The first three, namely `SELECT`, `*` and `FROM` are *key words* meaning words that have a fixed meaning in SQL. The token `table` is an *identifier* meaning it identifies (by name) an entity inside SQL such as a table (in this case), a column, etc…

As one can see, both key words and identifiers have the *same* lexical structure and thus one cannot know whether a token is one or the other without knowing the SQL language; the complete list of key words is available in the [reserved appendix](sql-syntax-reserved.md). Do note that key words are case-insensitive meaning the previous example can be written as:

@@ -132,7 +132,7 @@ A few characters that are not alphanumeric have a dedicated meaning different fr
| --- | --- |
| `*` | The asterisk (or wildcard) is used in some contexts to denote all fields for a table. Can be also used as an argument to some aggregate functions. |
| `,` | Commas are used to enumerate the elements of a list. |
-| `.` | Used in numeric constants or to separate identifiers qualifiers (catalog, table, column names, etc…). |
+| `.` | Used in numeric constants or to separate identifiers qualifiers (catalog, table, column names, etc… ). |
| `()` | Parentheses are used for specific SQL commands, function declarations or to enforce precedence. |


@@ -151,7 +151,7 @@ Supported Windows Installer command line arguments can be viewed using:

```
msiexec.exe /help
```

-or by consulting the [Windows Installer SDK Command-Line Options](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx).
+or by consulting the [Windows Installer SDK Command-Line Options](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx).

### Command line options [odbc-msi-command-line-options]
