2 changes: 1 addition & 1 deletion cloud-account/multifactor-authentication.md
@@ -60,7 +60,7 @@ To enable multifactor authentication using an authenticator app, you must verify

You can remove a multifactor authentication method after it’s added by clicking **Remove**.

Before you remove an authentication method, you must set up an alternate method. If you can’t use any of your configured authentication methods — for example, if your device is lost or stolen — then [contact support](../troubleshoot/index.md).
Before you remove an authentication method, you must set up an alternate method. If you can’t use any of your configured authentication methods, for example if your device is lost or stolen, then [contact support](../troubleshoot/index.md).


## Frequently asked questions [ec-account-security-mfa-faq]
@@ -39,13 +39,13 @@ Data Transfer accounts for the volume of data (payload) going into, out of, and

We meter and bill data transfer using three dimensions:

1. Data in (free)
1. Data in (free)
: *Data in* accounts for all of the traffic going into the deployment. It includes index requests with data payload, as well as queries sent to the deployment (although the byte size of the latter is typically much smaller).

2. Data out
2. Data out
: *Data out* accounts for all of the traffic coming out of the deployment. This includes search results, as well as monitoring data sent from the deployment. The same rate applies regardless of the destination of the data, whether to the internet, to another region, or to a cloud provider account in the same region. Data coming out of the deployment through AWS PrivateLink, GCP Private Service Connect, or Azure Private Link, is also considered *Data out*.

3. Data inter-node
3. Data inter-node
: *Data inter-node* accounts for all of the traffic sent between the components of the deployment. This includes the data sync between nodes of a cluster which is managed automatically by {{es}} cluster sharding. It also includes data related to search queries executed across multiple nodes of a cluster. Note that single-node {{es}} clusters typically have lower charges, but may still incur inter-node charges accounting for data exchanged with {{kib}} nodes or other nodes, such as machine learning or APM.

We provide a free allowance of 100GB per month, which includes the sum of *data out* and *data inter-node*, across all deployments in the account. Once this threshold is passed, a charge is applied for any data transfer used in excess of the 100GB monthly free allowance.
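
To make the allowance arithmetic concrete, here is a minimal sketch in Python (the GB figures are hypothetical; only the shared 100GB allowance and the free treatment of *data in* come from the description above):

```python
# Hypothetical monthly usage; only the allowance rules come from the text above.
FREE_ALLOWANCE_GB = 100        # shared monthly allowance for data out + data inter-node

data_in_gb = 250               # data in is free and never counts toward the allowance
data_out_gb = 80
data_inter_node_gb = 45

metered_gb = data_out_gb + data_inter_node_gb          # 125 GB metered this month
billable_gb = max(0, metered_gb - FREE_ALLOWANCE_GB)   # 25 GB billed at the applicable rate
print(billable_gb)
```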
@@ -71,15 +71,15 @@ Storage costs are tied to the cost of storing the backup snapshots in the underl

As is common with Cloud providers, we meter and bill snapshot storage using two dimensions:

1. Storage size (GB/month)
1. Storage size (GB/month)
: This is calculated by metering the storage space (GBs) occupied by all snapshots of all deployments tied to an account. The same unit price applies to all regions. To calculate the due charges, we meter the amount of storage on an hourly basis and produce an average size (in GB) for a given month. The average amount is then used to bill the account for the GB/month used within a billing cycle (a calendar month).

For example, if the storage used in April 2019 was 100GB for 10 days, and then 130GB for the remaining 20 days of the month, the average storage would be 120 GB/month, calculated as (100*10 + 130*20)/30.

We provide a free allowance of 100 GB/month to all accounts across all the account deployments. Any metered storage usage below that amount will not be billed. Whenever the 100 GB/month threshold is crossed, we bill for the storage used in excess of the 100GB/month free allowance.


2. Storage API requests (1K Requests/month)
2. Storage API requests (1K Requests/month)
: These costs are calculated by counting the total number of calls to backup or restore snapshots made by all deployments associated with an account. Unlike storage size, this dimension is cumulative, summed up across the billing cycle, and is billed per 1,000 requests.

We provide a free allowance of 100,000 API requests to all accounts each month across all the account deployments. Once this threshold is passed, we bill only for the use of API requests in excess of the free allowance.
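
As a rough sketch of both snapshot dimensions, using the April storage example above together with a hypothetical request count:

```python
# Storage size (GB/month): hourly metering averaged over the month, as in the example above.
avg_storage_gb = (100 * 10 + 130 * 20) / 30           # = 120 GB/month
billable_storage_gb = max(0, avg_storage_gb - 100)    # 100 GB/month free allowance -> 20 GB billable

# Storage API requests: cumulative over the billing cycle, billed per 1,000 requests.
api_requests = 130_000                                # hypothetical monthly total
billable_requests = max(0, api_requests - 100_000)    # 100,000 free requests -> 30,000 billable
billable_request_units = billable_requests / 1_000    # -> 30 units of 1K requests
print(billable_storage_gb, billable_request_units)
```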
@@ -141,7 +141,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec
[...]
```

6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exists — use the one in the [engine] section.
6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables exist in that configuration file; use the one in the [engine] section.

Example:
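
A minimal sketch of such an `[engine]` entry, assuming a hypothetical proxy reachable at `http://proxy.example.com:3128`:

```toml
# /usr/share/containers/containers.conf: set the proxy variables only in the [engine] section.
# The proxy endpoint below is a placeholder for your own infrastructure.
[engine]
env = ["HTTP_PROXY=http://proxy.example.com:3128", "HTTPS_PROXY=http://proxy.example.com:3128"]
```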

@@ -171,7 +171,7 @@ Using Docker or Podman as container runtime is a configuration local to the host
[...]
```

6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exists — use the one in the [engine] section.
6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables exist in that configuration file; use the one in the [engine] section.

Example:

2 changes: 1 addition & 1 deletion deploy-manage/deploy/cloud-on-k8s/configure-eck.md
@@ -93,7 +93,7 @@ data:
ubi-only: false
```

Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container — which will trigger an automatic restart of the operator pod by the StatefulSet controller.
Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container, which will trigger an automatic restart of the operator pod by the StatefulSet controller.
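
For example, one way to do this is to edit the StatefulSet directly (a sketch assuming the default `elastic-system` namespace; the `manager` argument and the flag shown are illustrative, mirroring the configuration file keys above):

```sh
# Assumes ECK is installed in the elastic-system namespace; adjust if yours differs.
kubectl edit statefulset elastic-operator -n elastic-system

# Then add the desired flag under spec.template.spec.containers[0].args, for example:
#   args:
#   - manager
#   - --ubi-only
```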

## Configure ECK under Operator Lifecycle Manager [k8s-operator-config-olm]

@@ -10,7 +10,7 @@ products:

# Configure the validating webhook [k8s-webhook]

ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{eck_resources_list}}) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting.
ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{eck_resources_list}}) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration, which can help you catch errors early and save time that would otherwise be spent on troubleshooting.

Validating webhooks are defined using a `ValidatingWebhookConfiguration` object that defines the following:

@@ -30,7 +30,7 @@
When using the default `operator.yaml` manifest, ECK is installed with a `ValidatingWebhookConfiguration` configured as follows:

* Validate all known Elastic custom resources ({{eck_resources_list}}) on create and update.
* The operator itself is the webhook server — which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace.
* The operator itself is the webhook server, which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace.
* The operator generates a certificate for the webhook and stores it in a secret named `elastic-webhook-server-cert` in the `elastic-system` namespace. This certificate is automatically rotated by the operator when it is due to expire.
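
For reference, a trimmed, hypothetical sketch of what such a configuration could look like for the {{es}} resource alone (the service name and namespace come from the defaults above; the webhook name, path, and policy values are illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: elastic-webhook.k8s.elastic.co              # illustrative name
webhooks:
  - name: elastic-es-validation-v1.k8s.elastic.co   # illustrative name
    clientConfig:
      service:
        name: elastic-webhook-server                # from the defaults above
        namespace: elastic-system                   # from the defaults above
        path: /validate-elasticsearch-k8s-elastic-co-v1-elasticsearch   # illustrative path
      # caBundle is managed by the operator using the certificate stored in the
      # elastic-webhook-server-cert secret mentioned above
    rules:
      - apiGroups: ["elasticsearch.k8s.elastic.co"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["elasticsearches"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                           # illustrative; Fail rejects requests when the webhook is unavailable
```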


@@ -280,7 +280,7 @@ For the container name, use the name of the Beat in lower case. For example `fil

## Default behavior [k8s-default-behavior]

If `resources` is not defined in the specification of an object, then the operator applies a default memory limit to ensure that Pods have enough resources to start correctly. This memory limit will also be applied to any user-defined init containers that do not have explict resource requirements set. As the operator cannot make assumptions about the available CPU resources in the cluster, no CPU limits will be set — resulting in the Pods having the "Burstable" QoS class. Check if this is acceptable for your use case and follow the instructions in [Set compute resources](#k8s-compute-resources) to configure appropriate limits.
If `resources` is not defined in the specification of an object, then the operator applies a default memory limit to ensure that Pods have enough resources to start correctly. This memory limit will also be applied to any user-defined init containers that do not have explicit resource requirements set. As the operator cannot make assumptions about the available CPU resources in the cluster, no CPU limits will be set, resulting in the Pods having the "Burstable" QoS class. Check if this is acceptable for your use case and follow the instructions in [Set compute resources](#k8s-compute-resources) to configure appropriate limits.

| Type | Requests | Limits |
| --- | --- | --- |
2 changes: 1 addition & 1 deletion deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md
@@ -100,7 +100,7 @@ ECK orchestrates NodeSet changes with no downtime and makes sure that:

Behind the scenes, ECK translates each NodeSet specified in the {{es}} resource into a [StatefulSet in Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). The NodeSet specification is based on the StatefulSet specification:

* `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an {{es}} node.
* `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod, which corresponds to an {{es}} node.
* `podTemplate` can be used to [customize some aspects of the {{es}} Pods](customize-pods.md) created by the underlying StatefulSet.
* The StatefulSet name is derived from the {{es}} resource name and the NodeSet name. Each Pod in the StatefulSet gets a name generated by suffixing the pod ordinal to the StatefulSet name. {{es}} nodes have the same name as the Pod they are running on.
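
To illustrate the mapping, here is a minimal, hypothetical {{es}} resource (names and version are placeholders):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart          # placeholder cluster name
spec:
  version: 8.17.0           # placeholder version
  nodeSets:
    - name: default
      count: 3              # becomes a StatefulSet with 3 replicas, that is 3 Elasticsearch node Pods
      podTemplate:          # optional Pod-level customizations
        spec:
          containers:
            - name: elasticsearch
              resources:
                limits:
                  memory: 4Gi
```

Under the naming scheme described above, the resulting StatefulSet would typically be named from the cluster and NodeSet names (for example `quickstart-es-default`), with each Pod suffixed by its ordinal (`quickstart-es-default-0`, and so on).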

2 changes: 1 addition & 1 deletion deploy-manage/deploy/elastic-cloud/find-cloud-id.md
@@ -30,7 +30,7 @@ You include your Cloud ID along with your {{ecloud}} user credentials (defined i
Not sure why you need Beats or Logstash? Here’s what they do:

* [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to {{es}}. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted {{es}} cluster on {{ecloud}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in {{es}}.
* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion.
* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources (logs, metrics, web applications, data stores, and various AWS services), all in continuous, streaming fashion.


## Before you begin [ec_before_you_begin_3]
@@ -178,7 +178,7 @@ Because the initial node in the cluster is bootstrapped as a single-node cluster

## Directory layout of archives [targz-layout]

The archive distributions are entirely self-contained. All files and directories are, by default, contained within `$ES_HOME` — the directory created when unpacking the archive.
The archive distributions are entirely self-contained. All files and directories are, by default, contained within `$ES_HOME`, the directory created when unpacking the archive.

This is convenient because you don’t have to create any directories to start using {{es}}, and uninstalling {{es}} is as easy as removing the `$ES_HOME` directory. However, you should change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.
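
One way to relocate those directories, sketched with illustrative paths (`ES_PATH_CONF` selects the config directory, and `path.data`/`path.logs` in `elasticsearch.yml` select the data and logs directories):

```sh
# Illustrative paths; adjust to your environment.
export ES_PATH_CONF=/etc/elasticsearch        # config directory outside $ES_HOME

# In $ES_PATH_CONF/elasticsearch.yml, point data and logs outside $ES_HOME as well:
#   path.data: /var/lib/elasticsearch
#   path.logs: /var/log/elasticsearch

./bin/elasticsearch
```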

@@ -246,7 +246,7 @@ Because the initial node in the cluster is bootstrapped as a single-node cluster

## Directory layout of `.zip` archive [windows-layout]

The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `%ES_HOME%` — the directory created when unpacking the archive.
The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `%ES_HOME%`, the directory created when unpacking the archive.

This is very convenient because you don’t have to create any directories to start using {{es}}, and uninstalling {{es}} is as easy as removing the `%ES_HOME%` directory. However, it is advisable to change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.

@@ -103,7 +103,7 @@ By default, {{kib}} runs in the foreground, prints its logs to the standard outp

## Directory layout of `.tar.gz` archives [targz-layout]

The `.tar.gz` packages are entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive.
The `.tar.gz` packages are entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME`, the directory created when unpacking the archive.

This is very convenient because you don’t have to create any directories to start using {{kib}}, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on.

@@ -60,7 +60,7 @@ By default, {{kib}} runs in the foreground, prints its logs to `STDOUT`, and can

## Directory layout of `.zip` archive [windows-layout]

The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive.
The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME`, the directory created when unpacking the archive.

This is very convenient because you don’t have to create any directories to start using {{kib}}, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on.

@@ -89,7 +89,7 @@ The following is a list of the roles that a node can perform in a cluster. A nod

:name: coordinating-node

Requests like search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases which are coordinated by the node which receives the client request — the *coordinating node*.
Requests like search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases which are coordinated by the node which receives the client request, the *coordinating node*.

In the *scatter* phase, the coordinating node forwards the request to the data nodes which hold the data. Each data node executes the request locally and returns its results to the coordinating node. In the *gather* phase, the coordinating node reduces each data node’s results into a single global result set.

@@ -266,7 +266,7 @@ If you take away the ability to be able to handle master duties, to hold data, a
Coordinating only nodes can benefit large clusters by offloading the coordinating node role from data and master-eligible nodes. They join the cluster and receive the full [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state), like every other node, and they use the cluster state to route requests directly to the appropriate place(s).

::::{warning}
Adding too many coordinating only nodes to a cluster can increase the burden on the entire cluster because the elected master node must await acknowledgement of cluster state updates from every node! The benefit of coordinating only nodes should not be overstated — data nodes can happily serve the same purpose.
Adding too many coordinating only nodes to a cluster can increase the burden on the entire cluster because the elected master node must await acknowledgement of cluster state updates from every node! The benefit of coordinating only nodes should not be overstated; data nodes can happily serve the same purpose.
::::
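
For example, a coordinating only node can be created by declaring an empty roles list in `elasticsearch.yml`:

```yaml
# elasticsearch.yml: an empty roles list makes this node coordinating-only
node.roles: [ ]
```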


@@ -41,7 +41,7 @@ These indexing stages (coordinating, primary, and replica) are sequential. To en

### Failure handling [_failure_handling]

Many things can go wrong during indexing — disks can get corrupted, nodes can be disconnected from each other, or some configuration mistake could cause an operation to fail on a replica despite it being successful on the primary. These are infrequent but the primary has to respond to them.
Many things can go wrong during indexing: disks can get corrupted, nodes can be disconnected from each other, or some configuration mistake could cause an operation to fail on a replica despite it being successful on the primary. These are infrequent, but the primary has to respond to them.

In the case that the primary itself fails, the node hosting the primary will send a message to the master about it. The indexing operation will wait (up to 1 minute, by [default](elasticsearch://reference/elasticsearch/index-settings/index-modules.md)) for the master to promote one of the replicas to be a new primary. The operation will then be forwarded to the new primary for processing. Note that the master also monitors the health of the nodes and may decide to proactively demote a primary. This typically happens when the node holding the primary is isolated from the cluster by a networking issue. See [here](#demoted-primary) for more details.
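
As an illustration, and assuming the wait described above is governed by the index API's `timeout` request parameter (which defaults to one minute), the wait can be lengthened for an individual request:

```console
PUT my-index-000001/_doc/1?timeout=5m
{
  "message": "example document"
}
```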
