Commit aa1a2e0 (1 parent: e3c829d)

Replace irregular spaces (#1434)

This replaces irregular whitespaces, as detected by https://github.com/elastic/docs-builder/pull/1262/files

67 files changed: +111 additions, -111 deletions

cloud-account/multifactor-authentication.md

Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ To enable multifactor authentication using an authenticator app, you must verify

  61  You can remove a multifactor authentication method after it’s added by clicking **Remove**.
- 63  Before you remove an authentication method, you must set up an alternate method. If you can’t use any of your configured authentication methods — for example, if your device is lost or stolen — then [contact support](../troubleshoot/index.md).
+ 63  Before you remove an authentication method, you must set up an alternate method. If you can’t use any of your configured authentication methods — for example, if your device is lost or stolen — then [contact support](../troubleshoot/index.md).
  66  ## Frequently asked questions [ec-account-security-mfa-faq]

deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md

Lines changed: 5 additions & 5 deletions

@@ -39,13 +39,13 @@ Data Transfer accounts for the volume of data (payload) going into, out of, and

  40  We meter and bill data transfer using three dimensions:
- 42  1. Data in (free)
+ 42  1. Data in (free)
  43      : *Data in* accounts for all of the traffic going into the deployment. It includes index requests with data payload, as well as queries sent to the deployment (although the byte size of the latter is typically much smaller).
- 45  2. Data out
+ 45  2. Data out
  46      : *Data out* accounts for all of the traffic coming out of the deployment. This includes search results, as well as monitoring data sent from the deployment. The same rate applies regardless of the destination of the data, whether to the internet, to another region, or to a cloud provider account in the same region. Data coming out of the deployment through AWS PrivateLink, GCP Private Service Connect, or Azure Private Link, is also considered *Data out*.
- 48  3. Data inter-node
+ 48  3. Data inter-node
  49      : *Data inter-node* accounts for all of the traffic sent between the components of the deployment. This includes the data sync between nodes of a cluster which is managed automatically by {{es}} cluster sharding. It also includes data related to search queries executed across multiple nodes of a cluster. Note that single-node {{es}} clusters typically have lower charges, but may still incur inter-node charges accounting for data exchanged with {{kib}} nodes or other nodes, such as machine learning or APM.
  51  We provide a free allowance of 100GB per month, which includes the sum of *data out* and *data inter-node*, across all deployments in the account. Once this threshold is passed, a charge is applied for any data transfer used in excess of the 100GB monthly free allowance.
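The free-allowance rule described in that paragraph can be sketched as a small calculation. This is an illustrative sketch only; the function and names are hypothetical, not an Elastic billing API:

```python
# Sketch of the monthly data transfer allowance rule described above.
# Only *data out* and *data inter-node* count toward the 100 GB monthly
# free allowance; *data in* is always free.

FREE_ALLOWANCE_GB = 100

def billable_transfer_gb(data_out_gb: float, inter_node_gb: float) -> float:
    """GB of data transfer billed for one month, after the free allowance."""
    counted = data_out_gb + inter_node_gb  # data in never counts
    return max(0.0, counted - FREE_ALLOWANCE_GB)

print(billable_transfer_gb(40, 80))  # 120 GB counted -> 20.0 GB billable
print(billable_transfer_gb(30, 30))  # under the allowance -> 0.0
```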
@@ -71,15 +71,15 @@ Storage costs are tied to the cost of storing the backup snapshots in the underl

  72  As is common with Cloud providers, we meter and bill snapshot storage using two dimensions:
- 74  1. Storage size (GB/month)
+ 74  1. Storage size (GB/month)
  75      : This is calculated by metering the storage space (GBs) occupied by all snapshots of all deployments tied to an account. The same unit price applies to all regions. To calculate the due charges, we meter the amount of storage on an hourly basis and produce an average size (in GB) for a given month. The average amount is then used to bill the account for the GB/month used within a billing cycle (a calendar month).
  77  For example, if the storage used in April 2019 was 100GB for 10 days, and then 130GB for the remaining 20 days of the month, the average storage would be 120 GB/month, calculated as (100*10 + 130*20)/30.
  79  We provide a free allowance of 100 GB/month to all accounts across all the account deployments. Any metered storage usage below that amount will not be billed. Whenever the 100 GB/month threshold is crossed, we bill for the storage used in excess of the 100GB/month free allowance.
- 82  2. Storage API requests (1K Requests/month)
+ 82  2. Storage API requests (1K Requests/month)
  83      : These costs are calculated by counting the total number of calls to backup or restore snapshots made by all deployments associated with an account. Unlike storage size, this dimension is cumulative, summed up across the billing cycle, and is billed at a price of 1,000 requests.
  85  We provide a free allowance of 100,000 API requests to all accounts each month across all the account deployments. Once this threshold is passed, we bill only for the use of API requests in excess of the free allowance.
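The April 2019 worked example in this file is a time-weighted average; a quick sketch of the arithmetic:

```python
# Time-weighted average snapshot storage for the worked example above:
# 100 GB for 10 days, then 130 GB for the remaining 20 days of a 30-day
# month. (Real metering is hourly; daily values suffice for the example.)

usage = [(100, 10), (130, 20)]  # (size_gb, days at that size)

total_days = sum(days for _, days in usage)
avg_gb_month = sum(gb * days for gb, days in usage) / total_days
print(avg_gb_month)  # (100*10 + 130*20)/30 = 120.0
```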

deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md

Lines changed: 1 addition & 1 deletion

@@ -141,7 +141,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec

- 144  6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exist — use the one in the [engine] section.
+ 144  6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exist — use the one in the [engine] section.
  146  Example:
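Step 6 in that file might look as follows in `/usr/share/containers/containers.conf`. This is a sketch assuming podman's TOML `env` list under `[engine]`; the proxy URL is a placeholder for your environment:

```toml
[engine]
# Hypothetical proxy endpoint; replace with your infrastructure's proxy.
env = [
  "HTTP_PROXY=http://proxy.example.com:3128",
  "HTTPS_PROXY=http://proxy.example.com:3128",
]
```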

deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md

Lines changed: 1 addition & 1 deletion

@@ -171,7 +171,7 @@ Using Docker or Podman as container runtime is a configuration local to the host

- 174  6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exist — use the one in the [engine] section.
+ 174  6. If podman requires a proxy in your infrastructure setup, modify the `/usr/share/containers/containers.conf` file and add the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the [engine] section. Note that multiple env variables in that configuration file exist — use the one in the [engine] section.
  176  Example:

deploy-manage/deploy/cloud-on-k8s/configure-eck.md

Lines changed: 1 addition & 1 deletion

@@ -93,7 +93,7 @@ data:

- 96  Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container — which will trigger an automatic restart of the operator pod by the StatefulSet controller.
+ 96  Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container — which will trigger an automatic restart of the operator pod by the StatefulSet controller.
  98  ## Configure ECK under Operator Lifecycle Manager [k8s-operator-config-olm]

deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md

Lines changed: 2 additions & 2 deletions

@@ -10,7 +10,7 @@ products:

  12  # Configure the validating webhook [k8s-webhook]
- 13  ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{eck_resources_list}}) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting.
+ 13  ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{eck_resources_list}}) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting.
  15  Validating webhooks are defined using a `ValidatingWebhookConfiguration` object that defines the following:

@@ -30,7 +30,7 @@ Validating webhooks are defined using a `ValidatingWebhookConfiguration` object

  31  When using the default `operator.yaml` manifest, ECK is installed with a `ValidatingWebhookConfiguration` configured as follows:
  32  * Validate all known Elastic custom resources ({{eck_resources_list}}) on create and update.
- 33  * The operator itself is the webhook server — which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace.
+ 33  * The operator itself is the webhook server — which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace.
  34  * The operator generates a certificate for the webhook and stores it in a secret named `elastic-webhook-server-cert` in the `elastic-system` namespace. This certificate is automatically rotated by the operator when it is due to expire.

deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md

Lines changed: 1 addition & 1 deletion

@@ -280,7 +280,7 @@ For the container name, use the name of the Beat in lower case. For example `fil

  282  ## Default behavior [k8s-default-behavior]
- 283  If `resources` is not defined in the specification of an object, then the operator applies a default memory limit to ensure that Pods have enough resources to start correctly. This memory limit will also be applied to any user-defined init containers that do not have explicit resource requirements set. As the operator cannot make assumptions about the available CPU resources in the cluster, no CPU limits will be set — resulting in the Pods having the "Burstable" QoS class. Check if this is acceptable for your use case and follow the instructions in [Set compute resources](#k8s-compute-resources) to configure appropriate limits.
+ 283  If `resources` is not defined in the specification of an object, then the operator applies a default memory limit to ensure that Pods have enough resources to start correctly. This memory limit will also be applied to any user-defined init containers that do not have explicit resource requirements set. As the operator cannot make assumptions about the available CPU resources in the cluster, no CPU limits will be set — resulting in the Pods having the "Burstable" QoS class. Check if this is acceptable for your use case and follow the instructions in [Set compute resources](#k8s-compute-resources) to configure appropriate limits.
  285  | Type | Requests | Limits |
  286  | --- | --- | --- |

deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md

Lines changed: 1 addition & 1 deletion

@@ -100,7 +100,7 @@ ECK orchestrates NodeSet changes with no downtime and makes sure that:

  101  Behind the scenes, ECK translates each NodeSet specified in the {{es}} resource into a [StatefulSet in Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). The NodeSet specification is based on the StatefulSet specification:
- 103  * `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an {{es}} node.
+ 103  * `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an {{es}} node.
  104  * `podTemplate` can be used to [customize some aspects of the {{es}} Pods](customize-pods.md) created by the underlying StatefulSet.
  105  * The StatefulSet name is derived from the {{es}} resource name and the NodeSet name. Each Pod in the StatefulSet gets a name generated by suffixing the pod ordinal to the StatefulSet name. {{es}} nodes have the same name as the Pod they are running on.

deploy-manage/deploy/elastic-cloud/find-cloud-id.md

Lines changed: 1 addition & 1 deletion

@@ -30,7 +30,7 @@ You include your Cloud ID along with your {{ecloud}} user credentials (defined i

  30  Not sure why you need Beats or Logstash? Here’s what they do:
  32  * [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to {{es}}. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted {{es}} cluster on {{ecloud}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in {{es}}.
- 33  * [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion.
+ 33  * [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion.
  36  ## Before you begin [ec_before_you_begin_3]

deploy-manage/deploy/self-managed/install-elasticsearch-from-archive-on-linux-macos.md

Lines changed: 1 addition & 1 deletion

@@ -178,7 +178,7 @@ Because the initial node in the cluster is bootstrapped as a single-node cluster

  179  ## Directory layout of archives [targz-layout]
- 181  The archive distributions are entirely self-contained. All files and directories are, by default, contained within `$ES_HOME` — the directory created when unpacking the archive.
+ 181  The archive distributions are entirely self-contained. All files and directories are, by default, contained within `$ES_HOME` — the directory created when unpacking the archive.
  183  This is convenient because you don’t have to create any directories to start using {{es}}, and uninstalling {{es}} is as easy as removing the `$ES_HOME` directory. However, you should change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.
