
Commit 29b0788 ("more")
1 parent 264f5e9 commit 29b0788


6 files changed: +77 −63 lines changed


docs/reference/ccr/index.asciidoc

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 [role="xpack"]
 [[xpack-ccr]]
 == {ccr-cap}
+
 With {ccr}, you can replicate indices across clusters to:

 * Continue handling search requests in the event of a datacenter outage

docs/reference/data-store-architecture.asciidoc

Lines changed: 3 additions & 2 deletions
@@ -8,8 +8,9 @@ from any node.

 The topics in this section provides information about the architecture of {es} and how it stores and retrieves data:

-<<nodes-shards,Nodes and shards>>: Learn about the basic building blocks of an {es} cluster, including nodes, shards, primaries, and replicas.
-<<docs-replication,Reading and writing documents>>: Learn how {es} replicates read and write operations across shards and shard copies.
+* <<nodes-shards,Nodes and shards>>: Learn about the basic building blocks of an {es} cluster, including nodes, shards, primaries, and replicas.
+* <<docs-replication,Reading and writing documents>>: Learn how {es} replicates read and write operations across shards and shard copies.
+* <<shard-allocation-relocation-recovery,Shard allocation, relocation, and recovery>>: Learn how {es} allocates and balances shards across nodes.
 --

 include::nodes-shards.asciidoc[]
Lines changed: 20 additions & 0 deletions

@@ -0,0 +1,20 @@
+Your data is important to you. Keeping it safe and available is important to Elastic. Sometimes your cluster may experience hardware failure or a power loss. To help you plan for this, {es} offers a number of features to achieve high availability despite failures. Depending on your deployment type, you might need to provision servers in different zones or configure external repositories to meet your organization's availability needs.
+
+* *<<high-availability-cluster-design,Design for resilience>>*
++
+Distributed systems like Elasticsearch are designed to keep working even if some of their components have failed. An Elasticsearch cluster can continue operating normally if some of its nodes are unavailable or disconnected, as long as there are enough well-connected nodes to take over the unavailable node's responsibilities.
++
+If you're designing a smaller cluster, you might focus on making your cluster resilient to single-node failures. Designers of larger clusters must also consider cases where multiple nodes fail at the same time.
+// need to improve connections to ECE, EC hosted, ECK pod/zone docs in the child topics
+* *<<xpack-ccr,Cross-cluster replication>>*
++
+To effectively distribute read and write operations across nodes, the nodes in a cluster need good, reliable connections to each other. To provide better connections, you typically co-locate the nodes in the same data center or nearby data centers.
++
+Co-locating nodes in a single location exposes you to the risk of a single outage taking your entire cluster offline. To maintain high availability, you can prepare a second cluster that can take over in case of disaster by implementing {ccr} (CCR).
++
+CCR provides a way to automatically synchronize indices from a leader cluster to a follower cluster. This cluster could be in a different data center or even on a different continent from the leader cluster. If the primary cluster fails, the secondary cluster can take over.
++
+TIP: You can also use CCR to create secondary clusters to serve read requests in geo-proximity to your users.
+* *<<snapshot-restore,Snapshots>>*
++
+Take snapshots of your cluster that can be restored in case of failure.
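The CCR flow described in the overview above can be sketched with a single follow request. This is a minimal sketch, assuming a remote cluster alias `leader` has already been configured, and using hypothetical index names `leader-index` and `follower-index`:

[source,console]
----
PUT /follower-index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster": "leader",
  "leader_index": "leader-index"
}
----

The follower index then continuously pulls changes from the leader; in a disaster you would pause replication, promote the follower, and redirect client traffic to it.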

docs/reference/high-availability.asciidoc

Lines changed: 1 addition & 20 deletions
@@ -3,26 +3,7 @@

 [partintro]
 --
-Your data is important to you. Keeping it safe and available is important
-to {es}. Sometimes your cluster may experience hardware failure or a power
-loss. To help you plan for this, {es} offers a number of features
-to achieve high availability despite failures.
-
-* With proper planning, a cluster can be
-<<high-availability-cluster-design,designed for resilience>> to many of the
-things that commonly go wrong, from the loss of a single node or network
-connection right up to a zone-wide outage such as power loss.
-
-* You can use <<xpack-ccr,{ccr}>> to replicate data to a remote _follower_
-cluster which may be in a different data centre or even on a different
-continent from the leader cluster. The follower cluster acts as a hot
-standby, ready for you to fail over in the event of a disaster so severe that
-the leader cluster fails. The follower cluster can also act as a geo-replica
-to serve searches from nearby clients.
-
-* The last line of defence against data loss is to take
-<<snapshots-take-snapshot,regular snapshots>> of your cluster so that you can
-restore a completely fresh copy of it elsewhere if needed.
+include::{es-ref-dir}/high-availability-overview.asciidoc[]
 --

 include::high-availability/cluster-design.asciidoc[]
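The snapshot workflow referenced above (register a repository, then take regular snapshots) can be sketched in two requests. This is a minimal sketch, assuming a shared-filesystem repository whose location is listed in the cluster's `path.repo` setting; the repository and snapshot names are hypothetical:

[source,console]
----
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
----

Restoring elsewhere is the mirror operation: register the same repository on the target cluster and call the restore API against the stored snapshot.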

docs/reference/production.asciidoc

Lines changed: 51 additions & 40 deletions
@@ -53,14 +53,26 @@ Refer to the documentation for each deployment method for detailed information a
 | ??
 | ??

-| <<elasticsearch-deployment-options,Manual on-premise>>
+| *<<elasticsearch-deployment-options,Manual on-premise>>*
 | Self-hosted
 | ??
 | ??
 |===

+
+[discrete]
+== Cluster or deployment design
+
+{es} is built to be always available and to scale with your needs. It does this using a distributed architecture. By distributing your cluster, you can keep Elastic online and responsive to requests.
+
+[discrete]
+=== Where to start
+
+Many {es} options come with different performance considerations and trade-offs. The best way to determine the
+optimal configuration for your use case is through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[testing with your own data and queries]. When you understand the shape and size of your data, as well as your use case, you can make informed decisions about how to configure your cluster.
+
 [discrete]
-== Your data retention strategy
+=== Your data retention strategy

 include::{es-ref-dir}/lifecycle-options.asciidoc[]

@@ -69,63 +81,62 @@ You should determine how long you need to retain your data and how you will mana
 something about when to use which one?

 [discrete]
-== Cluster or deployment design
+=== Nodes and shards

-Many teams rely on {es} to run their key services. To keep these services running, you can design your {es} deployment
-to keep {es} available, even in case of large-scale outages. To keep it running fast, you also can design your
-deployment to be responsive to production workloads.
+When you move to production, you need to introduce multiple nodes and shards to your cluster. Nodes and shards are what make Elasticsearch distributed and scalable.

-{es} is built to be always available and to scale with your needs. It does this using a distributed architecture.
-By distributing your cluster, you can keep Elastic online and responsive to requests.
+The number of these nodes and shards depends on your data, your use case, and your budget. See <<how-to,Optimizations>> for more information.

-Nodes and shards design
-Size your shards
-Tuning
-Reference architectures
+The way that you manage your nodes and shards depends on your deployment method:

-{es} offers many options that allow you to configure your cluster to meet your organization’s goals, requirements,
-and restrictions. You can review the following guides to learn how to tune your cluster to meet your needs:
+* If you're using a *manual on-premise deployment*, then you need to size and manage your nodes and shards manually.

-* <<high-availability-cluster-design,Designing for resilience>>
-* <<tune-for-indexing-speed,Tune for indexing speed>>
-* <<tune-for-search-speed,Tune for search speed>>
-* <<tune-for-disk-usage,Tune for disk usage>>
-* <<use-elasticsearch-for-time-series-data,Tune for time series data>>
+* If you're using *Elastic Cloud Hosted* or *Elastic Cloud Enterprise*, then you can choose from different deployment types to apply sensible defaults for your use case, or set the size of your data on a per-zone, per-tier basis. These products can also autoscale resources in response to workload changes.
+** *Elastic Cloud Hosted resources*:
+*** {cloud}/ec-create-deployment.html[Create a hosted deployment]
+*** {cloud}/ec-autoscaling.html[Deployment autoscaling]
+** *Elastic Cloud Enterprise resources*:
+*** {ece-ref}/ece-stack-getting-started.html[Working with deployments]
+*** {ece-ref}/ece-autoscaling.html[Deployment autoscaling]

-Many {es} options come with different performance considerations and trade-offs. The best way to determine the
-optimal configuration for your use case is through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[testing with your own data and queries].
+* If you're using *Elastic Cloud on Kubernetes*, then you can define {eck-ref}/k8s-autoscaling.html[autoscaling policies] and use the {eck-ref}/k8s-stateless-autoscaling.html[Kubernetes horizontal pod autoscaler] to scale different elements in your cluster based on your workload.
+
+Learn more about <<nodes-shards,nodes and shards>>.

 [discrete]
-== Security
+=== High availability and disaster recovery
+
+include::{es-ref-dir}/high-availability-overview.asciidoc[]

-<<secure-cluster,Learn about securing an Elasticsearch cluster>>
+// each of these topics needs to be reviewed to mark elements related/unrelated to each deployment type

 [discrete]
-== Disaster recovery
+=== Optimize your cluster for your use case

-In case of failure, {es} offers tools for cross-cluster replication and cluster snapshots that can
-help you fall back or recover quickly. You can also use cross-cluster replication to serve requests based on the
-geographic location of your users and your resources.
+{es} offers many options that allow you to configure your cluster to meet your organization's goals, requirements, and restrictions. Review these guidelines to learn how to tune your cluster to meet your needs. These guidelines cover elements from hardware provision to query optimization.

+* <<tune-for-indexing-speed,Tune for indexing speed>>
+* <<tune-for-search-speed,Tune for search speed>>
+* <<tune-for-disk-usage,Tune for disk usage>>
+* <<use-elasticsearch-for-time-series-data,Tune for time series data>>
+// do we need this last topic anymore? Is this the best version we have? It's not referenced anywhere. it also isn't updated to use data stream lifecycle
+
+// each of these topics needs to be reviewed to mark elements related/unrelated to each deployment type

-To effectively distribute read and write operations across nodes, the nodes in a cluster need good, reliable connections
-to each other. To provide better connections, you typically co-locate the nodes in the same data center or nearby data centers.
+[discrete]
+== Security

-Co-locating nodes in a single location exposes you to the risk of a single outage taking your entire cluster offline. To
-maintain high availability, you can prepare a second cluster that can take over in case of disaster by implementing
-cross-cluster replication (CCR).
+The {stack} is composed of many moving parts. There are the {es} nodes that form the cluster, plus {ls} instances, {kib} instances, {beats} agents, and clients all communicating with the cluster. In the case of *Elastic Cloud Hosted*, *Elastic Cloud Enterprise*, or *Elastic Cloud Serverless* deployments, you also need to consider the security of the Elastic Cloud instance.

-CCR provides a way to automatically synchronize indices from your primary cluster to a secondary remote cluster that
-can serve as a hot backup. If the primary cluster fails, the secondary cluster can take over.
+Review the following topics.

-You can also use CCR to create secondary clusters to serve read requests in geo-proximity to your users.
+Enabling security protects {es} clusters by:

-Learn more about <<xpack-ccr,cross-cluster replication>> and about <<high-availability-cluster-design,designing for resilience>>.
+* <<preventing-unauthorized-access, Preventing unauthorized access>> with password protection, role-based access control, and IP filtering.
+* <<preserving-data-integrity, Preserving the integrity of your data>> with SSL/TLS encryption.
+* <<maintaining-audit-trail, Maintaining an audit trail>> so you know who's doing what to your cluster and the data it stores.

-[TIP]
-====
-You can also take <<snapshot-restore,snapshots>> of your cluster that can be restored in case of failure.
-====
+<<secure-cluster,Learn about securing an Elasticsearch cluster>>.

 [discrete]
 == Monitoring
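The nodes-and-shards sizing discussion above boils down to explicit index settings when you manage shards manually. This is a minimal sketch with a hypothetical index name; the shard and replica counts are illustrative, not recommendations:

[source,console]
----
PUT /my-index
{
  "settings": {
    "index.number_of_shards": 3,
    "index.number_of_replicas": 1
  }
}
----

`number_of_shards` is fixed at index creation (barring a split or shrink), while `number_of_replicas` can be adjusted at any time, which is why shard sizing deserves testing with your own data before you go to production.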

docs/reference/security/index.asciidoc

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 [partintro]
 --

-The {stack} is comprised of many moving parts. There are the {es}
+The {stack} is composed of many moving parts. There are the {es}
 nodes that form the cluster, plus {ls} instances, {kib} instances, {beats}
 agents, and clients all communicating with the cluster. To keep your cluster
 safe, adhere to the <<es-security-principles,{es} security principles>>.
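The security baseline described above typically starts in `elasticsearch.yml`. This is a minimal sketch: the setting names are real {es} security settings, but the keystore paths are illustrative and a production setup needs certificates provisioned separately:

[source,yaml]
----
# elasticsearch.yml (illustrative values)
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
----

With security enabled, node-to-node traffic is encrypted and clients must authenticate, which underpins the password protection, role-based access control, and audit trail features listed in this commit.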
