The Elastic Stack is a unified observability platform that brings together search, data processing, and visualization through Elasticsearch, Logstash, and Kibana. It provides an end-to-end pipeline for ingesting, transforming, indexing, and exploring operational data at scale. Elasticsearch delivers distributed search and analytics with near real-time indexing, while Logstash enables flexible data collection and enrichment from diverse sources. Kibana offers an interactive interface for visualizing log streams, building dashboards, and performing advanced analysis.
This solution is well-suited for log aggregation, application monitoring, infrastructure observability, and security analytics. Its open architecture and extensive ecosystem make it adaptable to a wide range of use cases—including distributed system debugging, SIEM workflows, API performance monitoring, and centralized logging.
This Marketplace application stands up a multi-node Elastic Stack cluster using an automated deployment script configured by Akamai.
{{< note title="Estimated deployment time" >}}
A cluster of 5 nodes should be fully installed within 5-10 minutes. Larger clusters take longer to provision; as a rule of thumb, allow roughly 8 minutes per 5 nodes to estimate completion time.
{{< /note >}}
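As a rough sketch of that rule of thumb, you can estimate the time for any cluster size with a bit of shell arithmetic (a hypothetical helper, not part of the deployment; it assumes linear scaling and rounds up to the next group of 5 nodes):

```command
# Estimate: ~8 minutes per group of 5 nodes, rounded up
NODES=15
echo "Estimated deployment time: $(( (NODES + 4) / 5 * 8 )) minutes"
```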
## Configuration Options
- **Linode API Token** *(required)*: Your API token is used to deploy additional Compute Instances as part of this cluster. At a minimum, this token must have Read/Write access to *Linodes*. If you do not yet have an API token, see [Get an API Access Token](/docs/products/platform/accounts/guides/manage-api-tokens/) to create one.
- **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email is used for Let's Encrypt renewal notices. A valid SSL certificate is validated through certbot and installed on the Kibana instance in the cluster. This allows you to visit Kibana securely through a browser.
#### Picking the Correct Instance Plan and Size
In the **Cluster Settings** section you can designate the size for each component in your Elastic deployment. The size of the cluster depends on your needs. If you are looking for a faster deployment, stick with the defaults provided.
- **Kibana Size**: This deployment creates a single Kibana instance with Let's Encrypt certificates. This option cannot be changed.
- **Elasticsearch Cluster Size**: The total number of nodes in your Elasticsearch cluster.
- **Logstash Cluster Size**: The total number of nodes in your Logstash cluster.
Next, associate your Elasticsearch and Logstash clusters with a corresponding instance plan option.
- **Elasticsearch Instance Type**: This is the plan type used for your Elasticsearch cluster.
- **Logstash Instance Type**: This is the plan type used for your Logstash cluster.
{{< note title="Kibana Instance Type" >}}
To choose the Kibana instance type, first select a deployment region, then pick a plan from the **[Linode Plan](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance#choose-a-linode-type-and-plan)** section.
{{< /note >}}
#### Additional Configuration
- **Filebeat IP addresses allowed to access Logstash**: If you have existing Filebeat agents already installed, you can provide their IP addresses for an allowlist. The IP addresses must be comma separated (for example, `192.0.2.10,192.0.2.11`).
- **Logstash username to be created for index**: The username that is created with access to the index below. This is created so that you can begin ingesting logs immediately after deployment.
- **Elasticsearch index to be created for log ingestion**: The index that incoming logs are written to, so you can start ingesting logs right away. Edit the index name for your specific use case. For example, if you have a WordPress application you want to perform log aggregation for, the index name `wordpress-logs` would be appropriate.
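For context, the deployment wires this index into Logstash for you. A Logstash pipeline output that writes to such an index looks roughly like the following sketch (the host address, username, and password here are illustrative placeholders, not values from this deployment):

```
output {
  elasticsearch {
    # Illustrative values; the deployment configures these for you
    hosts    => ["https://192.0.2.20:9200"]
    user     => "logstash-user"
    password => "{{< placeholder "PASSWORD" >}}"
    index    => "wordpress-logs"
  }
}
```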
## Getting Started After Deployment
### Accessing the Elastic Frontend
Once your cluster has finished deploying, you can log into your Elastic cluster using your local browser.
1. Log into the provisioner node as your limited sudo user, replacing `{{< placeholder "USER" >}}` with the sudo username you created, and `{{< placeholder "IP_ADDRESS" >}}` with your instance's IPv4 address:

    ```command
    ssh {{< placeholder "USER" >}}@{{< placeholder "IP_ADDRESS" >}}
    ```

1. Open the credentials file:

    ```command
    cat /home/{{< placeholder "USER" >}}/.credentials
    ```
1. In the `.credentials` file, locate the Kibana URL. Paste the URL into your browser of choice, and you should be greeted with a login page.
1. To access the console, enter `elastic` as the username along with the password posted in the `.credentials` file. A successful login redirects you to the welcome page. From there you can add integrations, create visualizations, and make other configuration changes.
1. In `/etc/filebeat/filebeat.yml`, configure a filestream input:

    ```yaml
    filebeat.inputs:

    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input-specific configurations.

    # filestream is an input for collecting log messages from files.
    - type: filestream

      # Unique ID among all inputs, an ID is required.
      id: web-01

      # Change to true to enable this input configuration.
      #enabled: false
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/log/apache2/access.log
    ```
In this example, the `id` must be unique to the instance so you know the source of the log. Ideally this should be the hostname of the instance, and this example uses the value **web-01**. Update `paths` to the log that you want to send to Logstash.
1. While in `/etc/filebeat/filebeat.yml`, update the Filebeat output directive:
    ```yaml
    output.logstash:
      # The Logstash hosts
      hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"]

      # List of root certificates for HTTPS server verifications
      ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"]
    ```
The `hosts` param can be the IP addresses of your Logstash hosts or FQDNs. In this example, **logstash-1.example.com** and **logstash-2.example.com** are added to the `/etc/hosts` file.
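For reference, the matching `/etc/hosts` entries could look like the following (the IP addresses are hypothetical documentation placeholders; use your Logstash nodes' actual addresses):

```
192.0.2.21    logstash-1.example.com
192.0.2.22    logstash-2.example.com
```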
1. Add a CA certificate by copying the contents of the `ca.crt` certificate file into `/etc/filebeat/certs/ca.pem`. You can obtain the `ca.crt` from any node in the cluster. Once the certificate is in place, restart the Filebeat service:
    ```command
    systemctl start filebeat
    systemctl enable filebeat
    ```
Once complete, you should be able to start ingesting logs into your cluster using the index you created.
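As an optional sanity check (an illustrative command, not part of the deployment; substitute an Elasticsearch node address, the `elastic` password from the `.credentials` file, and your index name), you can confirm documents are arriving in the index:

```command
curl -sk -u elastic:{{< placeholder "PASSWORD" >}} "https://{{< placeholder "ELASTICSEARCH_IP" >}}:9200/_cat/indices/{{< placeholder "INDEX_NAME" >}}?v"
```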