
Commit 7d39782

copy and formatting edits
1 parent 563a819 commit 7d39782

File tree

1 file changed: +77, -74 lines changed
  • docs/marketplace-docs/guides/elk-cluster

Lines changed: 77 additions & 74 deletions
@@ -1,8 +1,8 @@
11
---
2-
title: "Deploy A Elastic Stack through the Linode Marketplace"
3-
description: "This guide will help you configure an Elastic Stack using Akamai's Compute Marketplace."
4-
published: 2025-11-25
5-
modified: 2025-11-25
2+
title: "Deploy the Elastic Stack through the Linode Marketplace"
3+
description: "This guide helps you configure the Elastic Stack using the Akamai Compute Marketplace."
4+
published: 2025-12-05
5+
modified: 2025-12-05
66
keywords: ['elk stack', 'elk', 'kibana', 'logstash', 'elasticsearch', 'logging', 'siem', 'cluster', 'elastic stack']
77
tags: ["marketplace", "linode platform", "cloud manager", "elk", "logging"]
88
aliases: ['/products/tools/marketplace/guides/elastic-stack/']
@@ -18,20 +18,20 @@ marketplace_app_name: "Elastic Stack"
1818

1919
!["Elastic Stack Cluster Architecture"](elasticstack-overview.png "Elastic Stack Cluster Architecture")
2020

21-
An Elastic Stack is a unified observability platform that brings together search, data processing, and visualization through Elasticsearch, Logstash, and Kibana. It provides an end-to-end pipeline for ingesting, transforming, indexing, and exploring operational data at scale. Elasticsearch delivers distributed search and analytics with near real-time indexing, while Logstash enables flexible data collection and enrichment from diverse sources. Kibana offers an interactive interface for visualizing log streams, building dashboards, and performing advanced analysis.
21+
The Elastic Stack is a unified observability platform that brings together search, data processing, and visualization through Elasticsearch, Logstash, and Kibana. It provides an end-to-end pipeline for ingesting, transforming, indexing, and exploring operational data at scale. Elasticsearch delivers distributed search and analytics with near real-time indexing, while Logstash enables flexible data collection and enrichment from diverse sources. Kibana offers an interactive interface for visualizing log streams, building dashboards, and performing advanced analysis.
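For illustration only (this is not configuration generated by this Marketplace app), a minimal Logstash pipeline following that ingest, transform, and index flow might look like the sketch below; the Elasticsearch address and index name are placeholders:

```file {title="example-pipeline.conf"}
# Ingest: receive events from Beats agents on the default Beats port.
input {
  beats {
    port => 5044
  }
}

# Transform: tag each event before it is indexed.
filter {
  mutate {
    add_tag => ["example"]
  }
}

# Index: send events to Elasticsearch for search and later visualization in Kibana.
output {
  elasticsearch {
    hosts => ["http://192.0.2.10:9200"]
    index => "example-logs"
  }
}
```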
2222

23-
This solution is well-suited for log aggregation, application monitoring, infrastructure observability, and security analytics. Its open architecture and extensive ecosystem make it adaptable to a wide range of use cases—including distributed system debugging, SIEM workflows, API performance monitoring, and centralized logging.
23+
This solution is well-suited for log aggregation, application monitoring, infrastructure observability, and security analytics. Its open architecture and extensive ecosystem make it adaptable to a wide range of use cases—including distributed system debugging, SIEM workflows, API performance monitoring, and centralized logging.
2424

25-
This Marketplace application stands up a multi-cluster Elastic Stack with the ease of a few clicks!
25+
This Marketplace application stands up a multi-node Elastic Stack cluster using an automated deployment script configured by Akamai.
2626

2727
## Deploying a Marketplace App
2828

2929
{{% content "deploy-marketplace-apps-shortguide" %}}
3030

3131
{{% content "marketplace-verify-standard-shortguide" %}}
3232

33-
{{< note >}}
34-
**Estimated deployment time:** Your cluster should be fully installed within 5-10 minutes with a cluster of 5 nodes. Larger clusters will take longer to provision but we can use the formula, 8 minutes per 5 nodes, to estimate completion.
33+
{{< note title="Estimated deployment time" >}}
34+
A cluster of 5 nodes should be fully installed within 5-10 minutes. Larger clusters take longer to provision; estimate roughly 8 minutes for every 5 nodes (for example, a 15-node cluster takes approximately 24 minutes).
3535
{{< /note >}}
3636

3737
## Configuration Options
@@ -40,7 +40,7 @@ This Marketplace application stands up a multi-cluster Elastic Stack with the ea
4040

4141
- **Linode API Token** *(required)*: Your API token is used to deploy additional Compute Instances as part of this cluster. At a minimum, this token must have Read/Write access to *Linodes*. If you do not yet have an API token, see [Get an API Access Token](/docs/products/platform/accounts/guides/manage-api-tokens/) to create one.
4242

43-
- **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email will be used for Let's Encrypt renewal notices. A valid SSL certificate is validated through certbot and installed on the Kibana instance in the cluster. This will allow you to visit Kibana securely through a browser.
43+
- **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email is used for Let's Encrypt renewal notices. A valid SSL certificate is validated through certbot and installed on the Kibana instance in the cluster. This allows you to visit Kibana securely through a browser.
4444

4545
{{% content "marketplace-required-limited-user-fields-shortguide" %}}
4646

@@ -59,107 +59,110 @@ The following fields are used when creating the self-signed TLS/SSL certificates
5959

6060
#### Picking The Correct Instance Plan and Size
6161

62-
In the **Cluster Settings** section you will find a way to designate the size for each component in your Elastic deployment. The size of the cluster will depend on your needs. If you are looking for a quick deployment, stick with the defaults provided.
62+
In the **Cluster Settings** section you can designate the size of each component in your Elastic deployment. The size of the cluster depends on your needs; if you are looking for a faster deployment, stick with the defaults provided.
6363

64-
- **Kibana Size**: This deployment will only create a single Kibina instance will Let's Encrypt certificates. This option cannot be changed.
65-
- **Elasticsearch Cluster Size**: The total number of nodes your Elasticsearch cluster will have.
66-
- **Logstash Cluster Size**: The total number of nodes your Logstash cluster will have.
64+
- **Kibana Size**: This deployment creates a single Kibana instance with Let's Encrypt certificates. This option cannot be changed.
65+
- **Elasticsearch Cluster Size**: The total number of nodes in your Elasticsearch cluster.
66+
- **Logstash Cluster Size**: The total number of nodes in your Logstash cluster.
6767

68-
In this next part, you will be able to associate your Elasticsearch and Logstash clusters with their corresponding instance plans.
68+
Next, associate your Elasticsearch and Logstash clusters with a corresponding instance plan option.
6969

70-
- **Elasticsearch Instance Type**: This is plan type that will be used for your Elasticsearch cluster.
71-
- **Logstash Instance Type**: This is plan type that will be used for your Logstash cluster.
70+
- **Elasticsearch Instance Type**: This is the plan type used for your Elasticsearch cluster.
71+
- **Logstash Instance Type**: This is the plan type used for your Logstash cluster.
7272

73-
{{< note >}}
74-
**Kibana Instance Type:** To choose the Kibana instance type you will first need to click the region that you want to deploy and then picking the plan from the **[Linode Plan](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance#choose-a-linode-type-and-plan)** section.
73+
{{< note title="Kibana Instance Type" >}}
74+
To choose the Kibana instance type, first select a deployment region, then pick a plan from the **[Linode Plan](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance#choose-a-linode-type-and-plan)** section.
7575
{{< /note >}}
7676

7777
#### Additional Configuration
7878

79-
- **Filebeat IP addresses allowed to access Logstash**: If you have existing filebeat agents already installed, you can provide their IP addresses for whitelisting. The IP addresses must be comma separated.
79+
- **Filebeat IP addresses allowed to access Logstash**: If you have existing Filebeat agents already installed, you can provide their IP addresses to add them to an allowlist. The IP addresses must be comma-separated, for example `192.0.2.10,192.0.2.11`.
8080

81-
- **Logstash username to be created for index**: This is the username that is created and can access index below. We create this so that you can start ingesting logs after deployment.
81+
- **Logstash username to be created for index**: This is the username that is created with access to the index below, so that you can begin ingesting logs after deployment.
8282

83-
- **Elasticsearch index to be created for log ingestion**: We are creating this index so that you can start ingesting logs quickly. For example, if you have wordpress application you want perform log aggregation for the index name `wordpress-logs` would make sense. You can change this for your specific use-case.
83+
- **Elasticsearch index to be created for log ingestion**: This lets you start ingesting logs right away. Edit the index name for your specific use case; for example, if you have a WordPress application you want to perform log aggregation for, the index name `wordpress-logs` would be appropriate.
8484

8585
## Getting Started After Deployment
8686

8787
### Accessing Elastic Frontend
8888

89-
Once you cluster has finished deploying, you will be able to log into your Elastic cluster via the browser. The first thing you need to do is log into the provisioner node and open the the credentials file. You use the following command:
89+
Once your cluster has finished deploying, you can log into your Elastic cluster using your local browser.
9090

91-
```command
92-
cat /home/$user/.credentials
93-
```
91+
1. Log into the provisioner node as your limited sudo user, replacing `{{< placeholder "USER" >}}` with the sudo username you created, and `{{< placeholder "IP_ADDRESS" >}}` with your instance's IPv4 address:
9492

95-
Where `$user` is the sudo user you created. In the credentials file you will notice the Kibana URL. Paste that into your browser of choice and you should be able to see the login page.
93+
```command
94+
ssh {{< placeholder "USER" >}}@{{< placeholder "IP_ADDRESS" >}}
95+
```
9696

97+
1. Open the `.credentials` file with the following command. Replace `{{< placeholder "USER" >}}` with your sudo username:
9798

98-
!["Elastic Login Page"](elastic-login.png "Elastic Login Page")
99+
```command
100+
sudo cat /home/{{< placeholder "USER" >}}/.credentials
101+
```
99102

103+
1. In the `.credentials` file, locate the Kibana URL. Paste the URL into your browser of choice, and you should be greeted with a login page.
100104

101-
To access the console enter `elastic` as the username and the password posted in the credentials file. A successful login will redirect you to the welcome page. From there you are able to add integrations, visualization and other config changes.
105+
!["Elastic Login Page"](elastic-login.png "Elastic Login Page")
102106

103-
!["Elastic Welcome Page"](elastic-home.png "Elastic Welcome Page")
107+
1. To access the console, enter `elastic` as the username along with the password posted in the `.credentials` file. A successful login redirects you to the welcome page. From there you can add integrations, create visualizations, and make other configuration changes.
108+
109+
!["Elastic Welcome Page"](elastic-home.png "Elastic Welcome Page")
104110

105111
#### Configure Filebeat (Optional)
106112

107-
If you already have Filebeat configured on a system follow the next steps.
113+
Follow the next steps if you already have Filebeat configured on a system.
114+
115+
1. Create a backup of your `/etc/filebeat/filebeat.yml` configuration:
116+
117+
```command
118+
cp /etc/filebeat/filebeat.yml{,.bak}
119+
```
108120

109-
- Create a backup of your `/etc/filebeat/filebeat.yml` configuration:
121+
1. Update your Filebeat inputs:
110122

111-
```command
112-
cp /etc/filebeat/filebeat.yml{,.bak}
113-
```
123+
```file {title="/etc/filebeat/filebeat.yml" lang="yaml"}
124+
filebeat.inputs:
114125
115-
- Update your filebeat inputs:
126+
# Each - is an input. Most options can be set at the input level, so
127+
# you can use different inputs for various configurations.
128+
# Below are the input-specific configurations.
116129
130+
# filestream is an input for collecting log messages from files.
131+
- type: filestream
117132
118-
`/etc/filebeat/filebeat.yml`:
119-
```yaml
120-
filebeat.inputs:
133+
  # Unique ID among all inputs, an ID is required.
134+
  id: web-01
121135
122-
# Each - is an input. Most options can be set at the input level, so
123-
# you can use different inputs for various configurations.
124-
# Below are the input-specific configurations.
136+
  # Change to true to enable this input configuration.
137+
  #enabled: false
138+
  enabled: true
125139
126-
# filestream is an input for collecting log messages from files.
127-
- type: filestream
128-
129-
# Unique ID among all inputs, an ID is required.
130-
id: web-01
131-
132-
# Change to true to enable this input configuration.
133-
#enabled: false
134-
enabled: true
135-
136-
# Paths that should be crawled and fetched. Glob based paths.
137-
paths:
138-
- /var/log/apache2/access.log
139-
```
140+
  # Paths that should be crawled and fetched. Glob based paths.
141+
  paths:
142+
    - /var/log/apache2/access.log
143+
```
140144

141-
In this example, the `id` must be unique to the instance that way you know the source of the log. Ideally this should be the hostname of the instance, in this example we are calling it **web-01**. Update `paths` to the log that you want to send to Logstash.
145+
In this example, the `id` must be unique to the instance so you know the source of the log. Ideally this should be the hostname of the instance; this example uses the value **web-01**. Update `paths` to point to the log files you want to send to Logstash.
142146

143-
Next, while in the `/etc/filebeat/filebeat.yml`, update the Filbeat output directive:
147+
1. While still in `/etc/filebeat/filebeat.yml`, update the Filebeat output directive:
144148

145-
```yaml
146-
output.logstash:
147-
# Logstash hosts
148-
hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"]
149-
loadbalance: true
149+
```file {title="/etc/filebeat/filebeat.yml" lang="yaml"}
150+
output.logstash:
151+
  # Logstash hosts
148+
  hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"]
149+
  loadbalance: true
150154
151-
# List of root certificates for HTTPS server verifications
152-
ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"]
153-
```
154-
The `hosts` param can be the IP addresses of your Logstash host or a FQDN if wanted. In this example, we added **logstash-1.example.com** and **logstash-2.example.com** to our `/etc/hosts` file.
155+
  # List of root certificates for HTTPS server verifications
152+
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"]
157+
```
155158

156-
- Add CA certificate.
159+
The `hosts` parameter can contain the IP addresses of your Logstash hosts or their FQDNs. In this example, **logstash-1.example.com** and **logstash-2.example.com** are added to the `/etc/hosts` file, as shown in the sketch below.
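A sketch of those `/etc/hosts` entries follows; the IP addresses are placeholders for your own Logstash nodes' addresses:

```file {title="/etc/hosts"}
# Placeholder addresses; replace with the IPs of your Logstash nodes.
192.0.2.21    logstash-1.example.com
192.0.2.22    logstash-2.example.com
```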
157160

158-
The last thing that you want to do is add the `ca.pem` to `/etc/filebeat/certs/ca.pem`. You can grab the `ca.crt` from any node in the cluster. Once that's in place, just restart the filebeat service:
161+
1. Add a CA certificate by copying the contents of the `ca.crt` certificate file into `/etc/filebeat/certs/ca.pem`. You can obtain the `ca.crt` from any node in the cluster; see the example after the restart commands below. Once the certificate is in place, restart the Filebeat service:
159162
160-
```bash
161-
systemctl start filebeat
162-
systemctl enable filebeat
163-
```
163+
```command
164+
systemctl restart filebeat
165+
systemctl enable filebeat
166+
```
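As referenced above, one hedged way to obtain the certificate contents is to print `ca.crt` on any cluster node and paste the output into `/etc/filebeat/certs/ca.pem` on the Filebeat host. The path below is an assumption for illustration only; adjust it to wherever your deployment stores the cluster's CA certificate:

```command
# Run on a cluster node. The certificate path is an assumption; locate ca.crt on your node first.
sudo cat /etc/elasticsearch/certs/ca.crt
```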
164167
165-
At this point you should be able to start ingesting logs into your cluster using the index you created!
168+
Once complete, you should be able to start ingesting logs into your cluster using the index you created.
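As an optional verification sketch, you can query the document count of the index. The node address, credentials, and index name below are placeholders; port 9200 and the `_count` endpoint are Elasticsearch defaults, and `-k` is used here because the cluster's certificates are self-signed:

```command
curl -k -u {{< placeholder "LOGSTASH_USERNAME" >}}:{{< placeholder "PASSWORD" >}} "https://{{< placeholder "ELASTICSEARCH_NODE_IP" >}}:9200/{{< placeholder "INDEX_NAME" >}}/_count"
```

A JSON response with a growing `count` value indicates that logs are reaching the index.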
