
Commit 29317e8

Merge pull request #7374 from nmelehan-akamai/rc-v1.393.0
[Release] v1.393.0
2 parents f065e86 + 481f4a6 commit 29317e8

8 files changed: +256 −26 lines changed

ci/vale/dictionary.txt

Lines changed: 1 addition & 0 deletions

```diff
@@ -2852,6 +2852,7 @@ wazuh
 wc
 wchar
 wchar_t
+Weaviate
 webadmin
 webalizer
 webapp
```

docs/guides/applications/messaging/install-mastodon-on-ubuntu-1604/docker-compose.yml

Lines changed: 1 addition & 19 deletions

```diff
@@ -81,24 +81,6 @@ services:
       - ./public/system:/mastodon/public/system
       - ./public/packs:/mastodon/public/packs
 
-## Uncomment to enable federation with tor instances along with adding the following ENV variables
-## http_proxy=http://privoxy:8118
-## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
-#  tor:
-#    build: https://github.com/usbsnowcrash/docker-tor.git
-#    networks:
-#      - external_network
-#      - internal_network
-#
-#  privoxy:
-#    build: https://github.com/usbsnowcrash/docker-privoxy.git
-#    command: /opt/sbin/privoxy --no-daemon --user privoxy.privoxy /opt/config
-#    volumes:
-#      - ./priv-config:/opt/config
-#    networks:
-#      - external_network
-#      - internal_network
-
   nginx:
     build:
       context: ./nginx
@@ -121,4 +103,4 @@ services:
 networks:
   external_network:
   internal_network:
-    internal: true
+    internal: true
```

docs/guides/security/vulnerabilities/linux-red-team-defense-evasion-rootkits/index.md

Lines changed: 2 additions & 7 deletions

```diff
@@ -79,8 +79,6 @@ We can leverage the ability to load Apache2 modules to load our own rootkit modu
 
 Command injection vulnerabilities allow attackers to execute arbitrary commands on the target operating system.
 
-To achieve this, we will be using the apache-rootkit module that can be found here: https://github.com/ChristianPapathanasiou/apache-rootkit
-
 Apache-rootkit is a malicious Apache module with rootkit functionality that can be loaded into an Apache2 configuration with ease and with minimal artifacts.
 
 The following procedures outline the process of setting up the apache-rootkit module on a target Linux system:
@@ -97,10 +95,7 @@ The following procedures outline the process of setting up the apache-rootkit mo
 
     cd /tmp
 
-1. The next step will involve cloning the apache-rootkit repository on to the target system, this can be done by running the following command:
-
-    git clone https://github.com/ChristianPapathanasiou/apache-rootkit.git
-
+1. The next step will involve cloning the apache-rootkit repository on to the target system.
 1. After cloning the repository you will need to navigate to the “apache-rootkit” directory:
 
     cd apache-rootkit
@@ -215,4 +210,4 @@ Given that the target server is running the LAMP stack, we can create a PHP mete
 
 ![Meterpreter session receiving connection from Commix PHP backdoor](meterpreter-session-receiving-connection-from-commix-php-backdoor.png "Meterpreter session receiving connection from Commix PHP backdoor")
 
-We have been able to successfully set up the apache-rootkit module and leverage the command injection functionality afforded by the module to execute arbitrary commands on the target system as well as upload a PHP backdoor that will provide you with a meterpreter session.
+We have been able to successfully set up the apache-rootkit module and leverage the command injection functionality afforded by the module to execute arbitrary commands on the target system as well as upload a PHP backdoor that will provide you with a meterpreter session.
```
3 binary files changed (images): 159 KB, 13.1 KB, 67.2 KB
Lines changed: 183 additions & 0 deletions
@@ -0,0 +1,183 @@ (new file)

---
title: "Deploy the Elastic Stack through the Linode Marketplace"
description: "This guide helps you configure the Elastic Stack using the Akamai Compute Marketplace."
published: 2025-12-05
modified: 2025-12-05
keywords: ['elk stack', 'elk', 'kibana', 'logstash', 'elasticsearch', 'logging', 'siem', 'cluster', 'elastic stack']
tags: ["marketplace", "linode platform", "cloud manager", "elk", "logging"]
aliases: ['/products/tools/marketplace/guides/elastic-stack/']
external_resources:
- '[Elastic Stack Documentation](https://www.elastic.co/docs)'
authors: ["Akamai"]
contributors: ["Akamai"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
marketplace_app_id: 804144
marketplace_app_name: "Elastic Stack"
---

## Cluster Deployment Architecture

!["Elastic Stack Cluster Architecture"](elasticstack-overview.png "Elastic Stack Cluster Architecture")

The Elastic Stack is a unified observability platform that brings together search, data processing, and visualization through Elasticsearch, Logstash, and Kibana. It provides an end-to-end pipeline for ingesting, transforming, indexing, and exploring operational data at scale. Elasticsearch delivers distributed search and analytics with near real-time indexing, while Logstash enables flexible data collection and enrichment from diverse sources. Kibana offers an interactive interface for visualizing log streams, building dashboards, and performing advanced analysis.

This solution is well-suited for log aggregation, application monitoring, infrastructure observability, and security analytics. Its open architecture and extensive ecosystem make it adaptable to a wide range of use cases, including distributed system debugging, SIEM workflows, API performance monitoring, and centralized logging.

This Marketplace application stands up a multi-node Elastic Stack cluster using an automated deployment script configured by Akamai.

## Deploying a Marketplace App

{{% content "deploy-marketplace-apps-shortguide" %}}

{{% content "marketplace-verify-standard-shortguide" %}}

{{< note title="Estimated deployment time" >}}
A 5-node cluster should be fully installed within 5-10 minutes. Larger clusters take longer to provision; estimate roughly 8 minutes per 5 nodes.
{{< /note >}}
## Configuration Options

### Elastic Stack Options

- **Linode API Token** *(required)*: Your API token is used to deploy additional Compute Instances as part of this cluster. At a minimum, this token must have Read/Write access to *Linodes*. If you do not yet have an API token, see [Get an API Access Token](/docs/products/platform/accounts/guides/manage-api-tokens/) to create one.

- **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email is used for Let's Encrypt renewal notices. An SSL certificate is issued through certbot and installed on the Kibana instance in the cluster. This allows you to visit Kibana securely through a browser.

{{% content "marketplace-required-limited-user-fields-shortguide" %}}

{{% content "marketplace-special-character-limitations-shortguide" %}}

#### TLS/SSL Certificate Options

The following fields are used when creating the self-signed TLS/SSL certificates for the cluster.

- **Country or region** *(required)*: Enter the country or region for you or your organization.
- **State or province** *(required)*: Enter the state or province for you or your organization.
- **Locality** *(required)*: Enter the town or other locality for you or your organization.
- **Organization** *(required)*: Enter the name of your organization.
- **Email address** *(required)*: Enter the email address you wish to use for your certificate file.
- **CA Common name**: This is the common name for the self-signed Certificate Authority.

#### Picking the Correct Instance Plan and Size

In the **Cluster Settings** section, you can designate the size of each component in your Elastic deployment. The size of the cluster depends on your needs; if you are looking for a faster deployment, stick with the defaults provided.

- **Kibana Size**: This deployment creates a single Kibana instance with Let's Encrypt certificates. This option cannot be changed.
- **Elasticsearch Cluster Size**: The total number of nodes in your Elasticsearch cluster.
- **Logstash Cluster Size**: The total number of nodes in your Logstash cluster.

Next, associate your Elasticsearch and Logstash clusters with a corresponding instance plan option.

- **Elasticsearch Instance Type**: This is the plan type used for your Elasticsearch cluster.
- **Logstash Instance Type**: This is the plan type used for your Logstash cluster.

{{< note title="Kibana instance type" >}}
To choose the Kibana instance type, first select a deployment region, then pick a plan from the **[Linode Plan](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance#choose-a-linode-type-and-plan)** section.
{{< /note >}}

#### Additional Configuration

- **Filebeat IP addresses allowed to access Logstash**: If you have existing Filebeat agents already installed, you can provide their IP addresses for an allowlist. The IP addresses must be comma separated, for example `192.0.2.10,192.0.2.11`.

- **Logstash username to be created for index**: This is the username that is created and granted access to the index below, so that you can begin ingesting logs after deployment.

- **Elasticsearch index to be created for log ingestion**: This lets you start ingesting logs. Edit the index name for your specific use case. For example, if you have a WordPress application you want to perform log aggregation for, the index name `wordpress-logs` would be appropriate.

## Getting Started After Deployment

### Accessing Elastic Frontend

Once your cluster has finished deploying, you can log into your Elastic cluster using your local browser.

1. Log into the provisioner node as your limited sudo user, replacing `{{< placeholder "USER" >}}` with the sudo username you created, and `{{< placeholder "IP_ADDRESS" >}}` with the instance's IPv4 address:

    ```command
    ssh {{< placeholder "USER" >}}@{{< placeholder "IP_ADDRESS" >}}
    ```

    {{< note title="The provisioner node is also the Kibana node" >}}
    Your provisioner node is the first Linode created in your cluster and is also the instance running Kibana. To identify the node in your list of Linodes, look for the node appended with the name "kibana". For example: `kibana-76f0443c`
    {{< /note >}}

1. Open the `.credentials` file with the following command. Replace `{{< placeholder "USER" >}}` with your sudo username:

    ```command
    sudo cat /home/{{< placeholder "USER" >}}/.credentials
    ```

1. In the `.credentials` file, locate the Kibana URL. Paste the URL into your browser of choice, and you should be greeted with a login page.

    !["Elastic Login Page"](elastic-login.png "Elastic Login Page")

1. To access the console, enter `elastic` as the username along with the password posted in the `.credentials` file. A successful login redirects you to the welcome page. From there, you can add integrations, create visualizations, and make other configuration changes.

    !["Elastic Welcome Page"](elastic-home.png "Elastic Welcome Page")
#### Configure Filebeat (Optional)

Follow the next steps if you already have Filebeat configured on a system.

1. Create a backup of your `/etc/filebeat/filebeat.yml` configuration:

    ```command
    cp /etc/filebeat/filebeat.yml{,.bak}
    ```

1. Update your Filebeat inputs:

    ```file {title="/etc/filebeat/filebeat.yml" lang="yaml"}
    filebeat.inputs:

    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input-specific configurations.

    # filestream is an input for collecting log messages from files.
    - type: filestream

      # Unique ID among all inputs, an ID is required.
      id: web-01

      # Change to true to enable this input configuration.
      #enabled: false
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - /var/log/apache2/access.log
    ```

    In this example, the `id` must be unique to the instance so you know the source of the log. Ideally, this should be the hostname of the instance; this example uses the value **web-01**. Update `paths` to the log that you want to send to Logstash.

1. While in `/etc/filebeat/filebeat.yml`, update the Filebeat output directive:

    ```file {title="/etc/filebeat/filebeat.yml" lang="yaml"}
    output.logstash:
      # Logstash hosts
      hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"]
      loadbalance: true

      # List of root certificates for HTTPS server verifications
      ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"]
    ```

    The `hosts` parameter can contain the IP addresses of your Logstash hosts or their FQDNs. In this example, **logstash-1.example.com** and **logstash-2.example.com** are added to the `/etc/hosts` file, as in the sketch below.
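
    A minimal `/etc/hosts` sketch on the Filebeat system, assuming documentation-range addresses for the two Logstash nodes; substitute the actual IPs of your Logstash instances:

    ```file {title="/etc/hosts"}
    # Hypothetical addresses mapping the Logstash hostnames used above
    192.0.2.11    logstash-1.example.com
    192.0.2.12    logstash-2.example.com
    ```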

1. Add a Certificate Authority (CA) certificate by adding the contents of `ca.crt` to your `/etc/filebeat/certs/ca.pem` file.

    To obtain your `ca.crt`, open a separate terminal session, and log into your Kibana node. Navigate to the `/etc/kibana/certs/ca` directory, and view the file contents with the `cat` command:

    ```command
    cd /etc/kibana/certs/ca
    sudo cat ca.crt
    ```

    Copy the file contents, and add it to your `ca.pem` file on your Filebeat system.
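
    One way to do this is with a heredoc on the Filebeat system, pasting the certificate between the markers. This is a sketch; the certificate body below is a placeholder for the real contents of `ca.crt`:

    ```command
    sudo mkdir -p /etc/filebeat/certs
    sudo tee /etc/filebeat/certs/ca.pem > /dev/null <<'EOF'
    -----BEGIN CERTIFICATE-----
    (paste the contents of ca.crt here)
    -----END CERTIFICATE-----
    EOF
    ```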

1. Once you've added the certificate to your `ca.pem` file, start and enable the Filebeat service:

    ```command
    sudo systemctl start filebeat
    sudo systemctl enable filebeat
    ```

Once complete, you should be able to start ingesting logs into your cluster using the index you created.
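
To confirm that documents are arriving, you can query the index directly. A minimal sketch, assuming Elasticsearch's default HTTPS port `9200`, the Logstash username and index configured at deployment, and the CA certificate copied above; adjust the host, credentials, and certificate path for your environment:

```command
curl --cacert /etc/filebeat/certs/ca.pem \
  -u {{< placeholder "LOGSTASH_USERNAME" >}}:{{< placeholder "PASSWORD" >}} \
  "https://{{< placeholder "ELASTICSEARCH_IP" >}}:9200/{{< placeholder "INDEX_NAME" >}}/_search?size=1&pretty"
```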
Lines changed: 69 additions & 0 deletions

@@ -0,0 +1,69 @@ (new file)

---
title: "Deploy Weaviate through the Linode Marketplace"
description: "Learn how to deploy Weaviate, an AI-native vector database with GPU-accelerated semantic search capabilities, on an Akamai Compute Instance."
published: 2025-12-05
modified: 2025-12-05
keywords: ['vector database','database','weaviate']
tags: ["marketplace", "linode platform", "cloud manager"]
external_resources:
- '[Weaviate Official Documentation](https://docs.weaviate.io/weaviate)'
aliases: ['/products/tools/marketplace/guides/weaviate/','/guides/weaviate-marketplace-app/']
authors: ["Akamai"]
contributors: ["Akamai"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
marketplace_app_id: 1902904 # Need to update
marketplace_app_name: "Weaviate"
---

[Weaviate](https://www.weaviate.io/) is an open-source AI-native vector database designed for building advanced AI applications. It stores and indexes both data objects and their vector embeddings, enabling semantic search, hybrid search, and Retrieval Augmented Generation (RAG) workflows. This deployment includes GPU acceleration for transformer models and comes pre-configured with the sentence-transformers model for high-performance semantic search capabilities.

## Deploying a Marketplace App

{{% content "deploy-marketplace-apps-shortguide" %}}

{{% content "marketplace-verify-standard-shortguide" %}}

{{< note title="Estimated deployment time" >}}
Weaviate should be fully installed within 5-10 minutes after your instance has finished provisioning.
{{< /note >}}

## Configuration Options

- **Supported distributions**: Ubuntu 24.04 LTS
- **Recommended plan**: All GPU plan types and sizes can be used.

### Weaviate Options

{{% content "marketplace-required-limited-user-fields-shortguide" %}}

{{% content "marketplace-custom-domain-fields-shortguide" %}}

{{% content "marketplace-special-character-limitations-shortguide" %}}

## Getting Started after Deployment

### Obtain Your API Keys

Weaviate is a database service accessed programmatically through its API rather than through a web-based user interface. Your deployment includes two API keys stored in a credentials file.

1. Log in to your instance via SSH or Lish. See [Connecting to a Remote Server Over SSH](/docs/guides/connect-to-server-over-ssh/) for assistance, or use the [Lish Console](/docs/products/compute/compute-instances/guides/lish/).

1. Once logged in, retrieve your API keys from the `.credentials` file:

    ```command
    sudo cat /home/$USER/.credentials
    ```

    The credentials file contains two API keys:

    - **Admin API Key**: Full read/write access to all Weaviate operations
    - **User API Key**: Read-only access for querying data

### Connect Your Application to Weaviate

To integrate Weaviate into your application, use one of the official client libraries. Weaviate provides native clients for multiple programming languages, allowing you to perform all database operations, including creating schemas, importing data, and running vector searches.

See the [Weaviate Client Libraries documentation](https://docs.weaviate.io/weaviate/client-libraries) for installation instructions and API references.

For complete examples and advanced usage, refer to the [Weaviate Quickstart Guide](https://docs.weaviate.io/weaviate/quickstart) and the client library documentation for your preferred language.
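
As a quick connectivity check, here is a minimal Python sketch using the v3-style `weaviate-client` package. The URL and key are placeholders; substitute your instance's domain or IP address and one of the keys from the `.credentials` file:

```python
# Minimal connectivity sketch, assuming the v3-style Python client
# (for example: pip install "weaviate-client<4").
import weaviate

# Placeholders: use your instance's address and an API key from the
# .credentials file (Admin for read/write, User for read-only).
client = weaviate.Client(
    url="https://weaviate.example.com",
    auth_client_secret=weaviate.AuthApiKey(api_key="YOUR_API_KEY"),
)

# Prints True once the database is reachable and serving requests.
print(client.is_ready())
```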

{{% content "marketplace-update-note-shortguide" %}}
